How do camera sensors work?

Bayer "demosaicing" is the process of translating the Bayer array of primary colors recorded by the sensor into a final image which contains full color information at each pixel. How is this possible if the camera is unable to directly measure full color? One way of understanding this is to think of each 2x2 array of red, green, and blue as a single full-color cavity.

This would work fine; however, most cameras take additional steps to extract even more image information from this color array. If the camera treated all of the colors in each 2x2 array as having landed in the same place, then it would only be able to achieve half the resolution in both the horizontal and vertical directions.

On the other hand, if a camera computed the color using several overlapping 2x2 arrays, then it could achieve a higher resolution than would be possible with a single set of 2x2 arrays. The following combination of overlapping 2x2 arrays could be used to extract more image information.
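
To make this concrete, here is a minimal sketch in Python of the overlapping 2x2 approach described above. It assumes an RGGB Bayer layout and a plain NumPy array; the function name and shapes are illustrative, not any camera maker's actual pipeline.

```python
import numpy as np

def demosaic_overlapping_2x2(bayer: np.ndarray) -> np.ndarray:
    """Illustrative only: combine each overlapping 2x2 window of an RGGB
    Bayer mosaic into one full-color pixel. The output is (H-1, W-1, 3)
    because no estimate is made along the right and bottom edges."""
    h, w = bayer.shape
    rgb = np.zeros((h - 1, w - 1, 3), dtype=np.float32)
    for y in range(h - 1):
        for x in range(w - 1):
            # Every 2x2 window of an RGGB mosaic contains exactly one red,
            # two green, and one blue sample; which is which depends on the
            # row/column parity of each cell.
            for dy in range(2):
                for dx in range(2):
                    yy, xx = y + dy, x + dx
                    value = float(bayer[yy, xx])
                    if yy % 2 == 0 and xx % 2 == 0:      # red photosite
                        rgb[y, x, 0] = value
                    elif yy % 2 == 1 and xx % 2 == 1:    # blue photosite
                        rgb[y, x, 2] = value
                    else:                                # green photosite
                        rgb[y, x, 1] += value / 2.0      # average the two greens
    return rgb
```

Treating each non-overlapping 2x2 block as a single output pixel instead would halve the resolution in each direction, which is exactly the trade-off described above; real demosaicing algorithms go further and weight neighboring samples adaptively.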

Note how we did not calculate image information at the very edges of the array, since we assumed the image continued in each direction. If these were actually the edges of the cavity array, then calculations here would be less accurate, since there are no longer pixels on all sides. This is typically negligible though, since information at the very edges of an image can easily be cropped out for cameras with millions of pixels. Other demosaicing algorithms exist which can extract slightly more resolution, produce images which are less noisy, or adapt to best approximate the image at each location.

Images with small-scale detail near the resolution limit of the digital sensor can sometimes trick the demosaicing algorithm, producing an unrealistic-looking result.

Global and rolling shutter sensors differ in their operation and in their final imaging results, especially when the camera or the target is in motion. In a global shutter sensor, the exposure is timed as follows.

All pixels begin and end exposure at the same time, but readout still happens line by line. This timing produces undistorted images without wobble or skew, which is why global shutter sensors are essential for imaging high-speed moving objects.

In a rolling shutter sensor, by contrast, exposure timing differs line by line, with reset and readout happening at shifted times. This row-by-row exposure produces image distortion if either the target or the camera is in motion. Rolling shutter sensors, however, offer excellent sensitivity for imaging static or slow-moving objects. Understanding the terms and technology behind digital sensors will allow you to better pinpoint the appropriate camera for your application.
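
As a rough back-of-the-envelope sketch (the line time, row count, and object speed below are assumed numbers, not the specification of any particular sensor), the skew a rolling shutter introduces can be estimated from the per-line readout delay and the object's speed across the frame:

```python
def rolling_shutter_skew_px(object_speed_px_per_s: float,
                            line_time_s: float,
                            num_rows: int) -> float:
    """Approximate skew (in pixels) for an object moving across the frame
    while a rolling shutter exposes the image line by line."""
    # The last row starts exposing (num_rows - 1) line-times after the first,
    # so the object has moved this far in the meantime.
    return object_speed_px_per_s * line_time_s * (num_rows - 1)

# Assumed example: 10 us per line, 2000-row sensor,
# object crossing the frame at 5000 px/s.
print(rolling_shutter_skew_px(5000, 10e-6, 2000))  # roughly 100 px of skew
```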

For example, certain sensor specifications, such as pixel size and sensor format, play an important role in choosing the correct lens.

Understanding The Digital Image Sensor

The image sensor is one of the most important components of any machine vision camera. While a sensor's function is to convert light into an electrical signal, not all sensors are built the same.

Learning more about how image sensors work and how they are categorized will help you choose the right one. Sensors can be classified in several ways, such as structure type (CCD or CMOS), chroma type (color or monochrome), or shutter type (global or rolling). They can also be classified by resolution, frame rate, pixel size, and sensor format. Understanding these terms makes it easier to see which sensor is best for a given application.
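
Those classification axes map naturally onto a simple record when comparing candidates; the field names and example values below are hypothetical and purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class SensorSpec:
    structure: str        # "CCD" or "CMOS"
    chroma: str           # "color" or "mono"
    shutter: str          # "global" or "rolling"
    resolution: tuple     # (width_px, height_px)
    frame_rate_fps: float
    pixel_size_um: float
    sensor_format: str    # e.g. '2/3"'

# Hypothetical entry, for comparison purposes only.
candidate = SensorSpec("CMOS", "color", "global", (2448, 2048), 35.0, 3.45, '2/3"')
```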

The solid-state image sensor chip contains pixels made up of light-sensitive elements, microlenses, and micro electrical components.

Image Sensor Format Size

Image sensors come in different format types (also known as optical class, sensor size, or type) and packages.
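
Because format is ultimately just the physical size of the active pixel area, it can be related back to resolution and pixel size with simple multiplication. The numbers in this sketch are assumptions for illustration, not a specific sensor's datasheet:

```python
import math

def sensor_dimensions_mm(width_px: int, height_px: int, pixel_size_um: float):
    """Active-area width, height, and diagonal in millimeters."""
    w_mm = width_px * pixel_size_um / 1000.0
    h_mm = height_px * pixel_size_um / 1000.0
    return w_mm, h_mm, math.hypot(w_mm, h_mm)

# Assumed example: 2448 x 2048 pixels at 3.45 um per pixel.
print(sensor_dimensions_mm(2448, 2048, 3.45))   # ~(8.4, 7.1, 11.0) mm
```

The diagonal that comes out of this calculation also determines the minimum image circle the chosen lens has to cover.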

Each individual photosite simply collects the amount of light hitting it and passes that data on; no color information is collected.

Thus, a bare sensor is a monochromatic device. Plenty of ways exist to turn monochromatic information into color data. For example, you could split the light coming through the lens onto three different sensors, each tuned to react to a certain part of the spectrum (some video cameras do that).

But most digital still cameras use a different method: they place an array of colored filters over the photosites. One filter array, the Bayer pattern, is by far the most common, though several others (such as the subtractive CYM and CYMG patterns discussed below) are possible. Each of these methods has advantages and disadvantages.
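
To show what the filter array does to the data, here is a small sketch (assuming an RGGB Bayer layout; other patterns simply change the mask) that keeps only one color channel per photosite, which is exactly the single value each well ends up reporting:

```python
import numpy as np

def apply_rggb_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Simulate an RGGB Bayer filter: each photosite keeps a single
    channel of the scene, chosen by its row/column parity."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # green sites (red rows)
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # green sites (blue rows)
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # blue sites
    return mosaic
```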

The repeat of the green filter in Bayer patterns and the addition of a green filter to the subtractive CYM method are due partly to the fact that our eyes are most sensitive to small changes in green wavelengths. By repeating or adding this color in the filter, the accuracy of the luminance data in the critical area where our eyes are most sensitive is slightly improved. So, each individual photosite has a filter above it that limits the spectrum of light it sees.

Later in the picture-taking process, the camera integrates the various color information into full-color data for individual pixels (a process sometimes called interpolation, but more accurately called demosaicing). But one important point should be made: the color accuracy of your digital camera is significantly influenced by the quality of the filter array that sits on top of the photosites.

Imagine, for a moment, a filter array where each red filter was slightly different—you'd have random information in the red channel of your resulting image. A damaged filter would result in inaccurate color information at the damage point.

One thing that isn't immediately apparent about the Bayer pattern filter is that the ultimate resolution of color boundaries varies. Consider a diagonal boundary between a pure red and a pure black object in a scene. Black is the absence of light reaching the sensor, and pure red produces no response in the green and blue photosites, so the data value is 0 everywhere except under the red filters. That means only the photosites under the red filters are getting any useful information about where the boundary lies! Since no individual color is repeated in a CYMG pattern, all boundaries should render about the same, regardless of color.
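
A tiny made-up example makes the effect concrete. It builds a diagonal red/black edge, samples it through the hypothetical apply_rggb_mosaic sketch shown earlier, and confirms that only the red photosites (one in four) return anything other than zero:

```python
import numpy as np

# Tiny scene: pure red above the diagonal, black below it.
h = w = 8
scene = np.zeros((h, w, 3), dtype=np.float32)
for y in range(h):
    for x in range(w):
        if x > y:                  # upper-right triangle is pure red
            scene[y, x, 0] = 1.0

mosaic = apply_rggb_mosaic(scene)  # reuses the earlier sketch
# Green and blue photosites see nothing at all, so only the red sites
# (one photosite in four) carry any information about where the edge is.
print(np.count_nonzero(mosaic), "non-zero samples out of", mosaic.size)
```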

Most sensors these days are built with a microlens layer that sits directly on top of the photosites, with the Bayer filter pattern just underneath it. The microlenses redirect light rays that hit at an angle so they travel more perpendicular to the photosites.

If light were to hit the photosites at severe angles, not only would the photosite be less likely to get an accurate count of the light hitting it, but adjacent cells would tend to be slightly influenced by that energy, since the filters sit above the photosites and have no "guards" between them. On top of the microlenses is yet another set of filters that takes out the ultraviolet (UV) and infrared (IR) portions of the spectrum and provides anti-aliasing (I'll discuss anti-aliasing in the next section).

Current cameras allow very little light outside the visible spectrum to get to the photosites, though many older ones let significant IR through. We have one more exception to talk about; sensors have gotten more complicated since I first wrote about them.

That's the Foveon sensor, now owned by Sigma, whose cameras are the only ones that use it. Unlike Bayer-pattern sensors, which get their color information from adjacent photosites tuned to different parts of the spectrum, the Foveon sensor uses stacked layers within each photosite, with different wavelengths recorded at different depths in the silicon.

The primary benefit of this approach is that it gets rid of the color aliasing issues that occur when you demosaic Bayer-pattern data, and thus allows you to remove, or at least reduce the strength of, the antialiasing filter over the sensor. The benefit can be described in two words: edge acuity. Another benefit is that there is no guessing about color at any final pixel point, which means that colors are generally robust and, with a lot of intense calculation, accurate.

The primary disadvantage of the Foveon approach is noise. Obviously, less light gets to the bottom layer of silicon in a photosite than to the top layer. Foveon has done a remarkably good job of mitigating the drawbacks while emphasizing the positives in the latest iteration of the sensor.

Getting Data Off the Sensor

At this point, we have an array of filtered photosites, each responding to a different color of light. The data at each individual photosite, by the way, is still in analog form (the number of electrons in the well).

The method by which that data is retrieved may surprise you, however: in most CCD sensors the values are rapidly shifted one line at a time to the edge of the sensor. This process is called an interline or row transfer, and the pathways that the data moves down are one of the reasons why photosites on CCDs have space between them (to make room for the pathway).

While the data is moved off in "rows," it's important to note that the short axis is usually the direction the data moves (if you're looking at a horizontal image, you'd see these as columns).
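
A toy model of that row-by-row transfer (purely illustrative; a real CCD shifts analog charge packets, not Python values) helps picture the sequencing:

```python
import numpy as np

def read_out_ccd(wells: np.ndarray):
    """Toy CCD readout: shift every row one step toward the readout edge,
    then clock the edge row out one value at a time."""
    frame = wells.astype(float).copy()
    rows = frame.shape[0]
    for _ in range(rows):
        readout_register = frame[-1].copy()  # row next to the sensor edge
        frame[1:] = frame[:-1].copy()        # every other row shifts one step down
        frame[0] = 0.0                       # the top row is now empty
        for value in readout_register:       # serial readout, value by value
            yield value                      # noise reduction and the ADC would sit here

pixels = list(read_out_ccd(np.arange(12).reshape(3, 4)))
```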

As the data moves to the edge of the sensor, it is usually first run through correlated double sampling to reduce noise, then read by an analog-to-digital converter (ADC). This reduces "read noise," as transmission errors don't come into play. One common misconception is that bit depth equates to dynamic range (the range of dark to bright light that can be captured). This isn't true. The dynamic range of a camera is determined mostly by the sensor: electron well capacity minus baseline noise determines the maximum range of exposure that can be tolerated (another reason why larger photosites are better than smaller ones).

In essence, you get more precise Digital Numbers with more bits, less precise with fewer bits, but the underlying data is the same.
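
A quick worked example separates the two ideas. The well capacity and read noise here are assumed round numbers, not measurements of any particular sensor:

```python
import math

full_well_e = 30_000     # assumed electron well capacity
read_noise_e = 3.0       # assumed baseline read noise (electrons, RMS)

# Dynamic range is set by the sensor itself...
dr_stops = math.log2(full_well_e / read_noise_e)
print(f"dynamic range ~ {dr_stops:.1f} stops")           # ~13.3 stops

# ...while bit depth only controls how finely that range is quantized.
for bits in (10, 12, 14):
    step_e = full_well_e / (2 ** bits - 1)
    print(f"{bits}-bit ADC: ~{step_e:.1f} electrons per digital number")
```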
