{"id":3373,"date":"2017-07-24T14:56:27","date_gmt":"2017-07-24T13:56:27","guid":{"rendered":"https:\/\/ebestpicks.com\/?p=3373"},"modified":"2021-10-15T13:44:01","modified_gmt":"2021-10-15T12:44:01","slug":"howdcw-htm","status":"publish","type":"post","link":"https:\/\/ebestpicks.com\/howdcw.htm","title":{"rendered":"How Digital Cameras Work: Photography Explained"},"content":{"rendered":"

There are several different techniques used to make digital cameras, and each yields cameras of differing quality and suitability for particular uses. Understanding the methods used in a camera’s design leads to an understanding of how well the camera will serve a particular use. Historically, this is consistent with the evolution of the camera as a tool: over the last 140 years of photography, cameras have evolved into hundreds of species, ranging from single-use pocket cameras to 4×5 studio cameras. Each camera has unique features that make it especially well suited to a particular need. When bridging the gap to digital imaging, one must again consider the need for unique variations that better fit particular requirements.<\/p>\n

The quality of an image produced by a digital camera can be measured in several ways, including resolution, dynamic range and color fidelity. The camera system as a whole also has several important attributes: aperture range, available focal lengths, illumination alternatives, tilt and swing alternatives, shutter speeds, ISO equivalency, portability and battery life. All of these qualities must be considered when choosing a digital camera.<\/p>\n

How Digital Cameras Work – Definition of Terms<\/strong><\/h2>\n

First, it is necessary to define the vocabulary required to describe a digital camera. Many of the familiar photographic terms like resolution, ISO and grain have different meanings when applied to digital imaging.<\/p>\n

Resolution<\/strong><\/h3>\n

In many ways, the resolution of an image seems the easiest attribute to understand. However, upon careful inspection, there are areas for confusion even in this simple measurement. In the most basic sense, the resolution of a digital camera can be stated to be the total number of pixels used. For example, a camera with a sensor that has 800 by 1000 pixels can be said to have 800,000 pixels of total resolution.<\/p>\n
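The arithmetic above can be sketched in a couple of lines of Python; the figures are the article’s own example:<\/p>\n

```python
# Total pixel count ("resolution" in the basic sense used above)
# for a rectangular sensor grid.
def total_pixels(rows: int, cols: int) -> int:
    return rows * cols

print(total_pixels(800, 1000))  # 800000, i.e. 0.8 megapixels
```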

See Also:\u00a0Best Digital Camera Under $200: Cheap DSLR, Point and Shoot Camera Review<\/a><\/strong><\/p>\n

A good way of visualizing this is to think of mosaic tile art, where each mosaic tile can be only a single color and brightness. A single tile can be considered equivalent to a single pixel. For digital images, the tiles are arranged in neat rows and columns. When the art (or image) is perceived from a distance, the nature of the individual tiles (or pixels) is lost and the image as a whole is appreciated.<\/p>\n

However, in most consumer cameras, a “color 1 shot” sensor is used. Onto this sensor is superimposed a mosaic color filter pattern, often consisting of a quad of red, blue and 2 green pixels. The quad is then repeated over the entire sensor. When an image is projected upon the sensor, each pixel measures only one of the three possible color components. That measurement contains some information about the luminance of the pixel and some information about its color. Unfortunately, a complete understanding of the color and luminance of a single pixel is not possible.<\/p>\n

Camera manufacturers have had two alternatives for dealing with this problem. First, the image can be intentionally blurred. This causes the smallest object to cover multiple pixels, which leads to a complete understanding of the color and luminance of even the smallest object. Unfortunately, it also reduces the resolution of the image. So even an 800,000 pixel camera that uses this technique for color imaging delivers less than true 800,000 pixel resolution.<\/p>\n

Color Aliasing<\/strong><\/h3>\n

Most professional cameras deal with this problem by actually sampling an individual red, green and blue measurement for each and every pixel. This is done with either a scanning technique, a rotating color filter technique or three parallel sensors. Each of these techniques is described in detail below.<\/p>\n

Print Size<\/strong><\/h3>\n

Once you determine the true resolution of the camera, this still leaves open the question of “how large can I print the image?” Answering it requires an understanding of the limitations of the print media being used. For example, if your target is an offset press at a 133 line screen, then 800,000 pixels of image data should be adequate for approximately 8 by 10 inches; more image data won’t show a discernible difference in the print. Alternatively, if your target output is a newspaper with a 100 line screen, then the target image size for the same 800,000 pixel image can be as large as 11 by 14 inches. The best way to determine the maximum printable size is to test the process with your printer. However, as a rule of thumb, we have found that roughly 0.8 pixels per halftone line are required to hold resolution.<\/p>\n

Continuous tone printers must be considered differently. Each dot of a continuous tone printer is, in fact, one full color pixel. Therefore, maximum resolution requires a one to one ratio between input and output pixels. In these cases, the limitation often winds up being the resolving ability of the viewer. As a point of interest, a 20 year old can see approximately 2000 pixels across 8 inches viewed at a reading distance of approximately 1 foot. The size of an “8×10” print and a magazine page are probably what they are because they fill the viewer’s field of view when viewed at a comfortable reading distance of about 1 foot. Likewise, when you look at a 4×5 print, you typically hold it closer so as to fill your field of view as well.<\/p>\n
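The rule of thumb above can be turned into a quick estimate. A minimal Python sketch, assuming the article’s 0.8 pixels-per-line figure (the function name is illustrative):<\/p>\n

```python
# Rough maximum print dimension (inches) for halftone output,
# assuming about 0.8 image pixels per halftone line.
def max_print_inches(pixels_on_axis: int, line_screen: float,
                     pixels_per_line: float = 0.8) -> float:
    return pixels_on_axis / (line_screen * pixels_per_line)

# 1000 pixels on the long axis at a 133-line screen:
print(round(max_print_inches(1000, 133), 1))  # roughly 9.4 inches
```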

For Web output, image requirements are quite low. Images are typically considered to be 72 dpi continuous tone. Thus, an 800 x 1000 image would occupy a screen area of approximately 11 by 14 inches, far too large for most applications.<\/p>\n
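The 72 dpi figure makes the screen-size arithmetic easy to check with a one-line Python sketch:<\/p>\n

```python
# On-screen size (inches) of an image displayed at a nominal 72 dpi.
def screen_inches(pixels: int, dpi: int = 72) -> float:
    return pixels / dpi

print(round(screen_inches(800), 1), round(screen_inches(1000), 1))  # 11.1 13.9
```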

Dynamic Range<\/strong><\/h3>\n

A more difficult aspect of imaging to understand is the measurement of dynamic range. Using the mosaic tile analogy again, this can be considered to be the number of different colors (or shades of gray in the monochrome case) that can be selected from when choosing individual tiles. For example, for 24 bit color, each tile could have one of 256 x 256 x 256, or 16,777,216, different colors.<\/p>\n

When designing a digital camera, the dynamic range relates to both the accuracy of the A\/D converter that changes the analog voltage representing a pixel’s brightness into a digital number, and the noise present in the system. Typical cameras have A\/D converters of 8 bits, or 256 discernible levels. Better cameras have 10, 12 or even 14 bit A\/Ds.<\/p>\n

The broader the dynamic range of the camera, the easier it will be to differentiate small gradations in brightness or color. This is especially important if you expect to modify the image later with tools like Photoshop.<\/p>\n
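The relationship between A\/D bit depth and discernible levels is a simple power of two, as this Python sketch shows:<\/p>\n

```python
# Number of discrete levels an N-bit A/D converter can represent.
def adc_levels(bits: int) -> int:
    return 2 ** bits

for bits in (8, 10, 12, 14):
    print(bits, adc_levels(bits))  # 256, 1024, 4096, 16384
```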

The noise is more difficult to measure. It can come from several different places, including optical noise from lens flare, and thermal and electronic noise from the power supply, the electronic circuitry and the sensor itself. Unique to digital cameras, a large component of the noise is thermally generated within the sensor. Cooling the sensor can reduce this noise, but electronically cooling it with a thermoelectric (TE) cooler is expensive and consumes quite a bit of power.<\/p>\n

Unfortunately, as the exposure time increases, the accumulation of thermal noise increases as well.<\/p>\n

If we consider each pixel to be a bucket that fills with electrons (made from photons) as light strikes it, then we can consider the thermal noise to be a slow drip that is constantly filling the bucket even when no light is striking it. The more noise, the less room in the bucket for light to accumulate. The longer the exposure time, the more noise that is accumulated and the less room for signal (light).<\/p>\n
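The bucket analogy can be made concrete with a toy model. In this Python sketch, the well capacity and dark-current rate are illustrative assumptions, not measurements of any real sensor:<\/p>\n

```python
# Toy model: a pixel "bucket" of fixed capacity fills with dark-current
# (thermal) electrons at a constant rate, leaving less room for signal.
def usable_capacity(well_capacity_e: float, dark_rate_e_per_s: float,
                    exposure_s: float) -> float:
    dark_electrons = dark_rate_e_per_s * exposure_s
    return max(well_capacity_e - dark_electrons, 0.0)

# Longer exposures leave less room for real signal
# (hypothetical 40,000 e- well, 100 e-/s dark current):
for t in (1, 10, 60):
    print(t, usable_capacity(40_000, 100.0, t))
```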

See Also:\u00a0Best DSLR Camera Under 500 \u2013 Nikon, Canon, Sony, Pentax, Olympus<\/a><\/strong><\/p>\n

In total, the dynamic range will translate into your ability to observe fine gradations in the darkest shadow areas of an image or, equivalently, in the brightest highlights. Better cameras will preserve color and texture even into the shadow areas. How much you need again relates to your output media and what details of the image you are interested in preserving.<\/p>\n

Achieving Long Exposure Times<\/strong><\/h3>\n

Sometimes it is desirable to take an extremely long exposure in order to achieve the appropriate lighting effect. In these cases, digital cameras have been unacceptable because the accumulation of thermal noise eats into the dynamic range until the camera can no longer gather useful image data. However, in concert with a computer, you can extend the dynamic range of a digital camera: take multiple exposures in sequence and simply add them together within the computer. In order for this to work, it is necessary that the raw sensor data be available to the host computer; otherwise the mathematics of this process will not be correct.<\/p>\n
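The frame-summing idea can be sketched in Python. This is a minimal illustration using plain lists; the frame values are made up, and real use requires the linear raw sensor data as noted above:<\/p>\n

```python
# Sum several short raw exposures pixel-by-pixel to approximate one
# long exposure.
def stack_frames(frames):
    if not frames:
        raise ValueError("need at least one frame")
    height, width = len(frames[0]), len(frames[0][0])
    total = [[0] * width for _ in range(height)]
    for frame in frames:
        for y in range(height):
            for x in range(width):
                total[y][x] += frame[y][x]
    return total

# Three 1-second "exposures" of a 2x2 sensor behave like one 3-second one:
frames = [[[10, 20], [30, 40]]] * 3
print(stack_frames(frames))  # [[30, 60], [90, 120]]
```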
Blooming<\/strong><\/h3>\n

There are some special artifacts that are present in digital cameras that are not found (or are differently manifested) in conventional film photography. The most noticeable is blooming. Blooming occurs when a pixel receives too much light; in these cases, the “bucket” can be considered to literally overflow. What happens to the extra electrons is what is important. In well designed cameras, the overflow is properly siphoned off without affecting neighboring pixels. In less well designed systems, the extra charge is allowed to spill into neighboring pixels, thereby spreading the size of the highlight. If the spread is non-uniform, as is typically found in a scanning camera, the artifact can be quite disturbing, since it appears as an unnatural stripe in the image.<\/p>\n

In photography one can typically find specular highlights that are several f-stops brighter than the rest of the image. For this reason, the effect of blooming is important to quantify for a specific camera.<\/p>\n

Sensitivity (or ISO)<\/strong><\/h3>\n

For a digital camera, as for a film camera, one is interested in the speed of the system in order to understand its applicability to different lighting conditions. Film has a relatively broad range of exposure. While it is generally tolerant of overexposure, underexposed images can appear grainy. In fact, the chemical processing can be varied to “push” the film in an attempt to make it appear more sensitive, at the cost of grain quality. And, in general, printing can be varied in an attempt to rescue an improperly exposed negative.<\/p>\n

Digital cameras are far less tolerant of overexposure. Once a pixel saturates or overflows, no further information about its brightness is available. Underexposed images suffer a similar fate as film in that the noise becomes pronounced. Achieving proper exposure is critical to good digital photography (as it is in conventional photography).<\/p>\n

To understand sensitivity we must consider how much light is required to fill, but not overflow, the pixels of a sensor. Obviously, the brighter areas of the image will fill their respective pixels more quickly; the goal of proper exposure is to just allow the brightest areas of the scene to fill (or slightly over-fill) their respective pixels. This is a function of the sensor’s sensitivity to light and the speed of the lens.<\/p>\n

Primarily, the photodiode sensitivity of a sensor is determined by its quantum efficiency, or how efficiently the sensor changes photons into electrons. Most sensors, whether CCD or CMOS, have similar quantum efficiencies. However, the number of photons required to “almost fill” a pixel will vary with the size of the bucket and the size of the active area that is doing the photon to electron conversion.<\/p>\n

A principal difference between CCDs and CMOS detectors is that in a CCD, nearly the entire pixel is available for conversion, whereas in a CMOS detector, much of the pixel is used for active circuitry and is not supposed to gather photons. The “fill factor” is the term that commonly refers to the percentage of the pixel site that is active in a sensor. Typical CMOS sensors today have fill factors of 30% while CCDs have fill factors closer to 95%. The CCD fill factor can be reduced in two ways: first, there are column and row traces that may occlude parts of the pixel. Second, it is necessary to prevent photons from inadvertently crossing at an angle from one pixel’s color dye to another’s photodiode. Therefore, an opaque boundary stripe is often used to surround pixels, which reduces the fill factor further.<\/p>\n

Keep in mind, however, that just comparing fill factors is not adequate, since sensor and pixel sizes vary between sensors as well. For example, an 800 x 1000 CMOS sensor that has a 10 micron pixel size and a 25% fill factor will still have the same size active area as a CCD of equivalent resolution with a fully active 5 micron pixel.<\/p>\n
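The active-area comparison reduces to simple geometry: pixel area times fill factor. A Python sketch, showing that a 10 micron pixel at 25% fill matches a fully active 5 micron pixel:<\/p>\n

```python
# Light-gathering area of one pixel = (pixel pitch)^2 x fill factor.
def active_area_um2(pixel_pitch_um: float, fill_factor: float) -> float:
    return pixel_pitch_um ** 2 * fill_factor

print(active_area_um2(10.0, 0.25))  # 25.0 square microns
print(active_area_um2(5.0, 1.0))    # 25.0 square microns
```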

The larger the bucket (or well, as it is called), the longer it will take to fill. However, the larger well will also offer a repository of more electrons to count and thus a better signal to noise ratio. Therefore, a compromise must be made between the objectives of high sensitivity and high dynamic range.<\/p>\n

Square pixels, frame transfer, interlaced?<\/strong><\/h3>\n

Many CCDs were born out of the television industry. Unfortunately, televisions were not defined to have square pixels; theirs are rectangular – wider than tall. As well, because of the limitations of transmission speeds, televisions were designed to be interlaced: they alternate between odd and even fields, each of which comprises half of the total number of horizontal lines of the image – the odd lines and the even lines. As such, these CCDs are poorly suited to still photography, where the output may be directed to a computer that has square pixels and no interlace.<\/p>\n

Various Camera Designs<\/strong><\/h2>\n

The following section contains a summary of several techniques that are used to build digital cameras. Each has strengths and weaknesses that make it appropriate for differing uses.<\/p>\n

The Basics<\/strong><\/p>\n

\"how<\/p>\n

A digital camera is similar to a film camera except that the film is replaced with an electronic sensor. The sensor is comprised of a grid of photodiodes which change the photons that strike them into electrons. The electrons are stored in small buckets (capacitors) which are read out as a series of varying voltages proportional to the image brightness. Each voltage is converted to a number by an analog to digital converter, and the series of numbers is stored and processed by a computer within the camera.<\/p>\n
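The last step, voltage to number, can be sketched as a simple quantizer. A minimal Python illustration, where the reference voltage and bit depth are illustrative assumptions:<\/p>\n

```python
# Map a pixel voltage in [0, v_ref] to an integer code from an N-bit A/D.
def quantize(voltage: float, v_ref: float = 1.0, bits: int = 8) -> int:
    levels = 2 ** bits
    code = int(voltage / v_ref * levels)
    return min(max(code, 0), levels - 1)  # clamp: saturation clips to full scale

print(quantize(0.5))  # 128 (half of full scale)
print(quantize(1.2))  # 255 (over-range clips, like an overexposed pixel)
```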

In many designs, a mechanical shutter is used in the same way that it is used in a film camera – to gate the light that is allowed to reach the sensor or film. Other cameras use what is called an electronic shutter, which controls when the sensor gathers light through electronic control signals to the sensor.<\/p>\n

The Scanning Camera (or Camera Back)<\/strong><\/h3>\n

How it’s done<\/strong><\/p>\n

This technique involves a linear array of photodiodes. Typical arrays are either a single row of photodiodes or a triplet of 3 rows covered with red, green and blue dyes, respectively. For color imaging with a single row, a color filter set of red, green and blue filters is moved sequentially into position. The array is positioned in the film plane of the camera and moved with a motor through thousands of positions perpendicular to its orientation. In this way, thousands of lines are acquired, one (or 3 in the 3 color case) for each physical position.<\/p>\n

Advantages<\/strong><\/p>\n

This type of camera offers two principal advantages. First, since linear arrays can be easily obtained with very high pixel counts (up to 10,000 per color), it is possible to get a very high resolution image. For example, with a 10,000 pixel “stick” oriented along the short axis of a 4×5 frame, the total number of pixels acquired can be 10,000 x 12,000 x 3 colors (or a 360 megabyte file!). A second advantage is that these “sticks” can be large enough to be placed in the film plane of existing 4×5 cameras, thereby providing perfect compatibility with the lenses and camera body already owned by the photographer.<\/p>\n
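The 360 megabyte figure follows directly from the pixel counts, assuming one byte per color sample; a quick Python check:<\/p>\n

```python
# Raw file size: width x height x color samples x bytes per sample.
def raw_size_mb(width: int, height: int,
                samples_per_pixel: int = 3, bytes_per_sample: int = 1) -> float:
    return width * height * samples_per_pixel * bytes_per_sample / 1_000_000

print(raw_size_mb(10_000, 12_000))  # 360.0 MB
```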

Disadvantages<\/strong><\/p>\n

However, the principal disadvantage of this type of camera is that it requires a long time to capture a single image. Therefore, the photographer must use a constant lighting source like an HMI light, which is expensive, hot and not readily available. It also limits the size of the object being photographed to one that can be adequately lit. The second limitation of this design is that the object being photographed cannot be moving. And, finally, one cannot preview the image in a “video” mode for composition, focus or lighting, since the scanning process is relatively slow.<\/p>\n

Applications<\/strong><\/p>\n

This camera design is well suited to graphic arts photography of stationary small objects where the target usage for the image is either gravure printing or display advertising at large sizes.<\/p>\n

The One Moving Chip Camera<\/strong><\/h3>\n

How it’s done<\/strong><\/p>\n

The Moving One Chip Camera is a variant of the scanning camera. It uses an inexpensive “1 Shot” type color sensor, but micro-steps it in sub-pixel amounts in both the X and Y directions. As a result, with some computation, one is able to build a high quality color image with minimal aliasing.<\/p>\n

Advantages<\/strong><\/p>\n

This technique offers moderate cost and very high resolution.<\/p>\n

Disadvantages<\/strong><\/p>\n

The scanning speed and re-ordering of data make it quite slow. As such, it requires HMI (continuous) lighting and is therefore not acceptable for many types of product shots.<\/p>\n

Applications<\/strong><\/p>\n

This camera design is well suited to graphic arts photography of stationary small objects where the target usage for the image is either gravure printing or display advertising at large sizes.<\/p>\n

The One Chip Camera – the “color 1 shot”<\/strong><\/h3>\n

How it’s done<\/strong><\/p>\n

The “One Chip” color camera uses a single 2 dimensional photodiode array. The array is covered with a set of miniature color filters – red, green and blue – which cover individual pixels in a predefined pattern. Various patterns have been used. A popular pattern, called the Bayer pattern, uses a square of 4 cells that includes 2 green pixels on one diagonal, and 1 red and 1 blue on the opposite diagonal.<\/p>\n
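The Bayer layout described above is easy to generate. A minimal Python sketch of the repeating 2x2 cell:<\/p>\n

```python
# Repeating 2x2 Bayer cell: two greens on one diagonal, red and blue
# on the other.
def bayer_pattern(rows: int, cols: int):
    cell = [["G", "R"],
            ["B", "G"]]
    return [[cell[y % 2][x % 2] for x in range(cols)] for y in range(rows)]

for row in bayer_pattern(4, 4):
    print(" ".join(row))  # G R G R / B G B G / ...
```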

Advantages<\/strong><\/p>\n

This type of camera is the least expensive to build. It can also be used for live action photography.<\/p>\n

Disadvantages<\/strong><\/p>\n

The principal disadvantage is an image artifact called color aliasing. This occurs when an object that is being photographed has a feature so small that it covers fewer than 3 color pixels. The result is that the real color of the object is impossible to ascertain, since one has only partial color information. This causes a miscoloration in the image that may appear as a colored dot in the hair or a colored fringe on a sharp edge.<\/p>\n

Applications<\/strong><\/p>\n

This camera design is well suited to web design, identification and badging, low resolution printing and consumer usage.<\/p>\n

The Three Chip Color Camera<\/strong><\/h3>\n

How it’s done<\/strong><\/p>\n

This type of camera uses a beam splitter to separate the incoming image into three component images which are sent to three independent sensors simultaneously. A beam splitter is a set of semi-transparent mirrors that reflect part of the image and transmit part. Some beam splitters can separate the colors of an image in this way as well.<\/p>\n

Advantages<\/strong><\/p>\n

This camera can shoot moving objects in color. It offers excellent resolution with no color aliasing. It is compatible with all types of lighting (except low frequency fluorescent, which may beat with the electronic shutter).<\/p>\n

Disadvantages<\/strong><\/p>\n

The cost of building this type of camera is high due to the triplication of the sensor and the strict tolerance for alignment of the three sensors. As well, the method used for splitting the image can result in the addition of various artifacts like secondary images or non-ideal spectral response.<\/p>\n

Applications<\/strong><\/p>\n

This camera design is well suited to graphic arts applications where moderate resolution is required. However, there is a price penalty for the ability to shoot live action.<\/p>\n

The One Chip \/ Three Shot Camera<\/strong><\/h3>\n

How it’s done<\/strong><\/p>\n

This type of camera uses a monochrome sensor and a rotating color filter wheel with 4 positions. The neutral position is used for focusing and composition and then three successive pictures are taken through each of the three filters: red, green and blue.<\/p>\n
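The assembly step of the three-shot process can be sketched in Python: three filtered monochrome frames become one RGB value per pixel site. The frame values here are illustrative:<\/p>\n

```python
# Combine three filtered monochrome exposures into an RGB image.
def combine_three_shot(red, green, blue):
    height, width = len(red), len(red[0])
    return [[(red[y][x], green[y][x], blue[y][x]) for x in range(width)]
            for y in range(height)]

r = [[200, 10]]
g = [[50, 10]]
b = [[20, 240]]
print(combine_three_shot(r, g, b))  # [[(200, 50, 20), (10, 10, 240)]]
```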

Advantages<\/strong><\/p>\n

The Three Shot Camera has the advantage of allowing the use of either strobe or ambient illumination. Thus, the photographer has the convenience of staging shots with equipment that they are accustomed to. The availability of strobe allows greater flexibility in the use of the aperture for depth of field. This design provides excellent resolution with no color aliasing. Additionally, it can be accomplished at relatively low cost.<\/p>\n

Disadvantages<\/strong><\/p>\n

This type of camera can only shoot stationary objects in color but it can shoot moving objects in monochrome.<\/p>\n

Applications<\/strong><\/p>\n

This camera design is well suited to graphic arts photography of stationary objects where the target usage for the image is either web or offset press at moderate to large sizes.<\/p>\n

Choosing a Type of Camera by Application<\/strong><\/h2>\n

Different applications will lead to different camera designs. The following brief descriptions of applications explain the rationale for certain camera design attributes.<\/p>\n

Graphic Arts<\/strong><\/p>\n

Catalogue and Advertising for the Professional Photographer<\/p>\n

The first premise of graphic arts photography is that the cost of photography is a small part of the total production cost of a piece of advertising or direct mail. Therefore, there is little motivation to reduce the cost of photography if the quality is not equal to or better than film. This first rule makes one shot cameras unacceptable due to their compromise in resolution or color aliasing. Scanning cameras and three shot designs have found acceptance in this market. However, the clear leader is the three shot design because of its compatibility with conventional strobe illumination.<\/p>\n

See Also:\u00a0Best Digital Cameras Under 300<\/a><\/strong><\/p>\n

The professional graphic arts photographer needs to have control over the selection of lens and the aperture setting for artistic management of the depth of field of the image. The only way to be able to shoot with small apertures is to have very large amounts of light. This is impractical in any way other than with conventional strobe.<\/p>\n

The digital workflow is clearly superior to the film alternative. It reduces the cost of materials (Polaroids and film) and it reduces the time to the finished product. The digital workflow eliminates the need to bracket, since the result is immediately evident. Also eliminated is the need to shoot two batches of film to protect against processing variations. Most importantly, however, the photographer is protected against having to re-stage the shot as a result of an exposure or processing failure.<\/p>\n

The Film Workflow<\/strong><\/p>\n