Chapter 1: Digital Imaging Basics, Workflow, and Calibration
Chips and Pixels
All cameras function like human eyes (Figure 1.1). In both, a lens focuses light through a small hole (the iris) onto a receptive surface (retina, film, or chip) that translates the varying intensities and colors of the light into meaningful information. The main difference between the various cameras and the eye lies in that receptive surface. The eye's retina comprises two kinds of structures (rods and cones) with three basic color sensitivities (red, green, and blue). Film consists of silver-salt grains suspended in gelatin, layered in three emulsions to record color. A digital camera's chip carries photoreceptor sites on silicon; each site sits under one of three different colored filters to record light.
Figure 1.1 All cameras function like human eyes.
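To make the one-filter-per-site idea concrete, here is a minimal sketch in Python. It assumes an RGGB Bayer tile (a common layout, though not the only one); the function and variable names are illustrative, not from any real camera software. Each simulated site records a single intensity through its filter rather than a full color.

```python
# Toy "sensor" sketch: each photoreceptor site sits under one color
# filter, so it records a single light intensity, not a full RGB color.
# The RGGB tile below is an assumed (common) Bayer layout.

BAYER = [["R", "G"],
         ["G", "B"]]  # 2x2 filter tile, repeated across the chip

def capture(scene):
    """Reduce a full-color scene (rows of (r, g, b) tuples) to the
    single intensity each filtered site would actually record."""
    mosaic = []
    for y, row in enumerate(scene):
        out = []
        for x, (r, g, b) in enumerate(row):
            f = BAYER[y % 2][x % 2]          # which filter covers this site
            out.append({"R": r, "G": g, "B": b}[f])
        mosaic.append(out)
    return mosaic

# A pure-red scene: only the R-filtered sites read a nonzero intensity,
# which is why neighboring sites must be compared later to rebuild color.
red_scene = [[(255, 0, 0)] * 2 for _ in range(2)]
print(capture(red_scene))  # [[255, 0], [0, 0]]
```

Reconstructing a full color at every pixel from this one-value-per-site mosaic is the interpolation step described later in this section.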
Digital cameras are similar to eyes in that the camera's chip translates the light into information (electrical signals) directly. Much as the eye translates the light falling on the retina into nerve impulses (electrical signals) that travel to the brain for processing, the electrical signals from a digital camera require processing in a computer "brain" before they can be used to create photos.
The actual process is rather more complex, but a few things are important to understand. Most digital cameras capture images using chips whose receptor sites have red, green, and blue filters arranged in a regular pattern on the surface of the chip. Each receptor site captures only light intensity. During the processing phase, the color of light hitting a receptor is determined by calculating differences in intensity between adjacent sites that have red, green, or blue filters. This process produces an RGB bitmap image. A bitmap is a regular grid of square units of color; these units are called pixels. Color is determined by the relative values of red, green, and