How Film Works
Photographic negative film carries millions of tiny, light-sensitive silver halide crystals on its surface. Each picture on a roll is recorded on its own area of the film, called a frame. As you take pictures and wind the film, the most recently exposed frame moves out of the area behind the camera's lens and an unexposed frame moves into place, until you reach the end of the roll.
When the film is developed, the crystals that were exposed to light remain on the film; those that weren't exposed to light are removed in the developing process. (The process works just the opposite for slide film, which produces a positive image instead of a negative.) As a result, dark areas on the film have more crystals; lighter areas have fewer.
Where Do Pixels Come From?
Digital images on your computer screen are composed of a grid of colored squares called pixels. Each pixel is described by three or four numbers that define its color and brightness. In the RGB color space most commonly used for consumer digital imaging, each pixel has a red, green, and blue value, and each value ranges from zero (dark) to 255 (bright). Red, green, and blue light combine to make white, so a pixel with an RGB value of 255,255,255 displays as 100% white. Similarly, a pixel with a value of 0,0,0 displays as black, and a pixel with a value of 0,255,0 displays as bright green. There are other color space systems besides RGB. For example, the cyan, magenta, yellow, black (CMYK) system is often used for images destined for conventional four-color offset printing presses, which print with cyan, magenta, yellow, and black inks.
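To make those numbers concrete, here's a minimal sketch in plain Python. The tuples are just an illustration of the values, not any particular file format:

```python
# Each pixel is a (red, green, blue) triple; each channel runs 0-255.
white = (255, 255, 255)  # all channels at full brightness -> white
black = (0, 0, 0)        # no light in any channel -> black
green = (0, 255, 0)      # only the green channel lit -> bright green

for name, (r, g, b) in [("white", white), ("black", black), ("green", green)]:
    print(f"{name:6s} R={r:3d} G={g:3d} B={b:3d}")
```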
What's a JPEG?
Digital images are stored in electronic files, and the most common of these is the Joint Photographic Experts Group, or JPEG, format. JPEG files can be stored with varying degrees of electronic compression, which makes the files smaller and faster to work with. Information about file formats and compression is presented in more detail in Chapter 20, "Outsourcing Your Printing."
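If you'd like to see compression at work before then, here's a minimal sketch using the third-party Pillow library. "photo.png" is a placeholder for any image you have handy, and the exact byte counts will vary with the image:

```python
# Save the same image at several JPEG quality settings, using the
# third-party Pillow library (pip install Pillow). Lower quality means
# heavier compression and a smaller file. "photo.png" is a placeholder.
import os
from PIL import Image

img = Image.open("photo.png").convert("RGB")
for quality in (95, 75, 25):
    name = f"photo_q{quality}.jpg"
    img.save(name, "JPEG", quality=quality)
    print(f"quality={quality:2d} -> {os.path.getsize(name):,} bytes")
```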
Digital cameras are basically small computers that convert live images into digital files. They record images by electronically detecting light (photons) striking the face of an electronic image sensor. The face of the image sensor contains millions of light-sensitive transistors called phototransistors or photosites. Each photosite represents one pixel, and the terms are often used interchangeably when discussing image sensors. When light strikes one of the photosites, it causes a change in the electrical charge flowing through the transistor. The stronger the light, the stronger the change.
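As a toy model of that light-to-charge relationship, here's a short sketch; the gain and saturation numbers are invented purely for illustration, not taken from any real sensor:

```python
# A toy model of a photosite: charge grows linearly with the light
# striking it, up to a saturation point. The gain and full-well values
# are made up for illustration only.
def photosite_charge(photons, gain=0.01, full_well=255.0):
    return min(photons * gain, full_well)  # a real photosite "clips" when full

for photons in (0, 5_000, 20_000, 50_000):
    print(f"{photons:6d} photons -> charge {photosite_charge(photons):6.1f}")
```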
The camera builds an image from the array of pixels by electronically scanning the contents of each pixel. Image sensors are monochrome; that is, they record only brightness, seeing the world in shades of gray. To make a black-and-white sensor see color, the sensor is covered with a mosaic of colored filters called a color filter array, or CFA, with one filter over each photosite. Most cameras use red, green, and blue (called GRGB) CFAs, although some use a cyan, yellow, green, and magenta (CYGM) array. For clarity, I'll illustrate the more common GRGB arrangement, but the process is the same for CYGM sensors.
The filter dyes effectively make each photosite sensitive to only a single color, depending on the color of the dye. The dyes are applied in a pattern (called a Bayer pattern) such that each row alternates either red and green or blue and green pixels. If you do the math, you'll see that in a GRGB sensor there are twice as many green pixels as there are red or blue. That's because green provides much of the perceived detail in the picture, while red and blue contribute relatively little detail information. By using twice as many green pixels, camera designers can squeeze the most detail out of the image sensor.
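Here's a small Python sketch that lays out the GRGB pattern for a tiny 4 x 4 sensor and does that math for you:

```python
# Build the GRGB (Bayer) layout: even rows alternate green/red,
# odd rows alternate blue/green.
def bayer_color(row, col):
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

mosaic = [[bayer_color(r, c) for c in range(4)] for r in range(4)]
for row in mosaic:
    print(" ".join(row))             # G R G R / B G B G / ...

flat = [c for row in mosaic for c in row]
for color in "RGB":
    print(color, flat.count(color))  # R 4, G 8, B 4 -- twice as many greens
```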
When you take a picture, a chip inside the camera called an image processor reads the data collected by the image sensor. The processor mathematically combines the data from each pixel with the data from its neighboring pixels to produce a full RGB value for every pixel. The RGB data is then saved as an image file on the camera's storage media.
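That neighbor-combining step is called demosaicing. Here's a much-simplified sketch of the idea; real image processors use far more sophisticated interpolation than this plain averaging:

```python
# A much-simplified demosaicing sketch: each photosite recorded only one
# channel, so we estimate the other two by averaging whatever neighboring
# samples exist for each channel. NaN marks "not recorded at this site."
import numpy as np

def bayer_color(row, col):          # same GRGB layout as the sketch above
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

def demosaic(raw):
    h, w = raw.shape
    chan = {"R": 0, "G": 1, "B": 2}
    samples = np.full((h, w, 3), np.nan)
    for r in range(h):
        for c in range(w):
            samples[r, c, chan[bayer_color(r, c)]] = raw[r, c]
    rgb = np.empty((h, w, 3))
    for r in range(h):
        for c in range(w):
            for k in range(3):      # average the recorded neighbors
                patch = samples[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2, k]
                rgb[r, c, k] = np.nanmean(patch)
    return rgb

raw = np.full((4, 4), 128.0)        # a tiny "sensor" under uniform gray light
print(demosaic(raw)[0, 0])          # -> [128. 128. 128.]
```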

