Digital Camera Fundamentals

Digital photographs (and scanned impressions) should be captured in a color mode.  (In some instances, 16-bit grayscale mode is acceptable for flatbed scanners.)  Some digital cameras provide an option for 12- or 14-bit color depth.  In either case, the camera should be set to its highest bit depth.  Failure to do so will limit the sensitivity (dynamic range) of the imaging sensor.  The higher the dynamic range (bit depth), the easier it is to distinguish the impression (ridge) detail from the background.

Pixel values for digital images captured with a digital camera are based on filtered light intensity.  The light coming through the lens is first filtered by a color filter array consisting of Red, Green, and Blue filters.  This process suppresses (removes) the color value, allowing the photoreceptors to capture only the light (brightness) data.  Color is added BACK into the image when the camera RAW converter changes the brightness data into color values.  (This process is commonly referred to as demosaicing.)
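
As a rough sketch of that capture-and-demosaic idea, the code below builds a tiny simulated RGGB Bayer readout and averages each 2-by-2 cell back into one RGB pixel.  The RGGB layout, the sample values, and the averaging rule are assumptions chosen for illustration; real camera RAW converters use far more sophisticated interpolation.

    import numpy as np

    # Simulated 4x4 sensor readout: each photoreceptor records only brightness,
    # filtered through an RGGB Bayer pattern (layout and values are illustrative).
    raw = np.array([
        [200,  60, 198,  58],   # R G R G
        [ 64,  30,  62,  28],   # G B G B
        [202,  66, 196,  60],   # R G R G
        [ 60,  32,  58,  30],   # G B G B
    ], dtype=np.float32)

    def naive_demosaic(mosaic):
        """Crude demosaic: average each 2x2 RGGB cell into one RGB pixel.
        Real RAW converters interpolate a full-resolution color value for
        every photosite; this only shows the idea of turning filtered
        brightness data back into color."""
        h, w = mosaic.shape
        rgb = np.zeros((h // 2, w // 2, 3), dtype=np.float32)
        for y in range(0, h, 2):
            for x in range(0, w, 2):
                r = mosaic[y, x]                                # red-filtered site
                g = (mosaic[y, x + 1] + mosaic[y + 1, x]) / 2   # two green sites
                b = mosaic[y + 1, x + 1]                        # blue-filtered site
                rgb[y // 2, x // 2] = (r, g, b)
        return rgb

    print(naive_demosaic(raw))

Averaging a 2-by-2 cell halves the resolution, which is why real converters interpolate instead; the point here is only that color is computed from brightness samples rather than recorded directly.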

Based on the selected camera setting, the light intensity (brightness) recorded by each photoreceptor is expressed as 1 of 256 shades of brightness (8-bit TIF and JPG), 1 of 4,096 shades of brightness (12-bit), or 1 of 16,384 shades of brightness (14-bit).
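
Those shade counts are simply powers of two of the bit depth; a quick check in Python:

    # Number of distinguishable brightness levels per photoreceptor at each bit depth
    for bits in (8, 12, 14, 16):
        print(f"{bits}-bit: {2 ** bits:,} shades")
    # 8-bit: 256   12-bit: 4,096   14-bit: 16,384   16-bit: 65,536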

There is not a single video card or monitor available today that can accurately synthesize or render hue (color), saturation (diffusion of color), and brightness (whiteness) at more than 256 shades per color channel.  Also, some software applications and drivers do not handle “on the fly” color conversion because of the 8-, 12-, or 14-bit brightness issue.

The Microsoft Windows environment is based on 8-bit color depth.  Color is rendered as 8 bits per color channel and is stored with 8 bits (one byte) per channel.  Therefore:

  • An 8-bit grayscale image consists of 1 byte for every pixel in the image. A 12-million-pixel (aka 12-megapixel) image would consist of 12 million bytes of data, or approximately 11.4 MB (converting 1,024 bytes to 1 kilobyte and 1,024 kilobytes to 1 megabyte).
  • A 24-bit color image consists of 3 bytes of image data for every pixel, where 8 bits (1 byte) express the shade of Red, 8 bits (1 byte) express the shade of Green, and 8 bits (1 byte) express the shade of Blue. Again, a 12-million-pixel (aka 12-megapixel) image would consist of 36 million bytes of data, or approximately 34.3 MB (converting bytes to kilobytes to megabytes). The arithmetic is sketched in code after this list.
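
Those two figures come straight from the multiplication above.  A minimal sketch (the helper name is ours; the 12-megapixel count is the example used above):

    def uncompressed_size_mb(pixels, bytes_per_pixel):
        """Uncompressed image size in megabytes (1 KB = 1,024 bytes, 1 MB = 1,024 KB)."""
        return pixels * bytes_per_pixel / (1024 * 1024)

    pixels = 12_000_000                                                   # 12-megapixel example image
    print(f"8-bit grayscale: {uncompressed_size_mb(pixels, 1):.1f} MB")   # ~11.4 MB
    print(f"24-bit color:    {uncompressed_size_mb(pixels, 3):.1f} MB")   # ~34.3 MB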

Both 12- and 14-bit images are converted to 16 bits because the Windows operating system is based on multiples of 8.  Even JPG compression is based on multiples of 8: an 8-pixel by 8-pixel grid.

In 12- and 14-bit images, the last 4 and last 2 bits, respectively, are filled with null values to complete the 16-bit (8 bits + 8 bits) or 2-byte architecture.  Thus:

  • A 16-bit grayscale image consists of 2 bytes for every pixel in the image. A 12-million-pixel image would consist of 24 million bytes of data, or approximately 22.9 MB (converting 1,024 bytes to 1 kilobyte and 1,024 kilobytes to 1 megabyte).
  • A 48-bit color image consists of 6 bytes of image data for every pixel, where 16 bits (2 bytes) express the shade of Red, 16 bits (2 bytes) express the shade of Green, and 16 bits (2 bytes) express the shade of Blue. Again, a 12-million-pixel image would consist of 72 million bytes of data, or approximately 68.7 MB (converting bytes to kilobytes to megabytes); the arithmetic is sketched after this list.
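
The same arithmetic for the 2-byte and 6-byte cases, again using the 12-megapixel example:

    pixels = 12_000_000            # same 12-megapixel example image
    mb = 1024 * 1024               # bytes per megabyte
    print(f"16-bit grayscale: {pixels * 2 / mb:.1f} MB")   # ~22.9 MB
    print(f"48-bit color:     {pixels * 6 / mb:.1f} MB")   # ~68.7 MB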

At this point, you are probably asking, “who cares?”  The answer: the image color cares.  The pixel values in a digital image depend on how that translation is performed.  In 12- and 14-bit camera RAW images, color (aka white balance) can be impacted significantly based upon how the null values are treated in the computation of color.  A pixel in a digital image with a 16-bit color value of:

  12 bits:  1 1 1 1 1 1 1 1 1 1 1 1 _ _ _ _
  14 bits:  1 1 1 1 1 1 1 1 1 1 1 1 1 1 _ _

is rendered with a completely different color value than:

  12 bits:  1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0
  14 bits:  1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0

To make matters worse, some applications truncate all of the bits above the first 8 bits for display purposes.  That drastically changes the color value for each pixel.
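
To make that concrete, here is a small numeric sketch.  The mid-gray sample value, the zero padding, and the scaling rules are assumptions chosen purely for illustration and do not describe any particular RAW converter or viewer:

    sample_12bit = 2048                  # a mid-gray 12-bit sample (illustrative value)

    # Stored in the high bits of a 16-bit container, last 4 bits padded with zeros
    padded = sample_12bit << 4           # 32768

    # A converter that accounts for the padding renders mid gray, as expected
    print(round(padded / 65535 * 255))        # 128

    # A converter that treats the unshifted 12-bit value as if it filled all 16 bits
    # renders the very same sample as almost black
    print(round(sample_12bit / 65535 * 255))  # 8

    # An application that keeps only 8 of the 16 bits for display distorts it further
    print(sample_12bit >> 8)                  # 8  (keeping the high 8 bits)
    print(sample_12bit & 0xFF)                # 0  (keeping the low 8 bits)

In each failure case, the same photoreceptor reading ends up nearly or completely black, which is exactly the kind of shift that throws off white balance.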

This process of color aliasing is why it is imperative that the latent print examiner or crime scene technician adjust the white balance of a digital image properly when opening a camera RAW image in Adobe Photoshop or when displaying a camera RAW image in the Foray ADAMS full view.

Proper white balance is also defined by the settings of the camera itself.  The one challenge that has plagued me in all the years I have worked at Foray is getting examiners to understand that “Auto” is not an acceptable camera setting for photographing forensic evidence!

Below are some of the more important issues and settings for capturing latent impressions.
