MT9D111 - 1/3.2-Inch 2-Megapixel SOC Digital Image Sensor
Architecture Overview
Black Level Conditioning and Digital Gain
Image stream processing starts with black level conditioning and multiplication of all
pixel values by a programmable digital gain.
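As a sketch of this first stage, the fragment below subtracts an assumed black-level offset and applies a fixed-point digital gain to each 10-bit pixel value. BLACK_LEVEL, the Q8.8 gain format, and the clamping are illustrative assumptions, not MT9D111 register definitions.

#include <stdint.h>

#define PIXEL_MAX   1023u   /* 10-bit pixel data from the sensor core        */
#define BLACK_LEVEL 42u     /* assumed black-level offset, in ADC counts     */

/* Subtract the black-level offset and apply a programmable digital gain.
 * The gain is assumed to be an unsigned Q8.8 fixed-point value (256 = 1.0x). */
static uint16_t black_level_and_gain(uint16_t raw, uint16_t gain_q8_8)
{
    uint16_t conditioned = (raw > BLACK_LEVEL) ? (uint16_t)(raw - BLACK_LEVEL) : 0u;
    uint32_t scaled = ((uint32_t)conditioned * gain_q8_8) >> 8;
    return (scaled > PIXEL_MAX) ? (uint16_t)PIXEL_MAX : (uint16_t)scaled;
}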
Lens Shading Correction
Inexpensive lenses tend to produce images whose brightness is significantly attenuated
near the edges. Chromatic aberration in such lenses can cause color variation across the
field of view. There are also other factors causing fixed-pattern signal gradients in images
captured by image sensors. The cumulative result of all these factors is known as lens
shading. The MT9D111 has an embedded lens shading correction (LC) module that can
be programmed to precisely counter the shading effect of a lens on each RGB color signal. The LC module multiplies the RGB signals by a two-dimensional correction function F(x,y), whose profile in both the x and y directions is a piecewise quadratic polynomial with coefficients independently programmable for each direction and color.
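The sketch below illustrates the principle with a correction gain modeled as the product of two one-dimensional, piecewise quadratic profiles, one per direction, evaluated per color channel. The two-piece split, the coefficient layout, the floating-point evaluation, and the separable form are assumptions for illustration only; the text states just that the x and y profiles of F(x,y) are piecewise quadratic with independently programmable coefficients.

#include <stdint.h>

/* One-dimensional piecewise quadratic profile: two quadratic pieces joined at
 * a knot. The two-piece layout and coefficient format are assumptions. */
typedef struct {
    int   knot;                 /* coordinate where the second piece starts   */
    float a0[2], a1[2], a2[2];  /* per-piece coefficients: a0 + a1*t + a2*t^2 */
} lc_profile_t;

static float lc_eval(const lc_profile_t *p, int u)
{
    int   piece = (u < p->knot) ? 0 : 1;
    float t     = (float)(u - (piece ? p->knot : 0));
    return p->a0[piece] + p->a1[piece] * t + p->a2[piece] * t * t;
}

/* Apply F(x,y), modeled here as the product of the x and y profiles, to one
 * color sample; the result is clamped back to the 10-bit pixel range. */
static uint16_t lc_correct(uint16_t in, const lc_profile_t *px,
                           const lc_profile_t *py, int x, int y)
{
    float out = (float)in * lc_eval(px, x) * lc_eval(py, y);
    return (out > 1023.0f) ? (uint16_t)1023u : (uint16_t)out;
}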
Line Buffers
Several data processing steps following the lens shading correction require access to
pixel values from up to 8 consecutive image lines. For these lines to be simultaneously
available for processing, they must be buffered. The IFP includes a number of SRAM line
buffers that are used to perform defect correction, color interpolation, image decimation, and JPEG encoding.
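A minimal model of such buffering is shown below: a circular pool of row buffers that keeps the most recent lines available for the kernel-based stages while new lines keep arriving. The buffer count and line width are illustrative; the only property taken from the text is that up to 8 consecutive lines must be simultaneously accessible.

#include <stdint.h>
#include <string.h>

#define N_LINES    8      /* consecutive lines kept available, per the text  */
#define LINE_WIDTH 1600   /* assumed number of active pixels per line        */

typedef struct {
    uint16_t lines[N_LINES][LINE_WIDTH];
    int      newest;   /* index of the most recently written row buffer      */
} line_buffer_t;

/* Store an incoming sensor line, overwriting the oldest buffered line. */
static void lb_push(line_buffer_t *lb, const uint16_t *line)
{
    lb->newest = (lb->newest + 1) % N_LINES;
    memcpy(lb->lines[lb->newest], line, sizeof(lb->lines[lb->newest]));
}

/* Fetch a buffered line by age: 0 = newest, N_LINES - 1 = oldest. */
static const uint16_t *lb_get(const line_buffer_t *lb, int age)
{
    return lb->lines[(lb->newest - age + N_LINES) % N_LINES];
}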
Defect Correction
The IFP performs on-the-fly defect correction that can mask pixel array defects such as
high-dark-current (“hot”) pixels and pixels that are darker or brighter than their neighbors due to photoresponse non-uniformity. The defect correction algorithm uses several pixel features to distinguish between normal and defective pixels. After identifying the defective pixels, it replaces their actual values with values inferred from those of their nearest same-color neighbors.
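The fragment below sketches one possible defect test in this spirit: a pixel that differs from all of its nearest same-color neighbors by more than a threshold is treated as defective and replaced with their average. The threshold, the 4-neighbor set, and the replacement rule are assumptions; the actual MT9D111 algorithm evaluates several pixel features.

#include <stdint.h>
#include <stdlib.h>

#define DEFECT_THRESHOLD 200   /* assumed deviation threshold, in ADC counts */

/* Treat a pixel as defective if it is far from every one of its nearest
 * same-color neighbors, and replace it with their average. In a Bayer array
 * the same-color neighbors used here sit two columns or two rows away;
 * bounds checks at the image border are omitted for brevity. */
static uint16_t correct_pixel(const uint16_t *img, int width, int x, int y)
{
    uint16_t p = img[y * width + x];
    uint16_t n[4] = {
        img[y * width + (x - 2)], img[y * width + (x + 2)],
        img[(y - 2) * width + x], img[(y + 2) * width + x],
    };

    uint32_t sum = 0;
    int      is_defective = 1;
    for (int i = 0; i < 4; i++) {
        sum += n[i];
        if (abs((int)p - (int)n[i]) <= DEFECT_THRESHOLD)
            is_defective = 0;   /* close to at least one neighbor: keep it   */
    }
    return is_defective ? (uint16_t)(sum / 4) : p;
}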
Color Interpolation and Edge Detection
In the raw data stream fed by the sensor core to the IFP, each pixel is represented by a 10-bit integer that can, to a first approximation, be considered proportional to the pixel’s response to a one-color light stimulus, red, green, or blue, depending on the pixel’s position under the color filter array. Initial data processing steps, up to and including the defect correction, preserve the 1-color-per-pixel nature of the data stream, but after the defect correction it must be converted to a 3-colors-per-pixel stream appropriate for standard color processing. The conversion is done by an edge-sensitive color interpolation module. The module pads the incomplete color information available for each pixel with information extracted from an appropriate set of neighboring pixels. The algorithm used to select this set and extract the information seeks the best compromise between maintaining the sharpness of the image and filtering out high-frequency noise.
The simplest interpolation algorithm is to sort the nearest 8 neighbors of every pixel into 3 sets, red, green, and blue; discard the set of pixels of the same color as the center pixel (if there are any); calculate average pixel values for the remaining 2 sets; and use the averages in lieu of the missing color data for the center pixel. Such averaging reduces high-frequency noise, but it also blurs and distorts sharp transitions (edges) in the image. To avoid this problem, the interpolation module performs edge detection in the neighborhood of every processed pixel and, depending on its results, extracts color information from neighboring pixels in a number of different ways. In effect, it performs low-pass filtering in flat-field image areas and avoids doing so near edges.
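The following sketch illustrates the edge-sensitive idea for the missing green value at a red or blue Bayer site: average all four green neighbors in flat areas, but average only along the direction of smaller gradient near an edge. The gradient test and threshold are generic illustrations of the principle, not the MT9D111’s actual interpolation kernel.

#include <stdint.h>
#include <stdlib.h>

#define EDGE_THRESHOLD 50   /* assumed gradient threshold for "flat" areas   */

/* Interpolate the missing green value at a red or blue Bayer site. The four
 * immediate horizontal/vertical neighbors of such a site are green pixels;
 * bounds checks at the image border are omitted for brevity. */
static uint16_t interpolate_green(const uint16_t *img, int width, int x, int y)
{
    uint16_t gl = img[y * width + (x - 1)], gr = img[y * width + (x + 1)];
    uint16_t gu = img[(y - 1) * width + x], gd = img[(y + 1) * width + x];

    int grad_h = abs((int)gl - (int)gr);   /* variation along the row        */
    int grad_v = abs((int)gu - (int)gd);   /* variation along the column     */

    if (grad_h < EDGE_THRESHOLD && grad_v < EDGE_THRESHOLD)
        return (uint16_t)((gl + gr + gu + gd) / 4);   /* flat area: low-pass */
    if (grad_h < grad_v)
        return (uint16_t)((gl + gr) / 2);  /* less change horizontally       */
    return (uint16_t)((gu + gd) / 2);      /* less change vertically         */
}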