User's manual
width of 32 bits, which is insufficient for our needs. Furthermore, they discourage the creation of pipelined dividers of greater width due to the high area cost.
As such, we require 80 clock cycles per pixel, resulting in a frame rate of 1-2
frames per second. Note that with sufficient hardware resources, this could
easily be converted into a real-time system.
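The figures above can be sanity-checked with a quick model. The sketch below assumes the 80-cycle divide is paid once per output pixel at 640 by 480 and tries two plausible system clocks; both the resolution and the clock rates are assumptions, since the report states only the 80-cycle cost and the resulting 1-2 fps:

```python
# Hypothetical sanity check: frame rate when every pixel costs 80 clock cycles.
# The 640x480 output size and the candidate clock rates are assumptions.

CYCLES_PER_PIXEL = 80

def frame_rate(clock_hz, width, height, cycles_per_pixel=CYCLES_PER_PIXEL):
    """Frames per second when each pixel costs a fixed number of cycles."""
    return clock_hz / (width * height * cycles_per_pixel)

for clk in (27e6, 50e6):  # candidate system clocks (assumed)
    print(f"{clk / 1e6:.0f} MHz -> {frame_rate(clk, 640, 480):.2f} fps")
```

Under these assumptions the result lands in the stated 1-2 fps range, consistent with the claim that a faster (or wider, pipelined) divider would make the system real-time.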
5.13 ntsc_to_bram (James)
Most of the staff code from the zbt_6111 example was kept intact, including the
ADV7185 initialization module and the NTSC decoding module. ntsc_to_zbt
was converted to ntsc_to_bram, which involved a few changes. First, ROW_START
and COL_START were changed to 0, since we want our image to fill the entire
screen (in the staff example, the upper-left corner of the image does not start
at pixel (0,0)). We expanded all of the registers holding NTSC data from 8 bits
to 30 bits wide, since we want 12-bit color and need the full YCrCb data. We
maintained the original approach of reading only scan lines from field 0 and
alternately writing them to even and odd screen rows. Since we stored pixels in
BRAM, which has a maximum capacity of 2.5 Mbits, and we needed to store a
frame from the camera and a frame of transformed output in addition to the
look-up table (discussed above), we only had space for a 320 by 240 frame at
12 bits of color per pixel. We therefore cropped the NTSC input to its upper-left
320 by 240 rectangle.
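The memory budget above works out as follows (a quick arithmetic sketch; it takes 2.5 Mbits as 2.5 x 2^20 bits, an assumption about the unit, and the look-up table size is not restated here, so only the space left over for it is computed):

```python
# Budget for the 2.5 Mbit BRAM described above.
BRAM_BITS = 2_621_440  # 2.5 Mbit, assuming Mbit = 2**20 bits

WIDTH, HEIGHT, BITS_PER_PIXEL = 320, 240, 12
frame_bits = WIDTH * HEIGHT * BITS_PER_PIXEL  # one 320x240 frame at 12 bpp
two_frames = 2 * frame_bits                   # camera frame + transformed frame

print(f"one frame:              {frame_bits:,} bits")
print(f"two frames:             {two_frames:,} bits")
print(f"left for look-up table: {BRAM_BITS - two_frames:,} bits")
```

Two frames consume 1,843,200 bits, so the design fits only because the frames were cropped to 320 by 240; a pair of full 640 by 480 frames would need four times as much storage.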
Since we now store one pixel per memory location as opposed to four (we
were able to size our BRAM so that each location holds 12 bits), we write to
BRAM whenever the x address is less than 320, the y address is less than 240,
and we are at a positive edge of the write enable, given by the we_edge signal
in the code. (In the staff code, we wrote to ZBT only when the x address was a
multiple of 4, since we stored 4 pixels per memory location.) We determined
the BRAM address from the x and y coordinates by treating BRAM as a 2D
array with 240 rows and 320 columns laid out in row-major order (the first
row in the first 320 locations, the second row in the next 320, and so on). So
we simply multiplied the y coordinate by 320 and added the x coordinate to get
the BRAM address. Finally, since we needed to convert the YCrCb data to RGB
using the staff-provided module, which takes 3 clock cycles, we also needed to
delay the BRAM address and write-enable signals by three clock cycles, using
the staff-provided synchronize module, so that we wrote the right data to the
right addresses.
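The addressing and gating logic described above can be modeled in a few lines. This is a Python sketch of hardware that was actually written in Verilog; the 3-entry delay line stands in for the staff-provided synchronize module:

```python
from collections import deque

WIDTH, HEIGHT = 320, 240

def bram_addr(x, y):
    """Row-major address: row y occupies locations 320*y .. 320*y + 319."""
    return y * WIDTH + x

def write_enable(x, y, we_edge):
    """Write only inside the cropped 320x240 region, on a we_edge pulse."""
    return we_edge and x < WIDTH and y < HEIGHT

# Model of the 3-cycle delay that matches the YCrCb-to-RGB latency:
# (addr, we) pairs enter one end and emerge 3 clocks later, now aligned
# with the RGB data coming out of the conversion pipeline.
delay = deque([(0, False)] * 3, maxlen=3)

def clock(addr, we):
    delayed = delay[0]        # the pair that entered 3 cycles ago
    delay.append((addr, we))  # oldest entry falls off the left end
    return delayed
```

Without the delay, the address and write enable would arrive at the BRAM three cycles ahead of the corresponding RGB pixel, writing each pixel's data to the wrong location.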
Since our BRAM has both read and write ports, we were able to write camera
data to the BRAM at the same time that our transformation code fetched pixels
from this BRAM, eliminating the need for any coordination mechanisms.
One major challenge in designing this module was eliminating the appearance
of randomly colored pixels in the video output. We expected the video to
be grainy because we were scaling a 320 by 240 image to 640 by 480 resolution
for projection, but the random pixels were unexplained until Gim pointed out
that we were writing the BRAM at a different clock rate (the system clock rate)