When using the above flowchart, if one of the OK fields is reached, the buffer is not
at risk of being affected. If one of the Potential Problem fields is reached, see the
workarounds below.
Note—Figure 4 assumes that each buffer is aligned to a 64-byte boundary and
spans a multiple of 64 bytes, because the L1D cache line size is 64 bytes. If
that is not the case, the user might still see this issue even if an OK state
in the diagram is reached (see the Workaround for False-sharing section
below).
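For illustration only, the following C sketch shows one way to satisfy that alignment
assumption. It assumes the TI C6000 compiler's DATA_ALIGN pragma; the buffer name,
payload size, and rounding macro are hypothetical and used purely for the example.

    #define L1D_LINE_SIZE     64
    #define L1D_ROUND_UP(n)   (((n) + L1D_LINE_SIZE - 1) & ~(L1D_LINE_SIZE - 1))

    #define PAYLOAD_BYTES     1500   /* example payload size (hypothetical) */

    /* Align the buffer to an L1D line boundary and pad its length to a whole
       number of 64-byte lines so no other object can share its cache lines. */
    #pragma DATA_ALIGN(dmaBuffer, L1D_LINE_SIZE)
    unsigned char dmaBuffer[L1D_ROUND_UP(PAYLOAD_BYTES)];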
The bug occurs when the CPU writes within the same L1D cache line that the DMA
reads or writes. This can happen for multiple reasons. The following sections detail
workarounds for three scenarios:
1. The CPU writes to a buffer that the DMA then reads. This could be due either
to an in-place algorithm that operates on data brought in by the DMA or to an
out-of-place algorithm in which the CPU fills a buffer that the DMA then reads.
In either case, the CPU and DMA explicitly synchronize.
2. The CPU and DMA are updating distinct or unrelated objects that happen to
share a cache line. (This is sometimes called false sharing.) Because the objects are
unrelated, the DMA and CPU are not synchronized.
3. The CPU and DMA are both writing to the same structure without external
synchronization. This pattern often underlies software synchronization
implementations and lockless multiprocessing algorithms.
Workaround 1: Synchronizing DMA and CPU Access to Buffers
The CPU potentially triggers this bug when it reads and later writes to a buffer that the
DMA also accesses (for either reads or writes). The bug can occur if the DMA accesses the
affected line while the L1D cache writes it back to L2. To avoid this bug, programmers
can explicitly manage coherence on the buffer so that it is not present and dirty
in L1D when the DMA accesses it.
To explicitly manage coherence on the buffer, programmers should adhere to the
programming model described earlier: programs should write back or discard
inbound DMA buffers immediately after use and keep a strict policy of buffer
ownership such that a given buffer is owned by either the CPU or the DMA, but
never both, at any given time.
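As a rough illustration of this workaround, the sketch below flushes a CPU-written buffer
out of L1D before the DMA reads it, and discards cached copies of a DMA-written buffer
before the CPU touches it. The l1d_writeback(), l1d_invalidate(), dma_start_read(), and
dma_wait() functions are hypothetical placeholders; on this device they would typically
map onto the block cache-maintenance operations provided by the Chip Support Library or
an equivalent mechanism.

    /* Hypothetical helpers; see the note above. */
    extern void l1d_writeback(void *addr, unsigned int bytes);
    extern void l1d_invalidate(void *addr, unsigned int bytes);
    extern void dma_start_read(const void *src, unsigned int bytes);
    extern void dma_wait(void);

    void cpu_hands_buffer_to_dma(void *buf, unsigned int bytes)
    {
        /* The CPU has finished writing buf: write any dirty L1D lines back to
           L2 so nothing remains dirty in L1D when the DMA reads the buffer. */
        l1d_writeback(buf, bytes);

        dma_start_read(buf, bytes);   /* DMA now owns the buffer */
        dma_wait();
    }

    void cpu_takes_buffer_from_dma(void *buf, unsigned int bytes)
    {
        /* The DMA has just filled buf: discard any stale copies held in L1D
           before the CPU reads or modifies the data. */
        l1d_invalidate(buf, bytes);

        /* The CPU may now read and write buf. */
    }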
This model assumes the following (a sketch of the full cycle appears after the list):
1. The DMA fills the buffer during a period when the CPU does not access it.
2. The DMA engine or other mechanism signals to the CPU that it has finished
filling the buffer.
3. The CPU operates on the buffer, reading and writing to it, as necessary. The DMA
does not access the buffer at this time.
4. The CPU relinquishes control of the buffer so that DMA may refill it. (This may
be an implicit step in many implementations if the period between refills is much
longer than the time it takes the CPU to process the refilled buffer.)
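The hypothetical sketch below ties steps 1 through 4 together for a single buffer,
reusing the placeholder l1d_writeback()/l1d_invalidate() helpers from the previous
sketch; dma_fill_complete(), dma_release_buffer(), and process() are likewise made-up
names standing in for the application's own signaling and processing code.

    /* Hypothetical helpers and application hooks; see the note above. */
    extern void l1d_writeback(void *addr, unsigned int bytes);
    extern void l1d_invalidate(void *addr, unsigned int bytes);
    extern int  dma_fill_complete(void);                   /* step 2 signal  */
    extern void dma_release_buffer(void *buf);             /* step 4 handoff */
    extern void process(void *buf, unsigned int bytes);    /* step 3 work    */

    void buffer_ownership_loop(void *buf, unsigned int bytes)
    {
        for (;;) {
            /* Steps 1-2: the DMA owns the buffer; wait for its completion signal. */
            while (!dma_fill_complete())
                ;

            /* Ownership passes to the CPU: discard stale L1D copies first. */
            l1d_invalidate(buf, bytes);

            /* Step 3: the CPU reads and writes the buffer as necessary. */
            process(buf, bytes);

            /* Step 4: write back the buffer immediately after use so it is not
               dirty in L1D, then return ownership to the DMA. */
            l1d_writeback(buf, bytes);
            dma_release_buffer(buf);
        }
    }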