MAXDATA SR1202 M1 – StorView® RAID User Guide
16 Optimizing RAID 5 Write Performance
Introduction
In a typical RAID 5 implementation, several steps are performed each time data is
written to the media. Every write from the host system normally generates two XOR operations
and their associated data transfers to two drives. If the accesses are sequential, the parity information
is updated repeatedly in succession. However, if the host writes enough data to cover a
complete stripe, the parity does not need to be updated for each write; it can be recalculated
from the new data instead. This takes only one XOR operation per host write, compared to two for a
standard RAID 5 write. The number of data transfers is also reduced, increasing the available
bandwidth. This type of write access is termed a “Full Stripe Write.”
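The two update paths described above can be contrasted in a short sketch. This is illustrative code, not the controller's actual firmware: a partial-stripe read-modify-write needs two XOR passes per host write, while a Full Stripe Write recalculates parity from the new data in a single pass.

```python
def rmw_update(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Partial-stripe update: two XOR operations per host write.
    new_parity = old_parity XOR old_data XOR new_data."""
    step1 = bytes(p ^ d for p, d in zip(old_parity, old_data))    # XOR #1
    return bytes(s ^ n for s, n in zip(step1, new_data))          # XOR #2

def full_stripe_parity(chunks: list[bytes]) -> bytes:
    """Full Stripe Write: parity recalculated once from the new stripe data."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:                                          # one XOR pass over the stripe
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)
```

Both paths produce the same parity; the difference is that the read-modify-write path must also first read the old data and old parity from two drives, which is where the extra data transfers come from.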
The following illustration shows the distribution of data chunks (denoted by Cx) and their associated
parity (denoted by P(y-z)) in a RAID 5 array of five drives. An “array” is a set of drives across
which data is distributed; an array has exactly one RAID level. A “chunk” is the amount of contiguous
data stored on one drive before the controller switches to the next drive. This parameter is
adjustable from 64K to 256K, and should be carefully chosen to match the access sizes of the operating
system. A “stripe” is the set of chunks in an array that share the same address. In the example below,
Stripe 0 consists of C0, C1, C2, and C3 and their associated parity P(0-3).
Figure 110. Distribution of Data and Parity in a RAID 5 with Five Drives
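A layout like the one in the figure can be generated programmatically. The sketch below assumes a particular parity rotation (parity starting on the last drive and stepping back one drive per stripe); real controllers may rotate parity differently, so treat this as an illustration of the concept rather than the controller's exact mapping.

```python
def raid5_layout(num_drives: int, num_stripes: int) -> list[list[str]]:
    """Illustrative RAID 5 chunk/parity map. Each row is one stripe;
    each column is one drive. Parity rotation direction is an assumption."""
    data_per_stripe = num_drives - 1          # one chunk per stripe holds parity
    layout = []
    for stripe in range(num_stripes):
        parity_drive = (num_drives - 1 - stripe) % num_drives
        lo = stripe * data_per_stripe         # first data chunk in this stripe
        row, chunk = [], lo
        for drive in range(num_drives):
            if drive == parity_drive:
                row.append(f"P({lo}-{lo + data_per_stripe - 1})")
            else:
                row.append(f"C{chunk}")
                chunk += 1
        layout.append(row)
    return layout
```

With five drives, the first row comes out as C0, C1, C2, C3, P(0-3), matching Stripe 0 in the figure.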
Maximum performance is achieved when all drives are executing multiple commands in parallel.
To take advantage of a Full Stripe Write, the host must send enough data to the controller. This
can be accomplished in two ways. First, if the host sends one command with sufficient data to fill
a stripe, the controller can perform a Full Stripe Write directly. Alternatively, if the host sends multiple
sequential commands smaller than the stripe size (typically matching the chunk size), the controller can
combine these commands internally to the same effect. In the above example, if a 256K chunk
size is used, then the stripe size is 1 MB (4 chunks * 256K). So, for maximum performance, the host
could send either 5 * 1 MB write commands or 20 * 256K write commands.
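The arithmetic from the example above can be written out explicitly. The values below restate the five-drive, 256K-chunk configuration from the text; only the variable names are introduced here.

```python
CHUNK_KB = 256                    # chunk size, adjustable from 64K to 256K
NUM_DRIVES = 5                    # five-drive RAID 5 array
DATA_DRIVES = NUM_DRIVES - 1      # one chunk per stripe holds parity

stripe_kb = DATA_DRIVES * CHUNK_KB        # 4 * 256K = 1024 KB = 1 MB per stripe

# Two equivalent ways for the host to deliver five full stripes of data:
full_stripe_cmds = 5                      # 5 x 1 MB write commands
chunk_sized_cmds = 5 * DATA_DRIVES        # 20 x 256K write commands
```

Either pattern lets the controller issue only Full Stripe Writes; the second relies on the controller combining sequential chunk-sized commands internally.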
The controller’s ability to perform a Full Stripe Write depends on a number of
parameters: