Administrator Guide
• Example 2: Consider a RAID-5 disk group with six disks. The equivalent of five disks now provides usable capacity. Assume the
controller again uses a stripe unit of 512 KB, so a full stripe is 5 × 512 KB = 2.5 MB. When a 4-MB page is pushed to the disk group,
the first 2.5 MB fills one stripe, but the remaining 1.5 MB is a partial-stripe write: the controller must read old data and old parity
from the disk group in combination with the new data in order to calculate new parity. This is known as a read-modify-write, and it is
a performance killer with sequential workloads. In essence, every page push to such a disk group would result in a read-modify-write.
To mitigate this issue, the controllers use a stripe unit of 64 KB when a RAID-5 or RAID-6 disk group is not created with a power-of-two
number of data disks. This results in many more full-stripe writes, but at the cost of many more I/O transactions per disk to push the
same 4-MB page.
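The stripe arithmetic above can be sketched in a few lines. This is an illustration, not part of the guide; the disk counts and stripe units follow the examples in the text, and the function name is hypothetical:

```python
# Sketch: how a 4-MB page maps onto RAID-5/6 stripes, and why a
# power-of-two data-disk count yields only full-stripe writes while
# other counts leave a partial stripe (a read-modify-write).

PAGE_KB = 4 * 1024  # tiered-storage page size: 4 MB


def stripe_breakdown(data_disks, stripe_unit_kb):
    """Return (full_stripe_writes, leftover_kb) for one 4-MB page push."""
    full_stripe_kb = data_disks * stripe_unit_kb
    full, leftover = divmod(PAGE_KB, full_stripe_kb)
    return full, leftover


# Power-of-two case (for example, 4 data disks with a 512-KB stripe unit):
print(stripe_breakdown(4, 512))   # (2, 0): two full-stripe writes, no RMW

# Example 2: five data disks (six-disk RAID-5), 512-KB stripe unit:
print(stripe_breakdown(5, 512))   # (1, 1536): a 1.5-MB partial stripe -> RMW

# Mitigation: a 64-KB stripe unit on the same five data disks:
print(stripe_breakdown(5, 64))    # (12, 256): mostly full-stripe writes,
                                  # but far more I/O transactions per disk
```

The leftover kilobytes in the second case are exactly the partial stripe that forces the controller into a read-modify-write.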
The following table shows recommended disk counts for RAID-6 and RAID-5 disk groups. Each entry specifies the total number of disks
and the equivalent numbers of data and parity disks in the disk group. Note that parity is actually distributed among all the disks.
Table 41. Recommended disk group sizes

RAID level   Total disks   Data disks (equivalent)   Parity disks (equivalent)
RAID 6       4             2                         2
             6             4                         2
             10            8                         2
RAID 5       3             2                         1
             5             4                         1
             9             8                         1
To ensure the best performance with sequential workloads on RAID-5 and RAID-6 disk groups, use a power-of-two number of data disks.
Disk groups in a pool
For better efficiency and performance, use similar disk groups in a pool.
• Disk count balance: For example, with 20 disks, it is better to have two 8+2 RAID-6 disk groups than one 10+2 RAID-6 disk group and
one 6+2 RAID-6 disk group.
• RAID balance: It is better to have two RAID-5 disk groups than one RAID-5 disk group and one RAID-6 disk group.
• Write-rate balance: Because of wide striping, tiers and pools write only as fast as their slowest disk group.
• All disks in a tier should be the same type. For example, use all 10K disks or all 15K disks in the Standard tier.
Create more small disk groups instead of fewer large disk groups.
• Each disk group has a write queue depth limit of 100. Creating more disk groups therefore raises the aggregate queue depth of the pool,
so write-intensive applications can sustain larger queue depths while staying within latency requirements.
• Using smaller disk groups costs more raw capacity in parity overhead. For less performance-sensitive applications, such as archiving,
larger disk groups are preferable.
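The trade-off in the two bullets above can be sketched as follows. The disk-group layouts are illustrative; only the per-group write queue depth limit of 100 comes from the guide, and the capacity figures assume equal-size disks:

```python
# Sketch: more small disk groups raise the pool's aggregate write queue
# depth, at the cost of more disks spent on parity.

QUEUE_DEPTH_PER_GROUP = 100  # per-disk-group write queue depth limit


def pool_stats(groups):
    """groups: list of (data_disks, parity_disks) tuples, one per disk group."""
    return {
        "total_disks": sum(d + p for d, p in groups),
        "usable_data_disks": sum(d for d, _ in groups),
        "aggregate_queue_depth": QUEUE_DEPTH_PER_GROUP * len(groups),
    }


# 24 disks as two 10+2 RAID-6 groups: more usable capacity.
print(pool_stats([(10, 2), (10, 2)]))
# {'total_disks': 24, 'usable_data_disks': 20, 'aggregate_queue_depth': 200}

# The same 24 disks as four 4+2 RAID-6 groups: double the aggregate
# queue depth, but four more disks consumed by parity.
print(pool_stats([(4, 2)] * 4))
# {'total_disks': 24, 'usable_data_disks': 16, 'aggregate_queue_depth': 400}
```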
Tier setup
In general, it is best to have two tiers instead of three tiers. The highest tier fills almost completely before the lowest tier is used:
the highest tier must be 95% full before the controller evicts cold pages to a lower tier to make room for incoming writes.
Typically, you should use tiers with SSDs and 10K/15K disks, or tiers with SSDs and 7K disks. An exception may be if you must use both
SSDs and faster spinning disks to reach the right combination of price and performance but cannot meet your capacity needs without the
7K disks; this should be rare.
Multipath configuration
ME4 Series storage systems comply with the SCSI-3 standard for Asymmetrical Logical Unit Access (ALUA).
ALUA-compliant storage systems provide optimal and non-optimal path information to the host during device discovery, but the
operating system must be directed to use ALUA. You can use the following procedures to direct Windows and Linux systems to use ALUA.
Use one of the following procedures to enable MPIO.
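As an illustrative sketch for Linux hosts that use device-mapper-multipath, an ALUA-aware device stanza in /etc/multipath.conf typically resembles the following. The vendor and product strings shown here are assumptions and must match the array's actual SCSI inquiry data; the authoritative settings are the ones given in the procedures referenced above.

```
device {
    vendor               "DellEMC"        # assumption: verify with your array's inquiry data
    product              "ME4"            # assumption: verify with your array's inquiry data
    path_grouping_policy "group_by_prio"  # group paths by ALUA priority
    prio                 "alua"           # rank paths using ALUA state
    path_checker         "tur"            # Test Unit Ready path health check
    failback             "immediate"      # return to optimal paths when they recover
    no_path_retry        30               # queue I/O briefly if all paths fail
}
```

With group_by_prio and the alua prioritizer, I/O is sent down the optimal paths reported by the array, and non-optimal paths are used only when the optimal group fails.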
Best practices