© IBM Copyright, 2012 Version: January 26, 2012
www.ibm.com/support/techdocs
Summary of Best Practices for Storage Area Networks
4.4 DS3000, DS4000 and DS5000 Storage Systems
With DS3000, DS4000, or DS5000 systems, the number of physical drives to place in
an array is always a compromise. Striping across a larger number of drives can
improve performance for transaction-based workloads, but it can hurt sequential
workloads. A common mistake when selecting array width is to focus only on the
capability of a single array to handle various workloads; the aggregate throughput
requirement of the entire storage server is also at play in this decision. Because only
one controller of a DS3/4/5000 actively accesses a given array, a large number of
physical disks in one array can create a workload imbalance between the two
controllers.
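The controller-ownership effect can be illustrated with a simple sketch. This is not an IBM tool, only an assumed model: each array is owned by one of the two controllers (A or B), so the disks behind each controller approximate its share of the work.

```python
# Illustrative sketch (assumed model, not an IBM utility): each DS3/4/5000
# array is owned by exactly one controller, so per-controller load roughly
# tracks how the physical disks are grouped into arrays.
def controller_disk_load(arrays):
    """arrays: list of (owning_controller, physical_disk_count) tuples."""
    load = {"A": 0, "B": 0}
    for controller, disks in arrays:
        load[controller] += disks
    return load

# One wide 16-disk array vs. four 4-disk arrays alternated across controllers:
wide = controller_disk_load([("A", 16)])
spread = controller_disk_load([("A", 4), ("B", 4), ("A", 4), ("B", 4)])
print(wide)    # all disks behind controller A; controller B sits idle
print(spread)  # disks split evenly between the two controllers
```

The same total spindle count delivers very different aggregate throughput depending on whether both controllers can participate.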
An additional consideration when selecting array width is its effect on rebuild time
and availability. More disks in an array increase the rebuild time after a disk failure,
which can degrade performance while the rebuild runs. More disks also increase the
probability that a second drive fails in the same array before the rebuild of the first
failure completes, an inherent exposure of the RAID 5 architecture. A RAID 6 array,
by contrast, can tolerate a second drive failure.
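A back-of-envelope model makes the exposure concrete. The MTBF figure and rebuild times below are assumed for illustration only, and the model assumes independent drive failures at a constant rate, which understates correlated failures in practice.

```python
# Illustrative model (assumed numbers, not IBM specifications): probability
# that at least one of the n-1 surviving drives fails during the rebuild
# window, assuming independent failures at a constant rate.
def second_failure_probability(n_drives, rebuild_hours, mtbf_hours=1_000_000):
    per_drive = rebuild_hours / mtbf_hours   # chance one drive fails in the window
    survivors = n_drives - 1                 # drives still at risk during rebuild
    return 1 - (1 - per_drive) ** survivors

# A wider array has more survivors at risk AND a longer rebuild window,
# so the two effects compound:
print(second_failure_probability(5, rebuild_hours=10))    # narrow array
print(second_failure_probability(16, rebuild_hours=30))   # wide array, larger
```

For a RAID 5 array this probability is the window of data-loss exposure; RAID 6 survives that second failure, which is why it is preferred for wide arrays.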
The storage system automatically creates a logical drive (logical drive ID 31) for
each attached host. This access drive is used for in-band management, so if the DS
Storage System will not be managed from that host, it can be deleted. The access
drive counts toward the maximum number of accessible volumes per host, so
deleting it frees one more logical drive for that host to use. If a Linux or AIX based
server is connected to a DS Storage System, the mapping of this access logical drive
should be deleted.
When storage resources are used by an SVC cluster, best practice guidelines
suggest building RAID arrays from either four or eight data drives plus the parity
drive(s) required by the RAID level.
Whenever DS3/4/5000 storage systems are configured for data replication, the ports
connecting the source and target storage systems should be dedicated to data
mirroring only. No other traffic, such as SVC or host connections, should share the
mirror ports.
With direct-attached hosts, care is often taken to align device data partitions to
physical drive boundaries within the storage controller. For the SVC this