I/O requests travel through multiple layers of software and hardware on their way from the application that issued the initial request to the storage system and back. Load-balancing multi-pathing software and volume managers are two of the layers that need to be configured carefully to allow those read-ahead mechanisms to work successfully with sequential read operations. The key during configuration is not to divide the sequential read requests coming from a layer above into multiple smaller requests, which might then be too small for the array to recognize or handle as sequential I/O, before sending them down to the next layer. The following two paragraphs illustrate this based on the example configuration from Figure 1:
If you have two active paths to a LUN (e.g. from Node-A to LUN-A)
Node-A -> HBA-1 -> FC-1 -> Ctr-A; port-1 -> LUN-A (FC cable 1 and 5)
Node-A -> HBA-2 -> FC-2 -> Ctr-B; port-1 -> LUN-A (FC cable 2 and 6)
you want to choose a multi-pathing load balancing policy that sends a sufficient number of I/Os down one path before it switches to the other path. Each storage system may have a different threshold with regard to how many sequential read requests (I/O blocks) it needs to receive in order to trigger a read-ahead. Symantec DMP, for instance, addresses this with its “balanced path” routing policy, which allows you to configure the number of blocks (DMP_PATHSWITCH_BLKS_SHIFT) sent over one path before switching to the next path. For HP StorageWorks XP 10000/12000 arrays, tests have shown that increasing this parameter from the default of 2 MB to 32 MB improved sequential read performance.
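As a minimal sketch of how such a change might be applied with the VxVM DMP administration command (assuming DMP from the Serviceguard Storage Management Suite; the enclosure name xp12k0 is hypothetical, and the tunable’s default and block unit vary by VxVM release, so consult vxdmpadm(1M) first):

    # List enclosures and the current tunable value
    vxdmpadm listenclosure all
    vxdmpadm gettune dmp_pathswitch_blks_shift

    # Select the balanced path I/O policy for the (hypothetical) XP enclosure
    vxdmpadm setattr enclosure xp12k0 iopolicy=balanced

    # The tunable is a power-of-two shift, not a byte count: raising the
    # contiguous I/O sent down one path from 2 MB to 32 MB is a factor
    # of 16, i.e. an increase of the shift value by 4 (2^4 = 16)
    vxdmpadm settune dmp_pathswitch_blks_shift=<current shift + 4>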
If you have 4 LUNs on different physical media (array groups on HP StorageWorks XP Disk Arrays), you would want to stripe a logical volume over all 4 LUNs with a stripe size that is large enough to allow the array to recognize sequential reads and efficiently use the read-ahead. Setting the stripe size too small would not just prevent the read-ahead; it would convert the sequential I/O into multiple random I/Os. A stripe size of 4 MB proved to be nondestructive to the read-ahead algorithm of the HP StorageWorks XP 10000/12000 arrays.
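For illustration, a striped volume of this kind could be created with VxVM roughly as follows; the disk group appdg, the disk names xpdisk01 through xpdisk04, and the volume size are assumptions, not values from the tested configuration:

    # Stripe a 100 GB volume across 4 LUNs with a 4 MB stripe unit, large
    # enough to keep sequential reads intact for the array's read-ahead
    # (disk group and disk names are hypothetical)
    vxassist -g appdg make appvol 100g layout=stripe ncol=4 stripeunit=4m \
        xpdisk01 xpdisk02 xpdisk03 xpdisk04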
The implementation of the read-ahead mechanism varies between different types of arrays. It is advisable to research this topic and consider the array-specific characteristics, but only for environments with a predominantly sequential read I/O pattern.
Multi-Pathing solutions on HP-UX 11i v2
HP-UX 11i v2 does not include native multi-pathing functionality as part of the OS; however, a number of choices are available in a Serviceguard cluster to protect against storage path failure. These multi-pathing solutions are either tightly integrated with a volume manager or specific to a storage system. Both types have their pros and cons.
On HP-UX 11i v2 a device special file (DSF) represents a path to a device. Devices that are
connected to the server via multiple paths have multiple device files – one per path.
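For example, a LUN reachable over two HBAs might appear under two DSFs, as in the following abbreviated ioscan listing (hardware paths and device names are invented for illustration):

    ioscan -funC disk
    disk  4  0/2/1/0.1.0.0.0  sdisk  CLAIMED  DEVICE  HP OPEN-V
                /dev/dsk/c4t0d0  /dev/rdsk/c4t0d0
    disk  9  0/5/1/0.2.0.0.0  sdisk  CLAIMED  DEVICE  HP OPEN-V
                /dev/dsk/c9t0d0  /dev/rdsk/c9t0d0

Here both /dev/dsk/c4t0d0 and /dev/dsk/c9t0d0 address the same LUN, one device file per path.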
The following table provides an overview of the multi-pathing solutions available in Serviceguard
clusters on HP-UX 11i v2.
Table 3: Multi-pathing solutions available on HP-UX 11i v2 Serviceguard clusters
                 Automatic   Load balancing    Volume        Storage       Included in
                 path        across multiple   Manager       System        the software
                 failover    active paths      independent   independent   stack (1)

LVM PVLinks      yes         no (2)            no            yes (3)       yes

(1) Software stack in this regard is either the Serviceguard Storage Management Suite or the volume manager included in the OS.
(2) PVLinks do not provide automatic load balancing, but static load balancing can be implemented as shown in the example in Figure 2.
(3) PVLinks are not supported in Serviceguard clusters with active/passive devices.
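As a minimal sketch of a PVLinks setup (device and volume group names are hypothetical): the first path added to a volume group becomes the primary link, and any further path to the same LUN becomes an alternate link that LVM switches to on failure. Static load balancing can be achieved by placing the primary paths of different LUNs or volume groups on alternating HBAs:

    # Two DSFs for the same LUN via different HBAs (names are assumptions)
    pvcreate /dev/rdsk/c4t0d0
    mkdir /dev/vgapp
    mknod /dev/vgapp/group c 64 0x010000
    vgcreate /dev/vgapp /dev/dsk/c4t0d0    # primary path via HBA-1
    vgextend /dev/vgapp /dev/dsk/c9t0d0    # alternate link (PVLink) via HBA-2
    # A second volume group could use its HBA-2 path as the primary link
    # to spread the I/O load statically across both HBAs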