Veritas Volume Manager 4.1 Administrator's Guide (HP-UX 11i v3, February 2007)

How DMP Works
Note The persistent device naming feature, introduced in VxVM 4.1, makes the names of
disk devices (DMP node names) persistent across system reboots. If operating
system-based naming is selected, each disk name is usually set to the name of one of
the paths to the disk. After hardware reconfiguration and a subsequent reboot, the
operating system may generate different names for the paths to the disks. As DDL
assigns persistent disk names using the persistent device name database that was
generated during a previous boot session, a disk name may no longer correspond to
an actual path to the disk. Since DMP device node names are arbitrary, this does not
prevent the disks from being used. See "Regenerating the Persistent Device Name
Database" on page 74 for details on how to regenerate the persistent device name
database and restore the relationship between disk and path names.
See “Discovering and Configuring Newly Added Disk Devices” on page 63 for a
description of how to make newly added disk hardware known to a host system.
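For example, the following commands sketch these procedures (see the referenced
sections for the complete steps; the options required may vary with your
configuration). The first command regenerates the persistent device name database;
either of the other two triggers discovery of disk devices, with vxdisk scandisks new
restricting the scan to newly added hardware:

   # vxddladm assign names
   # vxdctl enable
   # vxdisk scandisks new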
Path Failover Mechanism
The DMP feature of VxVM enhances system reliability when used with multiported disk
arrays. If one connection to the disk array is lost, DMP automatically and dynamically
selects the next available I/O path for I/O requests, without requiring any action from
the administrator.
DMP is also informed when you repair or restore a connection, and when you add or
remove devices after the system has been fully booted (provided that the operating
system recognizes the devices correctly).
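For example, the paths that DMP maintains for a device can be displayed with
vxdmpadm, and failover can be exercised by disabling and then re-enabling a
controller (c4t0d0 and c4 are placeholders for names on your system; while the
controller is disabled, I/O continues over the remaining paths):

   # vxdmpadm getsubpaths dmpnodename=c4t0d0
   # vxdmpadm disable ctlr=c4
   # vxdmpadm enable ctlr=c4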
Load Balancing
By default, DMP uses the balanced path mechanism to provide load balancing across paths
for Active/Active, A/P-C, A/PF-C, and A/PG-C disk arrays. Load balancing maximizes
I/O throughput by using the total bandwidth of all available paths. Sequential I/O
starting within a certain range is sent down the same path in order to benefit from disk
track caching. Large sequential I/O that does not fall within the range is distributed across
the available paths to reduce the overhead on any one path.
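For example, the I/O policy for an enclosure can be displayed and explicitly set to
the balanced policy, with the partitionsize attribute defining the range within which
sequential I/O stays on one path (enc0 and the size of 2048 blocks are illustrative
values only; see vxdmpadm(1M) for the full syntax):

   # vxdmpadm getattr enclosure enc0 iopolicy
   # vxdmpadm setattr enclosure enc0 iopolicy=balanced partitionsize=2048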
For Active/Passive disk arrays, I/O is sent down the primary path. If the primary path
fails, I/O is switched over to the other available primary paths or secondary paths.
Because the continuous transfer of ownership of LUNs from one controller to another
results in severe I/O slowdown, load balancing across paths is not performed for
Active/Passive disk arrays unless they support concurrent I/O.
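To see whether an array is claimed as Active/Active or Active/Passive, and to identify
the primary and secondary paths to a device, commands such as the following can be
used (c4t0d0 is again a placeholder; the output columns vary by array model):

   # vxdmpadm listenclosure all
   # vxdmpadm getsubpaths dmpnodename=c4t0d0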