The next example displays the setting of partitionsize for the enclosure enc0, on
which the balanced I/O policy with a partition size of 2MB has been set:
# vxdmpadm getattr enclosure enc0 partitionsize
ENCLR_NAME          DEFAULT        CURRENT
---------------------------------------
enc0                1024           2048
Specifying the I/O policy
You can use the vxdmpadm setattr command to change the I/O policy for distributing
I/O load across multiple paths to a disk array or enclosure. You can set policies for an
enclosure (for example, HDS01), for all enclosures of a particular type (such as HDS), or
for all enclosures of a particular array type (such as A/A for Active/Active, or A/P for
Active/Passive).
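A policy can be applied at any of these levels with the same form of command. In the following sketch, the enclosure name HDS01, the array name HDS, and the choice of policy are illustrative placeholders; the policy values themselves are described below:
# vxdmpadm setattr enclosure HDS01 iopolicy=adaptive
# vxdmpadm setattr arrayname HDS iopolicy=adaptive
# vxdmpadm setattr arraytype A/A iopolicy=adaptive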
Note: Starting with release 4.1 of VxVM, I/O policies are recorded in the file
/etc/vx/dmppolicy.info, and are persistent across reboots of the system.
Do not edit this file yourself.
The following policies may be set:
adaptive
This policy attempts to maximize overall I/O throughput from/to the disks by
dynamically scheduling I/O on the paths. It is suggested for use where I/O loads can
vary over time. For example, I/O from/to a database may exhibit both long transfers
(table scans) and short transfers (random lookups). The policy is also useful for a
SAN environment where different paths may have a different number of hops. No
further configuration is possible as this policy is automatically managed by DMP.
In this example, the adaptive I/O policy is set for the enclosure enc1:
# vxdmpadm setattr enclosure enc1 iopolicy=adaptive
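If required, the new setting can be confirmed by querying the iopolicy attribute for the same enclosure (shown here as a sketch for enc1):
# vxdmpadm getattr enclosure enc1 iopolicy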
balanced [partitionsize=size]
This policy is designed to optimize the use of caching in disk drives and RAID
controllers. The size of the cache typically ranges from 120KB to 500KB or more,
depending on the characteristics of the particular hardware. During normal operation,
the disks (or LUNs) are logically divided into a number of regions (or partitions), and
I/O from/to a given region is sent on only one of the active paths. Should that path
fail, the workload is automatically redistributed across the remaining paths.
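As a sketch of how this policy might be applied (the enclosure name enc0 and the partition size of 4096 blocks are illustrative), the balanced policy and a partition size can be set together in a single setattr command:
# vxdmpadm setattr enclosure enc0 iopolicy=balanced partitionsize=4096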