Veritas Volume Manager 4.1 Administrator's Guide (HP-UX 11i v3, February 2007)

Chapter 13, Administering Cluster Functionality
Overview of Cluster Volume Management
The global detach policy is recommended in the following cases:
When an array is seen by DMP as Active/Passive. The local detach policy causes
unpredictable behavior with Active/Passive arrays.
For clusters with four or fewer nodes. With a small number of nodes in a cluster, it is
preferable to keep all nodes actively using the volumes, and to keep the applications
running on all the nodes.
If only non-mirrored, small mirrored, or hardware mirrored volumes are configured.
This avoids the system overhead of the extra messaging that is required by the local
detach policy.
The local detach policy may be suitable in the following cases:
When large mirrored volumes are configured. Resynchronizing a reattached plex can
degrade system performance. The local detach policy can avoid the need to detach the
plex at all. (Alternatively, the dirty region logging (DRL) feature can be used to reduce
the amount of resynchronization that is required.)
For clusters with more than four nodes. Keeping an application running on a
particular node is less critical when there are many nodes in a cluster. It may be
possible to configure the cluster management software to move an application to a
node that has access to the volumes. In addition, load balancing may be able to move
applications to a different volume from the one that experienced the I/O problem.
This preserves data redundancy, and other nodes may still be able to perform I/O
to and from the volumes on the disk.
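For the large-mirrored-volume case above, dirty region logging can be enabled when the volume is created, or added to an existing volume afterwards. The following is a sketch using the documented vxassist logtype=drl attribute; the disk group name group1 and volume name vol1 are illustrative assumptions:

```shell
# Create a mirrored volume with a dirty region log (DRL) so that only
# dirty regions need to be resynchronized after a plex is reattached.
vxassist -g group1 make vol1 10g layout=mirror logtype=drl

# Alternatively, add a DRL log to an existing mirrored volume.
vxassist -g group1 addlog vol1 logtype=drl
```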
If you have a critical disk group that you do not want to be disabled if the master node
loses access to the copies of the logs, set the disk group failure policy to leave. This
prevents an I/O failure on the master node from disabling the disk group. However,
critical applications running on the master node fail if they lose access to the other shared
disk groups. In such a case, it may be preferable to set the policy to dgdisable, and to
allow the disk group to be disabled.
The default settings for the detach and failure policies are global and dgdisable
respectively. You can use the vxdg command to change both the detach and failure
policies on a shared disk group, as shown in this example:
# vxdg -g diskgroup set diskdetpolicy=local dgfailpolicy=leave
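The two policies can also be set one at a time, and the resulting settings inspected with vxdg list. This is a sketch assuming a hypothetical shared disk group named group1; the exact field names shown in the listing may vary between VxVM releases:

```shell
# Set only the detach policy, leaving the failure policy unchanged
# (group1 is a hypothetical shared disk group name).
vxdg -g group1 set diskdetpolicy=local

# Set only the failure policy.
vxdg -g group1 set dgfailpolicy=leave

# Display the full record for the disk group; the detach-policy and
# dg-fail-policy fields should now report "local" and "leave"
# (field names may differ between releases).
vxdg list group1
```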
Effect of Disk Connectivity on Cluster Reconfiguration
The detach policy, previous I/O errors, and access to disks are not considered when a new
master node is chosen. When the master node leaves a cluster, the node that takes over as
master of the cluster may already have seen I/O failures for one or more disks. Under the
local detach policy, only one node was affected before the reconfiguration; however, when
that node becomes the master, the failure is treated as a failure seen by the master node,
and the detach and failure policies described above are applied accordingly.