Veritas Volume Manager 5.0 Administrator's Guide (September 2006)
396 Administering cluster functionality
Overview of cluster volume management
determine the behavior of the master node in such cases. This policy has two possible
settings, as shown in Table 13-4:
The behavior of the master node under the disk group failure policy is independent of the
setting of the disk detach policy. If the disk group failure policy is set to leave, all nodes
panic in the unlikely case that none of them can access the log copies.
See “Setting the disk group failure policy on a shared disk group” on page 412 for
information on how to use the vxdg command to set the failure policy on a shared
disk group.
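As a brief illustration (the full procedure is covered on page 412), the failure policy is set with the vxdg set command; the disk group name mydg below is a placeholder:

```shell
# Set the disk group failure policy on the shared disk group "mydg"
# so that the master node leaves the disk group as is on log failure
# (nodes panic if no log copies remain accessible):
vxdg -g mydg set dgfailpolicy=leave

# Alternatively, have the master node disable the disk group when it
# loses access to all copies of the logs:
vxdg -g mydg set dgfailpolicy=dgdisable
```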
Guidelines for choosing detach and failure policies
In most cases, it is recommended that you use the global detach policy, particularly if
any of the following conditions apply:
■ When an array is seen by DMP as Active/Passive. The local detach policy causes
unpredictable behavior for Active/Passive arrays.
■ For clusters with four or fewer nodes. With a small number of nodes in a cluster, it is
preferable to keep all nodes actively using the volumes, and to keep the applications
running on all the nodes.
■ If only non-mirrored, small mirrored, or hardware mirrored volumes are configured.
This avoids the system overhead of the extra messaging that is required by the local
detach policy.
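If the guidelines above lead you to choose the global detach policy, it can be applied to a shared disk group with the vxdg set command; the disk group name mydg is a placeholder:

```shell
# Apply the global disk detach policy to the shared disk group "mydg":
vxdg -g mydg set diskdetpolicy=global

# Clusters that match the cases described below may prefer the
# local detach policy instead:
vxdg -g mydg set diskdetpolicy=local
```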
The local detach policy may be suitable in the following cases:
■ When large mirrored volumes are configured. Resynchronizing a reattached plex can
degrade system performance. The local detach policy can avoid the need to detach
the plex at all. (Alternatively, the dirty region logging (DRL) feature can be used to
reduce the amount of resynchronization that is required.)
■ For clusters with more than four nodes. Keeping an application running on a
particular node is less critical when there are many nodes in a cluster. It may be
possible to configure the cluster management software to move an application to a
node that has access to the volumes. In addition, load balancing may be able to move
Table 13-4 Behavior of master node for different failure policies

Type of I/O failure:
    Master node loses access to all copies of the logs.

Leave (dgfailpolicy=leave):
    The master node panics with the message “klog update failed” for a
    failed kernel-initiated transaction, or “cvm config update failed”
    for a failed user-initiated transaction.

Disable (dgfailpolicy=dgdisable):
    The master node disables the disk group.