Veritas Volume Manager 5.1 SP1 Administrator's Guide (5900-1506, April 2011)

Table 13-3    Cluster behavior under I/O failure to a mirrored volume for
              different disk detach policies

Failure of path to one disk in a volume for a single node
    Local (diskdetpolicy=local): Reads fail only if no plexes remain
    available to the affected node. Writes to the volume fail.
    Global (diskdetpolicy=global): The plex is detached, and I/O from/to
    the volume continues. An I/O error is generated if no plexes remain.

Failure of paths to all disks in a volume for a single node
    Local (diskdetpolicy=local): I/O fails for the affected node.
    Global (diskdetpolicy=global): The plex is detached, and I/O from/to
    the volume continues. An I/O error is generated if no plexes remain.

Failure of one or more disks in a volume for all nodes
    Local (diskdetpolicy=local): The plex is detached, and I/O from/to
    the volume continues. An I/O error is generated if no plexes remain.
    Global (diskdetpolicy=global): The plex is detached, and I/O from/to
    the volume continues. An I/O error is generated if no plexes remain.
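The policy currently in effect for a shared disk group can be checked with the vxdg utility. A minimal sketch, assuming VxVM's standard CLI and using "mydg" as a placeholder disk group name:

```shell
# Show the detach policy recorded for shared disk group "mydg"
# ("mydg" is a placeholder; substitute your disk group name).
vxdg list mydg | grep detach-policy
```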
Guidelines for choosing detach policies
In most cases it is recommended that you use the global detach policy,
particularly if any of the following conditions apply:
- Cluster-wide access to the shared data volumes is more critical than
  retaining data redundancy.
- Only non-mirrored, small mirrored, or hardware-mirrored volumes are
  configured. In these cases, the global detach policy avoids the system
  overhead of the extra messaging that the local detach policy requires.
The local detach policy may be suitable in the following cases:
- Large mirrored volumes are configured. Resynchronizing a reattached
  plex can degrade system performance. The local detach policy can avoid
  the need to detach the plex at all. (Alternatively, the dirty region
  logging (DRL) feature can be used to reduce the amount of
  resynchronization that is required.)
- The cluster has more than four nodes. Keeping an application running on
  a particular node is less critical when there are many nodes in a
  cluster. It may be possible to configure the cluster management software
  to move an application to a node that has access to the volumes. In
  addition, load balancing may be able to move applications to a different
  volume from the one that experienced the I/O problem. This preserves
  data redundancy, and other nodes may still be able to perform I/O
  from/to the volumes on the disk.
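As a sketch of how these choices are applied (the disk group name "mydg" and volume name "vol01" below are placeholders), the detach policy is set per disk group with vxdg set, and a DRL log can be added to a mirrored volume with vxassist addlog:

```shell
# Select the local detach policy for shared disk group "mydg"
# ("mydg" is a placeholder disk group name).
vxdg -g mydg set diskdetpolicy=local

# Alternatively, add a dirty region log (DRL) to volume "vol01"
# to reduce resynchronization after a detached plex is reattached.
vxassist -g mydg addlog vol01 logtype=drl
```

Use diskdetpolicy=global instead to select the global policy for the disk group.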