Veritas Volume Manager 4.1 Administrator's Guide (HP-UX 11i v3, February 2007)
Overview of Cluster Volume Management
The practical implication of this design is that I/O failure on any node changes the
configuration on all nodes. This behavior is known as the global detach policy.
However, in some cases, it is not desirable to have all nodes react in this way to I/O
failure. To address this, an alternate way of responding to I/O failures, known as the local
detach policy, was introduced in release 3.2 of VxVM.
The local detach policy is intended for use with shared mirrored volumes in a cluster. This
policy prevents I/O failure on a single slave node from causing a plex to be detached
cluster-wide, which would require the plex to be resynchronized when it is subsequently
reattached. The local detach policy is available for disk groups that have a version
number of 70 or greater.
Note For small mirrored volumes, non-mirrored volumes, volumes that use hardware
mirrors, and volumes in private disk groups, there is no benefit in configuring the
local detach policy. In most cases, it is recommended that you use the default global
detach policy.
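For example, the detach policy of a shared disk group can be set with the vxdg set
command. The following is a minimal sketch, assuming a hypothetical shared disk group
named mydg that is already at version 70 or greater:

   # vxdg -g mydg set diskdetpolicy=local
   # vxdg list mydg

The vxdg list output reports the disk group version and the current detach policy
setting.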
The detach policies have no effect if the master node loses access to all copies of the
configuration database and logs in a disk group. In releases prior to 4.1, the master node
always disabled the disk group in this situation. Release 4.1 introduces the disk group
failure policy, which allows you to change this behavior for critical disk groups. This policy
is only available for disk groups that have a version number of 120 or greater.
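As an illustration, the failure policy is set in the same way as the detach policy. The
following sketch assumes a hypothetical shared disk group named mydg; a disk group at a
version lower than 120 must first be upgraded with vxdg upgrade (an operation that
cannot be reversed):

   # vxdg upgrade mydg
   # vxdg -g mydg set dgfailpolicy=leave

Here the leave value is intended to make the master node leave the cluster rather than
disable the disk group; the default value, dgdisable, preserves the pre-4.1 behavior of
disabling the disk group.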
The following sections describe the detach and failure policies in greater detail.
Global Detach Policy
Caution The global detach policy must be selected when Dynamic MultiPathing (DMP)
is used to manage multipathing on Active/Passive arrays. This ensures that all
nodes correctly coordinate their use of the active path.
The global detach policy is the traditional and default policy for all nodes in the
configuration. If there is a read or write I/O failure on a slave node, the master node
performs the usual I/O recovery operations to repair the failure, and the plex is detached
cluster-wide. All nodes remain in the cluster and continue to perform I/O, but the
redundancy of the mirrors is reduced. When the problem that caused the I/O failure has
been corrected, the mirrors that were detached must be recovered before the redundancy
of the data can be restored.
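To make this concrete, the following sketch sets the default policy explicitly and shows
one way to recover the detached mirrors once the underlying fault has been repaired,
again assuming a hypothetical shared disk group named mydg:

   # vxdg -g mydg set diskdetpolicy=global
   # vxrecover -b -g mydg

The vxrecover command reattaches the detached plexes and resynchronizes them; the -b
option performs the recovery in the background.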