VERITAS Storage Foundation 4.1 Cluster File System HP Serviceguard Storage Management Suite Extracts, December 2005
Chapter 9, CVM Administration
Overview of Cluster Volume Management
Connectivity Policy of Shared Disk Groups
The nodes in a cluster must always agree on the status of a disk. In particular, if one node
cannot write to a given disk, all nodes must stop accessing that disk before the results of
the write operation are returned to the caller. Therefore, if a node cannot contact a disk, it
should contact another node to check on the disk’s status. If the disk fails, no node can
access it and the nodes can agree to detach the disk. If the disk does not fail, but rather the
access paths from some of the nodes fail, the nodes cannot agree on the status of the disk.
Either of the following policies for resolving this type of discrepancy may be applied:
◆ Under the global connectivity policy, the detach occurs cluster-wide (globally) if any
node in the cluster reports a disk failure. This is the default policy.
◆ Under the local connectivity policy, in the event of disks failing, the failures are
confined to the particular nodes that saw the failure. Note that an attempt is made to
communicate with all nodes in the cluster to ascertain the disks’ usability. If all nodes
report a problem with the disks, a cluster-wide detach occurs.
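The connectivity (disk detach) policy for a shared disk group can be changed with the vxdg set command. A minimal sketch, assuming a shared disk group named mydg (the disk group name is hypothetical):

```shell
# Set the local connectivity (detach) policy on the shared disk group mydg
vxdg -g mydg set diskdetpolicy=local

# Revert to the default global detach policy
vxdg -g mydg set diskdetpolicy=global

# The current policy is shown in the disk group details
vxdg list mydg
```

These commands must be run while the disk group is imported as shared; vxdg list displays the detach policy along with the other disk group attributes.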
The vxdg command can be used to set the disk detach and disk group failure policies. The
dgfailpolicy attribute sets the disk group failure policy in the case that the master node
loses connectivity to the configuration and log copies within a shared disk group. This
attribute requires that the disk group version is 120 or greater. The following policies are
supported:
◆ dgdisable—The master node disables the disk group for all user- or kernel-initiated
transactions. First write and final close fail. This is the default policy.
◆ leave—The master node panics instead of disabling the disk group if a log update fails
for a user or kernel initiated transaction (including first write or final close). If the
failure to access the log copies is global, all nodes panic in turn as they become the
master node.
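The disk group failure policy is set with the same vxdg set syntax. A minimal sketch, again assuming a hypothetical disk group mydg:

```shell
# Check the disk group version; dgfailpolicy requires version 120 or greater
vxdg list mydg | grep version

# If necessary, upgrade the disk group to the current version
vxdg upgrade mydg

# Set the failure policy to leave (master panics rather than disabling the group)
vxdg -g mydg set dgfailpolicy=leave

# Restore the default policy
vxdg -g mydg set dgfailpolicy=dgdisable
```

Note that vxdg upgrade raises the disk group version permanently; older releases of VxVM cannot import the upgraded disk group.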
Limitations of Shared Disk Groups
The cluster functionality of VxVM does not support RAID-5 volumes, or task monitoring
for cluster-shareable disk groups. These features can, however, be used in private disk
groups that are attached to specific nodes of a cluster. Online relayout is supported
provided that it does not involve RAID-5 volumes.
The root disk group (rootdg) cannot be made cluster-shareable. It must be private.
Only raw device access may be performed via the cluster functionality of VxVM. It does
not support shared access to file systems in shared volumes unless the appropriate
software, such as VERITAS Cluster File System, is installed and configured.
If a shared disk group contains unsupported objects (such as RAID-5 volumes), deport it
and then re-import the disk group as private on one of the cluster nodes. Reorganize the
volumes into layouts that are supported for shared disk groups, and then deport and
re-import the disk group as shared.
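The deport/re-import sequence described above can be sketched as follows; the disk group name mydg and volume name vol01, as well as the target striped layout, are illustrative assumptions:

```shell
# On the cluster: deport the shared disk group
vxdg deport mydg

# On one node: re-import it as a private disk group
vxdg import mydg

# Reorganize the unsupported volume into a layout supported for
# shared disk groups (online relayout, e.g. to a striped layout)
vxassist -g mydg relayout vol01 layout=stripe

# Deport the private disk group, then re-import it as shared
# (run the shared import from the master node)
vxdg deport mydg
vxdg -s import mydg
```

The -s option to vxdg import makes the import cluster-shareable; the relayout step may take some time, and its progress can be monitored on the private disk group since task monitoring is not available for cluster-shareable disk groups.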