Managing Serviceguard A.11.20, March 2013

Veritas Cluster Volume Manager (CVM)
NOTE: Check the Serviceguard/SGeRAC/SMS/Serviceguard Manager Plug-in Compatibility
and Feature Matrix and the latest Release Notes for your version of Serviceguard for up-to-date
information on CVM support: http://www.hp.com/go/hpux-serviceguard-docs.
You may choose to configure cluster storage with the Veritas Cluster Volume Manager (CVM)
instead of the Veritas Volume Manager (VxVM). Base-VxVM provides some basic cluster features
when Serviceguard is installed, but it does not support software mirroring, dynamic multipathing
(for active/active storage devices), or the numerous other features that require additional licenses.
The VxVM Full Product and CVM are enhanced versions of the VxVM volume manager specifically
designed for cluster use. When installed with the Veritas Volume Manager, the CVM add-on product
provides most of the enhanced VxVM features in a clustered environment. CVM is truly cluster-aware,
obtaining information about cluster membership from Serviceguard directly.
Cluster information is provided via a special system multi-node package, which runs on all nodes
in the cluster. The cluster must be up and must be running this package before you can configure
VxVM disk groups for use with CVM. Disk groups must be created from the CVM Master node.
The Veritas CVM package for CVM version 4.1 and later is named SG-CFS-pkg.
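As one way to confirm the prerequisite above, you can check that the system multi-node package is up on all nodes before creating disk groups. A minimal sketch using the standard Serviceguard `cmviewcl` command (the package name SG-CFS-pkg is taken from the text; the command must be run on a cluster node):

```shell
# Verify the cluster is up and the CVM system multi-node package
# (SG-CFS-pkg, for CVM 4.1 and later) is running on all nodes.
cmviewcl -v -p SG-CFS-pkg
```

If the package is not shown as up on every node, CVM disk group configuration will fail.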
CVM allows you to activate storage on one node at a time, or to perform write activation on one
node and read activation on another node at the same time (for example, to allow backups).
CVM provides full mirroring and dynamic multipathing (DMP) for clusters.
CVM supports concurrent storage read/write access between multiple nodes by applications that
can manage read/write access contention, such as Oracle Real Application Clusters (RAC).
CVM 4.1 and later can be used with Veritas Cluster File System (CFS) in Serviceguard. Several of
the HP Serviceguard Storage Management Suite bundles include features to enable both CVM
and CFS.
CVM can be used in clusters that:
•   run applications that require fast disk group activation after package failover;
•   require storage activation on more than one node at a time, for example to perform a backup
    from one node while a package using the volume is active on another node. In this case, the
    package using the disk group would have the disk group active in exclusive write mode while
    the node that is doing the backup would have the disk group active in shared read mode;
•   run applications, such as Oracle RAC, that require concurrent storage read/write access
    between multiple nodes.
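The backup scenario above maps directly onto CVM activation modes, which are set per disk group
on each node with `vxdg set activation`. A sketch, assuming a hypothetical disk group named
dg_pkg1:

```shell
# On the node running the failover package: activate the shared
# disk group in exclusive-write mode.
vxdg -g dg_pkg1 set activation=exclusivewrite

# On the node performing the backup: activate the same disk group
# in shared-read mode at the same time.
vxdg -g dg_pkg1 set activation=sharedread
```

Only certain combinations of modes are permitted concurrently; for example, exclusive-write on
one node is compatible with shared-read elsewhere, but not with a second write activation.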
For heartbeat requirements, see “Redundant Heartbeat Subnets” (page 87).
Shared storage devices must be connected to all nodes in the cluster, whether or not the node
accesses data on the device.
Cluster Startup Time with CVM
All shared disk groups (DGs) are imported when the system multi-node package's control script
starts up CVM. Depending on the number of DGs, the number of nodes, and their configuration
(number of disks, volumes, etc.), this can take some time (the current timeout value for this package
is 3 minutes, but for larger configurations it may have to be increased). Any failover package that
uses a CVM DG will not start until the system multi-node package is up. Note that this delay does
not affect package failover time; it is a one-time overhead cost at cluster startup.
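The requirement that failover packages wait for the system multi-node package is expressed as a
package dependency. A sketch of the relevant lines in a modular failover package configuration
file (the dependency name shown is illustrative):

```
dependency_name        SG-CFS-pkg
dependency_condition   SG-CFS-pkg = up
dependency_location    same_node
```

With this dependency in place, Serviceguard will not start the failover package on a node until
SG-CFS-pkg is up on that node.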
Propagation of Disk Groups with CVM
CVM disk groups are created on one cluster node, known as the CVM master node. CVM verifies
that each node can see each disk and will not allow invalid DGs to be created.
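A sketch of the corresponding steps: identify the master node with `vxdctl`, then initialize a
shared disk group there with `vxdg -s init` (the disk group name logdata and the device name
below are hypothetical examples):

```shell
# On any cluster node: report this node's CVM role.
# The master node reports "cluster active - MASTER".
vxdctl -c mode

# On the master node only: initialize a shared (clusterwide)
# disk group on the given disk.
vxdg -s init logdata c4t0d6
```

Running `vxdg -s init` on a node other than the master will fail; move to the node that
`vxdctl -c mode` reports as MASTER before creating shared disk groups.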
86 Understanding Serviceguard Software Components