Cluster information is provided via a special system multi-node package,
which runs on all nodes in the cluster. The cluster must be up and must
be running this package in order to configure VxVM disk groups for use
with CVM. The VERITAS CVM package for version 3.5 is named
VxVM-CVM-pkg; the package for CVM version 4.1 is named SG-CFS-pkg.
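For example, you can confirm that the cluster is up and that the appropriate system multi-node package is running with cmviewcl; the package names below are the ones given above for CVM 3.5 and CVM 4.1:

   # CVM 3.5
   cmviewcl -v -p VxVM-CVM-pkg
   # CVM 4.1
   cmviewcl -v -p SG-CFS-pkg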
CVM allows you to activate storage on more than one node at a time, or on one node at a time; you can also perform write activation on one node and read activation on another node at the same time (for example, to allow backups). CVM provides full mirroring and dynamic multipathing (DMP) for clusters. If CVM is being used, the cluster must be up, and disk groups must be created from the CVM master node.
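As a sketch (the disk group name logdata and the device names are illustrative only), you would first identify the CVM master node, then create the shared disk group from that node:

   # Run on any node to find out whether it is the CVM master
   vxdctl -c mode
   # Run on the master node to create a shared (cluster-wide) disk group
   vxdg -s init logdata c0t1d0 c1t2d0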
CVM supports concurrent storage read/write access between multiple nodes by applications that can manage read/write access contention, such as Oracle Real Application Clusters (RAC).
CVM 4.1 can be used with VERITAS Cluster File System (CFS) in
Serviceguard. Several of the HP Serviceguard Storage Management
Suite bundles include features to enable both CVM and CFS.
CVM can be used in clusters that:

• will run applications that require fast disk group activation after package failover.

• require activation on more than one node at a time, for example to perform a backup from one node while a package using the volume is active on another node. In this case, the package using the disk group would have the disk group active in exclusive write mode while the node that is doing the backup would have the disk group active in shared read mode (see the sketch after this list).

• require activation on more than one node at the same time, for example Oracle RAC.
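The following sketch illustrates these activation modes (the disk group name logdata is illustrative; in a Serviceguard package, activation is typically handled by the package control script rather than entered by hand):

   # On the node running the package: exclusive write access
   vxdg -g logdata set activation=exclusivewrite
   # On the node performing the backup: shared read access
   vxdg -g logdata set activation=sharedread
   # For applications such as Oracle RAC that manage their own contention
   vxdg -g logdata set activation=sharedwrite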
The heartbeat is configured differently for CVM 3.5 and CVM 4.1. See “Redundant Heartbeat Subnet Required” on page 121.
At this release, CVM is supported on clusters of up to 8 nodes. Shared storage devices must be connected to all nodes in the cluster, whether or not a given node accesses data on the device.