Veritas Volume Manager 4.1 Administrator's Guide (HP-UX 11i v3, February 2007)
For additional information about using the Dynamic Multipathing (DMP) feature of
VxVM in a clustered environment, see “DMP in a Clustered Environment” on page 105.
Overview of Cluster Volume Management
In recent years, tightly coupled cluster systems have become increasingly popular in the
realm of enterprise-scale mission-critical data processing. The primary advantage of
clusters is protection against hardware failure. Should the primary node fail or otherwise
become unavailable, applications can continue to run by transferring their execution to
standby nodes in the cluster. This ability to provide continuous availability of service by
switching to redundant hardware is commonly termed failover.
Another major advantage of clustered systems is their ability to reduce contention for
system resources caused by activities such as backup, decision support and report
generation. Businesses can derive enhanced value from their investment in cluster
systems by performing such operations on lightly loaded nodes in the cluster rather than
on the heavily loaded nodes that answer requests for service. This ability to perform some
operations on the lightly loaded nodes is commonly termed load balancing.
The cluster functionality of VxVM works together with the cluster monitor daemon that is
provided by VCS or by the host operating system. When configured correctly, the cluster
monitor informs VxVM of changes in cluster membership. Each node starts up
independently and has its own cluster monitor plus its own copies of the operating
system and VxVM with support for cluster functionality. When a node joins a cluster, it
gains access to shared disk groups and volumes. When a node leaves a cluster, it no longer
has access to these shared objects. A node joins a cluster when the cluster monitor is
started on that node.
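For example, once the cluster monitor has started and a node has joined the cluster, that node's role and its view of shared disk groups can be checked from the command line. The following is a representative sketch only: the node name node01, the disk group names localdg and mydg, and the exact output format are illustrative and will differ on a real configuration.

To determine whether the node has joined the cluster as the master or as a slave:

# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: node01

To list disk groups (cluster-shareable disk groups are flagged as shared in the STATE column):

# vxdg list
NAME         STATE                ID
localdg      enabled              1139024310.18.node01
mydg         enabled,shared       1139024522.25.node01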
Caution: The cluster functionality of VxVM is supported only when used in conjunction
with a cluster monitor that has been configured correctly to work with VxVM.
“Example of a 4-Node Cluster” on page 349 illustrates a simple cluster arrangement
consisting of four nodes with similar or identical hardware characteristics (CPUs, RAM
and host adapters), and configured with identical software (including the operating
system). The nodes are fully connected by a private network and they are also separately
connected to shared external storage (either disk arrays or JBODs: just a bunch of disks) via
SCSI or Fibre Channel.
Note: In this example, each node has two independent paths to the disks, which are
configured in one or more cluster-shareable disk groups. Multiple paths provide
resilience against failure of one of the paths, but this is not a requirement for cluster
configuration. Disks may also be connected by single paths.
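As an illustration, the paths that DMP has discovered to a particular disk can be inspected from any node. The device name c2t0d0, the controller names, and the output layout below are examples only; substitute the names reported on your own system, and see "DMP in a Clustered Environment" on page 105 for more information.

# vxdmpadm getsubpaths dmpnodename=c2t0d0
NAME       STATE      PATH-TYPE   CTLR-NAME   ENCLR-TYPE   ENCLR-NAME
c2t0d0     ENABLED    -           c2          Disk         disk
c4t0d0     ENABLED    -           c4          Disk         disk

Two paths in the ENABLED state indicate that the disk remains accessible if either path fails; with a single-path connection, only one entry is listed.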