network (LAN) component. In the event that one component fails, the redundant
component takes over. Serviceguard and other high availability subsystems coordinate
the transfer between components.
A Serviceguard cluster is a networked grouping of HP 9000 or HP Integrity servers
(or both), known as nodes, having sufficient redundancy of software and hardware
that a single point of failure will not significantly disrupt service.
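To make the cluster concept concrete, the excerpt below sketches the general shape of a cluster configuration file of the kind cmquerycl generates. The cluster name, node names, device files, and IP addresses are placeholders, and the exact parameter set varies by Serviceguard release; this is an illustration, not a template to copy.

    # Cluster-wide parameters (names and values are placeholders)
    CLUSTER_NAME              cluster1
    # Volume group used as the cluster lock (tie-breaker) in a two-node cluster
    FIRST_CLUSTER_LOCK_VG     /dev/vglock
    # First node and its two redundant heartbeat LANs
    NODE_NAME                 node1
      NETWORK_INTERFACE       lan0
        HEARTBEAT_IP          192.168.1.1
      NETWORK_INTERFACE       lan1
        HEARTBEAT_IP          192.168.2.1
      FIRST_CLUSTER_LOCK_PV   /dev/dsk/c0t1d0
    # Second node, cabled to the same redundant LANs
    NODE_NAME                 node2
      NETWORK_INTERFACE       lan0
        HEARTBEAT_IP          192.168.1.2
      NETWORK_INTERFACE       lan1
        HEARTBEAT_IP          192.168.2.2
      FIRST_CLUSTER_LOCK_PV   /dev/dsk/c0t1d0
    MAX_CONFIGURED_PACKAGES   10

The two NETWORK_INTERFACE entries per node, each carrying a HEARTBEAT_IP, reflect the redundant LAN hardware described later in this section.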
A package groups application services (individual HP-UX processes) together. There
are failover packages, system multi-node packages, and multi-node packages:
• The typical high availability package is a failover package. It is usually configured
to run on several nodes in the cluster, but runs on only one node at a time. If a
service, node, network, or other package resource fails on the node where the
package is running, Serviceguard can automatically transfer control of the package
to another cluster node, allowing services to remain available with minimal
interruption. (A configuration sketch follows this list.)
• There are also packages that run on several cluster nodes at once, and do not fail
over. These are called system multi-node packages and multi-node packages.
Examples are the packages HP supplies for use with the Veritas Cluster Volume
Manager and Veritas Cluster File System from Symantec (on HP-UX releases that
support them; see “About Veritas CFS and CVM from Symantec” (page 32)).
A system multi-node package must run on all nodes that are active in the cluster.
If it fails on one active node, that node halts. System multi-node packages are
supported only for HP-supplied applications.
A multi-node package can be configured to run on one or more cluster nodes. It
is considered UP as long as it is running on any of its configured nodes.
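As an illustration of how a failover package is described to Serviceguard, the sketch below uses the modular package configuration format. The package name, node names, subnet, relocatable IP address, and service command are hypothetical, and only a few of the available parameters are shown.

    # Package identity and type (placeholder names)
    package_name              pkgA
    package_type              failover
    # Nodes on which the package is allowed to run, in order of preference
    node_name                 node1
    node_name                 node2
    # Start the package automatically when the cluster forms
    auto_run                  yes
    failover_policy           configured_node
    failback_policy           manual
    # Relocatable IP address that moves with the package (hypothetical values)
    ip_subnet                 192.168.1.0
    ip_address                192.168.1.10
    # Application service monitored by the package manager (hypothetical command)
    service_name              pkgA_service
    service_cmd               "/opt/myapp/bin/run_app"
    service_restart           none

A multi-node package would specify package_type multi_node instead and is considered UP while it runs on any of its configured nodes. A file like this is typically generated with cmmakepkg, verified with cmcheckconf, and distributed to the cluster with cmapplyconf.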
In Figure 1-1, node 1 (one of two SPUs) is running failover package A, and node 2 is
running package B. Each package has a separate group of disks associated with it,
containing data needed by the package's applications, and a mirror copy of the data.
Note that both nodes are physically connected to both groups of mirrored disks. In this
example, however, only one node at a time may access the data for a given group of
disks. In the figure, node 1 is shown with exclusive access to the top two disks (solid
line), and node 2 is shown as connected without access to the top disks (dotted line).
Similarly, node 2 is shown with exclusive access to the bottom two disks (solid line),
and node 1 is shown as connected without access to the bottom disks (dotted line).
Mirror copies of data provide redundancy in case of disk failures. In addition, a total
of four data buses are shown for the disks that are connected to node 1 and node 2.
This configuration provides the maximum redundancy and also gives optimal I/O
performance, since each package is using different buses.
Note that the network hardware is cabled to provide redundant LAN interfaces on
each node. Serviceguard uses TCP/IP network services for reliable communication
among nodes in the cluster, including the transmission of heartbeat messages, signals
from each functioning node which are central to the operation of the cluster. TCP/IP