    This system is a node in a high availability cluster.
    Halting this system may cause applications and services to
    start up on another node in the cluster.
You might wish to include a list of all cluster nodes in this message,
together with additional cluster-specific information.
The /etc/issue and /etc/motd files may be customized to include
cluster-related information.
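For example, you might append a note like the following to /etc/issue; the node and cluster names shown here are placeholders, so substitute the names used at your site:
    This system (ftsys9) is a node in the high availability
    cluster clust1 (nodes: ftsys9, ftsys10). Halting this system
    may cause applications and services to start up on another
    node in the cluster. Contact the cluster administrator
    before shutting it down.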
Managing a Single-Node Cluster
The number of nodes you will need for your Serviceguard cluster depends
on the processing requirements of the applications you want to protect.
You may want to configure a single-node cluster to take advantage of
Serviceguard’s network failure protection.
In a single-node cluster, a cluster lock is not required, since there is no
other node in the cluster. The output from the cmquerycl command
omits the cluster lock information area if there is only one node.
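For example, a command like the following (the node name and output file path are examples only) generates a configuration template for a single-node cluster; the resulting file contains no cluster lock parameters:
    cmquerycl -v -C /etc/cmcluster/clust1.config -n ftsys9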
You still need to have redundant networks, but you do not need to specify
any heartbeat LANs, since there is no other node to send heartbeats to.
In the cluster configuration ASCII file, specify all LANs that you want
Serviceguard to monitor. For LANs that already have IP addresses,
specify them with the STATIONARY_IP keyword, rather than the
HEARTBEAT_IP keyword. For standby LANs, all that is required is the
NETWORK_INTERFACE keyword with the LAN device name.
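Assuming a node with one configured LAN (lan0) and one standby LAN (lan1), the network portion of the ASCII file might look something like this; the interface names and IP address are examples only:
    NODE_NAME           ftsys9
      NETWORK_INTERFACE lan0
        STATIONARY_IP   192.6.143.10
      NETWORK_INTERFACE lan1
      # lan1 is a standby LAN, so no IP address is specified for it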
Single-Node Operation
Single-node operation occurs in a single-node cluster, or in a multi-node
cluster after all but one node has failed or you have halted all but one
node. The remaining node will probably still have applications running on
it. As long as the Serviceguard daemon cmcld is active, other nodes can
rejoin the cluster at a later time.
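To confirm that cmcld is still active on the remaining node, you can check for the process or view cluster status, for example:
    ps -ef | grep cmcld
    cmviewcl -v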
If the Serviceguard daemon fails during single-node operation, it leaves
the single node up with your applications still running. This is different
from the loss of the Serviceguard daemon in a multi-node cluster, which
halts the node with a TOC (Transfer of Control) and causes packages to be
switched to adoptive nodes.
It is not necessary to halt the single node in this scenario, since the
applications are still running and no other node is currently available
for package switching.