
In Serviceguard A.11.16 and later, these tasks can be performed by non-root users with the
appropriate privileges. See Controlling Access to the Cluster (page 143) for more information about
configuring access.
You can use Serviceguard Manager or the Serviceguard command line to start or stop the cluster,
or to add or halt nodes. Starting the cluster means running the cluster daemon on one or more of
the nodes in a cluster. Which Serviceguard command you use to start the cluster depends on whether all nodes are currently down (that is, no cluster daemons are running) or you are starting the cluster daemon on an individual node.
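If you are not sure whether any cluster daemons are running, one way to check the current state of the cluster is the cmviewcl command; for example:
cmviewcl -v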
Note the distinction that is made in this chapter between adding an already configured node to
the cluster and adding a new node to the cluster configuration. An already configured node is one
that is already entered in the cluster configuration file; a new node must first be added to that file
by modifying the cluster configuration.
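If you are not sure whether a node is already entered in the cluster configuration, one way to check is to retrieve the current configuration with cmgetconf and look at the NODE_NAME entries. In this sketch, the cluster name cluster1 and the output file name are examples only:
cmgetconf -c cluster1 /tmp/cluster1.conf
grep NODE_NAME /tmp/cluster1.conf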
NOTE: Manually starting or halting the cluster or individual nodes does not require access to the
quorum server, if one is configured. The quorum server is only used when tie-breaking is needed
following a cluster partition.
Starting the Cluster When All Nodes Are Down
You can use Serviceguard Manager, or the cmruncl command as described in this section, to
start the cluster when all cluster nodes are down. The command options described below control
how the cluster is started under specific circumstances.
The -v option produces the most informative output. The following starts all nodes configured in
the cluster without a connectivity check:
cmruncl -v
The -w option causes cmruncl to perform a full check of LAN connectivity among all the nodes
of the cluster. Omitting this option will allow the cluster to start more quickly but will not test
connectivity. The following starts all nodes configured in the cluster with a connectivity check:
cmruncl -v -w
The -n option specifies a particular group of nodes. Without this option, all nodes will be started.
The following example starts up the locally configured cluster only on ftsys9 and ftsys10. (This
form of the command should only be used when you are sure that the cluster is not already running
on any node.)
cmruncl -v -n ftsys9 -n ftsys10
CAUTION: HP Serviceguard cannot guarantee data integrity if you try to start a cluster with the
cmruncl -n command while a subset of the cluster's nodes are already running a cluster. If the
network connection is down between nodes, using cmruncl -n might result in a second cluster
forming, and this second cluster might start up the same applications that are already running on
the other cluster. The result could be two applications overwriting each other's data on the disks.
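For this reason, before using cmruncl -n it is a good idea to run cmviewcl on each configured node and confirm that no cluster is reported as running; for example:
cmviewcl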
Adding Previously Configured Nodes to a Running Cluster
You can use Serviceguard Manager, or HP Serviceguard commands as shown, to bring a configured
node up within a running cluster.
Use the cmrunnode command to add one or more nodes to an already running cluster. Any node
you add must already be a part of the cluster configuration. The following example adds node
ftsys8 to the cluster that was just started with only nodes ftsys9 and ftsys10. The -v (verbose)
option prints out all the messages:
cmrunnode -v ftsys8
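When the command completes, you can confirm that the node has joined the cluster; for example, with cmviewcl (here the -n option limits the output to the named node):
cmviewcl -n ftsys8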
By default, cmrunnode will do network validation, making sure the actual network setup matches
the configured network setup. This is the recommended method. If you have recently checked the