Installation guide

1. At any node, use the clusvcadm utility to relocate, migrate, or stop each HA service running
on the node that is being deleted from the cluster. For information about using clusvcadm,
refer to Section 8.3, “Managing High-Availability Services”.
2. At the node to be deleted from the cluster, stop the cluster software according to Section 8.1.2,
“Stopping Cluster Software”. For example:
[root@example-01 ~]# service rgmanager stop
Stopping Cluster Service Manager: [ OK ]
[root@example-01 ~]# service gfs2 stop
Unmounting GFS2 filesystem (/mnt/gfsA): [ OK ]
Unmounting GFS2 filesystem (/mnt/gfsB): [ OK ]
[root@example-01 ~]# service clvmd stop
Signaling clvmd to exit [ OK ]
clvmd terminated [ OK ]
[root@example-01 ~]# service cman stop
Stopping cluster:
Leaving fence domain... [ OK ]
Stopping gfs_controld... [ OK ]
Stopping dlm_controld... [ OK ]
Stopping fenced... [ OK ]
Stopping cman... [ OK ]
Waiting for corosync to shutdown: [ OK ]
Unloading kernel modules... [ OK ]
Unmounting configfs... [ OK ]
[root@example-01 ~]#
3. At any node in the cluster, edit the /etc/cluster/cluster.conf file to remove the
clusternode section of the node that is to be deleted. For example, in Example 8.1, “Three-node
Cluster Configuration”, if node-03.example.com is supposed to be removed, then delete
the clusternode section for that node. If removing a node (or nodes) causes the cluster to
be a two-node cluster, you can add the following line to the configuration file to allow a single
node to maintain quorum (for example, if one node fails):
<cman two_node="1" expected_votes="1"/>
Refer to Section 8.2.3, “Examples of Three-Node and Two-Node Configurations” for a
comparison between a three-node and a two-node configuration.
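For reference, a clusternode entry in cluster.conf has the following general shape. The fragment below is illustrative, modeled on node-03.example.com from Example 8.1; the nodeid value and the empty fence block are assumptions, not taken from this procedure. In step 3 you would delete the entire element, from the opening clusternode tag to its matching close:

```xml
<clusternode name="node-03.example.com" nodeid="3">
    <!-- fencing configuration for this node (contents elided here) -->
    <fence>
    </fence>
</clusternode>
```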
4. Update the config_version attribute by incrementing its value (for example, changing
from config_version="2" to config_version="3").
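If you prefer not to edit the file by hand, the version bump can be sketched with sed. The example below operates on a throwaway sample file rather than the live /etc/cluster/cluster.conf, and the cluster name and version numbers are illustrative placeholders:

```shell
# Create a sample file standing in for /etc/cluster/cluster.conf
# (illustrative only; on a real node you would edit the actual file).
printf '<cluster name="mycluster" config_version="2">\n</cluster>\n' > /tmp/cluster.conf.sample

# Increment the config_version attribute from "2" to "3".
sed -i 's/config_version="2"/config_version="3"/' /tmp/cluster.conf.sample

# Confirm the new value.
grep -o 'config_version="3"' /tmp/cluster.conf.sample
```

Note that this simple substitution assumes you know the current version number; on a real system, check the current value first with grep.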
5. Save /etc/cluster/cluster.conf.
6. (Optional) Validate the updated file against the cluster schema (cluster.rng) by
running the ccs_config_validate command. For example:
[root@example-01 ~]# ccs_config_validate
Configuration validates
7. Run the cman_tool version -r command to propagate the configuration to the rest of the
cluster nodes.
8. Verify that the updated configuration file has been propagated.
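One way to verify propagation is to compare the configuration version that cman_tool version reports on each remaining node. The loop below is a sketch only: it assumes passwordless SSH access to the other nodes, uses host names following Example 8.1, and requires a running cluster, so it cannot be run standalone:

```shell
# Sketch: query the reported configuration version on each remaining node
# and confirm it matches the value you set in step 4.
for host in node-01.example.com node-02.example.com; do
    printf '%s: ' "$host"
    ssh "$host" cman_tool version
done
```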
Chapter 8. Managing Red Hat High Availability Add-On With Command Line Tools