
If you have a two-node cluster, you must configure a cluster lock. If communication between the
two nodes is lost, the node that obtains the cluster lock takes over the cluster and the other
node halts (system reset). Without a cluster lock, a failure of either node in the cluster causes
the other node, and therefore the cluster, to halt. Note also that if the cluster lock fails during
an attempt to acquire it, the cluster will halt.
Lock Requirements
A one-node cluster does not require a cluster lock. A two-node cluster requires a cluster lock. In
clusters larger than two nodes, a cluster lock is strongly recommended. If you have a cluster with
more than four nodes, use a Quorum Server; a cluster lock disk is not allowed for clusters of that
size.
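For a cluster that uses a Quorum Server, you identify the server in the cluster configuration file.
The entries below are a minimal sketch; the host name and timing values are illustrative, not taken
from this manual:

    # Quorum Server entries in the cluster configuration file
    # (host name is illustrative; timing values are in microseconds)
    QS_HOST                  qshost.example.com
    QS_POLLING_INTERVAL      300000000
    QS_TIMEOUT_EXTENSION     2000000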
Use of a Lock LUN or LVM Lock Disk as the Cluster Lock
A lock disk or lock LUN can be used for clusters up to and including four nodes in size.
A cluster lock disk is a special area on an LVM disk located in a volume group that is shareable
by all nodes in the cluster. Similarly, a cluster lock LUN is a small dedicated LUN, connected to
all nodes in the cluster, that contains the lock information.
In an LVM configuration, a disk used as a lock disk is not dedicated for use as the cluster lock; the
disk can be employed as part of a normal volume group with user data on it. A lock LUN, on the
other hand, is dedicated to the cluster lock; you cannot store any other data on it.
You specify the cluster lock volume group and physical volume, or the cluster lock LUN, in the
cluster configuration file.
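For example, a lock-disk configuration might include entries like the following; the volume group,
node, and device file names are illustrative. A lock LUN configuration would instead use the
CLUSTER_LOCK_LUN parameter in each node's entry.

    # Cluster-wide lock volume group (lock disk configuration)
    FIRST_CLUSTER_LOCK_VG    /dev/vglock

    NODE_NAME                ftsys9
      NETWORK_INTERFACE      lan0
      # Physical volume containing the lock area, as seen from this node
      FIRST_CLUSTER_LOCK_PV  /dev/disk/disk4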
When a node obtains the cluster lock, this area is marked so that other nodes will recognize the
lock as “taken.”
The operation of the lock disk or lock LUN is shown in Figure 11.
Figure 11 Lock Disk or Lock LUN Operation
Serviceguard periodically checks the health of the lock disk or LUN and writes messages to the
syslog file if the device fails the health check. This file should be monitored for early detection
of lock disk problems.
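As a simple spot check, you can scan the syslog file from the command line. The sketch below
assumes the default HP-UX log path and matches on the phrase "cluster lock"; the exact message
text may vary, so adjust the pattern to what appears on your system.

    # Look for lock disk or lock LUN health messages
    # (default HP-UX syslog path; yours may differ)
    grep -i "cluster lock" /var/adm/syslog/syslog.log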
If you are using a lock disk, you can choose either a single or a dual lock disk configuration,
depending on the kind of high availability configuration you are building. A single lock disk is
recommended where possible. With both single and dual locks, however, it is important that the
cluster lock be available even if the power circuit to one node fails; thus, the choice of a lock
configuration depends partly on the number of power circuits available. Regardless of your choice,
all nodes in the cluster must have access to the cluster lock to maintain high availability.