HP Serviceguard for Linux cluster setup considerations
Quorum
In general, the algorithm for cluster re-formation requires a cluster quorum of a strict majority (that
is, more than 50%) of the nodes previously running.
Although a cluster quorum of more than 50% is generally required, exactly 50% of the previously
running nodes may re-form as a new cluster provided that the other 50% of the previously running
nodes do not re-form. This is guaranteed by the use of a tie-breaker to choose between the two
equal-sized node groups, allowing one group to form the cluster and forcing the other group to
shut down. This tie-breaker is known as a cluster lock. The cluster lock is implemented either by
means of a lock LUN or a quorum server.
The cluster lock is used as a tie-breaker only for situations in which a running cluster fails and, as
Serviceguard attempts to form a new cluster, the cluster is split into two sub-clusters of equal size.
Each sub-cluster attempts to acquire the cluster lock. The sub-cluster that acquires the lock forms
the new cluster, preventing two sub-clusters from running at the same time. If
the two sub-clusters are of unequal size, the sub-cluster with greater than 50% of the nodes will
form the new cluster, and the cluster lock is not used.
If you have a two-node cluster, you must configure a cluster lock.
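For illustration, the following cluster configuration file excerpt is a minimal sketch of the two
cluster lock choices; the node name, network values, device path, and quorum server hostname are
hypothetical, and the parameter values are examples only (verify parameter names and units against
your Serviceguard release):

    # Cluster lock implemented as a lock LUN: each node entry names the
    # same shared LUN (the device path can differ from node to node).
    NODE_NAME node1
      NETWORK_INTERFACE eth1
        HEARTBEAT_IP 192.168.1.1
      CLUSTER_LOCK_LUN /dev/sdc1

    # Alternative: cluster lock implemented as a quorum server reachable
    # by all nodes. The polling interval is specified in microseconds.
    QS_HOST qs.example.com
    QS_POLLING_INTERVAL 300000000

Only one of the two mechanisms can be configured in a given cluster.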
Using Generic Resources to Monitor Volume Groups
You can monitor a particular disk that is part of an LVM volume group used by packages. To do
this, use the disk monitoring capabilities of System Fault Management, available as a separate
product, and integrate it with Serviceguard by configuring generic resources in packages.
Monitoring can be set up to trigger a package failover, or to report disk failure events to
Serviceguard, by writing monitoring scripts that are configured as a service in a package.
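As an example, the following modular package configuration excerpt is a sketch of this
arrangement; the resource name, service name, and script path are hypothetical:

    # A generic resource representing the health of a monitored disk.
    # The evaluation type controls when Serviceguard checks its status.
    generic_resource_name              sfm_disk_monitor
    generic_resource_evaluation_type   during_package_start

    # The monitoring script runs as a package service and sets the
    # status of the generic resource it monitors.
    service_name                       disk_mon_svc
    service_cmd                        "/usr/local/cmcluster/bin/disk_monitor.sh"
    service_restart                    none

The script itself reports status changes through the generic resource commands (for example,
cmsetresource); a sketch of such a script follows in the next subsection.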
Disk monitoring
Serviceguard provides disk monitoring for the shared storage that is activated by packages in the
cluster. The monitor daemon on each node tracks the status of all the disks on that node that you
have configured for monitoring.
The configuration must be done separately on each node in the cluster, because each node monitors
only the disks that can be activated on that node, which in turn depends on which packages are
allowed to run there.
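A minimal monitoring-script sketch follows, reusing the hypothetical device and resource names
from the package example above; the multipath output test and the cmsetresource options are
assumptions to verify against your release:

    #!/bin/sh
    # disk_monitor.sh -- sketch of a package service that polls a multipath
    # device and reports its state through a generic resource.
    DEVICE=/dev/mapper/rcvg_lun1      # hypothetical multipath alias
    RESOURCE=sfm_disk_monitor         # hypothetical generic resource name
    INTERVAL=30                       # polling interval in seconds

    while true
    do
        if multipath -ll "$DEVICE" | grep -q 'status=active'
        then
            cmsetresource -r "$RESOURCE" -s up
        else
            # Marking the resource down lets Serviceguard fail over any
            # package that depends on it.
            cmsetresource -r "$RESOURCE" -s down
        fi
        sleep "$INTERVAL"
    done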
NOTE: For disk monitoring, ensure that the multipath device names (either default or user-friendly)
are the same for both the source and destination LUNs of the Remote Copy volume group across
all nodes.
For more information, see the Red Hat Enterprise Linux DM Multipath Configuration and
Administration guide and the SUSE Linux Enterprise Server Storage Administration guide.
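One way to keep the names identical on every node is to bind a user-friendly alias to each LUN's
WWID in /etc/multipath.conf and replicate that configuration to all nodes; the WWID and alias
below are hypothetical:

    multipaths {
        multipath {
            # Bind a stable name to the LUN's WWID so the same
            # /dev/mapper/rcvg_lun1 device exists on every node.
            wwid   360002ac000000000000000015000012b
            alias  rcvg_lun1
        }
    }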
Cluster timeout considerations
Each node sends its heartbeat message at a rate that Serviceguard calculates from the value of
the MEMBER_TIMEOUT parameter, which is set in the cluster configuration file that you create as
part of cluster configuration.
The duration of the safety timer depends on the cluster configuration parameter MEMBER_TIMEOUT,
and also on the characteristics of the cluster configuration, such as whether it uses a quorum server
or a cluster lock (and the type of lock) and whether or not standby LANs are configured.
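For illustration, a minimal sketch of how the parameter appears in the cluster configuration file;
the value is an example only, and in recent Serviceguard releases the unit is microseconds (verify
against your release):

    # Heartbeat/membership timeout, in microseconds (14 seconds here).
    MEMBER_TIMEOUT 14000000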