HP 3PAR Cluster Extension Software Administrator Guide (5697-1429, March 2012)

Disk monitoring
In situations where disk access is lost or read/write protection is in effect due to storage
fencing, the application monitoring agents, file system agents, or LVM resource agents detect the I/O
failure. HP 3PAR Cluster Extension does not monitor the disk access status.
RHCS cluster setup considerations
Quorum
In RHCS, the quorum is based on a simple voting majority of the defined nodes in a cluster. To
re-form successfully, a majority of all possible votes is required.
Each cluster node is assigned a number of votes, which it contributes to the cluster while it is a
member. If the cluster holds a majority of all possible votes, it has quorum (the cluster is said
to be quorate); otherwise, it does not have quorum.
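As an illustration, node votes are set in the RHCS cluster.conf file with the votes attribute. The following is a hedged sketch for a hypothetical three-node cluster (the cluster and node names are placeholders, not values from this guide):

```xml
<!-- Hypothetical example: cluster and node names are illustrative. -->
<cluster name="excluster" config_version="1">
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1"/>
    <clusternode name="node2" nodeid="2" votes="1"/>
    <clusternode name="node3" nodeid="3" votes="1"/>
  </clusternodes>
</cluster>
```

With three one-vote nodes, quorum requires two votes, so a partition containing a single isolated node loses quorum while the two-node partition remains quorate.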
Fencing
Cluster software adjusts the node membership based on various failure scenarios. The concept of
quorum determines which set of nodes continues to form the cluster. To protect data, nodes that do
not have quorum are removed from the cluster. The non-quorate nodes that are removed must be
prevented from accessing the shared resources. This process is called fencing.
HP iLO fencing is one method that can be used with RHCS to restrict cluster node access to shared
resources.
Observe the following guidelines when using HP iLO network configurations with RHCS clusters:
• HP iLO can be connected to the client access network or to a different network, but the network
must be routable.
• HP iLO should not be on the network that is used for cluster communication.
• The HP iLO of each cluster system must be accessible over the network from every other cluster
system.
• To handle infrequent failures of HP iLO fencing (such as a switch failure), you can set up a
backup fence method for redundancy.
• HP iLO fencing can be used on HP ProLiant systems with built-in iLO hardware. For third-party
systems, other power control fencing methods can be used.
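In RHCS, iLO fencing is configured in cluster.conf by associating each node with a fence device that uses the fence_ilo agent. The fragment below is a sketch only; the node name, iLO hostname, and credentials are placeholder assumptions:

```xml
<!-- Hypothetical fragment: node name, iLO address, and credentials are placeholders. -->
<clusternode name="node1" nodeid="1" votes="1">
  <fence>
    <method name="1">
      <device name="node1-ilo"/>
    </method>
  </fence>
</clusternode>
<fencedevices>
  <fencedevice name="node1-ilo" agent="fence_ilo"
               ipaddr="ilo-node1.example.com" login="fenceuser" passwd="secret"/>
</fencedevices>
```

A backup fence method, as recommended above, would be added as a second <method> element under the same <fence> section; RHCS tries the methods in order until one succeeds.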
NOTE: IPMI fencing can be used for Integrity servers that do not support RIBCL scripting.
Qdisk configuration
Red Hat recommends a Qdisk configuration to bolster quorum in failure scenarios such as the failure
of half (or more) of the cluster members, an evenly split partition that requires a tie-breaker, or
a SAN failure. However, in an HP 3PAR Cluster Extension configuration with multiple storage arrays,
a Qdisk configuration is not supported.
Failover domains
A cluster service is associated with a failover domain, which is a subset of cluster nodes that are
eligible to run a particular cluster service. To maintain data integrity, each cluster service can run
on only one cluster node at a time. By assigning a cluster service to a restricted failover domain,
you can limit the nodes that are eligible to run a cluster service in the event of a failover, and you
can order the nodes by preference to ensure that a particular node runs the cluster service (as long
as that node is active).
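A restricted, ordered failover domain is declared in the resource manager (rm) section of cluster.conf. The following is an illustrative sketch; the domain, node, and service names are assumptions:

```xml
<!-- Hypothetical fragment: domain, node, and service names are placeholders. -->
<rm>
  <failoverdomains>
    <!-- restricted="1": the service may run only on the listed nodes.
         ordered="1": a lower priority value marks a more preferred node. -->
    <failoverdomain name="appdomain" restricted="1" ordered="1" nofailback="0">
      <failoverdomainnode name="node1" priority="1"/>
      <failoverdomainnode name="node2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
  <service name="appsvc" domain="appdomain" autostart="1"/>
</rm>
```

With this sketch, the service runs on node1 whenever node1 is active and fails over to node2 otherwise; because the domain is restricted, it never runs on any other cluster node.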