HP Serviceguard Extended Distance Cluster for Linux A.11.20.20 Deployment Guide, August 2013

Table 2 Link Technologies and Distances
Type of Link                                    Maximum Distance Supported
Gigabit Ethernet Twisted Pair                   50 meters
Short Wave Fiber                                500 meters
Long Wave Fiber                                 10 kilometers
Dense Wave Division Multiplexing (DWDM)         100 kilometers
The development of DWDM technology allows designers to use dark fiber (high speed
communication lines provided by common carriers) to extend the distances that were formerly
subject to limits imposed by Fibre Channel for storage and Ethernet for network links.
NOTE: Increased distance often means increased cost and reduced speed of connection. Not
all combinations of links are supported in all cluster types. For a current list of supported
configurations and supported distances, see the HP Configuration Guide, available through your
HP representative.
2.3 Two Data Center and Quorum Service Location Architectures
An architecture with two data centers and a Quorum Service location at a third site has the
following configuration requirements:
NOTE: There is no hard requirement on how far the Quorum Service location must be from the
two main data centers. It can be as close as the room next door with its own power source, or
as far away as another site across town. The distance between all three locations dictates the
level of disaster recovery a cluster can provide.
In these solutions, there must be an equal number of nodes in each primary data center, and
the third location (known as the arbitrator data center) contains the Quorum Server. LockLUN
is not supported in a Disaster Recovery configuration.
The Quorum Server is used as a tie-breaker to maintain cluster quorum when all communication
between the two primary data centers is lost. The arbitrator data center must be located
separately from the primary data centers. For more information about the Quorum Server, see
the Managing Serviceguard user’s guide and the Serviceguard Quorum Server Release Notes.
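As a sketch, the Quorum Server requirement above is reflected in the cluster configuration file through the QS_* parameters; the hostname and timing values shown here are hypothetical:

```
# Excerpt from a Serviceguard cluster configuration file (hypothetical values)
CLUSTER_NAME            xdc_cluster

# Quorum Server running at the third (arbitrator) location
QS_HOST                 qs-arbitrator.example.com
QS_POLLING_INTERVAL     300000000    # microseconds between quorum server polls
QS_TIMEOUT_EXTENSION    2000000      # extra microseconds allowed before a poll times out
```

Longer inter-site distances may warrant a larger QS_TIMEOUT_EXTENSION to tolerate network latency; consult the Serviceguard Quorum Server Release Notes for supported values.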
A minimum of two heartbeat paths must be configured for all cluster nodes. The preferred
solution is two separate heartbeat subnets configured in the cluster, each going over a
separately routed network path to the other data center. Alternatively, there can be a single
dedicated heartbeat subnet with a bonded pair configured for it. Each would go over a
separately routed physical network path to the other data centers.
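On Linux, the bonded heartbeat pair described above is typically built with the kernel bonding driver. A minimal sketch for a Red Hat-style system follows; the interface names and address are assumptions for illustration:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical heartbeat bond)
DEVICE=bond0
IPADDR=192.168.10.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
# active-backup keeps one NIC as a standby; miimon polls link state every 100 ms
BONDING_OPTS="mode=active-backup miimon=100"

# /etc/sysconfig/network-scripts/ifcfg-eth1  (one slave; eth2 is configured the same way)
DEVICE=eth1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

For the bond to provide a separately routed path, each slave NIC should be cabled through physically independent network infrastructure between the data centers.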
There can be separate networking and Fibre Channel links between the data centers, or both
networking and Fibre Channel can go over DWDM links between the data centers.
Fibre Channel Direct Fabric Attach (DFA) is recommended over Fibre Channel Arbitrated Loop
configurations, due to the superior performance of DFA, especially as the distance increases.
Therefore, Fibre Channel switches are recommended over Fibre Channel hubs.
For disaster recovery, application data must be mirrored between the primary data centers.
You must ensure that the mirror copies reside in different data centers, as the software cannot
determine the locations.
NOTE: When a failure results in the mirror copies losing synchronization, MD will perform
a full resynchronization when both halves of the mirror are available.
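Such a mirror can be sketched with the Linux MD tools; the multipath device names below are hypothetical, with one LUN presented from each data center:

```
# Create a RAID1 (mirror) MD device across one LUN from each data center
# (hypothetical multipath device names)
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      /dev/mapper/dc1_lun0 /dev/mapper/dc2_lun0

# Inspect mirror state; after a failure is repaired, resynchronization
# progress appears here
mdadm --detail /dev/md0
cat /proc/mdstat
```

Because the software cannot tell which data center a device resides in, the administrator must verify that the two member devices really are LUNs from different sites before creating the mirror.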