HP Serviceguard Extended Distance Cluster for Linux A.11.20.10 Deployment Guide, December 2012

The development of DWDM technology allows designers to use dark fiber (high speed
communication lines provided by common carriers) to extend the distances that were formerly
subject to limits imposed by Fibre Channel for storage and Ethernet for network links.
NOTE: Increased distance often means increased cost and reduced speed of connection. Not
all combinations of links are supported in all cluster types. For a current list of supported
configurations and supported distances, see the HP Configuration Guide, available through your
HP representative.
2.3 Two Data Center and Quorum Service Location Architectures
An architecture with two data centers and a Quorum Service location at a third site has the
following configuration requirements:
NOTE: There is no hard requirement on how far the Quorum Service location must be from the
two main data centers. It can be as close as the room next door, with its own power source, or
as far away as another site across town. The distance between all three locations dictates the
level of disaster recovery a cluster can provide.
In these solutions, there must be an equal number of nodes in each primary data center, and
the third location (known as the arbitrator data center) contains the Quorum Server. LockLUN
is not supported in a Disaster Recovery configuration.
The Quorum Server is used as a tie-breaker to maintain cluster quorum when all communication
between the two primary data centers is lost. The arbitrator data center must be located
separately from the primary data centers. For more information about the Quorum Server, see
the Managing Serviceguard user’s guide and the Serviceguard Quorum Server Release Notes.
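As an illustrative sketch of how the Quorum Server is tied into the cluster (the hostnames and file path here are examples, not part of this guide), the Quorum Server host can be specified when the cluster configuration template is generated, and then appears in the cluster ASCII file as QS_HOST together with its polling parameters:

```shell
# Generate a cluster configuration template that names the Quorum Server
# running at the arbitrator site (hostnames are illustrative).
cmquerycl -v -C /etc/cmcluster/cluster.conf \
    -n dc1node1 -n dc1node2 -n dc2node1 -n dc2node2 \
    -q qs-arbitrator

# The generated cluster.conf then contains entries such as:
#   QS_HOST                 qs-arbitrator
#   QS_POLLING_INTERVAL     300000000
#   QS_TIMEOUT_EXTENSION    2000000
```

The configuration would then be verified and distributed with cmcheckconf and cmapplyconf in the usual way; the polling values shown are placeholders to be tuned per the Quorum Server release notes.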
A minimum of two heartbeat paths must be configured for all cluster nodes. The preferred
solution is two separate heartbeat subnets configured in the cluster, each going over a
separately routed network path to the other data center. Alternatively, there can be a single
dedicated heartbeat subnet with a bonded pair of interfaces configured for it; each link in the
bond would then go over a separately routed physical network path to the other data center.
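As a minimal sketch of the bonded-pair alternative (device names, addresses, and the RHEL-style file locations are assumptions, not prescribed by this guide), a dedicated heartbeat bond can be defined so that its two slave interfaces are cabled over separately routed paths:

```shell
# Illustrative bonding setup for a dedicated heartbeat subnet
# (device names and addresses are examples only).
cat > /etc/sysconfig/network-scripts/ifcfg-bond0 <<'EOF'
DEVICE=bond0
IPADDR=192.168.10.11
NETMASK=255.255.255.0
ONBOOT=yes
BONDING_OPTS="mode=active-backup miimon=100"
EOF

cat > /etc/sysconfig/network-scripts/ifcfg-eth2 <<'EOF'
DEVICE=eth2
MASTER=bond0
SLAVE=yes
ONBOOT=yes
EOF
# eth3 is configured the same way. eth2 and eth3 should be cabled over
# separately routed physical paths to the other data center, so that no
# single link or conduit failure takes down the heartbeat subnet.
```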
There can be separate networking and Fibre Channel links between the data centers, or both
networking and Fibre Channel can go over DWDM links between the data centers.
Fibre Channel Direct Fabric Attach (DFA) is recommended over Fibre Channel Arbitrated loop
configurations, due to the superior performance of DFA, especially as the distance increases.
Therefore, Fibre Channel switches are recommended over Fibre Channel hubs.
For disaster recovery, application data must be mirrored between the primary data centers.
You must ensure that the mirror copies reside in different data centers, as the software cannot
determine the locations.
NOTE: When a failure causes the mirror copies to lose synchronization, MD (the Linux software
RAID driver) performs a full resynchronization once both halves of the mirror are available again.
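As an illustrative sketch (the device names are assumptions; in practice each underlying LUN would be presented from a different data center), a RAID-1 mirror is created with the Linux MD tools, and its synchronization state can be inspected afterward:

```shell
# Create a RAID-1 (mirror) MD device from one LUN in each data center
# (/dev/sdb1 local, /dev/sdc1 remote -- device names are examples).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Check mirror health and any resynchronization in progress.
cat /proc/mdstat
mdadm --detail /dev/md0
```

Since MD itself has no notion of site location, the administrator must verify that the two member devices really reside in different data centers before creating the mirror.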
2.4 Guidelines for Separate Network and Data Links
There must be less than 200 milliseconds of latency in the network between the data centers.
Routing is allowed to the third data center if a Quorum Server is used in that data center.
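As a quick sanity check of the latency requirement above (the hostname is illustrative), the round-trip time to a node in the remote data center can be measured with ping:

```shell
# Measure round-trip time to a node in the remote data center.
ping -c 10 dc2node1
# The rtt min/avg/max summary at the end of the output should show an
# average comfortably below the 200 ms limit.
```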
The maximum distance between the data centers for this type of configuration is currently
limited by the maximum distance supported for the networking type or Fibre Channel link type
being used, whichever is shorter.
There can be a maximum of 500 meters between the Fibre Channel switches in the two data
centers if Short-wave ports are used. This distance can be increased to 10 kilometers by using
a Long-wave Fibre Channel port on the switches. If DWDM links are used, the maximum