• Fibre Channel expects packet ordering to be preserved on inter-switch links
(ISLs); however, SONET does not guarantee ordering. Therefore, Fibre Channel
gateway or SAN extension devices are typically used between the Fibre Channel
switches and the SONET equipment to preserve packet ordering. Redundant Fibre
Channel switches are required in each data center, unless the switch offers
built-in redundancy.
• Refer to the SWD Streams documents for supported Fibre Channel switches. An
Extended Fabric license may be required if the ISL between the switches is
longer than 10 kilometers. For optimum data replication performance, tune the
buffer credits on the inter-switch links (ISLs) used for data replication
between the data centers, as shown in the sketch after this list.
• It is also possible to combine separate network links with SONET links for
Fibre Channel data, or to use SONET links for networking and Fibre Channel
links for data; however, it is usually more cost effective to use the SONET
links for both networking and Fibre Channel data.
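Buffer credit tuning is vendor specific; the following sketch assumes a
Brocade switch, where long-distance ISL modes (which require the Extended
Fabric license) control how many buffer credits are reserved for a port. The
port number and distance are illustrative only, and the exact syntax varies
with the Fabric OS version:

   # Configure port 2 as a long-distance ISL in LD (auto) mode, with
   # link-init translation enabled and a desired distance of 50 km, so
   # the switch reserves enough buffer credits for the round trip:
   portcfglongdistance 2 LD 1 50

   # Display the buffer credit allocation per port to verify:
   portbuffershow

Insufficient buffer credits on a long ISL throttle the link well below its
nominal bandwidth, which directly slows data replication between the data
centers.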
SONET Hardware Requirements:
HP does not require that any particular vendor’s SONET equipment be used. The
customer is responsible for the selection and maintenance of any SONET
equipment.
Extended Distance Cluster with Two Data Centers
Configurations with two data centers have the following additional requirements:
• To maintain cluster quorum after the loss of an entire data center, you must
configure dual cluster lock disks (one in each data center). Cluster lock
disks are supported for up to four nodes, so the cluster can contain only two
or four nodes. Serviceguard does not support dual lock LUNs, so lock LUNs
cannot be used in this configuration. When using dual cluster lock disks,
there is a possibility of Split Brain Syndrome (where the nodes in each data
center form two separate clusters, each with exactly one half of the cluster
nodes) if all communication between the two data centers is lost while all
nodes continue to run. The Serviceguard Quorum Server prevents the possibility
of split brain; however, the Quorum Server must reside in a third site.
Therefore, a three data center cluster is a preferable solution for preventing
split brain, and the only solution if dual cluster lock disks cannot be used
or if the cluster must have more than four nodes. A configuration sketch
showing dual cluster lock disks appears after this list.
• Two data center configurations are not supported if SONET is used for the cluster interconnects
between the Primary data centers.
• There must be an equal number of nodes (one or two) in each data center.
• To protect against the possibility of a split cluster, which is inherent
when using dual cluster locks, at least two (and preferably three) independent
paths between the two data centers must be used for heartbeat and cluster lock
I/O. Specifically, the path from the first data center to the cluster lock at
the second data center must be different from the path from the second data
center to the cluster lock at the first data center. Preferably, at least one
of the paths for heartbeat traffic should be different from each of the paths
for cluster lock I/O.
• Routing cannot be used for the networks between the data centers, except in Cross-Subnet
configurations.
• MirrorDisk/UX mirroring for LVM and VxVM mirroring are supported for
clusters of two or four nodes. However, the dual cluster lock devices can only
be configured in LVM Volume Groups.
• CVM 3.5, 4.1, 5.0, or 5.0.1 mirroring is supported for Serviceguard and EC
RAC clusters using CVM or CFS. However, the dual cluster lock devices must
still be configured in LVM Volume Groups, created as shown in the LVM sketch
after this list.
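Dual cluster lock disks are declared in the cluster ASCII configuration file
(a template can be generated with cmquerycl). The following is a minimal
sketch, not a complete configuration: the cluster name, volume group names,
device file names, and IP addresses are hypothetical, and only the parameters
relevant to the cluster lock are shown.

   CLUSTER_NAME              dc_cluster

   # Dual cluster lock: one lock volume group in each data center.
   # Both lock devices must belong to LVM Volume Groups.
   FIRST_CLUSTER_LOCK_VG     /dev/vglock1
   SECOND_CLUSTER_LOCK_VG    /dev/vglock2

   # node1 resides in data center 1.
   NODE_NAME                 node1
   NETWORK_INTERFACE         lan0
   HEARTBEAT_IP              192.10.10.1
   FIRST_CLUSTER_LOCK_PV     /dev/dsk/c1t2d0
   SECOND_CLUSTER_LOCK_PV    /dev/dsk/c2t2d0

   # node2 resides in data center 2.
   NODE_NAME                 node2
   NETWORK_INTERFACE         lan0
   HEARTBEAT_IP              192.10.10.2
   FIRST_CLUSTER_LOCK_PV     /dev/dsk/c1t2d0
   SECOND_CLUSTER_LOCK_PV    /dev/dsk/c2t2d0

   # Alternative: a Quorum Server at a third site replaces the lock
   # disks (omit the *_CLUSTER_LOCK_* parameters above if it is used).
   # QS_HOST                 qs-arbitrator
   # QS_POLLING_INTERVAL     300000000

Note that each node names both lock physical volumes; this is what makes the
independent paths for cluster lock I/O described above possible.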
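The lock Volume Groups themselves are created with standard HP-UX LVM commands
before the cluster is configured. A minimal sketch for one of the two lock
disks, with a hypothetical device file and minor number:

   # Initialize the disk as an LVM physical volume (raw device file):
   pvcreate -f /dev/rdsk/c1t2d0

   # Create the group file and the volume group for the lock disk:
   mkdir /dev/vglock1
   mknod /dev/vglock1/group c 64 0x010000
   vgcreate /dev/vglock1 /dev/dsk/c1t2d0

The same steps are repeated for the second lock disk in the other data center,
and the volume group configuration is then exported and imported on the
remaining cluster nodes.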