
the same network interfaces as the Serviceguard heartbeat; however, CRS supports only one
heartbeat (primary and standby pair).
In Serviceguard, the heartbeat subnets must be common to all data centers, with the exception
of Cross-Subnet configurations.
Cross-Subnet configurations are supported with Extended Cluster configurations of up to 16
nodes. This allows the nodes in each data center to configure their heartbeats on subnets that
are locally unique to their own data center. The heartbeats must be statically routed; static
route entries must be configured on each node to route the heartbeats through different paths.
Network routing must be configured between a minimum of two pairs of subnets between the data
centers, and the routing between each subnet pair must be configured such that the failure of
a single router or LAN segment does not take out all heartbeats between the data centers. There
must be a minimum of two heartbeat subnets in each data center, and each heartbeat subnet
must have a standby subnet. The heartbeat between the data centers requires TCP/IP
connectivity (that is, DLPI connectivity is not required between the data centers, but it is
required within the data centers). SGeRAC, CVM, and CFS are not supported in Cross-Subnet
cluster configurations. All other requirements listed in this section still apply to Cross-Subnet
configurations, unless otherwise noted.
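The following sketch illustrates one way such a Cross-Subnet heartbeat setup could look; the node names, LAN interfaces, subnets, and router addresses are hypothetical and are not taken from this document. Each node lists its two locally unique heartbeat subnets in the cluster ASCII configuration file, and static routes on each node send the two heartbeat subnets through different routers so that a single router failure cannot remove all heartbeat paths between the data centers.

    # Cluster ASCII configuration file excerpt (hypothetical addresses).
    # Data center A node: heartbeats on 192.168.1.0 and 192.168.2.0
    NODE_NAME nodeA1
      NETWORK_INTERFACE lan1
        HEARTBEAT_IP 192.168.1.11
      NETWORK_INTERFACE lan2
        HEARTBEAT_IP 192.168.2.11

    # Data center B node: heartbeats on 192.168.3.0 and 192.168.4.0
    NODE_NAME nodeB1
      NETWORK_INTERFACE lan1
        HEARTBEAT_IP 192.168.3.11
      NETWORK_INTERFACE lan2
        HEARTBEAT_IP 192.168.4.11
    # (standby LAN interfaces for each heartbeat subnet are omitted for brevity)

    # Static routes on nodeA1, each heartbeat subnet pair routed through a
    # different router (persistent entries belong in /etc/rc.config.d/netconf).
    route add net 192.168.3.0 netmask 255.255.255.0 192.168.1.1 1
    route add net 192.168.4.0 netmask 255.255.255.0 192.168.2.1 1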
There must be less than 200 milliseconds of round-trip latency on the link between the data
centers. This latency requirement applies to both the heartbeat network and the Fibre Channel
data.
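As a quick sanity check of the round-trip latency over the inter-site link (the host name below is hypothetical), a node in the remote data center can be pinged and the reported round-trip times compared against the 200 millisecond limit:

    # HP-UX ping: send 10 packets of 64 bytes to a node in the other data
    # center and check that the reported round-trip times stay under 200 ms.
    ping nodeB1 64 10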
Fibre Channel Direct Fabric Attach (DFA) is recommended over Fibre Channel Arbitrated Loop
configurations because of the superior performance of DFA, especially as the distance increases.
Any combination of the following Fibre Channel capable disk arrays may be used: HP Storage
Disk Array XP, HP Storage Enterprise Virtual Array (EVA), HP Storage 1500cs Modular Smart
Array (Active/Active controller version only), HP Storage 1000 Modular Smart Array
(Active/Active controller version only), or EMC Symmetrix disk arrays. Verify with the HP
Storage Division (SWD) that your desired combination of disk arrays is supported for
connection to the same Host Bus Adapter.
1. The Data Link Provider Interface (DLPI) is an industry-standard definition for message
communications to a STREAMS-based network interface driver. DLPI resides at layer 2, the
data link layer, of the OSI Reference Model.
2. WDM stands for Wavelength Division Multiplexing. There are two WDM technology solutions:
CWDM (Coarse Wavelength Division Multiplexing) and DWDM (Dense Wavelength Division
Multiplexing). CWDM is similar to DWDM but is less expensive, has fewer channels, is less
expandable, and works over distances of up to 100 km.
3. By separately routed network path, we mean a completely independent, physically separate
path, such that the failure of any component in one network path will not result in a network partition
between any nodes in the cluster. In the case of fault-tolerant WDM boxes, there may be a single
common WDM box in the data center; however, separate fibers should be routed independently
between the WDM boxes in each data center.
The use of MirrorDisk/UX with LVM or SLVM, or software mirroring with VxVM or CVM, is
required to mirror the application data between the primary data centers. Devices with
Active/Passive controllers are not supported with VxVM or CVM mirroring; therefore, only
LVM or Shared LVM with MirrorDisk/UX is supported for mirroring between the data
centers with these devices.
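As a minimal sketch of such mirroring with LVM and MirrorDisk/UX (the volume group name, device files, and size below are hypothetical), the physical volumes from each data center's array can be placed in separate physical volume groups, and the logical volume created with one mirror copy and PVG-strict allocation so that the two copies always reside in different data centers:

    # One PV from the array in each data center (hypothetical device files);
    # this assumes the /dev/vgdata/group device file has already been created.
    pvcreate /dev/rdsk/c4t0d1                       # data center A array
    pvcreate /dev/rdsk/c6t0d1                       # data center B array
    vgcreate -g pvg_dcA /dev/vgdata /dev/dsk/c4t0d1
    vgextend -g pvg_dcB /dev/vgdata /dev/dsk/c6t0d1

    # One mirror copy (-m 1, requires MirrorDisk/UX) with PVG-strict
    # allocation (-s g), so the original and the mirror land in different
    # physical volume groups, that is, in different data centers.
    lvcreate -m 1 -s g -L 1024 -n lvol_data /dev/vgdata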
See Table 2 (page 51), Table 3 (page 51) and Table 4 (page 53) for the distances and
number of nodes supported with different volume managers, Serviceguard/SGeRAC versions,
and HP-UX versions.
VxVM/CVM mirroring is supported in Extended Clusters for distances of up to 100 kilometers.
CFS relies on CVM mirroring. Please see the “Special requirements and recommendations for