to ensure that hot relocation remains disabled. Note that preventing vxrelocd from starting
disables the hot relocation feature for all VxVM/CVM volumes on the system. The VxVM
Administration Guide provides additional information on how to use the hot relocation feature
in a more granular way.
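For example, a minimal sketch of checking for and stopping a running vxrelocd daemon (the
startup-script path shown is typical for VxVM on HP-UX, but it varies by VxVM release, so
verify it on your installation):

    # ps -ef | grep vxrelocd     (check whether the hot relocation daemon is running)
    # kill <PID>                 (stop the daemon; substitute the PID reported above)

To keep vxrelocd from starting at boot, comment out the line that invokes it in the VxVM
startup script, for example /sbin/init.d/vxvm-recover.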
• Different CVM/CFS revisions have different limitations and requirements:
◦ CVM 3.5 mirroring is supported for EC RAC clusters for distances up to 10 kilometers
for 2, 4, 6, or 8 node clusters, and up to 100 kilometers for 2 node clusters. CVM 3.5
allows only one heartbeat network to be defined for the cluster; you must therefore make
the heartbeat network highly available by using a standby LAN to provide redundancy
for the heartbeat network (see the configuration excerpt after this list). The heartbeat
subnets must be dedicated networks, to ensure that other network traffic does not
saturate the heartbeat networks.
◦ CVM 4.1/CFS 4.1 mirroring is supported for distances up to 100 kilometers for 2, 4, 6,
or 8 node clusters. CVM/CFS 4.1 are available only with SG SMS A.01.00 or A.01.01.
Standalone CVM 4.1 (without SG SMS) is also supported.
◦ CVM 5.0/CFS 5.0 and CVM 5.0.1/CFS 5.0.1 mirroring is supported for distances up
to 100 kilometers for 2, 4, 6, or 8 node clusters on HP-UX 11i v2 or 11i v3. Beginning
with version 5.0, VxVM and CVM include a new feature, “site-awareness”, which simplifies
the configuration of disk volumes and mirroring in an Extended Cluster by allowing the
storage in each data center to be tagged with a site ID (see the tagging sketch after this
list). A license to enable this feature is included with specific SG SMS A.02.00 product
bundles. CVM/CFS 5.0 are available only with SG SMS A.02.00, A.02.01, or A.02.01.01.
CVM/CFS 5.0.1 are available only with SG SMS A.03.00 on HP-UX 11i v3. Beginning
with SG SMS A.02.01, CVM 5.0/CFS 5.0 mirroring is supported for distances of up to
100 kilometers for 2, 4, 6, 8, 10, 12, 14, or 16 node clusters on HP-UX 11i v2 or 11i v3.
Standalone CVM 5.0 (without SG SMS) is also supported.
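Because CVM 3.5 permits only a single heartbeat subnet, a standby LAN is the way to make
that heartbeat highly available. A minimal sketch of the relevant portion of a Serviceguard
cluster configuration file (the node name, interface names, and IP address are hypothetical):

    NODE_NAME node1
      NETWORK_INTERFACE lan0
        HEARTBEAT_IP 192.168.10.1
      NETWORK_INTERFACE lan1
      # lan1 carries no IP address; it is a standby interface on the same
      # bridged network as lan0 and takes over if lan0 fails

A standby interface appears in the configuration file as a NETWORK_INTERFACE entry with no
IP address assigned.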
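The site-awareness feature is administered with VxVM commands. The following is a sketch of
tagging storage by data center (the disk group, site, and device names are hypothetical, and
exact option syntax varies by VxVM release; check vxdisk(1M) and vxdg(1M)):

    # vxdctl set site=dc_A                   (set this node's site name; repeat on each
                                              node with its own site)
    # vxdisk settag c4t0d1 site=dc_A         (tag each disk with the site where it resides)
    # vxdisk settag c6t0d1 site=dc_B
    # vxdg -g rac_dg addsite dc_A            (register both sites in the disk group)
    # vxdg -g rac_dg addsite dc_B
    # vxdg -g rac_dg set siteconsistent=on   (require a complete plex at each site)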
Recommendations and Requirements for EC RAC Configurations with Oracle RAC 10g or 11g
• Oracle 10g Release 2 and later supports up to two copies of the Oracle Cluster Registry (OCR)
and up to three vote disks. For Extended Distance Cluster (EDC) configurations, each copy of
the OCR and each vote disk must be physically mirrored between the two data centers (see
the sketch after this list). The mirrored OCR and vote disks ensure that Oracle Clusterware
has access to local physical copies for Oracle Clusterware cluster reformation.
• For EC RAC configurations, HP recommends that you maintain local storage for the Oracle
Clusterware and Oracle Database binaries and HOME directories, to reduce inter-site traffic.
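As an illustration, a vote disk volume might be mirrored across the two data centers by
placing one plex on a disk from each site (the disk group and disk media names here are
hypothetical):

    # vxassist -g vote_dg make vote_vol 1g layout=mirror nmirror=2 disk_dc1 disk_dc2

Naming the disks explicitly restricts vxassist allocation so that one mirror lands in each
data center; with the site-awareness feature described earlier, the site tags achieve the
same placement automatically.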
TCP/IP Network and Fibre Channel Data Links between the Data Centers
There are three supported configurations for the interconnections between the data centers.
Separate Links for TCP/IP Networking and Fibre Channel Data
• The maximum distance between the data centers for this type of configuration is currently
limited by the maximum distance supported for the networking type or Fibre Channel link type
being used, whichever is shorter.
• Ethernet switches can support varying distances for the inter-switch link between the data
centers, depending upon the type of GBIC and fiber cabling used. Inter-switch distances of
up to 100 kilometers are supported in Extended Clusters. Check with the network switch vendor
for the distances supported for the inter-switch link and the hardware and cabling requirements.
• There can be a maximum of 500 meters between the Fibre Channel switches in the two data
centers if Short wave GBICs are used. This distance can be increased to 10 kilometers by
using Long wave Fibre Channel GBICs in the switches. The distance can be increased to 80