Implementing disaster recovery for HP Integrity Virtual Machines with Metrocluster and Continentalclusters on HP-UX 11i
Table of contents
- Executive summary
- Introduction
- Audience
- Configuring Integrity Virtual Machines as packages in HP Metrocluster
- Verifying failover of Metrocluster packages across data centers
- Troubleshooting Metrocluster VM problems
- Application startup and monitoring
- Configuring Integrity Virtual Machines as packages in HP Continentalclusters
- Overview
- Software requirements for HP VMs in Continentalclusters
- Configuring HP VM packages in Continentalclusters
- Creating VM switches in all nodes of the primary cluster
- Configuring replicated storage for VM in Continentalclusters
- Installing the operating system on the virtual machine
- Testing the virtual guest OS in all nodes of the primary cluster
- Creating VM switches in all nodes of the recovery cluster
- Preparing the replicated storage for use in the recovery cluster
- Creating the virtual machine in all nodes of the recovery cluster
- Testing the virtual guest OS in all nodes of the recovery cluster
- Resynchronizing the replicated storage
- Packaging the HP VM in the primary cluster and the recovery cluster
- Creating a Continentalclusters package
- Creating a Continentalclusters configuration with the VM packages
- Running the Continentalclusters monitoring daemon in the recovery cluster
- Recovering to the recovery cluster
- Related documentation
- Appendix I
- Appendix II
- For more information
- Call to action

The steps required to configure a VM into a Metrocluster package are explained using a sample Metrocluster Continuous Access XP/P9000 configuration. The same configuration information can be readily extended to Metrocluster Continuous Access EVA and Metrocluster with EMC SRDF configurations; the specific differences are noted in detail where applicable.
Assume that a 4-node Serviceguard cluster is created. Of the four nodes in the cluster, Node1 and
Node2 are in the primary data center, while Node3 and Node4 are in the secondary or standby data
center. Two XP storage systems are used—one in each data center. The two XP arrays are connected
using Continuous Access links, and data in the primary storage array is replicated to the secondary
storage array through these links. Either a quorum server or arbitrator nodes running in a third
location provide cluster quorum services. See Figure 1 for a pictorial representation of this setup.
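As an illustration of how such a cluster might be created, the following sketch uses the standard Serviceguard commands; the node names (node1 through node4), the quorum server host name (qs-host), and the configuration file path are placeholders rather than values taken from an actual installation.

# Generate a cluster configuration template that includes all four nodes
# and a quorum server (-q); qs-host stands in for the quorum server host.
cmquerycl -v -C /etc/cmcluster/cluster.ascii \
    -n node1 -n node2 -n node3 -n node4 -q qs-host

# Review and edit the generated template, then validate and apply it.
cmcheckconf -C /etc/cmcluster/cluster.ascii
cmapplyconf -C /etc/cmcluster/cluster.ascii

# Start the cluster and verify that all four nodes join.
cmruncl -v
cmviewcl -v

If arbitrator nodes are used instead of a quorum server, the -q option is omitted and the arbitrator nodes are simply included as additional -n entries.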
For details on configuring HP Integrity Virtual Machines, visit www.hp.com/go/hpux-hpvm-docs (click on “Setup and Install—General”).
Software requirements for Metrocluster
On each node in the cluster, the following products are assumed to be running (a sketch for verifying the installed versions follows the list):
• HP Integrity Virtual Machines 4.0 or later
• HP Serviceguard A.11.18 or later
• Metrocluster with Continuous Access XP A.09.00 or later (or Metrocluster with EMC SRDF A.08.00
or later, or Metrocluster with Continuous Access EVA A.04.00 or later)
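A quick way to verify that these products are installed at suitable versions on each node is to query the installed software with swlist. The grep patterns below are illustrative only, because the exact bundle names vary between product releases.

# List the installed bundles and filter for the products named above.
swlist -l bundle | grep -i -e "Integrity VM" -e "Serviceguard" -e "Metrocluster"

# Show product-level details, including revisions, for the Integrity VM software.
swlist -l product | grep -i hpvm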
For supported versions of these products, refer to the support matrix at the following locations:
1. www.hp.com/go/hpux-SG-Metrocluster-XP-docs—click on General Reference and then on
Serviceguard Disaster Recovery Products Compatibility and Feature Matrix (Metrocluster with
Continuous Access for P9000 and XP)
2. www.hp.com/go/hpux-SG-Metrocluster-EVA-docs—click on General Reference and then on
Serviceguard Disaster Recovery Products Compatibility and Feature Matrix (Metrocluster with
Continuous Access for EVA)
3. www.hp.com/go/hpux-SG-Metrocluster-SRDF-docs—click on General Reference and then on
Serviceguard Disaster Recovery Products Compatibility and Feature Matrix (Metrocluster with
EMC SRDF)
In this paper, a single-instance Oracle database serves as the example application running inside the HP Integrity Virtual Machine. The database resides on StorageWorks XP arrays and is replicated to the remote site via Continuous Access. The Continuous Access XP device group used by the single-instance Oracle application is managed by Metrocluster. Figure 1 illustrates this setup.
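The device group itself is defined to Raid Manager XP (HORCM) on the cluster nodes. The excerpt below sketches what such a definition might look like in a horcm.conf file; the HORCM_MON section and other details are omitted, and every value shown (the group name oradb, the command device file, the port, target, and LUN numbers, and the remote instance on Node3) is an assumption for illustration only.

HORCM_CMD
# Command device on the local XP array (placeholder device file)
/dev/rdsk/c10t0d1

HORCM_DEV
# dev_group   dev_name    port#    TargetID   LU#
oradb         oradata1    CL1-A    0          1
oradb         oradata2    CL1-A    0          2

HORCM_INST
# dev_group   remote_host   service
oradb         Node3         horcm0

Once the Raid Manager instances are running at both sites, pairdisplay -g oradb shows the replication state of the device group that Metrocluster manages for the package.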