Implementing disaster recovery for HP Integrity Virtual Machines with Metrocluster and Continentalclusters on HP-UX 11i
Table of Contents
- Executive summary
- Introduction
- Audience
- Configuring Integrity Virtual Machines as packages in HP Metrocluster
- Verifying failover of Metrocluster packages across data centers
- Troubleshooting Metrocluster VM problems
- Application startup and monitoring
- Configuring Integrity Virtual Machines as packages in HP Continentalclusters
- Overview
- Software requirements for HP VMs in Continentalclusters
- Configuring HP VM packages in Continentalclusters
- Creating VM switches in all nodes of the primary cluster
- Configuring replicated storage for VM in Continentalclusters
- Installing the operating system on the virtual machine
- Testing the virtual guest OS in all nodes of the primary cluster
- Creating VM switches in all nodes of the recovery cluster
- Preparing the replicated storage for use in the recovery cluster
- Creating the virtual machine in all nodes of the recovery cluster
- Testing the virtual guest OS in all nodes of the recovery cluster
- Resynchronizing the replicated storage
- Packaging the HP VM in the primary cluster and the recovery cluster
- Creating a Continentalclusters package
- Creating a Continentalclusters configuration with the VM packages
- Running the Continentalclusters monitoring daemon in the recovery cluster
- Recovering to the recovery cluster
- Related documentation
- Appendix I
- Appendix II
- For more information
- Call to action
d) Set the WAIT_TIME variable to the timeout, in minutes, for the data merge from the source
to the destination volume to complete before the package starts on the destination volume. If
the timeout expires while the merge is still in progress, the package fails to start, and the
resulting error prevents it from restarting on any node in the cluster.
e) Set the DR_GROUP_NAME variable to the name of the DR group used by this package. This DR
group name is defined when the DR group is created.
f) Set the DC1_STORAGE_WORLD_WIDE_NAME variable to the world wide name (WWN) of
the EVA storage system that resides in data center 1. This WWN can be found on the front
panel of the EVA controller, or retrieved using Command View EVA.
g) Set the DC1_SMIS_LIST variable to the list of management servers that reside in data center 1.
Separate multiple names with commas.
h) Set the DC1_HOST_LIST variable to the list of clustered nodes that reside in data center 1.
Separate multiple names with commas.
i) Set the DC2_STORAGE_WORLD_WIDE_NAME variable to the world wide name of the EVA
storage system that resides in data center 2. This WWN can be found on the front panel of the
EVA controller, or retrieved from the Command View EVA UI.
j) Set the DC2_SMIS_LIST variable to the list of management servers that reside in data center 2.
Separate multiple names with commas.
k) Set the DC2_HOST_LIST variable to the list of clustered nodes that reside in data center 2.
Separate multiple names with commas.
l) Set the QUERY_TIME_OUT variable to the number of seconds to wait for a response from the
SMI-S CIMOM in the management server. The default timeout is 300 seconds.
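The settings from steps d) through l) are collected in the package's Metrocluster environment file. The following sketch shows one plausible layout; all concrete values (the WWNs, DR group name, host and server names, and the file contents as a whole) are illustrative placeholders, not taken from a real configuration:

```shell
# Hypothetical Metrocluster environment file values (placeholders only)
WAIT_TIME=20                        # minutes to wait for the data merge to complete
DR_GROUP_NAME="vmpkg_drgroup"       # DR group created for this package's storage

DC1_STORAGE_WORLD_WIDE_NAME="5000-1FE1-0000-1234"      # WWN of the EVA in data center 1
DC1_SMIS_LIST="smis1a.example.com,smis1b.example.com"  # management servers, comma-separated
DC1_HOST_LIST="node1a,node1b"                          # clustered nodes in data center 1

DC2_STORAGE_WORLD_WIDE_NAME="5000-1FE1-0000-5678"      # WWN of the EVA in data center 2
DC2_SMIS_LIST="smis2a.example.com"
DC2_HOST_LIST="node2a,node2b"                          # clustered nodes in data center 2

QUERY_TIME_OUT=300                  # seconds to wait for the SMI-S CIMOM (default 300)
```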
3. Distribute the Metrocluster configuration to all the other nodes in the cluster.
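The environment file must be present, with identical contents and at the same path, on every node that can run the package. One way to distribute it, assuming hypothetical node names and file path, is a simple remote-copy loop:

```shell
# Hypothetical distribution of the package environment file (names and path are placeholders)
ENV_FILE=/etc/cmcluster/vmpkg/vmpkg_mc.env
for node in node1b node2a node2b; do
    rcp "$ENV_FILE" "${node}:${ENV_FILE}"   # copy to the same path on each remote node
done
```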