Implementing disaster recovery for HP Integrity Virtual Machines with Metrocluster and Continentalclusters on HP-UX 11i
Table of contents
- Executive summary
- Introduction
- Audience
- Configuring Integrity Virtual Machines as packages in HP Metrocluster
- Verifying failover of Metrocluster packages across data centers
- Troubleshooting Metrocluster VM problems
- Application startup and monitoring
- Configuring Integrity Virtual Machines as packages in HP Continentalclusters
- Overview
- Software requirements for HP VMs in Continentalclusters
- Configuring HP VM packages in Continentalclusters
- Creating VM switches in all nodes of the primary cluster
- Configuring replicated storage for VM in Continentalclusters
- Installing the operating system on the virtual machine
- Testing the virtual guest OS in all nodes of the primary cluster
- Creating VM switches in all nodes of the recovery cluster
- Preparing the replicated storage for use in the recovery cluster
- Creating the virtual machine in all nodes of the recovery cluster
- Testing the virtual guest OS in all nodes of the recovery cluster
- Resynchronizing the replicated storage
- Packaging the HP VM in the primary cluster and the recovery cluster
- Creating a Continentalclusters package
- Creating a Continentalclusters configuration with the VM packages
- Running the Continentalclusters monitoring daemon in the recovery cluster
- Recovering to the recovery cluster
- Related documentation
- Appendix I
- Appendix II
- For more information
- Call to action
AUTO_FENCEDATA_SPLIT=1
AUTO_SVOLPSUS=0
AUTO_NONCURDATA=0
MULTIPLE_PVOL_OR_SVOL_FRAMES_FOR_PKG=0
WAITTIME=300
PKGDIR=/etc/cmcluster/vmmetro
FENCE=never (or data or async as the case may be)
DEVICE_GROUP=dgVM
CLUSTER_TYPE="metro"
HORCTIMEOUT=360
3. Once the <package>_xpca.env file has been modified for the specific package, copy this file to all nodes in the cluster.
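The copy in step 3 can be scripted as a simple loop. This is a minimal sketch: the node names (node1 through node4) and the exact file name (here built from the vmmetropkg example package used later in this paper) are assumptions you must adapt to your cluster, and rcp is shown only because it is available in a base HP-UX install; remsh or scp work equally well.

```shell
# Sketch: push the edited environment file to every cluster node.
# ENVFILE and NODES are placeholders for your environment.
ENVFILE=/etc/cmcluster/vmmetro/vmmetropkg_xpca.env
NODES="node1 node2 node3 node4"

for NODE in $NODES
do
    # Echo first so the loop can be reviewed; drop 'echo' to copy for real.
    echo rcp "$ENVFILE" "${NODE}:${ENVFILE}"
done
```

Running the loop with the echo in place prints the four copy commands so they can be checked before anything is transferred.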
Verifying failover of Metrocluster packages across data centers
To verify that the Metrocluster package is working properly, complete the following procedure to
perform a manual failover between data centers:
1. Verify that the package is running on one of the primary data center nodes (Node1 or Node2).
# cmviewcl -v -p vmmetropkg
2. Verify that the XP device group is in a pair state.
# pairdisplay -g dgVM
For Metrocluster SRDF, use "symrdf -g dgVM query" to verify that the SRDF group is in a
synchronized state. Similarly, for Metrocluster Continuous Access EVA, use HP StorageWorks
Command View EVA to verify that the DR group operational state is good.
3. Stop the Oracle application running in the virtual machine.
4. Halt the package.
# cmhaltpkg vmmetropkg
5. Verify that the package has stopped.
# cmviewcl -v -p vmmetropkg
6. Start the package on one of the recovery site nodes (Node3 or Node4).
# cmrunpkg vmmetropkg
7. Verify that the package is running on one of the recovery site nodes (Node3 or Node4).
# cmviewcl -v -p vmmetropkg
8. Verify that the XP device group dgVM has failed over to the secondary data center with the
pairdisplay/pairvolchk commands.
For Metrocluster SRDF, use "symrdf -g dgVM query" to verify that the SRDF group is in a
failed-over state. Similarly, for Metrocluster Continuous Access EVA, use HP StorageWorks
Command View EVA to verify that the DR group operational state is good.
9. On the adoptive node, verify that the VM guest is running properly.
# hpvmstatus -P vmmetro
10. Verify that you can connect to the VM guest OS. Start up the Oracle application, and verify the
database contents.
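The command portions of the procedure above can be collected into a single dry-run script for review. This is only a sketch: by default it prints each command rather than executing it, because the steps run on different nodes (steps 1-5 on a primary-site node, 6-10 on a recovery-site node) and the application stop in step 3 happens inside the guest. The package, device-group, and VM names are the examples used in this paper; substitute your own.

```shell
# Dry-run sketch of the manual failover verification. DRY_RUN defaults
# to 1 (print only); set DRY_RUN=0 on the correct node to execute a step.
DRY_RUN=${DRY_RUN:-1}
PKG=vmmetropkg     # Metrocluster package name
DG=dgVM            # XP device group
VM=vmmetro         # VM guest name

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$@"
    else
        "$@"
    fi
}

run cmviewcl -v -p "$PKG"    # 1. package up on a primary-site node?
run pairdisplay -g "$DG"     # 2. XP device group in a pair state?
                             # 3. stop the application inside the guest
run cmhaltpkg "$PKG"         # 4. halt the package
run cmviewcl -v -p "$PKG"    # 5. confirm the package has stopped
run cmrunpkg "$PKG"          # 6. start it on a recovery-site node
run cmviewcl -v -p "$PKG"    # 7. confirm it is running there
run pairdisplay -g "$DG"     # 8. device group failed over?
run hpvmstatus -P "$VM"      # 9. VM guest up on the adoptive node?
```

Reviewing the printed sequence before a real failover exercise helps confirm that the package, device-group, and guest names match the configuration.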
If any of the above steps fails, see the next section, "Troubleshooting Metrocluster VM
problems."