Testing the virtual guest OS in all nodes of the recovery cluster
Once the virtual machine is created, start it on Node3 of the recovery cluster. The machine will boot the guest OS image that was installed in the primary cluster. However, because a static IP address was assigned in the primary cluster and the recovery cluster is in a different domain, the original hostname and network address of the guest HP-UX OS are no longer valid and will prevent network connectivity in the recovery cluster. To reset the networking in the guest HP-UX OS, issue the following commands:
#set_parms hostname
#set_parms ip_address
These commands prompt for the new hostname and the new IP address of the machine and then offer to reboot; choose the no-reboot option at the end of each command. Once the hostname and IP address are set properly, edit the /etc/rc.config.d/netconf file and set the proper default gateway address. Then reboot the guest HP-UX OS. Once the reboot is complete, network connectivity will be available.
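For example, the default gateway is defined by the route entries in /etc/rc.config.d/netconf. A minimal sketch follows; the gateway address shown is only a placeholder for the router at the recovery site:
ROUTE_DESTINATION[0]="default"
ROUTE_GATEWAY[0]="192.168.20.1"
ROUTE_COUNT[0]="1"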
Note: The above steps are not required when the primary and recovery clusters are in the same subnet or when the IP address for the machine is assigned via DHCP.
Once the guest OS has been tested on Node3, shut down the guest HP-UX OS and halt the virtual machine. Then import the volume group on Node4 and test the virtual machine startup on that node.
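The following is a minimal sketch of this step; the guest name vmguest1, the volume group name /dev/vgvm, and the map file path are placeholders for the names used in your configuration. On Node3, after shutting down HP-UX inside the guest:
#hpvmstop -P vmguest1
#vgchange -a n /dev/vgvm
#vgexport -p -s -m /tmp/vgvm.map /dev/vgvm
Copy the map file to Node4, create the volume group directory and group device file there if they do not already exist, and then import and activate the volume group and start the guest:
#vgimport -s -m /tmp/vgvm.map /dev/vgvm
#vgchange -a y /dev/vgvm
#hpvmstart -P vmguest1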
Resynchronizing the replicated storage
Once the guest OS has been tested in all nodes of the recovery cluster, the replication must be started
again. Use the following command to resynchronize the replication pair:
#pairresync -I0 -g dgVM -c 15
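The progress of the resynchronization can be verified with the RAID Manager pairdisplay command; the device group name dgVM and instance number 0 match the example above, and the -fc option displays the copy percentage:
#pairdisplay -g dgVM -I0 -fc
The pair has been fully resynchronized when the volumes return to the PAIR state.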
Packaging the HP VM in the primary cluster and the recovery cluster
To create a Continentalclusters package for the HP VM guest, you must first create a Serviceguard package in the cluster. The following steps outline how to configure a Serviceguard node for a VM environment and how to set up VMs as Serviceguard packages:
1. Configure the Integrity VM multiserver environment.
2. Create a Serviceguard package using the hpvmsg_package script.
3. Create a Continentalclusters package.
Configuring the Integrity VM multiserver environment
This step involves registering each VM host system that will be a part of the multiserver environment using the hpvmdevmgmt command. Registration enables the VM guest to be visible to all Serviceguard nodes as a distributed guest.
1. Install Integrity VM and create the guest with all necessary virtual storage devices and vswitches.
Repeat this procedure on each node in the multiserver environment.
2. Install, configure, and run Serviceguard on every node in the multiserver environment. Register each VM host system that is part of the multiserver environment using the hpvmdevmgmt command. This enables the VM guest to be visible to all Serviceguard nodes as a distributed guest.
3. Start the guest on the primary node using the hpvmstart command. Use the hpvmstatus
command to verify the guest name and to make sure that it is running.
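For example, assuming a guest named vmguest1 (a placeholder for the actual guest name), the guest can be started on the primary node and verified as follows:
#hpvmstart -P vmguest1
#hpvmstatus -P vmguest1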
Note: This procedure must be followed on both the primary and recovery clusters.