When run on a Metrocluster node at the site connected to the target storage of the DC1-DC2
replication pair, the cmdrprev command previews the data replication preparation for a Metrocluster
remote failover.
When run on a Metrocluster node connected to the source storage of the DC1-DC2 replication
pair, the command previews the data replication preparation for a local failover. When run on a
node in the recovery cluster, the cmdrprev command previews the data replication preparation for
a Continentalclusters recovery. Because the command does not change the data replication
environment, it can be run even while the package associated with the data replication storage is up
in the 3DC solution. If the FORCEFLAG file is present in the package directory, an actual data
replication storage failover removes it. In line with this behavior, the cmdrprev command
also removes the FORCEFLAG file from the package directory. For more information on using
cmdrprev, see Appendix F “Metrocluster Command Reference for Preview Utility”.
NOTE: The cmdrprev command does not check the configuration of the Remote Command
Devices and Device Group Monitor.
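For example, a preview can be run as follows from the appropriate node. This is a minimal sketch:
the package name pkg_dc1_app is hypothetical, and passing the package name with the -p option
is an assumption here; see Appendix F for the authoritative syntax:
# cmdrprev -p pkg_dc1_app
The command reports the storage preparation actions a failover would perform without modifying
the replication environment.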
Testing the application startup at the recovery cluster
To verify that the application can be started successfully at the recovery cluster, complete the
following procedure during the planned downtime for maintenance:
1. Halt the primary package running in the primary cluster (Metrocluster) using the cmhaltpkg
command (a sketch of the Serviceguard commands in this procedure follows step 8). When
complex workloads are used, halting the Site Controller package brings down the corresponding
complex workload stack running in DC1.
2. This step is required only if you are using the 3DC CAJ/CAJ Tri-Link configuration. Otherwise,
go to step 3.
Delete the Delta Resync pair from a DC3 node:
# pairsplit -g <Delta Resync-pair> -R
3. Split the Active-CAJ pair to make it read-write at the recovery cluster. To suspend replication,
split the Continuous Access Journal device group pair over which replication to DC3 currently
takes place, using the pairsplit -RS command from a node in DC3:
# pairsplit -g <Active-CAJ-pair> -RS
4. Change the cluster ID of all LVM and SLVM volume groups managed by the application (a
concrete example follows this procedure). For LVM volume groups, run the following commands
from a node in the primary cluster to change the cluster ID:
# vgchange -c n <vg_name>
# vgchange -c y <vg_name>
For SLVM volume groups, run the following commands from a node in the primary cluster to
change the cluster ID:
# vgchange -c n -S n <vg_name>
# vgchange -c y -S y <vg_name>
5. Start the recovery package on a DC3 node using the cmrunpkg command.
6. Verify that the application starts up successfully in the recovery cluster.
7. Halt the recovery package using the cmhaltpkg command.
8. Resume replication from the primary cluster to the recovery cluster. Run the following commands
from a DC3 node to overwrite any changes on the recovery cluster disk array and resume the
replication:
# pairsplit -g <Active-CAJ-pair> -RB
# pairresync -g <Active-CAJ-pair> -c 15
These commands discard the changes that were made at the recovery cluster and start synchronizing
the Continuous Access Journal group data. The resulting pair state can be confirmed as shown in
the verification sketch after this procedure.
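The following sketch summarizes the Serviceguard operations in steps 1, 5, 6, and 7. The package
names pkg_dc1_app and pkg_dc3_app and the node name dc3node1 are hypothetical; substitute
the names used in your configuration:
# cmhaltpkg pkg_dc1_app
# cmrunpkg -n dc3node1 pkg_dc3_app
# cmviewcl -v -p pkg_dc3_app
# cmhaltpkg pkg_dc3_app
The cmviewcl output shows whether the recovery package and its services are up before the
package is halted again.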
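For step 4, assuming a hypothetical LVM volume group named /dev/vg_app, the cluster ID is
cleared and then reapplied as follows:
# vgchange -c n /dev/vg_app
# vgchange -c y /dev/vg_app
For an SLVM volume group, add the -S n and -S y options as shown in step 4.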
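To confirm the pair states after the pairsplit and pairresync operations in steps 2, 3, and 8, standard
RAID Manager commands can be run from a node in DC3. The group name placeholder matches
the one used above, and the 600-second timeout is only an example:
# pairdisplay -g <Active-CAJ-pair> -fc
# pairevtwait -g <Active-CAJ-pair> -s pair -t 600
pairdisplay shows the current pair status and copy percentage; pairevtwait returns when the device
group reaches the PAIR state or the timeout expires.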