HP Serviceguard Extended Distance Cluster for Linux A.12.00.00 Deployment Guide, March 2014
Table 4 Disaster Scenarios and Their Handling (continued)
Disaster Scenario
A package (P1) is running on node N1. The package uses a VxVM mirror across sites that consists of two plexes: S1 (local to node N1) and S2 (local to node N2). Data center 1, which consists of node N1 (where package P1 is running) and storage S1, experiences a failure.

What Happens When This Disaster Occurs
The package (P1) fails over to node N2 and starts running with only one plex of the VxVM mirror, consisting of only the storage local to node N2 (S2).

Recovery Process
To initiate a recovery:
1. Restore data center 1, node N1, and storage S1. Once node N1 is restored, it rejoins the cluster. Once S1 is restored, it becomes accessible from node N2.
   NOTE: If the site is not reattached automatically, you must manually reattach the site and recover the disk group:
   #vxdg -g diskgroup reattachsite sitename
   #vxrecover -g diskgroup
   For more information, see the Cluster File System High Availability 6.0.1 Administrator's Guide – Linux.
2. Enable P1 to run on node N1:
   #cmmodpkg -e P1 -n N1
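The recovery for this scenario can be sketched as a short shell script. This is a minimal sketch, not part of the guide: the disk group and site names (xdc_dg, site1) are placeholder assumptions for the diskgroup and sitename arguments shown above, and DRY_RUN=1 only prints each command so the sequence can be reviewed before running it on a real cluster.

```shell
#!/bin/sh
# Sketch of the recovery sequence after data center 1 is restored.
# DISKGROUP and SITENAME are placeholder values -- substitute your own.
DISKGROUP=xdc_dg
SITENAME=site1
DRY_RUN=1    # set to 0 to actually execute on the cluster

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Reattach the site and recover the disk group if this did not
# happen automatically once S1 became accessible from node N2.
run vxdg -g "$DISKGROUP" reattachsite "$SITENAME"
run vxrecover -g "$DISKGROUP"

# Re-enable the package on the restored node.
run cmmodpkg -e P1 -n N1
```

With DRY_RUN=1 the script only prints the three commands in order; setting DRY_RUN=0 executes them, which should be done only after node N1 has rejoined the cluster.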
Disaster Scenario
This is a multiple failure scenario where the failures occur in a particular sequence, in the configuration that corresponds to Figure 2, where Ethernet and FC links do not go over DWDM. The package (P1) is running on node N1. The package uses a VxVM mirror across sites that consists of two plexes: S1 (local to node N1) and S2 (local to node N2). The first failure occurs when all the FC links between the two data centers fail, causing node N1 to lose access to S2 and node N2 to lose access to S1. After some time, the second failure occurs: node N1 fails due to a power failure.

What Happens When This Disaster Occurs
The package (P1) continues to run on node N1 after the first failure, with the plex that consists of only S1. After the second failure, the package P1 fails over to node N2 and starts with S2. The data that was written to S1 after the FC link failure is lost.

Recovery Process
In this scenario, no attempts are made to repair the first failure until the second failure occurs. Typically, the second failure occurs before the first failure is repaired.
For the first failure, to initiate a recovery:
1. Restore the FC links between the data centers. As a result, S1 is accessible from node N2.
   NOTE: If the site is not reattached automatically, you must manually reattach the site and recover the disk group:
   #vxdg -g diskgroup reattachsite sitename
   #vxrecover -g diskgroup
   For more information, see the Cluster File System High Availability 6.0.1 Administrator's Guide – Linux.
For the second failure, to initiate the recovery:
1. Restore node N1. Once node N1 is restored, it joins the cluster and can access S1 and S2.
2. Enable P1 to run on node N1:
   #cmmodpkg -e P1 -n N1
NOTE: serviceguard-xdc does not support the RPO_TARGET parameter if data replication is set to VxVM mirroring.
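The two-phase recovery for this scenario can also be sketched in shell, with the phase ordering made explicit in comments. As before, this is a hedged sketch rather than the guide's procedure: xdc_dg and site1 are placeholder names, and DRY_RUN=1 keeps the script to printing the commands.

```shell
#!/bin/sh
# Sketch of the two-phase recovery for the FC-link plus node failure.
# DISKGROUP and SITENAME are placeholder values -- substitute your own.
DISKGROUP=xdc_dg
SITENAME=site1
DRY_RUN=1    # set to 0 to actually execute on the cluster

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Phase 1 -- after the FC links are restored, S1 is reachable from
# node N2 again. Reattaching the site resynchronizes the detached
# plex; data written to S1 after the link failure is lost.
run vxdg -g "$DISKGROUP" reattachsite "$SITENAME"
run vxrecover -g "$DISKGROUP"

# Phase 2 -- only after node N1 is restored and has rejoined the
# cluster, allow the package to run on it again.
run cmmodpkg -e P1 -n N1
```

The design point the comments capture is the ordering: the disk group is recovered while P1 is still running on N2, and N1 is re-enabled as a failover target only after it has rejoined the cluster.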