# pairvolchk -g dg13 -s -c
2. Get the most recent data from DC3 to DC1 by creating a new journal pair between DC3 and DC1.
a. Create a DC1-DC3 device group pair with DC3 as the PVOL side. Log in to any DC3 node
and perform the following:
# paircreate -g dg13 -f async -vl -c 15 -jp <Journal_id> -js <Journal_id>
b. Wait for the journal device group to reach the PAIR state.
# pairevtwait -g dg13 -t 300 -s pair
c. Once the latest data is at DC1, delete the DC1-DC3 device group pair using the pairsplit
command so that the pairs can be recreated from scratch.
# pairsplit -g dg13 -S
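To confirm that the pair has been dissolved before recreating it, the volume state can be
checked; a minimal sketch, assuming the same device group name and a running RAID Manager
instance:
# pairdisplay -g dg13 -fcx
Volumes reported in the SMPL (simplex) state are no longer paired.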
3. Create the device group pairs again and assign RCMD to journal volumes using the procedure
described in the sections “Creating Device Group Pairs” (page 115) and “Assigning Remote
Command Devices to Journal Volumes” (page 118) of this document.
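After the pairs are recreated, their status can be verified; a brief sketch, again assuming
device group dg13:
# pairvolchk -g dg13 -s -c
Here -s reports the volume attribute and pair status, and -c also checks the status of the
partner volume on the remote side.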
Configuring Remote Array RAID Manager Instances for RCMD (Optional)
A remote array RAID Manager instance allows the package configured in a 3DC environment to
determine the status of the device group on the remote P9000 or XP array even when the remote
hosts are down or are inaccessible due to a network link failure. The remote array RAID Manager
instance is configured on a node using RCMD. Configuring Remote RAID Manager instances in a
3DC solution is an optional but recommended step. This configuration allows the package configured
in a 3DC environment to make better decisions when the nodes in the remote site are not available.
In a 3DC solution, remote array RAID Manager instances have to be configured in each site for
the other connected sites. For detailed steps on configuring remote array RAID Manager instances,
see the corresponding section (page 56).
NOTE: RCMD configured using extended SAN is not supported for a 3DC solution with Delta
Resync. Only RCMD configured using XP External Storage functionality is supported.
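As an illustration only, a remote array RAID Manager instance is defined through an ordinary
horcm<N>.conf file whose HORCM_CMD entry points at the RCMD instead of a local command device.
In the sketch below, the instance number, service names, device file, serial number, LDEV
address, and host name are hypothetical placeholders, not values taken from this document:
# /etc/horcm10.conf -- hypothetical remote array instance on a DC1 node
HORCM_MON
#ip_address     service     poll(10ms)     timeout(10ms)
NONE            horcm10     1000           3000
HORCM_CMD
#dev_name -- RCMD for the remote array, presented through XP External Storage
/dev/rdsk/c10t0d1
HORCM_LDEV
#dev_group     dev_name     Serial#     CU:LDEV(LDEV#)
dg13           dev1         54321       01:20
HORCM_INST
#dev_group     ip_address     service
dg13           dc3node1       horcm13
The instance is then started in the usual way, for example with horcmstart.sh 10.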
Timing Considerations
In a journal device group, many journal volumes can be configured to hold a significant amount
of journal data (host-write data). As a result, the package startup time may increase significantly
when a package fails over. Delays in package startup time occur in the following situations:
1. Recovering from a broken pair affinity. On failover, the SVOL pulls all the journal data from
the PVOL site. The time needed to complete all data transfer to the SVOL depends on the
amount of outstanding journal data in the PVOL and the bandwidth of the Continuous Access
links.
2. Host I/O is faster than the Continuous Access Journal data replication. The outstanding data
not yet replicated to the SVOL accumulates in the journal volumes. Upon package failover
to the SVOL site, the SVOL pulls all the journal data from the PVOL site. The time to complete
the data transfer to the SVOL depends on the bandwidth of the Continuous Access links and
the amount of outstanding data in the PVOL journal volume.
3. Failback. When systems recover from previous failures, a package can be failed back, within
the Metrocluster data centers, by manually issuing a Serviceguard command. When a package
failback is triggered, the software must ensure application data integrity. It executes
storage preparation actions on the P9000 or XP array, if necessary, prior to package
startup.
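To estimate the potential failover delay, the amount of outstanding journal data can be
inspected beforehand; a minimal sketch, assuming device group dg13 and a running RAID Manager
instance:
# pairdisplay -g dg13 -v jnl
In the output, the U(%) column shows journal volume usage, and Q-CNT indicates the outstanding
data still to be replicated across the Continuous Access links.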