QLogic Online Data Migration Using the iSR6200 in Cluster Configurations (ISR651405-00 A, December 2011)

2.4 Method 4—Online Migration with Cluster Failover
Certain MSCS configurations that use quorum disks require a cluster failover to insert the router into the data path of the cluster nodes. For these configurations, QLogic recommends that you insert the router as follows:
1. Replace the direct paths with router paths, and then reboot the secondary node so that it uses only the router paths, as shown in the following steps:
a. Shut down the secondary node.
b. Remove all direct paths presented to the host by zoning out the array controller ports from the host
ports. Ensure that you remove the paths from both fabrics.
c. Insert the router paths by zoning the router-presented target WWPNs with the host ports. Ensure that you zone the target presentations from both fabrics (a sketch of this zoning change follows the procedure).
d. Boot the secondary node.
e. Using an adapter management utility such as QLogic’s unified adapter management tool, QConvergeConsole (GUI or CLI), ensure that the secondary host sees only router paths and that no direct paths are available (see the path-verification sketch that follows this procedure).
f. Verify that the host correctly sees the disks.
2. Perform failover of the cluster to the secondary node as follows:
a. Shut down the primary node.
b. Verify that the cluster successfully fails over to the secondary node, and that all cluster applications
are still up and running.
c. Remove all direct paths presented to the primary host by zoning out the array controller ports from
the host ports. Ensure that you remove the paths from both fabrics.
d. Insert the router paths by zoning the router-presented target WWPNs with the host ports. Ensure
that you zone the target presentations from both fabrics.
e. Boot the primary node.
f. Using an adapter management utility such as QLogic’s unified adapter management tool, QConvergeConsole (GUI or CLI), ensure that the primary host sees only router paths and that no direct paths are available.
g. Verify that the host correctly sees the disks.
3. (Optional) Perform failback of the cluster application to the primary node.
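The zoning changes in steps 1.b–1.c and 2.c–2.d are switch-specific, so the following is only a minimal Python sketch of the change itself, not a switch command sequence: for each fabric, it removes the array controller port WWPNs from each host zone and adds the router-presented target WWPNs in their place. All WWPNs, zone names, and fabric labels are hypothetical placeholders; apply the resulting member lists with whatever zoning interface your switches provide.

```python
# Hypothetical sketch of the zoning change in steps 1.b-1.c and 2.c-2.d.
# All WWPNs, zone names, and fabric labels are placeholders; apply the
# resulting member lists with your switch vendor's zoning interface.

# Router-presented target WWPNs (placeholders), one set per fabric.
ROUTER_TARGET_WWPNS = {
    "fabric_a": {"21:00:00:c0:dd:00:00:01"},
    "fabric_b": {"21:00:00:c0:dd:00:00:02"},
}

# Array controller port WWPNs (placeholders) to zone out, per fabric.
ARRAY_CONTROLLER_WWPNS = {
    "fabric_a": {"50:00:00:e0:11:00:00:01"},
    "fabric_b": {"50:00:00:e0:11:00:00:02"},
}

# Current host zones (placeholders): zone name -> set of member WWPNs.
zones = {
    "fabric_a": {"node1_hba0_zone": {"10:00:00:90:fa:00:00:01",
                                     "50:00:00:e0:11:00:00:01"}},
    "fabric_b": {"node1_hba1_zone": {"10:00:00:90:fa:00:00:02",
                                     "50:00:00:e0:11:00:00:02"}},
}

def reworked_zones(zones, remove, add):
    """Return new zone member sets with the direct (array) paths removed
    and the router-presented target paths added, fabric by fabric."""
    result = {}
    for fabric, fabric_zones in zones.items():
        result[fabric] = {
            name: (members - remove[fabric]) | add[fabric]
            for name, members in fabric_zones.items()
        }
    return result

if __name__ == "__main__":
    updated = reworked_zones(zones, ARRAY_CONTROLLER_WWPNS, ROUTER_TARGET_WWPNS)
    for fabric, fabric_zones in updated.items():
        for name, members in fabric_zones.items():
            print(f"{fabric}/{name}: {sorted(members)}")
```

The per-fabric structure mirrors the procedure’s requirement that both fabrics be rezoned; each fabric is changed independently so a mistake on one fabric does not affect the other.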
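The checks in steps 1.e and 2.f amount to confirming that every target WWPN the node reports belongs to the router and that no array controller WWPN remains visible. The sketch below performs that comparison as a simple set check, assuming the observed WWPNs are collected from the adapter management utility on the node being verified; all WWPN values are hypothetical.

```python
# Hypothetical verification helper for steps 1.e and 2.f: confirm that the
# node sees only router-presented target WWPNs and no direct array paths.
# WWPN values are placeholders; the observed list would be gathered from
# the adapter management utility on the node being checked.

ROUTER_TARGET_WWPNS = {
    "21:00:00:c0:dd:00:00:01",   # router target presentation, fabric A
    "21:00:00:c0:dd:00:00:02",   # router target presentation, fabric B
}

ARRAY_CONTROLLER_WWPNS = {
    "50:00:00:e0:11:00:00:01",   # array controller port, fabric A
    "50:00:00:e0:11:00:00:02",   # array controller port, fabric B
}

def check_paths(observed_wwpns):
    """Return (ok, problems): problems lists any direct array paths still
    visible, unexpected targets, or expected router paths not seen."""
    observed = {w.lower() for w in observed_wwpns}
    router = {w.lower() for w in ROUTER_TARGET_WWPNS}
    array = {w.lower() for w in ARRAY_CONTROLLER_WWPNS}
    problems = []
    direct = observed & array
    if direct:
        problems.append(f"direct array paths still visible: {sorted(direct)}")
    unexpected = observed - router - array
    if unexpected:
        problems.append(f"unexpected target WWPNs: {sorted(unexpected)}")
    missing = router - observed
    if missing:
        problems.append(f"expected router paths not seen: {sorted(missing)}")
    return (not problems, problems)

if __name__ == "__main__":
    # Example: the node is missing the fabric B router path and still
    # sees a direct array path on fabric B, so the check fails.
    ok, problems = check_paths([
        "21:00:00:c0:dd:00:00:01",
        "50:00:00:e0:11:00:00:02",
    ])
    print("PASS" if ok else "FAIL")
    for p in problems:
        print(" -", p)
```

Run the same check on the secondary node after step 1 and on the primary node after step 2; both must pass before proceeding to failback.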