CLI Guide

When the inter-cluster link is restored, the clusters learn that I/O has proceeded independently. I/O continues at both clusters until the administrator picks a winning cluster whose data image will be used as the source to synchronize the data images.

Use this command to pick the winning cluster. For the distributed volumes in the consistency group:

- I/O at the losing cluster is suspended (because a data change is impending).
- The administrator stops applications running at the losing cluster.
- Any dirty cache data at the losing cluster is discarded.
- The legs of distributed volumes rebuild, using the legs at the winning cluster as the rebuild source.

When the applications at the losing cluster are shut down, use the consistency-group resume-after-data-loss-failure command to allow the system to service I/O at that cluster again.
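The sequence above can be sketched as a small model. This is purely illustrative, not VPLEX internals: the cluster names, fields, and function names below are invented for the sketch, and only the order of steps (suspend I/O at losers, discard dirty cache, rebuild from the winner, then resume) follows the description.

```python
# Illustrative model of the conflict-resolution sequence; NOT VPLEX code.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    name: str
    io_suspended: bool = False
    dirty_cache: list = field(default_factory=list)

def resolve_conflicting_detach(winner, losers):
    """Suspend I/O at each losing cluster, discard its dirty cache,
    and rebuild its legs using the winner as the source."""
    for loser in losers:
        loser.io_suspended = True   # I/O at the losing cluster is suspended
        loser.dirty_cache.clear()   # dirty cache data is discarded
        print(f"rebuilding {loser.name} legs from {winner.name}")

def resume_at_loser(loser):
    """Once applications at the loser are shut down, service I/O again."""
    loser.io_suspended = False

c1 = Cluster("cluster-1")
c2 = Cluster("cluster-2", dirty_cache=["stale-write"])
resolve_conflicting_detach(winner=c1, losers=[c2])
resume_at_loser(c2)
```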
Example
Select cluster-1 as the winning cluster for consistency group TestCG from the TestCG context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> resolve-conflicting-detach
This will cause I/O to suspend at clusters in conflict with cluster cluster-1, allowing
you to stop applications at those clusters. Continue? (Yes/No) yes
Select cluster-1 as the winning cluster for consistency group TestCG from the root context:
VPlexcli:/> consistency-group resolve-conflicting-detach --cluster cluster-1 --consistency-group /clusters/cluster-1/consistency-groups/TestCG
This will cause I/O to suspend at clusters in conflict with cluster cluster-1, allowing
you to stop applications at those clusters. Continue? (Yes/No) Yes
In the following example, I/O has resumed at both clusters during an inter-cluster link outage. When the inter-cluster link is restored, the two clusters come back into contact and learn that they have each detached the other and carried on I/O.

- The ls command shows the operational-status as ok, requires-resolve-conflicting-detach at both clusters.
- The resolve-conflicting-detach command selects cluster-1 as the winner.
- Cluster-2 has its view of the data discarded, and I/O is suspended on cluster-2.
- The ls command displays the change in operational status:
  - At cluster-1, I/O continues, and the status is ok.
  - At cluster-2, the view of data has changed, so I/O is suspended pending the consistency-group resume-at-loser command.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
-------------------- -----------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode synchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: ok, details:: [requires-resolve-conflicting-detach] }),
                    (cluster-2,{ summary:: ok, details:: [requires-resolve-conflicting-detach] })]
passive-clusters []
read-only false
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
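When scripting against output like the ls listing above, the operational-status value can be broken into per-cluster fields. The following is a parsing sketch only (the regex and variable names are my own, and it assumes the status string keeps exactly the format shown above):

```python
# Sketch: parse a per-cluster operational-status string of the form shown
# in the ls output above. Assumes the "(name,{ summary:: ..., details:: [...] })"
# layout is stable; this is not a VPLEX-provided API.
import re

status = ("[(cluster-1,{ summary:: ok, details:: [requires-resolve-conflicting-detach] }), "
          "(cluster-2,{ summary:: ok, details:: [requires-resolve-conflicting-detach] })]")

pattern = re.compile(r"\((?P<cluster>[\w-]+),\{ summary:: (?P<summary>[\w-]+), "
                     r"details:: \[(?P<details>[^\]]*)\] \}\)")

parsed = {m["cluster"]: (m["summary"],
                         m["details"].split(", ") if m["details"] else [])
          for m in pattern.finditer(status)}
```

A script could then poll until the details list no longer contains requires-resolve-conflicting-detach before proceeding.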
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> resolve-conflicting-detach -c
cluster-1
This will cause I/O to suspend at clusters in conflict with cluster cluster-1, allowing
you to stop applications at those clusters. Continue? (Yes/No) Yes
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes: