Administrator Guide
Table Of Contents
- Dell EMC Storage Systems Administrator Guide for the metro node appliance
- Contents
- Preface
- CLI Workspace and User Accounts
- Meta Volumes
- System Management
- Thin support in metro node
- Provisioning Storage
- Volume expansion
- Data migration
- About data migrations
- Migrating thin-capable storage
- About rebuilds
- One-time data migrations
- Batch migrations
- Prerequisites
- Creating a batch migration plan
- Checking a batch migration plan
- Modifying a batch migration file
- Starting a batch migration
- Pausing/resuming a batch migration (optional)
- Canceling a batch migration (optional)
- Monitoring a batch migration’s progress
- Viewing a batch migration’s status
- Committing a batch migration
- Cleaning a batch migration
- Removing batch migration records
- Configure the WAN Network
- Cluster Witness
- Consistency Groups
- Performance and Monitoring
- Metro node with active-passive storage arrays
Table 12. Consistency group field descriptions (continued)
Property Description
○ There is no detach-rule
○ If the detach-rule is no-automatic-winner, or
○ If the detach-rule cannot fire because its conditions are not met.
■ unhealthy-devices - I/O has stopped in this consistency group
because one or more volumes are unhealthy and cannot perform I/O.
■ will-rollback-on-link-down - If there were a link-down now,
the winning cluster would have to roll back the view of data in order to
resume I/O.
virtual-volumes
List of virtual volumes that are members of the consistency group.
Operating a consistency group
In the event of a cluster partition, the best practice is to allow I/O to continue at only one cluster. Allowing I/O to continue at
both clusters results in a condition that is known as a conflicting detach. Resolving the conflicting detach requires a
complete resynchronization of the losing cluster from the winning cluster. All writes at the losing cluster are lost.
About this task
When I/O continues at both clusters:
● The data images at the clusters diverge.
● Legs of distributed volumes are logically separate.
When the inter-cluster link is restored, the clusters learn that I/O has proceeded independently. I/O continues at both clusters
until you pick a winning cluster whose data image will be used as the source to synchronize the data images.
In the following example, I/O resumed at both clusters during an inter-cluster link outage. When the inter-cluster link is restored,
the two clusters come back into contact and learn that they have each detached the other and carried on I/O.
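The effect of resolving a conflicting detach can be sketched with a minimal model. This is an illustration only, not the product's implementation: the dictionaries, block names, and the `resolve_conflicting_detach` function are invented for the example. It shows the key behavior described above: the losing cluster's writes made during the outage are discarded, and its data image becomes a copy of the winner's.

```python
def resolve_conflicting_detach(images: dict, winner: str) -> dict:
    """Model of conflicting-detach resolution: every cluster's data image
    is replaced by a copy of the winning cluster's image."""
    source = images[winner]
    return {cluster: dict(source) for cluster in images}

# Both clusters carried on I/O during the link outage, so their images diverge.
images = {
    "cluster-1": {"block-0": "A2", "block-1": "B"},   # writes landed at cluster-1
    "cluster-2": {"block-0": "A", "block-1": "B2"},   # writes landed at cluster-2
}

# Picking cluster-1 as the winner discards cluster-2's post-outage writes;
# cluster-2 is then synchronized from cluster-1's image.
resolved = resolve_conflicting_detach(images, winner="cluster-1")
```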
Steps
1. Use the ls command to display the consistency group’s operational status at both clusters.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
-------------------- -----------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode synchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: ok, details:: [requires-resolve-conflicting-detach] }),
(cluster-2,{ summary:: ok, details:: [requires-resolve-conflicting-detach] })]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
2. Use the resolve-conflicting-detach command to select cluster-1 as the winner.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> resolve-conflicting-detach -c cluster-1
This will cause I/O to suspend at clusters in conflict with cluster cluster-1,
allowing you to stop applications at those clusters. Continue? (Yes/No) Yes
Cluster-2’s modifications to data on volumes in the consistency group since the link outage started are discarded.
Cluster-2's data image is then synchronized with the image at cluster-1.