Administrator Guide
Table Of Contents
- Dell EMC Storage Systems Administrator Guide for the metro node appliance
- Contents
- Preface
- CLI Workspace and User Accounts
- Meta Volumes
- System Management
- Thin support in metro node
- Provisioning Storage
- Volume expansion
- Data migration
- About data migrations
- Migrating thin-capable storage
- About rebuilds
- One-time data migrations
- Batch migrations
- Prerequisites
- Creating a batch migration plan
- Checking a batch migration plan
- Modifying a batch migration file
- Starting a batch migration
- Pausing/resuming a batch migration (optional)
- Canceling a batch migration (optional)
- Monitoring a batch migration’s progress
- Viewing a batch migration’s status
- Committing a batch migration
- Cleaning a batch migration
- Removing batch migration records
- Configure the WAN Network
- Cluster Witness
- Consistency Groups
- Performance and Monitoring
- Metro node with active-passive storage arrays
I/O is suspended at cluster-2 if the auto-resume policy is set to false.
3. Use the ls command to verify the change in operation status:
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name                  Value
--------------------  ----------------------------------------------------------
active-clusters       [cluster-1, cluster-2]
cache-mode            synchronous
detach-rule           no-automatic-winner
operational-status    [(cluster-1,{ summary:: ok, details:: [] }),
                      (cluster-2,{ summary:: suspended, details::
                      [requires-resume-at-loser] })]
passive-clusters      []
recoverpoint-enabled  false
storage-at-clusters   [cluster-1, cluster-2]
virtual-volumes       [dd1_vol, dd2_vol]
visibility            [cluster-1, cluster-2]

Contexts:
advanced  recoverpoint
● At cluster-1, I/O continues, and the status is ok.
● At cluster-2, the view of data has changed, so I/O is suspended.
4. Use the consistency-group resume-at-loser command to resume I/O to the consistency group on cluster-2.
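If you script health checks around this procedure, the operational-status text from step 3 can be scanned for clusters that need resume-at-loser before the command is run. The following Python sketch is illustrative only; the function name and regular expression are assumptions, not part of the metro node CLI:

```python
import re

def clusters_requiring_resume(ls_output: str) -> list:
    """Return cluster names whose operational-status details contain
    requires-resume-at-loser, parsed from captured 'ls' output text."""
    pattern = re.compile(
        r"\((?P<cluster>[\w-]+),\{\s*summary::\s*(?P<summary>[\w-]+),"
        r"\s*details::\s*\[(?P<details>[^\]]*)\]"
    )
    needs_resume = []
    for match in pattern.finditer(ls_output):
        if "requires-resume-at-loser" in match.group("details"):
            needs_resume.append(match.group("cluster"))
    return needs_resume

status_text = """operational-status [(cluster-1,{ summary:: ok, details:: [] }),
(cluster-2,{ summary:: suspended, details:: [requires-resume-at-loser] })]"""
print(clusters_requiring_resume(status_text))  # ['cluster-2']
```

A cluster that appears in the returned list is suspended and waiting for the administrator to run the consistency-group resume-at-loser command, as described in step 4.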
Resuming I/O at the losing cluster
During an inter-cluster link outage, you can allow I/O to resume at one of the two clusters (the winning cluster).
About this task
I/O remains suspended on the losing cluster.
When the inter-cluster link is restored, the winning and losing clusters reconnect, and the losing cluster discovers that the
winning cluster has resumed I/O without it.
Unless the auto-resume policy is set to true, I/O remains suspended on the losing cluster. This prevents applications at the
losing cluster from experiencing a spontaneous change in data.
The delay gives you time to shut down those applications cleanly.
After stopping the applications, use the consistency-group resume-at-loser command to:
● Resynchronize the data image on the losing cluster with the data image on the winning cluster.
● Resume servicing I/O operations.
You can then safely restart the applications at the losing cluster.
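The ordering of this recovery sequence (stop applications, resume at the loser, restart applications) is what keeps hosts from seeing the data image change underneath them. A hypothetical automation outline in Python follows; `stop_applications`, `start_applications`, the SSH transport, and the exact argument spelling for resume-at-loser are all assumptions for illustration, so check the CLI reference for the real syntax before scripting against it:

```python
import subprocess

def stop_applications(cluster: str) -> None:
    # Site-specific stub: quiesce hosts using storage at the losing cluster.
    print(f"stopping applications at {cluster}")

def start_applications(cluster: str) -> None:
    # Site-specific stub: restart hosts once data is resynchronized.
    print(f"starting applications at {cluster}")

def run_vplexcli(command: str) -> None:
    # Hypothetical transport; how you reach the CLI is site-specific.
    subprocess.run(["ssh", "service@mgmt-server", command], check=True)

def recover_losing_cluster(group: str, losing_cluster: str) -> None:
    """Order matters: applications must be stopped before resume-at-loser
    changes the losing cluster's view of data."""
    stop_applications(losing_cluster)
    run_vplexcli(f"consistency-group resume-at-loser "
                 f"--consistency-group {group} --cluster {losing_cluster}")
    start_applications(losing_cluster)
```

The sketch encodes only the sequencing; it does not replace the verification in the steps below, where you confirm the suspended status with ls before resuming.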
To restart I/O on the losing cluster:
Steps
1. Use the ls command to display the operational status of the target consistency group.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name                  Value
--------------------  ----------------------------------------------------------
active-clusters       [cluster-1, cluster-2]
cache-mode            synchronous
detach-rule           no-automatic-winner
operational-status    [(cluster-1,{ summary:: ok, details:: [] }),
                      (cluster-2,{ summary:: suspended, details::
                      [requires-resume-at-loser] })]
passive-clusters      []
recoverpoint-enabled  false
storage-at-clusters   [cluster-1, cluster-2]