Administrator Guide
Table Of Contents
- Dell EMC Storage Systems Administrator Guide for the metro node appliance
- Contents
- Preface
- CLI Workspace and User Accounts
- Meta Volumes
- System Management
- Thin support in metro node
- Provisioning Storage
- Volume expansion
- Data migration
- About data migrations
- Migrating thin-capable storage
- About rebuilds
- One-time data migrations
- Batch migrations
- Prerequisites
- Creating a batch migration plan
- Checking a batch migration plan
- Modifying a batch migration file
- Starting a batch migration
- Pausing/resuming a batch migration (optional)
- Canceling a batch migration (optional)
- Monitoring a batch migration’s progress
- Viewing a batch migration’s status
- Committing a batch migration
- Cleaning a batch migration
- Removing batch migration records
- Configure the WAN Network
- Cluster Witness
- Consistency Groups
- Performance and Monitoring
- Metro node with active-passive storage arrays
Use the ls command in the /advanced context of a consistency group to display the advanced properties of that
consistency group.
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG/advanced> ls
Name Value
-------------------------- --------
auto-resume-at-loser true
current-queue-depth -
current-rollback-data -
default-closeout-time -
delta-size -
local-read-override true
max-possible-rollback-data -
maximum-queue-depth -
potential-winner -
write-pacing disabled
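Some of the advanced properties shown above, such as auto-resume-at-loser, are configurable. The session below is a hypothetical sketch of changing one of them with the VPlexcli set command; the property name comes from the listing above, but the exact set syntax is an assumption here and should be verified against the CLI guide for your metro node release before use.

```
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG/advanced> set auto-resume-at-loser false
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG/advanced> ls
Name                       Value
-------------------------- --------
auto-resume-at-loser       false
...
```

With auto-resume-at-loser disabled, I/O at the losing cluster stays suspended after an inter-cluster link is restored until you explicitly run the resume-at-loser command described later in this section.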
The following example displays output of the ls command in the /clusters/cluster-name/consistency-groups/
consistency-group context during an inter-cluster link outage.
● The detach-rule is no-automatic-winner, so I/O stops at both clusters. metro node remains in this state until either
the inter-cluster link is restored or you intervene using the consistency-group choose-winner command.
● Status summary is suspended, showing that I/O has stopped.
● Status details contain cluster-departure, indicating that the clusters can no longer communicate with one another.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode synchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: suspended, details:: [cluster-departure]
}),
(cluster-2,{ summary:: suspended, details:: [cluster-departure]
})]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
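The first bullet above notes that, with a no-automatic-winner detach rule, I/O stays suspended at both clusters until you manually pick a winner with the consistency-group choose-winner command. The session below is a sketch of that intervention from within the consistency-group context, assuming choose-winner accepts the same -c cluster argument used by the resume-at-loser command shown later in this section; confirm the syntax with the CLI help for your release before relying on it.

```
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> choose-winner -c cluster-1
```

Like resume-at-loser, this command can change the view of data presented to applications, so the CLI may prompt for confirmation. After the winner is chosen, I/O resumes at cluster-1; once the link is restored, the losing cluster typically reports requires-resume-at-loser, as the next example shows.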
● The ls command shows consistency group cg1 as suspended with the requires-resume-at-loser detail on cluster-2
after cluster-2 is declared the losing cluster during an inter-cluster link outage.
● The resume-at-loser command restarts I/O on cluster-2.
● The ls command displays the change in operational status:
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name Value
------------------- ----------------------------------------------------------
active-clusters [cluster-1, cluster-2]
cache-mode synchronous
detach-rule no-automatic-winner
operational-status [(cluster-1,{ summary:: ok, details:: [] }),
(cluster-2,{ summary:: suspended, details:: [requires-resume-at-loser] })]
passive-clusters []
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes [dd1_vol, dd2_vol]
visibility [cluster-1, cluster-2]
Contexts:
advanced recoverpoint
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> resume-at-loser -c cluster-2
This may change the view of data presented to applications at cluster cluster-2. You
should first stop applications at that cluster. Continue? (Yes/No) Yes
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls