Consistency Group status
The Operational status column in the Consistency Groups view shows the overall status of the consistency group at both clusters. If the status is the same at both clusters, that status is displayed in the Operational status column. If the status differs between the clusters, the less healthy (less than optimal) status is displayed. For example, if one cluster reports OK and the other reports Degraded, the column shows Degraded.
The following list defines the consistency group operational states.

OK: The consistency group is servicing I/O normally at both clusters.
Suspended: I/O is suspended on the volumes in the group at one or both clusters.
Degraded: I/O is continuing to the volumes, but there are problems at one or both clusters.
Unknown: The status is unknown, likely because management connectivity is lost at one or both clusters.
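If you also work in the CLI, the same operational status can be read from the consistency-group context. The following is a minimal sketch, assuming the VPLEX-style CLI used by metro node, a cluster named cluster-1, and a consistency group named cg_1 (all placeholder names); confirm the exact context path and attribute names in the metro node CLI guide.

    VPlexcli:/> cd /clusters/cluster-1/consistency-groups/cg_1
    VPlexcli:/clusters/cluster-1/consistency-groups/cg_1> ls

The ls output lists the group's attributes, including its operational status and any status details, which correspond to the values described on this page.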
The following list describes the status details that may appear when you click one of the operational statuses listed above.

requires-resolve-conflicting-detach: After the inter-cluster link is restored, the two clusters have discovered that they detached one another and resumed I/O independently. The clusters are continuing to service I/O on their independent versions of the data. You must issue the consistency-group resolve-conflicting-detach command in the CLI to make the view of the data consistent again at both clusters.

rebuilding-across-clusters: One or more distributed member volumes are being rebuilt.

rebuilding-within-cluster: One or more local rebuilds are in progress at this cluster.

requires-resume-after-data-loss-failure: There have been at least two concurrent failures, and data has been lost. This can happen, for example, when a director fails shortly after the inter-cluster link fails, or when two directors fail at almost the same time. You must issue the consistency-group resume-after-data-loss command in the CLI to select a winning cluster and allow I/O to resume.

requires-resume-after-rollback: A cluster has detached its peer cluster and rolled back its view of the data, but is waiting for you to issue the consistency-group resume-after-rollback command in the CLI before resuming I/O.

cluster-departure: Not all of the visible clusters are in communication.

requires-resume-at-loser: After the inter-cluster link is restored, a cluster that suspended I/O during the outage has discovered that its peer was declared the winner and resumed I/O. You must issue the consistency-group resume-at-loser command in the CLI to make the view of the data consistent with the winner cluster, and to resume I/O at the loser cluster.

unhealthy-devices: I/O has stopped in this consistency group because one or more volumes are unhealthy and cannot perform I/O.

will-rollback-on-link-down: If a link failure occurs, the winner cluster will roll back its view of the data in order to resume I/O. This status detail appears when the static detach rule configured in detach-rule makes one of the clusters in active-clusters a loser during a link failure.
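As a quick reference, the recovery commands named above are summarized in the hedged sketch below. Each command is run from the metro node CLI and applies to a specific consistency group (and, where noted, a chosen winning cluster); the required arguments and any confirmation prompts are not shown here and should be taken from the metro node CLI guide.

    consistency-group resolve-conflicting-detach    Choose one cluster's version of the data and make both clusters consistent again after a conflicting detach.
    consistency-group resume-after-data-loss        Select a winning cluster and allow I/O to resume after concurrent failures that caused data loss.
    consistency-group resume-after-rollback         Allow the detaching cluster to resume I/O after it has rolled back its view of the data.
    consistency-group resume-at-loser               Make the loser cluster's view of the data consistent with the winner and resume I/O at the loser.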