Release Notes
● If the metadata usage at a cluster exceeds 90%, metro node triggers a call home event. If metadata usage on the other cluster
also exceeds 90% within 8 hours of the first event, metro node does not trigger a second call home event. This is by design and
occurs in metro node Metro configurations.
● In Unisphere, the Provision by pools and Provision by Storage volumes wizards allow you to select only consistency groups that
have a value set for the storage-at-clusters property.
● Using the CLARiiON™ Navisphere Management Suite, if you change the active storage processor (SP) for a LUN, the
incorrect SP may be reported as active in the metro node user interface. For example, SPA may be reported as active, when
in fact SPB is active. To correct this reporting inaccuracy, start I/O. After I/O begins, the system recognizes which SP is
active and reports it correctly.
● If host I/O performance is impacted during a data migration or during a rebuild, then lower the rebuild transfer-size setting
for the devices, or reduce the number of concurrent migrations/rebuilds.
● Ensure that the host resources are sufficient to handle the number of paths provisioned for the metro node system.
● Poor QoS on the WAN-COM link in a Metro configuration could lead to undetermined behavior and, in extreme cases, data
unavailability. Follow the Best Practices to configure and monitor WAN-COM links.
● Metro node in Metro configurations does not provide native encryption over the IP WAN COM link. Customers should deploy
an external appliance to achieve data encryption over the IP WAN links between clusters.
● When a claimed storage volume becomes hardware dead, metro node automatically probes the storage volume within 20 seconds.
If the probe succeeds, metro node removes the “dead” status from the volume, returning it to a healthy state.
CAUTION: While the device is hw-dead, do not perform operations that change data on the storage volumes
underneath metro node RAID 1 (through maintenance or replacing disks within the array). If such operations
are required, first detach the storage volumes from the metro node RAID 1, perform the data changing
operations, and then re-add the storage volumes to the metro node RAID 1 as necessary to trigger a rebuild.
Failure to follow these steps changes data underneath metro node without its knowledge. Without a data
rebuild, the RAID 1 legs might be inconsistent, which may lead to data corruption on resurrection.
● By default, any user account created on the management server is locked if its password has not been changed in the last
91 days. The admin account is never locked out, but the admin user is forced to change their password on the next login.
See the “Password Policy” section of the SolVe Desktop troubleshooting section to overcome account lockouts. Password
policies are not enforced for the service user.
● Storage volumes that are used as system volumes (metro node metavolume RAID 1 mirror legs, logging volumes, and
backups for the metavolume) must be formatted/zeroed out before being used by metro node as a system volume.
● There are two types of failure handling for back-end array interactions.
○ The unambiguous failure responses, such as requests rejected by storage volume or port leaving the back-end fabric.
○ The condition where storage arrays enter fault modes such that one or more of their target ports remain on the fabric
while all SCSI commands sent to them by the initiator (metro node) time out.
Metro node isolates paths that remain on the fabric but stay unresponsive. In this case, I/O requests sent by a host
initiator to metro node virtual volumes are redirected away from the unresponsive paths to the back-end array, onto paths
that are responsive. At the time of isolation, metro node issues a call home event.
● In the export port summary, a front-end port with no-link status shows an export status of suspended.
● Read-only accounts can access only a subset of metro node CLI commands. A list of per-release restricted commands is
available in the SolVe Desktop or SolVe Online in the Administration > Configure section.
Veritas DMP settings with metro node
If a UNIX host attached to metro node is running Veritas DMP Multipathing, change the following DMP tunable parameters
on the host. These changes improve how DMP handles transient errors from the metro node array in certain failure
scenarios.
1. Set the dmp_lun_retry_timeout for the metro node array to 60 seconds using the vxdmpadm setattr enclosure
emc-vplex0 dmp_lun_retry_timeout=60 command.
2. Set the recovery option to throttle and iotimeout to 30 using the vxdmpadm setattr enclosure
emc-vplex0 recoveryoption=throttle iotimeout=30 command.
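The two tuning steps above can be combined into a short shell sequence. This is a sketch, not part of the product documentation: the enclosure name emc-vplex0 is an example from the steps above, and the verification commands assume the standard vxdmpadm getattr syntax; confirm the actual enclosure name on your host with vxdmpadm listenclosure.

```shell
#!/bin/sh
# Sketch: apply the DMP tunables recommended above for a metro node enclosure.
# ENCLOSURE is an example name; confirm yours with: vxdmpadm listenclosure
ENCLOSURE=emc-vplex0

# Step 1: retry I/O to the enclosure for up to 60 seconds before failing a path,
# which smooths over transient metro node errors.
vxdmpadm setattr enclosure "$ENCLOSURE" dmp_lun_retry_timeout=60

# Step 2: throttle error recovery with a 30-second I/O timeout.
vxdmpadm setattr enclosure "$ENCLOSURE" recoveryoption=throttle iotimeout=30

# Verify the settings took effect (assumed getattr usage).
vxdmpadm getattr enclosure "$ENCLOSURE" dmp_lun_retry_timeout
vxdmpadm getattr enclosure "$ENCLOSURE" recoveryoption
```

Because these commands require the Veritas stack and root privileges, run them during a maintenance window and verify the output before returning the host to production.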