Changing the view
To view the Front-end Queue Depth of a single director in your metro node system, select the director name from the Director
drop-down.
NOTE: The chart displays data only for the directors in the cluster to which you are currently connected. To simultaneously
view Front-end Queue Depth for another cluster, open a second browser session and connect to the second cluster.
Viewing the Front-end Queue Depth chart
1. From the GUI main menu, click Performance.
2. In the Performance Dashboard, select the tab in which you want to display the Front-end Queue Depth chart (or create a
custom tab).
3. Click +Add Content.
4. Click the Front-end Queue Depth chart icon.
Front-end Bandwidth chart
The Front-end Bandwidth chart on the Performance Dashboard displays the amount of data read and written per
second over time through the front-end ports of the directors on your metro node system. Bandwidth (measured in KB/s or
MB/s) is generally associated with large block I/O (I/O requests of 64 KB or greater).
NOTE: The chart displays data only for the cluster to which you are currently connected. To simultaneously view front-end
bandwidth charts for another cluster, open a second browser session and connect to the second cluster.
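Bandwidth and IOPS are related through the average I/O size, which is why large-block workloads dominate this chart. The short Python sketch below illustrates that general arithmetic only; it is not a metro node tool, and the workload values are hypothetical:

    # Relationship between IOPS, average I/O size, and bandwidth.
    # Hypothetical values for illustration only.
    def bandwidth_mb_per_s(iops: float, io_size_kb: float) -> float:
        # Throughput in MB/s for a given IOPS rate and average I/O size.
        return iops * io_size_kb / 1024.0

    # Large-block workload: 1,000 writes/s at 64 KB each is about 62.5 MB/s.
    print(bandwidth_mb_per_s(1000, 64))
    # Small-block workload: 10,000 writes/s at 4 KB each is only about 39 MB/s.
    print(bandwidth_mb_per_s(10000, 4))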
Guidelines
When metro node performance issues arise, compare front-end performance to baseline numbers (native host-to-storage-array
performance); the underlying problem could be poor storage-array performance. Before you add metro node to your
environment, record your application throughput so that you have a baseline for comparison.
Front-end performance in metro node depends heavily on the available back-end storage-array performance and, in Metro
configurations, on the WAN performance for distributed devices.
Any running distributed rebuilds or data migrations might negatively affect available host throughput.
Because metro node Local and Metro implement write-through caching, a small amount of write latency overhead
(typically less than 1 ms) is expected with metro node. This latency may affect applications that serialize their I/O and do not
take advantage of multiple outstanding operations. These types of applications may see a throughput and IOPS drop with
metro node in the data path.
In a metro node Metro environment, writes incur extra WAN round-trip time because they must be successfully written
to each cluster's storage before the host is acknowledged. This extra latency could reduce the throughput and IOPS of
serialized applications, as the following example illustrates.
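To see how a fixed latency overhead affects a serialized application, consider a workload that keeps only one write outstanding at a time: its maximum write rate is bounded by 1 / latency. The Python sketch below uses hypothetical latency figures, not measured metro node values:

    # A fully serialized workload (one outstanding write at a time) can complete
    # at most 1 / latency writes per second.
    # Hypothetical latencies for illustration only.
    native_latency_s = 0.0005    # 0.5 ms writing directly to the array
    added_overhead_s = 0.001     # ~1 ms of write-through / WAN round-trip overhead

    native_iops = 1.0 / native_latency_s                              # ~2000 writes/s
    with_overhead_iops = 1.0 / (native_latency_s + added_overhead_s)  # ~667 writes/s
    print(native_iops, with_overhead_iops)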
Corrective actions
Check CPU busy: If the CPU is overly busy, metro node is limited in the amount of bandwidth it can provide.
Check back-end latency: If the back-end latency is large on average, or shows large spikes, the cause could be a poorly
performing back-end fabric or an unhealthy, unoptimized, or overloaded storage array. Perform a back-end fabric analysis
and a performance analysis of all storage arrays in question.
Check front-end aborts: The presence of aborts indicates that metro node is taking too long to respond to the host. Aborts
might also indicate problems with the front-end fabric or slow SCSI reservations.
Check back-end errors: If the metro node back end must retry an operation because it was aborted, the retry adds to the
delay in completing the operation to the host.
Check front-end queue depth: If this counter is large, it may explain larger-than-normal front-end latency (see the sketch
after this list for the relationship between queue depth, latency, and throughput). Follow the corrective actions for the
front-end operations count.
Check metro node write delta time: If the time spent within metro node is higher than usual, investigate why. See the
corrective actions for write delta time.
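As noted in the front-end queue depth item above, queue depth, latency, and throughput are related by Little's Law: the average number of outstanding I/Os equals IOPS multiplied by average latency. The Python sketch below applies that general relationship with hypothetical numbers; it is not a metro node utility:

    # Little's Law: average outstanding I/Os (queue depth) = IOPS * average latency.
    # A large queue depth at a steady IOPS rate implies higher per-I/O latency.
    # Hypothetical values for illustration only.
    iops = 20000.0        # observed front-end operations per second
    queue_depth = 60.0    # observed average front-end queue depth

    avg_latency_ms = queue_depth / iops * 1000.0   # = 3 ms per I/O
    print(avg_latency_ms)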