Back-end Bandwidth Chart
The Back-End Bandwidth chart on the Performance Dashboard shows the amount of data that directors read from and write to back-end storage per second, over time. Bandwidth (measured in KB/s or MB/s) is generally associated with large-block I/O (request sizes of 64 KB or greater).
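Bandwidth is simply the request rate multiplied by the request size, which is why large-block workloads dominate this chart. A minimal Python illustration, using made-up numbers rather than measured values:

    def bandwidth_mb_per_s(iops: float, request_size_kb: float) -> float:
        """Back-end bandwidth in MB/s for a given request rate and I/O size."""
        return iops * request_size_kb / 1024.0

    # The same request rate moves far more data with large-block I/O:
    print(bandwidth_mb_per_s(iops=2000, request_size_kb=4))   # 7.8125 MB/s (small block)
    print(bandwidth_mb_per_s(iops=2000, request_size_kb=64))  # 125.0 MB/s (large block)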
Each array type, model, and set of underlying disk drives is different. Ensure that you know your array's capabilities, for example, its response time, IOPS, and bandwidth.
NOTE: The chart displays data only for the cluster to which you are currently connected. To view the back-end bandwidth chart for another cluster at the same time, open a second browser session and connect to that cluster.
Guidelines
- Having a baseline of array performance is important so that you know what your specific array and its setup are capable of.
- When adding metro node to an existing environment, know what your host-to-storage-array (native) performance should be. Although metro node tends to boost read-intensive workloads because of its additional caching, metro node can only perform as well as this native performance (see the sketch after this list).
- Unexpectedly low bandwidth could indicate array saturation or back-end fabric issues; the underlying cause could also be poor storage array performance.
- Keep in mind that many things affect storage array bandwidth performance, such as:
  - I/O request size
  - Read vs. write requests
  - Underlying disk type (SSD, FC, SATA)
  - RAID type (0, 1, 0+1, 5, 6)
  - FAST VP (Fully Automated Storage Tiering) across three distinct tiers: Flash, enterprise hard disk drives (10K and 15K rpm), and high-capacity SATA HDDs
  - Thin or thick pools
  - Storage array cache settings and size
  - Running snapshots/clones
- For arrays with write-back caching, writes will generally be faster than reads. If the read data is cached by the array, then the read latency will be comparable to that of the writes.
- Back-end bandwidth can be negatively affected while running snapshots/clones.
- For Symmetrix arrays, performance can be impacted by Write Pending (WP) limits: when the array lacks available free cache slots to accept incoming writes, it is forced to proactively flush pages to disk. Running SRDF sessions might also affect performance.
- For CLARiiON arrays, performance can be impacted by forced flushes due to a lack of available cache to buffer writes.
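The following Python sketch shows one way to apply the baseline guideline above: compare back-end bandwidth readings against the native performance you measured before adding metro node, and flag significant drops. The baseline value, alert threshold, and samples are assumptions for illustration only; metro node does not ship such a script.

    # Hypothetical sketch: flag bandwidth samples that fall well below the
    # native (host-to-array) baseline measured before metro node was added.
    NATIVE_BASELINE_MB_S = 400.0   # assumed baseline; measure your own
    ALERT_RATIO = 0.5              # assumed threshold: flag < 50% of baseline

    samples_mb_s = [410.0, 395.0, 180.0, 60.0]  # illustrative chart readings

    for i, bw in enumerate(samples_mb_s):
        if bw < NATIVE_BASELINE_MB_S * ALERT_RATIO:
            print(f"sample {i}: {bw} MB/s is below {ALERT_RATIO:.0%} of the native "
                  "baseline; check for array saturation or back-end fabric issues")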
Corrective actions
- Check back-end errors: these indicate that metro node had to abort and retry operations, which could point to a back-end fabric and/or storage array health issue (see the sketch after this list).
- Examine the back-end fabric for its overall health state, recent changes, reported errors, and properly negotiated speeds.
- Examine the back-end storage array for its general health state, and verify that performance best practices for disk/RAID layout are followed where needed.
- Use the performance monitoring tools available from the storage array vendor to confirm the array's performance. Additional metrics that are available to the array but not visible to metro node can help confirm the problem.
- Be sure to run the recommended storage array firmware version, and check for newer software releases and known bug fixes.
- Consult your storage array vendor's performance specialists if the problem persists.
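If you export director statistics to a file for review, a short script can surface back-end aborts and retries quickly. The CSV layout and column names below are assumptions for illustration; adjust them to match however you actually collect these counters.

    import csv

    # Hypothetical export format: timestamp, director, be_aborts, be_retries
    with open("director_stats.csv", newline="") as f:
        for row in csv.DictReader(f):
            aborts = int(row["be_aborts"])
            retries = int(row["be_retries"])
            if aborts or retries:
                print(f'{row["timestamp"]} {row["director"]}: '
                      f"{aborts} aborts, {retries} retries; "
                      "check back-end fabric and array health")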
Changing the view
Use the following selection criteria to filter the data:

- Director: Allows you to select all directors or a specific director in the cluster.
- Read and Write check boxes: Allow you to filter bandwidth for reads, writes, or both.