High Availability Storage Options and Their Impact on Performance
Storage Performance Features
One of the biggest features that all the vendors tout as their answer to performance improvement is the
use of large amounts of memory in the array as a cache. Some even give the impression that this cache
will improve performance over JBOD and that arrays are faster because of it. That is not always so.
The right side of the picture below shows an arrow that indicates the relative time it takes to do either a
read or a write to disk. That time is the sum of the events that make up I/O response time, such as bus
transfer speed and disk seek, settle, and rotational latency times. The response time is usually measured in
milliseconds (I will not quote a figure, because I'd have to change it every time a new disk hits the
market). Just use the placement of the JBOD Read/Write as a relative reference point against the storage
array's Read/Write response times.
When we factor in the use of a storage array cache, we see that when a Read is issued to the array and the
data is already in the cache (a Cache Hit), the array can respond faster than on a Cache Miss, which must
wait until the data is found on the disk (seek, settle and latency) and then transferred to the cache. As you
can see in the picture, a Cache Hit is faster than a JBOD Read/Write, while a Cache Miss is slower than a
JBOD Read/Write and much slower still than an array Cache Hit.
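The relationship above can be sketched as a simple weighted-average model. The latency figures below are hypothetical placeholders for illustration only, not measurements from any particular array or disk; the point is that the array only beats JBOD once the hit ratio is high enough.

```python
# Illustrative model: average array read response time vs. cache hit ratio.
# All times are made-up milliseconds chosen only to show the relationships
# described in the text (hit < JBOD < miss).

T_CACHE_HIT = 0.5    # ms - data served straight from array cache
T_CACHE_MISS = 15.0  # ms - seek + settle + latency, then transfer to cache
T_JBOD = 10.0        # ms - direct read from JBOD, no array in the path

def avg_read_time(hit_ratio: float) -> float:
    """Average array read response time for a given cache hit ratio (0..1)."""
    return hit_ratio * T_CACHE_HIT + (1.0 - hit_ratio) * T_CACHE_MISS

# With every read a hit, the array is far faster than JBOD; with every
# read a miss, it is slower. Somewhere in between they cross over.
all_hits = avg_read_time(1.0)    # equals T_CACHE_HIT
all_misses = avg_read_time(0.0)  # equals T_CACHE_MISS
```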
Now this same picture will look a little different on the left (array) side if we look at what happens on a
Write Access. In this case, depending on available array cache, the Write Access can complete faster than
on the right side (JBOD), because the array can store the Write data in its cache, inform the server that the
Write has completed, and then, unbeknownst to the server, transfer the data from the array cache to disk
later. This only works if there is room in the array cache. If the array is receiving more data than it can
transfer to disk, I/Os will queue up and wait for room in the cache, which slows I/O completion down just
like a cache miss.
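The write-back behavior described above can be modeled as a toy cache with a fixed capacity. The class and its method names are hypothetical, purely for illustration: a write completes at cache speed while there is room, and falls back to disk speed once the cache is full and must destage first.

```python
from collections import deque

class WriteBackCache:
    """Toy model of an array's write cache. Writes are acknowledged to the
    server immediately while there is room; once the cache is full, a write
    must wait for a destage to disk, so it completes at disk speed instead."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pending = deque()  # blocks held in cache, awaiting destage

    def write(self, block) -> str:
        if len(self.pending) < self.capacity:
            self.pending.append(block)  # ack the server right away
            return "fast"               # completes at cache speed
        self._destage_one()             # no room: wait for a destage first
        self.pending.append(block)
        return "slow"                   # completes at disk speed

    def _destage_one(self):
        if self.pending:
            self.pending.popleft()      # background transfer of data to disk
```

Running a burst of writes through a small cache shows the effect: the first writes complete "fast", and once the cache fills, each further write pays the "slow" destage cost, just like a cache miss.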
Early enterprise array technology (1995-2000) had cache-hit response times only marginally faster than
JBOD, while cache misses were many times slower than JBOD. Under this scenario, the only way to
achieve decent performance numbers was for MPE's own cache to be large enough to reduce the need to
do I/O to the array at all, or for the array to have either a better algorithm to anticipate MPE's needs
(highly unlikely) or a cache large enough to satisfy MPE's I/O with a high proportion of cache hits.
[Figure: relative I/O response times on a Slower-to-Faster time axis. The Array Technology side shows Read Access (Cache Hit, Cache Miss) and Write Access; the JBOD Technology side shows Read/Write Access as the reference point.]