HP VMA SAN Gateway for VMA-series Memory Arrays Release Notes - May 2012 - Software OE version G5.1.0
ext2 filesystem is not supported
The ext2 filesystem on Linux distributions has known outstanding data integrity issues. It is
highly recommended that you use an alternative filesystem such as ext3 or ext4 instead.
Tune applications to issue I/Os that are multiples of 4KB
Most high-level applications can be configured to issue specific I/O block sizes for increased
performance. Ensure that the configured I/O block size and, where applicable, the beginning block
addresses for I/Os are 4KB aligned.
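The alignment check described above can be sketched as a small helper; this is illustrative only (the function names are not part of any HP tool), with the 4KB figure taken from the guidance in this section:

```python
# Sketch: check and enforce 4 KB alignment for I/O offsets and sizes,
# per the tuning guidance above. Names here are illustrative only.

ALIGNMENT = 4096  # 4 KB

def is_aligned(value: int, alignment: int = ALIGNMENT) -> bool:
    """True if value (an offset or length in bytes) is a multiple of alignment."""
    return value % alignment == 0

def round_up(value: int, alignment: int = ALIGNMENT) -> int:
    """Smallest multiple of alignment that is >= value."""
    return ((value + alignment - 1) // alignment) * alignment
```

For example, a 10000-byte request at offset 8192 has an aligned offset but a misaligned length; rounding the length up gives 12288 bytes, the next 4KB multiple.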
Create multiple LUNs for parallelization of I/O for applications
Creating multiple LUNs on the connected VMA arrays might allow applications to launch multiple
threads for greater I/O parallelism and performance. It is currently recommended to configure no
more than 32 LUNs per connected VMA array with this G5.1.0 release of the VMA SAN Gateway
software OE. Additionally, for optimal I/O flow, it is recommended to allow no more than 16 total
lunpaths per LUN, that is, the sum of all lunpaths to a LUN across all connecting servers.
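One way an application can exploit multiple LUNs is to dedicate a worker thread per LUN. A minimal sketch, assuming POSIX `pwrite` support and using placeholder paths (on a real host these would be the block devices, e.g. /dev/sdb, /dev/sdc, backed by separate gateway LUNs):

```python
# Sketch: fan I/O out to several LUNs concurrently with a thread pool.
# The paths are placeholders for per-LUN block devices.
import os
from concurrent.futures import ThreadPoolExecutor

def write_chunk(path: str, data: bytes, offset: int = 0) -> int:
    """Write one buffer to one LUN path; returns bytes written."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT)
    try:
        return os.pwrite(fd, data, offset)
    finally:
        os.close(fd)

def parallel_write(paths, data):
    """Issue the same buffer to every LUN path concurrently, one thread each."""
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        return list(pool.map(lambda p: write_chunk(p, data), paths))
```

In a real workload each thread would carry its own data stream; the point is simply that independent LUNs give the OS independent queues to keep busy.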
LUN 0 of the VMA SAN Gateway
LUN 0 is considered a reserved device, and various issues can occur if LUN 0 is not accessible by all
connected host servers. The VMA SAN Gateway therefore prompts for configured LUNs to begin with
LUN ID #1. If a specific operating system requires a LUN with ID #0, you can create one by explicitly
entering ‘0’ for the LUN ID when creating the LUN. However, any configuration or accessibility
change to an explicitly created LUN ID #0 might disrupt service for other connected host servers,
including their ability to discover and access LUNs and stored data.
Currently only 512B sectors are supported
While the VMA SAN Gateway allows creation of LUNs with either a 512B or 4KB sector size, only
the 512B sector size has been fully tested and is supported with the gateway. LUNs using 4KB sectors
have not been fully validated and are not supported by HP at this time.
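On a Linux host, a device's reported logical sector size can be confirmed before it is put into service. A minimal sketch using the standard sysfs attribute `queue/logical_block_size`; the `sysfs_root` parameter is an assumption added only so the check can be exercised without a real device:

```python
# Sketch: confirm a Linux block device reports the supported 512 B
# logical sector size via sysfs. sysfs_root is overridable for testing.

def logical_sector_size(device: str, sysfs_root: str = "/sys/block") -> int:
    """Read the logical block size (bytes) that the device reports."""
    path = f"{sysfs_root}/{device}/queue/logical_block_size"
    with open(path) as f:
        return int(f.read().strip())

def check_512b(device: str, sysfs_root: str = "/sys/block") -> bool:
    """True if the device uses the supported 512 B sector size."""
    return logical_sector_size(device, sysfs_root) == 512
```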
Tune OS I/O queue depth settings for optimal performance
Refer to and follow the guidelines for setting OS specific I/O queue depth for I/O to the VMA SAN
Gateway. If the I/O queue depth settings are too high, you might experience the following due to the
SAN Gateway or VMA array being overwhelmed with I/O requests:
• Queue-Full SCSI status
• SCSI Check Condition: Sense Key=0x06 ASC/ASCQ=0x29/0x07 (Nexus Lost)
• I/O timeouts resulting in host-initiated I/O aborts
• Incomplete CDB Status 0x400 - when an I/O times out before it can be sent
• Target Ports going offline and then coming back online
• Duplicate sessions (see ‘Known Problems’ section)
When you see the above messages, try lowering the current LUN queue depth or TPCC settings to
determine whether additional tuning eliminates the diagnostic messages. Usually these diagnostic
messages and related issues result from connected host servers exhausting the available I/O
resources on the VMA SAN Gateway.
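On a Linux host, the per-LUN queue depth is exposed through the standard sysfs attribute `device/queue_depth`. A minimal sketch of the lowering step described above; the `sysfs_root` parameter is an assumption added only for testing without real hardware, and the appropriate target value should come from the OS-specific guidelines this section refers to:

```python
# Sketch: read and lower a SCSI device's LUN queue depth through sysfs
# (/sys/block/<dev>/device/queue_depth). sysfs_root is overridable for
# testing; real use requires root privileges.

def get_queue_depth(device: str, sysfs_root: str = "/sys/block") -> int:
    """Read the device's current LUN queue depth."""
    with open(f"{sysfs_root}/{device}/device/queue_depth") as f:
        return int(f.read().strip())

def lower_queue_depth(device: str, new_depth: int,
                      sysfs_root: str = "/sys/block") -> int:
    """Reduce the queue depth if the current value exceeds new_depth;
    returns the depth now in effect."""
    current = get_queue_depth(device, sysfs_root)
    if current > new_depth:
        with open(f"{sysfs_root}/{device}/device/queue_depth", "w") as f:
            f.write(str(new_depth))
        return new_depth
    return current
```

Lower the depth in steps and re-check for the diagnostic messages after each change, rather than dropping straight to a minimal value.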