Create multiple LUNs for parallelization of I/O for applications
Creating multiple LUNs on the connected VMA arrays might allow applications to launch multiple
threads for greater I/O parallelism and performance. With this G5.1.0 release of the VMA SAN
Gateway software OE, it is currently recommended to configure no more than 32 LUNs per connected
VMA array. Additionally, for optimal I/O flow, it is recommended to allow no more than 16 total
lunpaths per LUN (the sum of all lunpaths to that LUN across all connecting host servers).
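As an illustration only, the following Python sketch shows the general pattern of one I/O stream
per LUN, with a separate reader thread per block device. The device paths are hypothetical
placeholders, not part of any gateway configuration; substitute the block devices your OS assigns
to the gateway-exported LUNs.

    import os
    import threading

    LUN_DEVICES = ["/dev/sdb", "/dev/sdc", "/dev/sdd"]  # placeholder paths
    READ_SIZE = 1 << 20    # 1 MiB per read
    READS_PER_LUN = 64

    def read_worker(path):
        # Issue sequential reads against one LUN so each LUN carries
        # its own independent I/O stream.
        fd = os.open(path, os.O_RDONLY)
        try:
            offset = 0
            for _ in range(READS_PER_LUN):
                data = os.pread(fd, READ_SIZE, offset)
                if not data:
                    break
                offset += len(data)
        finally:
            os.close(fd)

    threads = [threading.Thread(target=read_worker, args=(dev,)) for dev in LUN_DEVICES]
    for t in threads:
        t.start()
    for t in threads:
        t.join()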
LUN 0 of the VMA SAN Gateway
LUN 0 is considered a reserved device, and various issues can occur if LUN 0 is not accessible to
all connected host servers. For this reason, the VMA SAN Gateway prompts for configured LUNs to
begin with LUN ID #1. If a specific operating system requires a LUN ID #0, one can be created by
explicitly entering ‘0’ for the LUN ID when creating a LUN. However, any configuration or
accessibility change to an explicitly created LUN ID #0 might disrupt service for the other
connected host servers, including their ability to discover and access LUNs and stored data.
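On a Linux host, one way to verify which LUN IDs are presented (and whether a LUN ID #0 exists) is
to read the standard sysfs SCSI device entries, which are named host:channel:target:lun. A minimal
Python sketch, assuming that sysfs layout:

    import os

    SYSFS = "/sys/class/scsi_device"

    for entry in sorted(os.listdir(SYSFS)):
        host, channel, target, lun = entry.split(":")
        note = " (reserved device)" if lun == "0" else ""
        print(f"host {host} channel {channel} target {target} LUN {lun}{note}")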
Currently only 512B sectors are supported
While the VMA SAN Gateway allows creation of LUNs with either a 512B or 4KB sector size, only the
512B sector size has been fully tested and is supported with the gateway. LUNs using 4KB sectors
have not been fully validated and are not supported by HP at this time.
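On a Linux host, the logical sector size a LUN reports can be checked through the standard sysfs
queue attribute. A minimal Python sketch, with a placeholder device name:

    dev = "sdb"  # placeholder name for a gateway-exported LUN
    path = f"/sys/block/{dev}/queue/logical_block_size"

    with open(path) as f:
        size = int(f.read().strip())

    if size == 512:
        print(f"/dev/{dev}: 512B sectors (supported)")
    else:
        print(f"/dev/{dev}: {size}B sectors (not supported by this release)")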
Tune OS I/O queue depth settings for optimal performance
Refer to and follow the guidelines for setting the OS-specific I/O queue depth for I/O to the VMA
SAN Gateway. If the I/O queue depth settings are too high, the SAN Gateway or VMA array can be
overwhelmed with I/O requests, and you might experience the following:
• Queue-Full SCSI status
• SCSI Check Condition: Sense Key=0x06 ASC/ASCQ=0x29/0x07 (Nexus Lost)
• I/O timeouts resulting in host-initiated I/O aborts
• Incomplete CDB Status 0x400 - when an I/O times out before it can be sent
• Target Ports going offline and then coming back online
• Duplicate sessions (see ‘Known Problems’ section)
When you see the above messages, try lowering the current LUN queue depth or TPCC settings and
check whether the diagnostic messages are still encountered. Usually the above diagnostic
messages and other issues result from connected host servers exhausting the available I/O
resources on the VMA SAN Gateway.
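On a Linux host, the per-device SCSI queue depth can be inspected, and lowered, through the
standard sysfs attribute. The ceiling of 16 in the Python sketch below is illustrative only, not
an HP-documented value; use the limit from your OS-specific guidelines.

    dev = "sdb"   # placeholder name for a gateway-exported LUN
    path = f"/sys/block/{dev}/device/queue_depth"
    CEILING = 16  # illustrative limit, not an HP-documented value

    with open(path) as f:
        depth = int(f.read().strip())
    print(f"/dev/{dev} queue depth: {depth}")

    if depth > CEILING:
        # Requires root; the change does not persist across reboots.
        with open(path, "w") as f:
            f.write(str(CEILING))
        print(f"lowered /dev/{dev} queue depth to {CEILING}")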
Increased I/O traffic from server reboot might cause reported path offline
As with all storage, when a connected host server reboots there is additional I/O activity that
can exceed the normal I/O traffic volume. This added I/O traffic from a server reboot might cause
some I/Os to the SAN Gateway to time out, which might be reported as a path being offline or
failing. In such cases, the host server retries the I/Os and the path is detected as back online
shortly thereafter.