I/O slot 3: hba-b1, hba-b2
I/O slot 5 (if present): hba-c1, hba-c2
I/O slot 6 (if present): hba-d1, hba-d2
Note that only the FC HBAs installed in I/O slots #2 and #3 are included in the default configuration
of the VMA SAN Gateway. The FC target HBAs in I/O slots #5 and #6 are optional and will only be
present if ordered as add-on components, P/N AJ764A. As mentioned previously and shown in
Figures 4 and 5, it is recommended to have only two HBAs (four target ports) per connected VMA
array. Be sure to retrieve the HBA port SFPs from the spare parts bag shipped in the box with the
VMA SAN Gateway and insert them into the HBA ports.
Configuring and presenting LUNs to connected host servers
The VMA SAN Gateway can selectively present LUNs through specific gateway target ports and to
specific host server initiator ports. Refer to Chapter 4, "Configuring vSHARE," in the HP VMA SAN
Gateway Installation and User Guide for additional details on LUN configuration, selective host
presentation, and LUN masking.
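As an illustration of this selective presentation model, the following Python sketch expresses a masking plan as data and counts the resulting lunpaths. The port names, WWPNs, and dictionary layout are hypothetical examples for illustration only; they are not vSHARE CLI syntax, which is documented in Chapter 4 of the Installation and User Guide.

    # Hypothetical masking plan: each LUN is exported only through selected
    # gateway target ports and only to selected host initiator WWPNs.
    export_plan = {
        "lun01": {
            "target_ports": ["hba-b1", "hba-b2"],        # gateway FC target ports
            "initiators": ["10:00:00:00:c9:aa:bb:01",    # host initiator WWPNs (examples)
                           "10:00:00:00:c9:aa:bb:02"],
        },
    }

    for lun, masking in export_plan.items():
        lunpaths = len(masking["target_ports"]) * len(masking["initiators"])
        print(f"{lun}: {lunpaths} lunpaths "
              f"({len(masking['target_ports'])} target ports x "
              f"{len(masking['initiators'])} initiators)")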
The VMA SAN Gateway has a non-configurable maximum queue depth of 1024 per 8Gb FC target
port and an additional non-configurable maximum queue depth of 256 per LUN. Connected host
servers must establish appropriate I/O queue depth settings to achieve optimal performance with
the VMA SAN Gateway and to avoid heavy FC resource contention and excessive I/O retries. Refer
to the 'Additional Platform/OS Specific Considerations' section below for details on establishing
appropriate queue depth settings for specific platform/OS combinations.
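As one concrete illustration for Linux hosts (other platforms differ; see the 'Additional Platform/OS Specific Considerations' section referenced above), the per-device SCSI queue depth can be read and, with root privileges, adjusted through sysfs. The device glob and the target depth of 32 in this sketch are assumptions chosen for illustration, not HP-recommended values.

    import glob

    TARGET_QUEUE_DEPTH = 32  # example value only; consult the platform/OS guidance

    # On Linux, each SCSI block device exposes its queue depth at
    # /sys/block/<dev>/device/queue_depth.
    for path in glob.glob("/sys/block/sd*/device/queue_depth"):
        with open(path) as f:
            current = int(f.read().strip())
        print(f"{path}: current queue depth {current}")
        # Writing a new value requires root privileges; uncomment to apply:
        # with open(path, "w") as f:
        #     f.write(str(TARGET_QUEUE_DEPTH))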
It is recommended not to configure more than 32 LUNs per connected VMA-series Array and to allow
no more than 16 total lunpaths per configured LUN at this time. It is also recommended not to
exceed eight initiator ports per connected server; greater numbers of LUNs and initiator ports can
cause FC resource contention and excessive retry traffic. Because the VMA-series Memory Arrays
and the VMA SAN Gateway are intended for application environments that require very high I/O
throughput with extremely low latency, it is also recommended that a gateway not be connected to
more than four host servers or partitions for optimal results.
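These recommended limits can be checked mechanically before deployment. The short Python sketch below simply encodes the limits stated above and reports whether a planned configuration stays within them; the function name and the example counts at the end are illustrative, and lunpaths are assumed to be the product of target ports and initiator ports per LUN.

    # Recommended limits from this release note.
    MAX_LUNS_PER_ARRAY = 32
    MAX_LUNPATHS_PER_LUN = 16
    MAX_INITIATOR_PORTS_PER_SERVER = 8
    MAX_CONNECTED_HOSTS = 4

    def check_config(luns, target_ports_per_lun, initiator_ports_per_server, hosts):
        """Flag a planned configuration that exceeds the recommended limits."""
        # Assumes each LUN is exported through every target port to every initiator port.
        lunpaths_per_lun = target_ports_per_lun * initiator_ports_per_server
        problems = []
        if luns > MAX_LUNS_PER_ARRAY:
            problems.append(f"{luns} LUNs exceeds {MAX_LUNS_PER_ARRAY} per array")
        if lunpaths_per_lun > MAX_LUNPATHS_PER_LUN:
            problems.append(f"{lunpaths_per_lun} lunpaths per LUN exceeds {MAX_LUNPATHS_PER_LUN}")
        if initiator_ports_per_server > MAX_INITIATOR_PORTS_PER_SERVER:
            problems.append(f"{initiator_ports_per_server} initiator ports exceeds "
                            f"{MAX_INITIATOR_PORTS_PER_SERVER} per server")
        if hosts > MAX_CONNECTED_HOSTS:
            problems.append(f"{hosts} connected hosts exceeds {MAX_CONNECTED_HOSTS}")
        return problems or ["within recommended limits"]

    # Example: 4 target ports x 4 initiator ports = 16 lunpaths per LUN, at the limit.
    print(check_config(luns=32, target_ports_per_lun=4, initiator_ports_per_server=4, hosts=4))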
Setup & configuration of a redundant VMA SAN Gateway pair
With this release of VMA SAN Gateway software version G5.1.0, you can now create a redundant
pair of VMA SAN Gateways, each of which has a PCIe link to the same array. Redundant gateways
act as Active/Active controllers for connected VMA Arrays.
Figure 6 below shows supported redundant VMA SAN Gateway pair configurations with one or two
dually connected VMA Arrays. For specific details on how to set up and configure a redundant VMA
SAN Gateway pair, please download, review, and carefully follow the pairing process documented in
the 'Configuring a Redundant Pair of VMA SAN Gateways - Process Guide' (P/N AM456-9023A). This
and other relevant documents can be found at www.hp.com/go/vma-docs.