HP Enterprise Virtual Array Storage and VMware vSphere 4.x and 5.x configuration best practices
Technical white paper

Table of contents
Executive summary
The challenges
Overview of vSphere 4.x/5.x storage
Using VMFS
Using RDM
Appendix C: Balancing I/O throughput between controllers
Appendix D: Caveat for data-in-place upgrades and Continuous Access EVA
Appendix E: Configuring VMDirectPath I/O for Command View EVA in a VM
Executive summary
The HP Enterprise Virtual Array (EVA) storage family has been designed for mid-range and enterprise customers with critical requirements to improve storage utilization and scalability.
ALUA compliance
All EVA storage solutions – models P6x00, EVA8x00/6x00/4x00 – are dual-controller asymmetric active-active arrays that are compliant with the SCSI ALUA standard for Vdisk access/failover and I/O processing.

Note
ALUA is part of the SCSI Primary Commands – 3 (SPC-3) standard.
• Standby – The path to the Vdisk is inactive and must be activated before I/Os can be issued
• Unavailable – The path to the Vdisk is unavailable through this controller
• Transitioning – The Vdisk is transitioning between any two of the access types defined above

The following load-balancing I/O path policies are supported by vSphere 4.x and 5.x:
Using Command View EVA
Command View EVA can manage an array using one of the following methods:
• Server-based management (SBM) – Command View EVA is deployed on a standalone server that has access to the EVA storage being managed.
Best practices for deploying Command View EVA in a VM with VMDirectPath I/O
• Deploy Command View EVA on the local datastore of the particular vSphere server.
Figure 1.
Figure 2. The Storage module of the plug-in provides mapping from the virtual to the physical environment.

The Storage module enhances VMware functionality by detailing the relationships between the virtual and physical environment. For example, Figure 2 shows the mapping from the virtual machine to the array on which it resides.
The resulting topology should be similar to that presented in Figure 3, which shows a vSphere 4.x server attached to an EVA4400 array through a redundant fabric.

Figure 3. Highly-available EVA/vSphere 4.x
In a direct-connect environment, the same principles can be achieved with two or more HBAs or HBA ports; however, the configuration is slightly different, as shown in Figure 4.

Figure 4. EVA/vSphere 4.x and 5.x direct-connect topology

If the direct-connect configuration were to use two rather than four HBA ports, there would be a one-to-one relationship between every HBA and controller.
Disk group provisioning
An EVA disk group is the largest storage object within the EVA storage virtualization scheme and is made up of a minimum of eight physical disks for FC, SAS, or FATA drives and six for SSD drives. Within a disk group, you can create logical units of various sizes and RAID levels.

Notes
• An EVA RAID level is referred to as VraidX.
You can use Command View EVA to set a disk drive failure protection level in the properties for the particular disk group, as shown in Figure 6.

Figure 6. Disk protection level as seen in Command View EVA

Note
Vraid0 Vdisks are not protected.
HP defines three storage optimization schemes, each of which is subject to specific storage overhead and deployment considerations:
• Cost
• Availability
• Performance

Optimizing for cost
When optimizing for cost, your goal is to minimize the cost per GB (or MB).
Vdisk provisioning
All EVA active-active arrays are asymmetrical and comply with the SCSI ALUA standard.
iSCSI configuration
The HP EVA family of arrays offers a variety of iSCSI connectivity options. Depending on the array, iSCSI connectivity can be achieved through the iSCSI option built into the array controller (HP EVA P63x0/P65x0) or through the use of the HP MPX200 Multifunction Router. Figures 7, 8, and 9 outline the three configuration options.

Figure 7.
Figure 9. HP EVA with MPX200 Multifunction Router

The HP EVA P6000 1GbE iSCSI Module, more commonly referred to as the 1GbE iSCSI Module and shown in Figure 7, provides each EVA controller with four 1GbE ports, for a total of eight iSCSI ports per array. The HP EVA P6000 10GbE iSCSI Module, shown in Figure 8, enables two 10GbE iSCSI ports on each controller, for a total of four iSCSI ports per array.
Figure 10. HP EVA with 1GbE iSCSI Module option – Architecture diagram

Figure 11.
Controller connections
As shown in Figures 10 and 11, Fibre Channel ports FP1 and FP2 on each controller are connected to the 1GbE iSCSI Module and 10GbE iSCSI/FCoE Module, respectively. For redundancy, each array controller has a connection to the 1GbE iSCSI Module available in the other controller.
Figure 12 below shows the iSCSI target to GbE port connections from a single 1GbE iSCSI Module perspective on one array controller.

Figure 12. Logical view of iSCSI target connection to GbE ports – Single controller view

From an ESX host perspective, ESX has the proper NIC redundancy and will detect its maximum of eight paths per LUN.
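The multiple iSCSI paths per LUN referenced above assume that more than one VMkernel NIC is bound to the software iSCSI adapter on the host. The commands below are a minimal sketch of that binding; vmhba33, vmk1, and vmk2 are placeholder names for the software iSCSI adapter and the VMkernel ports in your configuration.

# ESX/ESXi 4.x: bind two VMkernel NICs to the software iSCSI adapter
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# ESXi 5.x equivalents, followed by a listing of the resulting port bindings
esxcli iscsi networkportal add -n vmk1 -A vmhba33
esxcli iscsi networkportal add -n vmk2 -A vmhba33
esxcli iscsi networkportal list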
Table 2.
All of the considerations listed above for the 1GbE iSCSI Module also apply to the 10GbE iSCSI Module. As in Table 2 above, the highlighted cell in Table 3 is the intersection of configuration options that provides the best balance of high availability and GbE port bandwidth for the 10GbE iSCSI Module configuration.
The I/O path policies supported since vSphere 4.
Implementing multi-pathing
Since vSphere 4.x and 5.x are ALUA-compliant, their implementation of multi-pathing is less complex and delivers higher levels of reliability than ESX 3.5 or earlier. Setting up multi-pathing only requires the following steps:
• Configure the Vdisk
• Select the controller access policy at the EVA
• Power on/reboot all vSphere 4.x/5.x hosts
When configuring an HP EVA with iSCSI Modules, note that I/O is routed slightly differently than in a Fibre Channel configuration.
Figure 16. Fixed_AP use cases

Fixed_AP can cause explicit Vdisk transitions to occur and, in a poorly configured environment, may lead to Vdisk thrashing. Since transitioning Vdisks under heavy loads can have a significant impact on I/O performance, the use of Fixed_AP is not recommended for normal production I/O with EVA arrays.
• Multi-Pathing Plug-in (MPP) – Third-party implementation (which is outside the scope of this document) that takes the place of the NMP/SATP/PSP combination

Figure 17 outlines key components of the multi-pathing stack.

Figure 17. vSphere 4.x and 5.x multi-pathing stack
When a path failure occurs, the NMP communicates with the SATP and PSP and then takes the appropriate action. For example, the NMP would update its list of available paths and communicate with the PSP to determine how I/O should be re-routed based on the specified path selection policy.
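To see which SATP and PSP have claimed a given device, and the state of each path behind it, the following commands can be used. This is a sketch only; naa.xxxxxxxxx stands in for an actual EVA Vdisk identifier.

# ESX 4.x
esxcli nmp device list
esxcli nmp path list -d naa.xxxxxxxxx

# ESXi 5.x
esxcli storage nmp device list
esxcli storage nmp path list -d naa.xxxxxxxxx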
Connecting to an active-active EVA array in vSphere 4.0, 4.1, and 5.x
When connecting a vSphere 4.x or 5.x host to an active-active EVA array, you should use the VMW_SATP_ALUA SATP as suggested above. This SATP is, by default, associated with VMW_PSP_MRU, a PSP that uses MRU I/O path policy. There are two steps for connecting a vSphere 4.x and 5.x host:
or the following command on ESXi 5.x:

esxcli storage nmp psp roundrobin deviceconfig set -t iops -I 1 -d naa.xxxxxxxxx

In an environment where you only have EVA Vdisks connected to vSphere 4.x/5.x hosts, you can use the following script to automatically set the I/O path policy for each Vdisk to round robin:

For ESX 4.x:

for i in `esxcli nmp device list | grep ^naa.` ; do esxcli nmp device setpolicy --device $i --psp VMW_PSP_RR ; done
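A comparable loop for ESXi 5.x hosts, again assuming that every naa-named device presented to the host is an EVA Vdisk, might look like the following sketch:

for i in `esxcli storage nmp device list | grep ^naa.` ; do esxcli storage nmp device set --device $i --psp VMW_PSP_RR ; done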
Use the following command to verify that the new rule has been successfully added:

esxcli nmp satp listrules

Deleting a manually-added rule
To delete a manually-added rule, use the esxcli nmp satp deleterule command; specify the same options used to create the rule.
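As an illustration only (HSV300, the product ID reported by an EVA4400, and the description string are examples, and the --psp option on addrule assumes vSphere 4.1 or later), a rule claiming EVA Vdisks with VMW_SATP_ALUA and a round robin PSP could be added, and later removed with the same options, as follows:

# vSphere 4.1 (assumed syntax)
esxcli nmp satp addrule -s VMW_SATP_ALUA -P VMW_PSP_RR -V HP -M HSV300 -c tpgs_on -e "HP EVA ALUA rule"
esxcli nmp satp deleterule -s VMW_SATP_ALUA -P VMW_PSP_RR -V HP -M HSV300 -c tpgs_on -e "HP EVA ALUA rule"

# ESXi 5.x equivalent
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -P VMW_PSP_RR -V HP -M HSV300 -c tpgs_on -e "HP EVA ALUA rule"
esxcli storage nmp satp rule remove -s VMW_SATP_ALUA -P VMW_PSP_RR -V HP -M HSV300 -c tpgs_on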
Best practice for configuring PSP in a multi-vendor SAN
• vSphere 4: When using vSphere 4 in a multi-vendor, ALUA-compliant SAN environment, configure the default PSP for the VMW_SATP_ALUA SATP to the recommended setting for the predominant array type or to the recommended setting for the array type with the most Vdisks provisioned for vSphere access.
• vSphere 4.1/5.x:
Using third-party multi-pathing plug-ins (MPPs)
vSphere 4.x/5.x allows third-party storage vendors to develop proprietary PSP, SATP, or MPP plug-ins (or MEMs). These third-party MEMs are offered to customers at an incremental license cost and also require enterprise VMware licensing.
Using VMFS
VMFS is a high-performance cluster file system designed to eliminate single points of failure while balancing storage resources. This file system allows multiple vSphere 4.x hosts to concurrently access a single VMFS volume and the VMDK (Virtual Machine Disk Format) files it contains, as shown in Figure 19. VMFS supports Fibre Channel SAN, iSCSI SAN, and NAS storage arrays.

Figure 19.
Figure 20. RDM datastore

Comparing supported features
Table 6 compares features supported by VMFS and RDM datastores.

Table 6.
• Vdisk size
• Vdisk WWN
• Name of the vSphere server from which the datastore was created

Creating the convention
HP recommends naming VMFS datastores and RDMs in vCenter with the same name used when creating the Vdisk in Command View EVA or when using SSSU scripting tools. While this approach is beneficial, it may require some coordination with the storage administrator.
VMware has carried out testing to compare I/O performance with aligned and non-aligned file systems and, as a result, suggests working with your vendor to establish the appropriate starting boundary block size.

Best practices for aligning the file system
• No alignment is required with Windows Vista, Windows 7, or Windows Server 2008.
Before enabling adaptive queuing, HP highly recommends examining your environment to determine the root cause of transient or permanent I/O congestion. For well-understood, transient conditions, adaptive queuing may help you accommodate these transients at a small performance cost.
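Adaptive queuing is enabled through two advanced settings on every host that shares the array. The following is a sketch; the values 32 and 4 are common starting points rather than HP-specified values and should be validated for your environment.

# ESX 4.x service console
esxcfg-advcfg -s 32 /Disk/QFullSampleSize
esxcfg-advcfg -s 4 /Disk/QFullThreshold

# ESXi 5.x
esxcli system settings advanced set -o /Disk/QFullSampleSize -i 32
esxcli system settings advanced set -o /Disk/QFullThreshold -i 4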
Monitoring EVA performance in order to balance throughput
ALUA compliance in vSphere 4.x has significantly reduced configuration complexity and given you the ability to quickly configure a balanced Vdisk environment. However, you should monitor EVA host port performance to ensure that this configuration is also balanced from the perspective of I/O throughput.
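EVAperf is the usual tool for this kind of monitoring. As a sketch (the 5-second interval and 60-sample duration are arbitrary examples), host-port and per-Vdisk statistics can be collected with:

# Host port statistics, sampled every 5 seconds for 60 samples
evaperf hps -cont 5 -dur 60

# Per-Vdisk statistics, useful for seeing which Vdisks load each controller
evaperf vd -cont 5 -dur 60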
Figure 22 shows a better-balanced environment, achieved by moving the controller ownership of Vdisks 5 and 6 to Controller 1 and of Vdisks 1 and 2 to Controller 2.

Figure 22. Balanced I/O access in a vSphere 4.x environment after changing controller ownership for certain Vdisks

This type of tuning can be useful in most environments, helping you achieve an optimal configuration.
VMware vStorage API for Array Integration (VAAI)
Starting with firmware 10100000, HP EVA Storage arrays support VMware VAAI. VMware VAAI provides storage vendors with access to a specific set of VMware storage APIs, which enable offloading specific I/O and VM management operations to the storage array.
• Plugin-less deployment – The HP EVA Storage firmware 10100000 VAAI implementation is SCSI-standards-based and directly compatible with the standards-based implementation of VAAI in ESXi 5. Therefore, no VAAI software plugin is required when using ESXi 5 with HP EVA Storage VAAI enabled.
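On ESXi 5.x, the hardware-acceleration primitives a given Vdisk reports can be checked per device; naa.xxxxxxxxx below is a placeholder for an actual EVA Vdisk identifier.

esxcli storage core device vaai status get -d naa.xxxxxxxxx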
Figure 23 below illustrates an HP EVA Storage configuration where controller 1 owns Vdisks 1 and 2 and controller 2 owns Vdisks 3 and 4.

Figure 23. VAAI command pathing and optimal path access

In this configuration, access to the Vdisks by VAAI WRITE SAME and XCOPY commands is summarized in the table below:

Table 9.
Best practice for zeroing
• For improved performance, when performing zeroing activities on two or more Vdisks concurrently, it is best to have Vdisk ownership evenly spread across controllers.

Best practice for XCOPY
• XCOPY operations can benefit from increased performance when different EVA controllers own the source and destination datastores.
Table 10 provides the number of drives to use with VAAI.

Table 10. VAAI command controller access HP EVA Storage 7.
Table 12 shows the ESXi UNMAP usage recommendations.

Table 12. ESXi UNMAP usage recommendations

ESX Version        UNMAP Support    Execution Recommendation
ESXi 5.0           Not Supported    NA
ESXi 5.0 patch 2   Not Supported    NA
ESXi 5.0U1         Supported        Offline
ESXi 5.1           Supported        Offline
ESXi 5.5           Supported        Online
Best practice for space reclamation with ESXi 5.0U1 and ESXi 5.1
• Space reclamation in ESXi 5.0U1 and ESXi 5.1 should be performed during a maintenance window.

Note
The EVA Vdisks are striped across all spindles in the array.
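The reclamation mechanics differ by release. The following is a sketch; the datastore name is a placeholder, 60 is the percentage of free space to reclaim with vmkfstools, and 200 is the number of VMFS blocks processed per unmap iteration.

# ESXi 5.0U1 / 5.1: run from within the datastore, during a maintenance window
cd /vmfs/volumes/<datastore-name>
vmkfstools -y 60

# ESXi 5.5 and later: online reclamation
esxcli storage vmfs unmap -l <datastore-name> -n 200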
Table 13. Differences in versions supporting UNMAP

Feature    ESXi 5.0U1/ESXi 5.1    ESXi 5.5
Table 14 shows the differences between UNMAP operation in ESXi 5.0 and ESXi 5.0U1 and later.

Table 14. Differences in UNMAP operation in ESXi 5 versions

ESXi 5.0    ESXi 5.0U1 and later
How do I simplify storage management, even in a complex environment with multiple storage systems?
• Use the Storage Module for vCenter to save time and improve efficiency by mapping, monitoring, provisioning, and troubleshooting EVA storage directly from vCenter.
• When the EVA is in degraded mode, avoid using VAAI operations.
• When using VAAI on Vdisks that are in snapshot, snapclone, mirrorclone, or Continuous Access relationships, VAAI performance may be throttled.
• Minimize the number of concurrent VAAI clone and/or zeroing operations to reduce the impact on overall system performance.
Term    Description
EVA     The HP Enterprise Virtual Array (EVA) Storage product allows pooled disk capacity to be presented to hosts in the form of one or more variably-sized physical devices. The EVA consists of disks, controllers, cables, power supplies, and controller firmware. An EVA may be thought of as a storage system, a virtual array, or a storage array.
Appendix A: Using SSSU to configure the EVA
The sample SSSU script provided in this appendix creates and presents multiple Vdisks to vSphere hosts. The script performs the following actions:
• Create a disk group with 24 disks.
• Set the disk group sparing policy to single-drive failure.
• Create Vdisk folders.
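The full script is not reproduced here. As a condensed illustration only (the keywords shown are typical SSSU syntax but vary between SSSU releases, and every object name, size, WWN, and credential below is hypothetical), the flow is roughly:

SELECT MANAGER cv-eva-host USERNAME=admin PASSWORD=password
SELECT SYSTEM "EVA-01"
ADD DISK_GROUP "\Disk Groups\DG1" DEVICE_COUNT=24 SPARE_POLICY=SINGLE
ADD FOLDER "\Virtual Disks\vSphere"
ADD VDISK "\Virtual Disks\vSphere\Vdisk1" SIZE=500 REDUNDANCY=VRAID5 DISK_GROUP="\Disk Groups\DG1"
ADD HOST "\Hosts\esx01" WORLD_WIDE_NAME=1000-0000-0000-0001 OPERATING_SYSTEM=VMWARE
ADD LUN 1 VDISK="\Virtual Disks\vSphere\Vdisk1" HOST="\Hosts\esx01"

Consult the SSSU reference guide for your Command View EVA release for the exact parameters.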
Appendix B: Miscellaneous scripts/commands
This appendix provides scripts/utilities/commands for the following actions:
• Change the default PSP for VMW_SATP_ALUA.
• Set the I/O path policy and attributes for each Vdisk.
• Configure the disk SCSI timeout for Windows and Linux guests.

Changing the default PSP
This command changes the default PSP for VMW_SATP_ALUA:

For ESX 4.x:
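# Reconstructed example – VMW_PSP_RR is shown as the target default PSP; substitute the PSP recommended for your arrays
esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR

For ESXi 5.x:

# Same caveat as above
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR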
If required, set the value to 60 using one of the following commands:

echo 60 > /sys/bus/scsi/devices/W:X:Y:Z/timeout

or

echo 60 > /sys/block/sdX/device/timeout

where W:X:Y:Z or sdX is the desired device. No reboot is required for these changes to take effect.
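The corresponding setting for Windows guests is the disk TimeoutValue registry entry, which VMware Tools normally sets to 60 automatically. As an illustration, it can be set manually with the following one-liner (the value is in seconds; unlike the Linux change, a reboot of the guest is typically required for it to take effect):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeoutValue /t REG_DWORD /d 60 /f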
Figure C-2. I/O routes

In this example, even though the EVA array has a total of eight controller ports (four on each controller), all I/O seems to be routed through just two ports on Controller 1. Note that SAN zoning is only allowing each HBA to see ports 1 and 2 of each controller, explaining why no I/O is seen on ports 3 and 4 even though round robin I/O path policy is being used.
Alternatively, you can review Vdisk properties in Command View EVA to determine controller ownership, as shown in Figures C-5 (Vdisk9) and C-6 (Vdisk5).

Figure C-5. Vdisk properties for Vdisk9

Figure C-6. Vdisk properties for Vdisk5
Moving the chosen Vdisk from one controller to the other
To better balance throughput in this example, Vdisk5 is being moved to Controller 2. This move is accomplished by using Command View EVA to change the managing controller for Vdisk5, as shown in Figure C-7.

Figure C-7.
Validating the better-balanced configuration
You can review the output of EVAperf (as shown in Figure C-9) to verify that controller throughput is now better balanced. Run the following command:

evaperf hps -sz -cont X -dur Y

Figure C-9. Improved I/O distribution

The system now has much better I/O distribution.
After the upgrade, all Vdisks will return "HSV450" instead of "HSV300" in the standard inquiry page response. This change in PID creates a mismatch between LVM header metadata and the information coming from the Vdisk.

Note
A similar mismatch would occur if you attempted to use Continuous Access EVA to replicate from the EVA4400 to the EVA8400.
By default, vSphere 4.x claims all HBAs installed in the system, as shown in the vSphere Client view presented in Figure E-1.

Figure E-1. Storage Adapters view, available under the Configuration tab of vSphere Client

This appendix shows how to assign HBA3 to VM2 in vSphere 4.x.

EVA configuration
This example uses four ports on an EVA8100 array (Ports 1 and 2 on each controller).
Component    Description
VM2 (Windows Server 2008 VM)    HBA3 – Port 1: 1000-0000-C97E-CA72; Port 2: 1000-0000-C97E-CA73
Vdisks    \VM-DirectLUNs\Win2k8-VM-dLUN1: 30GB; \VM-DirectLUNs\Win2k8-VM-dLUN2: 30GB
Host modes    vSphere server: VMware; VM2: Windows Server 2008

Fibre Channel configuration
This example uses two HP 4/64 SAN switches, with a zone created on each.
• Pre-install the VMs (for example, as VMs installed on a VMDK on a SAN datastore or a local datastore).

Note
Refer to Configuring EVA arrays for more information on placing VMs.

The procedure is as follows:
1.
Figure E-4. Indicating that, in this case, the server hardware is incompatible and that VMDirectPath cannot be enabled

3. If your server has compatible hardware, click on the Configure Passthrough… link to move to the Mark devices for passthrough page, as shown in Figure E-5.
4. Select the desired devices for VMDirectPath; select and accept the passthrough device dependency check shown in Figure E-6.

IMPORTANT
If you select OK, the dependent device is also configured for VMDirectPath, regardless of whether or not it was being used by ESX.
6. After the reboot, confirm that device icons are green, as shown in Figure E-8, indicating that the VMDirectPath-enabled HBA ports are ready to use.

Figure E-8. The HBA ports have been enabled for VMDirectPath and are ready for use

7.
Configuring the VM
Caveats
• HBA ports are assigned to the VM one at a time, while the VM is powered off.
• The VM must have a memory reservation for the fully-configured memory size.
• Do not assign ports on the same HBA to different VMs; a given HBA should be dedicated to a single VM.
4. Select PCI Device and then click on Next, as shown in Figure E-11.

Figure E-11. Selecting PCI Device as the type of device to be added to the VM

5. From the list of VMDirectPath devices, select the desired device to assign to the VM, as shown in Figure E-12. In the example, select Port 1 of HBA3 (that is, device 21:00.0). For more information on selecting devices, refer to Caveats.
For more information
Data storage from HP: hp.com/go/storage
HP and VMware: hp.com/go/vmware
Converged Storage for VMware: http://www8.hp.com/us/en/products/data-storage/datastorage-products.html?compURI=1285027
Documentation for EVA arrays: http://h20566.www2.hp.com/portal/site/hpsc/public/psi/manualsResults?sp4ts.oid=5062117