Chapter 1 Hardware for Use with VMware vSphere
Hardware Storage Considerations
Back-end storage configuration can greatly affect performance. Refer to the following sources for more
information:
- For SAN best practices: “Using ESX Server with SAN: Concepts” in the SAN Configuration Guide.
- For NFS best practices: “Advanced Networking” in the Server Configuration Guide.
- For iSCSI best practices: “Configuring Storage” in the Server Configuration Guide.
- For a comparison of Fibre Channel, iSCSI, and NFS: Comparison of Storage Protocol Performance.
Storage performance issues are most often the result of configuration issues with underlying storage devices
and are not specific to ESX.
Storage performance is a vast topic that depends on workload, hardware, vendor, RAID level, cache size,
stripe size, and so on. Consult the appropriate documentation from VMware as well as the storage vendor.
Many workloads are very sensitive to the latency of I/O operations. It is therefore important to have storage
devices configured correctly. The remainder of this section lists practices and configurations recommended by
VMware for optimal storage performance.
For iSCSI and NFS, make sure that your network topology does not contain Ethernet bottlenecks, where
multiple links funnel into fewer links, potentially resulting in oversubscription and dropped network
packets. Any time several links transmitting near capacity converge onto a smaller number of links, such
oversubscription is a possibility.
Recovering from dropped network packets causes a large performance degradation: in addition to the
time spent determining that data was dropped, retransmission consumes network bandwidth that could
otherwise be used for new transactions.
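The oversubscription risk described above can be quantified as a simple ratio of aggregate ingress capacity to aggregate uplink capacity. The following Python sketch is purely illustrative (it is not a VMware tool, and the link counts and speeds are hypothetical):

```python
# Illustrative sketch: estimate the oversubscription ratio when several
# host-facing links funnel into fewer uplinks toward the storage array.
# A ratio above 1.0 means drops are possible when links run near capacity.

def oversubscription_ratio(ingress_gbps, uplink_gbps):
    """Ratio of aggregate ingress capacity to aggregate uplink capacity."""
    return sum(ingress_gbps) / sum(uplink_gbps)

# Example: eight 1 Gb/s host links sharing two 1 Gb/s uplinks.
ratio = oversubscription_ratio([1.0] * 8, [1.0, 1.0])
print(ratio)  # 4.0 -- heavily oversubscribed; expect drops under load
```

A ratio at or below 1.0 for the links that actually carry storage traffic is the condition this recommendation is aiming for.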
Applications or systems that write large amounts of data to storage, such as data acquisition or
transaction logging systems, should not share Ethernet links to a storage device. These types of
applications perform best with multiple connections to storage devices.
Performance design for a storage network must take into account the physical constraints of the network,
not logical allocations. Using VLANs or VPNs does not solve the problem of link oversubscription in
shared configurations. VLANs and other virtual partitioning schemes provide a way of logically
configuring a network, but they do not change the physical capabilities of the links and trunks
between switches.
For NFS and iSCSI, if the network switch deployed for the data path supports VLAN, it is beneficial to
create a VLAN just for the ESX host's vmknic and the NFS/iSCSI server. This minimizes network
interference from other packet sources.
If you have heavy disk I/O loads, you may need to assign separate storage processors to separate systems
to handle the amount of traffic bound for storage.
To optimize storage array performance, spread I/O loads over the available paths to the storage (across
multiple host bus adapters (HBAs) and storage processors (SPs)).
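The idea of spreading I/O across the available HBA/SP paths can be modeled as a round-robin selector. This is only an illustrative sketch of the load-spreading concept; the path names and the issue_io helper are hypothetical, not part of any VMware API:

```python
import itertools

# Illustrative model: rotate I/Os across all available HBA/storage-processor
# paths instead of pinning every I/O to a single path.
paths = ["vmhba1:SP-A", "vmhba1:SP-B", "vmhba2:SP-A", "vmhba2:SP-B"]
next_path = itertools.cycle(paths)

def issue_io(io_id):
    """Return the path this I/O would be sent down (hypothetical helper)."""
    return next(next_path)

assignments = [issue_io(i) for i in range(8)]
print(assignments)  # eight I/Os spread evenly, two per path
```

The even spread is the point: no single HBA or storage processor becomes the bottleneck while others sit idle.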
Configure maximum queue depth for HBA cards. For additional information see KB article 1267, listed in
“Related Publications” on page 8.
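One way to reason about how deep the queue needs to be is Little's law: outstanding I/Os = IOPS × latency. The sketch below is an illustrative back-of-the-envelope calculation with hypothetical figures; it is not the configuration procedure from KB article 1267:

```python
# Illustrative sketch applying Little's law (outstanding I/Os = IOPS x latency)
# to estimate the queue depth needed to sustain a given I/O rate.

def required_queue_depth(iops, latency_ms):
    """Outstanding I/Os needed to sustain `iops` at `latency_ms` per I/O."""
    return iops * (latency_ms / 1000.0)

# Example: sustaining 8,000 IOPS at 4 ms average latency.
depth = required_queue_depth(8000, 4)
print(depth)  # 32.0 -- the HBA queue depth must be at least this large
```

If the configured queue depth is below this figure, I/Os queue in the host instead of the array, and observed latency rises even though the array itself is not saturated.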
Be aware that with software-initiated iSCSI and NAS, network protocol processing takes place on the
host system, so these storage options can require more host CPU resources than others.
To use VMware Storage VMotion, your storage infrastructure must provide sufficient available storage
bandwidth. For the best Storage VMotion performance, make sure that the available bandwidth will be
well above the minimum required. We therefore recommend that you consider the information in
“VMware VMotion and Storage VMotion” on page 37 when planning a deployment.
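A rough estimate of how long a migration will take at a given spare bandwidth can show whether the available headroom is adequate. The Python sketch below is a hypothetical back-of-the-envelope calculation; all figures are illustrative:

```python
# Illustrative estimate of Storage VMotion copy time from virtual disk size
# and the storage bandwidth left over after normal workload I/O.

def svmotion_minutes(disk_gb, spare_bandwidth_mbps):
    """Minutes to copy `disk_gb` gigabytes at `spare_bandwidth_mbps` MB/s."""
    return (disk_gb * 1024) / spare_bandwidth_mbps / 60

# Example: a 100 GB virtual disk with 100 MB/s of spare storage bandwidth.
minutes = svmotion_minutes(100, 100)
print(round(minutes, 1))  # 17.1 minutes
```

If the spare bandwidth is only a few MB/s, the same copy stretches to many hours, which is why the available bandwidth should be well above the minimum required.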