Before performing an alignment, carefully evaluate the performance impact of the unaligned VMFS
partition on your particular workload. The degree of improvement from alignment is highly
dependent on workloads and array types. You might want to refer to the alignment
recommendations from your array vendor for further information.
Multiple heavily-used virtual machines concurrently accessing the same VMFS volume, or multiple
VMFS volumes backed by the same LUNs, can result in decreased storage performance.
Appropriately-configured storage architectures can avoid this issue. For information about storage
configuration and performance, see Scalable Storage Performance (available at
http://www.vmware.com/resources/techresources/1059).
Metadata-intensive operations can impact virtual machine I/O performance. These operations include:

- Administrative tasks such as backups, provisioning virtual disks, cloning virtual machines, or manipulating file permissions.
- Scripts or cron jobs that open and close files. These should be written to minimize open and close operations (open the file once, do all that needs to be done, then close it, rather than repeatedly opening and closing; see the sketch after this list).
- Dynamically growing .vmdk files for thin-provisioned virtual disks. When possible, use thick disks for better performance.
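To illustrate the open-once pattern for such scripts, here is a minimal Python sketch; the file path and function names are hypothetical, not taken from the VMware documentation:

    LOG_PATH = "/var/log/app/records.log"  # hypothetical path

    def write_records_repeated_open(records):
        # Anti-pattern: every record triggers an open and a close, each of
        # which is a metadata operation on the underlying volume.
        for record in records:
            with open(LOG_PATH, "a") as f:
                f.write(record + "\n")

    def write_records_single_open(records):
        # Preferred: open once, do all that needs to be done, then close.
        with open(LOG_PATH, "a") as f:
            for record in records:
                f.write(record + "\n")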
You can reduce the effect of these metadata-intensive operations by scheduling them at times when you don’t expect high virtual machine I/O load.
More information on this topic can be found in Scalable Storage Performance (available at
http://www.vmware.com/resources/techresources/1059).
I/O latency statistics can be monitored using esxtop (or resxtop), which reports device latency, time
spent in the kernel, and latency seen by the guest.
Make sure I/O is not queueing up in the VMkernel by checking the number of queued commands
reported by esxtop (or resxtop).
Because the number of queued commands is an instantaneous statistic, you will need to monitor it over a period of time to see if you are hitting the queue limit. To determine the number of queued commands, look for the QUED counter in the esxtop (or resxtop) storage resource screen. If queueing occurs, try adjusting queue depths. For further information see KB article 1267, listed in “Related Publications” on page 8.
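One way to watch QUED over a period of time is to capture batch output (for example, resxtop -b -d 2 -n 300 > stats.csv) and post-process it. The following Python sketch reports the peak value of every counter whose name mentions queued commands; the exact counter-name text in the CSV header is an assumption, so adjust the substring to match your output:

    import csv

    def peak_queued(csv_path, match="Queued"):
        # esxtop/resxtop batch mode writes one CSV header row of counter
        # names followed by one row per sampling interval.
        with open(csv_path, newline="") as f:
            reader = csv.reader(f)
            header = next(reader)
            cols = [i for i, name in enumerate(header) if match in name]
            peaks = {i: 0.0 for i in cols}
            for row in reader:
                for i in cols:
                    try:
                        peaks[i] = max(peaks[i], float(row[i]))
                    except (ValueError, IndexError):
                        pass
        return {header[i]: peaks[i] for i in cols}

    if __name__ == "__main__":
        for counter, peak in sorted(peak_queued("stats.csv").items()):
            print(peak, counter)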
The driver queue depth can be set for some VMkernel drivers. For example, while the default queue depth
of the QLogic driver is 32, specifying a larger queue depth may yield higher performance. You can also
adjust the maximum number of outstanding disk requests per virtual machine in the VMkernel through
the vSphere Client. Setting this parameter can help equalize disk bandwidth across virtual machines. For
additional information see KB article 1268, listed in “Related Publications” on page 8.
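These settings are normally changed through the vSphere Client, as described above. Purely as an illustration, here is a rough sketch using pyVmomi (a Python vSphere API binding released after this guide) to update the VMkernel advanced option Disk.SchedNumReqOutstanding; the host name and credentials are placeholders, and the option value's type must match what the host reports:

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab use only
    si = SmartConnect(host="esx-host.example.com", user="root",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]  # first host; filter by name in real use
        adv = host.configManager.advancedOption
        # Disk.SchedNumReqOutstanding is a numeric (long) option; a type
        # mismatch is rejected by the server, so check adv.QueryOptions()
        # for the current value and type first.
        adv.UpdateOptions(changedValue=[vim.option.OptionValue(
            key="Disk.SchedNumReqOutstanding", value=64)])
    finally:
        Disconnect(si)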
By default, Active/Passive storage arrays use the Most Recently Used path policy. Do not use the Fixed path policy for Active/Passive storage arrays; doing so can cause LUN thrashing. For more information, see the VMware SAN Configuration Guide, listed in “Related Publications” on page 8.
By default, Active/Active storage arrays use Fixed path policy. When using this policy you can maximize
the utilization of your bandwidth to the storage array by designating preferred paths to each LUN
through different storage controllers. For more information, see the VMware SAN Configuration Guide,
listed in “Related Publications” on page 8.
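To verify which policy each LUN is actually using, you can read the host's multipathing configuration. This sketch reuses the pyVmomi connection setup from the earlier example; the policy strings shown in the comment are assumptions based on the NMP plugin naming:

    from pyVmomi import vim  # connect and obtain a vim.HostSystem as above

    def print_path_policies(host):
        storage = host.configManager.storageSystem
        for lun in storage.storageDeviceInfo.multipathInfo.lun:
            # Policy strings look like "VMW_PSP_MRU" (Most Recently Used)
            # or "VMW_PSP_FIXED" (Fixed) on recent hosts.
            print(lun.id, lun.policy.policy)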
Do not allow your service console’s root file system to become full. (This point doesn’t apply to ESXi, which does not include a service console.)
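A simple cron-driven check can catch this before it becomes a problem. The following minimal Python sketch, with an illustrative 90% threshold, is not from the VMware documentation:

    import shutil

    def check_root_usage(threshold=0.90, path="/"):
        # Warn when the file system holding `path` passes the threshold.
        usage = shutil.disk_usage(path)
        fraction = usage.used / usage.total
        if fraction >= threshold:
            print("WARNING: %s is %.0f%% full" % (path, fraction * 100))
        return fraction

    if __name__ == "__main__":
        check_root_usage()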
Disk I/O bandwidth can be unequally allocated to virtual machines by using the vSphere Client (select
Edit virtual machine settings, choose the Resources tab, select Disk, then change the Shares field).
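The same change can be scripted against the vSphere API. Below is a rough pyVmomi sketch (pyVmomi postdates this guide; the function name and disk label are placeholders) that sets shares through a virtual disk's storageIOAllocation property; on older hosts the deprecated VirtualDisk.shares property serves the same purpose:

    from pyVmomi import vim  # connect and obtain a vim.VirtualMachine as above

    def set_disk_shares(vm, disk_label, level, shares=1000):
        # level: "low", "normal", "high", or "custom" (vim.SharesInfo.Level);
        # the shares count is honored only when level is "custom".
        for dev in vm.config.hardware.device:
            if isinstance(dev, vim.vm.device.VirtualDisk) and \
                    dev.deviceInfo.label == disk_label:
                dev.storageIOAllocation = \
                    vim.StorageResourceManager.IOAllocationInfo(
                        shares=vim.SharesInfo(level=level, shares=shares))
                change = vim.vm.device.VirtualDeviceSpec(
                    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                    device=dev)
                return vm.ReconfigVM_Task(
                    spec=vim.vm.ConfigSpec(deviceChange=[change]))
        raise ValueError("disk not found: %s" % disk_label)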