Some virtual machines are not managed by the ESXi NUMA scheduler. For example, if you manually set
the processor or memory affinity for a virtual machine, the NUMA scheduler might not be able to manage
this virtual machine. Virtual machines that are not managed by the NUMA scheduler still run correctly.
However, they do not benefit from ESXi NUMA optimizations.
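As an illustration, manually pinning a virtual machine to specific processors and a specific NUMA node can be done with advanced configuration options in the virtual machine's .vmx file. The option names below (sched.cpu.affinity and sched.mem.affinity) and the values shown are examples only; verify the exact names and syntax against the documentation for your vSphere release before using them.

```
# Example only: restrict the VM's vCPUs to physical CPUs 0-3
sched.cpu.affinity = "0,1,2,3"
# Example only: restrict the VM's memory to NUMA node 0
sched.mem.affinity = "0"
```

Setting options such as these takes the virtual machine out of the NUMA scheduler's control, so the scheduler can no longer rebalance it, which is why manual affinity is generally discouraged.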
The NUMA scheduling and memory placement policies in ESXi can manage all virtual machines
transparently, so that administrators do not need to address the complexity of balancing virtual machines
between nodes explicitly.
The optimizations work seamlessly regardless of the type of guest operating system. ESXi provides
NUMA support even to virtual machines that do not support NUMA hardware, such as Windows NT 4.0.
As a result, you can take advantage of new hardware even with legacy operating systems.
A virtual machine that has more virtual processors than the number of physical processor cores available
on a single hardware node can be managed automatically. The NUMA scheduler accommodates such a
virtual machine by having it span NUMA nodes: the virtual machine is split into multiple NUMA clients,
each of which is assigned to a node and then managed by the scheduler as a normal, non-spanning client. This
can improve the performance of certain memory-intensive workloads with high locality. For information on
configuring the behavior of this feature, see Advanced Virtual Machine Attributes.
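The splitting of a wide virtual machine into per-node NUMA clients can be pictured with a short sketch. This is not ESXi code; the function name and the chunking policy are illustrative assumptions, showing only the general idea of dividing vCPUs into groups no larger than one node.

```python
def split_into_numa_clients(num_vcpus, cores_per_node):
    """Illustrative sketch: split a wide VM's vCPUs into NUMA clients,
    each holding at most cores_per_node vCPUs (not actual ESXi logic)."""
    clients = []
    start = 0
    while start < num_vcpus:
        end = min(start + cores_per_node, num_vcpus)
        clients.append(list(range(start, end)))
        start = end
    return clients

# A 10-vCPU VM on a host with 4 cores per NUMA node yields three clients:
print(split_into_numa_clients(10, 4))
# → [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

Each resulting client can then be placed on, and rebalanced across, NUMA nodes independently, which is what lets the scheduler treat a wide virtual machine like several ordinary ones.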
ESXi 5.0 and later includes support for exposing virtual NUMA topology to guest operating systems. For
more information about virtual NUMA control, see Using Virtual NUMA.
VMware NUMA Optimization Algorithms and Settings
This section describes the algorithms and settings used by ESXi to maximize application performance
while still maintaining resource guarantees.
Home Nodes and Initial Placement
When a virtual machine is powered on, ESXi assigns it a home node. A virtual machine runs only on
processors within its home node, and its newly allocated memory comes from the home node as well.
Unless a virtual machine’s home node changes, it uses only local memory, avoiding the performance
penalties associated with remote memory accesses to other NUMA nodes.
The initial home node is chosen so that the overall CPU and
memory load among NUMA nodes remains balanced. Because internode latencies in a large NUMA
system can vary greatly, ESXi determines these internode latencies at boot time and uses this information
when initially placing virtual machines that are wider than a single NUMA node. These wide virtual
machines are placed on NUMA nodes that are close to each other for lowest memory access latencies.
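The placement decision described above can be sketched as a small cost minimization. Everything in this example is a simplifying assumption (the function name, the equal weighting of load and latency, the exhaustive search): it only illustrates the idea of choosing lightly loaded nodes that are close to each other for a wide virtual machine.

```python
import itertools

def place_wide_vm(nodes_needed, node_load, latency):
    """Illustrative sketch: choose the set of NUMA nodes that minimizes
    combined node load plus pairwise internode latency (weights are
    arbitrary; real ESXi placement is more sophisticated)."""
    node_ids = range(len(node_load))

    def cost(combo):
        load = sum(node_load[n] for n in combo)
        lat = sum(latency[a][b] for a, b in itertools.combinations(combo, 2))
        return load + lat

    return min(itertools.combinations(node_ids, nodes_needed), key=cost)

# Four nodes: 0/1 and 2/3 are adjacent pairs (low latency); node 2 is busy.
load = [0.2, 0.3, 0.9, 0.1]
lat = [[0, 1, 3, 3],
       [1, 0, 3, 3],
       [3, 3, 0, 1],
       [3, 3, 1, 0]]
print(place_wide_vm(2, load, lat))
# → (0, 1): the lightly loaded, adjacent pair wins
```

Even though node 3 is the idlest, pairing it with node 0 or 1 would incur high internode latency, so the adjacent pair 0 and 1 gives the lowest combined cost.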
Initial placement-only approaches are usually sufficient for systems that run only a single workload, such
as a benchmarking configuration that remains unchanged as long as the system is running. However, this
approach alone cannot guarantee good performance and fairness for a datacenter-class system that
supports changing workloads. Therefore, in addition to initial placement, ESXi 5.0 and later performs dynamic
migration of virtual CPUs and memory between NUMA nodes to improve CPU balance and increase
memory locality.
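The idea behind dynamic rebalancing can be shown with a deliberately simple sketch. The function name, the single-migration policy, and the imbalance threshold are all illustrative assumptions, not ESXi's actual algorithm; the sketch only captures the notion of periodically checking node loads and moving work from the busiest node to the idlest one when the gap is large enough.

```python
def rebalance_step(node_load, threshold=0.25):
    """Illustrative sketch of one rebalancing pass: if the load gap
    between the busiest and idlest NUMA node exceeds the threshold,
    suggest migrating one client between them (names are hypothetical)."""
    busiest = max(range(len(node_load)), key=node_load.__getitem__)
    idlest = min(range(len(node_load)), key=node_load.__getitem__)
    if node_load[busiest] - node_load[idlest] > threshold:
        return (busiest, idlest)  # migrate a client: busiest -> idlest
    return None  # loads are close enough; no migration needed

print(rebalance_step([0.9, 0.4, 0.2]))  # → (0, 2)
print(rebalance_step([0.5, 0.5, 0.5]))  # → None
```

After such a migration, the moved virtual machine's memory is initially remote, which is why memory migration toward the new home node must accompany CPU rebalancing to restore locality.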
vSphere Resource Management
VMware, Inc. 121