Table 141. Advanced Options for Virtual NUMA Controls (Continued)

Option: numa.autosize.once
Description: When you create a virtual machine template with these settings, the settings are guaranteed to remain the same every time you subsequently power on the virtual machine. The virtual NUMA topology will be reevaluated if the configured number of virtual CPUs on the virtual machine is modified.
Default Value: TRUE

Option: numa.vcpu.min
Description: Minimum number of virtual CPUs in a virtual machine that are required in order to generate a virtual NUMA topology.
Default Value: 9
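These options are advanced virtual machine configuration parameters. Assuming you edit the virtual machine's configuration (.vmx) file directly, or add the parameters through the Web Client's advanced configuration settings, the entries described in Table 141 might look like the following sketch (the values shown are the defaults):

    numa.autosize.once = "TRUE"
    numa.vcpu.min = "9"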
NOTE If you set numa.autosize to TRUE and the configuration is set up manually or with a script, some guests might not be able to handle dynamic changes.
For example, a Linux application configured with the numactl system utility is set up and tested on one
physical host with four cores per node. The host requires two NUMA nodes for a virtual machine with eight
virtual CPUs. If the same virtual machine is run on a system with eight cores per node, which might occur
during a vMotion operation, and numa.autosize is set to TRUE, only one virtual NUMA node will be
created (rather than two virtual NUMA nodes). When numactl references the second virtual NUMA node,
the operation will fail.
To avoid this, scripts should be intelligent enough to first query numactl --hardware. Otherwise, you must
set the NUMA topology explicitly or allow the default numa.autosize.once setting to take effect.
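For example, a guest launch script along the following lines could query the virtual NUMA topology before binding and fall back when only one node is present. This is a sketch only; the application name (./my_app) and the choice of node binding are hypothetical placeholders:

    #!/bin/sh
    # Ask the guest kernel how many NUMA nodes it currently sees.
    nodes=$(numactl --hardware | awk '/^available:/ {print $2}')

    if [ "$nodes" -ge 2 ]; then
        # At least two virtual NUMA nodes: bind to the second node as tested.
        numactl --cpunodebind=1 --membind=1 ./my_app
    else
        # A single virtual NUMA node: run without an explicit node binding.
        ./my_app
    fi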
Specifying NUMA Controls
If you have applications that use a lot of memory, or if you have a small number of virtual machines, you might want to optimize performance by specifying virtual machine CPU and memory placement explicitly.
Specifying controls is useful if a virtual machine runs a memory-intensive workload, such as an in-memory
database or a scientific computing application with a large data set. You might also want to optimize
NUMA placements manually if the system workload is known to be simple and unchanging. For example,
an eight-processor system running eight virtual machines with similar workloads is easy to optimize
explicitly.
NOTE In most situations, the ESXi host’s automatic NUMA optimizations result in good performance.
ESXi provides three sets of controls for NUMA placement, so that administrators can control memory and
processor placement of a virtual machine.
The vSphere Web Client lets you specify the following options.
NUMA Node Affinity
When you set this option, NUMA can schedule a virtual machine only on the
nodes specified in the affinity.
CPU Affinity
When you set this option, a virtual machine uses only the processors
specified in the affinity.
Memory Affinity
When you set this option, the server allocates memory only on the specified
nodes.
A virtual machine is still managed by NUMA when you specify NUMA node affinity, but its virtual CPUs
can be scheduled only on the nodes specified in the NUMA node affinity. Likewise, memory can be
obtained only from the nodes specified in the NUMA node affinity. When you specify CPU or memory affinities, a virtual machine ceases to be managed by NUMA. NUMA management of these virtual machines becomes effective again when you remove the CPU and memory affinity constraints.
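As a point of reference, the NUMA node affinity control corresponds to the numa.nodeAffinity advanced option in the virtual machine's configuration file. Assuming a host with at least two NUMA nodes, an entry such as the following sketch constrains the virtual machine's virtual CPUs and memory to nodes 0 and 1:

    numa.nodeAffinity = "0,1"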