Transparent page sharing for ESXi systems has also been optimized for use on NUMA systems. On
NUMA systems, pages are shared per-node, so each NUMA node has its own local copy of heavily
shared pages. When virtual machines use shared pages, they do not need to access remote memory.
Note This default behavior is the same in all previous versions of ESX and ESXi.
Resource Management in NUMA Architectures
You can perform resource management with different types of NUMA architecture.
With the proliferation of highly multicore systems, NUMA architectures are becoming more popular as
these architectures allow better performance scaling of memory intensive workloads. All modern Intel and
AMD systems have NUMA support built into the processors. Additionally, there are traditional NUMA
systems like the IBM Enterprise X-Architecture that extend Intel and AMD processors with NUMA
behavior with specialized chipset support.
Typically, you can use BIOS settings to enable and disable NUMA behavior. For example, in AMD
Opteron-based HP ProLiant servers, NUMA can be disabled by enabling node interleaving in the BIOS. If
NUMA is enabled, the BIOS builds a System Resource Allocation Table (SRAT), which ESXi uses to
generate the NUMA information used in optimizations. For scheduling fairness, NUMA optimizations are
not enabled for systems with too few cores per NUMA node or too few cores overall. You can modify the
Numa.RebalanceCoresTotal and Numa.RebalanceCoresNode options to change this behavior.
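As a hedged illustration, these thresholds can be inspected and changed from the ESXi shell with esxcli advanced settings. The /Numa/ option paths below follow the usual advanced-option naming, and the values are examples only; verify both against your host before applying them:

```shell
# Inspect the current NUMA rebalancer thresholds (option paths assumed;
# confirm with "esxcli system settings advanced list" on your host).
esxcli system settings advanced list -o /Numa/RebalanceCoresTotal
esxcli system settings advanced list -o /Numa/RebalanceCoresNode

# Example only: lower the thresholds so NUMA optimizations remain enabled
# on a host with few cores.
esxcli system settings advanced set -o /Numa/RebalanceCoresTotal -i 4
esxcli system settings advanced set -o /Numa/RebalanceCoresNode -i 2
```

These are host-wide settings, so a change affects the NUMA scheduling of every virtual machine on the host.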
Using Virtual NUMA
vSphere 5.0 and later includes support for exposing virtual NUMA topology to guest operating systems,
which can improve performance by facilitating guest operating system and application NUMA
optimizations.
Virtual NUMA topology is available to hardware version 8 virtual machines and is enabled by default when
the number of virtual CPUs is greater than eight. You can also manually influence virtual NUMA topology
using advanced configuration options.
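For example, the following hypothetical .vmx excerpt sketches the kind of advanced options involved. The option names, such as numa.vcpu.maxPerVirtualNode, correspond to the Virtual NUMA Controls, and the values shown are illustrative only:

```
# Cap each virtual NUMA node at four vCPUs (the default is eight).
numa.vcpu.maxPerVirtualNode = "4"
# Recompute the virtual NUMA topology at each power-on instead of
# keeping the topology captured at first power-on.
numa.autosize = "TRUE"
```

As with any advanced configuration option, test such overrides before relying on them, because a topology that does not match the physical host can hurt performance.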
The first time a virtual NUMA-enabled virtual machine is powered on, its virtual NUMA topology is based
on the NUMA topology of the underlying physical host. Once a virtual machine's virtual NUMA topology is
initialized, it does not change unless the number of vCPUs in that virtual machine is changed.
The virtual NUMA topology does not take into account the memory configured for a virtual machine, and
it is not influenced by the number of virtual sockets or the number of cores per socket for the virtual
machine.
If the virtual NUMA topology needs to be overridden, see Virtual NUMA Controls.
Note Enabling CPU Hot Add disables virtual NUMA. See https://kb.vmware.com/kb/2040375.
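As an illustrative .vmx fragment (the setting name is assumed; verify it against the KB article cited in the note), leaving CPU Hot Add disabled preserves the virtual NUMA topology exposed to the guest:

```
# With CPU Hot Add off, ESXi continues to expose virtual NUMA to the guest.
vcpu.hotadd = "FALSE"
```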
vSphere Resource Management
VMware, Inc. 123