When initial placement, dynamic rebalancing, and intelligent memory migration work in conjunction, they
ensure good memory performance on NUMA systems, even in the presence of changing workloads. When a
major workload change occurs, for instance when new virtual machines are started, the system takes time to
readjust, migrating virtual machines and memory to new locations. After a short period, typically seconds
or minutes, the system completes its readjustments and reaches a steady state.
Transparent Page Sharing Optimized for NUMA
Many ESXi workloads present opportunities for sharing memory across virtual machines.
For example, several virtual machines might be running instances of the same guest operating system, have
the same applications or components loaded, or contain common data. In such cases, ESXi systems use a
proprietary transparent page-sharing technique to eliminate redundant copies of memory pages. With
memory sharing, a workload running in virtual machines often consumes less memory than it might when
running on physical machines. As a result, higher levels of overcommitment can be supported efficiently.
Transparent page sharing for ESXi systems has also been optimized for use on NUMA systems. On NUMA
systems, pages are shared per-node, so each NUMA node has its own local copy of heavily shared pages.
When virtual machines use shared pages, they do not need access to remote memory.
N This default behavior is the same in all previous versions of ESX and ESXi.
Resource Management in NUMA Architectures
You can perform resource management with different types of NUMA architecture.
With the proliferation of highly multicore systems, NUMA architectures are becoming more popular because they allow better performance scaling of memory-intensive workloads. All modern Intel and AMD systems have NUMA support built into the processors. Additionally, traditional NUMA systems such as the IBM Enterprise X-Architecture extend Intel and AMD processors with specialized chipset support to provide NUMA behavior.
Typically, you can use BIOS seings to enable and disable NUMA behavior. For example, in AMD Opteron-
based HP Proliant servers, NUMA can be disabled by enabling node interleaving in the BIOS. If NUMA is
enabled, the BIOS builds a system resource allocation table (SRAT) which ESXi uses to generate the NUMA
information used in optimizations. For scheduling fairness, NUMA optimizations are not enabled for
systems with too few cores per NUMA node or too few cores overall. You can modify the
numa.rebalancecorestotal and numa.rebalancecoresnode options to change this behavior.
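As a sketch of how these thresholds might be inspected and adjusted, the commands below assume the options are exposed as host advanced settings under the /Numa/ path (Numa.RebalanceCoresTotal and Numa.RebalanceCoresNode); verify the exact option names on your host with esxcli before changing them.

   # Show the current values of the NUMA rebalancing thresholds.
   esxcli system settings advanced list -o /Numa/RebalanceCoresTotal
   esxcli system settings advanced list -o /Numa/RebalanceCoresNode

   # Lower the thresholds so NUMA optimizations are enabled on a smaller host,
   # for example one with four cores in total and two cores per node.
   esxcli system settings advanced set -o /Numa/RebalanceCoresTotal -i 4
   esxcli system settings advanced set -o /Numa/RebalanceCoresNode -i 2

Because the defaults exist for scheduling fairness, changing these thresholds is usually only worthwhile on small hosts where the defaults disable the NUMA optimizations entirely.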
Using Virtual NUMA
vSphere 5.0 and later includes support for exposing virtual NUMA topology to guest operating systems,
which can improve performance by facilitating guest operating system and application NUMA
optimizations.
Virtual NUMA topology is available to hardware version 8 virtual machines and is enabled by default when
the number of virtual CPUs is greater than eight. You can also manually influence virtual NUMA topology using advanced configuration options.
The rst time a virtual NUMA enabled virtual machine is powered on, its virtual NUMA topology is based
on the NUMA topology of the underlying physical host. Once a virtual machines virtual NUMA topology is
initialized, it does not change unless the number of vCPUs in that virtual machine is changed.
The virtual NUMA topology does not take into account the amount of memory configured for a virtual machine, and it is not influenced by the number of virtual sockets and cores per socket configured for the virtual machine.
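As an illustration of the advanced configuration options mentioned above, the following configuration-file entries are a sketch that assumes the numa.vcpu.maxPerVirtualNode and numa.autosize.once virtual NUMA controls apply to your hardware version; check the virtual NUMA controls documented for your release before using them.

   numa.vcpu.maxPerVirtualNode = "4"
   numa.autosize.once = "FALSE"

The first entry caps the number of virtual CPUs placed in a single virtual NUMA node at four; the second allows the virtual NUMA topology to be reevaluated at every power-on rather than being locked in after the first power-on.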