When initial placement, dynamic rebalancing, and intelligent memory migration work in conjunction, they
ensure good memory performance on NUMA systems, even in the presence of changing workloads. When a
major workload change occurs, for instance when new virtual machines are started, the system takes time to
readjust, migrating virtual machines and memory to new locations. After a short period, typically seconds
or minutes, the system completes its readjustments and reaches a steady state.
Transparent Page Sharing Optimized for NUMA
Many ESXi workloads present opportunities for sharing memory across virtual machines.
For example, several virtual machines might be running instances of the same guest operating system, have
the same applications or components loaded, or contain common data. In such cases, ESXi systems use a
proprietary transparent page-sharing technique to securely eliminate redundant copies of memory pages.
With memory sharing, a workload running in virtual machines often consumes less memory than it would
when running on physical machines. As a result, higher levels of overcommitment can be supported
efficiently.
Transparent page sharing for ESXi systems has also been optimized for use on NUMA systems. On NUMA
systems, pages are shared per node, so each NUMA node has its own local copy of heavily shared pages.
When virtual machines use shared pages, they do not need to access remote memory.
NOTE This default behavior is the same in all previous versions of ESX and ESXi.
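If you need to control page sharing for an individual virtual machine, a minimal sketch is to set an
advanced attribute in the virtual machine's configuration (.vmx) file. The attribute name
sched.mem.pshare.enable used below is an assumption to verify against the advanced virtual machine
attribute reference for your release.

   # Opt a single virtual machine out of transparent page sharing
   # (assumed per-VM attribute; the default is TRUE)
   sched.mem.pshare.enable = "FALSE"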
Resource Management in NUMA Architectures
You can perform resource management with different types of NUMA architecture.
With the proliferation of highly multicore systems, NUMA architectures are becoming more popular
because they allow better performance scaling of memory-intensive workloads. All modern Intel and
AMD systems have NUMA support built into the processors. Additionally, there are traditional NUMA
systems, such as the IBM Enterprise X-Architecture, that use specialized chipset support to extend Intel
and AMD processors with NUMA behavior.
Typically, you can use BIOS settings to enable and disable NUMA behavior. For example, on AMD Opteron-
based HP ProLiant servers, NUMA can be disabled by enabling node interleaving in the BIOS. If NUMA is
enabled, the BIOS builds a static resource affinity table (SRAT), which ESXi uses to generate the NUMA
information used in optimizations. For scheduling fairness, NUMA optimizations are not enabled for
systems with too few cores per NUMA node or too few cores overall. You can modify the
numa.rebalancecorestotal and numa.rebalancecoresnode options to change this behavior.
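As a minimal sketch, these options can be inspected and changed from the ESXi Shell with esxcli. The
option paths shown here, /Numa/RebalanceCoresTotal and /Numa/RebalanceCoresNode, are assumed to
correspond to the attribute names above; verify them on your host before setting values.

   # List the current value and default of each rebalancing threshold
   esxcli system settings advanced list -o /Numa/RebalanceCoresTotal
   esxcli system settings advanced list -o /Numa/RebalanceCoresNode

   # Lower the thresholds so NUMA rebalancing also runs on small hosts
   # (illustrative values; assumed option paths as noted above)
   esxcli system settings advanced set -o /Numa/RebalanceCoresTotal -i 2
   esxcli system settings advanced set -o /Numa/RebalanceCoresNode -i 1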
Using Virtual NUMA
vSphere 5.0 and later includes support for exposing virtual NUMA topology to guest operating systems,
which can improve performance by facilitating guest operating system and application NUMA
optimizations.
Virtual NUMA topology is available to hardware version 8 virtual machines and is enabled by default when
the number of virtual CPUs is greater than eight. You can also manually influence virtual NUMA topology
using advanced configuration options.
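For example, one such advanced option limits how many virtual CPUs are grouped into a single virtual
NUMA node. The sketch below assumes the option name numa.vcpu.maxPerVirtualNode, set in the virtual
machine's configuration (.vmx) file.

   # Cap each virtual NUMA node at four vCPUs (assumed option name)
   numa.vcpu.maxPerVirtualNode = "4"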
You can affect the virtual NUMA topology with two settings in the vSphere Web Client: number of virtual
sockets and number of cores per socket for a virtual machine. If the number of cores per socket
(cpuid.coresPerSocket) is greater than one, and the number of virtual cores in the virtual machine is greater
than eight, the virtual NUMA node size matches the virtual socket size. If the number of cores per socket is
less than or equal to one, virtual NUMA nodes are created to match the topology of the first physical host
where the virtual machine is powered on.
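For instance, the following virtual machine configuration (.vmx) entries sketch a 16-vCPU virtual machine
presented as four virtual sockets with four cores each. The values are illustrative, not a recommendation.

   # 16 vCPUs arranged as 4 virtual sockets x 4 cores per socket
   numvcpus = "16"
   cpuid.coresPerSocket = "4"

Because cpuid.coresPerSocket is greater than one and the virtual machine has more than eight virtual
cores, the rule above yields four virtual NUMA nodes of four vCPUs each, matching the virtual socket size.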