Some virtual machines are not managed by the ESXi NUMA scheduler. For example, if you manually set
the processor or memory affinity for a virtual machine, the NUMA scheduler might not be able to manage
this virtual machine. Virtual machines that are not managed by the NUMA scheduler still run correctly.
However, they do not benefit from ESXi NUMA optimizations.
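For illustration, manual affinity of this kind typically appears in the virtual machine configuration file (.vmx) as options similar to the following. Treat the option names and values as an assumed example and verify them against your ESXi release before relying on them:

sched.cpu.affinity = "0,1,2,3"
sched.mem.affinity = "0"

The first entry pins the virtual machine's virtual CPUs to physical CPUs 0 through 3; the second allocates its memory from NUMA node 0. With constraints such as these in place, the virtual machine runs where you specified, which is why the NUMA scheduler no longer rebalances it across nodes.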
The NUMA scheduling and memory placement policies in ESXi can manage all virtual machines
transparently, so that administrators do not need to address the complexity of balancing virtual machines
between nodes explicitly.
The optimizations work seamlessly regardless of the type of guest operating system. ESXi provides NUMA support even to virtual machines whose guest operating system does not support NUMA hardware, such as Windows NT 4.0. As a result, you can take advantage of new hardware even with legacy operating systems.
A virtual machine that has more virtual processors than the number of physical processor cores available
on a single hardware node can be managed automatically. The NUMA scheduler accommodates such a
virtual machine by having it span NUMA nodes. That is, the virtual machine is split into multiple NUMA clients, each of which is assigned to a node and then managed by the scheduler as a normal, non-spanning client. This
can improve the performance of certain memory-intensive workloads with high locality. For information on
configuring the behavior of this feature, see Advanced Virtual Machine Attributes.
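As a hedged example of the attributes listed under Advanced Virtual Machine Attributes (confirm the attribute name and its default for your release), the following .vmx entry caps how many of a wide virtual machine's vCPUs can be scheduled on a single node, and therefore influences how the virtual machine is divided into NUMA clients:

numa.vcpu.maxPerMachineNode = "8"

Lowering the value splits the virtual machine across more nodes; raising it keeps more of its vCPUs together on one node.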
ESXi 5.0 and later includes support for exposing virtual NUMA topology to guest operating systems. For
more information about virtual NUMA control, see Using Virtual NUMA.
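As a brief, hedged illustration of the controls discussed in Using Virtual NUMA (verify the attribute names and defaults for your release), the virtual NUMA topology that a guest sees can be influenced with advanced virtual machine attributes such as:

numa.vcpu.min = "9"
numa.vcpu.maxPerVirtualNode = "8"

The first attribute sets the minimum number of virtual CPUs a virtual machine must have before a virtual NUMA topology is exposed to the guest; the second limits how many virtual CPUs are grouped into each virtual NUMA node.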
VMware NUMA Optimization Algorithms and Settings
This section describes the algorithms and settings used by ESXi to maximize application performance
while still maintaining resource guarantees.
Home Nodes and Initial Placement
When a virtual machine is powered on, ESXi assigns it a home node. A virtual machine runs only on
processors within its home node, and its newly allocated memory comes from the home node as well.
Unless a virtual machine’s home node changes, it uses only local memory, avoiding the performance
penalties associated with remote memory accesses to other NUMA nodes.
When a virtual machine is powered on, it is assigned an initial home node so that the overall CPU and
memory load among NUMA nodes remains balanced. Because internode latencies in a large NUMA
system can vary greatly, ESXi determines these internode latencies at boot time and uses this information
when initially placing virtual machines that are wider than a single NUMA node. These wide virtual
machines are placed on NUMA nodes that are close to each other for lowest memory access latencies.
Initial placement-only approaches are usually sufficient for systems that run only a single workload, such
as a benchmarking configuration that remains unchanged as long as the system is running. However, this
approach is unable to guarantee good performance and fairness for a datacenter-class system that
supports changing workloads. Therefore, in addition to initial placement, ESXi 5.0 and later performs dynamic migration of virtual CPUs and memory between NUMA nodes to improve CPU balance and increase memory locality.
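The behavior of this dynamic rebalancing is governed by host-level advanced NUMA attributes described in the Advanced Attributes chapter. As a hedged example (confirm the attribute names and defaults for your ESXi release), an administrator might adjust:

Numa.RebalanceEnable = 1
Numa.RebalancePeriod = 2000

The first attribute enables or disables the periodic NUMA rebalancer (disabling it is appropriate only for testing); the second controls how often, in milliseconds, rebalance passes run.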