The high latency of remote memory accesses can leave the processors under-utilized, constantly waiting for
data to be transferred to the local node, and the NUMA connection can become a bottleneck for applications
with high memory bandwidth demands.
Furthermore, performance on such a system can be highly variable. It varies, for example, if an application
has memory located locally on one benchmarking run, but a subsequent run happens to place all of that
memory on a remote node. This phenomenon can make capacity planning difficult.
Some high-end UNIX systems provide support for NUMA optimizations in their compilers and
programming libraries. This support requires software developers to tune and recompile their programs for
optimal performance. Optimizations for one system are not guaranteed to work well on the next generation
of the same system. Other systems have allowed an administrator to explicitly decide on the node on which
an application should run. While this might be acceptable for certain applications that demand 100 percent
of their memory to be local, it creates an administrative burden and can lead to imbalance between nodes
when workloads change.
Ideally, the system software provides transparent NUMA support, so that applications can benefit
immediately without modifications. The system should maximize the use of local memory and schedule
programs intelligently without requiring constant administrator intervention. Finally, it must respond well
to changing conditions without compromising fairness or performance.
How ESXi NUMA Scheduling Works
ESXi uses a sophisticated NUMA scheduler to dynamically balance processor load and memory locality
across NUMA nodes. A simplified sketch of this placement and rebalancing logic follows the numbered steps below.
1 Each virtual machine managed by the NUMA scheduler is assigned a home node. A home node is one
of the system’s NUMA nodes containing processors and local memory, as indicated by the System
Resource Allocation Table (SRAT).
2 When memory is allocated to a virtual machine, the ESXi host preferentially allocates it from the home
node. The virtual CPUs of the virtual machine are constrained to run on the home node to maximize
memory locality.
3 The NUMA scheduler can dynamically change a virtual machine's home node to respond to changes in
system load. The scheduler might migrate a virtual machine to a new home node to reduce processor
load imbalance. Because this might cause more of its memory to be remote, the scheduler might migrate
the virtual machine’s memory dynamically to its new home node to improve memory locality. The
NUMA scheduler might also swap virtual machines between nodes when this improves overall
memory locality.
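The following minimal Python sketch illustrates the placement and rebalancing ideas from the steps above. It is a conceptual illustration only, not the ESXi NUMA scheduler's actual algorithm; the node and virtual machine names, the load metric, and the rebalancing threshold are all illustrative assumptions.

    # Conceptual sketch only: NOT the ESXi NUMA scheduler's actual algorithm.
    # It illustrates the ideas above: each VM is assigned a home node at
    # placement time, and a VM can be moved to a new home node when node
    # loads become unbalanced (ESXi would also migrate its memory).
    from dataclasses import dataclass, field

    @dataclass
    class NumaNode:
        node_id: int
        vms: list = field(default_factory=list)   # VMs whose home node this is

        @property
        def load(self):
            # Toy load metric: total vCPUs homed on this node.
            return sum(vm.vcpus for vm in self.vms)

    @dataclass
    class VM:
        name: str
        vcpus: int
        home_node: int = -1

    def place(vm, nodes):
        # Initial placement: choose the least loaded node as the home node.
        target = min(nodes, key=lambda n: n.load)
        target.vms.append(vm)
        vm.home_node = target.node_id

    def rebalance(nodes, threshold=2):
        # Move one VM from the busiest node to the least loaded node when
        # the load difference exceeds a threshold.
        busiest = max(nodes, key=lambda n: n.load)
        idlest = min(nodes, key=lambda n: n.load)
        if busiest is not idlest and busiest.vms and busiest.load - idlest.load > threshold:
            vm = busiest.vms.pop()
            idlest.vms.append(vm)
            vm.home_node = idlest.node_id

    nodes = [NumaNode(0), NumaNode(1)]
    for name, vcpus in [("web01", 4), ("db01", 8), ("app01", 2)]:
        place(VM(name, vcpus), nodes)
    rebalance(nodes)
    for node in nodes:
        print(node.node_id, [vm.name for vm in node.vms])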
Some virtual machines are not managed by the ESXi NUMA scheduler. For example, if you manually set the
processor or memory affinity for a virtual machine, the NUMA scheduler might not be able to manage this
virtual machine. Virtual machines that are not managed by the NUMA scheduler still run correctly.
However, they don't benefit from ESXi NUMA optimizations.
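As an illustration, processor affinity can be set through the vSphere API. The following hedged sketch uses the pyVmomi SDK, which is not part of this guide; the vCenter Server address, credentials, virtual machine name, and CPU numbers are placeholders, and error handling is omitted. A virtual machine configured this way might no longer be managed by the NUMA scheduler, as described above.

    # Hedged pyVmomi sketch: manually pinning a VM's virtual CPUs to
    # specific physical CPUs. Host, credentials, VM name, and CPU numbers
    # are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()   # lab use only; verify certificates in production
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=ctx)

    # Locate the VM by name with a simple container view.
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "legacy-app-vm")

    # Restrict the VM's virtual CPUs to physical CPUs 0-3. Once such manual
    # affinity is set, the NUMA scheduler might not manage this VM.
    spec = vim.vm.ConfigSpec()
    spec.cpuAffinity = vim.vm.AffinityInfo(affinitySet=[0, 1, 2, 3])
    vm.ReconfigVM_Task(spec)

    Disconnect(si)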
The NUMA scheduling and memory placement policies in ESXi can manage all virtual machines
transparently, so that administrators do not need to address the complexity of balancing virtual machines
between nodes explicitly.
The optimizations work seamlessly regardless of the type of guest operating system. ESXi provides NUMA
support even to virtual machines that do not support NUMA hardware, such as Windows NT 4.0. As a
result, you can take advantage of new hardware even with legacy operating systems.
A virtual machine that has more virtual processors than the number of physical processor cores available on
a single hardware node can be managed automatically. The NUMA scheduler accommodates such a virtual
machine by having it span NUMA nodes. That is, it is split up as multiple NUMA clients, each of which is
assigned to a node and then managed by the scheduler as a normal, non-spanning client. This can improve
the performance of certain memory-intensive workloads with high locality. For information on configuring
the behavior of this feature, see “Advanced Virtual Machine Attributes,” on page 118.
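For example, the numa.vcpu.maxPerVirtualNode attribute described in that section could be applied as an extraConfig option through the vSphere API. The short pyVmomi sketch below assumes vm is a vim.VirtualMachine object obtained as in the earlier sketch, and the value of 8 is only an example; treat it as a sketch rather than a recommended setting.

    # Hedged pyVmomi sketch: setting the numa.vcpu.maxPerVirtualNode advanced
    # attribute (see "Advanced Virtual Machine Attributes") as an extraConfig
    # option. Assumes vm is a vim.VirtualMachine object; the value 8 is only
    # an example and takes effect at the next power-on.
    from pyVmomi import vim

    spec = vim.vm.ConfigSpec()
    spec.extraConfig = [vim.option.OptionValue(key="numa.vcpu.maxPerVirtualNode",
                                               value="8")]
    vm.ReconfigVM_Task(spec)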