Challenges for Operating Systems
Because a NUMA architecture provides a single system image, it can often run an operating system with
no special optimizations.
The high latency of remote memory accesses can leave the processors under-utilized, constantly waiting
for data to be transferred to the local node. In addition, the NUMA connection can become a bottleneck
for applications with high memory-bandwidth demands.
Furthermore, performance on such a system can be highly variable. It varies, for example, if an
application has memory located locally on one benchmarking run, but a subsequent run happens to place
all of that memory on a remote node. This phenomenon can make capacity planning difficult.
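This placement effect is easy to reproduce outside of ESXi. The following is a minimal sketch, not taken from the VMware documentation, that assumes a Linux host with libnuma installed and at least two NUMA nodes; the buffer size and the touch_pages helper are illustrative choices. It pins the running thread to node 0 and times the same sequential walk over node-local and remote memory.

    /*
     * Sketch: compare node-local and remote memory access time.
     * Assumes libnuma and at least two NUMA nodes.
     * Build: gcc -O2 numa_walk.c -lnuma
     */
    #include <numa.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define BUF_SIZE (256UL * 1024 * 1024)   /* 256 MiB working set */

    /* Read one byte per cache line; return elapsed seconds. */
    static double touch_pages(volatile char *buf)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < BUF_SIZE; i += 64)
            (void)buf[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        if (numa_available() < 0 || numa_max_node() < 1) {
            fprintf(stderr, "need a NUMA system with at least two nodes\n");
            return 1;
        }
        numa_run_on_node(0);                      /* run this thread on node 0 */
        char *local  = numa_alloc_onnode(BUF_SIZE, 0);
        char *remote = numa_alloc_onnode(BUF_SIZE, 1);
        if (!local || !remote) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }
        touch_pages(local);                       /* fault pages in before timing */
        touch_pages(remote);
        printf("node-local memory: %.3f s\n", touch_pages(local));
        printf("remote memory:     %.3f s\n", touch_pages(remote));
        numa_free(local, BUF_SIZE);
        numa_free(remote, BUF_SIZE);
        return 0;
    }

Running the walk once per buffer before timing separates page-fault cost from steady-state access latency; the remote timing corresponds to the unlucky benchmarking run described above.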
Some high-end UNIX systems provide support for NUMA optimizations in their compilers and
programming libraries. This support requires software developers to tune and recompile their programs
for optimal performance. Optimizations for one system are not guaranteed to work well on the next
generation of the same system. Other systems have allowed an administrator to explicitly decide on the
node on which an application should run. While this might be acceptable for certain applications that
demand 100 percent of their memory to be local, it creates an administrative burden and can lead to
imbalance between nodes when workloads change.
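vSphere exposes this kind of manual placement through advanced virtual machine attributes such as numa.nodeAffinity, which constrains a virtual machine to specific NUMA nodes. A one-line illustration of the configuration-parameter syntax follows; the node numbers are host-specific assumptions.

    numa.nodeAffinity = "0,1"

As the paragraph above notes for other systems, constraining affinity this way can interfere with the NUMA scheduler's own rebalancing, so it is best reserved for workloads that truly require fixed placement.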
Ideally, the system software provides transparent NUMA support, so that applications can benefit
immediately without modifications. The system should maximize the use of local memory and schedule
programs intelligently without requiring constant administrator intervention. Finally, it must respond well to
changing conditions without compromising fairness or performance.
How ESXi NUMA Scheduling Works
ESXi uses a sophisticated NUMA scheduler to dynamically balance processor load and memory locality.
1. Each virtual machine managed by the NUMA scheduler is assigned a home node. A home node is one of the system's NUMA nodes containing processors and local memory, as indicated by the System Resource Allocation Table (SRAT).

2. When memory is allocated to a virtual machine, the ESXi host preferentially allocates it from the home node. The virtual CPUs of the virtual machine are constrained to run on the home node to maximize memory locality.

3. The NUMA scheduler can dynamically change a virtual machine's home node to respond to changes in system load. The scheduler might migrate a virtual machine to a new home node to reduce processor load imbalance. Because this might cause more of its memory to be remote, the scheduler might migrate the virtual machine's memory dynamically to its new home node to improve memory locality. The NUMA scheduler might also swap virtual machines between nodes when this improves overall memory locality. A toy model of these three steps appears after this list.
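The toy model below is a sketch under loudly stated assumptions: two nodes, four virtual machines, abstract integer load units, and a deliberately naive choice of which virtual machine to migrate. None of the identifiers (pick_home, rebalance, and so on) come from ESXi, and the real scheduler also weighs memory locality, fairness, and migration cost.

    /* Toy model of home-node assignment and rebalancing (not ESXi source). */
    #include <stdio.h>

    #define NODES 2
    #define VMS   4

    struct vm { int id, home, load; };   /* home = current home node */

    static int node_load(const struct vm v[], int node)
    {
        int sum = 0;
        for (int i = 0; i < VMS; i++)
            if (v[i].home == node) sum += v[i].load;
        return sum;
    }

    /* Step 1: a new VM gets the least-loaded node as its home node. */
    static int pick_home(const struct vm v[])
    {
        int best = 0;
        for (int n = 1; n < NODES; n++)
            if (node_load(v, n) < node_load(v, best)) best = n;
        return best;
    }

    /* Step 2 is modeled only as a rule: memory for a VM is allocated from
     * v->home first, so vCPUs constrained to the home node see local pages. */

    /* Step 3: if node loads diverge, move one VM's home node to the lighter
     * node; the real scheduler then migrates its memory pages over time. */
    static void rebalance(struct vm v[])
    {
        int hot  = node_load(v, 0) > node_load(v, 1) ? 0 : 1;
        int cold = 1 - hot;
        if (node_load(v, hot) - node_load(v, cold) <= 1) return;  /* balanced */
        for (int i = 0; i < VMS; i++)
            if (v[i].home == hot) {         /* naive: pick any VM on the hot node */
                printf("migrating VM %d: node %d -> node %d\n", v[i].id, hot, cold);
                v[i].home = cold;
                return;
            }
    }

    int main(void)
    {
        struct vm v[VMS] = {{0,-1,0},{1,-1,0},{2,-1,0},{3,-1,0}};
        int demand[VMS]  = {4, 1, 1, 1};
        for (int i = 0; i < VMS; i++) {
            v[i].home = pick_home(v);
            v[i].load = demand[i];
            printf("VM %d placed on node %d\n", i, v[i].home);
        }
        v[0].load = 1;   /* load changes at run time ...            */
        v[3].load = 5;   /* ... leaving the two nodes imbalanced    */
        rebalance(v);
        printf("final load: node0=%d node1=%d\n", node_load(v, 0), node_load(v, 1));
        return 0;
    }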