Table of Contents
- vSphere Resource Management
- Contents
- About vSphere Resource Management
- Getting Started with Resource Management
- Configuring Resource Allocation Settings
- CPU Virtualization Basics
- Administering CPU Resources
- Memory Virtualization Basics
- Administering Memory Resources
- Persistent Memory
- Configuring Virtual Graphics
- Managing Storage I/O Resources
- Managing Resource Pools
- Creating a DRS Cluster
- Using DRS Clusters to Manage Resources
- Creating a Datastore Cluster
- Initial Placement and Ongoing Balancing
- Storage Migration Recommendations
- Create a Datastore Cluster
- Enable and Disable Storage DRS
- Set the Automation Level for Datastore Clusters
- Setting the Aggressiveness Level for Storage DRS
- Datastore Cluster Requirements
- Adding and Removing Datastores from a Datastore Cluster
- Using Datastore Clusters to Manage Storage Resources
- Using NUMA Systems with ESXi
- Advanced Attributes
- Fault Definitions
- Virtual Machine Is Pinned
- Virtual Machine Not Compatible with Any Host
- VM/VM DRS Rule Violated When Moving to Another Host
- Host Incompatible with Virtual Machine
- Host Has Virtual Machine That Violates VM/VM DRS Rules
- Host Has Insufficient Capacity for Virtual Machine
- Host in Incorrect State
- Host Has Insufficient Number of Physical CPUs for Virtual Machine
- Host Has Insufficient Capacity for Each Virtual Machine CPU
- The Virtual Machine Is in vMotion
- No Active Host in Cluster
- Insufficient Resources
- Insufficient Resources to Satisfy Configured Failover Level for HA
- No Compatible Hard Affinity Host
- No Compatible Soft Affinity Host
- Soft Rule Violation Correction Disallowed
- Soft Rule Violation Correction Impact
- DRS Troubleshooting Information
- Cluster Problems
- Load Imbalance on Cluster
- Cluster Is Yellow
- Cluster Is Red Because of Inconsistent Resource Pool
- Cluster Is Red Because Failover Capacity Is Violated
- No Hosts Are Powered Off When Total Cluster Load Is Low
- Hosts Are Powered Off When Total Cluster Load Is High
- DRS Seldom or Never Performs vMotion Migrations
- Host Problems
- DRS Recommends Host Be Powered On to Increase Capacity When Total Cluster Load Is Low
- Total Cluster Load Is High
- Total Cluster Load Is Low
- DRS Does Not Evacuate a Host Requested to Enter Maintenance or Standby Mode
- DRS Does Not Move Any Virtual Machines onto a Host
- DRS Does Not Move Any Virtual Machines from a Host
- Virtual Machine Problems
Using NUMA Systems with ESXi
ESXi supports memory access optimization for Intel and AMD Opteron processors in server architectures
that support NUMA (non-uniform memory access).
After you understand how ESXi NUMA scheduling is performed and how the VMware NUMA algorithms
work, you can specify NUMA controls to optimize the performance of your virtual machines.
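For example, per-virtual-machine NUMA controls are set as advanced configuration options. The following is a minimal sketch of the kind of .vmx entries covered by the Specifying NUMA Controls topic later in this chapter; the values shown are illustrative, and option names should be verified against your ESXi version.

    numa.nodeAffinity = "0,1"
    numa.vcpu.maxPerVirtualNode = "4"

The first option constrains the virtual machine's CPU and memory placement to NUMA nodes 0 and 1; the second caps the number of virtual CPUs per virtual NUMA node.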
This chapter includes the following topics:
- What is NUMA?
- How ESXi NUMA Scheduling Works
- VMware NUMA Optimization Algorithms and Settings
- Resource Management in NUMA Architectures
- Using Virtual NUMA
- Specifying NUMA Controls
What is NUMA?
NUMA systems are advanced server platforms with more than one system bus. They can harness large
numbers of processors in a single system image with superior price-to-performance ratios.
For the past decade, processor clock speed has increased dramatically. A multi-gigahertz CPU, however,
needs to be supplied with a large amount of memory bandwidth to use its processing power effectively.
Even a single CPU running a memory-intensive workload, such as a scientific computing application, can
be constrained by memory bandwidth.
This problem is amplified on symmetric multiprocessing (SMP) systems, where many processors must
compete for bandwidth on the same system bus. High-end systems often try to solve this problem by
building a high-speed data bus. However, such a solution is expensive and limited in scalability.
NUMA is an alternative approach that links several small, cost-effective nodes using a high-performance
connection. Each node contains processors and memory, much like a small SMP system. However, an
advanced memory controller allows a node to use memory on all other nodes, creating a single system
image. When a processor accesses memory that does not lie within its own node (remote memory), the
data must be transferred over the NUMA connection, which is slower than accessing local memory.
Memory access times are not uniform and depend on the location of the memory and the node from
which it is accessed, as the technology’s name implies.
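To make the local-versus-remote asymmetry concrete, the following minimal sketch (not part of the vSphere documentation) uses the Linux libnuma API on a bare-metal NUMA host to time one pass over a buffer bound to the local node and one pass over a buffer bound to a remote node. The buffer size and the assumption of at least two nodes are illustrative; compile with gcc -O2 numa_demo.c -lnuma.

    #define _GNU_SOURCE              /* for sched_getcpu() */
    #include <numa.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define BUF_SIZE (256UL * 1024 * 1024)   /* 256 MiB, large enough to defeat CPU caches */

    /* Increment one byte per cache line across the buffer; return elapsed seconds. */
    static double touch(volatile char *buf, size_t size)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t i = 0; i < size; i += 64)
            buf[i]++;
        clock_gettime(CLOCK_MONOTONIC, &t1);
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void)
    {
        if (numa_available() < 0 || numa_max_node() < 1) {
            fprintf(stderr, "requires a NUMA system with at least two nodes\n");
            return 1;
        }
        /* The thread is not pinned, so "local" is approximate in this sketch. */
        int local  = numa_node_of_cpu(sched_getcpu());
        int remote = (local + 1) % (numa_max_node() + 1);

        /* Bind one buffer to the local node and one to a remote node. */
        char *near_buf = numa_alloc_onnode(BUF_SIZE, local);
        char *far_buf  = numa_alloc_onnode(BUF_SIZE, remote);
        if (!near_buf || !far_buf) {
            fprintf(stderr, "allocation failed\n");
            return 1;
        }
        memset(near_buf, 0, BUF_SIZE);   /* fault the pages in on their nodes */
        memset(far_buf, 0, BUF_SIZE);

        printf("local  (node %d): %.3f s\n", local,  touch(near_buf, BUF_SIZE));
        printf("remote (node %d): %.3f s\n", remote, touch(far_buf,  BUF_SIZE));

        numa_free(near_buf, BUF_SIZE);
        numa_free(far_buf,  BUF_SIZE);
        return 0;
    }

On a typical two-node host, the remote pass takes noticeably longer than the local pass. This non-uniformity is exactly what the ESXi NUMA scheduler works to minimize by keeping a virtual machine's memory local to its home node.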