6.5.1
Table Of Contents
- vSphere Resource Management
- Contents
- About vSphere Resource Management
- Getting Started with Resource Management
- Configuring Resource Allocation Settings
- CPU Virtualization Basics
- Administering CPU Resources
- Memory Virtualization Basics
- Administering Memory Resources
- Configuring Virtual Graphics
- Managing Storage I/O Resources
- Managing Resource Pools
- Creating a DRS Cluster
- Using DRS Clusters to Manage Resources
- Creating a Datastore Cluster
- Initial Placement and Ongoing Balancing
- Storage Migration Recommendations
- Create a Datastore Cluster
- Enable and Disable Storage DRS
- Set the Automation Level for Datastore Clusters
- Setting the Aggressiveness Level for Storage DRS
- Datastore Cluster Requirements
- Adding and Removing Datastores from a Datastore Cluster
- Using Datastore Clusters to Manage Storage Resources
- Using NUMA Systems with ESXi
- Advanced Attributes
- Fault Definitions
- Virtual Machine is Pinned
- Virtual Machine not Compatible with any Host
- VM/VM DRS Rule Violated when Moving to another Host
- Host Incompatible with Virtual Machine
- Host Has Virtual Machine That Violates VM/VM DRS Rules
- Host has Insufficient Capacity for Virtual Machine
- Host in Incorrect State
- Host Has Insufficient Number of Physical CPUs for Virtual Machine
- Host has Insufficient Capacity for Each Virtual Machine CPU
- The Virtual Machine Is in vMotion
- No Active Host in Cluster
- Insufficient Resources
- Insufficient Resources to Satisfy Configured Failover Level for HA
- No Compatible Hard Affinity Host
- No Compatible Soft Affinity Host
- Soft Rule Violation Correction Disallowed
- Soft Rule Violation Correction Impact
- DRS Troubleshooting Information
- Cluster Problems
- Load Imbalance on Cluster
- Cluster is Yellow
- Cluster is Red Because of Inconsistent Resource Pool
- Cluster Is Red Because Failover Capacity Is Violated
- No Hosts are Powered Off When Total Cluster Load is Low
- Hosts Are Powered-off When Total Cluster Load Is High
- DRS Seldom or Never Performs vMotion Migrations
- Host Problems
- DRS Recommends Host Be Powered on to Increase Capacity When Total Cluster Load Is Low
- Total Cluster Load Is High
- Total Cluster Load Is Low
- DRS Does Not Evacuate a Host Requested to Enter Maintenance or Standby Mode
- DRS Does Not Move Any Virtual Machines onto a Host
- DRS Does Not Move Any Virtual Machines from a Host
- Virtual Machine Problems
- Cluster Problems
- Index
The vSphere Web Client lets you specify the following options.
NUMA Node Affinity
When you set this option, NUMA can schedule a virtual machine only on the
nodes specified in the affinity.
CPU Affinity
When you set this option, a virtual machine uses only the processors
specified in the affinity.
Memory Affinity
When you set this option, the server allocates memory only on the specified
nodes.
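These three controls also map to per-VM advanced configuration parameters. The following sketch shows the commonly documented .vmx parameter names; the node and processor numbers are illustrative, and you should verify the option names against your ESXi release.

    numa.nodeAffinity = "0,1"         (restrict NUMA scheduling to nodes 0 and 1)
    sched.cpu.affinity = "0,1,2,3"    (pin virtual CPUs to logical processors 0-3)
    sched.mem.affinity = "0"          (allocate memory only from node 0)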
A virtual machine is still managed by NUMA when you specify NUMA node affinity, but its virtual CPUs
can be scheduled only on the nodes specified in the NUMA node affinity. Likewise, memory can be obtained
only from the nodes specified in the NUMA node affinity. When you specify CPU or memory affinities, a
virtual machine ceases to be managed by NUMA. NUMA management of these virtual machines is effective
again when you remove the CPU and memory affinity constraints.
Manual NUMA placement might interfere with the ESXi resource management algorithms, which distribute
processor resources fairly across a system. For example, if you manually place 10 virtual machines with
processor-intensive workloads on one node and only 2 virtual machines on another node, it is impossible
for the system to give all 12 virtual machines equal shares of system resources.
Associate Virtual Machines with Specific Processors
You might be able to improve the performance of the applications on a virtual machine by pinning its virtual
CPUs to fixed processors. Pinning also prevents the virtual CPUs from migrating across NUMA nodes.
Procedure
1 Find the virtual machine in the vSphere Web Client inventory.
a To find a virtual machine, select a data center, folder, cluster, resource pool, or host.
b Click the VMs tab.
2 Right-click the virtual machine and click Edit Settings.
3 Select the Virtual Hardware tab, and expand CPU.
4 Under Scheduling Affinity, set the CPU affinity to the preferred processors.
Note: You must manually select all processors in the NUMA node. CPU affinity is specified on a
per-processor, not on a per-node, basis.
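For example, on a hypothetical host with 8 logical processors per NUMA node, numbered sequentially, confining a virtual machine to node 1 means selecting all of processors 8 through 15, which corresponds to the advanced setting (parameter name as commonly documented for ESXi):

    sched.cpu.affinity = "8,9,10,11,12,13,14,15"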
Associate Memory Allocations with Specific NUMA Nodes Using Memory
Affinity
You can specify that all future memory allocations on a virtual machine use pages associated with specific
NUMA nodes (also known as manual memory affinity).
Note: Specify nodes to be used for future memory allocations only if you have also specified CPU affinity.
If you make manual changes only to the memory affinity settings, automatic NUMA rebalancing does not
work properly.
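For example, to confine a virtual machine entirely to NUMA node 0 on a hypothetical host with 8 logical processors per node, set the CPU and memory affinities together so that they stay consistent (parameter names as commonly documented; values are illustrative):

    sched.cpu.affinity = "0,1,2,3,4,5,6,7"
    sched.mem.affinity = "0"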
Procedure
1 Browse to the virtual machine in the vSphere Web Client navigator.
2 Click the Configure tab.
3 Click Settings, and click VM Hardware.