If the virtual NUMA topology needs to be overridden, see “Virtual NUMA Controls,” on page 111.
Note: Enabling CPU HotAdd will disable virtual NUMA. See https://kb.vmware.com/kb/2040375.
Virtual NUMA Controls
For virtual machines with disproportionately large memory consumption, you can use advanced options to
override the default virtual CPU settings.
You can add these advanced options to the virtual machine configuration file.
Table 14-1. Advanced Options for Virtual NUMA Controls

| Option | Description | Default Value |
|---|---|---|
| cpuid.coresPerSocket | Determines the number of virtual cores per virtual CPU socket. This option does not affect the virtual NUMA topology unless numa.vcpu.followcorespersocket is configured. | 1 |
| numa.vcpu.maxPerVirtualNode | Determines the number of virtual NUMA nodes by splitting the total vCPU count evenly, with this value as its divisor. | 8 |
| numa.autosize.once | When you create a virtual machine template with these settings, the settings remain the same every time you subsequently power on the virtual machine, as long as the value is the default TRUE. If the value is set to FALSE, the virtual NUMA topology is updated every time the virtual machine is powered on. The virtual NUMA topology is reevaluated whenever the configured number of virtual CPUs on the virtual machine is modified. | TRUE |
| numa.vcpu.min | The minimum number of virtual CPUs in a virtual machine that are required to generate a virtual NUMA topology. A virtual machine is always UMA when its size is smaller than numa.vcpu.min. | 9 |
| numa.vcpu.followcorespersocket | If set to 1, reverts to the old behavior of tying virtual NUMA node sizing to cpuid.coresPerSocket. | 0 |
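These options are added as key/value entries in the virtual machine's configuration (.vmx) file, or as Configuration Parameters in the vSphere Web Client, while the virtual machine is powered off. The following is a minimal sketch, assuming a hypothetical 16-vCPU virtual machine that you want presented as four virtual NUMA nodes of four vCPUs each; the option names come from Table 14-1, but the values are illustrative only:

```
cpuid.coresPerSocket = "4"
numa.vcpu.maxPerVirtualNode = "4"
numa.autosize.once = "FALSE"
```

Here cpuid.coresPerSocket exposes four cores per virtual socket (by default this does not affect the virtual NUMA topology unless numa.vcpu.followcorespersocket is set to 1), numa.vcpu.maxPerVirtualNode caps each virtual NUMA node at four vCPUs so the 16 vCPUs split evenly into four nodes, and setting numa.autosize.once to FALSE causes the topology to be reevaluated at every power-on rather than frozen at first power-on.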
Specifying NUMA Controls
If you have applications that use a lot of memory or have a small number of virtual machines, you might
want to optimize performance by specifying virtual machine CPU and memory placement explicitly.
Specifying controls is useful if a virtual machine runs a memory-intensive workload, such as an in-memory
database or a scientific computing application with a large data set. You might also want to optimize NUMA
placements manually if the system workload is known to be simple and unchanging. For example, an eight-
processor system running eight virtual machines with similar workloads is easy to optimize explicitly.
Note: In most situations, the ESXi host’s automatic NUMA optimizations result in good performance.
ESXi provides three sets of controls for NUMA placement, so that administrators can control memory and
processor placement of a virtual machine.
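As a hedged sketch of one such control, assuming a host with two physical NUMA nodes (0 and 1), the numa.nodeAffinity advanced option constrains the virtual machine's NUMA clients to the listed nodes (comma-separated), again set in the .vmx file or through Configuration Parameters:

```
numa.nodeAffinity = "0"
```

Because this pins the virtual machine's placement, it limits the NUMA scheduler's ability to rebalance across nodes; remove the constraint when it is no longer needed.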