Table Of Contents
- vSphere Resource Management
- Contents
- About vSphere Resource Management
- Updated Information
- Getting Started with Resource Management
- Configuring Resource Allocation Settings
- CPU Virtualization Basics
- Administering CPU Resources
- Memory Virtualization Basics
- Administering Memory Resources
- View Graphics Information
- Managing Storage I/O Resources
- Managing Resource Pools
- Creating a DRS Cluster
- Using DRS Clusters to Manage Resources
- Creating a Datastore Cluster
- Initial Placement and Ongoing Balancing
- Storage Migration Recommendations
- Create a Datastore Cluster
- Enable and Disable Storage DRS
- Set the Automation Level for Datastore Clusters
- Setting the Aggressiveness Level for Storage DRS
- Datastore Cluster Requirements
- Adding and Removing Datastores from a Datastore Cluster
- Using Datastore Clusters to Manage Storage Resources
- Using NUMA Systems with ESXi
- Advanced Attributes
- Fault Definitions
- Virtual Machine is Pinned
- Virtual Machine not Compatible with any Host
- VM/VM DRS Rule Violated when Moving to another Host
- Host Incompatible with Virtual Machine
- Host has Virtual Machine that Violates VM/VM DRS Rules
- Host has Insufficient Capacity for Virtual Machine
- Host in Incorrect State
- Host has Insufficient Number of Physical CPUs for Virtual Machine
- Host has Insufficient Capacity for Each Virtual Machine CPU
- The Virtual Machine is in vMotion
- No Active Host in Cluster
- Insufficient Resources
- Insufficient Resources to Satisfy Configured Failover Level for HA
- No Compatible Hard Affinity Host
- No Compatible Soft Affinity Host
- Soft Rule Violation Correction Disallowed
- Soft Rule Violation Correction Impact
- DRS Troubleshooting Information
- Cluster Problems
- Load Imbalance on Cluster
- Cluster is Yellow
- Cluster is Red Because of Inconsistent Resource Pool
- Cluster is Red Because Failover Capacity is Violated
- No Hosts are Powered Off When Total Cluster Load is Low
- Hosts are Powered Off When Total Cluster Load is High
- DRS Seldom or Never Performs vMotion Migrations
- Host Problems
- DRS Recommends Host be Powered On to Increase Capacity When Total Cluster Load Is Low
- Total Cluster Load Is High
- Total Cluster Load Is Low
- DRS Does Not Evacuate a Host Requested to Enter Maintenance or Standby Mode
- DRS Does Not Move Any Virtual Machines onto a Host
- DRS Does Not Move Any Virtual Machines from a Host
- Virtual Machine Problems
- Index
Table 14‑1. Advanced Options for Virtual NUMA Controls (Continued)

Option: numa.autosize.once
Description: When you create a virtual machine template with these settings, the settings are guaranteed to remain the same every time you subsequently power on the virtual machine. The virtual NUMA topology is reevaluated if the configured number of virtual CPUs on the virtual machine is modified.
Default value: TRUE

Option: numa.vcpu.min
Description: Minimum number of virtual CPUs in a virtual machine that are required in order to generate a virtual NUMA topology.
Default value: 9
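These options can be added as advanced configuration parameters of the virtual machine (for example, under Edit Settings > VM Options > Advanced > Configuration Parameters in the vSphere Web Client), which correspond to entries in the virtual machine's .vmx file. The following is a minimal sketch of how such entries might look, using the default values from the table for illustration only:

numa.autosize.once = "TRUE"
numa.vcpu.min = "9"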
NOTE When you set numa.autosize to TRUE, and if the configuration is set up manually or with a script,
some guests might not be able to handle dynamic changes.
For example, a Linux application configured with the numactl system utility is set up and tested on one
physical host with four cores per node. The host requires two NUMA nodes for a virtual machine with eight
virtual CPUs. If the same virtual machine is run on a system with eight cores per node, which might occur
during a vMotion operation, and numa.autosize is set to TRUE, only one virtual NUMA node will be
created (rather than two virtual NUMA nodes). When numactl references the second virtual NUMA node,
the operation will fail.
To avoid this, scripts should be intelligent enough to first query numactl --hardware. Otherwise, you must
set the NUMA topology explicitly or allow the default numa.autosize.once setting to take effect.
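For example, a guest launch script along the following lines queries numactl --hardware first and derives its bindings from the topology it actually finds, instead of assuming a fixed node count. This is a minimal sketch; the application command (./myapp) and the node choices are placeholders.

#!/bin/sh
# Ask the guest how many NUMA nodes it currently sees.
nodes=$(numactl --hardware | awk '/^available:/ {print $2}')

if [ "$nodes" -ge 2 ]; then
    # At least two virtual NUMA nodes: spread the workload across nodes 0 and 1.
    numactl --cpunodebind=0,1 --membind=0,1 ./myapp
else
    # Only one virtual NUMA node: bind everything to node 0.
    numactl --cpunodebind=0 --membind=0 ./myapp
fi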
Specifying NUMA Controls
If you have applications that use a lot of memory, or if you have a small number of virtual machines, you might
want to optimize performance by specifying virtual machine CPU and memory placement explicitly.
Specifying controls is useful if a virtual machine runs a memory-intensive workload, such as an in-memory
database or a scientific computing application with a large data set. You might also want to optimize
NUMA placements manually if the system workload is known to be simple and unchanging. For example,
an eight-processor system running eight virtual machines with similar workloads is easy to optimize
explicitly.
NOTE In most situations, the ESXi host’s automatic NUMA optimizations result in good performance.
ESXi provides three sets of controls for NUMA placement, so that administrators can control memory and
processor placement of a virtual machine.
The vSphere Web Client lets you specify the following options.
NUMA Node Affinity
    When you set this option, NUMA can schedule a virtual machine only on the nodes specified in the affinity.
CPU Affinity
    When you set this option, a virtual machine uses only the processors specified in the affinity.
Memory Affinity
    When you set this option, the server allocates memory only on the specified nodes.
A virtual machine is still managed by NUMA when you specify NUMA node affinity, but its virtual CPUs can be scheduled only on the nodes specified in the NUMA node affinity. Likewise, memory can be obtained only from the nodes specified in the NUMA node affinity. When you specify CPU or memory affinities, a virtual machine ceases to be managed by NUMA. NUMA management of these virtual machines becomes effective again when you remove the CPU and memory affinity constraints.
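As an illustration, NUMA node affinity can be specified as an advanced configuration parameter on the virtual machine. The single .vmx-style entry below would constrain a virtual machine to NUMA nodes 0 and 1; this is a minimal sketch, and the node numbers are placeholders:

numa.nodeAffinity = "0,1"

CPU affinity and memory affinity are normally edited through the CPU and Memory settings of the virtual machine in the vSphere Web Client rather than as raw entries. Keep in mind that constraining placement in any of these ways can limit the NUMA scheduler's ability to rebalance the virtual machine across nodes.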