When initial placement, dynamic rebalancing, and intelligent memory migration work in conjunction, they
ensure good memory performance on NUMA systems, even in the presence of changing workloads. When a
major workload change occurs, for instance when new virtual machines are started, the system takes time to
readjust, migrating virtual machines and memory to new locations. After a short period, typically seconds
or minutes, the system completes its readjustments and reaches a steady state.
Transparent Page Sharing Optimized for NUMA
Many ESXi workloads present opportunities for sharing memory across virtual machines.
For example, several virtual machines might be running instances of the same guest operating system, have
the same applications or components loaded, or contain common data. In such cases, ESXi systems use a
proprietary transparent page-sharing technique to eliminate redundant copies of memory pages. With
memory sharing, a workload running in virtual machines often consumes less memory than it might when
running on physical machines. As a result, higher levels of overcommitment can be supported efficiently.
Transparent page sharing for ESXi systems has also been optimized for use on NUMA systems. On NUMA
systems, pages are shared per-node, so each NUMA node has its own local copy of heavily shared pages.
When virtual machines use shared pages, they do not need access to remote memory.
Note: This default behavior is the same in all previous versions of ESX and ESXi.
Resource Management in NUMA Architectures
You can perform resource management with different types of NUMA architecture.
With the proliferation of highly multicore systems, NUMA architectures are becoming more popular because
these architectures allow better performance scaling of memory-intensive workloads. All modern Intel and
AMD systems have NUMA support built into the processors. Additionally, there are traditional NUMA
systems, such as the IBM Enterprise X-Architecture, that use specialized chipset support to extend Intel and
AMD processors with NUMA behavior.
Typically, you can use BIOS settings to enable and disable NUMA behavior. For example, in AMD Opteron-
based HP ProLiant servers, NUMA can be disabled by enabling node interleaving in the BIOS. If NUMA is
enabled, the BIOS builds a system resource allocation table (SRAT), which ESXi uses to generate the NUMA
information used in optimizations. For scheduling fairness, NUMA optimizations are not enabled for
systems with too few cores per NUMA node or too few cores overall. You can modify the
numa.rebalancecorestotal and numa.rebalancecoresnode options to change this behavior.
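For illustration only, the following pyVmomi sketch shows one way these host-level advanced options could be inspected and changed through the vSphere API. The vCenter address, credentials, host name, and the chosen values are placeholders, and the option keys are assumed to be exposed as Numa.RebalanceCoresTotal and Numa.RebalanceCoresNode; verify the exact keys and permitted ranges on your host before applying any change.

# Sketch: query and update the NUMA rebalancing thresholds on one host.
# All names, credentials, and values below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the target ESXi host by name.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi01.example.com")
view.DestroyView()

adv = host.configManager.advancedOption   # vim.option.OptionManager

# Show the current values of the two rebalancing thresholds.
for key in ("Numa.RebalanceCoresTotal", "Numa.RebalanceCoresNode"):
    print(key, adv.QueryOptions(key)[0].value)

# Example only: lower the thresholds so NUMA rebalancing stays enabled on a
# host with few cores. The values 2 and 1 are illustrative, not recommendations.
# Note: some pyVmomi versions require integer option values to be typed as long.
adv.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="Numa.RebalanceCoresTotal", value=2),
    vim.option.OptionValue(key="Numa.RebalanceCoresNode", value=1),
])

Disconnect(si)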
Using Virtual NUMA
vSphere 5.0 and later includes support for exposing virtual NUMA topology to guest operating systems,
which can improve performance by facilitating guest operating system and application NUMA
optimizations.
Virtual NUMA topology is available to hardware version 8 virtual machines and is enabled by default when
the number of virtual CPUs is greater than eight. You can also manually influence virtual NUMA topology
using advanced configuration options.
The first time a virtual NUMA enabled virtual machine is powered on, its virtual NUMA topology is based
on the NUMA topology of the underlying physical host. Once a virtual machine's virtual NUMA topology is
initialized, it does not change unless the number of vCPUs in that virtual machine is changed.
The virtual NUMA topology does not consider the memory configured to a virtual machine. The virtual
NUMA topology is not influenced by the number of virtual sockets and number of cores per socket for a
virtual machine.
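As a hedged sketch of the kind of per-virtual-machine advanced configuration option referred to above, the following function writes a key/value pair into a virtual machine's extraConfig through pyVmomi. The VM name and the option key numa.vcpu.maxPerVirtualNode are assumptions for illustration; confirm the virtual NUMA control keys supported by your ESXi version, and note that such changes normally take effect at the next power-on.

# Sketch: set a per-VM advanced option that influences virtual NUMA topology.
# The option key and value here are placeholders, not a recommendation.
from pyVmomi import vim
from pyVim.task import WaitForTask

def set_vnuma_option(si, vm_name, key="numa.vcpu.maxPerVirtualNode", value="4"):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == vm_name)
    view.DestroyView()

    # Advanced options live in the VM's extraConfig as key/value pairs;
    # extraConfig values are passed as strings.
    spec = vim.vm.ConfigSpec(
        extraConfig=[vim.option.OptionValue(key=key, value=value)])
    WaitForTask(vm.ReconfigVM_Task(spec=spec))

# Example call, reusing the service instance from the earlier sketch:
# set_vnuma_option(si, "app-vm-01")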