vSphere 6.7
Table of Contents
- vSphere Resource Management
- Contents
- About vSphere Resource Management
- Getting Started with Resource Management
- Configuring Resource Allocation Settings
- CPU Virtualization Basics
- Administering CPU Resources
- Memory Virtualization Basics
- Administering Memory Resources
- Persistent Memory
- Configuring Virtual Graphics
- Managing Storage I/O Resources
- Managing Resource Pools
- Creating a DRS Cluster
- Using DRS Clusters to Manage Resources
- Creating a Datastore Cluster
- Initial Placement and Ongoing Balancing
- Storage Migration Recommendations
- Create a Datastore Cluster
- Enable and Disable Storage DRS
- Set the Automation Level for Datastore Clusters
- Setting the Aggressiveness Level for Storage DRS
- Datastore Cluster Requirements
- Adding and Removing Datastores from a Datastore Cluster
- Using Datastore Clusters to Manage Storage Resources
- Using NUMA Systems with ESXi
- Advanced Attributes
- Fault Definitions
- Virtual Machine is Pinned
- Virtual Machine not Compatible with any Host
- VM/VM DRS Rule Violated when Moving to another Host
- Host Incompatible with Virtual Machine
- Host Has Virtual Machine That Violates VM/VM DRS Rules
- Host has Insufficient Capacity for Virtual Machine
- Host in Incorrect State
- Host Has Insufficient Number of Physical CPUs for Virtual Machine
- Host has Insufficient Capacity for Each Virtual Machine CPU
- The Virtual Machine Is in vMotion
- No Active Host in Cluster
- Insufficient Resources
- Insufficient Resources to Satisfy Configured Failover Level for HA
- No Compatible Hard Affinity Host
- No Compatible Soft Affinity Host
- Soft Rule Violation Correction Disallowed
- Soft Rule Violation Correction Impact
- DRS Troubleshooting Information
- Cluster Problems
- Load Imbalance on Cluster
- Cluster is Yellow
- Cluster is Red Because of Inconsistent Resource Pool
- Cluster Is Red Because Failover Capacity Is Violated
- No Hosts are Powered Off When Total Cluster Load is Low
- Hosts Are Powered-off When Total Cluster Load Is High
- DRS Seldom or Never Performs vMotion Migrations
- Host Problems
- DRS Recommends Host Be Powered on to Increase Capacity When Total Cluster Load Is Low
- Total Cluster Load Is High
- Total Cluster Load Is Low
- DRS Does Not Evacuate a Host Requested to Enter Maintenance or Standby Mode
- DRS Does Not Move Any Virtual Machines onto a Host
- DRS Does Not Move Any Virtual Machines from a Host
- Virtual Machine Problems
Dynamic Load Balancing and Page Migration
ESXi combines the traditional initial placement approach with a dynamic rebalancing algorithm.
Periodically (every two seconds by default), the system examines the loads of the various nodes and
determines if it should rebalance the load by moving a virtual machine from one node to another.
This calculation takes into account the resource settings for virtual machines and resource pools to
improve performance without violating fairness or resource entitlements.
The rebalancer selects an appropriate virtual machine and changes its home node to the least loaded
node. When it can, the rebalancer moves a virtual machine that already has some memory located on the
destination node. From that point on (unless it is moved again), the virtual machine allocates memory on
its new home node and it runs only on processors within the new home node.
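The behavior described above can be sketched in a short simulation. This is an illustrative model only, not ESXi internals: the `VM` fields, the imbalance threshold, and the node-selection logic are assumptions chosen to mirror the description (pick the least loaded node, prefer a candidate that already has memory resident there).

```python
# Illustrative sketch of one NUMA rebalancing pass. All names and the
# 0.25 imbalance threshold are hypothetical, not ESXi parameters.
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    load: float                 # CPU demand charged to the VM's home node
    home: int                   # index of the VM's current home node
    mem_on_node: dict = field(default_factory=dict)  # node -> resident MB

def rebalance_once(vms, num_nodes):
    """If the gap between the busiest and least loaded node is large,
    re-home one VM, preferring one with memory on the destination node."""
    loads = [0.0] * num_nodes
    for vm in vms:
        loads[vm.home] += vm.load
    busiest = max(range(num_nodes), key=loads.__getitem__)
    idlest = min(range(num_nodes), key=loads.__getitem__)
    if loads[busiest] - loads[idlest] < 0.25:   # assumed threshold
        return None                             # load is balanced enough
    candidates = [vm for vm in vms if vm.home == busiest]
    # Prefer a VM that already has memory resident on the destination.
    vm = max(candidates, key=lambda v: v.mem_on_node.get(idlest, 0))
    vm.home = idlest    # future allocations and scheduling use the new node
    return vm
```

Running a pass over two nodes moves the VM whose memory already partly lives on the idle node, which is the preference the text describes.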
Rebalancing is an effective way to maintain fairness and to ensure that all nodes are fully used. The
rebalancer might need to move a virtual machine to a node on which it has allocated little or no memory.
In this case, the virtual machine incurs a performance penalty associated with a large number of remote
memory accesses. ESXi can eliminate this penalty by transparently migrating memory from the virtual
machine’s original node to its new home node:
1 The system selects a page (4KB of contiguous memory) on the original node and copies its data to a
page in the destination node.
2 The system uses the virtual machine monitor layer and the processor’s memory management
hardware to seamlessly remap the virtual machine’s view of memory, so that it uses the page on the
destination node for all further references, eliminating the penalty of remote memory access.
When a virtual machine moves to a new node, the ESXi host immediately begins to migrate its memory in
this fashion. It manages the rate to avoid overtaxing the system, particularly when the virtual machine has
little remote memory remaining or when the destination node has little free memory available. The
memory migration algorithm also ensures that the ESXi host does not move memory needlessly if a
virtual machine is moved to a new node for only a short period.
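The pacing logic can be sketched as a per-interval budget. This is a hypothetical model of the throttling the text describes, not the actual migration algorithm: the tick size, the back-off formula, and the copy/remap steps reduced to list operations are all assumptions.

```python
# Illustrative sketch of paced page migration after a VM is re-homed.
# max_per_tick and the len()//4 back-off are assumed values, not ESXi's.
def migrate_tick(remote_pages, dest_free_pages, max_per_tick=32):
    """One pacing interval: migrate at most a budgeted number of pages,
    backing off when few remote pages remain or destination memory is low."""
    budget = min(max_per_tick,
                 max(1, len(remote_pages) // 4),  # shrinks with remote set
                 dest_free_pages)                 # never exceed free memory
    moved = []
    for _ in range(budget):
        if not remote_pages:
            break
        page = remote_pages.pop()   # step 1: copy the page's data over
        moved.append(page)          # step 2: remap the VM's view to the copy
        dest_free_pages -= 1
    return moved, dest_free_pages
```

With many remote pages and ample free memory the tick moves a full budget; as either resource shrinks, the budget shrinks with it, which models why a VM that is re-homed only briefly never has much memory moved.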
When initial placement, dynamic rebalancing, and intelligent memory migration work in conjunction, they
ensure good memory performance on NUMA systems, even in the presence of changing workloads.
When a major workload change occurs, for instance when new virtual machines are started, the system
takes time to readjust, migrating virtual machines and memory to new locations. After a short period,
typically seconds or minutes, the system completes its readjustments and reaches a steady state.
Transparent Page Sharing Optimized for NUMA
Many ESXi workloads present opportunities for sharing memory across virtual machines.
For example, several virtual machines might be running instances of the same guest operating system,
have the same applications or components loaded, or contain common data. In such cases, ESXi
systems use a proprietary transparent page-sharing technique to securely eliminate redundant copies of
memory pages. With memory sharing, a workload running in virtual machines often consumes less
memory than it would when running on physical machines. As a result, higher levels of overcommitment
can be supported efficiently.
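The idea behind content-based sharing can be sketched as follows. This is a simplified illustration, not VMware's proprietary technique: real page sharing also handles copy-on-write on guest writes and full byte-compares to guard against hash collisions, which this sketch omits.

```python
# Minimal sketch of content-based page sharing: pages with identical
# content map to one shared slot. Structures here are illustrative.
import hashlib

def share_pages(vm_pages):
    """vm_pages: {vm_name: [page_bytes, ...]}.
    Returns ((vm, page_index) -> shared slot id, number of slots used)."""
    slots = {}      # content hash -> shared physical slot id
    mapping = {}
    for vm, pages in vm_pages.items():
        for i, content in enumerate(pages):
            key = hashlib.sha256(content).hexdigest()
            if key not in slots:
                slots[key] = len(slots)    # first copy: allocate a slot
            mapping[(vm, i)] = slots[key]  # duplicates share that slot
    return mapping, len(slots)
```

Two virtual machines holding three identical zero-filled pages between them end up backed by a single slot, which is how sharing lets a workload consume less memory than the sum of its guests' footprints.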
vSphere Resource Management
VMware, Inc. 122