The first layer of page tables stores guest virtual-to-physical translations, while the second layer of page
tables stores guest physical-to-machine translations. The TLB (translation look-aside buffer) is a cache of
translations maintained by the processor's memory management unit (MMU) hardware. On a TLB miss,
the hardware must go to memory (possibly many times) to find the required translation. To resolve a TLB
miss for a given guest virtual address, the hardware walks both layers of page tables to translate the
guest virtual address to a machine address. The first layer of page tables is maintained by the guest
operating system. The VMM maintains only the second layer of page tables.
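To see why a TLB miss is more expensive with two layers of page tables, consider the worst-case number
of memory references needed to resolve one miss. The following sketch is illustrative arithmetic only, not
ESXi code, and it assumes standard four-level x86-64 page tables for both the guest layer and the second
(nested) layer.

def nested_walk_references(guest_levels=4, nested_levels=4):
    """Worst-case memory references to resolve one TLB miss with two-layer paging.

    Each of the guest_levels page-table reads uses a guest physical address, so it
    first needs a nested walk (nested_levels reads) before the guest page-table
    entry itself can be read. The final guest physical address of the data needs
    one more nested walk.
    """
    per_guest_level = nested_levels + 1          # nested walk plus the guest page-table read
    return guest_levels * per_guest_level + nested_levels

print(nested_walk_references())        # 24 references, versus 4 for a native single-layer walk
print(nested_walk_references(4, 3))    # a shorter second-layer walk, as when large pages back guest memory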
Performance Considerations
When you use hardware assistance, you eliminate the overhead of software memory virtualization. In
particular, hardware assistance eliminates the overhead required to keep shadow page tables
synchronized with guest page tables. However, the TLB miss latency when using hardware assistance is
significantly higher. By default, the hypervisor uses large pages in hardware-assisted modes to reduce
the cost of TLB misses. As a result, whether a workload benefits from hardware assistance depends
primarily on how much overhead memory virtualization causes when software memory virtualization is
used. If a workload involves only a small amount of page table activity (such as process creation, memory
mapping, or context switches), software virtualization does not cause significant overhead. Conversely,
workloads with a large amount of page table activity are likely to benefit from hardware assistance.
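The tradeoff can be made concrete with a back-of-the-envelope model. The numbers below are purely
hypothetical, in arbitrary units, and exist only to illustrate the reasoning in the previous paragraph; real
costs depend on the processor, the workload, and the page sizes in use.

# Hypothetical per-event costs (arbitrary units), chosen only for illustration.
SHADOW_SYNC_COST = 50    # software MMU: keeping shadow page tables in sync, per page-table update
TLB_MISS_SW = 4          # software MMU: single-layer walk on a TLB miss
TLB_MISS_HW = 24         # hardware MMU: two-layer (nested) walk on a TLB miss

def relative_overhead(page_table_updates, tlb_misses):
    """Return (software_mmu_cost, hardware_mmu_cost) for a hypothetical workload."""
    software = page_table_updates * SHADOW_SYNC_COST + tlb_misses * TLB_MISS_SW
    hardware = tlb_misses * TLB_MISS_HW   # no shadow page tables to maintain
    return software, hardware

# Little page-table activity: the higher TLB miss cost dominates, so hardware assistance costs more.
print(relative_overhead(page_table_updates=10, tlb_misses=1_000))
# Heavy page-table activity (fork-heavy, mmap-heavy): avoiding shadow page table maintenance wins.
print(relative_overhead(page_table_updates=10_000, tlb_misses=1_000))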
As noted above, the hypervisor uses large pages in hardware-assisted modes by default. The best
performance is achieved by using large pages in both the guest virtual to guest physical and the guest
physical to machine address translations.
The option LPage.LPageAlwaysTryForNPT can change the policy for using large pages in guest physical
to machine address translations. For more information, see Advanced Memory Attributes.
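Advanced attributes such as LPage.LPageAlwaysTryForNPT are normally changed in the vSphere Client
under Advanced System Settings, but they can also be set programmatically. The sketch below uses
pyVmomi and assumes you already have a connected session and a vim.HostSystem object named host;
treat the exact call and value type as assumptions to verify against your pyVmomi version before use.

from pyVmomi import vim

def set_lpage_always_try_for_npt(host, enabled=True):
    """Sketch: set the LPage.LPageAlwaysTryForNPT advanced attribute on one host.

    'host' is assumed to be a vim.HostSystem obtained from an existing pyVmomi
    session; error handling and privilege checks are omitted. The option is
    integer-valued (0 or 1), so the value type may need to match exactly.
    """
    option_manager = host.configManager.advancedOption
    option = vim.option.OptionValue(key="LPage.LPageAlwaysTryForNPT",
                                    value=1 if enabled else 0)
    option_manager.UpdateOptions(changedValue=[option])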
Support for Large Page Sizes
ESXi provides limited support for large page sizes.
The x86 architecture allows system software to use 4KB, 2MB, and 1GB pages. 4KB pages are referred to
as small pages, while 2MB and 1GB pages are referred to as large pages. Large pages relieve translation
lookaside buffer (TLB) pressure and reduce the cost of page table walks, which results in improved
workload performance.
In virtualized environments, large pages can be used by the hypervisor and the guest operating system
independently. The biggest performance benefit is achieved when large pages are used by both the guest
and the hypervisor, but in most cases a benefit can be observed even if large pages are used only at the
hypervisor level.
By default, the ESXi hypervisor uses 2MB pages to back guest vRAM. vSphere 6.7 ESXi provides limited
support for backing guest vRAM with 1GB pages. For more information, see Backing Guest vRAM with
1GB Pages.
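The referenced topic, Backing Guest vRAM with 1GB Pages, describes enabling 1GB backing through a
per-VM advanced configuration option. The sketch below uses pyVmomi and assumes that option is
named sched.mem.lpage.enable1GPage and that the virtual machine needs a full memory reservation, as
that topic requires; verify both assumptions against your vSphere documentation before relying on them.

from pyVmomi import vim

def enable_1gb_vram_backing(vm):
    """Sketch: request 1GB page backing for a virtual machine's vRAM.

    'vm' is assumed to be a vim.VirtualMachine from an existing pyVmomi session,
    reconfigured while powered off. The option key below is an assumption taken
    from the Backing Guest vRAM with 1GB Pages topic.
    """
    spec = vim.vm.ConfigSpec()
    spec.extraConfig = [vim.option.OptionValue(key="sched.mem.lpage.enable1GPage",
                                               value="TRUE")]
    # Reserve all guest memory, which 1GB page backing requires.
    spec.memoryReservationLockedToMax = True
    return vm.ReconfigVM_Task(spec=spec)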