Performance Considerations
When you use hardware assistance, you eliminate the overhead for software memory virtualization. In particular, hardware assistance eliminates the overhead required to keep shadow page tables in synchronization with guest page tables. However, the TLB miss latency when using hardware assistance is significantly higher. By default, the hypervisor uses large pages in hardware-assisted modes to reduce the cost of TLB misses. As a result, whether a workload benefits from hardware assistance depends primarily on the overhead that memory virtualization causes under software memory virtualization. If a workload involves a small amount of page table activity (such as process creation, mapping of memory, or context switches), software virtualization does not cause significant overhead. Conversely, workloads with a large amount of page table activity are likely to benefit from hardware assistance.
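As an illustrative sketch (not part of this guide), the following Python fragment uses the pyVmomi SDK to check whether a specific virtual machine carries an explicit MMU virtualization override in its configuration. It assumes an already-retrieved vim.VirtualMachine object named vm, and the option keys monitor.virtual_mmu and monitor.virtual_exec are the .vmx settings commonly used for such an override; neither name is defined in this section.

```python
# Minimal sketch using pyVmomi (assumes `vm` is a vim.VirtualMachine obtained
# from a connected session). The keys below are the .vmx options commonly used
# to force software or hardware MMU virtualization; if they are absent, the
# virtualization mode is chosen automatically by the hypervisor.
MMU_KEYS = ("monitor.virtual_mmu", "monitor.virtual_exec")

overrides = {o.key: o.value for o in vm.config.extraConfig if o.key in MMU_KEYS}
if overrides:
    for key, value in overrides.items():
        print(f"{key} = {value}")
else:
    print("No explicit MMU override; virtualization mode is selected automatically.")
```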
The performance of the hardware MMU has improved since it was first introduced, with extensive caching now implemented in hardware. With software memory virtualization techniques, context switches in a typical guest can occur 100 to 1000 times per second, and each context switch traps into the VMM in the software MMU. Hardware MMU approaches avoid this issue.
By default, the hypervisor uses large pages in hardware-assisted modes to reduce the cost of TLB misses. The best performance is achieved by using large pages for both guest virtual to guest physical and guest physical to machine address translations.
The option LPage.LPageAlwaysTryForNPT can change the policy for using large pages in guest physical to machine address translations. For more information, see “Advanced Memory Attributes,” on page 116.
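As a hedged illustration (not part of the original text), this host-level option can also be inspected programmatically through the vSphere API. The sketch below uses the pyVmomi Python SDK; the ServiceInstance si is assumed to be already connected, and the host name is hypothetical.

```python
# Minimal sketch: query the LPage.LPageAlwaysTryForNPT advanced setting on one
# host with pyVmomi. Assumes `si` is a connected vim.ServiceInstance; the host
# name "esxi-01.example.com" is hypothetical.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi-01.example.com")

# Host advanced settings are exposed through the host's OptionManager.
for opt in host.configManager.advancedOption.QueryOptions("LPage.LPageAlwaysTryForNPT"):
    print(opt.key, "=", opt.value)
```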
Note   Binary translation only works with software-based memory virtualization.