- The dashed arrows show the mapping from guest virtual memory to machine memory in the shadow page tables, which are also maintained by the VMM. The underlying processor running the virtual machine uses the shadow page table mappings.
Software-Based Memory Virtualization
ESXi virtualizes guest physical memory by adding an extra level of address translation.
- The VMM maintains the combined virtual-to-machine page mappings in the shadow page tables. The shadow page tables are kept up to date with the guest operating system's virtual-to-physical mappings and with the physical-to-machine mappings maintained by the VMM.
- The VMM intercepts virtual machine instructions that manipulate guest operating system memory management structures so that the actual memory management unit (MMU) on the processor is not updated directly by the virtual machine.
- The shadow page tables are used directly by the processor's paging hardware.
- There is non-trivial computation overhead for maintaining the coherency of the shadow page tables. The overhead is more pronounced when the number of virtual CPUs increases.
This approach to address translation allows normal memory accesses in the virtual machine to execute without adding address translation overhead, after the shadow page tables are set up. Because the translation look-aside buffer (TLB) on the processor caches direct virtual-to-machine mappings read from the shadow page tables, no additional overhead is added by the VMM to access the memory. Note that the software MMU has a higher memory overhead requirement than the hardware MMU. Hence, to support the software MMU, the maximum overhead supported for virtual machines in the VMkernel needs to be increased. In some cases, software memory virtualization may have a performance benefit over the hardware-assisted approach if the workload induces a large number of TLB misses.
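To make the composition concrete, the following Python sketch (illustrative only, not VMware code; guest_page_table, pmap, and build_shadow are hypothetical names) models a shadow page table as the combination of the guest's virtual-to-physical mapping with the VMM's physical-to-machine mapping, which is why the processor can use its entries directly.

    # Minimal sketch: a shadow page table as the composition of the guest OS's
    # virtual-to-physical mapping with the VMM's physical-to-machine mapping.
    # Page numbers stand in for full page-table structures.

    # Guest OS page table: guest virtual page number -> guest physical page number
    guest_page_table = {0x10: 0x2, 0x11: 0x3}

    # VMM mapping: guest physical page number -> machine page number
    pmap = {0x2: 0x7A, 0x3: 0x7B}

    def build_shadow(guest_pt, pmap):
        """Compose the two mappings into direct virtual-to-machine entries.

        The processor's paging hardware (and TLB) uses these combined entries,
        so ordinary guest memory accesses need no extra translation step.
        """
        return {gvpn: pmap[gppn] for gvpn, gppn in guest_pt.items() if gppn in pmap}

    shadow_page_table = build_shadow(guest_page_table, pmap)
    assert shadow_page_table[0x10] == 0x7A  # virtual page 0x10 maps straight to machine page 0x7A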
Performance Considerations
The use of two sets of page tables has these performance implications.
- No overhead is incurred for regular guest memory accesses.
- Additional time is required to map memory within a virtual machine, as illustrated in the sketch after this list. This happens when:
  - The virtual machine operating system is setting up or updating virtual address to physical address mappings.
  - The virtual machine operating system is switching from one address space to another (context switch).
- Like CPU virtualization, memory virtualization overhead depends on workload.
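As a rough illustration of where that extra time goes (a minimal sketch, not VMware code; SoftwareMmuVmm and its method names are hypothetical), the following Python models the VMM intercepting a guest page-table update and a guest address-space switch, and re-synchronizing the shadow page table in each case.

    # Minimal sketch: the two events that cost extra time under software MMU
    # virtualization. The VMM must intercept them and bring the shadow page
    # table back into sync with the guest's page tables.

    class SoftwareMmuVmm:
        def __init__(self, pmap):
            self.pmap = pmap                   # guest physical -> machine pages
            self.guest_page_table = {}         # guest virtual -> guest physical
            self.shadow_page_table = {}        # guest virtual -> machine (used by hardware)

        def on_guest_pte_write(self, gvpn, gppn):
            # Trapped by the VMM: record the guest's new mapping, then recompute
            # the combined virtual-to-machine entry. This extra work is the
            # overhead paid on every mapping update.
            self.guest_page_table[gvpn] = gppn
            self.shadow_page_table[gvpn] = self.pmap[gppn]

        def on_guest_context_switch(self, new_guest_page_table):
            # Switching address spaces invalidates the current shadow table;
            # the VMM rebuilds it (or switches to a cached copy) for the new space.
            self.guest_page_table = dict(new_guest_page_table)
            self.shadow_page_table = {
                gvpn: self.pmap[gppn] for gvpn, gppn in self.guest_page_table.items()
            }

    vmm = SoftwareMmuVmm(pmap={0x2: 0x7A, 0x3: 0x7B})
    vmm.on_guest_pte_write(0x10, 0x2)          # mapping update: intercept plus shadow refresh
    vmm.on_guest_context_switch({0x20: 0x3})   # context switch: shadow table rebuilt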
Hardware-Assisted Memory Virtualization
Some CPUs, such as AMD SVM-V and the Intel Xeon 5500 series, provide hardware support for memory
virtualization by using two layers of page tables.
The first layer of page tables stores guest virtual-to-physical translations, while the second layer of page tables stores guest physical-to-machine translations. The TLB (translation look-aside buffer) is a cache of translations maintained by the processor's memory management unit (MMU) hardware. A TLB miss is a miss in this cache, and the hardware needs to go to memory (possibly many times) to find the required translation. For a TLB miss to a certain guest virtual address, the hardware looks at both page tables to translate the guest virtual address to a machine address. The first layer of page tables is maintained by the guest operating system. The VMM only maintains the second layer of page tables.
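A simplified Python sketch of this two-layer lookup follows (illustrative only, not VMware code; guest_page_table, nested_page_table, and translate are hypothetical names, and a real nested walk also translates the guest table's own pointers through the second layer).

    # Minimal sketch of a hardware-assisted TLB miss: the MMU walks the
    # guest-maintained table (virtual -> guest physical) and then the
    # VMM-maintained second-level table (guest physical -> machine).

    guest_page_table = {0x10: 0x2}   # maintained by the guest operating system
    nested_page_table = {0x2: 0x7A}  # maintained by the VMM
    tlb = {}                         # caches completed virtual -> machine translations

    def translate(gvpn):
        if gvpn in tlb:                   # TLB hit: no page walk needed
            return tlb[gvpn]
        gppn = guest_page_table[gvpn]     # first layer: guest virtual -> guest physical
        mpn = nested_page_table[gppn]     # second layer: guest physical -> machine
        tlb[gvpn] = mpn                   # cache the combined translation
        return mpn

    assert translate(0x10) == 0x7A   # TLB miss: both layers walked
    assert translate(0x10) == 0x7A   # TLB hit: served from the cache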