Performance Considerations
When you use hardware assistance, you eliminate the overhead of software memory virtualization. In particular, hardware assistance eliminates the overhead required to keep shadow page tables in synchronization with guest page tables. However, the TLB miss latency when using hardware assistance is significantly higher. By default, the hypervisor uses large pages in hardware assisted modes to reduce the cost of TLB misses. As a result, whether a workload benefits from hardware assistance depends primarily on the overhead that software memory virtualization would otherwise incur. If a workload involves only a small amount of page table activity (such as process creation, memory mapping, or context switches), software virtualization does not cause significant overhead. Conversely, workloads with a large amount of page table activity are likely to benefit from hardware assistance.
The performance of the hardware MMU has improved since it was first introduced, largely because processors now implement extensive caching of the nested page table structures in hardware. With software memory virtualization, context switches in a typical guest can occur 100 to 1000 times per second, and each context switch traps into the VMM so that the shadow page tables can be updated. Hardware MMU approaches avoid this overhead.
As noted above, the hypervisor by default uses large pages in hardware assisted modes to reduce the cost of TLB misses. The best performance is achieved by using large pages in both the guest virtual to guest physical and the guest physical to machine address translations.
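For example, on a Linux guest you can check whether the guest itself is backing its memory with large pages, which covers the guest virtual to guest physical level. This is a minimal sketch assuming a distribution with transparent huge pages; paths and output vary by guest operating system and kernel version.

   # Show whether transparent huge pages are enabled in the guest.
   cat /sys/kernel/mm/transparent_hugepage/enabled
   # Show how much anonymous memory is currently backed by huge pages.
   grep AnonHugePages /proc/meminfo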
The option LPage.LPageAlwaysTryForNPT can change the policy for using large pages in guest physical to
machine address translations. For more information, see “Advanced Memory Attributes,” on page 116.
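For example, the current value of this advanced option can be inspected and changed from the ESXi Shell with esxcli. This is a sketch only; the option path shown below is inferred from the attribute name, and availability and defaults can differ between ESXi releases.

   # List the current value, default, and description of the option (assumed path).
   esxcli system settings advanced list -o /LPage/LPageAlwaysTryForNPT
   # Set the option to 0, assuming the default of 1 means the hypervisor always
   # tries to back guest physical to machine translations with large pages.
   esxcli system settings advanced set -o /LPage/LPageAlwaysTryForNPT -i 0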
NOTE Binary translation only works with software-based memory virtualization.