Performance Considerations
When you use hardware assistance, you eliminate the overhead of software memory virtualization. In
particular, hardware assistance eliminates the overhead required to keep shadow page tables in
synchronization with guest page tables. However, the TLB miss latency when using hardware assistance is
significantly higher. By default the hypervisor uses large pages in hardware assisted modes to reduce the
cost of TLB misses. As a result, whether or not a workload benefits from hardware assistance depends primarily
on the overhead that memory virtualization causes when software memory virtualization is used. If a
workload involves a small amount of page table activity (such as process creation, memory mapping, or
context switches), software virtualization does not cause significant overhead. Conversely, workloads with a
large amount of page table activity are likely to benefit from hardware assistance.
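A workload's sensitivity to the choice of memory virtualization technique can be judged by how much page table activity it generates. The following guest-side Python sketch is purely illustrative and is not part of any VMware tooling; it repeatedly creates, touches, and destroys anonymous memory mappings, which is the kind of page-table-heavy activity most likely to benefit from hardware assistance. The mapping size and iteration count are arbitrary assumptions.

    # Rough guest-side sketch: gauge page table activity by repeatedly
    # creating, touching, and destroying anonymous memory mappings.
    import mmap
    import time

    PAGE = 4096
    MAP_SIZE = 256 * PAGE      # 1 MiB per mapping (illustrative value)
    ITERATIONS = 1000

    start = time.perf_counter()
    for _ in range(ITERATIONS):
        m = mmap.mmap(-1, MAP_SIZE)        # anonymous mapping -> new page table entries
        for offset in range(0, MAP_SIZE, PAGE):
            m[offset] = 0                  # touch each page so it is actually populated
        m.close()                          # unmapping tears the entries down again
    elapsed = time.perf_counter() - start
    print(f"{ITERATIONS} map/touch/unmap cycles took {elapsed:.3f} s")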
The performance of hardware MMU has improved since it was first introduced, with extensive caching
implemented in hardware. With software memory virtualization techniques, context switches in a typical
guest may occur 100 to 1000 times per second, and each context switch traps into the VMM when the
software MMU is used. Hardware MMU approaches avoid this issue.
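To relate the 100 to 1000 figure to a running guest, the actual context switch rate can be sampled inside a Linux guest. This is a minimal illustrative sketch, assuming only the standard /proc/stat interface; it is not a VMware tool.

    # Illustrative guest-side check (Linux): estimate the context switch rate
    # by sampling the cumulative "ctxt" counter in /proc/stat over one second.
    import time

    def read_ctxt():
        with open("/proc/stat") as f:
            for line in f:
                if line.startswith("ctxt "):
                    return int(line.split()[1])
        raise RuntimeError("ctxt counter not found in /proc/stat")

    before = read_ctxt()
    time.sleep(1.0)
    after = read_ctxt()
    print(f"~{after - before} context switches per second")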
With hardware-assisted memory virtualization, the best performance is achieved by using large pages in
both guest virtual to guest physical and guest physical to machine address translations.
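The first level of translation (guest virtual to guest physical) is controlled by the guest operating system. As a minimal illustrative sketch for a Linux guest, assuming the standard transparent hugepage sysfs interface (not VMware-specific), the current guest large-page policy can be read like this:

    # Illustrative check inside a Linux guest: report the transparent
    # hugepage policy; the active setting is shown in brackets,
    # e.g. "always [madvise] never".
    from pathlib import Path

    THP_PATH = Path("/sys/kernel/mm/transparent_hugepage/enabled")

    if THP_PATH.exists():
        print("Transparent hugepages:", THP_PATH.read_text().strip())
    else:
        print("Transparent hugepage interface not found on this kernel.")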
The option LPage.LPageAlwaysTryForNPT can change the policy for using large pages in guest physical to
machine address translations. For more information, see “Advanced Memory Attributes,” on page 116.
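As a hedged sketch of how that option could be inspected programmatically, the host's advanced option manager can be queried with pyVmomi. The library choice, host name, credentials, and certificate handling below are illustrative assumptions and are not prescribed by this document.

    # Illustrative sketch: read the LPage.LPageAlwaysTryForNPT advanced
    # option on an ESXi host using pyVmomi. Connection details are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()   # lab-only: skips certificate checks
    si = SmartConnect(host="esxi.example.com", user="root",
                      pwd="password", sslContext=context)
    try:
        # First datacenter, first compute resource, first host (assumed layout).
        host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
        option_mgr = host.configManager.advancedOption
        for opt in option_mgr.QueryOptions("LPage.LPageAlwaysTryForNPT"):
            print(opt.key, "=", opt.value)
        # To change the policy, UpdateOptions() could be called with a new
        # vim.option.OptionValue; verify the effect before relying on it.
    finally:
        Disconnect(si)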
N Binary translation only works with software-based memory virtualization.