physical to machine memory address translations, which used to be maintained inside shadow
page tables within ESX. Offloading this memory management to hardware has two benefits:
hardware page table processing is faster than a software implementation, and ESX can use the
freed CPU cycles for more workload-related processing.
AMD calls its hardware-assisted memory management feature rapid virtualization indexing
(RVI), while Intel terms its implementation extended page tables (EPT). ESX has supported AMD
RVI since version 3.5. The support for Intel EPT was introduced in ESX 4.0.
The performance benefits of hardware-assisted memory management are achievable only if
page table entries are located in the hardware page tables. Remember that real estate on a
processor chip is at a premium, which limits the size of the hardware page tables. If a page
table entry is not found there, the associated translation lookaside buffer (TLB) miss can be
more expensive to process than with the software shadow page tables implemented by ESX. You
can reduce the number of TLB misses by using large memory pages, which ESX has supported
since version 3.5. Together, hardware-assisted memory management and large memory pages
provide better performance.
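Why a TLB miss is costlier under nested paging, and why large pages help, can be seen in a simple count of memory references per page walk. The sketch below is a toy cost model (not VMware's implementation): with an n-level guest page table and an m-level nested table, every guest table pointer is itself a guest-physical address that must be translated, so a full two-dimensional walk costs n*m + n + m references.

```python
# Toy cost model (illustrative only, not VMware's implementation): memory
# references needed to resolve a TLB miss under ordinary paging vs.
# hardware-assisted nested paging (EPT/RVI).

def native_walk_refs(levels: int) -> int:
    """Memory references for a TLB miss with ordinary paging."""
    return levels

def nested_walk_refs(guest_levels: int, host_levels: int) -> int:
    """Memory references for a TLB miss with nested paging.

    Each of the guest_levels table pointers needs its own host_levels-step
    translation, plus the guest walk itself and the final host translation:
    guest_levels * host_levels + guest_levels + host_levels references.
    """
    return guest_levels * host_levels + guest_levels + host_levels

# x86-64 uses 4-level tables for 4 KB pages; large (2 MB) pages skip one level.
print(native_walk_refs(4))      # 4 references natively
print(nested_walk_refs(4, 4))   # 24 references under nested paging
print(nested_walk_refs(3, 3))   # 15 when both sides use large pages
```

The gap between 4 and 24 references is why a nested-paging TLB miss is more expensive, and the drop to 15 (plus the reduced number of TLB entries needed) is the large-page benefit the text describes.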
Processor Scheduling
VMware vSphere includes a sophisticated CPU scheduler that enables it to efficiently run several
virtual machines on a single ESX host. The CPU scheduler allows you to over-commit available
physical CPU resources; in other words, the total number of virtual CPUs allocated across all
virtual machines on a vSphere host can exceed the number of physical CPU cores available. The
virtual machines are scheduled on all available physical CPUs in a vSphere host by default or can
be affinitized (pinned) to specific physical CPUs. The ESX CPU scheduler also guarantees
that a virtual machine uses CPU cycles only up to its configured limits. When scheduling virtual
CPUs allocated to virtual machines, the CPU scheduler uses a proportional-share scheduling
algorithm that also takes into account user-provided resource specifications such as shares,
reservations, and limits. Maintaining CPU resource allocation fairness among a number of vir-
tual machines running on a vSphere host is a key aspect of ESX processor scheduling.
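One round of such a proportional-share allocation can be sketched as follows. This is a hedged simplification (not ESX's actual algorithm): each VM first receives its reservation, the remaining capacity is divided in proportion to shares, and no VM is allowed to exceed its limit; capacity freed by capped VMs is redistributed among the rest.

```python
# Hypothetical sketch of proportional-share allocation with reservations and
# limits; the VM names and MHz figures below are made up for illustration.

def allocate(capacity_mhz, vms):
    """vms maps name -> (shares, reservation_mhz, limit_mhz)."""
    # Step 1: every VM is guaranteed its reservation.
    alloc = {name: res for name, (shares, res, limit) in vms.items()}
    remaining = capacity_mhz - sum(alloc.values())
    active = set(vms)
    # Step 2: split the rest by shares, re-spreading what capped VMs decline.
    while remaining > 1e-9 and active:
        total_shares = sum(vms[n][0] for n in active)
        granted = 0.0
        for n in list(active):
            shares, res, limit = vms[n]
            want = remaining * shares / total_shares
            take = min(want, limit - alloc[n])
            alloc[n] += take
            granted += take
            if alloc[n] >= limit - 1e-9:
                active.discard(n)   # VM hit its limit; drop from sharing
        remaining -= granted
        if granted <= 1e-9:
            break
    return alloc

vms = {"web":   (2000,  500.0, 3000.0),   # (shares, reservation, limit) MHz
       "db":    (1000, 1000.0, 2000.0),
       "batch": (1000,    0.0, 1000.0)}
print(allocate(4000.0, vms))
# {'web': 1750.0, 'db': 1625.0, 'batch': 625.0}
```

Note how "web" ends up with the largest allocation purely through its higher share count, while "db" is propped up by its reservation: this is the fairness-with-guarantees behavior the text attributes to the scheduler.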
Starting with the Virtual Infrastructure 3 (VI3) release, ESX has gradually shifted from
“strict” to “relaxed” co-scheduling of virtual CPUs. Strict co-scheduling ran a virtual
machine only when all of its virtual CPUs could be scheduled to run together. With relaxed
co-scheduling, ESX can schedule a subset of a virtual machine’s virtual CPUs as needed
without causing guest operating system instability.
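The difference between the two policies can be sketched with a toy model. This is hypothetical code, not ESX internals: strict co-scheduling runs an SMP VM only when free physical CPUs exist for all of its vCPUs at once, while relaxed co-scheduling runs a subset, preferring the vCPUs that have fallen furthest behind ("skew") and co-stopping the VM if any idle vCPU's skew ever exceeds a bound (the skew bookkeeping and threshold here are illustrative assumptions).

```python
# Hypothetical model of strict vs. relaxed co-scheduling of an SMP VM.

def strict_can_run(free_pcpus: int, vcpus: int) -> bool:
    """Strict policy: the VM runs only if ALL its vCPUs fit at once."""
    return free_pcpus >= vcpus

def relaxed_schedule(free_pcpus: int, skew: list, skew_limit: float) -> list:
    """Relaxed policy: run a subset, most-lagging (highest skew) first."""
    order = sorted(range(len(skew)), key=lambda i: -skew[i])
    running = order[:free_pcpus]
    # If any vCPU left behind has already exceeded the skew bound, co-stop
    # the whole VM until its lagging vCPUs can be co-started together.
    if any(skew[i] > skew_limit for i in order[free_pcpus:]):
        running = []
    return running

# A 4-vCPU VM on a host with only 2 free pCPUs:
print(strict_can_run(2, 4))                    # False: strict VM must wait
print(relaxed_schedule(2, [0, 3, 1, 2], 5.0))  # [1, 3]: run the laggards
print(relaxed_schedule(1, [0, 6, 7, 2], 5.0))  # []: skew too high, co-stop
```

The point of the model is the first line of output: under strict co-scheduling the 4-vCPU VM makes no progress at all on a partially busy host, whereas relaxed co-scheduling keeps it running on whatever CPUs are free.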
The ESX CPU scheduler is also aware of different processor topologies such as
non-uniform memory access (NUMA) nodes and hyperthreading.
The ESX 4.0 scheduler further improves on these capabilities by adding the following
enhancements:
•	More optimizations to relaxed co-scheduling of virtual CPUs, especially for SMP VMs (virtual machines with multiple virtual CPUs)
•	New finer-grained locking to reduce scheduling overheads in cases where frequent scheduling decisions are needed
•	Processor cache topology awareness and optimizations to account for newer processor cache architectures
•	Improvements in interrupt delivery efficiency and the associated processing costs