An Intel-VMware Perspective: Intelligent Queuing Technologies for Virtualization

(Diagram: multiple VMs, each running its own application and operating system, hosted on ESX and sharing CPU cores and a 10 GbE connection.)
Queuing Technology Overview
Intel® Virtualization Technology¹ (Intel® VT) is a set of hardware enhancements that help hypervisor providers develop simpler and more robust virtualization software, plus accelerate system and application solutions in virtual environments. Intel® VT for Connectivity is the portion of Intel VT designed to improve network I/O in virtualized servers. VMDq is part of Intel VT for Connectivity, geared towards improving networking performance and reducing CPU utilization.
VMDq is a network silicon-level technology that offloads network
I/O management burden from the hypervisor. Multiple queues
and sorting intelligence in the silicon support enhanced network
traffic flow in the virtual environment, freeing processor cycles
for application work (Figure 2). This improves the efficiency of data delivery to the destination VM and increases overall system performance.
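
To make the queuing model concrete, the following sketch (in C, purely illustrative; structure and names such as vmdq_queue_pair are assumptions, not an actual Intel driver interface) shows what a set of per-VM receive/transmit queue pairs could look like.

    /* Illustrative per-VM queue-pair layout; not an actual VMDq driver structure. */
    #include <stdint.h>

    #define MAX_VMS     16   /* assumed number of queue pairs exposed by the NIC */
    #define RING_SIZE  256   /* assumed descriptors per ring */

    struct pkt_desc {
        uint64_t buf_addr;   /* DMA address of the packet buffer */
        uint16_t length;     /* packet length in bytes */
    };

    struct vmdq_queue_pair {
        struct pkt_desc rx_ring[RING_SIZE];  /* packets sorted to this VM by the NIC */
        struct pkt_desc tx_ring[RING_SIZE];  /* packets queued by the hypervisor for transmit */
        uint16_t rx_head, rx_tail;           /* ring indices maintained by hardware/software */
        uint16_t tx_head, tx_tail;
    };

    /* One queue pair per VM: the silicon fills the Rx rings and drains the Tx
     * rings, so the hypervisor no longer has to sort traffic in software. */
    struct vmdq_queue_pair vm_queues[MAX_VMS];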
Figure 3. NetQueue improves receive-side networking performance.

(Figure 2 diagram: LAN traffic enters a NIC with VMDq through the MAC/PHY, where a Layer 2 classifier/sorter places packets into per-VM Rx/Tx queue pairs; a Layer 2 software switch then delivers them to each VM's vNIC.)
VMware NetQueue is a performance technology in VMware ESX that significantly improves performance in 10 Gigabit Ethernet virtualized environments. NetQueue works with network adapters that provide multiple receive queues, allowing interrupt processing for received data to be affinitized to the CPU cores associated with individual VMs and improving receive-side networking performance. Each receive queue can also be assigned to a virtual NIC and mapped to guest memory to avoid a copy, with its interrupts steered to idle or otherwise optimal processor cores.
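
A rough sketch of the bookkeeping this implies is shown below: each receive queue is tied to a vNIC and to the CPU core running the owning VM, so its interrupts can be steered to that core. This is a conceptual illustration only; the structure and helper names are assumptions, not the VMkernel NetQueue API.

    /* Conceptual per-queue affinity bookkeeping; not the actual NetQueue API. */
    #include <stdint.h>
    #include <stdio.h>

    struct rx_queue {
        int      queue_id;   /* receive queue index on the adapter */
        int      vnic_id;    /* virtual NIC this queue is assigned to (assumed mapping) */
        int      cpu_core;   /* core running the owning VM; interrupts steered here */
        uint64_t guest_buf;  /* guest memory mapped for receive, avoiding a copy (illustrative) */
    };

    /* Hypothetical helper: affinitize a queue's interrupt to the VM's core. */
    static void steer_interrupt(struct rx_queue *q, int core)
    {
        q->cpu_core = core;
        printf("queue %d -> vNIC %d, interrupts affinitized to core %d\n",
               q->queue_id, q->vnic_id, core);
    }

    int main(void)
    {
        struct rx_queue q = { .queue_id = 2, .vnic_id = 1, .guest_buf = 0 };
        steer_interrupt(&q, 3);   /* assume the VM behind vNIC 1 runs on core 3 */
        return 0;
    }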
Receiving Packets
As data packets arrive at the network adapter, a Layer 2 classifier/sorter in the network controller determines which VM each packet is destined for based on its MAC address and VLAN tag, and places the packet in a receive queue assigned to that VM. The hypervisor’s switch merely routes the packets to their respective VMs instead of performing the heavy lifting of sorting the data. Thus, VMDq improves platform efficiency for handling receive-side network I/O and lowers CPU utilization, leaving more cycles for application processing.
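
The sorting step can be pictured as a small lookup keyed by destination MAC address and VLAN tag, as in the sketch below. Real silicon uses dedicated filter hardware rather than a linear search, and none of these names are taken from an actual driver.

    /* Minimal sketch of Layer 2 classification: (dest MAC, VLAN) -> per-VM queue. */
    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    #define MAX_VMS 16

    struct l2_filter {
        uint8_t  mac[6];    /* destination MAC address of a VM's vNIC */
        uint16_t vlan;      /* VLAN tag for that vNIC */
        int      queue_id;  /* receive queue dedicated to that VM */
    };

    static struct l2_filter filters[MAX_VMS] = {
        { {0x00, 0x50, 0x56, 0x00, 0x00, 0x01}, 10, 1 },   /* example entry for one VM */
    };
    static int num_filters = 1;

    /* Return the receive queue for a frame, or -1 for the default queue. */
    static int classify(const uint8_t dst_mac[6], uint16_t vlan)
    {
        for (int i = 0; i < num_filters; i++) {
            if (filters[i].vlan == vlan &&
                memcmp(filters[i].mac, dst_mac, 6) == 0)
                return filters[i].queue_id;  /* frame goes straight to that VM's queue */
        }
        return -1;  /* unknown destination: default queue / software switch handles it */
    }

    int main(void)
    {
        uint8_t mac[6] = {0x00, 0x50, 0x56, 0x00, 0x00, 0x01};
        printf("frame sorted to queue %d\n", classify(mac, 10));
        return 0;
    }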
Transmitting Packets
As packets are transmitted from the VMs toward the adapter, the hypervisor layer places them in their respective transmit queues. To prevent head-of-line blocking and ensure that each queue is fairly serviced, the network controller transmits queued packets to the wire in round-robin fashion, thereby guaranteeing some measure of Quality of Service (QoS) to the VMs.
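
A round-robin pass over the per-VM transmit queues might look like the sketch below: taking at most one packet per queue per pass means a busy VM cannot starve the others. Queue contents and names are illustrative assumptions, not controller firmware.

    /* Illustrative round-robin service of per-VM transmit queues (not driver code). */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_VMS 4

    /* Toy transmit queues: each entry is a pending packet length (0 = empty slot). */
    static int tx_queue[MAX_VMS][8] = {
        {1500, 1500, 1500}, {64}, {0}, {512, 512},
    };
    static int tx_head[MAX_VMS];

    static bool dequeue(int vm, int *len)
    {
        if (tx_queue[vm][tx_head[vm]] == 0)
            return false;                 /* this VM's queue is empty */
        *len = tx_queue[vm][tx_head[vm]++];
        return true;
    }

    int main(void)
    {
        bool work = true;
        while (work) {
            work = false;
            /* At most one packet per VM per pass: no queue can monopolize the wire. */
            for (int vm = 0; vm < MAX_VMS; vm++) {
                int len;
                if (dequeue(vm, &len)) {
                    printf("transmit %d bytes from VM %d's queue\n", len, vm);
                    work = true;
                }
            }
        }
        return 0;
    }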
Figure 2. VMDq offloads network I/O management to the network silicon.