
[Figure: Intel® Ethernet Server Adapter X520-2 PCI Express* (PCIe*) Lane Width Comparison — IxChariot performance data for 2-port and 1-port tests. X-axis: IxChariot application buffer size (bytes): 8192, 16384, 32768, 65536. Y-axes: throughput (Mb/s) and CPU utilization (%). Series: x4, x8, x4 CPU, x8 CPU.]
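The lane-width comparison in the figure can be sanity-checked with simple arithmetic: a PCIe Gen2 lane signals at 5 GT/s with 8b/10b encoding, leaving roughly 4 Gb/s of usable data rate per lane. The sketch below (an illustration, not from the paper; it ignores PCIe protocol overhead, so real-world numbers are somewhat lower) shows why an x4 link cannot feed a dual-port 10GbE adapter at line rate while an x8 link can:

```python
# Back-of-the-envelope PCIe Gen2 bandwidth check.
# Assumption: 5 GT/s per lane, 8b/10b line coding -> 4 Gb/s usable per lane;
# TLP/DLLP protocol overhead is ignored for simplicity.

GEN2_GT_PER_LANE = 5.0        # giga-transfers per second per lane
ENCODING_EFFICIENCY = 8 / 10  # 8b/10b coding: 8 data bits per 10 line bits

def usable_gbps(lanes: int) -> float:
    """Peak one-direction data rate (Gb/s) for a PCIe Gen2 link."""
    return lanes * GEN2_GT_PER_LANE * ENCODING_EFFICIENCY

dual_port_demand = 2 * 10.0   # two 10GbE ports at line rate

for lanes in (4, 8):
    rate = usable_gbps(lanes)
    verdict = "sufficient" if rate >= dual_port_demand else "insufficient"
    print(f"x{lanes}: {rate:.0f} Gb/s usable -> {verdict} "
          f"for {dual_port_demand:.0f} Gb/s of 10GbE traffic")
```

With these assumptions, x4 tops out at 16 Gb/s (below the 20 Gb/s a dual-port adapter can demand) while x8 provides 32 Gb/s, consistent with the throughput gap the figure shows.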
from VMware.4 VMware can also provide guidance on adopting new methods and architectures when implementing virtualization.
Once these considerations are well understood, the next step is to determine their impact on QoS requirements. Bandwidth control may or may not be needed to ensure the proper allocation of network resources to support QoS and to ensure that all 10GbE ports are performing at optimal levels. More specifically, network engineers must determine which areas of the network require bandwidth control to meet these requirements. The remainder of this paper addresses the process of identifying those network areas in terms of three types of best practices: analysis, monitoring, and control.

To ensure optimal availability of throughput on 10GbE uplinks, the proper performance-enhancing features must be enabled and used. For example, using a dual-port 10GbE server adapter on a PCI Express* (PCIe*) Gen2 x8 connection and enabling VMware NetQueue* is vital in order to get 10 gigabits per second (Gbps) of throughput. Without NetQueue enabled, the hypervisor's virtual switch is restricted to the use of a single processor core, and its processing limitations constrain receive-side (Rx) throughput, in most cases, to 4–6 Gbps.
Relative to GbE ports, this bottleneck assumes even greater importance after migrating to 10GbE. Intel has worked with VMware to deliver support for Intel® Virtual Machine Device Queues (Intel® VMDq),5 which provides multiple network queues and a hardware-based sorter/classifier built into the Intel® Ethernet Controller. In combination with NetQueue, VMDq spreads the network processing over multiple queues and CPU cores, allowing for near-native 9.5 Gbps throughput.6
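On older ESX/ESXi releases NetQueue is governed by the VMkernel boot option `netNetqueueEnabled` (on by default in recent versions). As a rough sketch only — the setting name and ESXi 5.x-era `esxcli` syntax shown here are assumptions that should be verified against VMware's documentation for your release — checking and enabling it might look like:

```shell
# Show the current value of the VMkernel boot option controlling NetQueue
# (assumed setting name: netNetqueueEnabled).
esxcli system settings kernel list -o netNetqueueEnabled

# Enable NetQueue; the change takes effect after a host reboot.
esxcli system settings kernel set -s netNetqueueEnabled -v TRUE
```

The NIC driver must also expose multiple queues (for example, VMDq enabled on the Intel® Ethernet Controller) for NetQueue to spread receive processing across cores.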
A dual-port 10GbE server adapter using a PCI Express Gen2 connection can deliver near-native throughput.3
