As shown in Figure 9, NetIOC implements a software scheduler within the vDS to isolate and prioritize specific traffic types contending for bandwidth on the uplinks connecting ESX/ESXi 4.1 hosts with the physical network. NetIOC is able to individually identify and prioritize the following traffic types leaving an ESX/ESXi host on a vDS-connected uplink:
• VM traffic
• Management traffic
• iSCSI
• NFS
• VMware FT logging
• vMotion
NetIOC is particularly applicable to environments where multiple traffic types are converged over a pair of 10GbE interfaces. If an interface is oversubscribed (that is, more than 10 Gbps of data is contending for a 10GbE interface), NetIOC is able to ensure each traffic type is given a selectable and configurable minimum level of service.
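In vSphere 4.1, NetIOC is configured per vDS through network resource pools, each carrying a shares value and an optional limit. The following sketch shows how this might be scripted against the vSphere API using pyVmomi; the vCenter address, credentials, vDS name and share values are hypothetical, and the resource pool key strings (such as "vmotion") should be checked against the pools your own vDS reports.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER_HOST = "vcenter.example.com"   # hypothetical address
DVS_NAME = "dvSwitch-10GbE"            # hypothetical vDS name

ctx = ssl._create_unverified_context()  # lab use only; verify certificates in production
si = SmartConnect(host=VCENTER_HOST, user="administrator", pwd="secret", sslContext=ctx)

# Locate the vDS by name.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.DistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == DVS_NAME)
view.Destroy()

# Turn on NetIOC (network resource management) for the switch.
dvs.EnableNetworkResourceManagement(enable=True)

# Raise the shares for the vMotion pool; other pools keep their defaults.
desired_shares = {"vmotion": 100}      # assumed pool key; verify on your system
specs = []
for pool in dvs.networkResourcePool:
    if pool.key in desired_shares:
        specs.append(vim.DVSNetworkResourcePoolConfigSpec(
            key=pool.key,
            configVersion=pool.configVersion,
            allocationInfo=vim.DVSNetworkResourcePoolAllocationInfo(
                limit=-1,  # no hard cap; shares set the minimum only under contention
                shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom,
                                      shares=desired_shares[pool.key]))))
dvs.UpdateNetworkResourcePool(configSpec=specs)
Disconnect(si)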
Moving from GbE to 10GbE networking typically involves converging traffic from multiple GbE server adapters onto a smaller number of 10GbE ones, as shown in Figure 10. On the top of the figure, dedicated server adapters are used for several types of traffic, including iSCSI, VMware FT, vMotion and NFS. On the bottom of the figure, those traffic classes are all converged onto a single 10GbE server adapter, with the other adapter handling VM traffic.
In the case shown in Figure 10, the total bandwidth for VM traffic has gone from 4 Gbps to 10 Gbps, providing a nominal 2.5x increase, which should easily support the existing traffic and provide substantial headroom for growth and usage peaks. At the same time, however, some network administrators might want to explicitly address cases where different types of network traffic could contend for network bandwidth; for example, prioritizing certain traffic with particularly stringent latency requirements.
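Shares are relative rather than absolute: when an uplink is saturated, each active traffic class is guaranteed at least its proportional slice of the link, its shares divided by the total shares of all active classes. A quick worked calculation (with illustrative share values, not vSphere defaults) makes the arithmetic concrete:

# How NetIOC shares translate into minimum bandwidth on a saturated 10GbE
# uplink, assuming all classes below are actively sending.
LINK_GBPS = 10.0

shares = {                 # hypothetical share assignments per traffic class
    "virtualMachine": 100,
    "vmotion": 50,
    "iscsi": 50,
    "nfs": 25,
    "faultTolerance": 25,
}

total = sum(shares.values())
for pool, s in sorted(shares.items(), key=lambda kv: -kv[1]):
    # Under full contention, each pool gets at least its proportional slice.
    print(f"{pool:<15} {s:>4} shares -> {s / total * LINK_GBPS:.2f} Gbps minimum")

Idle classes drop out of the denominator, so an active class can consume more than its calculated minimum whenever bandwidth is otherwise free.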
The first paper in this series¹ describes the value of data center bridging (DCB) to traffic prioritization within a single physical server adapter. Intel worked with the Institute of Electrical and Electronics Engineers (IEEE) and the Internet Engineering Task Force (IETF) to develop standards for DCB, which is supported in Intel Ethernet 10 Gigabit Server Adapter products. This standard is still being implemented in other elements of the network infrastructure, so VMware has built similar technology into VMware vSphere 4.1 using NetIOC, which can help administrators take optimal advantage of network bandwidth and guarantee minimum service levels for specific classes of traffic.
Figure 10. Traffic from multiple GbE server connections may be converged onto two 10GbE uplinks. Top: a standard virtual switch with dedicated GbE adapters per port group (VM traffic on VLANs A, B and C, vMotion on VLAN-D, Service Console on VLAN-E). Bottom: a vNetwork Distributed Switch trunking VLANs A through E over two 10Gb ports, with VM traffic primary on one uplink and VMkernel traffic primary on the other, each port carrying the other's traffic as secondary.