As shown in Figure 9, NetIOC implements a software scheduler within the vDS to isolate and prioritize specific traffic types contending for bandwidth on the uplinks connecting ESX/ESXi 4.1 hosts with the physical network. NetIOC is able to individually identify and prioritize the following traffic types leaving an ESX/ESXi host on a vDS-connected uplink (see the sketch after this list):
•VMtraffic
•Managementtraffic
•iSCSI
•NFS
•VMwareFTlogging
•vMotion
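These traffic types surface in the vSphere API as network resource pools on the vDS. As a minimal sketch of how an administrator might inspect them, the following Python uses pyVmomi against a vCenter server; the host name and credentials are placeholders, and pyVmomi itself postdates vSphere 4.1 even though the underlying API objects were introduced with that release:

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder host and credentials; TLS certificate handling omitted.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator", pwd="secret")
content = si.RetrieveContent()

# Walk the inventory for distributed virtual switches.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.DistributedVirtualSwitch], True)
for dvs in view.view:
    print(dvs.name)
    # Each NetIOC traffic class appears as a network resource pool.
    for pool in dvs.networkResourcePool or []:
        alloc = pool.allocationInfo
        print("  %-16s shares=%s limit=%s"
              % (pool.key, alloc.shares.shares, alloc.limit))
Disconnect(si)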
NetIOC is particularly applicable to environments where multiple traffic types are converged over a pair of 10GbE interfaces. If an interface is oversubscribed (that is, more than 10 Gbps of data is contending for a 10GbE interface), NetIOC is able to ensure each traffic type is given a selectable and configurable minimum level of service.
Moving from GbE to 10GbE networking typically involves converging traffic from multiple GbE server adapters onto a smaller number of 10GbE ones, as shown in Figure 10. On the top of the figure, dedicated server adapters are used for several types of traffic, including iSCSI, VMware FT, vMotion and NFS. On the bottom of the figure, those traffic classes are all converged onto a single 10GbE server adapter, with the other adapter handling VM traffic.
In the case shown in Figure 10, the total bandwidth for VM traffic has gone from 4 Gbps to 10 Gbps, providing a nominal 2.5x increase, which should easily support the existing traffic and provide substantial headroom for growth and usage peaks. At the same time, however, some network administrators might want to explicitly address cases where different types of network traffic could contend for network bandwidth; for example, prioritizing certain traffic with particularly stringent latency requirements.
The rstpaperinthisseries
1
describes
the value of data center bridging (DCB)
totrafcprioritizationwithinasingle
physical server adapter. Intel worked with
the Institute of Electrical and Electronics
Engineers (IEEE) and the Internet Engineering
Task Force (IETF) to develop standards for
DCB, which is supported in Intel Ethernet
10 Gigabit Server Adapter products. This
standard is still being implemented in other
elements of the network infrastructure,
so VMware has built similar technology
into VMware vSphere 4.1 using NetIOC,
which can help administrators take optimal
advantage of network bandwidth and
guarantee minimum service levels for
specicclassesoftrafc.
Figure 10. Traffic from multiple GbE server connections may be converged onto two 10GbE uplinks.
