
Figure11showshowNetIOCiscongured
through the vSphere Client on vCenter
Server. The  tab
within the vDS enables administrators to
specifymaximum(HostLimit,measuredin
megabitspersecond(Mb/s))andminimum
(Shares Value, represented as a proportion
of the total) bandwidth on the physical
serveradaptertoeachtrafcclass.
Inthisexample,theaggregateVMtrafc
issubjecttoalimitationof500Mb/sof
bandwidth, regardless of how much is
available. Because the assigned Shares
Values add up to a total of 400 shares
(100+100+50+50+50+50),andVM
trafchasbeenassignedaminimumof
50shares,itisguaranteedaminimum
of50/400,orone-eighth,ofthetotal
bandwidth available from the physical
server adapter. This aspect of this new 4.1
featurespecicallyaddressestheconcerns
that many administrators have voiced when
discussing the move away from dedicated
GbE ports to shared 10GbE ports.
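As a quick sanity check on that arithmetic, the short Python sketch below reproduces the calculation. Only the VM traffic Shares Value (50), the 400-share total, and the 500 Mb/s Host Limit come from the example above; the names and individual values of the other traffic classes are illustrative placeholders chosen to match the total.

```python
# Shares Values: six traffic classes totaling 400 shares, as in the example.
# Only the VM traffic value (50) is taken from the text; the split of the
# remaining 350 shares across the other classes is illustrative.
shares = {
    "VM traffic": 50,
    "class_b": 100,
    "class_c": 100,
    "class_d": 50,
    "class_e": 50,
    "class_f": 50,
}

total_shares = sum(shares.values())                        # 400 shares
guaranteed_fraction = shares["VM traffic"] / total_shares  # 50/400 = 0.125

adapter_mbps = 10_000   # assuming a single 10GbE physical adapter
host_limit_mbps = 500   # Host Limit applied to VM traffic in the example

guaranteed_mbps = guaranteed_fraction * adapter_mbps    # 1,250 Mb/s under contention
effective_mbps = min(guaranteed_mbps, host_limit_mbps)  # the 500 Mb/s limit still caps it

print(f"share of adapter: {guaranteed_fraction:.3f}, "
      f"entitlement: {guaranteed_mbps:.0f} Mb/s, "
      f"after Host Limit: {effective_mbps:.0f} Mb/s")
```

The output (0.125 of the adapter, or 1,250 Mb/s) confirms the one-eighth guarantee, while the Host Limit keeps actual VM traffic at or below 500 Mb/s.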
Partitioningtrafctheold-schoolway,
eitherbyphysicallysegmentingthetrafc
on dedicated GbE ports or physically
dividing up a 10GbE port into multiple
ports with dedicated bandwidth limits
using proprietary technologies adds
unnecessary complexity and cost. NetIOC
is a more effective way to segregate
bandwidth because it is dynamic and limits
trafconlywhenthereiscongestionon
the port. The other methods place static
limitsandleavesignicantbandwidth
unused,signicantlyreducingthevalue
that 10GbE brings to virtualization.
Note: NetIOC does not support the use of dependent hardware iSCSI adapters. The iSCSI traffic resource pool shares do not apply to iSCSI traffic on a dependent hardware iSCSI adapter.
Configuring the Shares Value
The Sharesvaluespeciestherelative
importanceofatrafctypescheduled
for transmission on a physical server
adapter.Sharesarespeciedinabstract
units between 1 and 100. The bandwidth
forthelinkisdividedamongthetrafc
types according to their relative shares
value. For example, consider the case of
two 10GbE links; for a total of 20Gbps
of bandwidth in each direction, with VM
trafcsetto100shares,vMotiontrafc
setto50shares,andVMwareFTlogging
trafcsetto50shares.
IfVMtrafcandvMotiontrafcareboth
contending for the bandwidth on teamed
10GbEports,theVMtrafc(100shares)
will get 67 percent (13.4 Gbps) of the
link,andvMotion(50shares)willget33
percent (6.7 Gbps) of the link. If all three
ofthesetrafctypesareactiveand
contendingforthelink,VMtrafc(100
shares)willget50percent(10Gbps),
vMotion(50shares)willget25percent(5
Gbps),andVMwareFTlogging(50shares)
willget25percent(5Gbps).Ifnoother
trafctypesarecontendingforthelink
atthatmoment,eachtrafctypecan
consume the entire link (or up to the host
limit, if set).
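The proportional split described above can be sketched in a few lines of Python. This is a simplified model of the arithmetic, not the ESX scheduler itself (NetIOC enforces shares per physical adapter and only under congestion, as noted earlier); the function name and structure are purely illustrative, and only the share values and the 20 Gbps teamed capacity come from the example.

```python
def allocate_bandwidth(active_shares, link_gbps, host_limits=None):
    """Split link bandwidth among the traffic types that are actively
    contending, in proportion to their Shares Values; an optional per-type
    host limit (in Gbps) caps the result."""
    host_limits = host_limits or {}
    total = sum(active_shares.values())
    allocation = {}
    for name, share in active_shares.items():
        entitlement = link_gbps * share / total
        cap = host_limits.get(name)
        allocation[name] = entitlement if cap is None else min(entitlement, cap)
    return allocation


link_gbps = 20  # two teamed 10GbE links, as in the example

# VM traffic and vMotion contending (100:50) -> roughly 13.3 and 6.7 Gbps
print(allocate_bandwidth({"VM traffic": 100, "vMotion": 50}, link_gbps))

# All three contending (100:50:50) -> 10, 5, and 5 Gbps
print(allocate_bandwidth({"VM traffic": 100, "vMotion": 50, "FT logging": 50}, link_gbps))
```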
I/OControlconfigurationisperformedfromthevSphere*client.
