
UsingatrafcgeneratorsuchasNTTTCP
or NetPerftosendtrafctomultipleVMs
onahostwilldrivereceive-sidetrafcon
the port groups associated with the target
VMs, while using a VMkernel feature such
asvMotionwillshowtrafcontheport
associated with the VMkernel. This allows
the setting up of different port groups
withdifferenttrafcloadswhileusing
different kernel features to see how much
trafcisbeinggeneratedandwhatthe
maximum bandwidth is on the adapters.
There are several video demos that show
differentcongurationspostedonthe
Intel® Server Room site and YouTube*.
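As an illustration, the following sketch drives receive-side traffic by launching concurrent NetPerf TCP_STREAM tests against several target VMs. It assumes netperf is installed on the load-generating machine and netserver is running in each VM; the IP addresses and test duration are hypothetical placeholders.

import subprocess

# Hypothetical IP addresses of target VMs on the host under test.
TARGET_VMS = ["192.0.2.11", "192.0.2.12", "192.0.2.13", "192.0.2.14"]
TEST_SECONDS = 120  # length of each TCP_STREAM test

def start_netperf(vm_ip):
    """Start a netperf TCP_STREAM test against one VM."""
    return subprocess.Popen(
        ["netperf", "-H", vm_ip, "-t", "TCP_STREAM", "-l", str(TEST_SECONDS)],
        stdout=subprocess.PIPE, text=True)

# Launch all tests concurrently so receive-side traffic on the
# target port groups ramps up at the same time.
procs = [start_netperf(ip) for ip in TARGET_VMS]

# Collect and print each test's summary output as it completes.
for ip, proc in zip(TARGET_VMS, procs):
    out, _ = proc.communicate()
    print(f"--- {ip} ---\n{out}")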
Utilizing a 10GbE connection, vMotion under vSphere 4.1 can use up to 8 Gbps of aggregate bandwidth, as opposed to approximately 1 Gbps in ESX 3.5 and 2.6 Gbps in ESX 4.0, as shown in Figure 8. Even with greater than 9.5 Gbps per port being sent to the VMs, vMotion is able to move up to eight VMs concurrently, and VMware vCenter* Server can adjust the amount of bandwidth allocated to vMotion so that VM traffic is not significantly affected by vMotion activity.
While monitoring with esxtop or resxtop, VMkernel traffic can be seen alongside all the other traffic on the different port groups. Following the best practice of using port groups to separate traffic types makes it easy to see how increasing one type of traffic affects the others.
The increase in vMotion bandwidth also emphasizes that advances in VMkernel functions are driving the need for 10GbE faster than actual VM-generated traffic is. While some of these functions do not generate consistent traffic, they can benefit from the higher bandwidth that 10GbE provides. It also supports the move away from the old GbE-based paradigm of dedicating ports to specific functions. The new paradigm of providing 10GbE uplinks to the vDS and allowing all traffic types access to the available bandwidth will provide increased performance while simplifying network deployments.
Network architects should keep in mind that to fully test a 10GbE network connection, artificially exaggerated traffic might be required. Even so, such testing allows for throughput and impact modeling that can be extremely helpful in determining what kind of control measures need to be deployed.
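A simple way to use such test results is to check how the measured peaks for each traffic type stack up against a single 10GbE uplink. The sketch below uses hypothetical traffic types and peak figures standing in for measured values; if the combined peaks exceed the link, that is a hint that a control measure such as traffic shaping or I/O controls is worth considering.

LINK_GBPS = 10.0

# Hypothetical measured peaks (Gbps) per traffic type from exaggerated-load testing.
peak_gbps = {
    "VM traffic": 6.5,
    "vMotion": 8.0,
    "iSCSI": 3.0,
    "Management": 0.2,
}

total = sum(peak_gbps.values())
print(f"Sum of peaks: {total:.1f} Gbps on a {LINK_GBPS:.0f} Gbps uplink")

if total > LINK_GBPS:
    # Peaks cannot all run at full rate simultaneously; some form of
    # control (shares, limits, or traffic shaping) may be needed.
    print("Oversubscribed at peak: consider bandwidth controls for lower-priority traffic.")
else:
    print("Headroom available even if all peaks coincide.")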
Note: When using a dependent hardware iSCSI adapter, performance reporting for a NIC associated with the adapter might show little or no activity, even when iSCSI traffic is heavy. This behavior occurs because the iSCSI traffic bypasses the regular networking stack; that traffic must still be accounted for in the overall network bandwidth requirements.
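Because that iSCSI traffic does not show up in the NIC statistics, one rough way to account for it is to add the storage-side throughput back into the estimate of load on the physical port, as in this sketch. The figures are hypothetical and would come from the storage or iSCSI adapter counters in your environment.

# Hypothetical measurements for one physical 10GbE port.
nic_reported_mbps = 1200.0   # what the network panel shows for the NIC
iscsi_read_mbps = 2400.0     # from storage/iSCSI adapter counters
iscsi_write_mbps = 800.0

# Dependent hardware iSCSI traffic bypasses the regular network stack,
# so the real load on the physical link is the sum of both views.
effective_mbps = nic_reported_mbps + iscsi_read_mbps + iscsi_write_mbps
print(f"Effective load on the port: {effective_mbps:.0f} Mb/s "
      f"({effective_mbps / 10000 * 100:.0f}% of a 10GbE link)")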
Figure 8. VMware vMotion* throughput in various versions of VMware ESX: approximately 1 Gbps in ESX 3.5, approximately 2.6 Gbps in ESX 4.0, and up to 8 Gbps in ESX 4.1. Successive versions of VMware ESX* each support higher levels of throughput for vMotion*.
COLUMN        DESCRIPTION
PORT-ID       Virtual network device port ID
UPLINK        Y means the corresponding port is an uplink; N means it is not
UP            Y means the corresponding link is up; N means it is not
SPEED         Link speed in megabits per second
FDUPLX        Y means the corresponding link is operating at full duplex; N means it is not
USED-BY       Virtual network device port user
DTYP          Virtual network device type: H means “hub” and S means “switch”
DNAME         Virtual network device name
PKTTX/s       Number of packets transmitted per second
PKTRX/s       Number of packets received per second
MbTX/s        Megabits transmitted per second
MbRX/s        Megabits received per second
%DRPTX        Percentage of transmit packets dropped
%DRPRX        Percentage of receive packets dropped
TEAM-PNIC     Name of the physical NIC used for the team uplink
PKTTXMUL/s    Number of multicast packets transmitted per second
PKTRXMUL/s    Number of multicast packets received per second
PKTTXBRD/s    Number of broadcast packets transmitted per second
PKTRXBRD/s    Number of broadcast packets received per second

Table 1. Network panel statistics
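As a usage illustration, the sketch below applies the %DRPTX and %DRPRX columns from Table 1 to flag ports whose drop rates suggest a saturated uplink. The sample rows and the threshold are hypothetical; real values would come from the esxtop network panel.

# Hypothetical rows using the column names from Table 1.
ports = [
    {"PORT-ID": "50331650", "USED-BY": "vmnic2",        "%DRPTX": 0.0, "%DRPRX": 0.0},
    {"PORT-ID": "50331655", "USED-BY": "vmk0",          "%DRPTX": 0.0, "%DRPRX": 0.3},
    {"PORT-ID": "50331660", "USED-BY": "web-vm01.eth0", "%DRPTX": 2.1, "%DRPRX": 0.0},
]

DROP_THRESHOLD = 1.0  # percent; an arbitrary example threshold

for p in ports:
    if p["%DRPTX"] > DROP_THRESHOLD or p["%DRPRX"] > DROP_THRESHOLD:
        # Sustained drops on a port hint that the uplink is saturated
        # and that bandwidth controls may be warranted.
        print(f"Port {p['PORT-ID']} ({p['USED-BY']}): "
              f"TX drops {p['%DRPTX']}%, RX drops {p['%DRPRX']}%")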
