


Using 10GbE connections when deploying
virtualization can make data centers more
cost-effective and easier to manage.
Ethernet bandwidth and connectivity
requirements should be established with
due regard to which vSphere features
will be used. That approach allows
use cases to be developed to create
appropriate network designs. The key is
to fully understand the actual bandwidth
requirements, based on bandwidth
analysisandtrafccharacterizations,
before implementing any designs.
Consider the base recommended network
model for connecting ESXi* hosts:
•AvNetworkDistributedSwitch(vDS)
for VM Ethernet connectivity
•Two10GbEuplinks
•PortgroupsandVLANstoseparate
traffic types for performance, isolation,
and security
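To make the model concrete, here is a minimal sketch of how that layout might be captured as a design worksheet before any configuration work begins. The switch name, uplink names, port group names, and VLAN IDs are all hypothetical placeholders, not values prescribed by this paper.

```python
# Hypothetical design worksheet for the base dual-10GbE model: one vDS,
# two 10GbE uplinks, and VLAN-backed port groups that separate traffic
# types for performance, isolation, and security.
BASE_DESIGN = {
    "switch": "vDS-01",               # vNetwork Distributed Switch
    "uplinks": ["vmnic0", "vmnic1"],  # the two 10GbE physical adapters
    "port_groups": {
        # traffic type -> VLAN ID (all IDs are illustrative)
        "management": 10,
        "vmotion": 20,
        "fault_tolerance": 30,
        "ip_storage": 40,   # iSCSI or NAS, if unified with data traffic
        "vm_data": 100,
    },
}

def describe(design: dict) -> None:
    """Print the uplink and VLAN layout for design review."""
    print(f"{design['switch']}: uplinks {', '.join(design['uplinks'])}")
    for name, vlan in design["port_groups"].items():
        print(f"  port group '{name}' -> VLAN {vlan}")

describe(BASE_DESIGN)
```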
Whilethiscongurationiscoveredinthe
white paper, “Simplify VMware vSphere*
4 Networking with Intel® Ethernet 10
Gigabit Server Adapters,
1
new features
and enhancements in the VMware
vSphere* 4.1 release make it worthwhile
to revisit existing and future network
designs. Additional discussions can be
found in Intel blogs on the subject.
3
Thisdual10GbEuplinkconguration
replaces the previous multiple GbE
congurationthatwasusedpriorto
10GbE becoming mainstream. While it
may seem intuitive to try to divide a
10GbE connection into multiple network
connections to mimic the physical
separation of a GbE architecture, doing
so adds complexity and additional
management layers. Moreover, it also
signicantlyreducesthebandwidthand
simplicationbenetsthatthemove
to 10GbE provides. In such cases, new
practicesspecicallycreatedforuse
with 10GbE are strategically vital. The
rststepindeterminingbandwidth
requirements is to identify what type of
trafcwillbedeployedonthenetwork
and how:
• Determine which vSphere features will be used. Many capabilities, such as VMware Fault Tolerance (VMware FT) and vMotion, can use large amounts of bandwidth. These kernel-based features can actually require more peak bandwidth than the VMs on the host.
• Characterize the workloads of the VMs to be deployed. Some VMs are memory and CPU intensive with little I/O, while others require only low memory but high I/O and CPU. Understanding the specific characteristics and requirements of the relevant VMs is critical to identifying where bottlenecks may reside.
• Estimate the number of VMs per host. This characteristic also has direct bearing on expected average and peak Ethernet bandwidth requirements. The optimal number is becoming more and more dynamic as vMotion and Dynamic Resource Scheduling become more prevalent in data center deployments, so a balance of peak and idle requirements must be considered.
• Establish whether IP-based storage, such as iSCSI or NAS, will be used. Associated usage models require moving large amounts of data around the network, which has a direct impact on bandwidth requirements. Based on those requirements, network architects must decide whether IP-based storage will be unified with data traffic or remain on its own network.
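To make the estimation concrete, the sketch below tallies average and peak per-host bandwidth from the four questions above and compares the result against two 10GbE uplinks. Every workload figure here is an invented placeholder; real numbers must come from bandwidth analysis and traffic characterization.

```python
# Hypothetical per-host bandwidth estimate built from the four questions
# above. All traffic figures are illustrative, not measured values.
VM_PROFILES_GBPS = {
    # VM type -> (count on host, average Gbps per VM, peak Gbps per VM)
    "web": (10, 0.05, 0.2),
    "database": (4, 0.3, 1.0),
}

KERNEL_FEATURES_GBPS = {
    # Kernel-based services can require more peak bandwidth than the VMs.
    "vmotion": 5.0,          # burst while a migration is in flight
    "fault_tolerance": 1.0,  # steady VMware FT logging traffic
    "ip_storage": 3.0,       # iSCSI/NAS peak, if unified with data traffic
}

UPLINK_CAPACITY_GBPS = 2 * 10.0  # two 10GbE uplinks

average = sum(count * avg for count, avg, _ in VM_PROFILES_GBPS.values())
peak = (sum(count * pk for count, _, pk in VM_PROFILES_GBPS.values())
        + sum(KERNEL_FEATURES_GBPS.values()))

print(f"average VM load: {average:.1f} Gbps")
print(f"worst-case peak: {peak:.1f} Gbps of {UPLINK_CAPACITY_GBPS:.0f} Gbps")
if peak > UPLINK_CAPACITY_GBPS:
    print("peak exceeds capacity: revisit placement or rely on QoS controls")
```

In this invented example the host idles far below capacity but approaches it when a vMotion burst coincides with peak VM and storage traffic, which is exactly the contention that QoS controls must handle on a unified network.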
Security requirements may vary between different services and other aspects of a data center. In most cases, VLANs provide adequate separation between traffic types, although physical separation may be desirable in some cases. The number of 10GbE uplinks needed may therefore be based in part on physical security requirements, rather than bandwidth requirements. Implementers are encouraged to refer to additional security and hardening documentation.
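The separation decision itself can be made explicit. The sketch below encodes this section's rule of thumb: VLANs are the default separator, and dedicated physical uplinks are reserved for traffic classes whose security policy demands them. The class names and policy flags are hypothetical.

```python
# Hypothetical helper that assigns each traffic class either VLAN
# separation on the shared 10GbE uplinks or a dedicated physical uplink,
# reserving physical separation for explicit security requirements.
TRAFFIC_CLASSES = {
    # traffic class -> does security policy require physical separation?
    "management": False,
    "vmotion": False,
    "vm_data": False,
    "regulated_workload": True,  # illustrative high-security class
}

def plan_separation(classes: dict) -> tuple:
    """Split traffic classes into VLAN-separated and physically separated."""
    vlan_separated = [c for c, physical in classes.items() if not physical]
    dedicated = [c for c, physical in classes.items() if physical]
    return vlan_separated, dedicated

vlan_separated, dedicated = plan_separation(TRAFFIC_CLASSES)
print("VLAN separation on shared uplinks:", ", ".join(vlan_separated))
print("dedicated uplinks required:", ", ".join(dedicated) or "none")
# Each class in `dedicated` implies at least one 10GbE uplink beyond
# what bandwidth requirements alone would dictate.
```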