

VMware vSphere* 4.1 to Test
vSphere 4.1 incorporates a number of
network performance enhancements that
affect native guest VM throughput and
VMkernel-basedESX/ESXiapplications,
such as vMotion, VMware Fault
Tolerance(FT)Logging,andNFS.These
improvements include the following (note
that observed performance increases will
vary according to the platform and other
external factors):
• vMotion. Increases can yield as much
as a 50 percent reduction in the time
required to migrate a VM from one host
to another.
• Concurrent vMotion instances. vSphere
automatically increases the maximum
number of concurrently allowed vMotion
instances to eight (up from a maximum
of two with vSphere 4.0) when 10GbE
uplinks are employed.
• NFS. Throughput is increased for both
read and write operations.
• VM transmit throughput. Traffic from
VMs out to the physical network also
increases by 10 percent; this gain is
directly related to the vmxnet3 enhancements.
• VM-to-VM traffic. In vSphere 4.1,
VM-to-VM traffic throughput improved
by 2x, reaching up to 19 Gbps.
For more information on the specific
enhancements in VMware vSphere 4.1,
see the VMware document, “What’s
New in vSphere 4.1.”
Best Practices for QoS Control
Before any bandwidth control measures
are taken, it is critical that thorough
consideration be given to the actual
bandwidth being used under maximum
and expected workloads. The key point
to note is that two 10GbE ports can
provide more than double the bidirectional
bandwidth of eight GbE ports, and
comfortably more than that of 12, so many
of the bandwidth-contention concerns found
in a GbE network are not present in a
10GbE network. Like most QoS controls,
bandwidth controls should be implemented
only when actually needed, based on observed data.
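To put rough numbers on that comparison, the following minimal Python sketch (illustrative arithmetic only, using the port counts and line rates cited above) totals the full-duplex capacity of each configuration:

def bidirectional_gbps(ports, speed_gbps):
    # Full duplex: each port can transmit and receive at line rate simultaneously.
    return ports * speed_gbps * 2

print(bidirectional_gbps(2, 10))   # two 10GbE ports:  40 Gbps aggregate
print(bidirectional_gbps(8, 1))    # eight GbE ports:  16 Gbps aggregate
print(bidirectional_gbps(12, 1))   # twelve GbE ports: 24 Gbps aggregate

Achievable throughput is always somewhat below line rate, but the relative headroom is what matters when deciding whether bandwidth controls are needed at all.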
The best practices described in this
section are designed to remedy those
situations where analysis and monitoring
of the network show that QoS issues are
present, although lab testing suggests
that QoS issues are unlikely to arise when
using 10GbE networking.
Thekeytocontrollingtrafcistomaintain
the 10GbE connections as single uplink
ports in the hypervisor. This practice
enables unused throughput from one
group to be used by other groups if
needed.Inadditiontoenablingalltrafc
types to take advantage of the 10GbE
infrastructure, the environment is also
less complex. The best practices in this
section should be used only if network
monitoring shows contention.
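One way to gather that observed data is to sample per-uplink utilization from vCenter before deciding whether any controls are warranted. The following is a minimal sketch using pyVmomi and the standard net.usage.average performance counter; the vCenter address, credentials, host name, and sampling window are placeholder assumptions rather than values from the test environment described here.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; replace with real values.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
perf = content.perfManager

# Map counter names ("group.name.rollup") to counter IDs, then pick
# net.usage.average, which reports per-vmnic throughput in KBps.
counters = {"%s.%s.%s" % (c.groupInfo.key, c.nameInfo.key, c.rollupType): c.key
            for c in perf.perfCounter}
net_usage_id = counters["net.usage.average"]

# Locate the ESXi host whose 10GbE uplinks we want to observe (name assumed).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esx01.example.com")
view.Destroy()

# Pull the most recent real-time samples (20-second interval) for every NIC.
spec = vim.PerformanceManager.QuerySpec(
    entity=host,
    metricId=[vim.PerformanceManager.MetricId(counterId=net_usage_id,
                                              instance="*")],
    intervalId=20,
    maxSample=15)
for result in perf.QueryPerf(querySpec=[spec]):
    for series in result.value:
        if series.value:
            peak_mbps = max(series.value) * 8 / 1000.0  # KBps -> Mbps (approx.)
            print("%s peak: %.0f Mbps" % (series.id.instance or "aggregate",
                                          peak_mbps))

Disconnect(si)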



TheNetworkI/OControl(NetIOC)feature
available in vSphere 4.1 introduces a
software-based approach to partitioning
physical network bandwidth among
thedifferenttypesofnetworktrafc
ows.Itdoessobyprovidingappropriate
QoSpoliciesenforcingtrafcisolation,
predictability, and prioritization, helping IT
organizations overcome the contention that
may arise as the result of consolidation.
The experiments conducted in VMware
performance labs using industry-standard
workloads show that NetIOC:
•MaintainsNFSand/oriSCSIstorage
performance in the presence of other
network traffic such as vMotion and
bursty VMs.
•Providesnetworkservicelevel
guarantees for critical VMs.
•Ensuresadequatebandwidthfor
VMware FT logging.
•EnsurespredictablevMotion
performance and duration.
•Facilitatessituationswhereaminimum
or weighted level of service is required
for a particular traffic type, independent
of other traffic types.
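To make the shares-and-limits model concrete, the following simplified Python sketch (an illustration, not VMware's scheduler code) divides a single contended 10GbE uplink among several active traffic types in proportion to their shares while honoring a per-pool limit. The pool names, share weights, and the FT limit shown are assumptions chosen for the example, not NetIOC defaults.

LINK_GBPS = 10.0

# Illustrative resource pools: name -> (shares, limit in Gbps or None).
POOLS = {
    "virtualMachine": (100, None),
    "vmotion":        (50,  None),
    "nfs":            (100, None),
    "faultTolerance": (60,  1.0),   # capped regardless of available bandwidth
}

def allocate(active):
    """Divide LINK_GBPS among active pools in proportion to their shares,
    honoring per-pool limits and redistributing capacity a capped pool
    cannot use. Shares only matter when the link is actually contended."""
    alloc = {name: 0.0 for name in active}
    remaining = LINK_GBPS
    pending = set(active)
    while pending and remaining > 1e-9:
        total_shares = sum(POOLS[n][0] for n in pending)
        capped = set()
        for name in pending:
            shares, limit = POOLS[name]
            fair_share = remaining * shares / total_shares
            if limit is not None and alloc[name] + fair_share >= limit:
                capped.add(name)
        if not capped:
            for name in pending:
                alloc[name] += remaining * POOLS[name][0] / total_shares
            remaining = 0.0
        else:
            # Give capped pools exactly their limit, then redistribute the rest.
            for name in capped:
                remaining -= POOLS[name][1] - alloc[name]
                alloc[name] = POOLS[name][1]
            pending -= capped
    return alloc

# A vMotion burst while VM, NFS, and FT logging traffic are all active:
for name, gbps in allocate(["virtualMachine", "vmotion", "nfs",
                            "faultTolerance"]).items():
    print("%-15s %.2f Gbps" % (name, gbps))

Because shares are relative, a pool's allocation grows automatically when other pools go idle, which is the behavior that lets unused throughput flow to whichever traffic type needs it.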
Figure: The NetIOC software-based scheduler manages bandwidth resources among various types of traffic (the diagram shows VM, FT, management, NFS, iSCSI, and vMotion flows passing through a teaming policy, shaper, and scheduler onto two 10GbE NICs).
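For reference, the resource pools the scheduler works from can be read back programmatically. The following is a minimal pyVmomi sketch, assuming a vCenter connection and a distributed switch named dvSwitch-10GbE (both placeholders); it enables NetIOC and prints each pool's shares and limit. Changes would be applied through the switch's UpdateNetworkResourcePool method, which is not shown here.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; replace with real values.
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the distributed switch that carries the 10GbE uplinks (name assumed).
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
dvs = next(d for d in view.view if d.name == "dvSwitch-10GbE")
view.Destroy()

# Enable Network I/O Control on the switch if it is not already on.
dvs.EnableNetworkResourceManagement(enable=True)

# List the network resource pools with their current shares and limits.
for pool in dvs.networkResourcePool:
    shares = pool.allocationInfo.shares
    limit = pool.allocationInfo.limit
    limit_text = "unlimited" if limit in (None, -1) else "%d Mbps" % limit
    print("%-20s shares=%d (%s), limit=%s" % (pool.key, shares.shares,
                                              shares.level, limit_text))

Disconnect(si)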
