VMware vSphere* 4.1 to Test
vSphere 4.1 incorporates a number of network performance enhancements that affect native guest VM throughput and VMkernel-based ESX/ESXi applications, such as vMotion, VMware Fault Tolerance (FT) logging, and NFS. These improvements include the following (note that observed performance increases will vary according to the platform and other external factors):
• vMotion. Throughput increases can generate as much as a 50 percent reduction in the time required to migrate a VM from one host to another.
• vSphere will
automatically increase the maximum
number of concurrently allowed vMotion
instances to eight (up from a maximum
of two with vSphere 4.0) when 10GbE
uplinks are employed.
• NFS. Throughput is increased for both read and write operations.
• VM traffic throughput going out to the physical network also increases by 10 percent; this improvement is directly related to the vmxnet3 enhancements.
• In vSphere 4.1, VM-to-VM traffic throughput doubled, reaching up to 19 Gbps.
For more information on the specific
enhancements in VMware vSphere 4.1,
see the VMware document, “What’s
New in vSphere 4.1.”
Best Practices for QoS Control
Before any bandwidth control measures
are taken, it is critical that thorough
consideration be given to the actual
bandwidth being used under maximum
and expected workloads. The key point is that two 10GbE ports provide 40 Gbps of bidirectional bandwidth, more than even eight to 12 GbE ports can deliver in aggregate, so many of the bandwidth-contention concerns found in a GbE network are not present in a 10GbE network. Like most QoS controls, bandwidth controls should be implemented only when actually needed, based on observed data.
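To make the comparison concrete, the following back-of-the-envelope Python calculation (a minimal sketch, using the port counts and line rates cited above) compares the aggregate bidirectional bandwidth of two 10GbE ports with that of eight to 12 GbE ports.

def bidirectional_gbps(port_count, line_rate_gbps):
    # Each full-duplex port carries its line rate in both directions at once.
    return port_count * line_rate_gbps * 2

print(bidirectional_gbps(2, 10.0))   # 2 x 10GbE -> 40 Gbps bidirectional
print(bidirectional_gbps(8, 1.0))    # 8 x GbE   -> 16 Gbps bidirectional
print(bidirectional_gbps(12, 1.0))   # 12 x GbE  -> 24 Gbps bidirectional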
The best practices described in this
section are designed to remedy those
situations where analysis and monitoring
of the network shows that QoS issues are
present, although lab testing suggests
that QoS issues are unlikely to arise when
using 10GbE networking.
The key to controlling traffic is to maintain the 10GbE connections as single uplink ports in the hypervisor. This practice enables unused throughput from one group to be used by other groups if needed. In addition to enabling all traffic types to take advantage of the 10GbE infrastructure, this approach also keeps the environment less complex. The best practices in this section should be used only if network monitoring shows contention.
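The value of presenting each 10GbE connection as a single shared uplink can be illustrated with a small model. The sketch below is illustrative only; the traffic groups and demand figures are hypothetical rather than measured. It shows how capacity left idle by lightly loaded traffic types becomes available to busier ones, which cannot happen when each traffic type is confined to its own dedicated GbE port.

def allocate(capacity_gbps, demands):
    # Work-conserving split of one shared uplink: no group receives more than
    # it asks for, and capacity left idle by one group remains available to
    # the rest (simple iterative fair sharing).
    alloc = {name: 0.0 for name in demands}
    unsatisfied = dict(demands)
    remaining = capacity_gbps
    while remaining > 1e-9 and unsatisfied:
        fair = remaining / len(unsatisfied)
        for name, want in list(unsatisfied.items()):
            give = min(fair, want - alloc[name])
            alloc[name] += give
            remaining -= give
            if alloc[name] >= want - 1e-9:
                del unsatisfied[name]
    return alloc

# Hypothetical offered loads (Gbps) sharing one 10GbE uplink.
print(allocate(10.0, {"VM traffic": 6.0, "vMotion": 5.0,
                      "NFS": 2.0, "FT logging": 0.5}))

In this example, NFS and FT logging take only the 2.5 Gbps they need, and the remaining 7.5 Gbps is shared between VM traffic and vMotion; with dedicated GbE ports, that spare capacity would simply go unused.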
The Network I/O Control (NetIOC) feature available in vSphere 4.1 introduces a software-based approach to partitioning physical network bandwidth among the different types of network traffic flows. It does so by providing QoS policies that enforce traffic isolation, predictability, and prioritization, helping IT organizations overcome the contention that may arise as a result of consolidation.
The experiments conducted in VMware
performance labs using industry-standard
workloads show that NetIOC:
• Maintains NFS and/or iSCSI storage performance in the presence of other network traffic such as vMotion and bursty VMs.
• Provides network service level guarantees for critical VMs.
• Ensures adequate bandwidth for VMware FT logging.
• Ensures predictable vMotion performance and duration.
• Facilitates situations where a minimum or weighted level of service is required for a particular traffic type, independent of other traffic types.
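As a rough illustration of how a shares-and-limits scheme of this kind behaves, the conceptual model below (not VMware's implementation; the pool names, share values, and limit are hypothetical) divides a saturated 10GbE uplink in proportion to each traffic type's shares and then clips any type that carries a host limit, redistributing the remainder among the others.

def shares_and_limits(capacity_gbps, pools):
    # pools: name -> (shares, limit_in_gbps_or_None). Assumes every traffic
    # type has enough demand to use whatever it is given (saturated uplink).
    alloc = {}
    active = dict(pools)
    remaining = capacity_gbps
    while active:
        total_shares = sum(shares for shares, _ in active.values())
        capped = {name: limit for name, (shares, limit) in active.items()
                  if limit is not None
                  and remaining * shares / total_shares > limit}
        if not capped:
            for name, (shares, _) in active.items():
                alloc[name] = remaining * shares / total_shares
            break
        for name, limit in capped.items():
            alloc[name] = limit      # the limit overrides the proportional share
            remaining -= limit
            del active[name]
    return alloc

# Hypothetical configuration: FT logging capped at 1 Gbps.
pools = {"VM traffic": (100, None), "vMotion": (50, None),
         "NFS": (50, None), "FT logging": (25, 1.0)}
print(shares_and_limits(10.0, pools))

With these weights, VM traffic is assured roughly 4.5 Gbps under full contention, while the limit holds FT logging to 1 Gbps even though its shares would entitle it to slightly more.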
[Figure: The NetIOC software-based scheduler manages bandwidth resources among various types of traffic (VM, FT logging, management, NFS, iSCSI, and vMotion) across teamed 10GbE NICs.]