
Hypervisor-bypass hardware offload using VMware
DirectPath* I/O allows a VM to be directly assigned to
a dedicated network port, bypassing the virtual switch
completely. The PCI-SIG Single-Root I/O Virtualization
(SR-IOV) specification, newly supported in vSphere 5.1,
allows a single network port to present itself as multiple entities
known as “virtual functions” (VFs), each of which appears as
a separate adapter that can be directly assigned to a VM.
This new capability greatly increases the usefulness of direct
assignment by allowing multiple VMs to bypass the hypervisor
using a single port. Hardware offloads always warrant careful
evaluation, however, because they often bypass many valuable
features to achieve their performance benefits. Here are some
of the benefits and limitations to consider:
- Benefits. SR-IOV can significantly reduce latency and increase
throughput beyond what is possible using hardware-assisted
sharing with VMDq. Those gains make it possible to virtualize
workloads that would otherwise not be feasible to virtualize.
- Limitations. SR-IOV uses virtual functions that appear to
the hypervisor as unique PCIe* devices (rather than network
uplinks), so the virtual switch cannot dynamically allocate
shared resources. Further, SR-IOV can only be enabled or
disabled for an entire physical port, which prevents the
hypervisor and other tools from controlling resources
on that port. See the VMware vSphere 5.1 Network
Configuration documentation for a list of further limitations.
Both 10 Gigabit Intel Ethernet Converged Network Adapters and
vSphere 5.1 support network resource sharing with both VMDq
and SR-IOV. Because the VMware VDS can dynamically allocate
resources with VMDq but not with SR-IOV, the VDS with VMDq is
the recommended default choice for network I/O resource sharing.
At the same time, VMware DirectPath with SR-IOV enables
virtualization of workloads that could not otherwise be virtualized,
making it a valuable special-case technology where it is needed.
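As a rough illustration of that distinction, the following sketch uses the pyVmomi Python bindings for the vSphere API to report which of a VM's network adapters are backed by a virtual switch and which are SR-IOV passthrough devices. It is a minimal sketch, assuming an established connection, a vim.VirtualMachine object named vm, and that SR-IOV adapters are exposed to the API as VirtualSriovEthernetCard devices, as introduced with vSphere 5.1.

    from pyVmomi import vim

    def classify_vm_nics(vm):
        # Walk the VM's virtual hardware and report each network adapter.
        for dev in vm.config.hardware.device:
            if not isinstance(dev, vim.vm.device.VirtualEthernetCard):
                continue
            if isinstance(dev, vim.vm.device.VirtualSriovEthernetCard):
                # SR-IOV VFs bypass the virtual switch, so VDS features such as
                # Network I/O Control and traffic shaping do not apply to them.
                kind = "SR-IOV passthrough (VF)"
            else:
                kind = "virtual switch backed"
            print(dev.deviceInfo.label, dev.macAddress, kind)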
4 Software Entities at the Heart of the Virtualized Network
vSphere supports a set of virtual networking elements that
provide capabilities for networking VMs in the data center
similar to how physical machines are networked in a physical
environment. Because these elements are abstracted away from
the physical plane, they decouple workloads from specific
physical resources, allowing those resources to be assigned
dynamically. The resulting efficiency gains are at the heart of
resource elasticity in the data center, as well as in the cloud.
This section introduces key software entities in vSphere
networking, including a description, where appropriate, of how
they differ between environments based on VSSs and those
based on VDSs.
4.1 Virtual Network Interface Cards (Virtual NICs)
Each VM has one or more virtual NICs. The guest OS and
application programs communicate with virtual NICs through
either a commonly available device driver or a VMware device
driver optimized for the virtual environment. In either case,
communication by the guest OS occurs just as it would with
a physical device. Outside the VM, the virtual NIC has its own
MAC address and one or more IP addresses. It responds to the
standard Ethernet protocol just as a physical NIC would, and
from the perspective of an outside agent, communicating with
a virtual NIC is identical to communicating with a physical one.
Network redundancy for these virtual NICs is typically provided
at the port group level on the virtual switch, but because SR-IOV
bypasses the virtual switch, redundancy must instead be configured
in the guest. This is accomplished by using teaming software in the
guest OS to team two VFs from two different physical ports.
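To make the virtual NIC constructs above concrete, the following pyVmomi sketch builds a device specification for a VMXNET3 adapter, VMware's paravirtualized NIC, backed by a distributed port group. The function name is illustrative, and the switch UUID and port group key are assumed to come from an existing VDS in the environment.

    from pyVmomi import vim

    def vmxnet3_on_dvportgroup(dvs_uuid, portgroup_key):
        # "Add device" spec for a VMXNET3 virtual NIC whose backing is a port
        # on a distributed port group rather than a standard-switch network.
        nic = vim.vm.device.VirtualVmxnet3()
        nic.backing = vim.vm.device.VirtualEthernetCard.DistributedVirtualPortBackingInfo(
            port=vim.dvs.PortConnection(switchUuid=dvs_uuid, portgroupKey=portgroup_key))
        nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
            startConnected=True, connected=True, allowGuestControl=True)

        spec = vim.vm.device.VirtualDeviceSpec()
        spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
        spec.device = nic
        return spec

    # The spec would then be applied by reconfiguring the VM, for example:
    # vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[vmxnet3_on_dvportgroup(uuid, key)]))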
4.2 Port Groups and Distributed Port Groups
Port groups in a VSS specify port configuration for each member
virtual port. A VM (using its virtual NIC) connects to the virtual
ports that are part of the given port group. VMs that connect
to the same port group belong to the same network inside
the virtual environment, allowing them to exchange data.
Administrators can configure port groups to enforce policies
that provide enhanced security, network segmentation, better
performance, HA, and traffic management.
The corresponding entity on a VDS is the distributed port group,
which spans multiple hosts and defines how connections are
made through the VDS to the network. Each VDS supports up to
10,000 static port groups. Configuration settings such as Virtual
LAN (VLAN) IDs, traffic shaping parameters, teaming and load
balancing configuration, and port security are configured through
distributed port groups, ensuring the configuration consistency
for VMs and virtual ports that is necessary for functions such as
live migration using vMotion. The port group construct provides
the flexibility and agility that is the foundation for a software-
defined network. For more information on VDSs, refer to the
VMware vSphere Distributed Switch Best Practices.
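As a sketch of how a distributed port group is defined programmatically, the following pyVmomi fragment adds a static port group to an existing VDS. The function and parameter names are illustrative, and the 128-port default is an assumption rather than a recommendation; the defaultPortConfig object is where the policies listed above (VLAN IDs, teaming, traffic shaping, port security) would be set.

    from pyVmomi import vim

    def add_static_portgroup(dvs, name, num_ports=128):
        # 'dvs' is an existing vim.DistributedVirtualSwitch managed object.
        spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
        spec.name = name
        spec.numPorts = num_ports
        spec.type = "earlyBinding"  # static binding
        # Per-port policies (VLAN, teaming, shaping, security) hang off
        # defaultPortConfig and apply consistently on every host in the VDS.
        spec.defaultPortConfig = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
        return dvs.AddDVPortgroup_Task([spec])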
4.3 VLANs and Private VLANs
A VLAN enables a software-defined means of logically
segmenting a network, similar to using different network cables
to attach to a physical switch. That is, VMs or physical hosts
assigned to separate VLANs can use shared network connections
and other resources while being restricted from communicating
with one another. Typically each VLAN is assigned a separate IP
subnet on the overall network. Much as with separate physical
LANs, passing traffic between VLANs must be accomplished
through a routing device.
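Tying this back to distributed port groups, the sketch below builds a port setting that places a port group on a single VLAN; the helper name and the example VLAN ID of 100 are purely illustrative. Port groups assigned different VLAN IDs in this way can share the same uplinks but need a routing device to reach one another, as described above.

    from pyVmomi import vim

    def vlan_port_setting(vlan_id):
        # A single (non-trunked) VLAN assignment for a distributed port group.
        vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(vlanId=vlan_id,
                                                                 inherited=False)
        return vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(vlan=vlan)

    # Applied by reconfiguring an existing port group, for example:
    # spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    #     configVersion=portgroup.config.configVersion,
    #     defaultPortConfig=vlan_port_setting(100))
    # portgroup.ReconfigureDVPortgroup_Task(spec)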