vSphere Networking 6.7

Table of Contents
- About vSphere Networking
- Introduction to vSphere Networking
- Setting Up Networking with vSphere Standard Switches
- Setting Up Networking with vSphere Distributed Switches
- vSphere Distributed Switch Architecture
- Create a vSphere Distributed Switch
- Upgrade a vSphere Distributed Switch to a Later Version
- Edit General and Advanced vSphere Distributed Switch Settings
- Managing Networking on Multiple Hosts on a vSphere Distributed Switch
- Tasks for Managing Host Networking on a vSphere Distributed Switch
- Add Hosts to a vSphere Distributed Switch
- Configure Physical Network Adapters on a vSphere Distributed Switch
- Migrate VMkernel Adapters to a vSphere Distributed Switch
- Create a VMkernel Adapter on a vSphere Distributed Switch
- Migrate Virtual Machine Networking to the vSphere Distributed Switch
- Use a Host as a Template to Create a Uniform Networking Configuration on a vSphere Distributed Switch
- Remove Hosts from a vSphere Distributed Switch
- Managing Networking on Host Proxy Switches
- Distributed Port Groups
- Working with Distributed Ports
- Configuring Virtual Machine Networking on a vSphere Distributed Switch
- Topology Diagrams of a vSphere Distributed Switch in the vSphere Web Client
- Setting Up VMkernel Networking
- VMkernel Networking Layer
- View Information About VMkernel Adapters on a Host
- Create a VMkernel Adapter on a vSphere Standard Switch
- Create a VMkernel Adapter on a Host Associated with a vSphere Distributed Switch
- Edit a VMkernel Adapter Configuration
- Overriding the Default Gateway of a VMkernel Adapter
- Configure the VMkernel Adapter Gateway by Using esxcli Commands
- View TCP/IP Stack Configuration on a Host
- Change the Configuration of a TCP/IP Stack on a Host
- Create a Custom TCP/IP Stack
- Remove a VMkernel Adapter
- LACP Support on a vSphere Distributed Switch
- Backing Up and Restoring Networking Configurations
- Rollback and Recovery of the Management Network
- Networking Policies
- Applying Networking Policies on a vSphere Standard or Distributed Switch
- Configure Overriding Networking Policies on Port Level
- Teaming and Failover Policy
- VLAN Policy
- Security Policy
- Traffic Shaping Policy
- Resource Allocation Policy
- Monitoring Policy
- Traffic Filtering and Marking Policy
- Traffic Filtering and Marking on a Distributed Port Group or Uplink Port Group
- Enable Traffic Filtering and Marking on a Distributed Port Group or Uplink Port Group
- Mark Traffic on a Distributed Port Group or Uplink Port Group
- Filter Traffic on a Distributed Port Group or Uplink Port Group
- Working with Network Traffic Rules on a Distributed Port Group or Uplink Port Group
- Disable Traffic Filtering and Marking on a Distributed Port Group or Uplink Port Group
- Traffic Filtering and Marking on a Distributed Port or Uplink Port
- Enable Traffic Filtering and Marking on a Distributed Port or Uplink Port
- Mark Traffic on a Distributed Port or Uplink Port
- Filter Traffic on a Distributed Port or Uplink Port
- Working with Network Traffic Rules on a Distributed Port or Uplink Port
- Disable Traffic Filtering and Marking on a Distributed Port or Uplink Port
- Qualifying Traffic for Filtering and Marking
- Manage Policies for Multiple Port Groups on a vSphere Distributed Switch
- Port Blocking Policies
- Isolating Network Traffic by Using VLANs
- Managing Network Resources
- DirectPath I/O
- Single Root I/O Virtualization (SR-IOV)
- SR-IOV Support
- SR-IOV Component Architecture and Interaction
- vSphere and Virtual Function Interaction
- DirectPath I/O vs SR-IOV
- Configure a Virtual Machine to Use SR-IOV
- Networking Options for the Traffic Related to an SR-IOV Enabled Virtual Machine
- Using an SR-IOV Physical Adapter to Handle Virtual Machine Traffic
- Enabling SR-IOV by Using Host Profiles or an ESXCLI Command
- Virtual Machine That Uses an SR-IOV Virtual Function Fails to Power On Because the Host Is Out of Interrupt Vectors
- Remote Direct Memory Access for Virtual Machines
- Jumbo Frames
- TCP Segmentation Offload
- Enable or Disable Software TSO in the VMkernel
- Determine Whether TSO Is Supported on the Physical Network Adapters on an ESXi Host
- Enable or Disable TSO on an ESXi Host
- Determine Whether TSO Is Enabled on an ESXi Host
- Enable or Disable TSO on a Linux Virtual Machine
- Enable or Disable TSO on a Windows Virtual Machine
- Large Receive Offload
- Enable Hardware LRO for All VMXNET3 Adapters on an ESXi Host
- Enable or Disable Software LRO for All VMXNET3 Adapters on an ESXi Host
- Determine Whether LRO Is Enabled for VMXNET3 Adapters on an ESXi Host
- Change the Size of the LRO Buffer for VMXNET3 Adapters
- Enable or Disable LRO for All VMkernel Adapters on an ESXi Host
- Change the Size of the LRO Buffer for VMkernel Adapters
- Enable or Disable LRO on a VMXNET3 Adapter on a Linux Virtual Machine
- Enable or Disable LRO on a VMXNET3 Adapter on a Windows Virtual Machine
- Enable LRO Globally on a Windows Virtual Machine
- NetQueue and Networking Performance
- vSphere Network I/O Control
- About vSphere Network I/O Control Version 3
- Enable Network I/O Control on a vSphere Distributed Switch
- Bandwidth Allocation for System Traffic
- Bandwidth Allocation for Virtual Machine Traffic
- About Allocating Bandwidth for Virtual Machines
- Bandwidth Allocation Parameters for Virtual Machine Traffic
- Admission Control for Virtual Machine Bandwidth
- Create a Network Resource Pool
- Add a Distributed Port Group to a Network Resource Pool
- Configure Bandwidth Allocation for a Virtual Machine
- Configure Bandwidth Allocation on Multiple Virtual Machines
- Change the Quota of a Network Resource Pool
- Remove a Distributed Port Group from a Network Resource Pool
- Delete a Network Resource Pool
- Move a Physical Adapter Out of the Scope of Network I/O Control
- MAC Address Management
- Configuring vSphere for IPv6
- Monitoring Network Connection and Traffic
- Capture Network Packets by Using the PacketCapture Utility
- Capturing and Tracing Network Packets by Using the pktcap-uw Utility
- pktcap-uw Command Syntax for Capturing Packets
- pktcap-uw Command Syntax for Tracing Packets
- pktcap-uw Options for Output Control
- pktcap-uw Options for Filtering Packets
- Capturing Packets by Using the pktcap-uw Utility
- Trace Packets by Using the pktcap-uw Utility
- Configure the NetFlow Settings of a vSphere Distributed Switch
- Working With Port Mirroring
- vSphere Distributed Switch Health Check
- Switch Discovery Protocol
- Configuring Protocol Profiles for Virtual Machine Networking
- Multicast Filtering
- Stateless Network Deployment
- Networking Best Practices
- Troubleshooting Networking
- Guidelines for Troubleshooting
- Troubleshooting MAC Address Allocation
- Unable to Remove a Host from a vSphere Distributed Switch
- Hosts on a vSphere Distributed Switch Lose Connectivity to vCenter Server
- Hosts on vSphere Distributed Switch 5.0 and Earlier Lose Connectivity to vCenter Server
- Alarm for Loss of Network Redundancy on a Host
- Virtual Machines Lose Connectivity After Changing the Uplink Failover Order of a Distributed Port Group
- Unable to Add a Physical Adapter to a vSphere Distributed Switch
- Troubleshooting SR-IOV Enabled Workloads
- A Virtual Machine that Runs a VPN Client Causes Denial of Service for Virtual Machines on the Host or Across a vSphere HA Cluster
- Low Throughput for UDP Workloads on Windows Virtual Machines
- Virtual Machines on the Same Distributed Port Group and on Different Hosts Cannot Communicate with Each Other
- Attempt to Power On a Migrated vApp Fails Because the Associated Protocol Profile Is Missing
- Networking Configuration Operation Is Rolled Back and a Host Is Disconnected from vCenter Server
- The total number of interrupt vectors on each ESXi host can scale up to 4096 in the case of 32 CPUs. When the host boots, devices on the host such as storage controllers, physical network adapters, and USB controllers consume a subset of the 4096 vectors. If these devices require more than 1024 vectors, the maximum number of potentially supported VFs is reduced.
- The number of VFs that is supported on an Intel NIC might be different from the number that is supported on an Emulex NIC. See the technical documentation from the NIC vendor.
- If you have Intel and Emulex NICs present with SR-IOV enabled, the number of VFs available for the Intel NICs depends on how many VFs are configured for the Emulex NIC, and the reverse. You can use the following formula to estimate the maximum number of VFs for use if all 3072 interrupt vectors are available for passthrough:

  3X + 2Y < 3072

  where X is the number of Intel VFs, and Y is the number of Emulex VFs; a worked example follows this list. This number might be smaller if other types of devices on the host use more than 1024 interrupt vectors from the total of 4096 vectors on the host.
- vSphere SR-IOV supports up to 1024 VFs in total on supported Intel and Emulex NICs.
- vSphere SR-IOV supports up to 64 VFs on a single supported Intel or Emulex NIC.
- If a supported Intel NIC loses connection, all VFs from the physical NIC stop communication completely, including that between VFs.
- If a supported Emulex NIC loses connection, all VFs stop communication with the external environment, but communication between VFs still works.
- VF drivers offer many different features, such as IPv6 support, TSO, LRO, and checksum offload. See the technical documentation of the NIC vendor for more details.
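As an illustration of the formula above (the numbers are hypothetical): if the Emulex NICs on a host are configured with Y = 64 VFs in total, then 3X + 2(64) < 3072 gives X < 981.3, so the estimate allows at most 981 Intel VFs, still subject to the per-NIC limit of 64 VFs and the per-host limit of 1024 VFs.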
DirectPath I/O vs SR-IOV
SR-IOV offers performance benefits and tradeoffs similar to those of DirectPath I/O. DirectPath I/O and SR-IOV have similar functionality, but you use them to accomplish different goals.

SR-IOV is beneficial in workloads with very high packet rates or very low latency requirements. Like DirectPath I/O, SR-IOV is not compatible with certain core virtualization features, such as vMotion. SR-IOV does, however, allow a single physical device to be shared among multiple guests.
With DirectPath I/O you can map only one physical function to one virtual machine. SR-IOV lets you
share a single physical device, allowing multiple virtual machines to connect directly to the physical
function.
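To check how this looks on a host, you can list the SR-IOV enabled physical adapters and the VFs that each one exposes from the ESXi Shell. The adapter name vmnic4 is a placeholder for an SR-IOV capable NIC in your environment:

  esxcli network sriovnic list
  esxcli network sriovnic vf list -n vmnic4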
Configure a Virtual Machine to Use SR-IOV
To use the capabilities of SR-IOV, you must enable the SR-IOV virtual functions on the host and connect
a virtual machine to the functions.
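As a minimal host-side sketch, using the ESXCLI approach covered later in Enabling SR-IOV by Using Host Profiles or an ESXCLI Command, you set the max_vfs parameter of the NIC driver module and reboot the host. The ixgbe module name and the VF counts here are examples only; use the driver and values that match your NIC:

  esxcli system module parameters set -m ixgbe -p "max_vfs=8,8"

After the reboot, verify the VFs with esxcli network sriovnic vf list -n <vmnic>, and then connect the virtual machine to a VF by adding an SR-IOV passthrough network adapter in the vSphere Web Client.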