Table Of Contents
- vSphere Networking
- Contents
- About vSphere Networking
- Updated Information
- Introduction to Networking
- Setting Up Networking with vSphere Standard Switches
- Setting Up Networking with vSphere Distributed Switches
- vSphere Distributed Switch Architecture
- Create a vSphere Distributed Switch
- Upgrade a vSphere Distributed Switch to a Later Version
- Edit General and Advanced vSphere Distributed Switch Settings
- Managing Networking on Multiple Hosts on a vSphere Distributed Switch
- Tasks for Managing Host Networking on a vSphere Distributed Switch
- Add Hosts to a vSphere Distributed Switch
- Configure Physical Network Adapters on a vSphere Distributed Switch
- Migrate VMkernel Adapters to a vSphere Distributed Switch
- Create a VMkernel Adapter on a vSphere Distributed Switch
- Migrate Virtual Machine Networking to the vSphere Distributed Switch
- Use a Host as a Template to Create a Uniform Networking Configuration on a vSphere Distributed Switch
- Remove Hosts from a vSphere Distributed Switch
- Managing Networking on Host Proxy Switches
- Distributed Port Groups
- Working with Distributed Ports
- Configuring Virtual Machine Networking on a vSphere Distributed Switch
- Topology Diagrams of a vSphere Distributed Switch in the vSphere Web Client
- Setting Up VMkernel Networking
- VMkernel Networking Layer
- View Information About VMkernel Adapters on a Host
- Create a VMkernel Adapter on a vSphere Standard Switch
- Create a VMkernel Adapter on a Host Associated with a vSphere Distributed Switch
- Edit a VMkernel Adapter Configuration
- Overriding the Default Gateway of a VMkernel Adapter
- Configure the VMkernel Adapter Gateway by Using ESXCLI
- View TCP/IP Stack Configuration on a Host
- Change the Configuration of a TCP/IP Stack on a Host
- Create a Custom TCP/IP Stack
- Remove a VMkernel Adapter
- LACP Support on a vSphere Distributed Switch
- Convert to the Enhanced LACP Support on a vSphere Distributed Switch
- LACP Teaming and Failover Configuration for Distributed Port Groups
- Configure a Link Aggregation Group to Handle the Traffic for Distributed Port Groups
- Edit a Link Aggregation Group
- Enable LACP 5.1 Support on an Uplink Port Group
- Limitations of the LACP Support on a vSphere Distributed Switch
- Backing Up and Restoring Networking Configurations
- Rollback and Recovery of the Management Network
- Networking Policies
- Applying Networking Policies on a vSphere Standard or Distributed Switch
- Configure Overriding Networking Policies on Port Level
- Teaming and Failover Policy
- VLAN Policy
- Security Policy
- Traffic Shaping Policy
- Resource Allocation Policy
- Monitoring Policy
- Traffic Filtering and Marking Policy
- Traffic Filtering and Marking on a Distributed Port Group or Uplink Port Group
- Enable Traffic Filtering and Marking on a Distributed Port Group or Uplink Port Group
- Mark Traffic on a Distributed Port Group or Uplink Port Group
- Filter Traffic on a Distributed Port Group or Uplink Port Group
- Working with Network Traffic Rules on a Distributed Port Group or Uplink Port Group
- Disable Traffic Filtering and Marking on a Distributed Port Group or Uplink Port Group
- Traffic Filtering and Marking on a Distributed Port or Uplink Port
- Enable Traffic Filtering and Marking on a Distributed Port or Uplink Port
- Mark Traffic on a Distributed Port or Uplink Port
- Filter Traffic on a Distributed Port or Uplink Port
- Working with Network Traffic Rules on a Distributed Port or Uplink Port
- Disable Traffic Filtering and Marking on a Distributed Port or Uplink Port
- Qualifying Traffic for Filtering and Marking
- Manage Policies for Multiple Port Groups on a vSphere Distributed Switch
- Port Blocking Policies
- Isolating Network Traffic by Using VLANs
- Managing Network Resources
- DirectPath I/O
- Single Root I/O Virtualization (SR-IOV)
- SR-IOV Support
- SR-IOV Component Architecture and Interaction
- vSphere and Virtual Function Interaction
- DirectPath I/O vs SR-IOV
- Configure a Virtual Machine to Use SR-IOV
- Networking Options for the Traffic Related to an SR-IOV Enabled Virtual Machine
- Using an SR-IOV Physical Adapter to Handle Virtual Machine Traffic
- Enabling SR-IOV by Using Host Profiles or an ESXCLI Command
- Virtual Machine That Uses an SR-IOV Virtual Function Fails to Power On Because the Host Is Out of Interrupt Vectors
- Remote Direct Memory Access for Virtual Machines
- Jumbo Frames
- TCP Segmentation Offload
- Enable or Disable Software TSO in the VMkernel
- Determine Whether TSO Is Supported on the Physical Network Adapters on an ESXi Host
- Enable or Disable TSO on an ESXi Host
- Determine Whether TSO Is Enabled on an ESXi Host
- Enable or Disable TSO on a Linux Virtual Machine
- Enable or Disable TSO on a Windows Virtual Machine
- Large Receive Offload
- Enable Hardware LRO for All VMXNET3 Adapters on an ESXi Host
- Enable or Disable Software LRO for All VMXNET3 Adapters on an ESXi Host
- Determine Whether LRO Is Enabled for VMXNET3 Adapters on an ESXi Host
- Change the Size of the LRO Buffer for VMXNET 3 Adapters
- Enable or Disable LRO for All VMkernel Adapters on an ESXi Host
- Change the Size of the LRO Buffer for VMkernel Adapters
- Enable or Disable LRO on a VMXNET3 Adapter on a Linux Virtual Machine
- Enable or Disable LRO on a VMXNET3 Adapter on a Windows Virtual Machine
- Enable LRO Globally on a Windows Virtual Machine
- NetQueue and Networking Performance
- vSphere Network I/O Control
- About vSphere Network I/O Control Version 3
- Upgrade Network I/O Control to Version 3 on a vSphere Distributed Switch
- Enable Network I/O Control on a vSphere Distributed Switch
- Bandwidth Allocation for System Traffic
- Bandwidth Allocation for Virtual Machine Traffic
- About Allocating Bandwidth for Virtual Machines
- Bandwidth Allocation Parameters for Virtual Machine Traffic
- Admission Control for Virtual Machine Bandwidth
- Create a Network Resource Pool
- Add a Distributed Port Group to a Network Resource Pool
- Configure Bandwidth Allocation for a Virtual Machine
- Configure Bandwidth Allocation on Multiple Virtual Machines
- Change the Quota of a Network Resource Pool
- Remove a Distributed Port Group from a Network Resource Pool
- Delete a Network Resource Pool
- Move a Physical Adapter Out of the Scope of Network I/O Control
- Working with Network I/O Control Version 2
- MAC Address Management
- Configuring vSphere for IPv6
- Monitoring Network Connection and Traffic
- Capturing and Tracing Network Packets by Using the pktcap-uw Utility
- pktcap-uw Command Syntax for Capturing Packets
- pktcap-uw Command Syntax for Tracing Packets
- pktcap-uw Options for Output Control
- pktcap-uw Options for Filtering Packets
- Capturing Packets by Using the pktcap-uw Utility
- Trace Packets by Using the pktcap-uw Utility
- Configure the NetFlow Settings of a vSphere Distributed Switch
- Working With Port Mirroring
- vSphere Distributed Switch Health Check
- Switch Discovery Protocol
- Configuring Protocol Profiles for Virtual Machine Networking
- Multicast Filtering
- Stateless Network Deployment
- Networking Best Practices
Route Based on Physical NIC Load
Route Based on Physical NIC Load is based on Route Based on Originating Virtual Port, where the virtual
switch checks the actual load of the uplinks and takes steps to reduce it on overloaded uplinks. This
method is available only on a vSphere Distributed Switch.
The distributed switch calculates uplinks for virtual machines by taking their port ID and the number of
uplinks in the NIC team. The distributed switch tests the uplinks every 30 seconds, and if their load
exceeds 75 percent of usage, the port ID of the virtual machine with the highest I/O is moved to a different
uplink.
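The selection and rebalancing logic can be pictured with a short, purely illustrative Python sketch. It is a conceptual model of the behavior described above, not VMkernel code; the modulo placement and the helper names are assumptions made only for the example.

```python
# Conceptual illustration of Route Based on Physical NIC Load (not VMkernel code).
# Assumptions for illustration: initial placement hashes the port ID across the
# uplinks, and uplink load is expressed as a utilization percentage.

CHECK_INTERVAL_S = 30      # the distributed switch tests the uplinks every 30 seconds
LOAD_THRESHOLD_PCT = 75    # rebalance when an uplink exceeds 75 percent of usage

def initial_uplink(port_id: int, uplink_count: int) -> int:
    """Initial placement derived from the port ID and the number of uplinks in the team."""
    return port_id % uplink_count

def rebalance(uplink_load_pct: dict[int, float],
              vm_io_by_uplink: dict[int, dict[int, float]]) -> dict[int, int]:
    """Move the port with the highest I/O off every overloaded uplink.

    uplink_load_pct  maps uplink index -> current utilization (percent).
    vm_io_by_uplink  maps uplink index -> {port_id: I/O load of that port}.
    Returns port_id -> new uplink index for the ports that move.
    """
    moves = {}
    for uplink, load in uplink_load_pct.items():
        if load <= LOAD_THRESHOLD_PCT or not vm_io_by_uplink.get(uplink):
            continue
        # Pick the busiest port on the overloaded uplink ...
        busiest_port = max(vm_io_by_uplink[uplink], key=vm_io_by_uplink[uplink].get)
        # ... and move it to the least loaded uplink.
        target = min(uplink_load_pct, key=uplink_load_pct.get)
        if target != uplink:
            moves[busiest_port] = target
    return moves

# Example: uplink 0 is at 82 percent, so its busiest port (port 7) moves to uplink 1.
print(rebalance({0: 82.0, 1: 30.0}, {0: {5: 120.0, 7: 400.0}, 1: {9: 80.0}}))
```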
Table 8‑5. Considerations on Using Route Based on Physical NIC Load

Advantages
- Low resource consumption, because the distributed switch calculates uplinks for virtual machines only once and checking the load of the uplinks has minimal impact.
- The distributed switch is aware of the load of the uplinks and takes care to reduce it if needed.
- No changes on the physical switch are required.

Disadvantages
- The bandwidth that is available to virtual machines is limited to the uplinks that are connected to the distributed switch.
Use Explicit Failover Order
No actual load balancing is available with this policy. The virtual switch always uses the uplink that stands
first in the list of Active adapters from the failover order and that passes failover detection criteria. If no
uplinks in the Active list are available, the virtual switch uses the uplinks from the Standby list.
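Conceptually, the selection reduces to walking the configured order, as in this illustrative Python sketch (the function and adapter names are assumptions, not an ESXi interface):

```python
# Conceptual sketch of Use Explicit Failover Order (assumed data structures).
def select_uplink(active: list[str], standby: list[str], is_healthy) -> str | None:
    """Return the first Active uplink that passes failover detection;
    fall back to the Standby list only if no Active uplink is usable."""
    for uplink in active + standby:
        if is_healthy(uplink):
            return uplink
    return None  # no usable uplink

# Example: vmnic0 fails link detection, so traffic uses vmnic1.
print(select_uplink(["vmnic0", "vmnic1"], ["vmnic2"], lambda u: u != "vmnic0"))
```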
Configure NIC Teaming, Failover, and Load Balancing on a
vSphere Standard Switch or Standard Port Group
Include two or more physical NICs in a team to increase the network capacity of a vSphere Standard
Switch or standard port group. Configure failover order to determine how network traffic is rerouted in
case of adapter failure. Select a load balancing algorithm to determine how the standard switch
distributes the traffic between the physical NICs in a team.
Configure NIC teaming, failover, and load balancing depending on the network configuration on the
physical switch and the topology of the standard switch. See Teaming and Failover Policy and Load
Balancing Algorithms Available for Virtual Switches for more information.
If you configure the teaming and failover policy on a standard switch, the policy is propagated to all port
groups in the switch. If you configure the policy on a standard port group, it overrides the policy inherited
from the switch.
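The same teaming and failover settings can also be applied programmatically instead of through the client. The following minimal sketch assumes the pyVmomi Python SDK; the vCenter address, credentials, host name, switch name, and vmnic names are placeholders, and the snippet sets Route Based on Originating Virtual Port with an explicit active/standby order as one example configuration.

```python
# Hedged sketch: set NIC teaming and failover on a standard switch with pyVmomi.
# All names and credentials below are placeholders for illustration only.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)

# Locate the ESXi host (placeholder name) through the inventory search index.
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(dnsName="esxi01.example.com", vmSearch=False)
net_sys = host.configManager.networkSystem

# Find the standard switch and reuse its current specification.
vswitch = next(v for v in net_sys.networkInfo.vswitch if v.name == "vSwitch0")
spec = vswitch.spec

# Teaming policy: route based on originating virtual port, with an explicit
# active/standby failover order and notification of the physical switch.
teaming = vim.host.NetworkPolicy.NicTeamingPolicy()
teaming.policy = "loadbalance_srcid"
teaming.notifySwitches = True
teaming.rollingOrder = False
teaming.nicOrder = vim.host.NetworkPolicy.NicOrderPolicy(
    activeNic=["vmnic0", "vmnic1"], standbyNic=["vmnic2"])
spec.policy.nicTeaming = teaming

net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)
Disconnect(si)
```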
Procedure
1 In the vSphere Web Client, navigate to the host.
2 On the Configure tab, expand Networking and select Virtual switches.