Table Of Contents
- vSphere Networking
- Contents
- About vSphere Networking
- Updated Information
- Introduction to Networking
- Setting Up Networking with vSphere Standard Switches
- Setting Up Networking with vSphere Distributed Switches
- vSphere Distributed Switch Architecture
- Create a vSphere Distributed Switch
- Upgrade a vSphere Distributed Switch to a Later Version
- Edit General and Advanced vSphere Distributed Switch Settings
- Managing Networking on Multiple Hosts on a vSphere Distributed Switch
- Tasks for Managing Host Networking on a vSphere Distributed Switch
- Add Hosts to a vSphere Distributed Switch
- Configure Physical Network Adapters on a vSphere Distributed Switch
- Migrate VMkernel Adapters to a vSphere Distributed Switch
- Create a VMkernel Adapter on a vSphere Distributed Switch
- Migrate Virtual Machine Networking to the vSphere Distributed Switch
- Use a Host as a Template to Create a Uniform Networking Configuration on a vSphere Distributed Switch
- Remove Hosts from a vSphere Distributed Switch
- Managing Networking on Host Proxy Switches
- Distributed Port Groups
- Working with Distributed Ports
- Configuring Virtual Machine Networking on a vSphere Distributed Switch
- Topology Diagrams of a vSphere Distributed Switch in the vSphere Web Client
- Setting Up VMkernel Networking
- VMkernel Networking Layer
- View Information About VMkernel Adapters on a Host
- Create a VMkernel Adapter on a vSphere Standard Switch
- Create a VMkernel Adapter on a Host Associated with a vSphere Distributed Switch
- Edit a VMkernel Adapter Configuration
- Overriding the Default Gateway of a VMkernel Adapter
- Configure the VMkernel Adapter Gateway by Using ESXCLI
- View TCP/IP Stack Configuration on a Host
- Change the Configuration of a TCP/IP Stack on a Host
- Create a Custom TCP/IP Stack
- Remove a VMkernel Adapter
- LACP Support on a vSphere Distributed Switch
- Convert to the Enhanced LACP Support on a vSphere Distributed Switch
- LACP Teaming and Failover Configuration for Distributed Port Groups
- Configure a Link Aggregation Group to Handle the Traffic for Distributed Port Groups
- Edit a Link Aggregation Group
- Enable LACP 5.1 Support on an Uplink Port Group
- Limitations of the LACP Support on a vSphere Distributed Switch
- Backing Up and Restoring Networking Configurations
- Rollback and Recovery of the Management Network
- Networking Policies
- Applying Networking Policies on a vSphere Standard or Distributed Switch
- Configure Overriding Networking Policies on Port Level
- Teaming and Failover Policy
- VLAN Policy
- Security Policy
- Traffic Shaping Policy
- Resource Allocation Policy
- Monitoring Policy
- Traffic Filtering and Marking Policy
- Traffic Filtering and Marking on a Distributed Port Group or Uplink Port Group
- Enable Traffic Filtering and Marking on a Distributed Port Group or Uplink Port Group
- Mark Traffic on a Distributed Port Group or Uplink Port Group
- Filter Traffic on a Distributed Port Group or Uplink Port Group
- Working with Network Traffic Rules on a Distributed Port Group or Uplink Port Group
- Disable Traffic Filtering and Marking on a Distributed Port Group or Uplink Port Group
- Traffic Filtering and Marking on a Distributed Port or Uplink Port
- Enable Traffic Filtering and Marking on a Distributed Port or Uplink Port
- Mark Traffic on a Distributed Port or Uplink Port
- Filter Traffic on a Distributed Port or Uplink Port
- Working with Network Traffic Rules on a Distributed Port or Uplink Port
- Disable Traffic Filtering and Marking on a Distributed Port or Uplink Port
- Qualifying Traffic for Filtering and Marking
- Manage Policies for Multiple Port Groups on a vSphere Distributed Switch
- Port Blocking Policies
- Isolating Network Traffic by Using VLANs
- Managing Network Resources
- DirectPath I/O
- Single Root I/O Virtualization (SR-IOV)
- SR-IOV Support
- SR-IOV Component Architecture and Interaction
- vSphere and Virtual Function Interaction
- DirectPath I/O vs SR-IOV
- Configure a Virtual Machine to Use SR-IOV
- Networking Options for the Traffic Related to an SR-IOV Enabled Virtual Machine
- Using an SR-IOV Physical Adapter to Handle Virtual Machine Traffic
- Enabling SR-IOV by Using Host Profiles or an ESXCLI Command
- Virtual Machine That Uses an SR-IOV Virtual Function Fails to Power On Because the Host Is Out of Interrupt Vectors
- Remote Direct Memory Access for Virtual Machines
- Jumbo Frames
- TCP Segmentation Offload
- Enable or Disable Software TSO in the VMkernel
- Determine Whether TSO Is Supported on the Physical Network Adapters on an ESXi Host
- Enable or Disable TSO on an ESXi Host
- Determine Whether TSO Is Enabled on an ESXi Host
- Enable or Disable TSO on a Linux Virtual Machine
- Enable or Disable TSO on a Windows Virtual Machine
- Large Receive Offload
- Enable Hardware LRO for All VMXNET3 Adapters on an ESXi Host
- Enable or Disable Software LRO for All VMXNET3 Adapters on an ESXi Host
- Determine Whether LRO Is Enabled for VMXNET3 Adapters on an ESXi Host
- Change the Size of the LRO Buffer for VMXNET3 Adapters
- Enable or Disable LRO for All VMkernel Adapters on an ESXi Host
- Change the Size of the LRO Buffer for VMkernel Adapters
- Enable or Disable LRO on a VMXNET3 Adapter on a Linux Virtual Machine
- Enable or Disable LRO on a VMXNET3 Adapter on a Windows Virtual Machine
- Enable LRO Globally on a Windows Virtual Machine
- NetQueue and Networking Performance
- vSphere Network I/O Control
- About vSphere Network I/O Control Version 3
- Upgrade Network I/O Control to Version 3 on a vSphere Distributed Switch
- Enable Network I/O Control on a vSphere Distributed Switch
- Bandwidth Allocation for System Traffic
- Bandwidth Allocation for Virtual Machine Traffic
- About Allocating Bandwidth for Virtual Machines
- Bandwidth Allocation Parameters for Virtual Machine Traffic
- Admission Control for Virtual Machine Bandwidth
- Create a Network Resource Pool
- Add a Distributed Port Group to a Network Resource Pool
- Configure Bandwidth Allocation for a Virtual Machine
- Configure Bandwidth Allocation on Multiple Virtual Machines
- Change the Quota of a Network Resource Pool
- Remove a Distributed Port Group from a Network Resource Pool
- Delete a Network Resource Pool
- Move a Physical Adapter Out of the Scope of Network I/O Control
- Working with Network I/O Control Version 2
- MAC Address Management
- Configuring vSphere for IPv6
- Monitoring Network Connection and Traffic
- Capturing and Tracing Network Packets by Using the pktcap-uw Utility
- pktcap-uw Command Syntax for Capturing Packets
- pktcap-uw Command Syntax for Tracing Packets
- pktcap-uw Options for Output Control
- pktcap-uw Options for Filtering Packets
- Capturing Packets by Using the pktcap-uw Utility
- Trace Packets by Using the pktcap-uw Utility
- Configure the NetFlow Settings of a vSphere Distributed Switch
- Working With Port Mirroring
- vSphere Distributed Switch Health Check
- Switch Discovery Protocol
- Configuring Protocol Profiles for Virtual Machine Networking
- Multicast Filtering
- Stateless Network Deployment
- Networking Best Practices
5 From the New device drop-down menu, select Network and click Add.
6 Expand the New Network section and connect the virtual machine to a distributed port group.
7 From the Adapter type drop-down menu, select PVRDMA.
8 Expand the Memory section, select Reserve all guest memory (All locked), and click OK.
9 Power on the virtual machine.
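For reference, the PVRDMA adapter and the full memory reservation from steps 5-8 correspond to entries in the virtual machine's .vmx configuration file. The following sketch assumes that the new device becomes ethernet1; the dvs.* values that tie the adapter to the distributed port group are generated by vCenter Server and are shown only as placeholders.

ethernet1.present = "TRUE"
ethernet1.virtualDev = "pvrdma"
ethernet1.dvs.switchId = "<generated by vCenter Server>"
ethernet1.dvs.portgroupId = "<key of the distributed port group>"
sched.mem.pin = "TRUE"

The sched.mem.pin entry corresponds to the Reserve all guest memory (All locked) setting selected in step 8.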
Network Requirements for RDMA over Converged Ethernet
RDMA over Converged Ethernet ensures low-latency, lightweight, and high-throughput RDMA
communication over an Ethernet network. RoCE requires a network that is configured for lossless traffic
at layer 2 alone, or at both layer 2 and layer 3.
RDMA over Converged Ethernet (RoCE) is a network protocol that uses RDMA to provide faster data
transfer for network-intensive applications. RoCE allows direct memory transfer between hosts without
involving the hosts' CPUs.
There are two versions of the RoCE protocol. RoCE v1 operates at the link network layer (layer 2). RoCE
v2 operates at the Internet network layer (layer 3). Both RoCE v1 and RoCE v2 require a lossless
network configuration. RoCE v1 requires a lossless layer 2 network, and RoCE v2 requires that both layer
2 and layer 3 are configured for lossless operation.
Lossless Layer 2 Network
To ensure a lossless layer 2 environment, you must be able to control the traffic flows. Flow control is
achieved by enabling a global pause across the network or by using the Priority Flow Control (PFC)
protocol defined by the Data Center Bridging (DCB) group. PFC is a layer 2 protocol that uses the class
of service field of the 802.1Q VLAN tag to set individual traffic priorities. It pauses the transfer of
packets toward a receiver in accordance with the individual class of service priorities. This way, a single
link carries both lossless RoCE traffic and other lossy, best-effort traffic. When traffic flows become
congested, important lossy traffic can be affected. To isolate the different flows from one another, use
RoCE in a PFC priority-enabled VLAN.
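As an illustration of where the PFC priority is carried on the wire, the following Python sketch packs and unpacks the 16-bit Tag Control Information (TCI) field of an 802.1Q VLAN tag. The 3-bit Priority Code Point (PCP) at the top of the field holds the class of service value that PFC acts on. The priority and VLAN values in the example are arbitrary illustrations, not vSphere defaults.

# Illustrative layout of the 802.1Q Tag Control Information (TCI) field:
# TCI = PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits)

def pack_tci(pcp: int, dei: int, vlan_id: int) -> int:
    """Build the 16-bit TCI from its three subfields."""
    assert 0 <= pcp <= 7 and dei in (0, 1) and 0 <= vlan_id <= 4095
    return (pcp << 13) | (dei << 12) | vlan_id

def unpack_tci(tci: int):
    """Split a 16-bit TCI back into (pcp, dei, vlan_id)."""
    return (tci >> 13) & 0x7, (tci >> 12) & 0x1, tci & 0xFFF

# Example: RoCE traffic placed in PFC priority 3 on VLAN 100.
tci = pack_tci(pcp=3, dei=0, vlan_id=100)
print(hex(tci))           # 0x6064
print(unpack_tci(tci))    # (3, 0, 100)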
Lossless Layer 3 Network
RoCE v2 requires that lossless data transfer is preserved at layer 3 routing devices. To enable the
transfer of layer 2 PFC lossless priorities across layer 3 routers, configure the router to map the received
priority setting of a packet to the corresponding Differentiated Services Code Point (DSCP) QoS setting
that operates at layer 3. The transferred RDMA packets are marked with a layer 3 DSCP, a layer 2
Priority Code Point (PCP), or both. Routers use either DSCP or PCP to extract priority information from
the packet. If PCP is used, the packet must be VLAN-tagged, and the router must copy the PCP bits of
the tag and forward them to the next network. If the packet is marked with DSCP, the router must keep
the DSCP bits unchanged.
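To make the priority mapping concrete, the following Python sketch shows one common convention for deriving a DSCP value from a layer 2 PCP priority: the Class Selector code points, where DSCP = PCP × 8. This convention is an assumption for illustration; the mapping that applies in a given network is whatever the router administrator configures.

# Illustrative PCP-to-DSCP mapping using the Class Selector convention.
# Real deployments configure this mapping explicitly on each router.

def pcp_to_dscp(pcp: int) -> int:
    """Map a 3-bit 802.1p priority to the Class Selector DSCP CS0-CS7."""
    if not 0 <= pcp <= 7:
        raise ValueError("PCP is a 3-bit value (0-7)")
    return pcp * 8

# Example: RoCE traffic in PFC priority 3 maps to DSCP 24 (CS3).
print(pcp_to_dscp(3))    # 24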