4.6 Priority Flow Control
Priority Flow Control (PFC), defined in IEEE 802.1Qbb, is a standards-compliant backpressure mechanism implemented in the NetXtreme-E controllers.
The goal of PFC is to backpressure a congested priority's traffic flow without affecting the traffic flows of uncongested priorities.
Because PFC can give traffic classes differential treatment, it can be used in networks carrying real-time or time-sensitive traffic.
For example, with PFC, lower-priority Internet traffic can be backpressured while higher-priority traffic such as VoIP and streaming video continues to flow through the link without flow control.
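This guide does not include a host-side example, but as a minimal sketch (assuming a Linux host), an application can tag its traffic with a socket priority that the host's egress QoS configuration maps to an 802.1p priority; PFC then pauses only that priority when it is congested. The priority value, address, and port below are illustrative assumptions, not values from this guide.

```python
# Minimal sketch, not from this guide: tag a socket's traffic with a Linux
# socket priority so the host's egress QoS configuration can map it to an
# 802.1p priority / traffic class that PFC flow-controls independently.
import socket

SO_PRIORITY = getattr(socket, "SO_PRIORITY", 12)  # 12 is the Linux value if the constant is missing

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Priorities 0-6 are settable without CAP_NET_ADMIN; the mapping from this
# value to an 802.1p priority comes from the VLAN egress qos-map or qdisc setup.
sock.setsockopt(socket.SOL_SOCKET, SO_PRIORITY, 5)
sock.sendto(b"time-sensitive payload", ("192.0.2.10", 5004))
sock.close()
```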
4.7 Virtualization Offload
4.7.1 Multiqueue Support
4.7.2 KVM/Xen Multiqueue
KVM multiqueue delivers received frames to different queues of the host stack by classifying each incoming frame on the packet's destination MAC address and/or IEEE 802.1Q VLAN tag. This classification, combined with the ability
to DMA frames directly into a virtual machine's memory, allows virtual machines to scale across multiple processors.
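As a conceptual illustration only (not driver code from this guide), the following Python sketch models the classification step described above: a lookup keyed on destination MAC address and optional VLAN ID selects the receive queue dedicated to a virtual machine. Every name, MAC address, and queue number is hypothetical.

```python
# Conceptual sketch only: steering received frames to a per-VM receive queue
# by destination MAC address and optional VLAN ID.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass(frozen=True)
class FlowKey:
    dst_mac: str            # e.g. "52:54:00:12:34:56"
    vlan_id: Optional[int]  # None for untagged frames

class QueueSteering:
    def __init__(self) -> None:
        self._table: Dict[FlowKey, int] = {}

    def register_vm(self, dst_mac: str, vlan_id: Optional[int], queue: int) -> None:
        """Bind a virtual machine's MAC (and VLAN) to a dedicated receive queue."""
        self._table[FlowKey(dst_mac.lower(), vlan_id)] = queue

    def classify(self, dst_mac: str, vlan_id: Optional[int], default_queue: int = 0) -> int:
        """Return the queue a frame with this destination MAC/VLAN lands on."""
        return self._table.get(FlowKey(dst_mac.lower(), vlan_id), default_queue)

steering = QueueSteering()
steering.register_vm("52:54:00:12:34:56", vlan_id=100, queue=3)
print(steering.classify("52:54:00:12:34:56", vlan_id=100))   # -> 3 (the VM's queue)
print(steering.classify("52:54:00:aa:bb:cc", vlan_id=None))  # -> 0 (default queue)
```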
4.7.3 Virtual Machine Queue
NDIS Virtual Machine Queue (VMQ) is a feature supported by Microsoft to improve Hyper-V network performance. VMQ classifies packets based on the destination MAC address and returns received
packets on different completion queues. This classification, combined with the ability to DMA packets directly into a
virtual machine's memory, allows virtual machines to scale across multiple processors.
See Driver Advanced Properties for information on VMQ.
4.7.3.1 VMware NetQueue
VMware NetQueue is a feature similar to Microsoft's NDIS VMQ. NetQueue classifies packets based on the destination MAC address and VLAN and returns received packets on different NetQueues. This
classification, combined with the ability to DMA packets directly into a virtual machine's memory, allows virtual machines to scale across multiple processors.
4.7.3.2 Xen Multiqueue
Xen multiqueue enables network device drivers to dedicate each Rx queue to a specific guest operating system. To do this,
the network device driver must be able to allocate physical memory from the set of memory pages assigned to that guest operating system.
4.7.4 Tunneling Offload
Stateless Transport Tunneling (STT) is a tunnel encapsulation that enables overlay networks in virtualized data centers.
STT uses IP-based encapsulation with a TCP-like header. There is no TCP connection state associated with the tunnel,
which is why STT is stateless. Open vSwitch (OVS) uses STT.
An STT frame contains the STT frame header and its payload; the payload is an untagged Ethernet frame.
The STT frame header and the encapsulated payload are treated as the payload of the TCP-like header. An IP header (IPv4
or IPv6) and an Ethernet header are created for each STT segment that is transmitted.
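This guide does not show the encapsulation programmatically, but the following Python sketch illustrates the nesting described above: the inner (untagged) Ethernet frame and the STT frame header together form the payload behind a TCP-like header, and an outer IP and Ethernet header are then prepended to each transmitted segment. The STT frame header is kept as an opaque placeholder because this guide does not define its field layout; the port number 7471 (associated with STT in its Internet-Draft) and all sizes are illustrative assumptions.

```python
# Illustrative sketch only: the header nesting of one STT segment.
import struct

def tcp_like_header(src_port: int, dst_port: int, seq: int, ack: int) -> bytes:
    """20-byte header with the same layout as TCP. STT keeps no connection
    state and reuses the sequence/acknowledgment fields for segmentation
    metadata (shown here as plain integers)."""
    offset_flags = (5 << 12) | 0x010          # data offset = 5 words, ACK bit set
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, 0xFFFF, 0, 0)

inner_ethernet_frame = bytes(64)              # the untagged guest frame (payload)
stt_frame_header = bytes(18)                  # opaque placeholder for the STT header
tcp_payload = stt_frame_header + inner_ethernet_frame

# 7471 is the TCP port associated with STT in its Internet-Draft (illustrative here).
segment = tcp_like_header(49152, 7471, seq=0, ack=0) + tcp_payload
# An outer IPv4/IPv6 header and Ethernet header are then created for this segment.
print(len(segment), "bytes before the outer IP and Ethernet headers are added")
```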