a Hyper-V host can be tunneled using a single PA on that Hyper-V host. CAs must be
unique across all VMs on the same virtual network, but they do not need to be unique
across virtual networks with different Virtual Subnet IDs.
The VM generates a packet with the sender and recipient addresses in the CA space. The Hyper-V
host then encapsulates the packet with the sender and recipient addresses in the PA space.
PA addresses are determined using the virtualization table. The receiving Hyper-V host retrieves
the packet, identifies the recipient, and forwards the original packet with the CA addresses to the
desired VM.
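The CA-to-PA mapping described above is held in each host's network virtualization lookup table. The following is a minimal sketch of populating it with the in-box Windows Server NetWNV PowerShell cmdlets; the interface index, addresses, MAC address, VM name, and Virtual Subnet ID are hypothetical placeholders.

    # Hypothetical example: map a VM's Customer Address (CA) to the
    # Provider Address (PA) of the Hyper-V host that runs it.
    # Register the Provider Address on this host's physical NIC.
    New-NetVirtualizationProviderAddress -InterfaceIndex 12 `
        -ProviderAddress "192.168.10.5" -PrefixLength 24
    # Create a lookup record: VM "VM01" with CA 10.0.0.4 on Virtual
    # Subnet 5001 is reached through PA 192.168.10.5 via NVGRE
    # encapsulation.
    New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.4" `
        -ProviderAddress "192.168.10.5" -VirtualSubnetID 5001 `
        -MACAddress "00155D000001" -Rule "TranslationMethodEncap" `
        -VMName "VM01"

A matching lookup record must exist on every Hyper-V host in the virtual network so that each host can resolve its peers' CAs to PAs.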
NVGRE can be implemented across an existing physical IP network without requiring changes
to physical network switch architecture. Since NVGRE tunnels terminate at each Hyper-V host,
the hosts handle all encapsulation and de-encapsulation of the network traffic. Firewalls that
block GRE tunnels between sites must be configured to forward GRE (IP protocol 47) tunnel
traffic.
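On the Hyper-V hosts themselves, an allow rule for IP protocol 47 can be added with the standard NetSecurity PowerShell cmdlet, as in the sketch below; the rule name is a hypothetical placeholder, and equivalent rules are needed on any intermediate firewall.

    # Hypothetical example: allow inbound GRE (IP protocol 47) traffic
    # through the Windows Firewall on a Hyper-V host.
    New-NetFirewallRule -DisplayName "Allow NVGRE (GRE, protocol 47)" `
        -Direction Inbound -Protocol 47 -Action Allow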
Figure 1: NVGRE Packet Structure
3.8.1 Configuring NVGRE Using the User Interface
To leverage NVGRE for virtualizing heavy network I/O workloads, the Mellanox ConnectX®-3 Pro
NIC provides hardware support for GRE offload, which is enabled by default.
To enable or disable NVGRE offloading (a PowerShell alternative is shown after these steps):
Step 1. Open the Device Manager.
Step 2. Go to the Network adapters.
Step 3. Right-click the Mellanox ConnectX®-3 Pro Ethernet Adapter and select Properties.
Step 4. Go to the Advanced tab.
Step 5. Choose the 'Encapsulate Task Offload' option.
Step 6. Set one of the following values:
• Enabled - GRE offloading is enabled (the default).
• Disabled - The Hyper-V host can still transfer NVGRE traffic, but TCP and inner IP checksums
are calculated in software, which significantly reduces performance.
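As an alternative to the Device Manager steps above, the same offload can be toggled from an elevated PowerShell prompt with the in-box NetAdapter cmdlets; the adapter name below is a hypothetical placeholder.

    # Hypothetical example: query and toggle encapsulated (NVGRE) packet
    # task offload on an adapter named "Ethernet 2".
    Get-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 2"
    # Enable GRE offloading (the default state).
    Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 2"
    # Disable GRE offloading; checksums fall back to software.
    Disable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 2"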