User Manual
Table of Contents
- Mellanox WinOF VPI User Manual
- Table of Contents
- List of Tables
- Document Revision History
- About this Manual
- 1 Introduction
- 2 Firmware Upgrade
- 3 Driver Features
- 3.1 Hyper-V with VMQ
- 3.2 Header Data Split
- 3.3 Receive Side Scaling (RSS)
- 3.4 Port Configuration
- 3.5 Load Balancing, Fail-Over (LBFO) and VLAN
- 3.6 Ports TX Arbitration
- 3.7 RDMA over Converged Ethernet (RoCE)
- 3.8 Network Virtualization using Generic Routing Encapsulation
- 3.9 Differentiated Services Code Point (DSCP)
- 4 Deploying Windows Server 2012 and Above with SMB Direct
- 5 Driver Configuration
- 6 Performance Tuning
- 7 OpenSM - Subnet Manager
- 8 InfiniBand Fabric
- 8.1 Network Direct Interface
- 8.2 part_man - Virtual IPoIB Port Creation Utility
- 8.3 InfiniBand Fabric Diagnostic Utilities
- 8.3.1 Utilities Usage
- 8.3.2 ibdiagnet
- 8.3.3 ibportstate
- 8.3.4 ibroute
- 8.3.5 ibdump
- 8.3.6 smpquery
- 8.3.7 perfquery
- 8.3.8 ibping
- 8.3.9 ibnetdiscover
- 8.3.10 ibtracert
- 8.3.11 sminfo
- 8.3.12 ibclearerrors
- 8.3.13 ibstat
- 8.3.14 vstat
- 8.3.15 osmtest
- 8.3.16 ibaddr
- 8.3.17 ibcacheedit
- 8.3.18 iblinkinfo
- 8.3.19 ibqueryerrors
- 8.3.20 ibsysstat
- 8.3.21 saquery
- 8.3.22 smpdump
- 8.4 InfiniBand Fabric Performance Utilities
- 8.4.1 ib_read_bw
- 8.4.2 ib_read_lat
- 8.4.3 ib_send_bw
- 8.4.4 ib_send_lat
- 8.4.5 ib_write_bw
- 8.4.6 ib_write_lat
- 8.4.7 ibv_read_bw
- 8.4.8 ibv_read_lat
- 8.4.9 ibv_send_bw
- 8.4.10 ibv_send_lat
- 8.4.11 ibv_write_bw
- 8.4.12 ibv_write_lat
- 8.4.13 nd_write_bw
- 8.4.14 nd_write_lat
- 8.4.15 nd_read_bw
- 8.4.16 nd_read_lat
- 8.4.17 nd_send_bw
- 8.4.18 nd_send_lat
- 8.4.19 NTttcp
- 9 Software Development Kit
- 10 Troubleshooting
- 11 Documentation
- Appendix A: Windows MPI (MS-MPI)
- Appendix B: NVGRE Configuration Scripts Examples
• Mellanox ConnectX EN 10Gbit Ethernet Adapter <X> device detected that the link connected to port <Y> is up, and has initiated normal operation.
• Mellanox ConnectX EN 10Gbit Ethernet Adapter <X> device detected that the link connected to port <Y> is down. This can occur if the physical link is disconnected or damaged, or if the other end-port is down.
• A mismatch in the configuration between the two ports may affect performance. When using MSI-X, both ports should use the same RSS mode. To fix the problem, configure the RSS mode of both ports to be the same in the driver GUI.
• Mellanox ConnectX EN 10Gbit Ethernet Adapter <X> device failed to create enough MSI-X vectors. The network interface will not use MSI-X interrupts. This may affect performance. To fix the problem, configure the number of MSI-X vectors in the registry to be at least <Y>.
10.3 Performance Troubleshooting
Issue 1. Windows Settings
Suggestion 1: In Windows 2012 and above, when a kernel debugger is configured (not necessarily physically connected), flow control is disabled unless the following registry key is set (reboot required after setting):
Registry Path: HKLM\SYSTEM\CurrentControlSet\Services\NDIS\Parameters
Type: REG_DWORD
Key name: AllowFlowControlUnderDebugger
Value: 1
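For example, the key can be created from an elevated command prompt with the built-in reg utility (a minimal sketch; the path, type and value are exactly those listed above, and a reboot is still required afterwards):
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NDIS\Parameters" /v AllowFlowControlUnderDebugger /t REG_DWORD /d 1 /f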
Suggestion 2: Go to "Power Options" in the "Control Panel". Make sure "Maximum Performance" is set as the power scheme. A reboot is required after changing the scheme.
Issue 2. General Diagnostic
Suggestion 1: Go to "Device Manager", locate the Mellanox adapter that you are debugging, right-click it and go to "Information" (a command-line cross-check of the link speed is sketched after the list below):
• PCI Gen 2: should appear as "PCI-E 5.0 Gbps x8"
• PCI Gen 3: should appear as "PCI-E 8.0 Gbps x8"
• Link Speed: 40.0Gbps/10.0Gbps
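As a quick cross-check, the link speed reported by the driver can also be read from PowerShell using the in-box NetAdapter cmdlets available in Windows Server 2012 and above (a sketch; adapter names differ per system):
Get-NetAdapter | Format-Table Name, InterfaceDescription, LinkSpeed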
Suggestion 2: To determine whether the Mellanox NIC and PCI bus can achieve their maximum speed, it is best to run a bandwidth test (here ibv_write_bw) in loopback. On the same machine:
1. Run "start /b /affinity 0x1 ibv_write_bw"
2. Run "start /b /affinity 0x2 ibv_write_bw 127.0.0.1"
3. Repeat for port 2 with the additional -p2 option (see the sketch after this list), and for other cards if necessary.
4. On PCI Gen3 the expected result is around 5700 MB/s.
   On PCI Gen2 the expected result is around 3300 MB/s.
Any number lower than that points to a bad configuration or to installation in the wrong PCI slot. Misconfigured QoS settings or Flow Control can also be the cause.
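A minimal sketch of the second-port loopback run from step 3, assuming the -p2 option is appended to both the server and the client invocations:
start /b /affinity 0x1 ibv_write_bw -p2
start /b /affinity 0x2 ibv_write_bw -p2 127.0.0.1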
Suggestion 3: To determine the maximum speed between the two sides with the most basic test:
1. Run "ib_send_bw" on machine 1.
2. Run "ib_send_bw <host1>" on machine 2, where <host1> is the hostname of machine 1.