User Manual
Table Of Contents
- Mellanox WinOF VPI User Manual
- Table of Contents
- List of Tables
- Document Revision History
- About this Manual
- 1 Introduction
- 2 Firmware Upgrade
- 3 Driver Features
- 3.1 Hyper-V with VMQ
- 3.2 Header Data Split
- 3.3 Receive Side Scaling (RSS)
- 3.4 Port Configuration
- 3.5 Load Balancing, Fail-Over (LBFO) and VLAN
- 3.6 Ports TX Arbitration
- 3.7 RDMA over Converged Ethernet (RoCE)
- 3.8 Network Virtualization using Generic Routing Encapsulation
- 3.9 Differentiated Services Code Point (DSCP)
- 4 Deploying Windows Server 2012 and Above with SMB Direct
- 5 Driver Configuration
- 6 Performance Tuning
- 7 OpenSM - Subnet Manager
- 8 InfiniBand Fabric
- 8.1 Network Direct Interface
- 8.2 part_man - Virtual IPoIB Port Creation Utility
- 8.3 InfiniBand Fabric Diagnostic Utilities
- 8.3.1 Utilities Usage
- 8.3.2 ibdiagnet
- 8.3.3 ibportstate
- 8.3.4 ibroute
- 8.3.5 ibdump
- 8.3.6 smpquery
- 8.3.7 perfquery
- 8.3.8 ibping
- 8.3.9 ibnetdiscover
- 8.3.10 ibtracert
- 8.3.11 sminfo
- 8.3.12 ibclearerrors
- 8.3.13 ibstat
- 8.3.14 vstat
- 8.3.15 osmtest
- 8.3.16 ibaddr
- 8.3.17 ibcacheedit
- 8.3.18 iblinkinfo
- 8.3.19 ibqueryerrors
- 8.3.20 ibsysstat
- 8.3.21 saquery
- 8.3.22 smpdump
- 8.4 InfiniBand Fabric Performance Utilities
- 8.4.1 ib_read_bw
- 8.4.2 ib_read_lat
- 8.4.3 ib_send_bw
- 8.4.4 ib_send_lat
- 8.4.5 ib_write_bw
- 8.4.6 ib_write_lat
- 8.4.7 ibv_read_bw
- 8.4.8 ibv_read_lat
- 8.4.9 ibv_send_bw
- 8.4.10 ibv_send_lat
- 8.4.11 ibv_write_bw
- 8.4.12 ibv_write_lat
- 8.4.13 nd_write_bw
- 8.4.14 nd_write_lat
- 8.4.15 nd_read_bw
- 8.4.16 nd_read_lat
- 8.4.17 nd_send_bw
- 8.4.18 nd_send_lat
- 8.4.19 NTttcp
- 9 Software Development Kit
- 10 Troubleshooting
- 11 Documentation
- Appendix A: Windows MPI (MS-MPI)
- Appendix B: NVGRE Configuration Scripts Examples
3. Results appear in MB/s (megabytes, where 1 MB = 2^20 bytes) and reflect the actual
data that was transferred, excluding headers.
4. If these results are not as expected, the problem is most probably one or more of
the following (see the checks sketched below):
• Old firmware version.
• Misconfigured flow control: Global Pause or PFC is configured incorrectly on the hosts,
routers and switches. See Section 3.7, "RDMA over Converged Ethernet (RoCE)," on page 29.
• CPU/power options are not set to "Maximum Performance".
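One way to check the firmware and power items from a command prompt is sketched below; vstat is the utility described in Section 8.3.14, and the GUID passed to powercfg is the standard Windows "High performance" power scheme (the flow-control check is covered under Issue 3):

    # Print adapter information, including the firmware version,
    # using the vstat utility shipped with WinOF (Section 8.3.14)
    vstat

    # Activate the built-in "High performance" power plan
    powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

If the firmware reported by vstat is older than the version bundled with the installed WinOF package, upgrade it as described in Chapter 2, "Firmware Upgrade".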
Issue 3. QoS and Flow-control
Flow control settings can greatly affect results. To see the configured settings for all of
the QoS options, open a PowerShell prompt and run "Get-NetAdapterQos", as in the example below.
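As a minimal sketch (the exact output fields depend on the OS and driver version), the cmdlet can also be pointed at a single adapter to inspect its flow-control state; the interface name "Ethernet 2" is a placeholder:

    # Show QoS and flow-control state for one adapter; substitute
    # the name of your Mellanox interface for "Ethernet 2"
    Get-NetAdapterQos -Name "Ethernet 2"

The OperationalFlowControl field of the output shows which priorities, if any, have PFC enabled; it should match across all hosts in the fabric.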
To achieve maximum performance, all of the following conditions must be met:
1. All of the hosts, switches and routers use the same matching flow control
settings. If Global Pause is used, all devices must be configured for it. If PFC (Priority
Flow Control) is used, all devices must have matching settings for all priorities.
2. ETS settings that limit the speed of some priorities will greatly affect the results.
3. Make sure flow control is enabled on the Mellanox interfaces (it is enabled by default).
Go to the Device Manager, right-click the Mellanox interface, go to the "Advanced" tab,
and make sure flow control is enabled for both TX and RX (a PowerShell alternative is
sketched after this list).
4. To eliminate QoS and flow control as the performance-degrading factor, set all
devices to run with Global Pause and rerun the tests:
• Set Global Pause on the switches and routers.
• Run "Disable-NetAdapterQos *" on all of the hosts in a PowerShell window.