User Manual
Table of Contents
- Mellanox WinOF VPI User Manual
- Table of Contents
- List of Tables
- Document Revision History
- About this Manual
- 1 Introduction
- 2 Firmware Upgrade
- 3 Driver Features
- 3.1 Hyper-V with VMQ
- 3.2 Header Data Split
- 3.3 Receive Side Scaling (RSS)
- 3.4 Port Configuration
- 3.5 Load Balancing, Fail-Over (LBFO) and VLAN
- 3.6 Ports TX Arbitration
- 3.7 RDMA over Converged Ethernet (RoCE)
- 3.8 Network Virtualization using Generic Routing Encapsulation
- 3.9 Differentiated Services Code Point (DSCP)
- 4 Deploying Windows Server 2012 and Above with SMB Direct
- 5 Driver Configuration
- 6 Performance Tuning
- 7 OpenSM - Subnet Manager
- 8 InfiniBand Fabric
- 8.1 Network Direct Interface
- 8.2 part_man - Virtual IPoIB Port Creation Utility
- 8.3 InfiniBand Fabric Diagnostic Utilities
- 8.3.1 Utilities Usage
- 8.3.2 ibdiagnet
- 8.3.3 ibportstate
- 8.3.4 ibroute
- 8.3.5 ibdump
- 8.3.6 smpquery
- 8.3.7 perfquery
- 8.3.8 ibping
- 8.3.9 ibnetdiscover
- 8.3.10 ibtracert
- 8.3.11 sminfo
- 8.3.12 ibclearerrors
- 8.3.13 ibstat
- 8.3.14 vstat
- 8.3.15 osmtest
- 8.3.16 ibaddr
- 8.3.17 ibcacheedit
- 8.3.18 iblinkinfo
- 8.3.19 ibqueryerrors
- 8.3.20 ibsysstat
- 8.3.21 saquery
- 8.3.22 smpdump
- 8.4 InfiniBand Fabric Performance Utilities
- 8.4.1 ib_read_bw
- 8.4.2 ib_read_lat
- 8.4.3 ib_send_bw
- 8.4.4 ib_send_lat
- 8.4.5 ib_write_bw
- 8.4.6 ib_write_lat
- 8.4.7 ibv_read_bw
- 8.4.8 ibv_read_lat
- 8.4.9 ibv_send_bw
- 8.4.10 ibv_send_lat
- 8.4.11 ibv_write_bw
- 8.4.12 ibv_write_lat
- 8.4.13 nd_write_bw
- 8.4.14 nd_write_lat
- 8.4.15 nd_read_bw
- 8.4.16 nd_read_lat
- 8.4.17 nd_send_bw
- 8.4.18 nd_send_lat
- 8.4.19 NTttcp
- 9 Software Development Kit
- 10 Troubleshooting
- 11 Documentation
- Appendix A: Windows MPI (MS-MPI)
- Appendix B: NVGRE Configuration Scripts Examples
Appendix A: Windows MPI (MS-MPI)
A.1 Overview
Message Passing Interface (MPI) provides virtual topology, synchronization, and communication functionality between a set of processes.
With MPI you can run a single application as multiple processes across several hosts.
Windows MPI runs over the following protocols:
• Sockets (Ethernet)
• Network Direct (ND)
A.1.1 Prerequisites
• Install HPC Pack (Build: 4.0.3906.0).
• Validate connectivity (ping) between all MPI hosts; a sample check is sketched after this list.
• Every MPI client needs to run the smpd process, which opens the MPI channel.
• The MPI initiator server needs to run mpiexec. If the initiator is also a client, it should also run smpd.
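Before starting smpd and mpiexec, it can help to confirm that every host can reach the others. The following is a minimal PowerShell sketch of such a check; the host names node01-node04 are hypothetical and should be replaced with the actual MPI hosts:
$mpiHosts = "node01", "node02", "node03", "node04"   # hypothetical host names
foreach ($h in $mpiHosts) {
    # Test-Connection sends ICMP echo requests (ping); -Quiet returns $true or $false
    if (Test-Connection -ComputerName $h -Count 2 -Quiet) {
        Write-Host "$h is reachable"
    } else {
        Write-Warning "$h is not reachable - fix connectivity before running MPI"
    }
}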
A.2 Running MPI
Step 1. Run the following command on each MPI client:
        start smpd -d -p <port>
Step 2. Install the ND provider on each MPI client (for MPI over ND).
Step 3. Run the following command on the MPI initiator server:
        mpiexec.exe -p <smpd_port> -hosts <num_of_hosts> <hosts_ip_list>
        -env MPICH_NETMASK <network_ip/subnet> -env MPICH_ND_ZCOPY_THRESHOLD -1
        -env MPICH_DISABLE_ND <0/1> -env MPICH_DISABLE_SOCK <0/1> -affinity <process>
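For illustration only, the placeholders above might be filled in as follows; the port number, host addresses, netmask, and application name (mpi_app.exe) are hypothetical:
        start smpd -d -p 8677
        mpiexec.exe -p 8677 -hosts 2 11.0.0.1 11.0.0.2 -env MPICH_NETMASK 11.0.0.0/255.255.255.0 -env MPICH_ND_ZCOPY_THRESHOLD -1 -env MPICH_DISABLE_ND 0 -env MPICH_DISABLE_SOCK 1 -affinity mpi_app.exe
In this example MPICH_DISABLE_ND 0 and MPICH_DISABLE_SOCK 1 force the run over Network Direct rather than sockets.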
A.3 Directing MSMPI Traffic
Directing MPI traffic to a specific QoS priority may be delayed due to the following:
• Except for NetDirectPortMatchCondition, the QoS PowerShell cmdlets for NetworkDirect traffic do not support port ranges (see the example after this list). Therefore, NetworkDirect traffic cannot be directed to ports 1-65535 as a range.
• The MS-MPI directive to control the port range (namely, MPICH_PORT_RANGE 3000,3030) does not work for ND, so MS-MPI chooses a random port.
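For example, a single NetworkDirect port can be matched with the New-NetQosPolicy cmdlet, but only one port per policy; the policy name, port number, and priority value below are illustrative assumptions:
# Matches NetworkDirect traffic on one specific port only - there is no range form of this condition
New-NetQosPolicy "MPI-ND" -NetDirectPortMatchCondition 3000 -PriorityValue8021Action 3
Because MS-MPI may end up on a random port, as noted above, such a single-port policy generally cannot catch the MPI traffic, which is why the next section falls back to the default QoS policy.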
A.4 Running MSMPI on the Desired Priority
Step 1. Set the default QoS policy to the desired priority. (Note: this priority should be configured as lossless end-to-end across the switches.)
Step 2. Set the SMB policy to a desired priority only if SMB traffic is running (see the sketch after these steps).
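A minimal sketch of these two steps using the NetQos PowerShell cmdlets; the policy names and priority values (3 for the default policy carrying MPI traffic, 4 for SMB) are illustrative assumptions:
# Step 1: default (catch-all) policy - MS-MPI traffic falls into it; priority 3 must be lossless end-to-end
New-NetQosPolicy "DEFAULT" -Default -PriorityValue8021Action 3
# Step 2: only if SMB traffic is also running - keep it on its own priority instead of the default
New-NetQosPolicy "SMB" -SMB -PriorityValue8021Action 4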