8.4.12 ibv_write_lat
ibv_write_lat is a more advanced version of ib_write_lat: it provides more flags and features than the older version and uses improved algorithms. ibv_write_lat measures the latency of RDMA write operations of a given message size between a pair of machines, one acting as a server and the other as a client. The two sides run a ping-pong benchmark in which each side performs an RDMA write to the other side's memory only after the other side has written to its own memory. Each side samples its CPU clock on every write to the other side's memory, and the latency is calculated from these samples.
Table 42 - ibv_write_lat Flags and Options

Flag                           Description
-c, --connection=<RC/UC>       Connection type RC/UC (default RC)
-s, --size=<size>              The size of the message to exchange (default 65536)
-a, --all                      Runs sizes from 2 to 2^23
-t, --tx-depth=<dep>           The size of the TX queue (default 100)
-n, --iters=<iters>            The number of exchanges (at least 2, default 1000)
-u, --qp-timeout=<timeout>     QP timeout. The timeout value is 4 usec * 2^(timeout), default 14
-S, --sl=<sl>                  The service level (default 0)
-x, --gid-index=<index>        Test uses GID with the GID index taken from the command line (for RDMAoE the index should be 0)
-b, --bidirectional            Measures bidirectional bandwidth (default unidirectional)
-V, --version                  Displays the version number
-g, --post=<num of posts>      The number of posts for each QP in the chain (default tx_depth)
-F, --CPU-freq                 The CPU frequency test. It is active even if the cpufreq_ondemand module is loaded
-q, --qp=<num of qp's>         The number of QPs (default 1)
-I, --inline_size=<size>       The maximum size of a message to be sent in "inline mode" (default 0)
-N, --no_peak_bw               Cancels the peak-bw calculation (default with peak-bw)
-R, --rdma_cm                  Connects QPs with rdma_cm and runs the test on those QPs
-z, --com_rdma_cm              Communicates with the rdma_cm module to exchange data - uses regular QPs
-Q, --cq-mod                   Generates a CQE only after <--cq-mod> completions
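
The following invocation is an illustrative sketch only and is not taken from this manual; the host name "server1" stands for the server machine's hostname or IP address and is an assumption. As with the other perftest utilities, the server side is started first without a destination argument, and the client side is then started with the server's address; the same test flags should be passed on both sides.

   On the server side:
      ibv_write_lat -a
   On the client side:
      ibv_write_lat -a server1

With -a, the latency results are then reported for each message size from 2 bytes up to 2^23 bytes.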