B–Benchmark Programs
Benchmark 1: Measuring MPI Latency Between Two Nodes
The osu_latency program uses a ping-pong exchange: a message of a given size is sent from one node program to the other and echoed back, and half of the round-trip time is taken as the one-way latency. The program uses a loop, executing many such exchanges for each message size, to get an average. It defers the timing until the message has been sent and received a number of times, to be sure that all the caches in the pipeline have been filled.
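The exchange and warm-up scheme described above can be illustrated with a stripped-down program. The following is not the osu_latency source; it is a minimal MPI ping-pong sketch, with arbitrarily chosen warm-up and iteration counts, that reports half of the averaged round-trip time for each message size:

/*
 * Minimal ping-pong latency sketch (illustration only; not the osu_latency
 * source). Rank 0 sends a message to rank 1, which echoes it back; half of
 * the averaged round-trip time is reported as the one-way latency.
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define WARMUP 100        /* untimed exchanges, to warm the caches */
#define ITERS  1000       /* timed exchanges averaged per message size */
#define MAXBYTES (4 * 1024 * 1024)

int main(int argc, char **argv)
{
    int rank, nprocs;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    if (nprocs != 2) {
        if (rank == 0)
            fprintf(stderr, "This test requires exactly 2 processes\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    char *buf = malloc(MAXBYTES);
    if (buf == NULL)
        MPI_Abort(MPI_COMM_WORLD, 1);

    if (rank == 0)
        printf("# Size Latency (us)\n");

    for (size_t bytes = 0; bytes <= MAXBYTES; bytes = bytes ? bytes * 2 : 1) {
        double start = 0.0;

        for (int i = 0; i < WARMUP + ITERS; i++) {
            if (i == WARMUP) {          /* defer timing until caches are warm */
                MPI_Barrier(MPI_COMM_WORLD);
                start = MPI_Wtime();
            }
            if (rank == 0) {            /* send, then wait for the echo */
                MPI_Send(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else {                    /* echo each message back */
                MPI_Recv(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, (int)bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            /* half of the averaged round-trip time, in microseconds */
            double usec = (MPI_Wtime() - start) * 1e6 / (2.0 * ITERS);
            printf("%zu %.2f\n", bytes, usec);
        }
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Such a program can be compiled with an MPI compiler wrapper (for example, mpicc) and launched with the same style of mpirun command shown below for osu_latency.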
This benchmark always involves two node programs. It can be run with the
command:
$ mpirun -np 2 -ppn 1 -m mpihosts osu_latency
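The file given with the -m option (mpihosts in this example) lists the nodes on which the job may run, one hostname per line. A two-node file using placeholder hostnames might contain, for example:
node001
node002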
The -ppn 1 option is needed to ensure that the two communicating processes are on different nodes. Otherwise, on multiprocessor nodes, mpirun might assign both processes to the same node, and the result would then reflect the shared-memory transport mechanism rather than the latency of the InfiniPath fabric. The output of the program looks like:
# OSU MPI Latency Test (Version 2.0)
# Size Latency (us)
0 1.06
1 1.06
2 1.06
4 1.05
8 1.05
16 1.30
32 1.33
64 1.30
128 1.36
256 1.51
512 1.84
1024 2.47
2048 3.79
4096 4.99
8192 7.28
16384 11.75
32768 20.57
65536 58.28
131072 98.59
262144 164.68
524288 299.08
1048576 567.60
2097152 1104.50
4194304 2178.66
The first column displays the message size in bytes. The second column displays
the average (one-way) latency in microseconds. This example shows the syntax of
the command and the format of the output, and is not meant to represent actual
values that might be obtained on any particular InfiniPath installation.