User guide

A–Benchmark Programs
Benchmark 3: Messaging Rate Microbenchmarks
IB0054606-02 A A-7
This test was run on 12-core compute nodes, so we used Open MPI's -npernode 12
option to place 12 MPI processes on each node (24 processes across the two
nodes) to maximize message rate. Note that the output below confirms that
there are 12 pairs of communicating processes.
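As a sketch, the invocation might look like the following. The hostfile name and benchmark path are illustrative placeholders, not taken from this guide; only the -npernode 12 option and the 24-process total come from the text above.

```shell
# Illustrative only: two 12-core nodes listed in ./hosts, with the OSU
# benchmark binary built locally. 24 ranks, 12 per node, form 12 pairs.
mpirun -np 24 -npernode 12 -hostfile ./hosts ./osu_mbw_mr
```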
# OSU MPI Multiple Bandwidth / Message Rate Test v3.1.1
# [ pairs: 12 ] [ window size: 64 ]
# Size         MB/s     Messages/s
1             22.77    22768062.43
2             44.90    22449128.66
4             91.75    22938300.02
8            179.23    22403849.44
16           279.91    17494300.07
32           554.16    17317485.47
64          1119.88    17498101.32
128         1740.54    13597979.96
256         2110.22     8243066.36
512         2353.17     4596038.46
1024        2495.88     2437386.38
2048        2573.99     1256833.08
4096        2567.88      626923.21
8192        2757.54      336613.42
16384       3283.94      200435.90
32768       3291.54      100449.84
65536       3298.20       50326.50
131072      3305.77       25221.05
262144      3310.39       12628.14
524288      3310.83        6314.90
1048576     3311.11        3157.72
2097152     3323.50        1584.77
4194304     3302.35         787.34
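The Messages/s column follows directly from the bandwidth column: message rate is bandwidth (in bytes/s) divided by the message size. A small sketch checking this against a few rows of the output above (the printed MB/s values are rounded, so a small tolerance is allowed):

```python
# Messages/s = (MB/s * 1e6) / size_in_bytes.
# Rows (size, MB/s, Messages/s) are taken from the osu_mbw_mr output above.
rows = [
    (1,       22.77,   22768062.43),
    (1024,    2495.88, 2437386.38),
    (4194304, 3302.35, 787.34),
]

for size, mb_s, msgs_s in rows:
    derived = mb_s * 1e6 / size
    # Allow 0.1% tolerance for the two-decimal rounding of the MB/s column.
    assert abs(derived - msgs_s) / msgs_s < 1e-3, (size, derived, msgs_s)
```

This also explains the shape of the table: for small messages the rate is nearly flat (per-message overhead dominates), while for large messages the bandwidth saturates around 3.3 GB/s and the rate falls off in inverse proportion to message size.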
An Enhanced Multiple Bandwidth / Message Rate Test (mpi_multibw)
mpi_multibw is a modified form of the OSU Network-Based Computing Lab's
osu_mbw_mr benchmark (shown in the previous example), enhanced by QLogic to
optionally run in a bidirectional mode and to scale better on the larger
multi-core nodes available today. It has been enhanced with the following
additional functionality: