Product specifications
6 – MPI Sample Applications
D000006-000 Rev A 6-3
This will run bandwidth tests with assorted message sizes from 4 KB to 4 MB. To run a different set of message sizes, an optional argument specifying the maximum message size can be provided.
This benchmark will only use the first two nodes listed in mpi_hosts.
During this benchmark, the /opt/iba/src/mpi_apps/mpi.param.pallas
configuration file is used.
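The effect of the optional maximum-size argument can be sketched with a small loop. This is only an illustration, not the benchmark itself: it assumes the message sizes double from 4 KB up to the given maximum (4 MB by default) and merely prints the sizes that would be exercised.

```shell
# Sketch of the message-size sweep (assumption: sizes double from
# 4 KB up to the maximum; the real sweep is performed inside the
# benchmark, not by this loop).
max=${1:-4194304}   # optional argument: maximum message size; 4 MB default
size=4096           # starting message size: 4 KB
while [ "$size" -le "$max" ]; do
    echo "$size"        # one bandwidth measurement per message size
    size=$((size * 2))
done
```

Passing a smaller maximum (for example, 65536) would restrict the sweep to the shorter message sizes.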
6.4
OSU Bandwidth2
This is a simple benchmark of maximum unidirectional bandwidth.
A script is provided to run this application that will execute an assortment of sizes:
1. cd /opt/iba/src/mpi_apps
2. ./run_bw2
This will run bandwidth tests with message sizes from 1 byte to 4 MB. This benchmark will only use the first two nodes listed in mpi_hosts.
During this benchmark, the /opt/iba/src/mpi_apps/mpi.param.pallas
configuration file is used.
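Because only the first two entries of mpi_hosts are used, the node pair can be previewed before running. The sketch below writes an illustrative hosts file (the names are placeholders) and selects the pair the benchmark would use:

```shell
# Preview the two nodes the benchmark will use: it runs point-to-point
# between the first two hosts listed in the hosts file.
hostfile=./mpi_hosts.example
printf 'node-01\nnode-02\nnode-03\n' > "$hostfile"   # illustrative contents
pair=$(head -n 2 "$hostfile")                        # first two entries only
echo "$pair"
```

Reordering mpi_hosts is therefore the way to choose which pair of nodes the point-to-point benchmarks measure.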
6.5
OSU Bidirectional Bandwidth
This is a simple benchmark of maximum bidirectional bandwidth.
A script is provided to run this application that will execute an assortment of sizes:
1. cd /opt/iba/src/mpi_apps
2. ./run_bibw2
This will run bandwidth tests with message sizes from 1 byte to 4 MB. This benchmark will only use the first two nodes listed in mpi_hosts.
During this benchmark, the /opt/iba/src/mpi_apps/mpi.param.pallas
configuration file is used.
6.6
High Performance Linpack (HPL)
This is a standard benchmark of floating-point linear algebra performance.
Included with HPL is the linear algebra library by Dr. K. Goto. If desired, the
user can modify the HPL makefiles to use alternate libraries. Source code for
ATLAS, an open-source math library, is also provided in
/opt/iba/src/mpi_apps/ATLAS.
HPL is known to scale very well and is the benchmark of choice for determining a
system's ranking in the Top 500 list of supercomputers (http://www.top500.org).
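As a sketch of such a makefile change, assuming the standard HPL Make.<arch> layout, the library selection lines could be pointed at the provided ATLAS sources instead of the default library. The lib/ path and library names below are assumptions for illustration only:

```make
# Hypothetical excerpt of an HPL Make.<arch> file. LAdir/LAinc/LAlib
# select the linear algebra library HPL links against; the exact path
# and archive names depend on how ATLAS was built.
LAdir = /opt/iba/src/mpi_apps/ATLAS/lib
LAinc =
LAlib = $(LAdir)/libcblas.a $(LAdir)/libatlas.a
```

After editing the Make.<arch> file, HPL must be rebuilt for the change to take effect.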