HP-MPI User's Guide (11th Edition)

Debugging and troubleshooting
Troubleshooting HP-MPI applications
Chapter 6
Testing the network on HP-UX and Linux
Clusters often have both Ethernet and some form of higher-speed
interconnect such as InfiniBand. This section describes how to use the
ping_pong_ring.c example program to confirm that you can run
over the desired interconnect.
Running a test like this, especially on a new cluster, helps ensure
that the appropriate network drivers are installed and that the network
hardware is functioning properly. If any machine has a defective network
card or cable, this test can also help identify which machine
has the problem.
To compile the program, set the MPI_ROOT environment variable
(recommended, though not required) to a value such as /opt/hpmpi (for
Linux) or /opt/mpi (for HP-UX), then run:
% export MPI_CC=gcc    (or whatever compiler you want)
% $MPI_ROOT/bin/mpicc -o pp.x \
  $MPI_ROOT/help/ping_pong_ring.c
Although mpicc searches for a compiler to use if you don't specify
MPI_CC, it is preferable to be explicit.
If you have a shared filesystem, it is easiest to put the resulting pp.x
executable there; otherwise you will have to copy it explicitly to each
machine in your cluster.
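If no shared filesystem is available, the copying can be scripted. The
following sketch is only an illustration: the host names hostA, hostB,
and hostC and the /tmp destination are placeholders for your own
cluster's nodes and path, and scp is assumed to be available alongside
ssh. As written it prints each copy command (a dry run); remove the
echo to perform the copies:

```shell
# Placeholder host list; substitute your cluster's node names.
hosts="hostA hostB hostC"
for h in $hosts; do
    # Dry run: prints the command. Drop "echo" to actually copy.
    # Use rcp instead of scp if your cluster uses rsh (MPI_REMSH="rsh -x").
    echo scp pp.x $h:/tmp/pp.x
done
```

The destination path must be the same on every host, because mpirun
launches the same executable path on each node.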
Use the startup that is appropriate for your cluster. Your situation
should resemble one of the following:
If no job scheduler (such as srun, prun, or LSF) is available,
run a command like:
$MPI_ROOT/bin/mpirun -prot -hostlist \
hostA,hostB,...hostZ pp.x
You may need to specify which remote shell command to use (the
default is ssh) by setting the MPI_REMSH environment variable. For
example:
% export MPI_REMSH="rsh -x" (optional)
If LSF is being used, create an appfile such as: