ping_pong_ring.c (HP-UX and Linux)
A cluster often has both regular Ethernet and some form of
higher-speed interconnect, such as InfiniBand. This section describes how
to use the ping_pong_ring.c example program to confirm that you are
able to run using the desired interconnect.
Running a test like this, especially on a new cluster, is useful to ensure
that the appropriate network drivers are installed and that the network
hardware is functioning properly. If any machine has defective network
cards or cables, this test can also help identify which machine
has the problem.
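The authoritative source is the ping_pong_ring.c file shipped in
$MPI_ROOT/help; the sketch below is not that program, only an
illustration of the idea behind it: each rank exchanges a message with
its ring neighbors, so every host-to-host link carries traffic, and a
hang or failure implicates a specific pair of machines.

/* Rough sketch of a ring test; see $MPI_ROOT/help/ping_pong_ring.c
 * for the real program. */
#include <stdio.h>
#include <string.h>
#include <mpi.h>

#define BUFLEN 4096

int main(int argc, char **argv)
{
    int rank, nranks, hostlen, left, right;
    char sbuf[BUFLEN], rbuf[BUFLEN];
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
    MPI_Get_processor_name(host, &hostlen);

    /* Each rank sends to its right neighbor and receives from its
     * left, so every link in the ring carries traffic. */
    right = (rank + 1) % nranks;
    left  = (rank + nranks - 1) % nranks;
    memset(sbuf, 'x', BUFLEN);

    MPI_Sendrecv(sbuf, BUFLEN, MPI_CHAR, right, 0,
                 rbuf, BUFLEN, MPI_CHAR, left,  0,
                 MPI_COMM_WORLD, &status);

    /* If one link is bad, the ranks on either side of it are the
     * ones that hang or fail here. */
    printf("rank %d on %s: received from rank %d\n", rank, host, left);

    MPI_Finalize();
    return 0;
}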
To compile the program, set the MPI_ROOT environment variable (not
required, but recommended) to a value such as /opt/hpmpi (Linux) or
/opt/mpi (HP-UX), then run
% export MPI_CC=gcc    (or whichever compiler you prefer)
% $MPI_ROOT/bin/mpicc -o pp.x \
      $MPI_ROOT/help/ping_pong_ring.c
Although mpicc will search for a suitable compiler if you
don't set MPI_CC, it is preferable to be explicit.
If you have a shared filesystem, it is easiest to put the resulting pp.x
executable there; otherwise, you will have to copy it explicitly to each
machine in your cluster.
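One way to do that copy is a simple shell loop; the host names and
destination path below are placeholders for your own:

% for h in hostA hostB hostC; do scp /path/to/pp.x $h:/path/to/pp.x; done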
As discussed elsewhere, there are a variety of supported startup
methods, and you need to know which is appropriate for your cluster.
Your situation should resemble one of the following:
• No srun, prun, or CCS job scheduler command is available
In this case, you can create an appfile such as the following:
-h hostA -np 1 /path/to/pp.x
-h hostB -np 1 /path/to/pp.x
-h hostC -np 1 /path/to/pp.x
...
-h hostZ -np 1 /path/to/pp.x
You can specify which remote shell command to use (the Linux
default is ssh) in the MPI_REMSH environment variable.
For example you might want
% export MPI_REMSH="ssh -x"
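With the appfile in place, you can launch the test with mpirun. The
appfile name here is arbitrary; adding the -prot option prints the
protocol each pair of ranks ends up using, which is how you confirm
the desired interconnect was selected:

% $MPI_ROOT/bin/mpirun -prot -f appfile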