ping_pong_ring.c (Windows)
Clusters often have both Ethernet and some form of higher-speed
interconnect, such as InfiniBand. This section describes how to use the
ping_pong_ring.c example program to confirm that you can run
using the desired interconnect.
Running a test like this, especially on a new cluster, is useful to ensure
that the appropriate network drivers are installed and that the network
hardware is functioning properly. If any machine has defective network
cards or cables, this test can also be useful for identifying which machine
has the problem.
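The test pattern itself is straightforward: every rank exchanges a
message with its neighbors in a ring, so each host-to-host link in the
job is exercised and timed. The listing below is only a minimal sketch
of that pattern for illustration; it is not the ping_pong_ring.c source
that ships in "%MPI_ROOT%\help".

/*
 * Illustrative sketch only -- not the shipped ping_pong_ring.c.
 * Each rank exchanges a message with its ring neighbors, so every
 * host-to-host link in the job gets exercised.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, bytes = 0;
    int next, prev;
    char *buf;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (argc > 1)                 /* optional message size in bytes */
        bytes = atoi(argv[1]);
    buf = (char *) malloc(bytes > 0 ? bytes : 1);
    memset(buf, 0, bytes > 0 ? bytes : 1);

    next = (rank + 1) % size;           /* neighbor to the "right" */
    prev = (rank + size - 1) % size;    /* neighbor to the "left"  */

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    /* pass the message one step around the ring, then back again */
    MPI_Sendrecv_replace(buf, bytes, MPI_CHAR, next, 0, prev, 0,
                         MPI_COMM_WORLD, &status);
    MPI_Sendrecv_replace(buf, bytes, MPI_CHAR, prev, 1, next, 1,
                         MPI_COMM_WORLD, &status);
    t1 = MPI_Wtime();

    printf("rank %d of %d: %d bytes around the ring and back in %f seconds\n",
           rank, size, bytes, t1 - t0);

    free(buf);
    MPI_Finalize();
    return 0;
}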
To compile the program, set the MPI_ROOT environment variable to the
location of HP-MPI. The default is
"C:\Program Files (x86)\Hewlett-Packard\HP-MPI" for 64-bit systems,
and "C:\Program Files\Hewlett-Packard\HP-MPI" for 32-bit systems.
This may already be set by the HP-MPI install.
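If MPI_ROOT is not already set, you can set it for the current command
window before compiling; for example, assuming the default 64-bit
location:
> set MPI_ROOT=C:\Program Files (x86)\Hewlett-Packard\HP-MPI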
Open a command window for the compiler you plan to use; this puts the
compiler and its libraries in your path. Then compile the program
using the mpicc wrapper:
> "%MPI_ROOT%\bin\mpicc" -mpi64 /out:pp.exe ^
"%MPI_ROOT%\help\ping_ping_ring.c"
Use the startup that is appropriate for your cluster. Your situation
should resemble one of the following:
If running on Windows CCS using automatic scheduling:
Submit the command to the scheduler, but specify the total number of
CPUs to be allocated on the nodes with the -np option. When used in
this fashion, this is NOT the rank count. Also include the -nodex flag
to indicate only one rank per node.
Assume we have 4 CPUs per node in this cluster. Requesting -np 12
therefore allocates three nodes, and with -nodex the job runs one rank
on each of those three nodes. The command would be:
> "%MPI_ROOT%\bin\mpirun" -ccp -np 12 -IBAL -nodex -prot ^
ping_ping_ring.exe
> "%MPI_ROOT%\bin\mpirun" -ccp -np 12 -IBAL -nodex -prot ^
ping_ping_ring.exe 10000