
Debugging and troubleshooting
Troubleshooting HP-MPI applications
If the run aborts with an error message, it is possible that HP-MPI
incorrectly determined which interconnect was available. A common way
to encounter this problem is to run a 32-bit application on a 64-bit
machine such as an Opteron or Intel64 system, because some network
vendors provide only 64-bit libraries.
HP-MPI determines which interconnect to use before it knows the
application's bitness. To get proper network selection in that case,
specify that the application is 32-bit when running on Opteron/Intel64
machines:
% $MPI_ROOT/bin/mpirun -mpi32 ...
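For example, a complete 32-bit test run might resemble the following
line; the appfile name is hypothetical, and the -prot option, which
prints the protocol HP-MPI selected for each connection, is a
convenient way to confirm that the intended interconnect was chosen:
% $MPI_ROOT/bin/mpirun -mpi32 -prot -f appfile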
Testing the network on Windows
Clusters often have both Ethernet and some form of higher-speed
interconnect such as InfiniBand. This section describes how to use the
ping_pong_ring.c example program to confirm that you are able to run
using the desired interconnect.
Running a test like this, especially on a new cluster, is useful to ensure
that the appropriate network drivers are installed and that the network
hardware is functioning properly. If any machine has defective network
cards or cables, this test can also be useful for identifying which machine
has the problem.
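For a rough idea of what such a test does, the following is a minimal
sketch of a ring-style ping-pong written against the standard MPI C
interface. It is not the ping_pong_ring.c shipped under
%MPI_ROOT%\help, which is the program the rest of this section
compiles and runs; the buffer size and message text here are
illustrative only. Each rank sends a message to the next rank in the
ring and receives one from the previous rank, so every link between
hosts gets exercised.

/* Minimal ring ping-pong sketch (not the shipped ping_pong_ring.c). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, next, prev;
    char sendbuf[64], recvbuf[64];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    next = (rank + 1) % size;           /* neighbor to send to */
    prev = (rank + size - 1) % size;    /* neighbor to receive from */

    sprintf(sendbuf, "hello from rank %d", rank);

    /* Send to the next rank and receive from the previous rank in a
     * single call so the ring cannot deadlock. */
    MPI_Sendrecv(sendbuf, sizeof(sendbuf), MPI_CHAR, next, 0,
                 recvbuf, sizeof(recvbuf), MPI_CHAR, prev, 0,
                 MPI_COMM_WORLD, &status);

    printf("rank %d received: %s\n", rank, recvbuf);

    MPI_Finalize();
    return 0;
}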
To compile the program, set the MPI_ROOT environment variable to the
location of HP-MPI. The default is
"C:\Program Files (x86)\Hewlett-Packard\HP-MPI" for 6-bit systems,
and "C:\Program Files \Hewlett-Packard\HP-MPI" for 32-bit systems.
This may already be set by the HP-MPI install.
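For example, to set the variable by hand in a command window on a
64-bit system, using the default location shown above:
> set MPI_ROOT=C:\Program Files (x86)\Hewlett-Packard\HP-MPI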
Open a command window for the compiler you plan to use, so that the
compiler and its libraries are on your path, and compile the program
using the mpicc wrapper:
> "%MPI_ROOT%\bin\mpicc" -mpi64 /out:pp.exe ^
"%MPI_ROOT%\help\ping_ping_ring.c"
Use the startup method that is appropriate for your cluster. Your
situation should resemble one of the following:
If running on Windows CCS using appfile mode:
Create an appfile such as:
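For example, assuming two hypothetical compute nodes named node1 and
node2, and the pp.exe built above placed where both nodes can reach
it, each line of the appfile names one host:
-h node1 -np 1 pp.exe
-h node2 -np 1 pp.exe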