If running on Windows CCS using appfile mode:
Create an appfile such as:
-h hostA -np 1 \\node\share\path\to\pp.x
-h hostB -np 1 \\node\share\path\to\pp.x
-h hostC -np 1 \\node\share\path\to\pp.x
Submit the command to the scheduler using Automatic scheduling, from
a mapped share drive:
> "%MPI_ROOT%\bin\mpirun" -ccp -prot -IBAL -f appfile
> "%MPI_ROOT%\bin\mpirun" -ccp -prot -IBAL -f appfile ^
-- 1000000
If running on Windows CCS using automatic scheduling:
Submit the command to the scheduler, but include the total number of
CPUs needed on the nodes as the -np value. This is NOT the rank
count when used in this fashion. Also include the -nodex flag to indicate
only one rank per node.
Assume we have 4 CPUs per node in this cluster. The commands would be:
> "%MPI_ROOT%\bin\mpirun" -ccp -np 12 -IBAL -nodex -prot ^
ping_ping_ring.exe
> "%MPI_ROOT%\bin\mpirun" -ccp -np 12 -IBAL -nodex -prot ^
ping_ping_ring.exe 1000000
If running on Windows 2003/XP using appfile mode:
Create an appfile such as:
-h hostA -np 1 \\node\share\path\to\pp.x
-h hostB -np 1 \\node\share\path\to\pp.x
-h hostC -np 1 \\node\share\path\to\pp.x
Run the command from a mapped share drive:
> "%MPI_ROOT%\bin\mpirun" -ccp -prot -IBAL -f appfile
> "%MPI_ROOT%\bin\mpirun" -ccp -prot -IBAL -f appfile ^
-- 1000000
In each case above, the first mpirun uses 0 bytes per message and is
checking latency. The second mpirun uses 1000000 bytes per message
and is checking bandwidth.
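The pp.x and ping_ping_ring.exe binaries are built from the ping_ping_ring
example program. As a rough sketch of what such a test does, the following
MPI C program passes a message of the requested size around a ring of ranks
and reports the time per hop; the iteration count, buffer handling, and
output format shown here are illustrative assumptions, not the shipped source:

/* Minimal sketch of a ring latency/bandwidth test, for illustration only.
 * The real pp.x / ping_ping_ring.exe shipped with HP-MPI may differ.
 * Message size is taken from argv[1] (0 bytes if omitted), matching the
 * 0-byte and 1000000-byte runs shown above. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    int rank, nranks, next, prev, i;
    long nbytes = 0;
    const int iters = 1000;            /* assumed iteration count */
    char *buf;
    double t0, t1, usec;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    if (argc > 1)
        nbytes = atol(argv[1]);        /* e.g. 1000000 for the bandwidth run */
    buf = malloc(nbytes ? nbytes : 1);
    memset(buf, 0, nbytes ? nbytes : 1);

    next = (rank + 1) % nranks;        /* each rank talks to its ring neighbors */
    prev = (rank + nranks - 1) % nranks;

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < iters; i++) {
        if (rank == 0) {               /* rank 0 starts the message around the ring */
            MPI_Send(buf, (int)nbytes, MPI_CHAR, next, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, (int)nbytes, MPI_CHAR, prev, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else {                       /* other ranks forward it to the next hop */
            MPI_Recv(buf, (int)nbytes, MPI_CHAR, prev, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, (int)nbytes, MPI_CHAR, next, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0) {
        /* each iteration makes nranks hops around the ring */
        usec = (t1 - t0) * 1e6 / (iters * nranks);
        printf("%ld bytes: %.2f usec per hop", nbytes, usec);
        if (nbytes > 0)
            printf(", %.2f MB/sec", nbytes / usec);  /* bytes per usec == MB/sec */
        printf("\n");
    }

    free(buf);
    MPI_Finalize();
    return 0;
}

Run with no argument, this sketch corresponds to the 0-byte latency runs
above; run with an argument of 1000000, it corresponds to the bandwidth runs.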
Example output might look like: