This uses 4 ranks on 4 nodes from the existing allocation. Note that
we asked for block allocation.
n00 rank1
n01 rank2
n02 rank3
n03 rank4
• Use mpirun with -srun on HP XC clusters. For example:
% $MPI_ROOT/bin/mpirun <mpirun options> -srun \
<srun options> <program> <args>
Some features, such as mpirun -stdio processing, are unavailable
when launching with -srun. The -np option is not allowed with -srun.
The following mpirun options are allowed with -srun:
% $MPI_ROOT/bin/mpirun [-help] [-version] [-jv] [-i <spec>]
[-universe_size=#] [-sp <paths>] [-T] [-prot] [-spawn]
[-tv] [-1sided] [-e var[=val]] -srun <srun options>
<program> [<args>]
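For illustration, here is a launch that combines some of these options
with -srun (the environment variable name MYVAR and the program ./a.out
are placeholders, not part of the synopsis):
% $MPI_ROOT/bin/mpirun -prot -e MYVAR=1 -srun -n4 ./a.out
Here -prot prints the communication protocol table at startup, -e
propagates MYVAR=1 into each rank's environment, and the options after
-srun are passed through to srun.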
For more information on srun usage:
% man srun
The following examples assume the system has the Quadrics Elan
interconnect, SLURM is configured to use Elan, and the system is a
collection of 2-CPU nodes.
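Before launching an MPI job, you can preview how SLURM will place tasks
by running a plain command through srun. The output below is
illustrative only (node names vary by system), but under the 2-CPU-node
assumption the default block placement would look like this:
% srun -n4 hostname
n00
n00
n01
n01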
% $MPI_ROOT/bin/mpirun -srun -N4 ./a.out
will run a.out with 4 ranks, one per node; the ranks are allocated
cyclically:
n00 rank1
n01 rank2
n02 rank3
n03 rank4
% $MPI_ROOT/bin/mpirun -srun -n4 ./a.out
will run a.out with 4 ranks, 2 ranks per node; the ranks are block
allocated, so only two nodes are used.
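By analogy with the cyclic listing above, the block placement under the
same assumptions would be:
n00 rank1
n00 rank2
n01 rank3
n01 rank4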
Another form of usage is to allocate the nodes you want to use, which
creates a subshell; jobsteps can then be launched within that subshell
until the subshell is exited.
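A sketch of such a session (this assumes an older SLURM in which
srun -A allocates resources and spawns a subshell; newer SLURM versions
use salloc for the same purpose):
% srun -A -N4
% $MPI_ROOT/bin/mpirun -srun -n4 ./a.out
% $MPI_ROOT/bin/mpirun -srun -n2 ./a.out
% exit
Here srun -A -N4 allocates four nodes and starts the subshell, each
mpirun -srun line launches a jobstep within that allocation, and exit
releases the allocation.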