HP-MPI User's Guide (11th Edition)
Understanding HP-MPI
Running applications on HP-UX and Linux
Chapter 3
% srun -A -n4
This allocates 4 processors (here, 2 nodes with 2 CPUs each) and creates a subshell within the allocation.
% $MPI_ROOT/bin/mpirun -srun ./a.out
This runs on the previously allocated 2 nodes cyclically.
n00 rank1
n01 rank2
n00 rank3
n01 rank4
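The cyclic placement above can be sketched as follows. This is an illustrative model of round-robin rank-to-node mapping, not HP-MPI code; the node names and 2-node allocation are taken from the example output above:

```python
# Sketch of cyclic (round-robin) rank placement across an allocation.
# Node names and rank numbering mirror the 2-node example above.

def cyclic_placement(ranks, nodes):
    """Assign each rank to a node round-robin, as cyclic distribution does."""
    return {rank: nodes[i % len(nodes)] for i, rank in enumerate(ranks)}

placement = cyclic_placement(ranks=[1, 2, 3, 4], nodes=["n00", "n01"])
for rank, node in placement.items():
    print(f"{node} rank{rank}")
```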
• Use XC LSF and HP-MPI
HP-MPI jobs can be submitted through LSF. On XC systems, LSF uses
the SLURM srun launching mechanism, so HP-MPI jobs must specify the
-srun option whether they are launched through LSF or with srun directly.
% bsub -I -n2 $MPI_ROOT/bin/mpirun -srun ./a.out
LSF creates an allocation of 2 processors and srun attaches to it.
% bsub -I -n12 $MPI_ROOT/bin/mpirun -srun -n6 \
-N6 ./a.out
LSF creates an allocation of 12 processors, and srun starts 6 ranks,
one per node across 6 nodes. (This example assumes 2 CPUs per node.)
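The slot arithmetic behind this example can be checked with a small sketch; the 2-CPUs-per-node figure is the assumption stated above:

```python
# Check the slot arithmetic for: bsub -n12 ... mpirun -srun -n6 -N6
# Assumes 2 CPUs per node, as stated in the example above.

cpus_per_node = 2
lsf_slots = 12                                  # bsub -n12 allocates 12 processors
nodes_allocated = lsf_slots // cpus_per_node    # -> 6 nodes

srun_ranks = 6                                  # -srun -n6: 6 ranks
srun_nodes = 6                                  # -N6: spread across 6 nodes
ranks_per_node = srun_ranks // srun_nodes       # -> 1 rank (CPU) per node

print(nodes_allocated, ranks_per_node)
```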
LSF jobs can be submitted without the -I (interactive) option.
An alternative mechanism for achieving one rank per node uses the
-ext option to LSF:
% bsub -I -n3 -ext "SLURM[nodes=3]" \
$MPI_ROOT/bin/mpirun -srun ./a.out
The -ext option can also be used to request a specific node. The
command line would look like the following:
% bsub -I -n2 -ext "SLURM[nodelist=n10]" mpirun -srun \
./hello_world
Job <1883> is submitted to default queue <interactive>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
Hello world! I'm 0 of 2 on n10
Hello world! I'm 1 of 2 on n10
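The effect of the nodelist constraint can be modeled with the same placement idea: with the allocation restricted to a single node, every rank lands on it. This is an illustrative sketch only; the node name n10 and the 2-rank count come from the example above:

```python
# Model of SLURM[nodelist=n10]: the allocation is restricted to one node,
# so both ranks are placed there. Illustrative only; not HP-MPI code.

def place_ranks(num_ranks, nodes):
    """Round-robin ranks over the allowed node list."""
    return [(rank, nodes[rank % len(nodes)]) for rank in range(num_ranks)]

for rank, node in place_ranks(2, ["n10"]):
    print(f"Hello world! I'm {rank} of 2 on {node}")
```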