HP-MPI Version 2.2 for Linux Release Note

What's in This Version
For more information on prun usage:
% man prun
For more information on srun usage:
% man srun
4. The following examples assume a system with the Quadrics Elan4
interconnect, composed of 2-CPU nodes. (For srun, SLURM is
configured to use Elan4.) A sketch of a program that reports these rank
placements follows the examples.
% $MPI_ROOT/bin/mpirun [-prun|-srun] -N4 ./a.out
will run a.out with 4 ranks, one per node; ranks are allocated cyclically:
n00 rank1
n01 rank2
n02 rank3
n03 rank4
% $MPI_ROOT/bin/mpirun [-prun|-srun] -n4 ./a.out
will run a.out with 4 ranks, 2 ranks per node; ranks are block allocated. Two nodes are used:
n00 rank1
n00 rank2
n01 rank3
n01 rank4
% $MPI_ROOT/bin/mpirun [-prun|-srun] -n6 -O -N2 -m block ./a.out
will run a.out with 6 ranks (oversubscribed), 3 ranks per node; ranks are block allocated.
Two nodes are used:
n00 rank1
n00 rank2
n00 rank3
n01 rank4
n01 rank5
n01 rank6
% $MPI_ROOT/bin/mpirun [-prun|-srun] -n6 -O -N2 -m cyclic ./a.out
will run a.out with 6 ranks (oversubscribed), 3 ranks per node; ranks are allocated cyclically.
Two nodes are used:
n00 rank1
n01 rank2
n00 rank3
n01 rank4
n00 rank5
n01 rank6
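
The listings above can be reproduced with a small MPI program that reports
which host each rank runs on. The following C sketch is illustrative only:
it is not the a.out used in these examples, and it prints the 0-based rank
numbers that MPI assigns, whereas the listings above count ranks from 1.

/* rank_host.c - print this process's rank, the total rank
 * count, and the host it is running on. Illustrative sketch. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    MPI_Get_processor_name(host, &len);     /* node this rank landed on */

    printf("%s rank%d of %d\n", host, rank, size);

    MPI_Finalize();
    return 0;
}

Compile with the mpicc wrapper and launch as in the examples above (the
file name rank_host.c is arbitrary):

% $MPI_ROOT/bin/mpicc rank_host.c -o a.out
% $MPI_ROOT/bin/mpirun [-prun|-srun] -N4 ./a.out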