Other forms of usage include allocating the nodes you wish to use, which creates a
subshell. Jobsteps can then be launched within that subshell until the subshell is exited
(a complete session is sketched after the examples below).
% $MPI_ROOT/bin/mpirun [-prun|-srun] -A -N6
This allocates 6 nodes and creates a subshell.
% $MPI_ROOT/bin/mpirun [-prun|-srun] -n4 -m block ./a.out
This launches 4 ranks, one per node, on 4 of the allocated nodes. The placement looks
cyclic even though -m block was requested: with a single rank per node, block and cyclic
distributions produce the same layout.
n00 rank1
n01 rank2
n02 rank3
n03 rank4
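Putting the pieces together, a complete session might look like the following sketch
(-srun is chosen here for concreteness; -prun works the same way, and the six-node
allocation matches the example above):
% $MPI_ROOT/bin/mpirun -srun -A -N6
% $MPI_ROOT/bin/mpirun -srun -n4 -m block ./a.out
% $MPI_ROOT/bin/mpirun -srun -n6 ./a.out
% exit
The first command allocates the nodes and starts the subshell, the next two run jobsteps
within that allocation, and exit leaves the subshell, releasing the allocation.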
5. Interconnect Selection Examples
Example 1 Elan with TCP/IP fallback
% export MPI_IC_ORDER="elan:TCP"
% export MPIRUN_SYSTEM_OPTIONS="-subnet 192.168.1.1"
% export MPIRUN_OPTIONS="-prot"
% $MPI_ROOT/bin/mpirun -prun -n4 ./a.out
The command line for the above will appear to mpirun as
$MPI_ROOT/bin/mpirun -subnet 192.168.1.1 -prot -prun -n4 ./a.out
and the interconnect decision will look for the presence of Elan, using it if found.
Otherwise, TCP/IP will be used, with the communication path on the 192.168.1.* subnet.
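Because MPI_IC_ORDER is searched in the order listed, reversing its entries is another
way to make TCP/IP the first choice even when Elan is present. A minimal sketch, reusing
the four-rank job from above:
% export MPI_IC_ORDER="TCP:elan"
% $MPI_ROOT/bin/mpirun -prot -prun -n4 ./a.out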
Example 2 TCP/IP over GigE
The following is an example using TCP/IP over GigE, assuming GigE is installed and
192.168.1.1 corresponds to the GigE ethernet interface. Note that -subnet 192.168.1.1,
supplied implicitly here through MPIRUN_SYSTEM_OPTIONS, is required to get TCP/IP over
the proper subnet if eth0 is not the gigabit interface.
% export MPI_IC_ORDER="elan:TCP"
% export MPIRUN_SYSTEM_OPTIONS="-subnet 192.168.1.1"
% $MPI_ROOT/bin/mpirun -prot -TCP -prun -n4 ./a.out
Example 3 TCP/IP over Elan4
The following is an example using TCP/IP over Elan4, assuming Elan4 is installed and
configured. The subnet information is omitted, and TCP/IP is explicitly requested
with -TCP.
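A command sequence matching this description would be (a sketch: the same interconnect
order as above, an empty MPIRUN_SYSTEM_OPTIONS so that no -subnet is passed, and -TCP
requested explicitly):
% export MPI_IC_ORDER="elan:TCP"
% export MPIRUN_SYSTEM_OPTIONS=
% $MPI_ROOT/bin/mpirun -prot -TCP -prun -n4 ./a.out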