% export MPIRUN_OPTIONS="-netaddr 192.168.1.0/24 -prot"
% $MPI_ROOT/bin/mpirun -srun -n4 ./a.out
The command line for the above will appear to mpirun as
$MPI_ROOT/bin/mpirun -netaddr 192.168.1.0/24 -prot -srun -n4 ./a.out
and the interconnect decision will look for IBV, then VAPI, etc., down to TCP/IP. If TCP/IP is chosen, it will use the 192.168.1.* subnet.
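The same options can also be supplied directly on the mpirun command line instead of through MPIRUN_OPTIONS; the following simply restates the effective command shown above, entered explicitly:
% $MPI_ROOT/bin/mpirun -netaddr 192.168.1.0/24 -prot -srun -n4 ./a.out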
If TCP/IP is desired on a machine where other protocols are available,
the -TCP option can be used.
This example is like the previous one, except that TCP is searched for first and found. (TCP should always be available.) So TCP/IP is used instead of IBV or Elan, etc.
% $MPI_ROOT/bin/mpirun -TCP -srun -n4 ./a.out
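If TCP/IP is to be used for every run in a session, the -TCP option can also be placed in the MPIRUN_OPTIONS environment variable described earlier rather than repeated on each command line. This is a sketch combining only the options already shown above:
% export MPIRUN_OPTIONS="-TCP -prot"
% $MPI_ROOT/bin/mpirun -srun -n4 ./a.out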
The following example output shows three runs on an Elan system: first using Elan as the protocol, then using TCP/IP over GigE, and then using TCP/IP over the Quadrics card.
• This runs on Elan
[user@opte10 user]$ bsub -I -n3 -ext "SLURM[nodes=3]"
$MPI_ROOT/bin/mpirun -prot -srun ./a.out
Job <59304> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
Host 0 -- ELAN node 0 -- ranks 0
Host 1 -- ELAN node 1 -- ranks 1
Host 2 -- ELAN node 2 -- ranks 2
 host | 0    1    2
======|================
    0 : SHM  ELAN ELAN
    1 : ELAN SHM  ELAN
    2 : ELAN ELAN SHM
Hello world! I'm 0 of 3 on opte6
Hello world! I'm 1 of 3 on opte7
Hello world! I'm 2 of 3 on opte8
• This runs on TCP/IP over the GigE network configured as 172.20.x.x
on eth0
[user@opte10 user]$ bsub -I -n3 -ext "SLURM[nodes=3]"
$MPI_ROOT/bin/mpirun -prot -TCP -srun ./a.out
Job <59305> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>