HP-MPI Version 2.2 for Linux Release Note

[user@opte10 user]$ bsub -I -n3 -ext "SLURM[nodes=3]" $MPI_ROOT/bin/mpirun -prot -TCP -subnet 172.22.0.10 -srun ./a.out
Job <59307> is submitted to default queue <normal>.
<<Waiting for dispatch ...>>
<<Starting on lsfhost.localdomain>>
Host 0 -- ip 172.22.0.2 -- ranks 0
Host 1 -- ip 172.22.0.3 -- ranks 1
Host 2 -- ip 172.22.0.4 -- ranks 2
 host | 0    1    2
======|================
    0 : SHM  TCP  TCP
    1 : TCP  SHM  TCP
    2 : TCP  TCP  SHM
Hello world! I'm 0 of 3 on opte2
Hello world! I'm 1 of 3 on opte3
Hello world! I'm 2 of 3 on opte4
•Elan interface
[user@opte10 user]$ /sbin/ifconfig eip0
eip0 Link encap:Ethernet HWaddr 00:00:00:00:00:0F
inet addr:172.22.0.10 Bcast:172.22.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:65264 Metric:1
RX packets:38 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:3 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1596 (1.5 Kb) TX bytes:252 (252.0 b)
•GigE interface
[user@opte10 user]$ /sbin/ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:00:1A:19:30:80
inet addr:172.20.0.10 Bcast:172.20.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:133469120 errors:0 dropped:0 overruns:0 frame:0
TX packets:135950325 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:24498382931 (23363.4 Mb) TX bytes:29823673137 (28442.0 Mb)
Interrupt:31
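The address passed to -subnet is the inet addr of the interface that TCP traffic should use, which is why the ifconfig listings above are shown. As a minimal sketch, the address can be pulled out of net-tools-style ifconfig output like this (the sample text stands in for a live /sbin/ifconfig eip0 call, and the sed pattern is an illustrative assumption, not part of HP-MPI):

```shell
# Sample of the eip0 output shown above, used in place of a live call.
sample='eip0      Link encap:Ethernet  HWaddr 00:00:00:00:00:0F
          inet addr:172.22.0.10  Bcast:172.22.255.255  Mask:255.255.0.0'

# Extract the IPv4 address following "inet addr:".
subnet_ip=$(printf '%s\n' "$sample" | sed -n 's/.*inet addr:\([0-9.]*\).*/\1/p')
echo "$subnet_ip"
```

The resulting address is what the first example passes as -subnet 172.22.0.10.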
6. XC LSF and HP-MPI
HP-MPI jobs can be submitted on XC systems using LSF. On XC, LSF uses the SLURM srun
launching mechanism, so HP-MPI jobs must specify the -srun option whether they are
launched through LSF or directly with srun.
In the following example, LSF creates an allocation of 2 processors and srun attaches to it:
% bsub -I -n2 $MPI_ROOT/bin/mpirun -srun ./a.out
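Combining this with the -ext "SLURM[nodes=N]" option shown in the earlier example, a submission that controls both the rank count and the node count can be sketched as follows (the rank and node counts are illustrative placeholders, not recommended values):

```shell
# Illustrative: assemble a bsub command line for an interactive 4-rank
# job spread over 2 nodes, mirroring the examples in this section.
NPROC=4    # number of MPI ranks (-n)
NODES=2    # nodes requested via the SLURM external scheduler option
cmd="bsub -I -n$NPROC -ext \"SLURM[nodes=$NODES]\" \$MPI_ROOT/bin/mpirun -srun ./a.out"
echo "$cmd"
```

Echoing the assembled command rather than running it keeps the sketch self-contained; on a real XC system the line would be executed directly.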