HP-MPI Version 2.2.5 for Linux Release Note
host | 0 1 2
======|================
0 : SHM TCP TCP
1 : TCP SHM TCP
2 : TCP TCP SHM
Hello world! I'm 0 of 3 on opte2
Hello world! I'm 1 of 3 on opte3
Hello world! I'm 2 of 3 on opte4
• Elan interface
[user@opte10 user]$ /sbin/ifconfig eip0
eip0 Link encap:Ethernet HWaddr 00:00:00:00:00:0F
inet addr:172.22.0.10 Bcast:172.22.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:65264 Metric:1
RX packets:38 errors:0 dropped:0 overruns:0 frame:0
TX packets:6 errors:0 dropped:3 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1596 (1.5 Kb) TX bytes:252 (252.0 b)
• GigE interface
[user@opte10 user]$ /sbin/ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:00:1A:19:30:80
inet addr:172.20.0.10 Bcast:172.20.255.255 Mask:255.255.0.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:133469120 errors:0 dropped:0 overruns:0 frame:0
TX packets:135950325 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:24498382931 (23363.4 Mb) TX bytes:29823673137 (28442.0 Mb)
Interrupt:31
6. HP XC LSF and HP-MPI
HP-MPI jobs can be submitted using LSF. LSF uses the SLURM srun launching
mechanism, so HP-MPI jobs must specify the -srun option to mpirun whether the job
is launched through LSF or with srun directly.
LSF creates an allocation of 2 processors and srun attaches to it.
% bsub -I -n2 $MPI_ROOT/bin/mpirun -srun ./a.out
LSF creates an allocation of 12 processors and srun uses 1 CPU per node (6 nodes),
assuming 2 CPUs per node.
% bsub -I -n12 $MPI_ROOT/bin/mpirun -srun -n6 -N6 ./a.out
LSF jobs can be submitted without the -I (interactive) option.
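For example, dropping -I from the first example above submits the same 2-processor
job in batch mode, with output handled by LSF's normal job output mechanism (the
other flags are unchanged from the interactive examples):

% bsub -n2 $MPI_ROOT/bin/mpirun -srun ./a.out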
An alternative mechanism for achieving one rank per node uses the -ext option
to LSF: