Examples:
To use the 2nd IB card:
% setenv MPI_IB_CARD_ORDER 1
To use the 2nd port of the 2nd card:
% setenv MPI_IB_CARD_ORDER 1:1
To use the 1st IB card:
% setenv MPI_IB_CARD_ORDER 0
To assign ranks to multiple cards:
% setenv MPI_IB_CARD_ORDER 0,1,2
assigns the local ranks on each node to the listed cards in order, wrapping around when the list is exhausted.
% mpirun -hostlist "host0 4 host1 4"
creates ranks 0-3 on host0 and ranks 4-7 on host1. On host0, rank 0 is assigned to card 0, rank 1 to card 1,
rank 2 to card 2, and rank 3 to card 0. On host1, rank 4 is assigned to card 0, rank 5 to card 1, rank 6 to
card 2, and rank 7 to card 0.
% mpirun -hostlist -np 8 "host0 host1"
creates ranks 0 through 7, alternating between host0 and host1. On host0, rank 0 is assigned to card 0,
rank 2 to card 1, rank 4 to card 2, and rank 6 to card 0. On host1, rank 1 is assigned to card 0, rank 3 to
card 1, rank 5 to card 2, and rank 7 to card 0.
MPI_USE_MALLOPT_AVOID_MMAP Instructs the underlying malloc implementation to avoid mmap() and
instead use sbrk() to obtain all of the memory used. The default is MPI_USE_MALLOPT_AVOID_MMAP=0.
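For example, setting the variable to a nonzero value (assumed here to enable the behavior) before launching:
% setenv MPI_USE_MALLOPT_AVOID_MMAP 1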
MPI_MAX_REMSH=N This release includes a startup scalability enhancement when using the -f option to
mpirun. This enhancement allows a large number of HP-MPI daemons (mpid) to be created without
requiring mpirun to maintain a large number of remote shell connections.
When running with a very large number of nodes, the number of remote shells normally required to
start all of the daemons can exhaust the available file descriptors. To create the necessary daemons,
mpirun now uses the remote shell specified with MPI_REMSH to directly create no more than 20 daemons
by default. This limit can be changed using the environment variable MPI_MAX_REMSH. When the number of
daemons required exceeds MPI_MAX_REMSH, mpirun creates only MPI_MAX_REMSH remote daemons
directly. The directly created daemons then create the remaining daemons using an n-ary tree, where n
is the value of MPI_MAX_REMSH. Although this process is generally transparent to the user, the new
startup requires that each node in the cluster be able to use the specified MPI_REMSH command
(e.g., rsh, ssh) to reach every other node in the cluster without a password. The value of MPI_MAX_REMSH
is applied on a per-world basis, so applications that spawn a large number of worlds may need to
use a small value for MPI_MAX_REMSH. MPI_MAX_REMSH is only relevant when using the -f option to
mpirun. The default value is 20.
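For example, to allow mpirun to open up to 64 direct remote shell connections when launching with an
appfile (the appfile name below is illustrative):
% setenv MPI_MAX_REMSH 64
% mpirun -f my_appfile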
MPI_IBV_QPPARAMS=a,b,c,d,e Specifies QP settings for IBV where: