HP-MPI Version 2.2.7 for Linux Release Note

New flag -dbgspin (page 18)
Improved message rate (page 18)
1.2.7 Description of Benefits and Features
The following section provides brief descriptions of the features included in this release. For
more information on HP-MPI, refer to the HP-MPI User’s Guide available at http://docs.hp.com.
1.2.7.1 Some dynamic process functionality no longer requires -spawn
The MPI functions MPI_Comm_accept(), MPI_Comm_connect(), MPI_Comm_join(),
MPI_Open_port(), and MPI_Close_port() no longer require the user to specify the -spawn
option to mpirun.
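For illustration, a minimal client/server sketch using these calls follows. The program is not from the release note, and the port name is passed to the client by hand (for example, copied from the server's output); the point is that both sides can now be started with a plain mpirun command, without -spawn.

    /* Sketch only: a server that accepts a connection and a client that
     * connects to it.  With HP-MPI 2.2.7 either side can be launched with
     * an ordinary mpirun command; the -spawn option is not required. */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        MPI_Comm inter;                       /* intercommunicator to the peer */
        char     port[MPI_MAX_PORT_NAME];

        MPI_Init(&argc, &argv);

        if (argc > 1 && strcmp(argv[1], "server") == 0) {
            MPI_Open_port(MPI_INFO_NULL, port);           /* obtain a port name   */
            printf("port name: %s\n", port);              /* hand it to the client */
            fflush(stdout);
            MPI_Comm_accept(port, MPI_INFO_NULL, 0,
                            MPI_COMM_WORLD, &inter);      /* wait for the client  */
            MPI_Close_port(port);
        } else if (argc > 2 && strcmp(argv[1], "client") == 0) {
            MPI_Comm_connect(argv[2], MPI_INFO_NULL, 0,
                             MPI_COMM_WORLD, &inter);     /* argv[2] = port name  */
        } else {
            MPI_Finalize();
            return 1;
        }

        MPI_Comm_disconnect(&inter);
        MPI_Finalize();
        return 0;
    }

Assuming the source is built as connect_example, a hypothetical launch would be mpirun -np 1 ./connect_example server and, once the port name is printed, mpirun -np 1 ./connect_example client "<port name>".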
1.2.7.2 The launch utility mpirun.all has been deprecated
The launch utility mpirun.all has been deprecated and will be removed in a future release.
It now displays the following warning message:
WARNING: mpirun.all has been deprecated and may not be available in
future releases. Set the MPI_NO_DEPRECATE_WARNINGS environment variable
to 1 to suppress this warning.
1.2.7.3 InfiniBand multiple rail support
HP-MPI provides multiple rail support on OpenFabrics through the MPI_IB_MULTIRAIL
environment variable. This environment variable is ignored by all other interconnects. In multi-rail
mode, a rank can use all of the cards on its node, but each connection is limited to the number of
cards on the peer node to which it is connecting.
For example, if rank A has three cards, rank B has two cards, and rank C has three cards, then
connection A--B uses two cards, connection B--C uses two cards, and connection A--C uses three
cards. Long messages are striped among all the cards on that connection to improve bandwidth.
By default, multi-card message striping is off. To turn it on, specify -e MPI_IB_MULTIRAIL=N
where N is the number of cards used by a rank:
If N <= 1, message striping is not used.
If N is greater than the maximum number of cards M on that node, all M cards are used.
If 1 < N <= M, message striping is used on at most N cards.
If you specify -e MPI_IB_MULTIRAIL without a value, the maximum number of available cards is used.
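These rules amount to clamping the requested value between 1 and the number of cards actually present. A small sketch of the effective card count follows; it is not HP-MPI source, and the function name and parameters are invented for illustration.

    /* Sketch only: how the MPI_IB_MULTIRAIL rules above reduce to an
     * effective card count for a rank. */
    #include <stdio.h>

    /* value_given is 0 when -e MPI_IB_MULTIRAIL is set with no value */
    static int effective_cards(int n_requested, int m_cards_on_node, int value_given)
    {
        if (!value_given)                 /* no value: use every card        */
            return m_cards_on_node;
        if (n_requested <= 1)             /* striping disabled               */
            return 1;
        if (n_requested > m_cards_on_node)
            return m_cards_on_node;       /* cannot exceed the cards present */
        return n_requested;               /* 1 < N <= M: up to N cards       */
    }

    int main(void)
    {
        /* Example: a node with 4 cards (M = 4) */
        printf("N=1 -> %d card(s)\n", effective_cards(1, 4, 1));      /* 1 */
        printf("N=3 -> %d card(s)\n", effective_cards(3, 4, 1));      /* 3 */
        printf("N=8 -> %d card(s)\n", effective_cards(8, 4, 1));      /* 4 */
        printf("no value -> %d card(s)\n", effective_cards(0, 4, 0)); /* 4 */
        return 0;
    }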
On a host, each rank cycles through all the cards, starting at a different card. For example, given
4 cards and 4 ranks per host:
Rank 0 uses cards 0, 1, 2, 3.
Rank 1 uses cards 1, 2, 3, 0.
Rank 2 uses cards 2, 3, 0, 1.
Rank 3 uses cards 3, 0, 1, 2.
The order is important in SRQ mode because only the first card is used for short messages. The
selection approach enables short RDMA messages to use all the cards in a balanced way. For
HP-MPI 2.2.5.1 and older, all cards must be on the same fabric.
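As an illustration of this rotation (again a sketch, not HP-MPI source), local rank r on a host with M cards takes card (r + i) mod M as its i-th card, so each rank's first card, the only one used for short messages in SRQ mode, is different:

    /* Sketch only: reproduce the card rotation described above.  On a host
     * with m_cards cards, local rank r uses card (r + i) % m_cards as its
     * i-th card, spreading the "first card" evenly across ranks. */
    #include <stdio.h>

    int main(void)
    {
        const int m_cards = 4;          /* cards on the host       */
        const int ranks_per_host = 4;   /* local ranks on the host */

        for (int r = 0; r < ranks_per_host; r++) {
            printf("Rank %d uses cards", r);
            for (int i = 0; i < m_cards; i++)
                printf(" %d%s", (r + i) % m_cards, i == m_cards - 1 ? "\n" : ",");
        }
        return 0;
    }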
1.2.7.4 OFED 1.3 support
Both HP-MPI 2.2.5.1 and HP-MPI 2.2.7 support OFED 1.3. OFED 1.3 has two new features that
are supported only by HP-MPI 2.2.7:
XRC support with ConnectX cards
uDAPL 2.0 spec implementation (v2) and uDAPL 1.2 stack (v1). HP-MPI 2.2.7 supports both
uDAPL 1.2 and uDAPL 2.0.