New Environment Variables

The following section provides brief descriptions of the new environment variables included in this release.
Misc. HP-MPI Environment Variables
MPI_NETADDR
Can be used to access the functionality of the -netaddr option. See the description for -netaddr under “New mpirun options” on page 21.
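For example, to restrict HP-MPI's TCP/IP traffic to a particular subnet (the address and mask below are illustrative only; the value takes the same form as the argument to -netaddr):
% setenv MPI_NETADDR 192.168.1.0/24
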
MPI_IB_CARD_ORDER
Assigns ranks to InfiniBand cards in order.
% setenv MPI_IB_CARD_ORDER <card#>[:port#]
Where:
card# ranges from 0 to N-1
port# ranges from 0 to 1
Card:port can be a comma-separated list that drives the assignment of ranks to cards and ports within the cards.
Note that HP-MPI numbers the ports on a card from 0 to N-1, whereas utilities such as vstat display
ports numbered 1 to N.
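For example (the card and port selections below are illustrative and assume a node with at least two IB cards):
To use the second card for all ranks:
% setenv MPI_IB_CARD_ORDER 1
To use the second port of the second card:
% setenv MPI_IB_CARD_ORDER 1:1
To assign ranks to the first and second cards in the order listed:
% setenv MPI_IB_CARD_ORDER 0,1
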
Table 3    Scalability

Feature               Affected Interconnect/Protocol  Scalability Impact
--------------------  ------------------------------  -----------------------------------------
spawn                 All                             Forces use of pairwise socket connections
                                                      between all mpids (typically one mpid per
                                                      machine).

one-sided shared      All except VAPI and IBV         Only VAPI and IBV provide low-level calls
lock/unlock                                           to efficiently implement shared lock/unlock.
                                                      All other interconnects require mpids to
                                                      satisfy this feature.

one-sided exclusive   All except VAPI, IBV,           VAPI, IBV, and Elan provide low-level calls
lock/unlock           and Elan                        which allow HP-MPI to efficiently implement
                                                      exclusive lock/unlock. All other
                                                      interconnects require mpids to satisfy this
                                                      feature.

one-sided other       TCP/IP                          All interconnects other than TCP/IP allow
                                                      HP-MPI to efficiently implement the
                                                      remainder of the one-sided functionality.
                                                      Only when using TCP/IP are mpids required
                                                      to satisfy this feature.
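The shared and exclusive lock/unlock rows above refer to the standard MPI-2 one-sided synchronization calls. The sketch below is a generic, illustrative MPI example (not HP-MPI-specific code) showing which application calls fall on the slower path when the interconnect does not support them natively:

    /* Illustrative MPI-2 one-sided example: each rank other than 0 updates
       a counter exposed by rank 0 under an exclusive lock, then reads it
       back under a shared lock. */
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, value = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Every rank exposes one int in the window; rank 0 is the target. */
        MPI_Win_create(&value, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        if (rank != 0) {
            int mine = rank;

            /* Exclusive lock/unlock: implemented natively on VAPI, IBV, and Elan. */
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
            MPI_Put(&mine, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
            MPI_Win_unlock(0, win);

            /* Shared lock/unlock: implemented natively on VAPI and IBV only. */
            MPI_Win_lock(MPI_LOCK_SHARED, 0, 0, win);
            MPI_Get(&mine, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
            MPI_Win_unlock(0, win);
        }

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }

On interconnects without native support (TCP/IP, for example), these lock/unlock calls are serviced through the mpid daemons, which is the scalability cost summarized in Table 3.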