HP-MPI Version 2.2.5 for HP-UX Release Note

Miscellaneous HP-MPI Environment Variables
MPI_NETADDR Can be used to access the functionality of the -netaddr option. See the
description for -netaddr under “New mpirun option -netaddr” on page 9.
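For illustration, a sketch assuming that MPI_NETADDR accepts the same argument string as the -netaddr option itself (the subnet value and the executable name below are placeholders; see the -netaddr description on page 9 for the actual argument forms):
% setenv MPI_NETADDR 192.168.1.0/24
% mpirun -np 4 ./a.out
is intended to behave like:
% mpirun -np 4 -netaddr 192.168.1.0/24 ./a.out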
MPI_IB_CARD_ORDER Defines the order in which the ranks on a node are assigned to InfiniBand (IB) cards and, optionally, to the ports on those cards.
% setenv MPI_IB_CARD_ORDER <card#>[:port#]
Where:
card# ranges from 0 to N-1
port# ranges from 0 to 1
card#:port# can be given as a comma-separated list, which drives the assignment of ranks to cards and to the ports within those cards.
Note that HP-MPI numbers the ports on a card from 0 to N-1, whereas utilities such as vstat display ports numbered 1 to N (for example, the port that vstat reports as port 1 is port 0 to HP-MPI).
Examples:
To use the 2nd IB card:
% setenv MPI_IB_CARD_ORDER 1
To use the 2nd port of the 2nd card:
% setenv MPI_IB_CARD_ORDER 1:1
To use the 1st IB card:
% setenv MPI_IB_CARD_ORDER 0
To assign ranks to multiple cards:
% setenv MPI_IB_CARD_ORDER 0,1,2
assigns the local ranks on each node to cards 0, 1, and 2 in order, cycling back to card 0, as the two mpirun examples below show.
% mpirun -hostlist "host0 4 host1 4"
creates ranks 0 through 3 on host0 and ranks 4 through 7 on host1. On host0, ranks 0, 1, 2, and 3 are assigned to cards 0, 1, 2, and 0, respectively; on host1, ranks 4, 5, 6, and 7 are assigned to cards 0, 1, 2, and 0, respectively.
% mpirun -np 8 -hostlist "host0 host1"
creates ranks 0 through 7, placed alternately on host0 and host1 (host0, host1, host0, host1, and so on). On host0, ranks 0, 2, 4, and 6 are assigned to cards 0, 1, 2, and 0, respectively; on host1, ranks 1, 3, 5, and 7 are assigned to cards 0, 1, 2, and 0, respectively.
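Putting the pieces together, a complete launch that combines the multi-card assignment with the block placement above might look like the following sketch (./a.out stands for your MPI executable):
% setenv MPI_IB_CARD_ORDER 0,1,2
% mpirun -hostlist "host0 4 host1 4" ./a.out
The comma-separated list can also name ports explicitly. Under the card#:port# syntax above, a setting such as the following (an illustrative assumption, not one of the documented examples) would cycle the local ranks on each node across both ports of two dual-port cards:
% setenv MPI_IB_CARD_ORDER 0:0,0:1,1:0,1:1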