
pp.x: Rank 0:1: MPI_Init: The IB ports choosen for IB connection setup do not have the same subnet_prefix.
Please provide a port GID that all nodes have IB path to it by MPI_IB_PORT_GID
pp.x: Rank 0:1: MPI_Init: You can get port GID using 'ibv_devinfo -v'
1.2.7.13 Control of first port selection
By default, HP-MPI distributes the processes onto the available InfiniBand ports in a round-robin
manner using the local rank ID of each process.
Users can override the default order using the environment variable MPI_IB_CARD_ORDER.
-e MPI_IB_CARD_ORDER=<card>:<port>,<card>:<port>, ...
NOTE: HP-MPI numbers the ports on a card from 0 to N-1, whereas utilities such as vstat
number the ports from 1 to N.
Each process selects the entry from the list whose position matches its local rank ID. If there
are more processes on a node than entries in the list, the entries are numbered from left to
right starting at 0 and each process selects an entry using the formula:
<local_rank_ID> mod <number_of_entries>
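For illustration, the following C sketch shows one way to apply this rule to an MPI_IB_CARD_ORDER-style value. It is not HP-MPI source code; the helper name select_card_port and the demo values are assumptions, and only the <local_rank_ID> mod <number_of_entries> rule comes from the description above.
/* Illustrative sketch only -- not HP-MPI source code.  Selects a
 * <card>[:<port>] entry from an MPI_IB_CARD_ORDER-style list using the
 * local rank ID, as described above. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void select_card_port(const char *order, int local_rank,
                             int *card, int *port)
{
    char buf[256];
    char *entry;
    const char *p;
    int nentries = 1, want, i;

    /* Count the comma-separated entries. */
    for (p = order; *p; p++)
        if (*p == ',')
            nentries++;

    /* <local_rank_ID> mod <number_of_entries> */
    want = local_rank % nentries;

    /* Walk to the selected entry and split it into card[:port];
       a missing :<port> is treated as port 0 for this demo. */
    strncpy(buf, order, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
    entry = strtok(buf, ",");
    for (i = 0; i < want; i++)
        entry = strtok(NULL, ",");
    *card = atoi(entry);
    *port = strchr(entry, ':') ? atoi(strchr(entry, ':') + 1) : 0;
}

int main(void)
{
    int lr, card, port;

    /* Demo: MPI_IB_CARD_ORDER=0,1,2 with 4 ranks on the node. */
    for (lr = 0; lr < 4; lr++) {
        select_card_port("0,1,2", lr, &card, &port);
        printf("local rank %d -> card %d, port %d\n", lr, card, port);
    }
    return 0;
}
With MPI_IB_CARD_ORDER=0,1,2 and four local ranks, the demo prints cards 0, 1, 2, 0 for local ranks 0 through 3, which matches the examples below.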
NOTE: MPI_IB_CARD_ORDER affects the selection of the first card and first port. Subsequent
card/port selection is based on the method described in the “InfiniBand port reordering” section
below.
To use the 2nd IB card:
-e MPI_IB_CARD_ORDER=1
To use the 2nd port of the 2nd card:
-e MPI_IB_CARD_ORDER=1:1
To use the 1st IB card:
-e MPI_IB_CARD_ORDER=0
For example:
% mpirun -hostlist "host0 4 host1 4" -e MPI_IB_CARD_ORDER=0,1,2
The above example creates ranks 0-3 on host0 and ranks 4-7 on host1. On host0 it assigns rank 0
to card 0, rank 1 to card 1, rank 2 to card 2, and rank 3 to card 0. On host1 it assigns rank 4 to
card 0, rank 5 to card 1, rank 6 to card 2, and rank 7 to card 0.
% mpirun -np 8 -hostlist "host0 host1" -e MPI_IB_CARD_ORDER=0,1,2
The above example creates ranks 0 through 7, placing them alternately on host0 and host1 (host0,
host1, host0, host1, and so on). On host0 it assigns rank 0 to card 0, rank 2 to card 1, rank 4 to
card 2, and rank 6 to card 0. On host1 it assigns rank 1 to card 0, rank 3 to card 1, rank 5 to
card 2, and rank 7 to card 0.
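As a cross-check of the two examples above, the following C sketch (again illustrative only; the block and cyclic placement arithmetic is inferred from the hostlist forms shown, not taken from HP-MPI code) recomputes each rank's host, local rank, and selected card:
/* Illustrative sketch only -- recompute the rank-to-card mapping for the
 * two mpirun examples above, assuming MPI_IB_CARD_ORDER=0,1,2 (three
 * entries) and the placements implied by the hostlists:
 *   "host0 4 host1 4"  -> block placement, 4 consecutive ranks per host
 *   "host0 host1"      -> cyclic placement, ranks alternate between hosts */
#include <stdio.h>

int main(void)
{
    const int nentries = 3;           /* entries in MPI_IB_CARD_ORDER=0,1,2 */
    const int cards[] = { 0, 1, 2 };
    int rank;

    puts("block placement (-hostlist \"host0 4 host1 4\"):");
    for (rank = 0; rank < 8; rank++) {
        int host = rank / 4;          /* 4 consecutive ranks per host */
        int local_rank = rank % 4;
        printf("  rank %d on host%d -> card %d\n",
               rank, host, cards[local_rank % nentries]);
    }

    puts("cyclic placement (-np 8 -hostlist \"host0 host1\"):");
    for (rank = 0; rank < 8; rank++) {
        int host = rank % 2;          /* ranks alternate between hosts */
        int local_rank = rank / 2;
        printf("  rank %d on host%d -> card %d\n",
               rank, host, cards[local_rank % nentries]);
    }
    return 0;
}
Both loops reproduce the rank-to-card assignments listed above.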
1.2.7.13.1 InfiniBand port reordering
HP-MPI v2.2.7 reorders the available InfiniBand ports on a node. This reordering is done to load
balance the communication traffic across the available InfiniBand cards and ports. HP-MPI creates
and tracks the mapping between ports and cards. When a process is started on a node, all of the
active InfiniBand ports are opened in a sequential order and stored for future reference. Each
process knows how many other processes in the same job are on the same node and knows its
local rank ID. Each process uses its local rank ID to reorder both the InfiniBand cards and ports:
first_card = local_rank % ncard;
first_port = (local_rank / ncard) % nport[first_card];
next_card = (first_card + i) % ncard;
next_port = (first_port + m + k) % nport[next_card];
Where: