on host mpixbl01 to cpu 4
MPI_CPU_AFFINITY set to MAP_CPU, setting affinity of rank 6 pid 15807
on host mpixbl01 to cpu 2
MPI_CPU_AFFINITY set to MAP_CPU, setting affinity of rank 7 pid 15808
on host mpixbl01 to cpu 0
Hello world! I'm 1 of 8 on mpixbl01
....
If the operating system orders the CPUs differently relative to the ldom/socket,
this mapping produces different results. In the above example, the CPU ID must be
cross-referenced with the physical id field in /proc/cpuinfo to verify which
socket the ranks landed on.
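For example, on most x86 Linux systems the logical-CPU-to-socket mapping can be
listed directly (assuming the kernel reports the physical id field, which is the
common case):
% grep -E "^(processor|physical id)" /proc/cpuinfo
Pairs of processor and physical id lines in the output show which socket each
logical CPU belongs to; the values vary from system to system.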
To see the default binding order and ldom/socket/CPU placement information,
use the default_cpu,v option with -cpu_bind:
% mpirun -cpu_bind=default_cpu,v -np 8 hello_world.exe
MPI_CPU_AFFINITY set to CYCLIC, setting affinity of rank 0 pid 16273
on host mpixbl01 to ldom 0 (0) (0)
MPI_CPU_AFFINITY set to CYCLIC, setting affinity of rank 1 pid 16274
on host mpixbl01 to ldom 1 (1) (1)
MPI_CPU_AFFINITY set to CYCLIC, setting affinity of rank 2 pid 16275
on host mpixbl01 to ldom 0 (0) (2)
MPI_CPU_AFFINITY set to CYCLIC, setting affinity of rank 3 pid 16276
on host mpixbl01 to ldom 1 (1) (3)
MPI_CPU_AFFINITY set to CYCLIC, setting affinity of rank 4 pid 16277
on host mpixbl01 to ldom 0 (0) (4)
MPI_CPU_AFFINITY set to CYCLIC, setting affinity of rank 5 pid 16278
on host mpixbl01 to ldom 1 (1) (5)
MPI_CPU_AFFINITY set to CYCLIC, setting affinity of rank 6 pid 16279
on host mpixbl01 to ldom 0 (0) (6)
MPI_CPU_AFFINITY set to CYCLIC, setting affinity of rank 7 pid 16280
on host mpixbl01 to ldom 1 (1) (7)
Hello world! I'm 0 of 8 on mpixbl01
...
OFED 1.3 documents minimum supported firmware revisions for a variety of
InfiniBand adapters. If unsupported firmware is used, HP-MPI might encounter
problems during abnormal application teardown.
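The firmware revision currently loaded on an adapter can be checked with the
standard OFED utilities, for example (the exact output format depends on the
OFED installation):
% ibv_devinfo | grep fw_ver
Compare the reported revision against the minimums listed in the OFED 1.3
documentation.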
Codes that call either MPI_Comm_spawn() or MPI_Comm_spawn_multiple()
might not be able to locate the commands to spawn on remote nodes. To work
around this issue, either specify an absolute path to the command to spawn, or
set the MPI_WORKDIR environment variable to the path of the command to spawn.
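For example (the directory and executable names below are illustrative only),
setting MPI_WORKDIR in the launching shell allows the spawn calls to resolve a
relative command name:
% export MPI_WORKDIR=/opt/myapp/bin
% mpirun -np 1 spawner.exe
Alternatively, pass a full path such as /opt/myapp/bin/worker.exe directly to
MPI_Comm_spawn().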
The -ha option in previous HP-MPI releases forced the use of TCP for
communication. Both IBV and TCP are now possible network selections when using
-ha. If no forced selection criterion (for example, TCP, IBV, or an equivalent
MPI_IC_ORDER setting) is specified by the user, IBV is selected where it is
available; otherwise, TCP is used.
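For example, to retain the previous behavior of running -ha jobs over TCP, the
selection can still be forced explicitly on the mpirun command line (shown here
with the hello_world example used above, assuming the TCP selection flag
referenced above):
% mpirun -ha -TCP -np 8 hello_world.exe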
To support the -dd (deferred deregistration) option, HP-MPI must intercept calls
to glibc routines that allocate and free memory. The compiler wrapper scripts