If this happens, add the setting -e MPI_UDAPL_READ=0 to the mpirun command line. This causes HP-MPI to use uDAPL only for regular (non-one-sided) data transfers, and to fall back to a slower TCP-based communication method for one-sided operations.
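For example, assuming an 8-rank job and an illustrative executable name, the workaround can be applied as follows:
% mpirun -e MPI_UDAPL_READ=0 -np 8 ./a.out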
4.3 Mapping Ranks to a CPU
When mapping ranks to CPUs, the ordering of the CPUs relative to the locality domain (ldom)/socket can vary depending on the architecture and operating system. Because this ordering is not consistent, a MAP_CPU order chosen for one system may not produce the same placement on a different hardware platform or operating system.
Use the appropriate block (block, block_cpu, fill) or cyclic (cyclic,
cyclic_cpu, rr) binding order to correctly bind the ranks to the same ldom/socket
across all architectures and operating systems.
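For example, a cyclic binding could be requested as shown below (the executable name is illustrative; the CPUs actually selected depend on the system's numbering):
% mpirun -cpu_bind=cyclic,v -np 8 ./a.out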
If MAP_CPU is used, cpu_bind binds each rank to the CPU number as it is reported in the system information, such as the /proc/cpuinfo file. Use the ,v (verbose) option to verify that the selected CPU ordering has the desired effect.
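As an illustrative check that is not specific to HP-MPI, the processor and physical id fields in /proc/cpuinfo show which socket each CPU number belongs to:
% grep -E 'processor|physical id' /proc/cpuinfo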
For example, to run on a two-socket, quad-core machine running RHEL 5 Linux:
% mpirun -cpu_bind=block_cpu,v -np 8 hello_world.exe
MPI_CPU_AFFINITY set to BLOCK, setting affinity of rank 0 pid 15374
on host mpixbl01 to ldom 0 (0) (0)
MPI_CPU_AFFINITY set to BLOCK, setting affinity of rank 1 pid 15375
on host mpixbl01 to ldom 0 (0) (2)
MPI_CPU_AFFINITY set to BLOCK, setting affinity of rank 2 pid 15376
on host mpixbl01 to ldom 0 (0) (4)
MPI_CPU_AFFINITY set to BLOCK, setting affinity of rank 3 pid 15377
on host mpixbl01 to ldom 0 (0) (6)
MPI_CPU_AFFINITY set to BLOCK, setting affinity of rank 4 pid 15378
on host mpixbl01 to ldom 1 (1) (1)
MPI_CPU_AFFINITY set to BLOCK, setting affinity of rank 5 pid 15379
on host mpixbl01 to ldom 1 (1) (3)
MPI_CPU_AFFINITY set to BLOCK, setting affinity of rank 6 pid 15380
on host mpixbl01 to ldom 1 (1) (5)
MPI_CPU_AFFINITY set to BLOCK, setting affinity of rank 7 pid 15381
on host mpixbl01 to ldom 1 (1) (7)
Hello world! I'm 5 of 8 on mpixbl01
...
The preceding example shows how the ranks are ordered relative to the socket (the first number in parentheses) and the CPU ID (the second number in parentheses). The rank placement can instead be ordered by CPU ID using the MAP_CPU option, as follows:
% mpirun -cpu_bind=map_cpu=7,5,3,1,6,4,2,0,v -np 8 hello_world.exe
MPI_CPU_AFFINITY set to MAP_CPU, setting affinity of rank 0 pid 15801
on host mpixbl01 to cpu 7
MPI_CPU_AFFINITY set to MAP_CPU, setting affinity of rank 1 pid 15802
on host mpixbl01 to cpu 5
...