HP-MPI Version 2.3.1 for Linux Release Note
Table of Contents
- HP-MPI V2.3.1 for Linux Release Note
- Table of Contents
- 1 Information About This Release
- 2 New or Changed Features in V2.3.1
- 3 New or Changed Features in V2.3
- 3.1 Options Supported Only on HP Hardware
- 3.2 System Check
- 3.3 Default Message Size Changed For -ndd
- 3.4 MPICH2 Compatibility
- 3.5 Support for Large Messages
- 3.6 Redundant License Servers
- 3.7 License Release/Regain on Suspend/Resume
- 3.8 Expanded Functionality for -ha
- 3.8.1 Support for High Availability on InfiniBand Verbs
- 3.8.2 Highly Available Infrastructure (-ha:infra)
- 3.8.3 Using MPI_Comm_connect and MPI_Comm_accept
- 3.8.4 Using MPI_Comm_disconnect
- 3.8.5 Instrumentation and High Availability Mode
- 3.8.6 Failure Recovery (-ha:recover)
- 3.8.7 Network High Availability (-ha:net)
- 3.8.8 Failure Detection (-ha:detect)
- 3.8.9 Clarification of the Functionality of Completion Routines in High Availability Mode
- 3.9 Enhanced InfiniBand Support for Dynamic Processes
- 3.10 Singleton Launching
- 3.11 Using the -stdio=files Option
- 3.12 Using the -stdio=none Option
- 3.13 Expanded Lightweight Instrumentation
- 3.14 The api option to MPI_INSTR
- 3.15 New mpirun option -xrc
- 4 Known Issues and Workarounds
- 4.1 Running on iWarp Hardware
- 4.2 Running with Chelsio uDAPL
- 4.3 Mapping Ranks to a CPU
- 4.4 OFED Firmware
- 4.5 Spawn on Remote Nodes
- 4.6 Default Interconnect for -ha Option
- 4.7 Linking Without Compiler Wrappers
- 4.8 Locating the Instrumentation Output File
- 4.9 Using the ScaLAPACK Library
- 4.10 Increasing Shared Memory Segment Size
- 4.11 Using MPI_FLUSH_FCACHE
- 4.12 Using MPI_REMSH
- 4.13 Increasing Pinned Memory
- 4.14 Disabling Fork Safety
- 4.15 Using Fork with OFED
- 4.16 Memory Pinning with OFED 1.2
- 4.17 Upgrading to OFED 1.2
- 4.18 Increasing the nofile Limit
- 4.19 Using appfiles on HP XC Quadrics
- 4.20 Using MPI_Bcast on Quadrics
- 4.21 MPI_Issend Call Limitation on Myrinet MX
- 4.22 Terminating Shells
- 4.23 Disabling Interval Timer Conflicts
- 4.24 libpthread Dependency
- 4.25 Fortran Calls Wrappers
- 4.26 Bindings for C++ and Fortran 90
- 4.27 Using HP Caliper
- 4.28 Using -tv
- 4.29 Extended Collectives with Lightweight Instrumentation
- 4.30 Using -ha with Diagnostic Library
- 4.31 Using MPICH with Diagnostic Library
- 4.32 Using -ha with MPICH
- 4.33 Using MPI-2 with Diagnostic Library
- 4.34 Quadrics Memory Leak
- 5 Installation Information
- 6 Licensing Information
- 7 Additional Product Information

IMPORTANT: When waiting on a receive request that uses MPI_ANY_SOURCE on an
intracommunicator, the request is never considered complete as a result of rank or
interconnect failures, because the rank that posted the receive can itself still legally
match it. For intercommunicators, after all processes in the remote group have become
unavailable, the request is considered complete, and the MPI_ERROR field of the
MPI_Status structure indicates MPI_ERR_EXITED.
MPI_Waitall() waits until all requests are complete, even if errors occur with some
of them. If any request fails, MPI_Waitall() returns MPI_ERR_IN_STATUS; otherwise,
it returns MPI_SUCCESS. In the case of an error, the error code for each request is
returned in the MPI_ERROR field of its entry in the status array.
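The following sketch illustrates this pattern: it posts nonblocking receives, waits on
all of them, and checks the per-request error codes when MPI_Waitall() returns
MPI_ERR_IN_STATUS. The communicator, peer ranks, tag, and the fixed limit of 16
requests are illustrative assumptions, not part of the HP-MPI interface.

    #include <mpi.h>
    #include <stdio.h>

    /* Post nonblocking receives from up to 16 peer ranks and report any
     * that complete in error. Under -ha, a failed peer is reported as
     * MPI_ERR_EXITED in the MPI_ERROR field of its status entry. */
    void wait_on_peers(MPI_Comm comm, int nreq, const int *peers, int *bufs)
    {
        MPI_Request reqs[16];
        MPI_Status  stats[16];
        int i, rc;

        /* Have errors returned to the caller instead of aborting. */
        MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

        for (i = 0; i < nreq; i++)
            MPI_Irecv(&bufs[i], 1, MPI_INT, peers[i], 0, comm, &reqs[i]);

        rc = MPI_Waitall(nreq, reqs, stats);
        if (rc == MPI_ERR_IN_STATUS) {
            /* Some requests failed; inspect each status entry. */
            for (i = 0; i < nreq; i++)
                if (stats[i].MPI_ERROR != MPI_SUCCESS)
                    fprintf(stderr, "receive from rank %d failed (error %d)\n",
                            peers[i], stats[i].MPI_ERROR);
        }
    }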
3.9 Enhanced InfiniBand Support for Dynamic Processes
This release supports the use of InfiniBand between processes in different MPI worlds.
Processes that are not part of the same MPI world, but are introduced through calls to
MPI_Comm_connect(), MPI_Comm_accept(), MPI_Comm_spawn(), or
MPI_Comm_spawn_multiple() attempt to use InfiniBand for communication. Both
sides must have InfiniBand support enabled and must use the same InfiniBand parameter
settings; otherwise, TCP is used for the connection. Only the OFED IBV protocol is
supported for these connections. When a connection is established through one of these
MPI calls, a TCP connection is first set up between the root processes of the two sides.
TCP connections are then set up among all the processes. Finally, IBV InfiniBand
connections are established between all process pairs, and the TCP connections are closed.
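A minimal connect/accept sketch is shown below. It assumes the port name is passed
from the server to the client out of band (here it is simply printed), and the function
names are illustrative. Whether the resulting intercommunicator uses IBV or falls back
to TCP depends on the InfiniBand settings on both sides, as described above.

    #include <mpi.h>
    #include <stdio.h>

    /* Server side: open a port, make the port name available to the
     * client, and accept the connection on an intercommunicator. */
    void server_side(void)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm inter;

        MPI_Open_port(MPI_INFO_NULL, port);
        printf("port name: %s\n", port);   /* pass to the client out of band */
        MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
        /* ... exchange messages over 'inter' ... */
        MPI_Comm_disconnect(&inter);
        MPI_Close_port(port);
    }

    /* Client side: connect using the port name obtained from the server. */
    void client_side(char *port)
    {
        MPI_Comm inter;

        MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &inter);
        /* ... exchange messages over 'inter' ... */
        MPI_Comm_disconnect(&inter);
    }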
3.10 Singleton Launching
This release supports the creation of a single rank without the use of mpirun, called
singleton launching. It is only valid to launch an MPI_COMM_WORLD of size one using
this approach. The single rank created in this way is executed as if it were created with
mpirun -np 1 <executable>. HP-MPI environment variables can influence the
behavior of the rank. Interconnect selection can be controlled using the environment
variable MPI_IC_ORDER. Many command-line options that would normally be passed
to mpirun cannot be used with singletons. Examples include, but are not limited to,
-cpu_bind, -d, -prot, -ndd, -srq, and -T. Some options, such as -i, have
environment-variable equivalents (in this case, MPI_INSTR) and can still be used by
setting the appropriate environment variable before creating the process.
Creating a singleton using fork() and exec() from another MPI process has the
same limitations that OFED places on fork() and exec().
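The following minimal program can be started either directly as a singleton or under
mpirun; the executable name is illustrative. Environment variables such as
MPI_IC_ORDER can be exported in the shell before the program is run to influence its
behavior.

    #include <mpi.h>
    #include <stdio.h>

    /* Can be launched as ./singleton (singleton launching) or as
     * mpirun -np 1 ./singleton. When launched without mpirun,
     * MPI_COMM_WORLD always contains exactly one rank. */
    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }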
3.11 Using the -stdio=files Option
This option specifies that the standard input, output, and error of each rank are to be
taken from the files specified by the environment variables MPI_STDIO_INFILE,