HP-MPI Version 2.3.1 for Linux Release Note
Table of Contents
- HP-MPI V2.3.1 for Linux Release Note
- Table of Contents
- 1 Information About This Release
- 2 New or Changed Features in V2.3.1
- 3 New or Changed Features in V2.3
- 3.1 Options Supported Only on HP Hardware
- 3.2 System Check
- 3.3 Default Message Size Changed For -ndd
- 3.4 MPICH2 Compatibility
- 3.5 Support for Large Messages
- 3.6 Redundant License Servers
- 3.7 License Release/Regain on Suspend/Resume
- 3.8 Expanded Functionality for -ha
- 3.8.1 Support for High Availability on InfiniBand Verbs
- 3.8.2 Highly Available Infrastructure (-ha:infra)
- 3.8.3 Using MPI_Comm_connect and MPI_Comm_accept
- 3.8.4 Using MPI_Comm_disconnect
- 3.8.5 Instrumentation and High Availability Mode
- 3.8.6 Failure Recovery (-ha:recover)
- 3.8.7 Network High Availability (-ha:net)
- 3.8.8 Failure Detection (-ha:detect)
- 3.8.9 Clarification of the Functionality of Completion Routines in High Availability Mode
- 3.9 Enhanced InfiniBand Support for Dynamic Processes
- 3.10 Singleton Launching
- 3.11 Using the -stdio=files Option
- 3.12 Using the -stdio=none Option
- 3.13 Expanded Lightweight Instrumentation
- 3.14 The api option to MPI_INSTR
- 3.15 New mpirun option -xrc
- 4 Known Issues and Workarounds
- 4.1 Running on iWarp Hardware
- 4.2 Running with Chelsio uDAPL
- 4.3 Mapping Ranks to a CPU
- 4.4 OFED Firmware
- 4.5 Spawn on Remote Nodes
- 4.6 Default Interconnect for -ha Option
- 4.7 Linking Without Compiler Wrappers
- 4.8 Locating the Instrumentation Output File
- 4.9 Using the ScaLAPACK Library
- 4.10 Increasing Shared Memory Segment Size
- 4.11 Using MPI_FLUSH_FCACHE
- 4.12 Using MPI_REMSH
- 4.13 Increasing Pinned Memory
- 4.14 Disabling Fork Safety
- 4.15 Using Fork with OFED
- 4.16 Memory Pinning with OFED 1.2
- 4.17 Upgrading to OFED 1.2
- 4.18 Increasing the nofile Limit
- 4.19 Using appfiles on HP XC Quadrics
- 4.20 Using MPI_Bcast on Quadrics
- 4.21 MPI_Issend Call Limitation on Myrinet MX
- 4.22 Terminating Shells
- 4.23 Disabling Interval Timer Conflicts
- 4.24 libpthread Dependency
- 4.25 Fortran Calls Wrappers
- 4.26 Bindings for C++ and Fortran 90
- 4.27 Using HP Caliper
- 4.28 Using -tv
- 4.29 Extended Collectives with Lightweight Instrumentation
- 4.30 Using -ha with Diagnostic Library
- 4.31 Using MPICH with Diagnostic Library
- 4.32 Using -ha with MPICH
- 4.33 Using MPI-2 with Diagnostic Library
- 4.34 Quadrics Memory Leak
- 5 Installation Information
- 6 Licensing Information
- 7 Additional Product Information
4.5 Spawn on Remote Nodes
Codes that call either MPI_Comm_spawn() or MPI_Comm_spawn_multiple()
might not be able to locate the commands to spawn on remote nodes. To work around
this issue, either specify an absolute path to the commands to spawn, or set the
MPI_WORKDIR environment variable to the path of the command to spawn.
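For example, a minimal sketch of the first workaround in C, where /path/to/worker
stands in for an absolute path to the command to spawn:

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Comm intercomm;
        int      errcodes[4];

        MPI_Init(&argc, &argv);

        /* Spawn 4 copies of the worker command. The absolute path is a
           hypothetical example; using one lets remote nodes locate the
           command without relying on MPI_WORKDIR. */
        MPI_Comm_spawn("/path/to/worker", MPI_ARGV_NULL, 4,
                       MPI_INFO_NULL, 0, MPI_COMM_SELF,
                       &intercomm, errcodes);

        MPI_Finalize();
        return 0;
    }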
4.6 Default Interconnect for -ha Option
The -ha option in previous HP-MPI releases forced the use of TCP for communication.
Both IBV and TCP are possible network selections when using -ha. If the user specifies
no forced selection criteria (for example, -TCP, -IBV, or an equivalent MPI_IC_ORDER
setting), then IBV is selected where it is available. Otherwise, TCP is used.
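For example, to preserve the behavior of previous releases by forcing TCP together
with -ha, a command line like the following could be used (the rank count and
executable name are placeholders):

    % mpirun -ha -TCP -np 16 ./a.out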
4.7 Linking Without Compiler Wrappers
To support the -dd (deferred deregistration) option, HP-MPI must intercept calls to
glibc routines that allocate and free memory. The compiler wrapper scripts included
with HP-MPI attempt to link MPI applications to make this possible. If you choose not
to link your application with the provided compiler wrappers, you must either ensure
that libmpi.so precedes libc.so on the linker command line, or specify
"-e LD_PRELOAD=%LD_PRELOAD:libmpi.so" on the mpirun command line.
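For example, an application linked without the wrappers could be launched with the
preload setting shown above (the rank count and executable name are placeholders):

    % mpirun -e LD_PRELOAD=%LD_PRELOAD:libmpi.so -np 8 ./a.out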
4.8 Locating the Instrumentation Output File
When instrumentation for multihost runs is enabled, HP-MPI writes the instrumentation
output file prefix.instr to the working directory on the host that is running rank 0,
whether mpirun is invoked on a host where at least one MPI process is running or on
a host remote from all MPI processes. When using -ha, the output file is located on
the host that is running the lowest existing rank number at the time the
instrumentation data is gathered during MPI_Finalize().
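For example, assuming lightweight instrumentation is enabled with the mpirun -i
option and a prefix of myrun, a multihost launch such as the following writes
myrun.instr to the working directory on the host running rank 0 (the appfile name
is a placeholder):

    % mpirun -i myrun -f my_appfile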
4.9 Using the ScaLAPACK Library
Applications that use the ScaLAPACK library must use the HP-MPI MPICH compatibility
mode. When the application is built, mpicc.mpich, mpif77.mpich, or mpif90.mpich
must be used. At runtime, mpirun.mpich must be used to launch the application.
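For example, a build and launch sequence might look like the following; the source
file name, link libraries, and rank count are placeholders that depend on your
ScaLAPACK installation:

    % mpicc.mpich -o myapp myapp.c -lscalapack -lblas
    % mpirun.mpich -np 4 ./myapp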
4.10 Increasing Shared Memory Segment Size
HP-MPI uses shared memory for communications between processes on the same node
and might attempt to allocate a shared-memory segment that is larger than the operating
system allows. The most common issue you might experience is an error message like:
Cannot create shared memory segment of <size> bytes.
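On Linux, this limit is commonly imposed by the kernel.shmmax parameter. Assuming
that is the constraint on your system, it can be inspected and raised with sysctl
(the value shown is only an example; choose one appropriate for your nodes):

    % sysctl kernel.shmmax
    % sysctl -w kernel.shmmax=2147483648

To make the change persist across reboots, add the setting to /etc/sysctl.conf.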