HP-MPI Version 2.3.1 for Linux Release Note

4.5 Spawn on Remote Nodes
Codes that call either MPI_Comm_spawn() or MPI_Comm_spawn_multiple()
might not be able to locate the commands to spawn on remote nodes. To work around
this issue, either specify an absolute path to the commands to spawn, or set the
MPI_WORKDIR environment variable to the path of the command to spawn.
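For example, a minimal spawn call might pass an absolute path to the worker
executable (a sketch; the path, worker name, and process count below are
illustrative, not part of HP-MPI):

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm intercomm;
        MPI_Init(&argc, &argv);
        /* An absolute path avoids relying on the remote nodes' search path */
        MPI_Comm_spawn("/home/user/bin/worker", MPI_ARGV_NULL, 4,
                       MPI_INFO_NULL, 0, MPI_COMM_SELF, &intercomm,
                       MPI_ERRCODES_IGNORE);
        MPI_Finalize();
        return 0;
    }

Alternatively, the same worker could be located by setting MPI_WORKDIR (for
example, MPI_WORKDIR=/home/user/bin) before launching.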
4.6 Default Interconnect for -ha Option
The -ha option in previous HP-MPI releases forced the use of TCP for communication.
Both IBV and TCP are now possible network selections when using -ha. If the user
specifies no forced selection criterion (for example, TCP, IBV, or an equivalent
MPI_IC_ORDER setting), then IBV is selected where it is available. Otherwise, TCP is used.
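For example, the previous TCP-only behavior can be restored by forcing the
selection explicitly (a sketch; the exact option spelling and the rank count are
illustrative and should be checked against the mpirun documentation):

    # Force TCP even where IBV is available
    mpirun -ha -TCP -np 8 ./a.out

    # Or express the same preference through the environment
    export MPI_IC_ORDER="TCP"
    mpirun -ha -np 8 ./a.out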
4.7 Linking Without Compiler Wrappers
To support the -dd (deferred deregistration) option, HP-MPI must intercept calls to
the glibc routines that allocate and free memory. The compiler wrapper scripts included
with HP-MPI attempt to link MPI applications so that these calls can be intercepted.
If you choose not to link your application with the provided compiler wrappers, you
must either ensure that libmpi.so precedes libc.so on the linker command line or
specify "-e LD_PRELOAD=%LD_PRELOAD:libmpi.so" on the mpirun command line.
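For example, when linking without the wrappers (a sketch; the compiler, object
file, and library directory are illustrative and assume a default /opt/hpmpi
installation on x86_64):

    # Place libmpi.so ahead of the implicitly linked libc.so
    gcc -o app app.o -L/opt/hpmpi/lib/linux_amd64 -lmpi

    # Alternatively, preload libmpi.so at run time
    mpirun -e LD_PRELOAD=%LD_PRELOAD:libmpi.so -np 4 ./app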
4.8 Locating the Instrumentation Output File
When instrumentation is enabled for a multihost run, HP-MPI writes the
instrumentation output file prefix.instr to the working directory on the host that
is running rank 0, regardless of whether mpirun is invoked on a host where at least
one MPI process is running or on a host remote from all MPI processes. When using
-ha, the output file is located on the host that is running the lowest existing rank
number at the time the instrumentation data is gathered during MPI_Finalize().
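As an illustration, instrumentation for a multihost run might be enabled with
mpirun's -i option (a sketch; the prefix, host names, and rank count are
illustrative):

    # Writes myrun.instr to the working directory on the host running rank 0
    mpirun -i myrun -np 8 -hostlist node1,node2 ./a.out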
4.9 Using the ScaLAPACK Library
Applications that use the ScaLAPACK library must use the HP-MPI MPICH compatibility
mode. The application must be built with mpicc.mpich, mpif77.mpich, or
mpif90.mpich. At run time, mpirun.mpich must be used to launch the application.
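For example (a sketch; the source file, library names, and rank count are
illustrative):

    # Build in MPICH compatibility mode
    mpif77.mpich -o myapp myapp.f -lscalapack -lblacs -lblas

    # Launch with the MPICH-compatible launcher
    mpirun.mpich -np 16 ./myapp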
4.10 Increasing Shared Memory Segment Size
HP-MPI uses shared memory for communication between processes on the same node
and might attempt to allocate a shared-memory segment that is larger than the operating
system allows. The most common symptom is an error message such as:
Cannot create shared memory segment of <size> bytes.
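One common way to raise the operating-system limit is through the kernel.shmmax
sysctl (a general Linux example, not an HP-MPI-specific setting; the value shown
is illustrative):

    # Check the current maximum shared memory segment size (bytes)
    sysctl kernel.shmmax

    # Raise the limit for the running system (requires root); add the setting
    # to /etc/sysctl.conf to make it persist across reboots
    sysctl -w kernel.shmmax=2147483648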