HP-MPI Version 2.2.5 for Linux Release Note

Known Problems and Workarounds
Calling MPI from Fortran 90 or C++ programs
HP-MPI complies with version 1.2 of the MPI standard, which defines bindings for
Fortran 77 and C but not for Fortran 90 or C++. Some features of Fortran 90 can interact
with MPI non-blocking semantics to produce unexpected results; consult the HP-MPI
User’s Guide for details. C++ applications should be able to use the existing C binding for
MPI with no problems.
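For example, the following minimal sketch (illustrative only) calls MPI from C++ through
the C binding. It assumes mpi.h is on the include path and that the program is compiled
and linked against HP-MPI in the usual way.

#include <mpi.h>
#include <iostream>

int main(int argc, char *argv[])
{
    /* MPI 1.2 defines no C++ binding, so call the C binding directly. */
    MPI_Init(&argc, &argv);

    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    std::cout << "Hello from rank " << rank << std::endl;

    MPI_Finalize();
    return 0;
}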
Locating your instrumentation
When you enable instrumentation for multi-host runs, HP-MPI writes the instrumentation
output file (prefix.instr) to the working directory on the host that is running rank 0,
whether you invoke mpirun on a host where at least one MPI process is running or on a
host remote from all of your MPI processes.
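For example, assuming instrumentation is enabled with the mpirun -i option and the
processes are described in an appfile named my_appfile (both names are illustrative; see
the HP-MPI User’s Guide for the exact syntax), a multi-host run such as
% mpirun -i myrun -f my_appfile
writes myrun.instr to the working directory on the host running rank 0.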
In order to use the -tv option to mpirun, the totalview binary must be in the user’s PATH,
or the TOTALVIEW environment variable must be set to the full path of the totalview
binary.
% export TOTALVIEW=/usr/toolworks/totalview/bin/totalview
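A debug session can then be started as follows (the executable name and process count are
placeholders):
% mpirun -tv -np 4 ./a.out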
Extended collectives with intercommunicators are not profiled by our lightweight
instrumentation mode.
High Availability (H/A) mode and the diagnostic library cannot be used at the same time.
The diagnostic library strict mode is not compatible with some MPI-2 features.
Some versions of the Quadrics software have a memory leak. The resulting error looks
similar to the following:
0 MAP_SDRAM(140008800): can't map SDRAM 2824000(2404000) -
4020000(3c00000) (25149440 bytes) : -1
ELAN_EXCEPTION @ 0: 5 (Memory exhausted)
newRxDesc: Elan memory exhausted: port 2b200
This error can occur in the following two cases:
1. The application calls MPI_Cancel repeatedly.
2. The application posts receives on MPI_ANY_SOURCE.
Try setting the environment variable LIBELAN_TPORT_BIGMSG to an appropriate message
size to resolve the resource issue. If this does not eliminate the error, contact Quadrics;
a fix was not available at the time of this release.
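For example, the threshold could be set before the run as follows (the 65536-byte value is
only illustrative; choose a size appropriate for your application’s messages):
% export LIBELAN_TPORT_BIGMSG=65536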