HP-MPI Version 2.2.7 for Linux Release Note

The HP-MPI Version 2.2.7 library for Linux depends on libpthread. The mpicc, mpif90,
and other compiler wrapper scripts add the necessary -lpthread automatically, but
manually linked applications must add -lpthread explicitly.
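For example, a manually linked application might be built along these lines (the object
file name and library directory are illustrative; adjust the path for your installation
and architecture):
$ gcc -o myapp myapp.o -L/opt/hpmpi/lib/linux_ia32 -lmpi -lpthread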
Profiling wrappers written for the C bindings do not cause the corresponding Fortran
calls to be wrapped automatically. To profile Fortran routines, write separate wrappers
for the Fortran entry points, as sketched below.
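For example, a minimal profiling wrapper for the Fortran MPI_SEND entry point might look
like the following C sketch. The lowercase, underscore-suffixed symbol names (mpi_send_,
pmpi_send_) assume a common Fortran name-mangling convention and may differ with your
compiler:

/* The underlying PMPI Fortran entry point (name mangling assumed). */
void pmpi_send_(void *buf, int *count, int *datatype, int *dest,
                int *tag, int *comm, int *ierr);

/* Profiling wrapper intercepting the Fortran MPI_SEND call. */
void mpi_send_(void *buf, int *count, int *datatype, int *dest,
               int *tag, int *comm, int *ierr)
{
    /* Record or report the call here, then forward to the real routine. */
    pmpi_send_(buf, count, datatype, dest, tag, comm, ierr);
}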
HP-MPI complies with the MPI-1.2 standard, which defines bindings for Fortran 77 and C,
but not Fortran 90 or C++. HP-MPI also complies with the C++ binding definitions detailed
in the MPI-2 standard. However, the C++ bindings provided are not thread safe and should
not be used with the HP-MPI threaded libraries (for example, libmtmpi). HP-MPI does not
provide bindings for Fortran 90. Some features of Fortran 90 might interact with MPI
non-blocking semantics to produce unexpected results. Consult the HP-MPI User’s Guide
for details.
When using the HP Caliper profiling tool with HP-MPI applications, it might be necessary
to set the following environment variable to avoid an application abort:
% setenv HPMPI_NOPROPAGATE_SUSP 1
or
$ export HPMPI_NOPROPAGATE_SUSP=1
To use the -tv option to mpirun, the TotalView binary must be in the user's PATH, or the
TOTALVIEW environment variable must be set to the full path of the TotalView binary.
$ export TOTALVIEW=/usr/toolworks/totalview/bin/totalview
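A typical invocation might then look like the following (the rank count and program name
are illustrative):
$ mpirun -tv -np 4 ./a.out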
Extended collectives with intercommunicators are not profiled by the HP-MPI lightweight
instrumentation mode.
High Availability (-ha) mode cannot be used together with the diagnostic library.
MPICH mode cannot be used together with the diagnostic library.
The diagnostic library's strict mode is not compatible with some MPI-2 features.
Some versions of the Quadrics software have a memory leak. The resulting error looks like:
0 MAP_SDRAM(140008800): can't map SDRAM 2824000(2404000) -
4020000(3c00000) (25149440 bytes) : -1
ELAN_EXCEPTION @ 0: 5 (Memory exhausted)
newRxDesc: Elan memory exhausted: port 2b200
This error can occur in the following two cases:
— If the application calls MPI_Cancel repeatedly.
— If the application receives on MPI_ANY_SOURCE.
Set the environment variable LIBELAN_TPORT_BIGMSG to an appropriate message size to
resolve the resource issue. If this setting does not eliminate the error, contact Quadrics for
the fix.
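For example (the 64 KB value shown is illustrative; choose a threshold appropriate to
your application's message sizes):
% setenv LIBELAN_TPORT_BIGMSG 65536
or
$ export LIBELAN_TPORT_BIGMSG=65536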