HP-MPI User's Guide (11th Edition)
Standard-flexibility in HP-MPI
HP-MPI implementation of standard flexibility
Appendix B
Reference in MPI standard: Vendors may write optimized collective routines matched to their architectures, or a complete library of collective communication routines can be written using MPI point-to-point routines and a few auxiliary functions. See MPI-1.2 Section 4.1.
HP-MPI’s implementation: Use HP-MPI’s collective routines instead of implementing your own with point-to-point routines. HP-MPI’s collective routines are optimized to use shared memory where possible for performance.
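As a sketch of the point above, the fragment below contrasts a broadcast hand-built from point-to-point calls with the single collective call that HP-MPI can optimize. The function names naive_bcast and preferred_bcast are invented for illustration; this is not code from the guide.

```c
#include <mpi.h>

/* What the standard permits you to build yourself from
 * point-to-point routines: rank 0 sends to every other rank. */
void naive_bcast(int *value, MPI_Comm comm)
{
    int rank, size, i;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    if (rank == 0) {
        for (i = 1; i < size; i++)      /* one send per receiver */
            MPI_Send(value, 1, MPI_INT, i, 0, comm);
    } else {
        MPI_Recv(value, 1, MPI_INT, 0, 0, comm, MPI_STATUS_IGNORE);
    }
}

/* The preferred form: same result, but the library is free to use
 * shared memory or a tree algorithm internally. */
void preferred_bcast(int *value, MPI_Comm comm)
{
    MPI_Bcast(value, 1, MPI_INT, 0, comm);
}
```

Both functions leave every rank holding rank 0's value; the collective form is shorter and lets the implementation choose the fastest path for the machine.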
Reference in MPI standard: Error handlers in MPI take as arguments the communicator in use and the error code to be returned by the MPI routine that raised the error. An error handler can also take “stdargs” arguments whose number and meaning are implementation dependent. See MPI-1.2 Section 7.2 and MPI-2.0 Section 4.12.6.
HP-MPI’s implementation: To ensure portability, HP-MPI’s implementation does not take “stdargs”. For example, in C the user routine should be a C function of type MPI_Handler_function, defined as:
void (MPI_Handler_function) (MPI_Comm *, int *);
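A minimal sketch of registering such a handler, using the MPI-1 registration calls this edition references (MPI_Errhandler_create and MPI_Errhandler_set). The handler name report_error is hypothetical; note it takes only the communicator and error code, with no trailing “...”.

```c
#include <mpi.h>
#include <stdio.h>

/* Hypothetical handler matching the no-stdargs signature above:
 * communicator pointer and error code pointer only. */
static void report_error(MPI_Comm *comm, int *errcode)
{
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(*errcode, msg, &len);
    fprintf(stderr, "MPI error: %s\n", msg);
}

int main(int argc, char **argv)
{
    MPI_Errhandler handler;
    MPI_Init(&argc, &argv);

    /* Wrap the C function in an MPI_Errhandler object, then attach
     * it to a communicator; subsequent errors on MPI_COMM_WORLD
     * are routed through report_error instead of aborting. */
    MPI_Errhandler_create(report_error, &handler);
    MPI_Errhandler_set(MPI_COMM_WORLD, handler);

    /* ... MPI calls whose errors now reach report_error ... */

    MPI_Errhandler_free(&handler);
    MPI_Finalize();
    return 0;
}
```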
Reference in MPI standard: MPI implementors may place a barrier inside MPI_FINALIZE. See MPI-2.0 Section 3.2.2.
HP-MPI’s implementation: HP-MPI’s MPI_FINALIZE behaves as a barrier function: the return from MPI_FINALIZE is delayed until all potential future cancellations are processed.
Reference in MPI standard: MPI defines minimal requirements for thread-compliant MPI implementations, and MPI can be implemented in environments where threads are not supported. See MPI-2.0 Section 8.7.
HP-MPI’s implementation: HP-MPI provides a thread-compliant library (libmtmpi). Use -lmtmpi on the link line to link with libmtmpi. Refer to “Thread-compliant library” on page 57 for more information.
Table B-1 HP-MPI implementation of standard-flexible issues (Continued)
Reference in MPI standard | HP-MPI’s implementation
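As a sketch of using the thread-compliant library described above: a program linked with -lmtmpi can request a thread level through MPI_Init_thread (defined in MPI-2.0 Section 8.7). The compiler driver name in the comment is an assumption; only the -lmtmpi flag comes from the text.

```c
/* Build sketch (driver name assumed):  mpicc app.c -lmtmpi  */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Ask for full multithreaded support; check what was granted,
     * since a non-thread-compliant library may return less. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        fprintf(stderr, "full thread support unavailable (level %d)\n",
                provided);

    /* ... multithreaded MPI work ... */

    MPI_Finalize();
    return 0;
}
```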