• Library names. Some of the libraries have been merged. Compilation wrappers are
provided for convenience and can also be used as templates; a minimal build example
follows this item.
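For illustration, here is the sort of minimal MPI program the wrappers are intended to
build. The wrapper name shown in the comment (mpicc) and its invocation are assumptions
about a typical installation, not taken from these notes.

    /* Minimal MPI program. On HP-UX it would typically be built with one of the
     * supplied compilation wrappers, e.g. "mpicc -o hello hello.c" (wrapper name
     * assumed; check the wrappers shipped with your installation). */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }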
• Multi-Thread mode. By default, the non-thread-compliant library (libmpi) is used
when running MPI jobs. Linking to the thread-compliant library (libmtmpi) is now
required only for applications in which multiple threads make MPI calls
simultaneously, as in the sketch after this item. In previous releases, linking to
the thread-compliant library was required for multi-threaded applications even if
only one thread was making an MPI call at a time.
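A sketch of the usage pattern that still requires the thread-compliant library: two
threads of one process making MPI calls at the same time. The MPI_Init_thread call used
here is part of MPI-2; treat the example as illustrative rather than as a statement of
exactly which thread levels HP MPI 1.7 provides.

    /* Two threads make MPI calls concurrently, so this program must be linked
     * against the thread-compliant library (libmtmpi). A multi-threaded program
     * in which only one thread calls MPI at a time can now use the default libmpi. */
    #include <pthread.h>
    #include <stdio.h>
    #include <mpi.h>

    static void *worker(void *arg)
    {
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* MPI call from a second thread */
        printf("worker thread running on rank %d\n", rank);
        return NULL;
    }

    int main(int argc, char *argv[])
    {
        int provided;
        pthread_t t;

        /* Request full multi-threaded support from the library. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        pthread_create(&t, NULL, worker, NULL);
        MPI_Barrier(MPI_COMM_WORLD);            /* MPI call from the main thread */
        pthread_join(&t, NULL);

        MPI_Finalize();
        return 0;
    }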
• Additional MPI-2 support. HP MPI 1.7 expands MPI-2 support of one-sided
communications to clusters. Refer to “Appendix C” in the HP MPI User’s Guide, 6th
edition, for a full list of MPI-2 support.
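As a sketch of the one-sided model, the following uses the standard MPI-2 calls
MPI_Win_create, MPI_Put, and MPI_Win_fence to let rank 0 write directly into memory
exposed by rank 1; refer to the User's Guide appendix for exactly which MPI-2 routines
HP MPI 1.7 implements.

    /* One-sided communication sketch: run with at least two ranks. Rank 0
     * deposits a value in rank 1's window; rank 1 never posts a receive. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, buf = 0;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Every rank exposes one int as a window. */
        MPI_Win_create(&buf, sizeof(int), sizeof(int), MPI_INFO_NULL,
                       MPI_COMM_WORLD, &win);

        MPI_Win_fence(0, win);
        if (rank == 0) {
            int value = 42;
            MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);

        if (rank == 1)
            printf("rank 1 received %d via MPI_Put\n", buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }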
• New options for handling standard IO. HP MPI 1.7 supports several new options for
handling standard IO streams.
All standard input is routed through the mpirun process. Standard input to mpirun
is ignored (the default behavior), replicated to all of the MPI processes, or
directed to a single process. Input intended for one or all of the processes in an
MPI application should therefore be directed to the standard input of mpirun.
Because mpirun reads stdin on behalf of the processes, an MPI application run in
the background would be suspended by most shells if stdin were being forwarded.
For this reason, the default mode for stdin is off; running applications in the
background will not work with stdin turned on. A short sketch of the
single-process pattern follows this item.
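The following sketch assumes stdin has been directed to a single process (rank 0)
through mpirun; the mpirun options that select among the ignore, replicate, and
single-process modes are not listed here, so that choice is an assumption to check
against the HP MPI documentation. The program itself simply reads its standard input
and shares the data explicitly.

    /* Rank 0 reads a line that mpirun forwarded from its own stdin, then
     * broadcasts it so every rank sees the same data. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        char line[256] = "";
        int rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0 && fgets(line, sizeof(line), stdin) == NULL)
            line[0] = '\0';                      /* no input was available */

        MPI_Bcast(line, (int)sizeof(line), MPI_CHAR, 0, MPI_COMM_WORLD);
        printf("rank %d got: %s", rank, line);

        MPI_Finalize();
        return 0;
    }

Because mpirun reads the input, any pipe or redirection goes to mpirun itself, not to
the individual MPI processes.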
• Backtrace functionality. HP MPI 1.7 handles several common termination signals
differently (on PA-RISC systems) than earlier versions of HP MPI by printing a stack
trace prior to termination. The backtrace is helpful in determining where the signal
was generated and the call stack at the time of the error.
• IMPI functionality. The Interoperable MPI protocol (IMPI) extends the power of MPI
by allowing applications to run on heterogeneous clusters of machines with various
architectures and operating systems, while allowing the program to use a different
implementation of MPI on each machine.
• Fortran profiling interface. To improve Fortran performance, Fortran calls are no
longer implemented as wrappers around the C calls. Consequently, profiling routines
built for C calls no longer cause the corresponding Fortran calls to be wrapped
automatically. To profile Fortran routines, separate wrappers must be written for
the Fortran calls; the sketch after this item shows the C side of the mechanism.
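A minimal sketch of the standard PMPI profiling mechanism in C. Under HP MPI 1.7 this
wrapper intercepts only the C binding of MPI_Send; because Fortran calls are no longer
layered on the C calls, a Fortran MPI_SEND would pass through unprofiled until an
analogous wrapper is written for the Fortran entry point.

    /* Profiling wrapper for the C binding of MPI_Send. Link it ahead of the MPI
     * library; the real implementation remains reachable under its PMPI_ name. */
    #include <stdio.h>
    #include <mpi.h>

    int MPI_Send(void *buf, int count, MPI_Datatype datatype,
                 int dest, int tag, MPI_Comm comm)
    {
        int rank;

        PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
        fprintf(stderr, "rank %d: MPI_Send of %d elements to rank %d\n",
                rank, count, dest);

        /* Forward to the underlying implementation. */
        return PMPI_Send(buf, count, datatype, dest, tag, comm);
    }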
• Support for collecting profiling information for applications linked with the
thread-compliant library in addition to those linked with the standard MPI library.
Counter instrumentation (MPI_INSTR) is supported for the thread-compliant library
regardless of thread level. Trace file generation (XMPI) is supported for all thread
levels except MPI_THREAD_MULTIPLE.
• A new error checking flag (-ck) in the mpirun utility. The -ck flag allows you to
check appfile set-up, host machine and program availability, and file permissions
without creating MPI processes.
• The mpirun utility no longer makes assumptions about how long it will take before a
process calls MPI_Init. Timeout errors before MPI_Init that may have been seen in