HP-MPI Version 2.2.5 for Linux Release Note

What’s in This Version
If prun or srun is used to launch the application, mpirun sends the signal to that
launcher and relies on the launcher's signal propagation capabilities to deliver the
signal to the ranks. When using prun, SIGTTIN is also intercepted by mpirun, but is not
propagated.
When using an appfile, HP-MPI propagates these signals to remote HP-MPI daemons (mpid)
and local ranks. Each daemon propagates the signal to the ranks it created. An exception is
the treatment of SIGTSTP. When a daemon receives a SIGTSTP signal, it propagates
SIGSTOP to the ranks it created and then raises SIGSTOP on itself. This allows all processes
related to an HP-MPI execution to be suspended and resumed using SIGTSTP and SIGCONT.
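As a sketch (not part of this release note), the following C program shows how an external
process could use this behavior to suspend and resume an appfile-mode HP-MPI job by signaling
mpirun; the mpirun PID passed on the command line and the 30-second pause are assumptions made
for illustration.

    /* Suspend and resume an HP-MPI job by signaling the mpirun process. */
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <mpirun-pid>\n", argv[0]);
            return 1;
        }
        pid_t mpirun_pid = (pid_t)atoi(argv[1]);

        /* SIGTSTP is propagated to each mpid, which sends SIGSTOP to its
         * ranks and then stops itself, suspending the whole job. */
        if (kill(mpirun_pid, SIGTSTP) != 0) { perror("kill(SIGTSTP)"); return 1; }

        sleep(30);  /* arbitrary suspension period for the example */

        /* SIGCONT is propagated so all related processes resume. */
        if (kill(mpirun_pid, SIGCONT) != 0) { perror("kill(SIGCONT)"); return 1; }
        return 0;
    }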
The HP-MPI library also changes the default signal handling properties of the application in a
few specific cases. When using the -ha option to mpirun, SIGPIPE is ignored. When using
MPI_FLAGS=U, an MPI signal handler for printing outstanding message status is established
for SIGUSR1. When using MPI_FLAGS=sa, an MPI signal handler used for message
propagation is established for SIGALRM. When using MPI_FLAGS=sp, an MPI signal handler
used for message propagation is established for SIGPROF.
In general, HP-MPI relies on applications terminating when they are sent SIGTERM.
Applications that intercept SIGTERM may not terminate properly.
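Where an application must catch SIGTERM for its own cleanup, one conventional pattern (a sketch,
not taken from HP-MPI documentation; the cleanup hook is hypothetical) is to restore the default
disposition and re-raise the signal so the process still terminates as HP-MPI expects:

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical application cleanup; keep it async-signal-safe. */
    static void cleanup(void) { /* e.g., remove temporary files */ }

    static void term_handler(int sig)
    {
        cleanup();
        /* Restore default handling and re-raise so the process exits,
         * which is the behavior HP-MPI relies on. */
        signal(sig, SIG_DFL);
        raise(sig);
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = term_handler;
        sigaction(SIGTERM, &sa, NULL);
        /* ... MPI_Init, application work, MPI_Finalize ... */
        pause();  /* placeholder so there is something to interrupt */
        return 0;
    }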
Fast one-sided lock/unlock under VAPI and IBV
When using the VAPI or IBV protocol, HP-MPI 2.2.5 is able to use low-level hardware atomic
operations to provide a high-performance, scalable one-sided lock/unlock implementation. Note
that one-sided lock/unlock is supported on all interconnects, but the performance will vary
depending on what kind of hardware support is available. See Table 3 on page 25 for more
information.
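As an illustration of the call sequence this feature accelerates, the following sketch uses
standard MPI-2 passive-target synchronization (MPI_Win_lock/MPI_Win_unlock); the window layout
and the choice of rank 0 as the target are assumptions made for the example. It could be built
with mpicc and launched with mpirun in the usual way.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, buf = 0;      /* buf is the window memory on every rank */
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        MPI_Win_create(&buf, sizeof(int), sizeof(int),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        if (rank > 0) {
            /* Passive-target one-sided access: lock, put, unlock.
             * Under VAPI/IBV the lock/unlock can map to hardware atomics. */
            MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 0, 0, win);
            MPI_Put(&rank, 1, MPI_INT, 0, 0, 1, MPI_INT, win);
            MPI_Win_unlock(0, win);
        }

        MPI_Barrier(MPI_COMM_WORLD);
        if (rank == 0)
            printf("window value on rank 0 after puts: %d\n", buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }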
Scalability
HP-MPI 2.2.5 has been tested on InfiniBand clusters with as many as 2048 ranks using the VAPI
protocol. Most HP-MPI features function in a scalable manner. However, a few are still subject
to significant resource growth as the job size grows.