This means any of those three names will be accepted as evidence that
VAPI is available. Each of the three strings is treated as a regular
expression that is grepped for in the output of /sbin/lsmod.
In many cases, when HP-MPI fails to find a system's high-speed
interconnect because library names, library locations, or module names
have changed, the problem can be fixed with simple edits to the
hpmpi.conf file. Contacting HP-MPI support for assistance is
encouraged.
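For example, the same detection can be reproduced by hand to see whether
the expected module names are actually loaded on a given system. The
pattern below is only a placeholder; substitute the strings listed in
the relevant hpmpi.conf entry:
  # Placeholder pattern -- use the module-name strings from hpmpi.conf
  /sbin/lsmod | grep -E 'mod_vapi'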
Protocol-specific options and information
This section briefly describes the available interconnects and illustrates
some of the more frequently used interconnect options.
The environment variables and command-line options mentioned below
are described in more detail in “mpirun options” on page 119 and “List of
runtime environment variables” on page 134.
TCP/IP
TCP/IP is supported on many types of cards.
Machines often have more than one IP address, and a user may wish to
specify which interface is used to get the best performance.
HP-MPI does not inherently know which IP address corresponds to the
fastest available interconnect card.
By default, IP addresses are selected based on the list returned by
gethostbyname(). The mpirun option -netaddr can be used to gain
more explicit control over which interface is used.
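For example (the addresses below are hypothetical; see “mpirun options”
on page 119 for the full -netaddr syntax), traffic can be directed onto
the interface whose address falls on a particular network:
  # Hypothetical addresses: use the interface on the 192.168.1.x network
  mpirun -netaddr 192.168.1.0 -np 4 ./a.out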
IBAL
IBAL is only supported on Windows. Lazy deregistration is not
supported with IBAL.
IBV
HP-MPI supports OpenFabrics Enterprise Distribution (OFED)
V1.0 and V1.1. The HP-MPI V2.2.5.1 release adds support for OFED
V1.2.
To use OFED on Linux, the amount of memory available for locking must
be specified. It is controlled by the /etc/security/limits.conf file for
Red Hat and the /etc/sysctl.conf file for SuSE:
* soft memlock 524288
* hard memlock 524288
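After updating the file and starting a new login session, the limit that
is actually in effect can be checked with ulimit; 524288 KB corresponds
to 512 MB of lockable memory:
  # Report the current locked-memory limit in KB (should show 524288)
  ulimit -l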