HP-MPI Version 2.2.5 for Linux Release Note
What’s in This Version
OpenFabrics support HP-MPI supports OpenFabrics through the Verbs API. HP-MPI selects the
fastest available interconnect by default, but the user can force OpenFabrics use with the mpirun
option -IBV, or merely recommend it with -ibv.
HP-MPI does not use the Connection Manager (CM) library.
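For example, a run that forces OpenFabrics across two hosts might look like the following (the
host names and the executable are placeholders, and the -hostlist launch option is only one of
several ways to start the ranks):
mpirun -IBV -np 8 -hostlist node01,node02 ./a.out
With the lower-case -ibv option instead, mpirun can fall back to another available interconnect
if OpenFabrics cannot be used.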
To use OpenFabrics on Linux, the amount of memory available for locking must be specified. It is
controlled by the /etc/security/limits.conf file, for example:
* soft memlock 524288
* hard memlock 524288
The example above sets the maximum locked-in-memory address space in KB. The
recommended value is half of the node's physical memory.
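Whether the new limit is in effect can be verified from a login shell on each node with:
ulimit -l
which reports the current maximum locked memory in KB.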
HP-MPI officially supports OpenFabrics V1.0 only. OpenFabrics is not supported on Itanium2
platforms.
InfiniPath support HP-MPI supports InfiniPath via the PSM library provided by QLogic.
PSM is a wrapper library over the lowest-level API that presents a user-friendly interface;
QLogic PSM and Myricom MX share the same interface style.
Force InfiniPath PSM use with the mpirun option -PSM, or merely recommend it with -psm.
The user can control where HP-MPI finds the InfiniPath interconnect library through the
configuration file $MPI_ROOT/etc/hpmpi.conf.
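As a sketch only (the exact variable name and library file name should be checked against the
hpmpi.conf shipped with this release), such an entry takes the form of a simple assignment:
MPI_ICLIB_PSM = libpsm_infinipath.so.1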
New mpirun options for intra-host performance tuning
-intra=shm Use shared memory for all intra-host data transfers. This is the default.
-intra=nic Use the interconnect for all intra-host data transfers. (Not recommended for
high-performance solutions.)
-intra=mix Use shared memory for small messages (below 256 KB, or the threshold set by
MPI_RDMA_INTRALEN) for better latency. The interconnect is used for larger messages for better
bandwidth.
The same functionality is available through the environment variable MPI_INTRA, which can
be set to shm, nic, or mix.
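For example, the following two command lines request the same behavior (the executable is a
placeholder):
mpirun -intra=nic -np 4 ./a.out
mpirun -e MPI_INTRA=nic -np 4 ./a.out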
-e MPI_RDMA_INTRALEN=262144 Specifies the message size (in bytes) at which the transition
from shared memory to the interconnect occurs when -intra=mix is used. For messages less than
or equal to the specified size, shared memory is used. For messages greater than that size, the
interconnect is used. TCP/IP and Elan do not have a mixed mode.
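For example, to raise the mixed-mode threshold to 1 MB (the executable is a placeholder):
mpirun -intra=mix -e MPI_RDMA_INTRALEN=1048576 -np 8 ./a.out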