HP-MPI Version 2.2.5.1 for Linux Release Note

What’s in This Version
Technical application providers using HP-MPI protect their software investment with a
highly efficient, portable tool for parallel application development.
HP-MPI 2.2.5.1 for Linux includes the following features (which are described in detail in the
next section):
OFED V1.2 support
Enhanced CPU bind support
Description of Features
This section provides brief descriptions of the features included in this release. For
more information on HP-MPI, refer to the HP-MPI User's Guide available at
http://docs.hp.com.
OFED support HP-MPI supports the OpenFabrics Enterprise Distribution (OFED) through the
Verbs API. HP-MPI selects the fastest available interconnect by default, but the user can
force the use of OFED with the mpirun option -IBV, or merely recommend it with -ibv.
HP-MPI does not use the Connection Manager (CM) library.
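For example (a minimal sketch, assuming a pre-built MPI executable named a.out), the first
command below forces OFED and the second merely recommends it, allowing HP-MPI to fall back
to another available interconnect:
% mpirun -IBV -np 8 ./a.out
% mpirun -ibv -np 8 ./a.out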
In order to use OFED on Linux, the amount of memory that may be locked must be specified. It
is controlled by the /etc/security/limits.conf file. For example, on a machine with 4 GB of memory:
* soft memlock 2097152
* hard memlock 2097152
These entries set the maximum locked-in-memory address space in KB. The
recommendation is to set the value to half of the physical memory (here, 2 GB = 2097152 KB).
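After editing the file and starting a fresh login session, the setting can be verified from a
bash or other POSIX shell (a sketch; ulimit -l prints the current max locked memory in KB and
should match the configured value):
% ulimit -l
2097152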
HP-MPI V2.2.5.1 adds support for OFED V1.2.
Enhanced CPU bind support HP-MPI 2.2.5.1 supports CPU binding with a variety of
binding strategies (see below). The option -cpu_bind is supported in appfile, command line,
and srun modes.
% mpirun -cpu_bind[_mt]=[v,][option][,v] -np 4 a.out
Where _mt implies thread-aware CPU binding; v, and ,v request verbose information about the
binding of threads to CPUs; and [option] is one of:
default     Use cyclic binding if the system is NUMA, otherwise use rank. This is the default.
rank        Schedule ranks on CPUs according to packed rank ID.
map_cpu     Schedule ranks on CPUs, cycling through the MAP variable.
mask_cpu    Schedule ranks on CPU masks, cycling through the MAP variable.
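For example (a sketch using only the options described above, again assuming an executable
named a.out), the first command binds four ranks in packed rank order with verbose binding
output, and the second does the same with thread-aware binding:
% mpirun -cpu_bind=v,rank -np 4 a.out
% mpirun -cpu_bind_mt=rank,v -np 4 a.out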