-hostfile hfile # 2 ranks on n1 and 3 ranks on n3.
-lsb_hosts launches jobs across the list of hosts in $LSB_HOSTS, the environment variable
established by the LSF bsub command.
-lsb_mcpu_hosts launches jobs across the list of hosts in $LSB_MCPU_HOSTS, the environment
variable established by the LSF bsub command.
The new launch options described above also add an implicit -e MPI_WORKDIR=$CWD to the
command line when no -e MPI_WORKDIR setting is already present on the command line.
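For example (the job size and executable name below are illustrative, not from this release
note), a job submitted interactively through LSF can let mpirun read the allocation from
$LSB_MCPU_HOSTS:
% bsub -I -n 8 mpirun -lsb_mcpu_hosts ./a.out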
New mpirun options The default setting among the following options varies depending on the
interconnect and rank count. See “New Environment Variables” on page 25 for more details on
the default selections.
• -srq specifies use of the shared receiving queue protocol when OpenFabrics, Myrinet GM,
Mellanox VAPI, or uDAPL V1.2 interfaces are used. This protocol uses less pre-pinned
memory for short message transfer.
• -rdma specifies use of envelope pairs for short message transfer. The amount of pre-pinned
memory increases with the job size.
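For example, a job can explicitly request the shared receiving queue protocol (the host file
name below is illustrative):
% mpirun -srq -hostfile hfile ./a.out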
MPI-2 supported ROMIO HP-MPI 2.2.5 includes a new version of ROMIO that
implements true MPI-2 functionality with regard to asynchronous writing and reading of
files. If existing applications use the ROMIO-specific asynchronous routines identifiable by
an MPIO prefix (e.g., MPIO_File_Iread, MPIO_File_Iwrite, or MPIO_Wait), users should
consider rewriting the application with the standard MPI function names. Those applications
would then need to be recompiled, and their customers would be required to upgrade to the
current version of HP-MPI.
NOTE ROMIO is only supported when using the default libmpi library. ROMIO cannot
be used with the multithreaded or diagnostic libraries.
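As an illustrative sketch of the rewrite described above (not taken from this release note;
the file name and buffer size are assumptions), an application that previously used
MPIO_File_Iread and MPIO_Wait can use the standard MPI-2 routine MPI_File_iread with an
ordinary MPI_Request:

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_File    fh;
    MPI_Request req;            /* was an MPIO_Request with the old ROMIO calls */
    MPI_Status  status;
    int         buf[1024];

    MPI_Init(&argc, &argv);
    MPI_File_open(MPI_COMM_WORLD, "datafile",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    /* Standard MPI-2 nonblocking read; older code used MPIO_File_Iread */
    MPI_File_iread(fh, buf, 1024, MPI_INT, &req);

    /* ... overlap computation with the read ... */

    MPI_Wait(&req, &status);    /* was MPIO_Wait */
    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}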
CPU bind support HP-MPI 2.2.5 supports CPU binding with a variety of binding strategies
(see below). The option -cpu_bind is supported in appfile, command line, and srun modes.
% mpirun -cpu_bind[_mt]=[v,][option][,v] -np 4 a.out
Where _mt implies thread-aware CPU binding; v, and ,v request verbose output on how threads
are bound to CPUs; and [option] is one of:
rank Schedule ranks on CPUs according to packed rank id.
map_cpu Schedule ranks on CPUs by cycling through the MAP variable.
mask_cpu Schedule ranks on CPU masks by cycling through the MAP variable.
ll Bind each rank to the CPU it is currently running on.
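For example, the following command (the rank count is illustrative) binds each of four ranks
to a CPU in packed rank order and prints the binding verbosely:
% mpirun -cpu_bind=v,rank -np 4 a.out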