HP-MPI Version 2.3.1 for Linux Release Note
Table of Contents
- HP-MPI V2.3.1 for Linux Release Note
- Table of Contents
- 1 Information About This Release
- 2 New or Changed Features in V2.3.1
- 3 New or Changed Features in V2.3
- 3.1 Options Supported Only on HP Hardware
- 3.2 System Check
- 3.3 Default Message Size Changed For -ndd
- 3.4 MPICH2 Compatibility
- 3.5 Support for Large Messages
- 3.6 Redundant License Servers
- 3.7 License Release/Regain on Suspend/Resume
- 3.8 Expanded Functionality for -ha
- 3.8.1 Support for High Availability on InfiniBand Verbs
- 3.8.2 Highly Available Infrastructure (-ha:infra)
- 3.8.3 Using MPI_Comm_connect and MPI_Comm_accept
- 3.8.4 Using MPI_Comm_disconnect
- 3.8.5 Instrumentation and High Availability Mode
- 3.8.6 Failure Recovery (-ha:recover)
- 3.8.7 Network High Availability (-ha:net)
- 3.8.8 Failure Detection (-ha:detect)
- 3.8.9 Clarification of the Functionality of Completion Routines in High Availability Mode
- 3.9 Enhanced InfiniBand Support for Dynamic Processes
- 3.10 Singleton Launching
- 3.11 Using the -stdio=files Option
- 3.12 Using the -stdio=none Option
- 3.13 Expanded Lightweight Instrumentation
- 3.14 The api Option to MPI_INSTR
- 3.15 New mpirun option -xrc
- 4 Known Issues and Workarounds
- 4.1 Running on iWarp Hardware
- 4.2 Running with Chelsio uDAPL
- 4.3 Mapping Ranks to a CPU
- 4.4 OFED Firmware
- 4.5 Spawn on Remote Nodes
- 4.6 Default Interconnect for -ha Option
- 4.7 Linking Without Compiler Wrappers
- 4.8 Locating the Instrumentation Output File
- 4.9 Using the ScaLAPACK Library
- 4.10 Increasing Shared Memory Segment Size
- 4.11 Using MPI_FLUSH_FCACHE
- 4.12 Using MPI_REMSH
- 4.13 Increasing Pinned Memory
- 4.14 Disabling Fork Safety
- 4.15 Using Fork with OFED
- 4.16 Memory Pinning with OFED 1.2
- 4.17 Upgrading to OFED 1.2
- 4.18 Increasing the nofile Limit
- 4.19 Using appfiles on HP XC Quadrics
- 4.20 Using MPI_Bcast on Quadrics
- 4.21 MPI_Issend Call Limitation on Myrinet MX
- 4.22 Terminating Shells
- 4.23 Disabling Interval Timer Conflicts
- 4.24 libpthread Dependency
- 4.25 Fortran Calls Wrappers
- 4.26 Bindings for C++ and Fortran 90
- 4.27 Using HP Caliper
- 4.28 Using -tv
- 4.29 Extended Collectives with Lightweight Instrumentation
- 4.30 Using -ha with Diagnostic Library
- 4.31 Using MPICH with Diagnostic Library
- 4.32 Using -ha with MPICH
- 4.33 Using MPI-2 with Diagnostic Library
- 4.34 Quadrics Memory Leak
- 5 Installation Information
- 6 Licensing Information
- 7 Additional Product Information

1 Information About This Release
1.1 Announcement
HP-MPI V2.3.1 for Linux is the March 2009 release of HP-MPI, the HP implementation
of the Message Passing Interface standard for Linux. HP-MPI V2.3.1 for Linux is
supported on HP ProLiant and HP Integrity servers running CentOS 5, Red Hat
Enterprise Linux AS 4 and 5, or SuSE Linux Enterprise Server 9 and 10, and on
HP XC3000, HP XC4000, and HP XC6000 clusters.
1.2 HP-MPI Product Information
HP-MPI is a high-performance and production-quality implementation of the Message
Passing Interface standard for HP systems. HP-MPI fully complies with the MPI-2.1
standard. HP-MPI provides an application programming interface and software libraries
that support parallel, message-passing applications that are efficient, portable, and
flexible.
HP-MPI provides enhanced, low-latency, high-bandwidth point-to-point and
collective communication routines. On clusters of shared-memory servers, HP-MPI
supports the use of shared memory for intranode communication. Internode
communication uses a high-speed interconnect.
HP-MPI supports a variety of high-speed interconnects and enables you to build a
single executable that transparently uses whichever supported high-performance
interconnect is available. This greatly reduces the effort required to make
applications available on the newest interconnect technologies.
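For illustration, the interconnect can typically be selected or reported at run time
through mpirun options. The commands below are a sketch: the -prot, -IBV, and -TCP
options and the appfile contents shown are assumptions based on common HP-MPI usage
and should be checked against the mpirun documentation for this release.

    % $MPI_ROOT/bin/mpirun -prot -f appfile    # report the protocol/interconnect chosen for each connection
    % $MPI_ROOT/bin/mpirun -IBV -f appfile     # request InfiniBand verbs explicitly
    % $MPI_ROOT/bin/mpirun -TCP -f appfile     # force TCP/IP, for example when no high-speed interconnect is present

Here, appfile contains lines such as "-h node1 -np 4 ./a.out" naming the hosts and
rank counts; the same executable is used in every case.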
HP-MPI is available as shared libraries. To use shared libraries, HP-MPI must be
installed on all machines in the same directory or accessible through the same shared
network path.
NOTE: HP-MPI V2.3.1 provides shared libraries only and supports only executables
linked against the shared MPI libraries. Executables previously linked against the
archive libraries must either be relinked with HP-MPI V2.3.1 or be run under HP-MPI
V2.2.7 or earlier.
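As a sketch of the relink step, an application built from existing object files can
usually be relinked with the HP-MPI compiler wrappers, which link against the shared
MPI libraries. The paths below assume the default /opt/hpmpi installation and the
MPI_ROOT environment variable; the file and library names are illustrative and may
differ on your system.

    % export MPI_ROOT=/opt/hpmpi
    % $MPI_ROOT/bin/mpicc -o myapp myapp.o     # relink against the shared MPI libraries
    % ldd ./myapp | grep -i mpi                # verify that the MPI shared libraries resolve at run time
    % $MPI_ROOT/bin/mpirun -np 4 ./myapp       # run under HP-MPI V2.3.1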
1.2.1 Platforms Supported
HP-MPI V2.3.1 is supported on the following systems: