Table of Contents
- Table of Contents
- 1 Introduction
- 2 Feature Overview
- 3 Step-by-Step Cluster Setup and MPI Usage Checklists
- 4 InfiniPath Cluster Setup and Administration
- Introduction
- Installed Layout
- Memory Footprint
- BIOS Settings
- InfiniPath and OpenFabrics Driver Overview
- OpenFabrics Drivers and Services Configuration and Startup
- Other Configuration: Changing the MTU Size
- Managing the InfiniPath Driver
- More Information on Configuring and Loading Drivers
- Performance Settings and Management Tips
- Host Environment Setup for MPI
- Checking Cluster and Software Status
- 5 Using QLogic MPI
- Introduction
- Getting Started with MPI
- QLogic MPI Details
- Use Wrapper Scripts for Compiling and Linking
- Configuring MPI Programs for QLogic MPI
- To Use Another Compiler
- Process Allocation
- mpihosts File Details
- Using mpirun
- Console I/O in MPI Programs
- Environment for Node Programs
- Environment Variables
- Running Multiple Versions of InfiniPath or MPI
- Job Blocking in Case of Temporary InfiniBand Link Failures
- Performance Tuning
- MPD
- QLogic MPI and Hybrid MPI/OpenMP Applications
- Debugging MPI Programs
- QLogic MPI Limitations
- 6 Using Other MPIs
- A mpirun Options Summary
- B Benchmark Programs
- C Integration with a Batch Queuing System
- D Troubleshooting
- Using LEDs to Check the State of the Adapter
- BIOS Settings
- Kernel and Initialization Issues
- OpenFabrics and InfiniPath Issues
- Stop OpenSM Before Stopping/Restarting InfiniPath
- Manual Shutdown or Restart May Hang if NFS in Use
- Load and Configure IPoIB Before Loading SDP
- Set $IBPATH for OpenFabrics Scripts
- ifconfig Does Not Display Hardware Address Properly on RHEL4
- SDP Module Not Loading
- ibsrpdm Command Hangs when Two Host Channel Adapters are Installed but Only Unit 1 is Connected to the Switch
- Outdated ipath_ether Configuration Setup Generates Error
- System Administration Troubleshooting
- Performance Issues
- QLogic MPI Troubleshooting
- Mixed Releases of MPI RPMs
- Missing mpirun Executable
- Resolving Hostname with Multi-Homed Head Node
- Cross-Compilation Issues
- Compiler/Linker Mismatch
- Compiler Cannot Find Include, Module, or Library Files
- Problem with Shell Special Characters and Wrapper Scripts
- Run Time Errors with Different MPI Implementations
- Process Limitation with ssh
- Number of Processes Exceeds ulimit for Number of Open Files
- Using MPI.mod Files
- Extending MPI Modules
- Lock Enough Memory on Nodes When Using a Batch Queuing System
- Error Creating Shared Memory Object
- gdb Gets SIG32 Signal Under mpirun -debug with the PSM Receive Progress Thread Enabled
- General Error Messages
- Error Messages Generated by mpirun
- MPI Stats
- E Write Combining
- F Useful Programs and Files
- G Recommended Reading
- Glossary
- Index

6–Using Other MPIs
Intel MPI
compat-dapl-devel-static-1.2.12-1.x86_64.rpm
compat-dapl-utils-1.2.12-1.x86_64.rpm
2. Verify that there is a /etc/dat.conf file. It should be installed by the
dapl- RPM. The file dat.conf contains a list of interface adapters
supported by uDAPL service providers. In particular, it must contain
mapping entries for OpenIB-cma for dapl 1.2.x, in a form similar to this
(all on one line):
OpenIB-cma u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "ib0 0" ""
3. On every node, type the following command (as the root user):
# modprobe rdma_ucm
To ensure that the module is loaded when the driver is loaded, add
RDMA_UCM_LOAD=yes to the /etc/infiniband/openib.conf file.
(Note that rdma_cm is also used, but it is loaded automatically.)
4. Bring up an IPoIB interface on every node, for example, ib0. See the
instructions for configuring IPoIB for more details.
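Once the steps above are complete, a quick spot check on a node can confirm each piece. The following commands are only a sketch: the package names, the /etc/dat.conf entry, and the ib0 interface name are the examples used in this procedure and may differ on your cluster.
$ rpm -qa | grep dapl
$ grep OpenIB-cma /etc/dat.conf
# lsmod | grep rdma_ucm
# ifconfig ib0
The first three commands should each return at least one line, and ifconfig ib0 should show the interface in the UP state with its IPoIB address; an empty result points to the step that needs to be revisited.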
Intel MPI has different bin directories for 32-bit (bin) and 64-bit (bin64); 64-bit is
the most commonly used.
To launch MPI jobs, the Intel MPI bin directory must be included in PATH and its
library directory in LD_LIBRARY_PATH.
When using sh for launching MPI jobs, run the following command:
$ source <$prefix>/bin64/mpivars.sh
When using csh for launching MPI jobs, run the following command:
$ source <$prefix>/bin64/mpivars.csh
Substitute bin if using 32-bit.
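After sourcing the appropriate mpivars script, a simple sanity check is to confirm that the Intel MPI tools and libraries are the ones found in your environment, for example:
$ which mpicc
$ echo $LD_LIBRARY_PATH
The first should resolve to the Intel MPI bin (or bin64) directory, and the second should include the corresponding library directory.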
Compiling Intel MPI Applications
As with QLogic MPI, QLogic recommends that you use the included wrapper
scripts that invoke the underlying compiler. The default underlying compiler is
GCC, including gfortran. Note that there are more compiler drivers (wrapper
scripts) with Intel MPI than are listed here (see Table 6-6); check the Intel
documentation for more information.
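As an illustration, compiling and linking a C program with the default GCC-based wrapper looks like the following; mpi_hello.c is a placeholder source file name:
$ mpicc -o mpi_hello mpi_hello.c
The same pattern applies to the other wrapper scripts listed in Table 6-6.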
Table 6-6. Intel MPI Wrapper Scripts

| Wrapper Script Name | Language |
|---------------------|----------|
| mpicc               | C        |
| mpiCC               | C++      |