Table of Contents
- 1 Introduction
- 2 Feature Overview
- 3 Step-by-Step Cluster Setup and MPI Usage Checklists
- 4 InfiniPath Cluster Setup and Administration
- Introduction
- Installed Layout
- Memory Footprint
- BIOS Settings
- InfiniPath and OpenFabrics Driver Overview
- OpenFabrics Drivers and Services Configuration and Startup
- Other Configuration: Changing the MTU Size
- Managing the InfiniPath Driver
- More Information on Configuring and Loading Drivers
- Performance Settings and Management Tips
- Host Environment Setup for MPI
- Checking Cluster and Software Status
- 5 Using QLogic MPI
- Introduction
- Getting Started with MPI
- QLogic MPI Details
- Use Wrapper Scripts for Compiling and Linking
- Configuring MPI Programs for QLogic MPI
- To Use Another Compiler
- Process Allocation
- mpihosts File Details
- Using mpirun
- Console I/O in MPI Programs
- Environment for Node Programs
- Environment Variables
- Running Multiple Versions of InfiniPath or MPI
- Job Blocking in Case of Temporary InfiniBand Link Failures
- Performance Tuning
- MPD
- QLogic MPI and Hybrid MPI/OpenMP Applications
- Debugging MPI Programs
- QLogic MPI Limitations
- 6 Using Other MPIs
- A mpirun Options Summary
- B Benchmark Programs
- C Integration with a Batch Queuing System
- D Troubleshooting
- Using LEDs to Check the State of the Adapter
- BIOS Settings
- Kernel and Initialization Issues
- OpenFabrics and InfiniPath Issues
- Stop OpenSM Before Stopping/Restarting InfiniPath
- Manual Shutdown or Restart May Hang if NFS in Use
- Load and Configure IPoIB Before Loading SDP
- Set $IBPATH for OpenFabrics Scripts
- ifconfig Does Not Display Hardware Address Properly on RHEL4
- SDP Module Not Loading
- ibsrpdm Command Hangs when Two Host Channel Adapters are Installed but Only Unit 1 is Connected to the Switch
- Outdated ipath_ether Configuration Setup Generates Error
- System Administration Troubleshooting
- Performance Issues
- QLogic MPI Troubleshooting
- Mixed Releases of MPI RPMs
- Missing mpirun Executable
- Resolving Hostname with Multi-Homed Head Node
- Cross-Compilation Issues
- Compiler/Linker Mismatch
- Compiler Cannot Find Include, Module, or Library Files
- Problem with Shell Special Characters and Wrapper Scripts
- Run Time Errors with Different MPI Implementations
- Process Limitation with ssh
- Number of Processes Exceeds ulimit for Number of Open Files
- Using MPI.mod Files
- Extending MPI Modules
- Lock Enough Memory on Nodes When Using a Batch Queuing System
- Error Creating Shared Memory Object
- gdb Gets SIG32 Signal Under mpirun -debug with the PSM Receive Progress Thread Enabled
- General Error Messages
- Error Messages Generated by mpirun
- MPI Stats
- E Write Combining
- F Useful Programs and Files
- G Recommended Reading
- Glossary
- Index

5–Using QLogic MPI
QLogic MPI Details
Compiler and Linker Variables
When you use environment variables (for example, $MPICH_CC) to select the
compiler that mpicc (and the other wrapper scripts) will use, the scripts also
set the matching linker variable (for example, $MPICH_CLINKER) if it is not
already set. When both an environment variable and a command-line option (for
example, -cc=gcc) are used, the command-line option takes precedence.
If both the compiler and linker variables are set but do not match for the
compiler you are using, the MPI program may fail to link; or, if it links, it
may not execute correctly. For a sample error message, see “Compiler/Linker
Mismatch” on page D-15.
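
As a minimal sketch of both selection methods (pgcc is used here only as an
example of an alternate compiler; substitute any compiler installed on your
cluster):

    # Select the compiler via the environment; the wrapper script sets
    # the matching linker variable ($MPICH_CLINKER) if it is not set.
    export MPICH_CC=pgcc
    mpicc -o myprog myprog.c

    # A command-line option takes precedence over the environment
    # variable, so this compile uses gcc even though $MPICH_CC=pgcc.
    mpicc -cc=gcc -o myprog myprog.c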
Process Allocation
Normally, MPI jobs run with each node program (process) associated with a
dedicated QLogic host channel adapter hardware context, which in turn is
mapped to a CPU.
If the number of node programs exceeds the number of available hardware
contexts, software context sharing increases the number of node programs that
can be run. Each adapter supports four software contexts per hardware context,
so up to four node programs (from the same MPI job) can share each hardware
context. There is a small additional overhead for each shared context.
Table 5-6 shows the maximum number of contexts available for each adapter.
The default hardware context/CPU mappings can be changed on the QLE7240
and QLE7280. See “InfiniPath Hardware Contexts on the QLE7240 and
QLE7280” on page 5-11 for more details.
Context sharing is enabled by default. How the system behaves when context
sharing is enabled or disabled is described in “Enabling and Disabling Software
Context Sharing” on page 5-12.
Table 5-6. Available Hardware and Software Contexts

Adapter      Available Hardware Contexts           Available Contexts when Software
             (same as number of supported CPUs)    Context Sharing is Enabled
QLE7140      4                                     16
QHT7140      8                                     32
QLE7240      16                                    64
QLE7280      16                                    64
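
For illustration, context sharing means a job can run more node programs per
adapter than there are hardware contexts. A hypothetical launch on nodes with
QLE7240 adapters, using the -np and -m options described in “Using mpirun”
(the host file and program names here are placeholders):

    # 16 hardware contexts x 4 software contexts each = up to 64 node
    # programs per adapter from the same MPI job. Context sharing is
    # enabled by default, so no extra options are needed.
    mpirun -np 64 -m ./mpihosts ./my_mpi_app

Because each shared context adds a small amount of overhead, a job that fits
within the available hardware contexts will generally perform slightly better
than one that relies on context sharing.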