Product specifications
Table of Contents
- Table of Contents
- 1 Introduction
- 2 Feature Overview
- 3 Step-by-Step Cluster Setup and MPI Usage Checklists
- 4 InfiniPath Cluster Setup and Administration
- Introduction
- Installed Layout
- Memory Footprint
- BIOS Settings
- InfiniPath and OpenFabrics Driver Overview
- OpenFabrics Drivers and Services Configuration and Startup
- Other Configuration: Changing the MTU Size
- Managing the InfiniPath Driver
- More Information on Configuring and Loading Drivers
- Performance Settings and Management Tips
- Host Environment Setup for MPI
- Checking Cluster and Software Status
- 5 Using QLogic MPI
- Introduction
- Getting Started with MPI
- QLogic MPI Details
- Use Wrapper Scripts for Compiling and Linking
- Configuring MPI Programs for QLogic MPI
- To Use Another Compiler
- Process Allocation
- mpihosts File Details
- Using mpirun
- Console I/O in MPI Programs
- Environment for Node Programs
- Environment Variables
- Running Multiple Versions of InfiniPath or MPI
- Job Blocking in Case of Temporary InfiniBand Link Failures
- Performance Tuning
- MPD
- QLogic MPI and Hybrid MPI/OpenMP Applications
- Debugging MPI Programs
- QLogic MPI Limitations
- 6 Using Other MPIs
- A mpirun Options Summary
- B Benchmark Programs
- C Integration with a Batch Queuing System
- D Troubleshooting
- Using LEDs to Check the State of the Adapter
- BIOS Settings
- Kernel and Initialization Issues
- OpenFabrics and InfiniPath Issues
- Stop OpenSM Before Stopping/Restarting InfiniPath
- Manual Shutdown or Restart May Hang if NFS in Use
- Load and Configure IPoIB Before Loading SDP
- Set $IBPATH for OpenFabrics Scripts
- ifconfig Does Not Display Hardware Address Properly on RHEL4
- SDP Module Not Loading
- ibsrpdm Command Hangs when Two Host Channel Adapters are Installed but Only Unit 1 is Connected to the Switch
- Outdated ipath_ether Configuration Setup Generates Error
- System Administration Troubleshooting
- Performance Issues
- QLogic MPI Troubleshooting
- Mixed Releases of MPI RPMs
- Missing mpirun Executable
- Resolving Hostname with Multi-Homed Head Node
- Cross-Compilation Issues
- Compiler/Linker Mismatch
- Compiler Cannot Find Include, Module, or Library Files
- Problem with Shell Special Characters and Wrapper Scripts
- Run Time Errors with Different MPI Implementations
- Process Limitation with ssh
- Number of Processes Exceeds ulimit for Number of Open Files
- Using MPI.mod Files
- Extending MPI Modules
- Lock Enough Memory on Nodes When Using a Batch Queuing System
- Error Creating Shared Memory Object
- gdb Gets SIG32 Signal Under mpirun -debug with the PSM Receive Progress Thread Enabled
- General Error Messages
- Error Messages Generated by mpirun
- MPI Stats
- E Write Combining
- F Useful Programs and Files
- G Recommended Reading
- Glossary
- Index

1 Introduction
The QLogic host channel adapters are InfiniBand 4X. The Double Data Rate (DDR) QLE7240 and QLE7280 adapters have a raw data rate of 20 Gbps (data rate of 16 Gbps). For the Single Data Rate (SDR) adapters, the QLE7140 and QHT7140, the raw data rate is 10 Gbps (data rate of 8 Gbps). The QLE7240 and QLE7280 can also run in SDR mode.
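The data rate differs from the raw rate because InfiniBand links use 8b/10b encoding, so only eight of every ten bits on the wire carry payload. The following is a minimal illustration of that arithmetic (shell commands added here for clarity; they are not part of the InfiniPath software):

    # 8b/10b encoding: data rate = raw rate * 8/10
    echo $(( 20 * 8 / 10 ))    # DDR 4X: prints 16 (Gbps)
    echo $(( 10 * 8 / 10 ))    # SDR 4X: prints 8 (Gbps)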
The QLogic adapters utilize standard, off-the-shelf InfiniBand 4X switches and cabling. The QLogic interconnect is designed to work with all InfiniBand-compliant switches.

QLogic OFED OpenFabrics software is interoperable with other vendors’ InfiniBand host channel adapters running compatible OpenFabrics releases.
There are several options for subnet management in your cluster:
- Use the embedded Subnet Manager (SM) in one or more managed switches supplied by your InfiniBand switch vendor.
- Use a host-based Subnet Manager. QLogic provides one, the QLogic Fabric Manager, as part of the QLogic InfiniBand Fabric Suite download.
- Use the open source Subnet Manager (OpenSM) component of OpenFabrics (see the startup sketch after this list).
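If OpenSM is used, only one instance needs to be active (master) on the fabric at a time. The commands below are a hedged sketch of enabling it on a single node, assuming the stock OpenFabrics opensmd init script is installed; package and service names can vary by distribution:

    # Enable and start OpenSM on the node chosen to manage the subnet
    chkconfig opensmd on          # start at boot (RHEL/SLES-style init)
    /etc/init.d/opensmd start     # start it now
    # Or run the daemon directly in the foreground for a quick test
    opensm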
Interoperability
QLogic InfiniPath participates in the standard InfiniBand subnet management protocols for configuration and monitoring. Note that:
- InfiniPath OpenFabrics (including Internet Protocol over InfiniBand (IPoIB)) is interoperable with other vendors’ InfiniBand adapters running compatible OpenFabrics releases.
- The QLogic MPI stack is not interoperable with other InfiniBand host channel adapters and target channel adapters. Instead, it uses an InfiniBand-compliant, vendor-specific protocol that is highly optimized for QLogic MPI and MPI over Verbs.
NOTE:
If you are using the QLE7240 or QLE7280 and want to run in DDR mode, DDR-capable switches are required.
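To confirm that a link has actually negotiated DDR, the standard OpenFabrics diagnostics can be used. This is a hedged sketch assuming the OpenFabrics userspace utilities (ibstatus, ibv_devinfo) are installed:

    # Report the state and rate of each InfiniBand port;
    # a 4X DDR link should show a rate of 20 Gb/sec
    ibstatus
    # Alternatively, inspect the active width and speed per port
    ibv_devinfo | grep -E 'active_(width|speed)'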
NOTE:
See the OpenFabrics web site at www.openfabrics.org for more information on the OpenFabrics Alliance.