Product specifications
Table of Contents
- Table of Contents
- 1 Introduction
- 2 Feature Overview
- 3 Step-by-Step Cluster Setup and MPI Usage Checklists
- 4 InfiniPath Cluster Setup and Administration
- Introduction
- Installed Layout
- Memory Footprint
- BIOS Settings
- InfiniPath and OpenFabrics Driver Overview
- OpenFabrics Drivers and Services Configuration and Startup
- Other Configuration: Changing the MTU Size
- Managing the InfiniPath Driver
- More Information on Configuring and Loading Drivers
- Performance Settings and Management Tips
- Host Environment Setup for MPI
- Checking Cluster and Software Status
- 5 Using QLogic MPI
- Introduction
- Getting Started with MPI
- QLogic MPI Details
- Use Wrapper Scripts for Compiling and Linking
- Configuring MPI Programs for QLogic MPI
- To Use Another Compiler
- Process Allocation
- mpihosts File Details
- Using mpirun
- Console I/O in MPI Programs
- Environment for Node Programs
- Environment Variables
- Running Multiple Versions of InfiniPath or MPI
- Job Blocking in Case of Temporary InfiniBand Link Failures
- Performance Tuning
- MPD
- QLogic MPI and Hybrid MPI/OpenMP Applications
- Debugging MPI Programs
- QLogic MPI Limitations
- 6 Using Other MPIs
- A mpirun Options Summary
- B Benchmark Programs
- C Integration with a Batch Queuing System
- D Troubleshooting
- Using LEDs to Check the State of the Adapter
- BIOS Settings
- Kernel and Initialization Issues
- OpenFabrics and InfiniPath Issues
- Stop OpenSM Before Stopping/Restarting InfiniPath
- Manual Shutdown or Restart May Hang if NFS in Use
- Load and Configure IPoIB Before Loading SDP
- Set $IBPATH for OpenFabrics Scripts
- ifconfig Does Not Display Hardware Address Properly on RHEL4
- SDP Module Not Loading
- ibsrpdm Command Hangs when Two Host Channel Adapters are Installed but Only Unit 1 is Connected to the Switch
- Outdated ipath_ether Configuration Setup Generates Error
- System Administration Troubleshooting
- Performance Issues
- QLogic MPI Troubleshooting
- Mixed Releases of MPI RPMs
- Missing mpirun Executable
- Resolving Hostname with Multi-Homed Head Node
- Cross-Compilation Issues
- Compiler/Linker Mismatch
- Compiler Cannot Find Include, Module, or Library Files
- Problem with Shell Special Characters and Wrapper Scripts
- Run Time Errors with Different MPI Implementations
- Process Limitation with ssh
- Number of Processes Exceeds ulimit for Number of Open Files
- Using MPI.mod Files
- Extending MPI Modules
- Lock Enough Memory on Nodes When Using a Batch Queuing System
- Error Creating Shared Memory Object
- gdb Gets SIG32 Signal Under mpirun -debug with the PSM Receive Progress Thread Enabled
- General Error Messages
- Error Messages Generated by mpirun
- MPI Stats
- E Write Combining
- F Useful Programs and Files
- G Recommended Reading
- Glossary
- Index

Glossary
EE
Stands for End to End.
EEC
Stands for End to End Context.
fabric
The InfiniBand interconnect infrastructure, consisting of a set of host channel adapters (and possibly target channel adapters) connected by switches, such that each end node can directly reach all other nodes.
front end node
The machine or machines that launch jobs.
funneled thread model
Only the main (master) thread may execute MPI calls. In QLogic MPI, hybrid MPI/OpenMP applications are supported, provided that the MPI routines are called only by the master OpenMP thread.
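As an illustrative sketch only (this example is not part of the guide), a hybrid C program using the funneled model requests MPI_THREAD_FUNNELED support and restricts MPI calls to the master OpenMP thread:

#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request funneled support: threads may exist, but only the
       main thread will make MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

#pragma omp parallel
    {
        /* All OpenMP threads may compute here... */
#pragma omp master
        {
            /* ...but only the master (main) thread calls MPI. */
            printf("rank %d is running %d OpenMP threads\n",
                   rank, omp_get_num_threads());
        }
    }

    MPI_Finalize();
    return 0;
}

Compile with the MPI wrapper script (for example, mpicc) plus your compiler's OpenMP flag; see Section 5, QLogic MPI and Hybrid MPI/OpenMP Applications, for the supported procedure.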
GID
Stands for Global Identifier. Used for routing between different InfiniBand subnets.
GUID
Stands for Globally Unique Identifier for the QLogic chip. GUID is equivalent to an Ethernet MAC address.
head node
Same as front end node.
host channel adapter
Host channel adapters are I/O engines located within processing nodes, connecting them to the InfiniBand fabric.
hosts file
Same as mpihosts file. Not the same as the /etc/hosts file.
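For illustration only (the authoritative format is covered under mpihosts File Details in Section 5), an mpihosts file is a plain list of node names, one per line; the node names below are hypothetical, and some releases also accept an optional per-node process count in the form hostname:np:

node01
node02
node03:2
node04:2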
HTX
A specification that defines a connector and form factor for HyperTransport-enabled daughter cards and EATX motherboards.
InfiniBand
Also referred to as IB. An input/output architecture used in high-end servers. It is also a specification for the serial transmission of data between processors and I/O devices. InfiniBand typically uses switched, point-to-point channels. These channels are usually created by attaching host channel adapters and target channel adapters through InfiniBand switches.
IPoIB
Stands for Internet Protocol over InfiniBand, as per the OpenFabrics standards effort. This protocol layer allows the traditional Internet Protocol (IP) to run over an InfiniBand fabric. IPoIB runs in either connected mode (IPoIB-CM) or unreliable datagram mode (IPoIB-UD).
iSER
Stands for iSCSI Extensions for RDMA. An upper layer protocol.
kDAPL
Stands for kernel Direct Access Provider Library. kDAPL is the kernel mode version of the DAPL protocol.