Product specifications
Table of Contents
- Table of Contents
- 1 Introduction
- 2 Feature Overview
- 3 Step-by-Step Cluster Setup and MPI Usage Checklists
- 4 InfiniPath Cluster Setup and Administration
- Introduction
- Installed Layout
- Memory Footprint
- BIOS Settings
- InfiniPath and OpenFabrics Driver Overview
- OpenFabrics Drivers and Services Configuration and Startup
- Other Configuration: Changing the MTU Size
- Managing the InfiniPath Driver
- More Information on Configuring and Loading Drivers
- Performance Settings and Management Tips
- Host Environment Setup for MPI
- Checking Cluster and Software Status
- 5 Using QLogic MPI
- Introduction
- Getting Started with MPI
- QLogic MPI Details
- Use Wrapper Scripts for Compiling and Linking
- Configuring MPI Programs for QLogic MPI
- To Use Another Compiler
- Process Allocation
- mpihosts File Details
- Using mpirun
- Console I/O in MPI Programs
- Environment for Node Programs
- Environment Variables
- Running Multiple Versions of InfiniPath or MPI
- Job Blocking in Case of Temporary InfiniBand Link Failures
- Performance Tuning
- MPD
- QLogic MPI and Hybrid MPI/OpenMP Applications
- Debugging MPI Programs
- QLogic MPI Limitations
- 6 Using Other MPIs
- A mpirun Options Summary
- B Benchmark Programs
- C Integration with a Batch Queuing System
- D Troubleshooting
- Using LEDs to Check the State of the Adapter
- BIOS Settings
- Kernel and Initialization Issues
- OpenFabrics and InfiniPath Issues
- Stop OpenSM Before Stopping/Restarting InfiniPath
- Manual Shutdown or Restart May Hang if NFS in Use
- Load and Configure IPoIB Before Loading SDP
- Set $IBPATH for OpenFabrics Scripts
- ifconfig Does Not Display Hardware Address Properly on RHEL4
- SDP Module Not Loading
- ibsrpdm Command Hangs when Two Host Channel Adapters are Installed but Only Unit 1 is Connected to the Switch
- Outdated ipath_ether Configuration Setup Generates Error
- System Administration Troubleshooting
- Performance Issues
- QLogic MPI Troubleshooting
- Mixed Releases of MPI RPMs
- Missing mpirun Executable
- Resolving Hostname with Multi-Homed Head Node
- Cross-Compilation Issues
- Compiler/Linker Mismatch
- Compiler Cannot Find Include, Module, or Library Files
- Problem with Shell Special Characters and Wrapper Scripts
- Run Time Errors with Different MPI Implementations
- Process Limitation with ssh
- Number of Processes Exceeds ulimit for Number of Open Files
- Using MPI.mod Files
- Extending MPI Modules
- Lock Enough Memory on Nodes When Using a Batch Queuing System
- Error Creating Shared Memory Object
- gdb Gets SIG32 Signal Under mpirun -debug with the PSM Receive Progress Thread Enabled
- General Error Messages
- Error Messages Generated by mpirun
- MPI Stats
- E Write Combining
- F Useful Programs and Files
- G Recommended Reading
- Glossary
- Index

Glossary
MR
Stands for Memory Region.
MTRR
Stands for Memory Type Range Registers. Used by the InfiniPath driver to enable write combining to the QLogic on-chip transmit buffers. This improves write bandwidth to the QLogic chip by writing multiple words in a single bus transaction (typically 64). Applies only to x86_64 systems.
MTU
Stands for Maximum Transfer Unit. The largest packet size that can be transmitted over a given network.
multicast group
A mechanism that a group of nodes uses to communicate with one another. It is an efficient way of broadcasting messages to many nodes: messages sent to the group are received by all of its members without the sender having to send them explicitly to each member (or even having to know who the members are). Nodes can join or leave the group at any time.
multihomed head node
A host that has multiple IP addresses, each usually assigned to a different interface and belonging to a different network. In the normal case, each active interface has a separate, unique IP address and a unique host name.
node file
Same as hosts file.
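As an informal illustration (the authoritative format is given under mpihosts File Details in Section 5), a node file is typically just a list of host names, one per line. The host names below are hypothetical, and the optional ":n" process-count suffix is an assumption that should be checked against that section.

    node01
    node02
    node03:2
    node04:2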
node program
Each individual process that is part of the parallel MPI job. The machine on which it is executed is called a "node".
OpenIB
The previous name of OpenFabrics.
OpenFabrics
The open source InfiniBand protocol stack.
OpenMP
Specification that provides an open source
model for parallel programming that is
portable across shared memory architec-
tures from different vendors.
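As a brief, generic illustration of the OpenMP model (not taken from this guide), the following C fragment parallelizes a loop across the threads of a single node; the file name and compiler invocation in the comment are only examples. Hybrid MPI/OpenMP usage is discussed in Section 5.

    /* openmp_sum.c - minimal OpenMP sketch (illustrative only).
     * Build with an OpenMP-capable compiler, for example: gcc -fopenmp openmp_sum.c */
    #include <stdio.h>
    #include <omp.h>

    int main(void)
    {
        const int n = 1000000;
        double sum = 0.0;
        int i;

        /* Distribute loop iterations across the threads of one node;
         * the reduction clause combines the per-thread partial sums. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < n; i++)
            sum += (double)i;

        printf("max threads: %d, sum = %.0f\n", omp_get_max_threads(), sum);
        return 0;
    }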
OpenSM
Stands for Open source Subnet Manager. Provides basic functionality for subnet discovery and activation.
PAT
Stands for Page Attribute Table. Controls how areas of memory are cached. Similar to MTRR, except that it can be specified on a per-page basis.
PCIe
Stands for PCI Express. Based on PCI concepts and standards, PCIe uses a faster serial connection mechanism.
PSM
PSM is QLogic’s low-level, user-level Application Programming Interface (API). QLogic MPI and numerous other high-performance MPI implementations have been ported to the PSM interface.
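Application code normally reaches PSM indirectly, through standard MPI calls. The sketch below is a generic MPI program (standard MPI calls only, nothing QLogic-specific); the build and launch commands in the comment follow the wrapper-script and mpirun sections of this guide, but the exact mpirun options should be checked against Appendix A.

    /* mpi_hello.c - generic MPI example; QLogic MPI carries it over PSM.
     * A typical build and launch (verify options against Appendix A) might be:
     *   mpicc -o mpi_hello mpi_hello.c
     *   mpirun -np 4 -m mpihosts ./mpi_hello
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* initializes the MPI (and, underneath, PSM) layer */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this node program's rank */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of node programs */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }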
QP
Stands for Queue Pair.
RC
Stands for Reliable Connected. A transport mode used by InfiniBand.