Product specifications
Table of Contents
- Table of Contents
- 1 Introduction
- 2 Feature Overview
- 3 Step-by-Step Cluster Setup and MPI Usage Checklists
- 4 InfiniPath Cluster Setup and Administration
- Introduction
- Installed Layout
- Memory Footprint
- BIOS Settings
- InfiniPath and OpenFabrics Driver Overview
- OpenFabrics Drivers and Services Configuration and Startup
- Other Configuration: Changing the MTU Size
- Managing the InfiniPath Driver
- More Information on Configuring and Loading Drivers
- Performance Settings and Management Tips
- Host Environment Setup for MPI
- Checking Cluster and Software Status
- 5 Using QLogic MPI
- Introduction
- Getting Started with MPI
- QLogic MPI Details
- Use Wrapper Scripts for Compiling and Linking
- Configuring MPI Programs for QLogic MPI
- To Use Another Compiler
- Process Allocation
- mpihosts File Details
- Using mpirun
- Console I/O in MPI Programs
- Environment for Node Programs
- Environment Variables
- Running Multiple Versions of InfiniPath or MPI
- Job Blocking in Case of Temporary InfiniBand Link Failures
- Performance Tuning
- MPD
- QLogic MPI and Hybrid MPI/OpenMP Applications
- Debugging MPI Programs
- QLogic MPI Limitations
- 6 Using Other MPIs
- A mpirun Options Summary
- B Benchmark Programs
- C Integration with a Batch Queuing System
- D Troubleshooting
- Using LEDs to Check the State of the Adapter
- BIOS Settings
- Kernel and Initialization Issues
- OpenFabrics and InfiniPath Issues
- Stop OpenSM Before Stopping/Restarting InfiniPath
- Manual Shutdown or Restart May Hang if NFS in Use
- Load and Configure IPoIB Before Loading SDP
- Set $IBPATH for OpenFabrics Scripts
- ifconfig Does Not Display Hardware Address Properly on RHEL4
- SDP Module Not Loading
- ibsrpdm Command Hangs when Two Host Channel Adapters are Installed but Only Unit 1 is Connected to the Switch
- Outdated ipath_ether Configuration Setup Generates Error
- System Administration Troubleshooting
- Performance Issues
- QLogic MPI Troubleshooting
- Mixed Releases of MPI RPMs
- Missing mpirun Executable
- Resolving Hostname with Multi-Homed Head Node
- Cross-Compilation Issues
- Compiler/Linker Mismatch
- Compiler Cannot Find Include, Module, or Library Files
- Problem with Shell Special Characters and Wrapper Scripts
- Run Time Errors with Different MPI Implementations
- Process Limitation with ssh
- Number of Processes Exceeds ulimit for Number of Open Files
- Using MPI.mod Files
- Extending MPI Modules
- Lock Enough Memory on Nodes When Using a Batch Queuing System
- Error Creating Shared Memory Object
- gdb Gets SIG32 Signal Under mpirun -debug with the PSM Receive Progress Thread Enabled
- General Error Messages
- Error Messages Generated by mpirun
- MPI Stats
- E Write Combining
- F Useful Programs and Files
- G Recommended Reading
- Glossary
- Index

4 InfiniPath Cluster Setup and Administration
Installed Layout
The InfiniPath driver modules in this release are installed in:
/lib/modules/$(uname -r)/updates/kernel/drivers/infiniband/hw/ipath
Most of the other OFED modules are installed under the infiniband
subdirectory. Other modules are installed under:
/lib/modules/$(uname -r)/updates/kernel/drivers/net
The RDS modules are installed under:
/lib/modules/$(uname -r)/updates/kernel/net/rds
QLogic-supplied Open MPI and MVAPICH RPMs, built with PSM support and compiled
with the GCC, PathScale, PGI, and Intel compilers, are installed in directories
using this format:
/usr/mpi/<compiler>/<mpi>-<mpi_version>-qlc
For example:
/usr/mpi/gcc/openmpi-1.2.8-qlc
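To confirm this layout on a node, you can list the directories above. The following is a minimal shell sketch; it assumes the default install locations described in this section, so adjust the paths if your installation differs:

# Kernel modules installed by this release (paths as listed above)
ls /lib/modules/$(uname -r)/updates/kernel/drivers/infiniband/hw/ipath
ls /lib/modules/$(uname -r)/updates/kernel/drivers/net
ls /lib/modules/$(uname -r)/updates/kernel/net/rds

# QLogic-supplied MPI builds; the compiler and MPI version vary by installation
ls -d /usr/mpi/*/*-qlc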
Memory Footprint
This section provides preliminary guidelines for estimating the memory footprint
of the QLogic adapter on Linux x86_64 systems. Memory consumption scales
linearly with system configuration. OpenFabrics support is still under
development and has not been fully characterized. Table 4-1 summarizes the guidelines.
Table 4-1. Memory Footprint of the QLogic Adapter on Linux x86_64 Systems

| Adapter Component | Required/Optional | Memory Footprint | Comment |
|-------------------|-------------------|------------------|---------|
| InfiniPath driver | Required | 9 MB | Includes accelerated IP support. Includes table space to support up to 1,000-node systems; clusters larger than 1,000 nodes can also be configured. |
| MPI | Optional | 68 MB per process + 264 bytes × num_remote_procs. 68 MB = 60 MB (base) + 512 × 2172 (sendbufs) + 1024 × 1 KB (misc allocations) + 6 MB (shared memory). | Several of these parameters (sendbufs, recvbufs, and the size of the shared memory region) are tunable if you want a reduced memory footprint. |
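As a worked example of the guideline in Table 4-1, the shell sketch below estimates a per-node footprint for a hypothetical job. The process counts are illustrative; the 9 MB, 68 MB, and 264-byte figures come directly from the table and assume the default (untuned) settings:

# Hypothetical job: 8 MPI processes on this node, 512 remote processes in total
procs_per_node=8
num_remote_procs=512

driver_mb=9                                 # InfiniPath driver (required)
mpi_mb=$((procs_per_node * 68))             # 68 MB per process with default settings
conn_bytes=$((procs_per_node * 264 * num_remote_procs))  # 264 bytes x num_remote_procs, per process

echo "Estimated footprint: $((driver_mb + mpi_mb)) MB plus ${conn_bytes} bytes of connection state"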