Context Sharing Error Messages
When the context limit is exceeded, the following error message is reported when the application starts:
No free InfiniPath contexts available on /dev/ipath
Error messages related to contexts may also be generated by ipath_checkout
or mpirun. For example:
PSM found 0 available contexts on InfiniPath device
The most likely cause is that existing processes on the cluster are using all of the
available PSM contexts. Clean up these processes before restarting the job.
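One way to identify the processes that are holding contexts is to query the users of the /dev/ipath device files. The following is a sketch, assuming the standard fuser and lsof utilities are installed (device names can vary by system):
$ fuser -v /dev/ipath*
$ lsof /dev/ipath*
Any listed processes that belong to finished or aborted jobs can then be terminated before the job is restarted.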
Running in Shared Memory Mode
Open MPI supports running exclusively in shared memory mode; no QLogic
adapter is required for this mode of operation. This mode is used for running
applications on a single node rather than on a cluster of nodes.
To use the pre-built benchmark applications, add
/usr/mpi/gcc/openmpi-1.4.3-qlc/tests/osu_benchmarks-3.1.1
to your PATH (or, if you installed MPI in another location, add
$MPI_HOME/tests/osu_benchmarks-3.1.1 to your PATH).
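For example, in a bash shell with the default installation location:
$ export PATH=/usr/mpi/gcc/openmpi-1.4.3-qlc/tests/osu_benchmarks-3.1.1:$PATH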
To enable shared memory mode, list only a single node in the mpihosts file. For
example, if the file is named onehost and is in the working directory, it would
contain the following:
$ cat onehost
idev-64 slots=8
Enabling shared memory mode as previously described uses a feature of
Open MPI host files: the slots parameter, which specifies the number of MPI
processes (also known as ranks) that you want to run on the node. Typically, this
is set equal to the number of processor cores on the node. A host file with eight
lines, each containing only idev-64, would function identically, as sketched below.
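Such a file can be generated from the shell. The following is a sketch, assuming bash; the file name onehost-flat is hypothetical:
$ for i in $(seq 1 8); do echo idev-64; done > onehost-flat
This writes eight lines, each containing idev-64. With either form of the host file, you can then run: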
$ mpirun -np 2 -hostfile onehost osu_latency
to measure MPI latency between two cores on the same host using shared
memory, or:
$ mpirun -np 2 -hostfile onehost osu_bw
to measure MPI unidirectional bandwidth using shared memory.
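Because only one node is listed, Open MPI communicates between the two ranks through shared memory. If you want to guarantee that the adapter is not used at all, one option (an illustration, not a command taken from this guide) is to restrict Open MPI to its sm and self transports with an MCA parameter:
$ mpirun -np 2 -hostfile onehost --mca btl sm,self osu_bw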