Compiler and Linker Variables
When you use environment variables (e.g., $MPICH_CC) to select which compiler
mpicc (and the other wrapper scripts) will use, the scripts also set the matching
linker variable (for example, $MPICH_CLINKER) if it is not already set. When both
the environment variable and the command line option are given (for example,
-cc=gcc), the command line option takes precedence.
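For example, the following shell session (compiler choices and file names are
illustrative; substitute the compilers installed on your system) selects a
compiler through the environment and then overrides it on the command line:

    # Select the compiler via the environment; mpicc also sets the
    # matching linker variable ($MPICH_CLINKER) if it is unset.
    $ export MPICH_CC=icc
    $ mpicc -o myprog myprog.c

    # A command line option overrides the environment variable.
    $ mpicc -cc=gcc -o myprog myprog.c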
If the compiler and linker variables are both set but do not match the compiler
you are using, the MPI program may fail to link; if it does link, it may not
execute correctly. For a sample error message, see “Compiler/Linker Mismatch”
on page D-15.
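To avoid a mismatch, set the compiler and linker variables to the same
toolchain. A minimal sketch, assuming gcc (names are illustrative):

    # Make the linker variable match the compiler explicitly.
    $ export MPICH_CC=gcc
    $ export MPICH_CLINKER=gcc
    $ mpicc -o myprog myprog.c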
Process Allocation
Normally, MPI jobs run with each node program (process) associated with a
dedicated QLogic host channel adapter hardware context, which is mapped to a
CPU.
If the number of node programs is greater than the available number of hardware
contexts, software context sharing increases the number of node programs that
can be run. Each adapter supports four software contexts per hardware context,
so up to four node programs (from the same MPI job) can share that hardware
context. There is a small additional overhead for each shared context.
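For example, on an adapter with 16 hardware contexts, 4-way sharing allows up
to 64 node programs from a single job. A hypothetical launch (the program name
is illustrative; mpirun locates the mpihosts file through its usual
mechanisms):

    # 64 node programs on a 16-hardware-context adapter:
    # each hardware context is shared by four processes.
    $ mpirun -np 64 ./myprog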
Table 5-6 shows the maximum number of contexts available for each adapter.
The default hardware context/CPU mappings can be changed on the QLE7240
and QLE7280. See “InfiniPath Hardware Contexts on the QLE7240 and
QLE7280” on page 5-11 for more details.
Context sharing is enabled by default. How the system behaves when context
sharing is enabled or disabled is described in “Enabling and Disabling Software
Context Sharing” on page 5-12.
Table 5-6. Available Hardware and Software Contexts

    Adapter     Available Hardware Contexts     Available Contexts when
                (same as number of              Software Context Sharing
                supported CPUs)                 is Enabled
    -------     ---------------------------     ------------------------
    QLE7140      4                              16
    QHT7140      8                              32
    QLE7240     16                              64
    QLE7280     16                              64