For Fortran 90 programs:
$ mpif90 -f90=pgf90 -show pi3f90.f90 -o pi3f90
pgf90 -I/usr/include/mpich/pgi5/x86_64 -c -I/usr/include pi3f90.f90 -c
pgf90 pi3f90.o -o pi3f90 -lmpichf90 -lmpich -lmpichabiglue_pgi5
The output for Fortran 95 programs is similar.
For C programs:
$ mpicc -cc=pgcc -show cpi.c
pgcc -c cpi.c
pgcc cpi.o -lmpich -lpgftnrtl -lmpichabiglue_pgi5
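In each case, the -show option prints the commands that the wrapper script would run, without compiling or linking anything, so you can verify which compiler, include paths, and libraries will be used.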
Compiler and Linker Variables
When you use environment variables (for example, $MPICH_CC) to select the
compiler that mpicc (and the other wrapper scripts) will use, the scripts also set
the matching linker variable (for example, $MPICH_CLINKER) if it is not already
set. When both an environment variable and a command line option (for
example, -cc=gcc) are used, the command line option takes precedence.
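For example, the following session (an illustration; the exact commands echoed
depend on your installation) selects pgcc through the environment and then
overrides it on the command line:

$ export MPICH_CC=pgcc
$ mpicc -show cpi.c
$ mpicc -cc=gcc -show cpi.c

The first mpicc invocation uses pgcc; the second uses gcc, because the
command line option takes precedence over $MPICH_CC.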
If both the compiler and linker variables are set but do not match the compiler
you are using, the MPI program may fail to link; if it does link, it may not execute
correctly.
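To avoid a mismatch, either leave the linker variable unset, so that the scripts
derive it from the compiler variable, or set both variables to the same compiler,
for example:

$ export MPICH_CC=pgcc
$ export MPICH_CLINKER=pgcc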
Process Allocation
Normally, MPI jobs are run with each node program (process) associated with a
dedicated QLogic IB adapter hardware context that is mapped to a CPU.
If the number of node programs is greater than the number of available hardware
contexts, software context sharing increases the number of node programs that
can be run. Each adapter supports four software contexts per hardware context,
so up to four node programs (from the same MPI job) can share that hardware
context. There is a small additional overhead for each shared context.
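For example, on an adapter that provides 16 hardware contexts (see Table 4-5
for the actual per-adapter values), context sharing would allow up to
16 × 4 = 64 node programs from the same job to run.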
Table 4-5 shows the maximum number of contexts available for each adapter.