Open MPI and Hybrid MPI/OpenMP Applications
Open MPI supports hybrid MPI/OpenMP applications, provided that MPI routines
are called only by the master OpenMP thread. This usage is known as the
funneled thread model. Instead of MPI_Init/MPI_INIT (for C/C++ and Fortran,
respectively), the program can call MPI_Init_thread/MPI_INIT_THREAD to
determine the level of thread support; the value MPI_THREAD_FUNNELED will
be returned.
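
For example, a C program might request and verify the funneled thread level
as follows. This is a minimal sketch; the error handling shown is illustrative:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int provided;
        /* Request the funneled model: only the master thread calls MPI */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        if (provided < MPI_THREAD_FUNNELED) {
            fprintf(stderr, "MPI_THREAD_FUNNELED is not supported\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        /* ... hybrid MPI/OpenMP work ... */
        MPI_Finalize();
        return 0;
    }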
To use this feature, the application must be compiled with both OpenMP and MPI
code enabled. To do this, use the -openmp or -mp flag (depending on your
compiler) on the mpicc compile line.
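
For example, assuming the source file is named hybrid.c (an illustrative
name), the compile line might look like one of the following; check your
compiler's documentation for the flag it accepts:

    mpicc -openmp hybrid.c -o hybrid     (Intel compiler)
    mpicc -mp hybrid.c -o hybrid         (PGI compiler)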
As mentioned previously, MPI routines can be called only by the master OpenMP
thread. The hybrid executable is run as usual with mpirun, but typically
only one MPI process is started per node, and the OpenMP library creates
additional threads to utilize all CPUs on that node. If there are sufficient
CPUs on a node, you may want to run multiple MPI processes and multiple
OpenMP threads per node.
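
As a sketch, the following command launches one MPI process on each of four
nodes; mpihosts is a hypothetical hostfile listing one slot per node:

    mpirun -np 4 --hostfile mpihosts ./hybrid

Each of the four processes then creates OpenMP threads according to the
OMP_NUM_THREADS environment variable, described below.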
The number of OpenMP threads is typically controlled by the OMP_NUM_THREADS
environment variable, set for example in the .bashrc file. (OMP_NUM_THREADS
is interpreted by the compiler's OpenMP runtime; it is not an Open MPI
environment variable.) Use this variable to adjust the split between MPI
processes and OpenMP threads. Usually, the number of MPI processes per node
times the number of OpenMP threads is set to match the number of CPUs per
node. For example, on a node with four CPUs running one MPI process and four
OpenMP threads, OMP_NUM_THREADS is set to four. OMP_NUM_THREADS applies on
a per-node basis.
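
For the four-CPU example above, the following line in .bashrc sets four
OpenMP threads for each MPI process:

    export OMP_NUM_THREADS=4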
See “Environment for Node Programs” on page 4-15 for information on setting
environment variables.