HP-MPI User's Guide (11th Edition)

Introduction
MPI concepts
Multilevel parallelism
By default, processes in an MPI application can only do one task at a
time. Such processes are single-threaded processes. This means that
each process has an address space together with a single program
counter, a set of registers, and a stack.
A process with multiple threads has one address space, but each of its
threads has its own program counter, registers, and stack.
Multilevel parallelism refers to MPI processes that have multiple
threads. Processes become multithreaded through calls to multithreaded
libraries, parallel directives and pragmas, or compiler
auto-parallelization. Refer to “Thread-compliant library” on page 57 for
more information on linking with the thread-compliant library.
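As a brief illustration (not taken from this guide), an MPI program that intends to make MPI calls from more than one thread typically requests a thread support level at startup. With a thread-compliant MPI library, the standard MPI_Init_thread call can be used in place of MPI_Init; the thread-level constants below are from the MPI standard:

```c
/* Sketch: requesting a thread support level from a thread-compliant
 * MPI library. Requires an MPI installation; build with mpicc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int provided;

    /* Ask for full multithreaded MPI (MPI_THREAD_MULTIPLE);
     * the library reports the level it actually grants. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE)
        printf("library granted thread level %d only\n", provided);

    MPI_Finalize();
    return 0;
}
```

If the granted level is lower than requested, the application must restrict which threads make MPI calls accordingly.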
Multilevel parallelism is beneficial for problems you can decompose into
logical parts for parallel execution; for example, a looping construct that
spawns multiple threads to do a computation and joins after the
computation is complete.
The example program “multi_par.f” on page 251 demonstrates
multilevel parallelism.
Advanced topics
This chapter only provides a brief introduction to basic MPI concepts.
Advanced MPI topics include:
Error handling
Process topologies
User-defined datatypes
Process grouping
Communicator attribute caching
The MPI profiling interface
To learn more about the basic concepts discussed in this chapter and
about advanced MPI topics, refer to MPI: The Complete Reference and
MPI: A Message-Passing Interface Standard.