HP-MPI User's Guide (11th Edition)

Glossary
locality domain (ldom) Consists of a
related collection of processors, memory, and
peripheral resources that compose a
fundamental building block of the system.
All processors and peripheral devices in a
given locality domain have equal latency to
the memory contained within that locality
domain.
mapped drive In a network, a drive
mapping assigns a drive letter of your
choice to a remote drive. For example, on
your local machine you might map S: to
refer to drive C: on a server. Each time S:
is referenced on the local machine, the
drive on the server is substituted behind
the scenes. A mapping can also refer to a
specific folder on the remote machine
rather than the entire drive.
message bin A message bin stores
messages according to message length.
You can define a message bin by
specifying, with the MPI_INSTR
environment variable, the byte range of
the messages to be stored in the bin.
message-passing model Model in which
processes communicate with each other by
sending and receiving messages.
Applications based on message passing are
nondeterministic by default. However, when
one process sends two or more messages to
another, the transfer is deterministic as the
messages are always received in the order
sent.
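The ordering guarantee above can be sketched in a minimal MPI program (assuming a standard MPI installation; compile with mpicc and launch with mpirun -np 2): rank 0 sends two messages to rank 1, and the second can never overtake the first.

```c
/* Sketch: pairwise message ordering. Run with exactly two ranks.
   Assumes an MPI implementation is installed (mpicc/mpirun). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int first = 1, second = 2;
        /* Two sends to the same destination: MPI guarantees
           they are received in the order they were sent. */
        MPI_Send(&first,  1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        MPI_Send(&second, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int a, b;
        MPI_Recv(&a, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Recv(&b, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("received %d then %d\n", a, b);  /* always in send order */
    }

    MPI_Finalize();
    return 0;
}
```

Note that the ordering guarantee applies only between a given sender and receiver pair; messages arriving from different senders may still interleave nondeterministically.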
MIMD Multiple instruction multiple data.
Category of applications in which many
instruction streams are applied concurrently
to multiple data sets.
MPI Message-passing interface. Set of
library routines used to design scalable
parallel applications. These routines provide
a wide range of operations that include
computation, communication, and
synchronization. MPI-2 is the current
standard supported by major vendors.
MPMD Multiple program multiple data.
Implementations of HP-MPI that use two or
more separate executables to construct an
application. This design style can be used to
simplify the application source and reduce
the size of spawned processes. Each process
may run a different executable.
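One way to launch an MPMD application is with an appfile, where each line names an executable and its process count; the host names and program names below are placeholders for illustration.

```shell
# appfile: one line per group of ranks (hypothetical hosts
# and executables; adjust to your cluster).
#   -h hostA -np 2 ./compute_master
#   -h hostB -np 6 ./compute_worker

# Launch all groups as a single MPI application:
mpirun -f appfile
```

All processes started from the appfile share MPI_COMM_WORLD, so the master and worker executables can communicate as one application.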
multilevel parallelism Refers to
multithreaded processes that call MPI
routines to perform computations. This
approach is beneficial for problems that can
be decomposed into logical parts for parallel
execution (for example, a looping construct
that spawns multiple threads to perform a
computation and then joins after the
computation is complete).
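A minimal sketch of multilevel parallelism, assuming an MPI library and an OpenMP-capable compiler (for example, mpicc -fopenmp): each MPI process spawns threads for a parallel loop, joins, and then combines results across processes.

```c
/* Sketch: a multithreaded MPI process (MPI + OpenMP).
   Assumes an MPI implementation and OpenMP support. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank;
    /* Request FUNNELED: only the main thread makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0;
    /* The loop spawns threads, computes in parallel, then joins. */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < 1000; i++)
        local += (double)i;

    double total;
    /* After the threads join, the main thread combines the
       per-process results across all ranks. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %.0f\n", total);

    MPI_Finalize();
    return 0;
}
```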
multihost A mode of operation in which
an MPI application runs across multiple
hosts of a cluster.
nonblocking receive Communication in
which the receiving process returns before a
message is stored in the receive buffer.
Nonblocking receives are useful when
communication and computation can be
effectively overlapped in an MPI application.
Use of nonblocking receives may also avoid
system buffering and memory-to-memory
copying.
nonblocking send Communication in
which the sending process returns before
the message data has been copied out of
the send buffer. The send buffer must not
be modified until the send completes.
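The two nonblocking entries above can be sketched together (assuming a standard MPI installation, run with exactly two ranks): MPI_Isend and MPI_Irecv return immediately, computation proceeds while the transfer is in flight, and MPI_Waitall marks the point at which the buffers are safe to touch again.

```c
/* Sketch: overlapping communication and computation with
   nonblocking calls. Two ranks exchange a buffer. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, peer;
    double sendbuf[1024], recvbuf[1024], work = 0.0;
    MPI_Request reqs[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    peer = 1 - rank;               /* requires exactly 2 ranks */

    for (int i = 0; i < 1024; i++)
        sendbuf[i] = rank + i;

    /* Both calls return before the transfer completes. */
    MPI_Isend(sendbuf, 1024, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(recvbuf, 1024, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &reqs[1]);

    /* Useful computation overlaps the communication, provided it
       touches neither sendbuf nor recvbuf. */
    for (int i = 0; i < 100000; i++)
        work += 1.0 / (i + 1.0);

    /* Completion point: sendbuf may be reused and recvbuf read. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d: recvbuf[0] = %.1f\n", rank, recvbuf[0]);
    MPI_Finalize();
    return 0;
}
```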