
Introduction
MPI concepts
Collective operations consist of routines for communication,
computation, and synchronization. These routines all specify a
communicator argument that defines the group of participating
processes and the context of the operation.
Collective operations are valid only for intracommunicators.
Intercommunicators are not allowed as arguments.
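For illustration, the following minimal sketch (not a listing from this guide, but built only from standard MPI-1 routines and the predefined MPI_COMM_WORLD communicator) exercises one routine from each category; the sum-of-ranks computation is chosen here purely as an example:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, sum;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Synchronization: no process continues until all have arrived. */
        MPI_Barrier(MPI_COMM_WORLD);

        /* Computation: combine one value from each process onto process 0. */
        MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        /* Communication: process 0 sends the result to every process. */
        MPI_Bcast(&sum, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("Process %d: sum of all ranks is %d\n", rank, sum);
        MPI_Finalize();
        return 0;
    }

Each call names the same communicator, so the same group of processes participates in all three operations.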
Communication
Collective communication involves the exchange of data among all
processes in a group. The communication can be one-to-many,
many-to-one, or many-to-many.
The single originating process in the one-to-many routines or the single
receiving process in the many-to-one routines is called the root.
Collective communications have three basic patterns:

• Broadcast and Scatter: Root sends data to all processes, including itself.
• Gather: Root receives data from all processes, including itself.
• Allgather and Alltoall: Each process communicates with each process, including itself.
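To make the many-to-one pattern concrete, the sketch below (an illustration, not a listing from this guide; it assumes at most 64 processes) has every process contribute its rank while the root gathers one value from each process, including itself:

    #include <mpi.h>
    #include <stdio.h>

    #define ROOT 0

    int main(int argc, char *argv[])
    {
        int rank, size;
        int ranks[64];    /* one entry per process; assumes size <= 64 */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Many-to-one: the root receives one integer from every
           process, including the copy it contributes itself. */
        MPI_Gather(&rank, 1, MPI_INT, ranks, 1, MPI_INT,
                   ROOT, MPI_COMM_WORLD);

        if (rank == ROOT) {
            int i;
            for (i = 0; i < size; i++)
                printf("ranks[%d] = %d\n", i, ranks[i]);
        }
        MPI_Finalize();
        return 0;
    }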
The syntax of the MPI collective functions is designed to be consistent
with point-to-point communications, but collective functions are more
restrictive than point-to-point functions. Some of the important
restrictions to keep in mind are:

• The amount of data sent must exactly match the amount of data specified by the receiver.
• Collective functions come in blocking versions only.
• Collective functions do not use a tag argument, meaning that collective calls are matched strictly according to the order of execution.
• Collective functions come in standard mode only.
For detailed discussions of collective communications, refer to Chapter 4,
“Collective Communication,” in the MPI 1.0 standard. The following
examples demonstrate the syntax used to code two collective operations,
a broadcast and a scatter:
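The original listings are not reproduced here; the following is a minimal sketch of the two calls, assuming a root process 0 that broadcasts a single integer parameter and scatters one entry of a table (sized here for at most 64 processes) to each process:

    #include <mpi.h>
    #include <stdio.h>

    #define ROOT 0

    int main(int argc, char *argv[])
    {
        int rank, size;
        int param;        /* value broadcast from the root */
        int table[64];    /* one entry per process; assumes size <= 64 */
        int mine;         /* this process's entry from the scatter */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == ROOT) {
            int i;
            param = 42;
            for (i = 0; i < size; i++)
                table[i] = i * i;
        }

        /* Broadcast: the root sends param to every process,
           including itself. */
        MPI_Bcast(&param, 1, MPI_INT, ROOT, MPI_COMM_WORLD);

        /* Scatter: the root sends one distinct entry of table to
           each process. The receive count (1 MPI_INT) exactly
           matches the amount sent to each process, as the
           restrictions above require. */
        MPI_Scatter(table, 1, MPI_INT, &mine, 1, MPI_INT,
                    ROOT, MPI_COMM_WORLD);

        printf("Process %d: param = %d, table entry = %d\n",
               rank, param, mine);

        MPI_Finalize();
        return 0;
    }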