5.3.3 MPI Selector - Which MPI Runs
Mellanox OFED contains a simple mechanism for system administrators and end-users to select
which MPI implementation they want to use. The MPI selector functionality is not specific to
any MPI implementation; it can be used with any implementation that provides shell startup files
that correctly set the environment for that MPI. The Mellanox OFED installer will automatically
add MPI selector support for each MPI that it installs. Additional MPIs not known to the Mellanox OFED installer can be listed in the MPI selector; see the mpi-selector(1) man page for details.
Note that the MPI selector only affects the default MPI environment for future shells. Specifically, if you use the MPI selector to select MPI implementation ABC, this default selection will not take effect until you start a new shell (e.g., log out and log in again). Other packages (such as environment modules) allow changing your environment to point to a new MPI implementation in the current shell; the MPI selector is not meant to duplicate or replace that functionality.
The MPI selector functionality can be invoked in one of two ways:
1. The mpi-selector-menu command.
This command is a simple, menu-based program that allows selection of the system-wide MPI (usually settable only by root) as well as a per-user MPI selection. It also shows the current selections. This command is recommended for all users.
2. The mpi-selector command.
This command is the CLI equivalent of mpi-selector-menu, providing the same functionality but without the interactive menus and prompts. It is suitable for scripting (see the example below).
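For example, a script might query and change the per-user default as follows. This is a minimal sketch: the flag spellings follow the mpi-selector(1) man page and should be verified against the installed version, and the name mvapich_gcc is illustrative (use a name reported by --list):

    # List the MPI implementations registered with the selector
    mpi-selector --list
    # Show the current system-wide and per-user selections
    mpi-selector --query
    # Set the per-user default (takes effect in new shells only)
    mpi-selector --set mvapich_gcc --user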
5.3.4 Compiling MPI Applications
Compiling MVAPICH Applications
Please refer to http://mvapich.cse.ohio-state.edu/support/mvapich_user_guide.html.
To review the installation's default configuration, check the configuration file:
/usr/mpi/<compiler>/mvapich-<mvapich-ver>/etc/mvapich.conf
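As a minimal sketch, a C application can be compiled with the MVAPICH mpicc wrapper. The bin/ location under the installation prefix shown above is an assumption; adjust the path to match your installation:

    # Compile hello.c against the MVAPICH installation
    # (<compiler> and <mvapich-ver> are placeholders in the path)
    /usr/mpi/<compiler>/mvapich-<mvapich-ver>/bin/mpicc -o hello hello.c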
Compiling Open MPI Applications
Please refer to http://www.open-mpi.org/faq/?category=mpi-apps.
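For Open MPI, the wrapper compilers supply the necessary include and library flags. A minimal sketch, assuming mpicc from the selected Open MPI installation is on the PATH:

    # Compile and link against the selected Open MPI installation
    mpicc -o hello hello.c
    # Show the flags the wrapper passes to the underlying compiler
    mpicc --showme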
5.4 Mellanox Messaging
Mellanox Messaging (MXM) provides enhancements to parallel communication libraries by fully utilizing the underlying networking infrastructure provided by Mellanox HCA and switch hardware. These enhancements take advantage of Mellanox networking hardware and include:
• Multiple transport support including RC and UD
• Proper management of HCA resources and memory structures
• Efficient memory registration
• One-sided communication semantics
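As a hedged run-time illustration, an Open MPI build with MXM support can be directed to use MXM and restricted to particular transports through the MXM_TLS environment variable. The MCA parameter and variable spellings below should be verified against the installed release:

    # Run over MXM, limiting it to the self, shared-memory, and UD transports
    mpirun -np 16 -mca pml cm -mca mtl mxm -x MXM_TLS=self,shm,ud ./my_mpi_app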