If you are using a network fabric different from the default fabric, use the -genv option to assign a
value to the I_MPI_FABRICS variable.
For example, to run an MPI program using the shm fabric, type in the following command:
$ mpirun -genv I_MPI_FABRICS shm -n <# of processes> ./myprog
For a DAPL-capable fabric, use the following command:
$ mpirun -genv I_MPI_FABRICS dapl -n <# of processes> ./myprog
To use shared memory for intra-node communication and the DAPL layer for inter-node
communication, use the following command:
$ mpirun -genv I_MPI_FABRICS shm:dapl -n <# of processes> ./myprog
or simply:
$ mpirun -n <# of processes> ./myprog
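If you want to confirm which fabric combination the library actually selected at run time, you can raise the debug level. I_MPI_DEBUG is an Intel MPI Library environment variable; a value of 2 or higher prints the selected fabric, although the exact output format depends on your library version:
$ mpirun -genv I_MPI_DEBUG 2 -n <# of processes> ./myprog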
To use shared memory for intra-node communication and TMI for inter-node communication, use
the following command:
$ mpirun -genv I_MPI_FABRICS shm:tmi -n <# of processes> ./myprog
To select shared memory for intra-node communication and OFED verbs for inter-node
communication, use the following command:
$ mpirun -genv I_MPI_FABRICS shm:ofa -n <# of processes> ./myprog
To use the multi-rail capabilities, set the I_MPI_OFA_NUM_ADAPTERS or the
I_MPI_OFA_NUM_PORTS environment variable.
The exact settings depend on your cluster configuration. For example, if you have two InfiniBand*
cards installed on your cluster nodes, use the following command:
$ export I_MPI_OFA_NUM_ADAPTERS=2
$ mpirun -genv I_MPI_FABRICS shm:ofa -n <# of processes> ./myprog
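Similarly, if each adapter on your nodes has two active ports (an assumption about your particular hardware), you can set the I_MPI_OFA_NUM_PORTS environment variable instead:
$ export I_MPI_OFA_NUM_PORTS=2
$ mpirun -genv I_MPI_FABRICS shm:ofa -n <# of processes> ./myprog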
To enable connectionless DAPL User Datagrams (DAPL UD), set the I_MPI_DAPL_UD environment
variable:
$ export I_MPI_DAPL_UD=enable
$ mpirun -genv I_MPI_FABRICS shm:dapl -n <# of processes> ./myprog
If you successfully run your application using the Intel MPI Library over any of the fabrics
described, you can move your application from one cluster to another and use different fabrics
between the nodes without re-linking. If you encounter problems, see Troubleshooting for possible
solutions.
Additionally, using mpirun is the recommended practice when working with a resource manager,
such as PBS Pro* or LSF*.
For example, to run the application in the PBS environment, follow these steps:
1. Create a PBS launch script that specifies the number of nodes requested and sets your Intel MPI
Library environment. For example, create a pbs_run.sh file with the following content:
#PBS -l nodes=2:ppn=1