User's Guide

Using the Intel® MPI Library
2.7. Running an MPI Program
To launch programs linked with the Intel® MPI Library, use the mpiexec command:
> mpiexec.exe -n <# of processes> myprog.exe
The wmpiexec utility is a GUI wrapper for mpiexec.exe. See the Intel® MPI Library Reference
Manual for more details.
To set the number of processes on the local node, use the -n option; it is the only required mpiexec option.
To set the host names and the number of processes on each host, use the -hosts option:
> mpiexec.exe -hosts 2 host1 2 host2 2 myprog.exe
If you are using a network fabric as opposed to the default fabric, use the -genv option to set the
I_MPI_FABRICS variable.
For example, to run an MPI program using the shm fabric, type in the following command:
> mpiexec.exe -genv I_MPI_FABRICS shm -n <# of processes> myprog.exe
To run the program with a configuration file, use the -configfile option:
> mpiexec.exe -configfile config_file
where the configuration file contains lines such as:
-host host1 -n 1 -genv I_MPI_FABRICS shm:dapl myprog.exe
-host host2 -n 1 -genv I_MPI_FABRICS shm:dapl myprog.exe
For an RDMA-capable fabric, use the following command:
> mpiexec.exe -hosts 2 host1 1 host2 1 -genv I_MPI_FABRICS dapl myprog.exe
You can select any supported device. For more information, see Section Selecting a Network
Fabric.
If you successfully run your application using the Intel® MPI Library, you can move your
application from one cluster to another and use different fabrics between the nodes without re-linking. If you encounter problems, see Troubleshooting for possible solutions.
2.8. Controlling MPI Process Placement
The mpiexec command controls how the ranks of the processes are allocated to the
nodes of the cluster. By default, the mpiexec command uses group round-robin
assignment, putting consecutive MPI processes on all processor cores of a node. This
placement algorithm may not be the best choice for your application, particularly for
clusters with symmetric multi-processor (SMP) nodes.
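The default group round-robin assignment described above can be sketched with the following illustrative Python model. The function name, the single cores-per-host parameter, and the wrap-around behavior are simplifying assumptions for illustration; this is not the Intel MPI Library's implementation.

```python
def group_round_robin(n_ranks, hosts, cores_per_host):
    """Illustrative model of group round-robin placement: consecutive
    ranks fill every core of one node before moving to the next node,
    wrapping around if more ranks remain than available cores.
    A sketch only, not Intel MPI internals."""
    # Flatten the cluster into an ordered list of (host, core) slots,
    # host-major, so consecutive ranks land on the same node first.
    slots = [(host, core) for host in hosts
             for core in range(cores_per_host)]
    # Map each rank to the host owning its slot.
    return {rank: slots[rank % len(slots)][0] for rank in range(n_ranks)}
```

For example, with two dual-core hosts, ranks 0 and 1 are placed on the first host and ranks 2 and 3 on the second, rather than alternating hosts rank by rank.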