User's Guide

Hello world: rank 1 of 4 running on clusternode1
Hello world: rank 2 of 4 running on clusternode2
Hello world: rank 3 of 4 running on clusternode2
Alternatively, you can explicitly set the number of processes to run on each host
by using argument sets. One common use case is the master-worker model. For
example, the following command distributes the four processes equally between
clusternode1 and clusternode2:
mpirun -n 2 -host clusternode1 ./myprog.exe : -n 2 -host clusternode2
./myprog.exe
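The master-worker model mentioned above can be sketched as a minimal MPI program in which rank 0 acts as the master and the remaining ranks act as workers. This is an illustrative example only, not taken from the manual; the per-worker computation (squaring the rank) is a placeholder assumption.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        /* Master: collect one result from each worker rank. */
        for (int i = 1; i < size; i++) {
            int result;
            MPI_Recv(&result, 1, MPI_INT, i, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("master received %d from rank %d\n", result, i);
        }
    } else {
        /* Worker: compute a placeholder result and send it to rank 0. */
        int result = rank * rank;
        MPI_Send(&result, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```

When launched with the argument-set command above, the first argument set places the master (rank 0) and one worker on clusternode1, and the second places the remaining two workers on clusternode2.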
See Also
For more details, see the Local Options topic in the Intel® MPI Library Reference
Manual for Linux OS.
For more information about controlling MPI process placement, see Controlling
Process Placement with the Intel® MPI Library.
2.9. Using Intel® MPI Library on Intel® Xeon Phi™
Coprocessor
Intel® MPI Library for the Intel® Many Integrated Core Architecture (Intel® MIC
Architecture) supports only the Intel® Xeon Phi™ coprocessor (codename: Knights
Corner).
2.9.1. Building an MPI Application
To build an MPI application for the host node and the Intel® Xeon Phi™ coprocessor,
follow these steps:
1. Establish the environment settings for the compiler and for the Intel® MPI Library:
$ . <install-dir>/composerxe/bin/compilervars.sh intel64
$ . <install-dir>/impi/intel64/bin/mpivars.sh
2. Build your application for the Intel® Xeon Phi™ coprocessor:
$ mpiicc -mmic myprog.c -o myprog.mic
3. Build your application for Intel® 64 architecture:
$ mpiicc myprog.c -o myprog
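For reference, a minimal myprog.c that can be built with both commands above, and that prints output in the same form as the "Hello world" example earlier in this section, might look like the following. This is an illustrative sketch, not the source shipped with the library.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);       /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);       /* total number of ranks */
    MPI_Get_processor_name(host, &len);         /* name of the executing node */

    printf("Hello world: rank %d of %d running on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}
```

The same source compiles unchanged for both targets; only the `-mmic` option and the output file name differ between the two build commands.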