5.3.4 Configuring Multi-Rail Support
Multi-Rail support enables the use of more than one active port on the adapter card, making better use of its resources and providing combined throughput across the ports in use.
To configure multi-rail support:
Specify the list of ports you would like to use through the MXM_RDMA_PORTS environment variable, in the form cardName:portNum:
-x MXM_RDMA_PORTS=cardName:portNum
For example, to use both ports of the mlx4_0 card:
mpirun -x MXM_RDMA_PORTS=mlx4_0:1,mlx4_0:2 <...>
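If preferred, the same selection can be made by exporting the variable in the shell before the run. The sketch below is illustrative only: it assumes an Open MPI style mpirun, where -x exports an environment variable to the launched ranks, and uses the example device name mlx4_0 and an arbitrary rank count.
# Sketch: equivalent dual-rail selection via the shell environment (assumes Open MPI style mpirun).
export MXM_RDMA_PORTS=mlx4_0:1,mlx4_0:2
# -x with only the variable name re-exports its current value to the ranks.
mpirun -np 16 -x MXM_RDMA_PORTS <...>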
5.3.5 Configuring MXM over the Ethernet Fabric
To configure MXM over the Ethernet fabric:
Step 1. Make sure the Ethernet port is active.
Run ibv_devinfo, which displays the list of cards and ports in the system. Verify in the ibv_devinfo output that the desired port has Ethernet in the link_layer field and that its state is PORT_ACTIVE (a filtering example is shown after these steps).
Step 2. Specify the ports you would like to use, if there is a non-Ethernet active port on the card:
mpirun -x MXM_RDMA_PORTS=mlx4_0:1 <...>
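As a convenience for Step 1, the output of ibv_devinfo can be narrowed to the fields of interest. The grep pattern below is an illustrative assumption, not part of the product; the field names themselves come from the ibv_devinfo output.
# Sketch: show only the card name, port number, state, and link layer of each port.
ibv_devinfo | grep -E "hca_id|port:|state:|link_layer:"
Once the chosen port reports state PORT_ACTIVE and link_layer Ethernet, pass it to MXM_RDMA_PORTS as shown in Step 2.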
5.4 Fabric Collective Accelerator
The Mellanox Fabric Collective Accelerator (FCA) is a unique solution for offloading collective operations from the Message Passing Interface (MPI) process to the server CPUs. As a system-wide solution, FCA does not require any additional hardware. The FCA manager creates a topology-based collective tree and orchestrates an efficient collective operation using the CPUs in the servers that are part of the collective operation. FCA accelerates MPI collective operation performance by up to 100 times, providing a reduction in the overall job runtime. Implementation is simple and transparent during the job runtime.
FCA is built on the following main principles:
Topology-aware Orchestration
The MPI collective logical tree is matched to the physical topology. The collective logical
tree is constructed to assure:
Maximum utilization of fast inter-core communication
Distribution of the results.
Communication Isolation
Collective communications are isolated from the rest of the traffic in the fabric using a private virtual network (VLane), eliminating contention with other types of traffic.
After MLNX_OFED installation, FCA can be found in the /opt/mellanox/fca directory.
For further configuration instructions, please refer to the FCA User Manual.
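As an illustration only, FCA is typically enabled per job from the MPI command line through the Open MPI MCA mechanism; the parameter name coll_fca_enable below is an assumption and should be confirmed against the FCA User Manual.
# Sketch: enabling the FCA collective component for a job (assumed MCA parameter name).
mpirun -np 64 -mca coll_fca_enable 1 <...>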