
o Ports 01-04 and 27-52 are assigned to the cluster's private management network, used by Bright Cluster Manager® to connect the master, login, CIFS gateway and compute nodes. The PowerEdge C6320 servers' Ethernet and iDRAC connections account for the majority of these ports.
o Ports 06-09 are used for the private network associated with NSS7.0-HA.
o The remaining ports, port 05 and ports 12-26, are allocated to the Lustre solution's private management network.
o Ports 10 and 11 are used for the PDUs.
For the 10 GbE configuration, deployment and management of the cluster are done over the 10 GbE network using the Dell EMC
Force10 S4820T switch, so the first virtual LAN on the S3048-ON, spanning ports 0-16, is not used. The other two virtual LANs are still
used for the same purposes as in the Intel® OPA or IB configurations.
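
As an illustration only, the following Python sketch encodes the port assignments listed above and verifies that no switch port is claimed by more than one private network; the dictionary keys and the helper function are hypothetical names, and only the port numbers come from this document.

```python
# Illustrative sketch: S3048-ON port-to-network assignments as described above,
# expressed as Python ranges so the layout can be sanity-checked for overlaps.
# Network names and the helper are hypothetical; port numbers follow the text.

ASSIGNMENTS = {
    "cluster_private_management": list(range(1, 5)) + list(range(27, 53)),  # ports 01-04, 27-52
    "nss70_ha_private":           list(range(6, 10)),                       # ports 06-09
    "lustre_private_management":  [5] + list(range(12, 27)),                # port 05, ports 12-26
    "pdu":                        [10, 11],                                 # ports 10-11
}

def check_no_overlap(assignments):
    """Verify that no switch port is claimed by more than one private network."""
    seen = {}
    for network, ports in assignments.items():
        for port in ports:
            if port in seen:
                raise ValueError(f"port {port:02d} assigned to both {seen[port]} and {network}")
            seen[port] = network
    return seen

if __name__ == "__main__":
    port_map = check_no_overlap(ASSIGNMENTS)
    for port in sorted(port_map):
        print(f"port {port:02d} -> {port_map[port]}")
```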
Table 1 Differences in Switching Infrastructure among Intel® OPA, IB FDR/EDR and 10 GbE Configurations

Top of Rack switch
  o Intel® OPA with C6320: 1 x Dell EMC Networking H1048-OPF switch; 1 x Force10 S3048-ON 1 GbE switch
  o IB EDR with C6320: 1 x SB7700 EDR switch; 1 x Force10 S3048-ON 1 GbE switch
  o IB FDR with FC430: 3 x Mellanox SX6036 FDR switches; 1 x Force10 S3048-ON 1 GbE switch
  o 10 GbE with FC430: 1 x Force10 S4820T 10 GbE switch; 1 x Force10 S3048-ON 1 GbE switch

Switches/IOAs in Dell EMC PowerEdge chassis
  o Intel® OPA with C6320: N/A
  o IB EDR with C6320: N/A
  o IB FDR with FC430: 1 x FN 410T 10 GbE I/O Aggregator; 1 link per chassis to Force10 S3048-ON
  o 10 GbE with FC430: 2 x FN 410T 10 GbE I/O Aggregators; 6 links up to Force10 S4820T and 2 links for stacking

Adapters in login nodes, head nodes, NFS servers, Lustre metadata servers and object storage servers, and CIFS gateway
  o Intel® OPA with C6320: Intel® Omni-Path Host Fabric Interface (HFI) 100 series card
  o IB EDR with C6320: Mellanox ConnectX-4 IB EDR adapter
  o IB FDR with FC430: Mellanox ConnectX-3 IB FDR adapter
  o 10 GbE with FC430: Intel X520 DA SFP+ DP 10 GbE low profile adapter

Interconnect on Dell EMC PowerEdge sleds
  o Intel® OPA with C6320: Intel® Omni-Path Host Fabric Interface (HFI) 100 series card
  o IB EDR with C6320: Mellanox ConnectX-4 IB EDR adapter
  o IB FDR with FC430: Mellanox ConnectX-3 FDR mezzanine adapter
  o 10 GbE with FC430: 10 GbE LOM
Dell EMC Networking H-Series OPA Switch
Intel® Omni-Path Architecture (OPA) is an evolution of the Intel® True Scale Fabric, the Cray Aries interconnect, and internal Intel® IP [9]. In
contrast to Intel® True Scale Fabric edge switches, which support 36 ports of InfiniBand QDR (40 Gbps) performance, the new Intel® Omni-
Path fabric edge switches support 48 ports at 100 Gbps. The switching latency of True Scale edge switches is 165-175 ns, while the
switching latency of the 48-port Omni-Path edge switch has been reduced to around 100-110 ns. The Omni-Path host fabric interface
(HFI) MPI messaging rate is expected to be around 160 million messages per second (Mmps) with a link bandwidth of 100 Gbps.
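
As a rough, back-of-the-envelope illustration of these figures, the Python sketch below multiplies out the per-port numbers quoted above; the aggregate throughput and latency-delta values it prints are derived for illustration and are not vendor specifications.

```python
# Back-of-the-envelope sketch using only the figures quoted above; derived
# aggregate numbers are illustrative, not vendor specifications.

OPA_PORTS = 48             # ports per Omni-Path edge switch
OPA_LINK_GBPS = 100        # per-port link bandwidth, Gbps
TRUE_SCALE_PORTS = 36      # ports per True Scale edge switch
TRUE_SCALE_LINK_GBPS = 40  # InfiniBand QDR per-port bandwidth, Gbps

opa_aggregate_tbps = OPA_PORTS * OPA_LINK_GBPS / 1000
true_scale_aggregate_tbps = TRUE_SCALE_PORTS * TRUE_SCALE_LINK_GBPS / 1000

print(f"Omni-Path edge switch:  {opa_aggregate_tbps:.1f} Tbps unidirectional aggregate")
print(f"True Scale edge switch: {true_scale_aggregate_tbps:.2f} Tbps unidirectional aggregate")

# Switching latency ranges quoted in the text (nanoseconds).
print(f"Per-hop latency reduction: ~{165 - 110} to {175 - 100} ns")
```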
Dell EMC Networking InfiniBand FDR and EDR Switch
Mellanox EDR adapters are based on a new-generation ASIC known as ConnectX-4, while the FDR adapters are based on
ConnectX-3. The theoretical unidirectional bandwidth for EDR is 100 Gb/s, versus 56 Gb/s for FDR. Another difference is that EDR
adapters are x16 adapters, while FDR adapters are available in x8 and x16 variants. Both of these adapters operate at a 4X link width.
The messaging rate for EDR can reach up to 150 million messages per second, compared with FDR ConnectX-3 adapters, which deliver
more than 90 million messages per second.
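
To make the bandwidth gap concrete, the short Python sketch below estimates the time to move a 1 GiB payload at the theoretical line rates quoted above, ignoring protocol overhead; the payload size is an arbitrary example.

```python
# Illustrative sketch: time to move a 1 GiB payload at the theoretical link
# rates quoted above (protocol and encoding overheads ignored).

PAYLOAD_BYTES = 1 << 30  # 1 GiB, arbitrary example size
LINKS_GBPS = {"IB EDR (ConnectX-4)": 100, "IB FDR (ConnectX-3)": 56}

for name, gbps in LINKS_GBPS.items():
    seconds = PAYLOAD_BYTES * 8 / (gbps * 1e9)
    print(f"{name}: {seconds * 1000:.1f} ms per 1 GiB at line rate")
```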
Software Components
Along with the hardware components, the solution includes the following software components:
o Bright Cluster Manager® 7.2
o Red Hat Enterprise Linux 7.2 (RHEL 7.2)
o CUDA 7.5