PowerEdge C4130 - 1U - Up to 4 accelerators per node
PowerEdge C4130, 2-socket server with Intel Xeon E5-2690 v4 processors
8 x 16GB RDIMM, 2400MT/s, Dual Rank
o 16 DIMM slots, DDR4 Memory
o 4GB/8GB/16GB/32GB DDR4 up to 2400MT/s
Up to 2 x 1.8” SATA SSD boot drives
Optional 96-lane PCIe 3.0 switch for certain accelerator configurations
iDRAC8, Dell EMC OpenManage Essentials
Available GPUs
o 4 x NVIDIA K80
o 4 x NVIDIA P100 - two versions are available:
The SXM2 module for NVLink-optimized servers provides the best performance and
strong scaling for hyperscale and HPC data centers running applications that scale
to multiple GPUs, such as deep learning.
The PCIe version allows HPC data centers to deploy the most advanced GPUs within
PCIe-based nodes to support a mix of CPU and GPU workloads.
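A quick way to confirm which P100 variant a node carries is to inspect the GPU interconnect topology with nvidia-smi: NVLink connections appear as "NV#" entries in the matrix, while PCIe-only connectivity shows entries such as "PHB" or "PIX". The short Python wrapper below is an illustrative sketch, assuming the NVIDIA driver and nvidia-smi are installed on the node; it is not part of the solution itself.

# Illustrative sketch: print the GPU interconnect topology on a C4130 node.
# "NV#" entries indicate NVLink links (SXM2 variant); "PHB"/"PIX" indicate PCIe.
import subprocess

def gpu_topology() -> str:
    """Return the nvidia-smi topology matrix as text."""
    result = subprocess.run(
        ["nvidia-smi", "topo", "-m"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(gpu_topology())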
2.2 Network Configuration
The Dell EMC HPC System for Life Sciences is available with an Intel OPA fabric and in two IB variants. A
Force10 S3048-ON GbE switch is also used in both the OPA and IB configurations; its purpose is described
below. In one IB variant, the Dell EMC PowerEdge FC430 sleds have 2:1 blocking FDR connectivity to the
top-of-rack FDR switch. The other IB variant is a 1:1 non-blocking EDR network for the C6320.
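For reference, the 2:1 and 1:1 figures express the blocking (oversubscription) ratio of a leaf switch: the ratio of aggregate downlink to aggregate uplink bandwidth. The small Python sketch below illustrates the arithmetic; the port counts used are assumptions for the example, not this solution's actual cabling.

# Illustrative sketch of blocking-ratio arithmetic; port counts are assumed.
def blocking_ratio(downlinks: int, uplinks: int,
                   down_gbps: float, up_gbps: float) -> float:
    """Aggregate downlink bandwidth over aggregate uplink bandwidth (1.0 = non-blocking)."""
    return (downlinks * down_gbps) / (uplinks * up_gbps)

# e.g. 24 FDR (56 Gb/s) downlinks fed by 12 FDR uplinks -> 2.0, i.e. 2:1 blocking
print(blocking_ratio(24, 12, 56, 56))    # 2.0
# e.g. 18 EDR (100 Gb/s) downlinks with 18 EDR uplinks -> 1.0, i.e. 1:1 non-blocking
print(blocking_ratio(18, 18, 100, 100))  # 1.0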
2.2.1 Management Network
Management traffic typically communicates with the Baseboard Management Controller (BMC) on the
compute nodes using IPMI. The management network is used to push images or packages from the master
node to the compute nodes and to report data from the compute nodes back to the master node. The Dell
EMC Networking S3048-ON and PowerConnect 2848 switches are considered for the management network.
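As an illustration of the kind of traffic this network carries, the sketch below wraps a standard ipmitool power-status query issued from the master node to a compute node's BMC. The BMC hostname and credentials are placeholders for the example, not the solution's defaults.

# Illustrative sketch: query a compute node's power state over the management
# network with ipmitool. The BMC hostname and credentials are placeholders.
import subprocess

def bmc_power_status(bmc_host: str, user: str, password: str) -> str:
    """Return the chassis power state reported by the node's BMC over IPMI."""
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", bmc_host,
         "-U", user, "-P", password, "chassis", "power", "status"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

# Example (placeholder address and credentials):
# print(bmc_power_status("node01-idrac.cluster.local", "root", "********"))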
Dell EMC Networking S3048-ON
A high-density, 1U switch with 48 1000BASE-T ports and four 10GbE uplinks, delivering non-blocking
line-rate performance and featuring the Open Network Install Environment (ONIE).
The port assignment of the Dell EMC Networking S3048-ON switch for the Intel® OPA or IB versions of the
solution is as follows.
Ports 01-04 and 27-52 are assigned to the cluster's private management network, used by Bright
Cluster Manager® to connect the master, login, CIFS gateway and compute nodes. The PowerEdge C6320
servers' Ethernet and iDRAC connections constitute a majority of these ports.
Ports 06-09 are used for the private network associated with NSS7.0-HA.
The remaining ports, port 05 and ports 12-26, are allocated to the Lustre solution for its private
management network.
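To make the layout above easy to audit, the sketch below encodes the assignments as Python ranges and checks that no port is claimed by two networks; the mapping mirrors the text above, and any ports not listed there are simply left unassigned in the sketch.

# Illustrative sketch: encode the S3048-ON port assignments described above
# and verify that no port is claimed by more than one network.
ASSIGNMENTS = {
    "cluster private management (Bright Cluster Manager)": [range(1, 5), range(27, 53)],
    "NSS7.0-HA private network": [range(6, 10)],
    "Lustre private management": [range(5, 6), range(12, 27)],
}

def check_no_overlap(assignments: dict) -> None:
    """Raise if any switch port appears in two networks; report unassigned ports."""
    seen = {}
    for network, port_ranges in assignments.items():
        for r in port_ranges:
            for port in r:
                if port in seen:
                    raise ValueError(
                        f"Port {port} assigned to both {seen[port]} and {network}")
                seen[port] = network
    unassigned = sorted(set(range(1, 53)) - set(seen))
    print(f"{len(seen)} ports assigned; unassigned: {unassigned}")

check_no_overlap(ASSIGNMENTS)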