2 System Design
The first step in designing the system is to decide upon the following four basic design considerations:
Type of workload
- Genomics/NGS data analysis only
- General purpose and Genomics/NGS data analysis
- Adding molecular dynamics simulation capacity
Sizing parameter (see the sizing sketch after this list)
- Number of compute nodes
- Genomes per day to be analyzed
Form factor of servers
- 2U shared infrastructure of high density that can host 4 compute nodes in one chassis (C6320)
- 2U shared infrastructure of very high density that can host 8 compute nodes in one chassis (FC430)
- 4U rack mount servers (R930)
- 1U rack mount servers (R430)
- 2U rack mount servers that can host up to 4 accelerators per node (C4130)
Types of interconnect: all three options below are available for any server except the FC430; Mellanox ConnectX-3 (InfiniBand FDR) is the only high-speed interconnect option for the FC430.
- Intel® Omni-Path Host Fabric Interface (HFI) 100 series card
- Mellanox ConnectX-4, Single Port, VPI EDR, QSFP28 Adapter
- Mellanox ConnectX-3, Single Port, VPI FDR, QSFP+ Adapter, Low Profile
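To illustrate how the sizing parameter translates into cluster scale, the short sketch below converts a genomes-per-day target into an estimated compute node count. The per-node throughput (genomes_per_node_per_day) and headroom values are hypothetical placeholders chosen purely for illustration; they are not benchmark figures from this solution.

    import math

    def compute_nodes_needed(genomes_per_day: float,
                             genomes_per_node_per_day: float = 2.0,
                             headroom: float = 0.10) -> int:
        """Estimate compute nodes for a genomes-per-day target.

        genomes_per_node_per_day and headroom are illustrative assumptions,
        not measured throughput for any server listed in this document.
        """
        raw = genomes_per_day / genomes_per_node_per_day
        # Round up and add headroom for scheduling and I/O overhead.
        return math.ceil(raw * (1.0 + headroom))

    # Example: a site that must process 50 genomes per day
    print(compute_nodes_needed(50))  # 28 nodes under the assumptions above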
The following sections give the technical specifications of the servers considered for the Dell EMC HPC Solution for Life Sciences.
2.1 Hardware Configuration
There are several considerations when selecting the servers for the compute, login, fat, and accelerator nodes that make up the Dell EMC HPC Solution for Life Sciences. The 1U PowerEdge R430 is the recommended server for the master node, login node, and CIFS gateway; the 4U PowerEdge R930 is recommended as the fat node; and the 2U PowerEdge C4130 is recommended to host accelerators. For the compute nodes, two options are available. These servers are not offered as standard off-the-shelf components; all the servers mentioned below are customizable, and the configurations that best fit life sciences applications are described below.
2.1.1 Master Node Configuration
The master node is responsible for managing the compute nodes and optimizing the overall compute capacity; hence, it is also known as the "head" node. Usually, the master and login nodes are the only nodes that communicate with the outside world, acting as an intermediary between the cluster and the outside network. A master node is also referred to as the front-end node because it provides the point of access to the cluster and the place to test the programs you want to run on it.