Dell Networking S4820T
• 1U high-performance ToR switch that provides 48 1/10GBASE-T ports supporting 100Mb/1Gb/10Gb
speeds and four 40GbE QSFP+ uplinks.
• Each 40GbE QSFP+ uplink can be broken out into four 10GbE ports by using breakout cables.
Dell Networking N4032F SFP Switch
• 24x 10GbE SFP+ auto-sensing (10Gb/1Gb) fixed ports
• Up to 32 10GbE ports by using breakout cables and an optional QSFP+ module
• One hot-swap expansion module bay
• Dual hot-swappable redundant power supplies (460W)
2.3 Storage Configuration
The performance requirements of HPC environments with ever-larger compute clusters have placed
unprecedented demands on the storage infrastructure, which consists of the following components:
• NFS storage solution with HA (NSS 7.0-HA)
• Dell EMC HPC Lustre Storage Solution
2.3.1 NSS 7.0-HA
NSS 7.0-HA is designed to enhance the availability of storage services to the HPC cluster by using a pair of
Dell PowerEdge servers and PowerVault storage arrays along with the Red Hat HA software stack. The HA
cluster consists of a pair of Dell PowerEdge servers and a network switch. The two PowerEdge servers have
shared access to disk-based Dell PowerVault storage in a variety of capacities, and both are directly
connected to the HPC cluster by using OPA, IB, or 10GbE. The two servers are equipped with two fence
devices: iDRAC8 Enterprise and an APC Power Distribution Unit (PDU). If a failure such as a storage
disconnection, a network disconnection, or a system hang occurs on one server, the HA cluster fails over the
storage service to the healthy server with the assistance of the two fence devices; it also ensures that the
failed server does not return to service without the administrator’s knowledge or control.
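The fence-before-failover ordering matters: the failed server must be verifiably powered off before the
storage service moves, so that the two servers can never write to the shared PowerVault storage at the same
time (a "split-brain" condition). In the deployed solution this is handled by the Red Hat HA stack and its fence
agents (for example, fence_ipmilan for iDRAC and fence_apc for the PDU); the Python sketch below is only
an illustrative model of that ordering, and every function and node name in it is hypothetical.

def fence_via_idrac(node: str) -> bool:
    """Stand-in for powering off `node` through its iDRAC8 (primary fence device)."""
    print(f"fencing {node} via iDRAC8 Enterprise ...")
    return True  # assume success for this sketch

def fence_via_pdu(node: str) -> bool:
    """Stand-in for cutting power to `node` at the APC PDU (secondary fence device)."""
    print(f"fencing {node} via APC PDU ...")
    return True

def failover(failed_node: str, healthy_node: str) -> None:
    # Fence first: the failed server must be confirmed off before the storage
    # service is relocated, so it cannot keep writing to the shared storage.
    if not (fence_via_idrac(failed_node) or fence_via_pdu(failed_node)):
        raise RuntimeError(f"could not fence {failed_node}; refusing to fail over")
    # Only after fencing succeeds is the NFS service started on the peer.
    print(f"starting NFS service on {healthy_node}")

failover("nfs-server-1", "nfs-server-2")

Trying the iDRAC first and falling back to the PDU mirrors the two-device setup described above: if the failed
server is unreachable through its iDRAC, power can still be cut externally at the PDU.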
The test bed used to evaluate NSS 7.0-HA functionality and performance is shown in Figure 2. The following
configuration was used:
• A 32-node HPC compute cluster (also known as “the clients”) was used to provide I/O network traffic
for the test bed.
• A pair of Dell PowerEdge R730 servers was configured as an active-passive HA pair serving as
an NFS server for the HPC compute cluster.
• Both NFS servers were connected to a shared Dell PowerVault MD3460 storage enclosure extended
with one Dell PowerVault MD3060e storage enclosure (Figure 2 shows a 480 TB solution with the two
PowerVault MD storage arrays) at the back end. The user data is stored on an XFS file system
created on this storage, and the XFS file system was exported to the clients by using NFS (a
minimal client-side I/O probe is sketched after this list).
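To make the client-side traffic concrete, the following sketch shows the shape of a sequential-write
throughput probe that one client node could run against the NFS mount. It is not the benchmark behind the
published results; the mount point /mnt/nss, the file size, and the block size are assumptions for illustration.

import os
import time

MOUNT = "/mnt/nss"   # assumed NFS mount point on a client node
BLOCK = 1 << 20      # 1 MiB per write
BLOCKS = 1024        # 1 GiB written in total

path = os.path.join(MOUNT, f"probe.{os.getpid()}")
buf = b"\0" * BLOCK

start = time.monotonic()
with open(path, "wb") as f:
    for _ in range(BLOCKS):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())  # force the data to the NFS server before stopping the clock
elapsed = time.monotonic() - start

os.remove(path)
print(f"sequential write: {BLOCK * BLOCKS / elapsed / 2**20:.1f} MiB/s")

The os.fsync call matters on NFS: without it, much of the data may still sit in the client's page cache when the
timer stops, inflating the reported throughput.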