Dell HPC NFS Storage Solution - High Availability (NSS5.5-HA) Configuration with Dell PowerVault
MD3460 and MD3060e Storage Arrays
Firmware and Drivers

  InfiniBand driver:      Mellanox OFED 2.1-1.0.0
  10GbE Ethernet driver:  bnx2x 1.78.17-0
  PERC H710P firmware:    21.2.0-0007
  PERC H710P driver:      megaraid_sas 06.700.06.00-rh1
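A minimal sketch for confirming that a server matches the driver versions above, assuming a RHEL node where Mellanox OFED (which provides `ofed_info`) and the kernel modules are installed. Each command falls back to a message so the script also completes on machines without the component present:

```shell
# Print installed driver versions for comparison against the table above.
# These checks are illustrative; the fallback messages keep the script
# usable on hosts that lack a given component.
ofed_info -s 2>/dev/null || echo "Mellanox OFED not installed"
modinfo -F version bnx2x 2>/dev/null || echo "bnx2x module not available"
modinfo -F version megaraid_sas 2>/dev/null || echo "megaraid_sas module not available"
```

`modinfo -F version` prints only the module's version string, which makes the output easy to diff against the expected values in a site-wide audit.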
Table 6. NSS5.5-HA client cluster configuration
Client / HPC Compute Cluster

  Clients
    64 PowerEdge M420 blade servers, 32 blades in each of two PowerEdge
    M1000e chassis, running Red Hat Enterprise Linux 6.4 x86-64.

  Chassis configuration
    Two PowerEdge M1000e chassis, each with 32 blades.
    Two Mellanox M4001F FDR10 I/O modules per chassis.
    Two PowerConnect M6220 I/O switch modules per chassis.

  InfiniBand
    Each blade server has one Mellanox ConnectX-3 dual-port FDR10
    mezzanine I/O card, running Mellanox OFED 2.0-2.0.5.

  InfiniBand fabric for I/O traffic
    Each PowerEdge M1000e chassis has two Mellanox M4001 FDR10 I/O
    module switches.
    Each FDR10 I/O module has four uplinks to a rack Mellanox SX6025
    FDR switch, for a total of 16 uplinks.
    The FDR rack switch has a single FDR link to the NFS server.

  Ethernet
    Each blade server has one onboard 10GbE Broadcom 57810 network
    adapter, using the bnx2x driver 1.72.51-0.

  Ethernet fabric for cluster deployment and management
    Each PowerEdge M1000e chassis has two PowerConnect M6220 Ethernet
    switch modules.
    Each M6220 switch module has one link to a rack PowerConnect 5224
    switch.
    There is one link from the rack PowerConnect switch to an Ethernet
    interface on the cluster master node.
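The fabric layout above can be spot-checked from any compute blade. A minimal sketch, assuming the standard OFED diagnostic tools (`ibstat`, `ibv_devinfo`) and `ethtool` are installed; the interface name `eth0` is an assumption and should be replaced with the blade's actual 10GbE interface. Fallback messages keep the script runnable on hosts without InfiniBand hardware:

```shell
# Illustrative per-node fabric checks; not part of the NSS5.5-HA tooling.
ibstat 2>/dev/null || echo "no InfiniBand HCA detected"            # HCA port state and rate
ibv_devinfo 2>/dev/null || echo "libibverbs tools not installed"   # verbs-level device info
ethtool eth0 2>/dev/null || echo "eth0 not present"                # 10GbE link status (assumed name)
```

On a correctly cabled blade, `ibstat` should report the ConnectX-3 port as Active at an FDR10 rate, and `ethtool` should show the Broadcom 57810 link up at 10000Mb/s.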