
Solution Architecture
The following table lists the main solution components at release time and in the test bed.

| Solution Component | At Release | Test Bed |
|---|---|---|
| CPUs: Storage Node, NVMe Node | 2x Intel Xeon Gold 6230 @ 2.1GHz, 20 cores | |
| CPUs: Management Node | 2x Intel Xeon Gold 5220 2.2G, 18C/36T, 10.4GT/s, 24.75M Cache, Turbo, HT (125W) DDR4-2666 | 2x Intel Xeon Gold 5118 @ 2.30GHz, 12 cores |
| Memory: Gateway/Ngenea | 12x 16GiB 2933 MT/s RDIMMs (192 GiB) | 24x 16GiB 2666 MT/s RDIMMs (384 GiB) |
| Memory: High Demand Metadata, Storage Node, NVMe Node | 12x 16GiB 2933 MT/s RDIMMs (192 GiB) | |
| Memory: Management Node | 12x 16GB DIMMs, 2666 MT/s (192 GiB) | 12x 8GiB 2666 MT/s RDIMMs (96 GiB) |
| Operating System | CentOS 7.6 | CentOS 7.5 |
| Kernel version | 3.10.0-957.12.2.el7.x86_64 | 3.10.0-862.14.4.el7.x86_64 |
| PixStor Software | 5.1.0.0 | 4.8.3 |
| Spectrum Scale (GPFS) | 5.0.3 | 5.0.3 |
| OFED Version | Mellanox OFED 4.6-1.0.1.0 | Mellanox OFED 4.3-3.0.2 |
| High Performance Network Connectivity | Mellanox ConnectX-5 Dual-Port InfiniBand EDR/100 GbE, and 10 GbE | Mellanox ConnectX-5 InfiniBand EDR |
| High Performance Switch | 2x Mellanox SB7800 (HA Redundant) | 1x Mellanox SB7790 |
| Local Disks (OS & Analysis/monitoring): all servers except Management node | 3x 480GB SSD SAS3 (RAID 1 + HS) for OS; PERC H730P RAID controller | 2x 300GB 15K SAS3 (RAID 1) for OS; PERC H330 RAID controller |
| Local Disks (OS & Analysis/monitoring): Management node | 3x 480GB SSD SAS3 (RAID 1 + HS) for OS & Analysis/Monitoring; PERC H740P RAID controller | 5x 300GB 15K SAS3 (RAID 5) for OS & Analysis/monitoring; PERC H740P RAID controller |
| Systems Management | iDRAC 9 Enterprise + Dell EMC OpenManage | iDRAC 9 Enterprise + Dell EMC OpenManage |
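Because the release and test-bed stacks differ in OS, kernel, OFED, and file system versions, it is worth confirming what a given node is actually running. Below is a minimal sketch using standard commands; the expected values correspond to the "At Release" column above, and querying PixStor's own version depends on ArcaStream tooling, so it is not shown.

```bash
# Verify a node against the "At Release" software stack listed above.
cat /etc/redhat-release   # expect a CentOS 7.6 release string
uname -r                  # expect: 3.10.0-957.12.2.el7.x86_64
ofed_info -s              # expect: MLNX_OFED_LINUX-4.6-1.0.1.0
mmdiag --version          # Spectrum Scale (GPFS) daemon version; expect 5.0.3
```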
High-Speed, Management and SAS Connections
On all the servers, the dedicated iDRAC port and the first 1GbE port (either LOM or NDC) are connected to the management switch.
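A quick way to confirm this wiring from a node is to check the iDRAC NIC selection and the state of the first 1GbE port. A brief sketch follows; the racadm attribute is standard on iDRAC9, but the interface name eno1 is illustrative and varies with the LOM/NDC installed.

```bash
racadm get iDRAC.NIC.Selection   # expect: Selection=Dedicated (dedicated iDRAC port)
ip link show eno1                # first 1GbE LOM/NDC port; expect "state UP"
```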
The PE R440 servers used as management servers have only two x16 slots, which are used for CX5 adapters connected to the high-speed network switches.
The PE R640 servers used as NVMe nodes have three x16 slots; slots 1 and 3 are used for CX5 adapters connected to the high-speed network switches.
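On any of these nodes, the presence and link state of the CX5 adapters can be confirmed from the OS. A short sketch, assuming the Mellanox OFED utilities from the table above are installed:

```bash
lspci | grep -i mellanox            # CX5 adapters visible on the PCIe bus
ibstat | grep -E "CA '|State|Rate"  # port state and rate; EDR links report Rate: 100
ibdev2netdev                        # map IB devices (mlx5_*) to network interfaces
```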
The PE R740 servers with riser configuration 6 have eight slots, 3 x16 and 5 x8; Figure 2 shows the slot allocation for the server. All R740 servers have slots 1 & 8 (x16) used for CX5 adapters connected to the high-speed network switches. Any Storage or HDMD server that is connected to one or two ME4 arrays only has two 12Gb SAS HBAs, in slots 3 & 5 (x8). Notice that slot 4 (x16) is only used for a CX5 adapter by the
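Whichever server model is involved, the slot population described above can be cross-checked from the OS. A minimal sketch; the slot designations reported by dmidecode follow the chassis labels shown in Figure 2.

```bash
sudo dmidecode -t slot | grep -E 'Designation|Current Usage'  # which PCIe slots are populated
lspci -nn | grep -Ei 'mellanox|sas'                           # locate CX5 adapters and 12Gb SAS HBAs
```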