
Dell HPC NFS Storage Solution High Availability (NSS-HA) Configurations with Dell PowerEdge 12th
Generation Servers
The Dell PowerEdge R620 supports an onboard 10 Gigabit Ethernet network daughter card for clusters that require 10GbE connectivity, which frees up a PCI-E slot in the NFS server.
Table 3 gives a detailed comparison between the Dell PowerEdge R620 and the Dell PowerEdge R710
used in NSS-HA solutions.
Table 3. Dell PowerEdge R620 vs. Dell PowerEdge R710

                          Dell PowerEdge R620                         Dell PowerEdge R710
Processor                 Intel Xeon processor E5-2680 @ 2.70GHz      Intel Xeon processor E5630 @ 2.53GHz
Form factor               1U rack                                     2U rack
Memory                    Recommended: 128GB (16 x 8GB DDR3           Recommended: 96GB (12 x 8GB DDR3
                          1600MHz)                                    1333MHz)
Slots                     3 PCI-E Gen 3 slots:                        4 PCI-E Gen 2 slots + 1 storage slot:
                          - Two x16 slots with x16 bandwidth,         - Two x8 slots
                            half-height, half-length                  - Two x4 slots
                          - One x16 slot with x8 bandwidth,           - One x4 storage slot
                            half-height, half-length
Drive bays                Up to ten 2.5" hot-plug SAS, SATA,          8 x 2.5" hard drive option
                          or SSD
Internal RAID controller  PERC H710P                                  PERC H700
InfiniBand support        QDR/FDR links                               QDR links
3.2. PCI-E slot recommendations in the Dell PowerEdge R620
In NSS-HA IP over InfiniBand (IPoIB) based solutions, two SAS HBA cards and one InfiniBand HCA card are
required to connect the NFS server directly to the shared storage stack and to the InfiniBand switch,
respectively. The PCI-E slot design of the Dell PowerEdge R620 (two x16 slots with x16 bandwidth, one
x16 slot with x8 bandwidth) raises the question of how best to distribute the SAS HBA cards and the
InfiniBand HCA card across the slots to achieve the best overall system performance.
There are two options as shown in Figure 3:
Option 1: The InfiniBand HCA card and one SAS HBA card occupy slots 2 and 3 (x16 bandwidth); the
other SAS HBA card is installed in slot 1 (x8 bandwidth).
Option 2: The two SAS HBA cards are installed in slots 2 and 3 (x16 bandwidth), while the InfiniBand
HCA card occupies slot 1 (x8 bandwidth).
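A quick way to sanity-check either placement is to compare each card's peak data rate against the usable bandwidth of its slot. The back-of-envelope sketch below uses standard published PCI-E Gen 3, InfiniBand, and 6Gb SAS per-lane figures; the specific card throughput numbers are illustrative assumptions, not measurements from this solution.

```python
# Back-of-envelope PCI-E bandwidth check for the two slot-assignment options.
# Per-lane rates are standard published figures; card throughput estimates
# are assumptions for illustration, not measured values from this solution.

PCIE_GEN3_PER_LANE_GBS = 0.985   # 8 GT/s with 128b/130b encoding, GB/s per lane

def pcie_slot_bandwidth(lanes: int) -> float:
    """Usable one-direction bandwidth of a PCI-E Gen 3 slot, in GB/s."""
    return lanes * PCIE_GEN3_PER_LANE_GBS

# Approximate peak one-direction data rates of the cards (GB/s):
FDR_HCA_GBS = 6.8    # FDR 4x InfiniBand: 56 Gb/s signaling, 64/66 encoding
QDR_HCA_GBS = 4.0    # QDR 4x InfiniBand: 40 Gb/s signaling, 8b/10b encoding
SAS_HBA_GBS = 4.8    # 6Gb SAS HBA, two x4 wide ports, 8b/10b encoding

x8_slot = pcie_slot_bandwidth(8)    # the single x8-bandwidth slot (slot 1)
x16_slot = pcie_slot_bandwidth(16)  # the two x16-bandwidth slots (slots 2, 3)

# Option 1 puts a SAS HBA in the x8 slot; Option 2 puts the HCA there.
# Either way, every card lands in a slot whose bandwidth exceeds the card's
# own peak rate, so neither placement starves a card at the PCI-E level.
for card_rate in (FDR_HCA_GBS, QDR_HCA_GBS, SAS_HBA_GBS):
    assert card_rate < x8_slot < x16_slot

print(f"x8 slot: {x8_slot:.2f} GB/s, x16 slot: {x16_slot:.2f} GB/s")
```

Since no single card saturates even the x8 slot, the practical difference between the two options comes from aggregate traffic patterns rather than raw slot limits.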