Server Configurations
| Server configuration | PowerEdge R730xd 12+3, 3xRep | PowerEdge R730xd 16+0, EC 3+2 | PowerEdge R730xd 16r+1, 3xRep | PowerEdge R730xd 16+1, EC 3+2 | PowerEdge R730xd 16j+1, 3xRep |
|---|---|---|---|---|---|
| OS disk | 2x 500 GB 2.5" | 2x 500 GB 2.5" | 2x 500 GB 2.5" | 2x 500 GB 2.5" | 2x 500 GB 2.5" |
| Data disk type | HDD 7.2K SAS 12 Gbps, 4 TB | HDD 7.2K SAS 12 Gbps, 4 TB | HDD 7.2K SAS 12 Gbps, 4 TB | HDD 7.2K SAS 12 Gbps, 4 TB | HDD 7.2K SAS 12 Gbps, 4 TB |
| HDD quantity | 12 | 16 | 16 | 16 | 16 |
| Number of Ceph write journal devices | 3 | 0 | 1 | 1 | 1 |
| Ceph write journal device type | Intel SATA SSD S3710 (6 Gb/s) | n/a | Intel P3700 PCIe NVMe HHHL AIC | Intel P3700 PCIe NVMe HHHL AIC | Intel P3700 PCIe NVMe HHHL AIC |
| Ceph write journal device size (GB) | 200 | 0 | 800 | 800 | 800 |
| PERC controller model | PERC H730, 1 GB cache | PERC H730, 1 GB cache | PERC H730, 1 GB cache | PERC H730, 1 GB cache | PERC H730, 1 GB cache |
| PERC controller configuration for HDDs | RAID | RAID | RAID | RAID | JBOD (PERC pass-through mode) |
| Raw capacity for Ceph OSDs (TB) | 48 | 64 | 64 | 64 | 64 |
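In the journaled configurations, each FileStore OSD keeps its write journal on a partition of a shared flash device: the 12+3 layout spreads 12 HDDs across three S3710 SSDs (four journals per SSD), while the 16+1 layouts place all 16 journals on the single P3700. A minimal provisioning sketch using ceph-disk, the OSD preparation tool of that Ceph generation; the device names and journal size below are illustrative, not taken from the guide:

    # /etc/ceph/ceph.conf -- journal partition size carved per OSD on the
    # shared NVMe device (illustrative: ~800 GB across 16 journals):
    #   [osd]
    #   osd journal size = 49152        # MB
    #
    # One OSD per data HDD; each invocation allocates a fresh journal
    # partition on /dev/nvme0n1 (hypothetical names for the 16 HDDs):
    for disk in /dev/sd{c..r}; do
        ceph-disk prepare --fs-type xfs "$disk" /dev/nvme0n1
    done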
While the Dell PowerEdge R730xd provides a great deal of flexibility in the layout and configuration of its IO subsystems, the choice was limited to the configurations listed above. This decision was based on performance data from the configuration variations tested during baselining, which yielded the following data points (illustrative configuration and benchmark sketches follow the list):

- SATA SSDs perform better when the PERC is configured in JBOD pass-through mode than as RAID0 single-disk devices: in JBOD pass-through mode they deliver higher sequential and random IO throughput and lower latency.
- SAS HDDs deliver higher random small-block IO throughput when configured as RAID0 single-disk devices than as JBOD pass-through devices, because the PERC H730 Mini cache is enabled for RAID0 devices.
- SAS HDDs deliver higher sequential write throughput when configured as Non-RAID devices. However, it was determined that the average disk access pattern of Ceph (which is more random at the individual-disk level) benefits more from the presence of the RAID cache than from the higher write bandwidth.
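The two PERC layouts compared above can be expressed with perccli (Dell's build of the LSI storcli utility); a minimal sketch, assuming controller 0 and enclosure 32, both illustrative IDs to be read from perccli /c0 show:

    # JBOD pass-through for a SATA SSD in slot 0:
    perccli /c0 set jbod=on          # enable the JBOD personality on the controller
    perccli /c0/e32/s0 set jbod      # expose the SSD directly to the OS

    # Single-disk RAID0 for a SAS HDD in slot 2, keeping the H730 cache
    # active (write-back, read-ahead, cached IO); repeat per data HDD:
    perccli /c0 add vd type=raid0 drives=32:2 wb ra cached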
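The underlying baseline measurements can be reproduced with a raw-device fio run; a sketch of the random small-block test, assuming fio as the load generator (the device path and job parameters are illustrative, and the run destroys data on the target device):

    # 4 KB random writes straight to one data disk, bypassing the page cache:
    fio --name=randwrite-baseline --filename=/dev/sdc \
        --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
        --iodepth=32 --numjobs=4 --runtime=60 --time_based \
        --group_reporting

    # Sequential-throughput variant: substitute --rw=write --bs=4m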