While the overall physical setup, server types, and number of systems remained unchanged, the configuration of the OSD nodes' storage subsystems was altered. Throughout the benchmark tests, different I/O subsystem configurations were used to determine the best-performing configuration for a specific usage scenario. Table 6, Table 7, and Table 8 list the configurations used in the benchmark tests.
Server and Ceph Storage Configurations Tested in Benchmarks

OSD node configuration         12+3                16+0                16+1
OSD-to-Journal Ratio [drives]  12+3                16+0                16+1
HDDs                           12                  16                  16
HDD RAID mode                  Single-disk RAID0   Single-disk RAID0   Single-disk RAID0
SATA SSDs                      3                   0                   0
SSD RAID mode                  JBOD (1)            JBOD                JBOD
NVMe SSDs                      0                   0                   1
Network                        1x 10 GbE Front-End and 1x 10 GbE Back-End (all configurations)

(1) JBOD indicates PERC pass-through mode.
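The paper does not reproduce the provisioning commands behind these modes. As a minimal sketch, assuming a PERC controller managed with the perccli64 utility (the controller, enclosure, and slot IDs below are illustrative, not taken from the paper), the single-disk RAID0 and pass-through modes could be set up as follows:

    # Create a single-disk RAID0 virtual disk from the HDD in enclosure 32, slot 0
    perccli64 /c0 add vd type=raid0 drives=32:0

    # Enable the JBOD personality, then expose a journal SSD to the OS in pass-through mode
    perccli64 /c0 set jbod=on
    perccli64 /c0/e32/s12 set jbod

Similarly, in the 12+3 and 16+1 layouts each OSD data disk is paired with a journal partition carved from a shared SATA or NVMe SSD. A hedged example using the ceph-disk tool that shipped with this Ceph generation, with placeholder device names:

    # HDD /dev/sdb holds the OSD data; the journal is allocated on SSD /dev/sdm
    ceph-disk prepare --cluster ceph /dev/sdb /dev/sdm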
Software Components Used for the Testbed

Ceph               Red Hat Ceph Storage 1.3.2
Operating System   Red Hat Enterprise Linux 7.2
Tools              Ceph Benchmarking Tool (CBT) and FIO 2.2.8
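CBT orchestrates FIO runs across the load-generating clients. As an illustrative sketch only (not a command taken from the paper), a 4 KB random-write test against an RBD image could be launched like this; the pool name, image name, and run parameters are placeholders:

    fio --ioengine=rbd --clientname=admin --pool=rbd --rbdname=fio_test \
        --name=randwrite-4k --rw=randwrite --bs=4k --iodepth=32 \
        --time_based --runtime=300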