
Dell PowerEdge R730xd Performance and Sizing Guide for Red Hat Ceph Storage - A Dell Red Hat Technical White Paper 24
Disk IO Baseline (results are averages)

| Metric           | OSD: Seagate 4TB SAS    | Journal: Intel DC S3700 200GB | Journal: Intel DC P3700 800GB |
|------------------|-------------------------|-------------------------------|-------------------------------|
| Random Read      | 314 IOPS (8K blocks)    | 72767 IOPS (4K blocks)        | 374703 IOPS (4K blocks)       |
| Random Write     | 506 IOPS (8K blocks)    | 56483 IOPS (4K blocks)        | 101720 IOPS (4K blocks)       |
| Sequential Read  | 189.92 MB/s (4M blocks) | 514.88 MB/s (4M blocks)       | 2201 MB/s (4M blocks)         |
| Sequential Write | 158.16 MB/s (4M blocks) | 298.35 MB/s (4M blocks)       | 1776 MB/s (4M blocks)         |
| Read Latency     | 12.676 ms (8K blocks)   | 0.443 ms (4K blocks)          | 0.682 ms (4K blocks)          |
| Write Latency    | 7.869 ms (8K blocks)    | 0.565 ms (4K blocks)          | 0.652 ms (4K blocks)          |
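A quick sanity check on figures like these is to convert IOPS at a given block size into bandwidth. The helper below is a generic illustration (not part of the paper's tooling); it shows why the HDD's random small-block throughput is so much lower than its sequential bandwidth.

```python
def iops_to_mib_s(iops: float, block_kib: float) -> float:
    """Convert an IOPS figure at a given block size (in KiB) to MiB/s."""
    return iops * block_kib / 1024

# The SAS HDD's 314 random-read IOPS at 8K blocks amounts to only ~2.45 MiB/s,
# far below its ~190 MB/s sequential rate -- one reason HDD OSDs are paired
# with SSD/NVMe journals for small, random writes.
print(round(iops_to_mib_s(314, 8), 2))
```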
As previously stated, this data was obtained to establish a performance baseline of the systems in their current configuration, not to measure the peak performance of each device out of context. Individual components may therefore perform better in other systems or when tested in isolation.
One such case was found when testing the SAS HDDs behind the PERC H730 Mini RAID controller. Tested in isolation, a single 4TB SAS drive achieves about 190 MB/s in both sequential read and sequential write patterns. When all drives are tested in parallel, write bandwidth is limited by the RAID controller and the disks operating in RAID0 mode.
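Device baselines like the ones above are commonly gathered with a synthetic IO generator such as fio. This section does not name the exact tool or parameters used, so the job file below is purely an illustrative sketch; the device path, block size, queue depth, and runtime are assumptions, not the paper's settings.

```ini
; Hypothetical fio job for a 4K random-read IOPS baseline.
; All values here are illustrative, not taken from the white paper.
[global]
ioengine=libaio
direct=1          ; bypass the page cache so the device itself is measured
time_based=1
runtime=60

[rand-read-4k]
filename=/dev/sdX ; replace with the device under test
rw=randread
bs=4k
iodepth=32
```

Running the equivalent sequential jobs (rw=read, rw=write with bs=4m) would produce the bandwidth rows of the table above.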
3.7 Benchmarking with CBT
For automation of the actual Ceph benchmarks, an open-source utility called the Ceph Benchmarking
Tool (CBT) was used. It is available at https://github.com/ceph/cbt.
CBT is written in Python and takes a modular approach to Ceph benchmarking. It can use different benchmark drivers to examine various layers of the Ceph storage stack, including RADOS, the RADOS Block Device (RBD), the RADOS Gateway (RGW), and KVM. This paper examines storage performance at the core RADOS layer, for which CBT's driver uses the 'rados bench' benchmark that ships with Ceph. CBT's architecture is depicted below.
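CBT's radosbench driver wraps the same 'rados bench' command that can be run by hand against a cluster. A minimal manual invocation looks like the following; the pool name, object size, duration, and concurrency are illustrative, not the settings used in this paper.

```shell
# Write 4 MB objects to the pool 'testpool' for 60 seconds with 16
# concurrent operations, keeping the objects so they can be read back.
rados bench -p testpool 60 write -b 4194304 -t 16 --no-cleanup

# Sequentially read back the objects written above.
rados bench -p testpool 60 seq -t 16
```

CBT automates runs like this across clients and parameter sweeps, and collects the resulting throughput and latency figures.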