3.6 Performance Baselining
Before running benchmark scenarios that exercise the higher-layer Ceph protocols, it is recommended to establish a known performance baseline of all relevant subsystems, which are:
- HDDs and SSDs (SATA + NVMe)
- Network (10 GbE Front-End and Back-End)
- CPU
The I/O-related benchmarks on the storage and network subsystems are referenced directly against vendor specifications, whereas the CPU benchmarks verify that all systems perform equally and provide a point of comparison for future benchmarks. As such, the following baseline benchmarks have been conducted:
Server Subsystem Baseline Tests

Subsystem | Benchmark Tool           | Benchmark Methodology
CPU       | Intel-linpack-11.3.1.002 | Single-Core / Multi-Core Floating-Point Calculation
Network   | iperf-2.0.8              | Single TCP-Stream Benchmark, All-to-All
SAS HDD   | fio-2.2.8                | 8K random read/write and 4M sequential read/write on top of XFS
SATA SSD  | fio-2.2.8                | 4K random read/write and 4M sequential read/write on top of XFS
NVMe SSD  | fio-2.2.8                | 4K random read/write and 4M sequential read/write on top of XFS
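To illustrate how the network and storage baselines above are typically run, the following command lines are a minimal sketch using iperf 2.x and fio. The hostname, mount point, and job parameters (queue depth, file size, runtime) are placeholder assumptions, not values taken from this guide.

```
# Network baseline, single TCP stream: start iperf in server mode on one node,
# then connect from every other node in turn so that all node pairs are covered
# (all-to-all). "cephosd1" is a placeholder hostname.
iperf -s                          # on the receiving node
iperf -c cephosd1 -t 60 -i 10     # on the sending node, one TCP stream for 60 s

# Storage baseline with fio against a file on an XFS mount; /mnt/osd0 is a
# placeholder. 4K random read as used for the SSD/NVMe tests (use --bs=8k for
# the SAS HDD case):
fio --name=randread --filename=/mnt/osd0/fio.test --rw=randread --bs=4k \
    --ioengine=libaio --direct=1 --iodepth=32 --size=10G \
    --runtime=300 --time_based --group_reporting

# 4M sequential write on the same file:
fio --name=seqwrite --filename=/mnt/osd0/fio.test --rw=write --bs=4M \
    --ioengine=libaio --direct=1 --iodepth=16 --size=10G \
    --runtime=300 --time_based --group_reporting
```

Repeating the fio jobs with --rw=randwrite and --rw=read covers the remaining read/write combinations listed in the table.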
CPU testing has been performed with the Intel LinPACK benchmark, running suitable problem sizes given each server's CPU resources.
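A minimal sketch of such a run is shown below; the binary and input-file names (xlinpack_xeon64, lininput_xeon64) are those shipped with the Intel Optimized LINPACK Benchmark package and may differ between releases, and the problem sizes are the ones reported in the table that follows.

```
# Run from the directory where the Intel LinPACK package is unpacked.
# The problem size (e.g. 30000 for the dual-socket servers, 22000 for the
# R220) is configured in the input file, not on the command line.

# Multi-threaded run: allow LinPACK to use every core in the system.
export OMP_NUM_THREADS=$(nproc)
./xlinpack_xeon64 lininput_xeon64

# Single-threaded run: restrict LinPACK to a single core.
export OMP_NUM_THREADS=1
taskset -c 0 ./xlinpack_xeon64 lininput_xeon64
```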
CPU Baseline (averaged results)

Server Type                                 | LinPACK Multi-Threaded                 | LinPACK Single-Threaded
PowerEdge R730xd (2x Intel Xeon E5-2630 v3) | 377.5473 GFlops (Problem Size = 30000) | 47.0928 GFlops (Problem Size = 30000)
PowerEdge R630 (2x Intel Xeon E5-2650 v3)   | 622.7303 GFlops (Problem Size = 30000) | 41.2679 GFlops (Problem Size = 30000)
PowerEdge R220 (1x Intel Pentium G1820)     | 20.0647 GFlops (Problem Size = 22000)  | 10.3612 GFlops (Problem Size = 22000)