
Network performance measurements were taken by running point-to-point connection tests in a fully meshed approach; that is, each server's connection was tested against every available endpoint of the other servers. The tests were run one at a time and therefore do not measure the switch backplane's combined throughput. Although the physical line rate of each individual link is 10000 MBit/s, the results fall within ~1.5% of line rate, which is consistent with the expected TCP/IP overhead. The MTU used was 1500. The TCP window size was 2.50 MByte, the default determined by the iperf tool.
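The guide does not reproduce the exact iperf command line; a minimal sketch of a single point-to-point test in the mesh, assuming iperf2 and hypothetical host names, might look like this:

    # On the receiving endpoint (e.g. the R730xd under test):
    iperf -s

    # On the sending endpoint, targeting one peer of the mesh;
    # -t 60 runs for 60 seconds, -i 10 prints interim results.
    iperf -c r730xd-01 -t 60 -i 10

Each server pair in the full mesh would be tested in this fashion, one connection at a time, with the reported figures being averages over the runs.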
Network Baseline (results are averages)

Server Type                       | PowerEdge R730xd, Intel X520 LOM | PowerEdge R630, Intel X520 LOM | PowerEdge R220, Intel X520 LOM
PowerEdge R730xd, Intel X520 LOM  | 9889.45 MBit/s                   | 9886.53 MBit/s                 | 9891.02 MBit/s
PowerEdge R630, Intel X520 LOM    | 9888.33 MBit/s                   | 9895.50 MBit/s                 | 9886.07 MBit/s
PowerEdge R220, Intel X520 LOM    | 9892.92 MBit/s                   | 9892.80 MBit/s                 | 9893.08 MBit/s
Storage performance was measured thoroughly in order to determine the maximum performance of each individual component. The tests were run on all devices in the system in parallel to ensure that the backplanes, IO hubs and PCI bridges are factored in as well and do not pose a bottleneck. The fio job spec files used in these benchmarks can be found at https://github.com/red-hat-storage/dell-ceph-psg. Each job was assembled from three components: a global include file, a scenario-specific include file stating the IO pattern, and a job-specific file containing the target devices (an illustrative sketch of such an assembled job follows the component list below).
include-global.fio – the global settings for all jobs run by fio, setting the access method, IO engine, run time and ramp time for each job

include-<target-device>-all-<test-type>-<access-pattern>.fio – a scenario-specific include file setting benchmark-specific parameters such as block size, queue depth, access pattern and level of parallelism:
  o target-device – either journal (SATA SSDs) or OSDs (4TB SAS HDDs)
  o all – stating that all devices of the specified type are benchmarked in parallel
  o test-type – denoting the nature of the test, looking to quantify either sequential or random IO or access latency
  o access-pattern – specifying which IO access pattern this test uses: read, write or read-write mixed
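The actual job specs are published in the repository referenced above; purely as an illustration of how the three components fit together, an assembled job for a sequential-write test against the journal SSDs might look like the sketch below. All parameter values, device names and mount points here are assumptions, not the published settings.

    ; Assembled from three parts (illustrative values only):
    ;   include-global.fio, include-journal-all-seq-write.fio and a
    ;   job-specific file listing the target devices.
    [global]
    ; global include: access method, IO engine, run time, ramp time
    direct=1
    ioengine=libaio
    time_based=1
    runtime=300
    ramp_time=60
    ; scenario include: block size, queue depth, access pattern, parallelism
    bs=4m
    iodepth=16
    rw=write

    ; job-specific part: one job per target device, run in parallel
    [journal-sdb]
    directory=/mnt/journal-sdb
    size=64g

    [journal-sdc]
    directory=/mnt/journal-sdc
    size=64g

Splitting these settings across the include files named above keeps the per-scenario and per-device parts reusable across the whole test matrix.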
To run the tests as close as possible to the way Ceph utilizes the systems, they were run on files in an XFS file system created on the block devices under test, using the formatting options from the ceph.conf file: -f -i size=2048. Before each benchmark run, all files were filled with random data up to the maximum file system capacity to ensure steady-state performance; write benchmarks were executed before read benchmarks to account for different NAND behavior during reads. The results are reported as a per-device average.
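As a rough illustration of the preparation steps described above, one device could be formatted and pre-filled along these lines; the device name, mount point and fill-job parameters are assumptions, and only the -f -i size=2048 options come from the ceph.conf used in the guide:

    # Format the device under test with the XFS options Ceph uses
    mkfs.xfs -f -i size=2048 /dev/sdb
    mkdir -p /mnt/osd-sdb
    mount /dev/sdb /mnt/osd-sdb

    # Pre-fill the file system before benchmarking so that subsequent
    # runs measure steady-state behavior; fio stops when the fs is full.
    fio --name=prefill --directory=/mnt/osd-sdb --rw=write --bs=4m \
        --ioengine=libaio --direct=1 --fill_device=1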