Dell PowerEdge R730xd Performance and Sizing Guide for Red Hat Ceph Storage - A Dell Red Hat Technical White Paper 25
Figure: CBT Diagram
The utility is installed on the admin VM. From there, it communicates with the various servers in different capacities via pdsh, as follows:
- Head Node: a system with administrative access to the Ceph cluster, used to create pools and RBDs, change the configuration, or even re-deploy the entire cluster as part of a benchmark run
- Clients: the systems that have access to the Ceph cluster and from which CBT generates load on the cluster, either with locally installed tools such as fio, rados, or COSBench, or by running VMs that access the cluster
- OSDs/MONs: on these nodes, CBT triggers performance collection with tools such as valgrind, perf, collectl, or blktrace during the benchmark run and transfers their telemetry back to the head node after each execution
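The role split above maps directly onto the cluster section of a CBT configuration file. A minimal sketch in CBT's YAML syntax, with hypothetical hostnames, addresses, and paths (not the systems used in this paper):

```yaml
# Hypothetical CBT cluster definition mirroring the roles described above.
cluster:
  user: 'cbt'                          # SSH user reachable via pdsh from the head node
  head: 'admin-vm'                     # head node: creates pools/RBDs, drives redeploys
  clients: ['client01', 'client02']    # load generators (fio, rados, COSBench)
  osds: ['osd01', 'osd02', 'osd03']    # telemetry gathered here (collectl, perf, blktrace)
  mons:
    mon01:
      a: '192.168.100.11:6789'
  iterations: 1
  tmp_dir: '/tmp/cbt'
```

For this to work, the head node needs passwordless SSH (as used by pdsh) to every listed client, OSD, and MON host.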
The CBT configuration file syntax was used to orchestrate most of the benchmarks. CBT provides the flexibility to run benchmarks over multiple cluster configurations by specifying custom ceph.conf files. CBT also allows the user to completely re-deploy the cluster between benchmark runs.
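In CBT's YAML syntax, a custom ceph.conf is selected per run with the conf_file key, and the benchmark sweep is defined alongside it. A hedged sketch (the path, pool profile, and sweep values are illustrative, not the tuning used in this paper):

```yaml
# Hypothetical run definition: custom ceph.conf plus a radosbench sweep.
cluster:
  conf_file: '/home/cbt/configs/ceph-tuned.conf'  # custom config for this run
  pool_profiles:
    rbd:
      pg_size: 2048
      pgp_size: 2048
      replication: 3
benchmarks:
  radosbench:
    time: 300                           # seconds per test
    op_size: [4194304, 131072, 4096]    # swept object sizes in bytes
    concurrent_ops: [128]
    write_only: False
    pool_profile: 'rbd'
```

Each list-valued key defines one axis of the sweep; CBT iterates over the cross product of all listed values.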
In this benchmark, CBT was used mainly to execute the benchmarks; cluster deployment and configuration were handled by ceph-ansible. The setup of CBT, including the necessary prerequisites and dependencies, is described on the project homepage.
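Because deployment was handled by ceph-ansible rather than by CBT itself, CBT can be pointed at the already-running cluster instead of re-deploying it. A sketch, assuming CBT's use_existing option:

```yaml
# Hypothetical: benchmark an externally deployed (ceph-ansible) cluster as-is.
cluster:
  use_existing: True    # skip CBT's own deployment; reuse the running cluster
  clusterid: 'ceph'
```

A run of this shape is then typically launched from the head node with something like ./cbt.py --archive=/tmp/results mybench.yaml, where the archive directory (a hypothetical path here) collects the telemetry returned from the OSD/MON nodes.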