Dell PowerEdge R730xd Performance and Sizing Guide for Red Hat Ceph Storage - A Dell Red Hat Technical White Paper 21
• Previous benchmark data has shown that per-disk read-ahead settings had no effect on Ceph
performance.
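For reference, the per-disk read-ahead setting mentioned above can be inspected and changed with the standard blockdev utility; the device path and value below are placeholders, not settings taken from this benchmark:

```shell
# Query the current read-ahead value (in 512-byte sectors) for a disk.
# /dev/sdb is a placeholder; substitute an actual OSD data device.
blockdev --getra /dev/sdb

# Set read-ahead to 8192 sectors (4 MiB). Requires root privileges and
# does not persist across reboots unless reapplied (e.g., via a udev rule).
blockdev --setra 8192 /dev/sdb
```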
3.3 Deploying Red Hat Enterprise Linux (RHEL)
Red Hat Ceph Storage is a software-defined object storage technology that runs on RHEL. Thus, any
system that can run RHEL and offer block storage devices can run Red Hat Ceph Storage. To allow for
repeatable execution, the configuration of the R730xd and R630 nodes, as well as the deployment of
RHEL onto them, has been automated. A virtual machine running RHEL 7.2 was created to control the
automated installation and to coordinate benchmarks. This virtual machine is referred to as the Admin VM
throughout the remainder of this document. The Admin VM is connected to the R730xd and R630 servers
via the 1 GbE management network. RHEL was installed by using the standard installation process
recommended by Red Hat. For additional information, see
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Installation_Guide/.
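Unattended RHEL installations of this kind are typically driven by a kickstart file. The following minimal fragment is an illustration only; the repository URL, password, and disk layout are placeholders, not values from this benchmark:

```
# Illustrative kickstart (ks.cfg) for an unattended RHEL 7 installation.
# All values are placeholders -- the actual kickstart used for the
# benchmark nodes is not reproduced in this paper.
install
url --url=http://adminvm.example.com/rhel7/os/
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp --activate
rootpw --plaintext changeme
timezone UTC
clearpart --all --initlabel
autopart
reboot

%packages
@core
%end
```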
3.4 Configuring the Dell PowerEdge Servers
The Dell PowerEdge R730xd and Dell PowerEdge R630 servers are automatically configured using the
iDRAC and the racadm configuration utility. The iDRAC configuration is deployed on the Admin VM and
used to reset the server configuration, including the BIOS and PERC RAID controller settings. This
ensures that all systems have the same configuration and are set back to known states between
configuration changes.
The iDRAC configuration is described in XML format and can be found in the GitHub repository at
https://github.com/red-hat-storage/dell-ceph-psg. With the racadm command, the configuration can be
saved to, and restored from, an NFS share provided by the Admin VM.
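As a sketch of how this export and restore works: the iDRAC address, credentials, share path, and file name below are assumptions for illustration, and the exact racadm option set varies by iDRAC generation:

```shell
# Export the system configuration profile (BIOS, PERC, iDRAC) as XML to an
# NFS share exported by the Admin VM. 192.168.1.10 = iDRAC IP and
# 192.168.1.5:/nfs = Admin VM NFS share (both placeholders).
racadm -r 192.168.1.10 -u root -p calvin \
    get -f r730xd-config.xml -t xml -l 192.168.1.5:/nfs

# Restore a known-good configuration from the same share, resetting the
# node to a defined state between benchmark configuration changes.
racadm -r 192.168.1.10 -u root -p calvin \
    set -f r730xd-config.xml -t xml -l 192.168.1.5:/nfs
```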
3.5 Deploying Red Hat Ceph Storage
In production environments, Red Hat Ceph Storage can be deployed with a single easy-to-use installer,
referred to as ceph-deploy, which ships with the Red Hat Ceph Storage distribution. For this benchmark,
to allow integration into automation flows, an alternative installation routine based on Ansible playbooks,
called ceph-ansible, was selected.
Ceph-ansible is an easy-to-use, end-to-end automated installation routine for Ceph clusters built on the
Ansible automation framework. Mainly two configuration files, in the form of Ansible variable declarations
for host groups, are relevant to this benchmarking process. Predefined Ansible host groups denote
servers according to their function in the Ceph cluster, namely OSD nodes, Monitor nodes, RADOS
Gateway nodes, and CephFS Metadata Server nodes. Tied to the predefined host groups are predefined
Ansible roles. Ansible roles are a way to organize playbooks according to the standard Ansible
templating framework and are modeled closely on the roles that a server can have in a Ceph cluster.
For additional information on ceph-ansible, see https://github.com/ceph/ceph-ansible.
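To illustrate the host-group mechanism, a minimal ceph-ansible inventory and variable file might look like the following. Host names and device paths are placeholders, and the variable names reflect ceph-ansible releases contemporary with this paper; they should be checked against the version actually deployed:

```
# inventory: predefined ceph-ansible host groups (names are placeholders)
[mons]
r630-01
r630-02
r630-03

[osds]
r730xd-01
r730xd-02

[rgws]
r630-01

# group_vars/osds.yml: illustrative variable declarations for the osds group
# devices: data disks that ceph-ansible should prepare as OSDs
# devices:
#   - /dev/sdb
#   - /dev/sdc
```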
In this benchmark, the Ceph MON and RGW roles are hosted side by side on the Dell PowerEdge R630
servers, although no RGW tests were performed for this paper. The configuration files are available in the
ansible-ceph-configurations directory after checkout from the GitHub repository at
https://github.com/red-hat-storage/dell-ceph-psg.
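A typical invocation, assuming the stock site.yml.sample playbook shipped with ceph-ansible, would resemble the following sketch; the inventory path is an assumption for illustration:

```shell
# Fetch ceph-ansible and the benchmark's configuration files.
git clone https://github.com/ceph/ceph-ansible.git
git clone https://github.com/red-hat-storage/dell-ceph-psg.git

cd ceph-ansible
cp site.yml.sample site.yml    # enable the stock end-to-end playbook

# Run the deployment against the cluster inventory
# (inventory path shown here is a placeholder).
ansible-playbook -i ../dell-ceph-psg/inventory site.yml
```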