Figure: RADOS Layer in the Ceph Architecture
Writing and reading data in a Ceph storage cluster is accomplished by using the Ceph client architecture.
Ceph clients differ from competitive offerings in how they present data storage interfaces. A range of
access methods are supported, including:
- RADOSGW: Bucket-based object storage gateway service with S3-compatible and OpenStack Swift-compatible RESTful interfaces.
- LIBRADOS: Provides direct access to RADOS with libraries for most programming languages, including C, C++, Java, Python, Ruby, and PHP (a minimal sketch follows this list).
- RBD: Offers a Ceph block storage device that mounts like a physical storage drive for use by both physical and virtual systems (with a Linux® kernel driver, KVM/QEMU storage backend, or userspace libraries).
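
To make the LIBRADOS access path concrete, the following minimal sketch uses the Python librados bindings to write and read one object directly in a RADOS pool. The configuration file path, the pool name (mypool), and the object name are illustrative assumptions rather than values from this guide.

    import rados

    # Connect to the cluster using an assumed configuration file path.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    # Open an I/O context bound to one pool ('mypool' is assumed to exist).
    ioctx = cluster.open_ioctx('mypool')
    try:
        ioctx.write_full('hello_object', b'hello from librados')  # write a whole object
        print(ioctx.read('hello_object'))                         # read it back
    finally:
        ioctx.close()
        cluster.shutdown()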
Storage access method and data protection method (discussed later in this technical white paper) are
interrelated. For example, Ceph block storage is currently supported only on replicated pools, while Ceph
object storage is supported on either erasure-coded or replicated pools. Replicated architectures are
categorically more expensive than erasure-coded architectures because replication consumes significantly
more raw storage media for the same usable capacity.
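
Because the two choices are linked, a pool's protection method is fixed when the pool is created. As a hedged illustration using the same Python bindings, the sketch below issues the "osd pool create" monitor command once for each pool type; the pool names and the pg_num value are assumptions for illustration, not sizing recommendations.

    import json
    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()

    def create_pool(cluster, name, pool_type):
        # Build the JSON form of the 'osd pool create' monitor command.
        cmd = json.dumps({
            'prefix': 'osd pool create',
            'pool': name,
            'pg_num': 128,           # example placement-group count only
            'pool_type': pool_type,  # 'replicated' or 'erasure'
        })
        ret, outbuf, outs = cluster.mon_command(cmd, b'')
        if ret != 0:
            raise RuntimeError(outs)

    create_pool(cluster, 'block_pool', 'replicated')  # usable by RBD
    create_pool(cluster, 'object_pool', 'erasure')    # usable by RADOSGW data
    cluster.shutdown()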
2.3 Selecting a Storage Protection Method
As a design decision, choosing the data protection method can affect the solution’s total cost of
ownership (TCO) more than any other factor. This is because the chosen data protection method strongly
affects the amount of raw storage capacity that must be purchased to yield the desired amount of usable
storage capacity. Applications have diverse needs for performance and availability, so Ceph provides data
protection at the storage pool level: each pool can use the protection method best suited to its workload.
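
To show how strongly the protection method drives the raw-to-usable ratio, the short calculation below compares the raw capacity required for the same usable capacity under 3x replication and under an example erasure-coding profile of k=3 data chunks plus m=2 coding chunks; both parameter sets are illustrative assumptions.

    # Raw capacity needed for a target usable capacity under each method.

    def raw_for_replicated(usable_tb, replicas=3):
        # Every byte is stored 'replicas' times.
        return usable_tb * replicas

    def raw_for_erasure(usable_tb, k=3, m=2):
        # Every k data chunks carry m additional coding chunks.
        return usable_tb * (k + m) / k

    usable = 100  # desired usable capacity in TB
    print(f'3x replication: {raw_for_replicated(usable):.0f} TB raw')  # 300 TB
    print(f'EC 3+2 profile: {raw_for_erasure(usable):.0f} TB raw')     # ~167 TB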