Open Source Object Storage for Unstructured Data: Ceph on HP ProLiant SL4540 Gen8 Servers
Table of Contents
- Executive summary
- Introduction
- Overview
- Solution components
- Workload testing
- Configuration guidance
- Bill of materials
- Summary
- Appendix A: Sample Reference Ceph Configuration File
- Appendix B: Sample Reference Pool Configuration
- Appendix C: Syntactical Conventions for command samples
- Appendix D: Server Preparation
- Appendix E: Cluster Installation
- Naming Conventions
- Ceph Deploy Setup
- Ceph Node Setup
- Create a Cluster
- Add Object Gateways
- Apache/FastCGI W/100-Continue
- Configure Apache/FastCGI
- Enable SSL
- Install Ceph Object Gateway
- Add gateway configuration to Ceph
- Redeploy Ceph Configuration
- Create Data Directory
- Create Gateway Configuration
- Enable the Configuration
- Add Ceph Object Gateway Script
- Generate Keyring and Key for the Gateway
- Restart Services and Start the Gateway
- Create a Gateway User
- Appendix F: Newer Ceph Features
- Appendix G: Helpful Commands
- Appendix H: Workload Tool Detail
- Glossary
- For more information
General points
The detailed analysis will inform cluster planning decisions for a given target workload or use case, but a few general points
can be derived from the data:
• Reads are significantly faster than writes at the same IO size
• Writes mixed with reads have a noticeable impact on read performance
• Object IO maximum latency can be significant, although maximum-latency cases are atypical
Object testing
There are two IO sizes of note in the object test matrix. One is 512K, which is typically the largest sequential IO issued at the
kernel block layer. The other is 4M, the size of Ceph’s RADOS objects in the target pools. Objects larger than 4M submitted
through the Swift API are split into multiple RADOS objects.
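The split of a large Swift upload into 4M RADOS objects can be sketched with simple arithmetic. The function below is hypothetical and assumes a 4 MiB stripe size, the common Ceph default; confirm the actual stripe size on your cluster before relying on these counts.

```python
import math

# Assumed RADOS object (stripe) size: 4 MiB, the common Ceph default.
RADOS_OBJECT_SIZE = 4 * 1024 * 1024

def rados_object_count(swift_object_bytes: int) -> int:
    """Number of RADOS objects a single Swift object is split into
    (hypothetical helper for illustration only)."""
    if swift_object_bytes == 0:
        return 1  # even an empty upload produces a head object
    return math.ceil(swift_object_bytes / RADOS_OBJECT_SIZE)

# The 16M and 128M object sizes used in the HTTPS comparison above
# span 4 and 32 RADOS objects respectively.
print(rados_object_count(16 * 1024 * 1024))   # 4
print(rados_object_count(128 * 1024 * 1024))  # 32
```

This is one reason larger objects show different gateway load characteristics: each client-visible PUT fans out into many backend RADOS writes.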
Although the object gateway was configured to listen on HTTPS, and a test suite was run over SSL, the detailed results
presented here are for unencrypted traffic. Expect additional processing load when using HTTPS, both at the object gateway
and on the clients; the largest effect was at the highest object sizes (16M, 128M), where average client utilization increased
by a bit over 10% and object gateway load rose 5-8% on average. Peak spikes were also significantly higher for HTTPS with
large objects: at 128M, PUT and MIX tests rose to the low 40% range, while GETs went from 9% to 17% peak CPU.
The object gateway test infrastructure is bandwidth-limited by a single 10GbE link; the results show ~900MB/sec as
the roll-off point for average bandwidth, while quick samples show higher peak IO (~1100MB/sec).
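A back-of-the-envelope check supports the single-link bottleneck reading. The payload-efficiency range below is an assumption (typical Ethernet/IP/TCP framing overhead), not a measured value from this testing.

```python
# 10GbE raw line rate, expressed in decimal MB/sec.
line_rate_mb = 10_000 / 8  # 1250 MB/sec

# Assume roughly 90-95% of line rate is usable payload after
# Ethernet/IP/TCP framing overhead (assumed range, not measured).
usable_low = line_rate_mb * 0.90
usable_high = line_rate_mb * 0.95

print(f"usable payload ceiling: {usable_low:.0f}-{usable_high:.0f} MB/sec")
# The observed ~900 MB/sec sustained average sits below this ceiling,
# and the ~1100 MB/sec peak samples approach it, consistent with the
# single 10GbE link being the limiting factor.
```

Scaling gateway bandwidth beyond this point would require additional links (bonding) or additional gateway nodes.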