Open Source Object Storage for Unstructured Data: Ceph on HP ProLiant SL4540 Gen8 Servers
Table of Contents
- Executive summary
- Introduction
- Overview
- Solution components
- Workload testing
- Configuration guidance
- Bill of materials
- Summary
- Appendix A: Sample Reference Ceph Configuration File
- Appendix B: Sample Reference Pool Configuration
- Appendix C: Syntactical Conventions for command samples
- Appendix D: Server Preparation
- Appendix E: Cluster Installation
- Naming Conventions
- Ceph Deploy Setup
- Ceph Node Setup
- Create a Cluster
- Add Object Gateways
- Apache/FastCGI W/100-Continue
- Configure Apache/FastCGI
- Enable SSL
- Install Ceph Object Gateway
- Add gateway configuration to Ceph
- Redeploy Ceph Configuration
- Create Data Directory
- Create Gateway Configuration
- Enable the Configuration
- Add Ceph Object Gateway Script
- Generate Keyring and Key for the Gateway
- Restart Services and Start the Gateway
- Create a Gateway User
- Appendix F: Newer Ceph Features
- Appendix G: Helpful Commands
- Appendix H: Workload Tool Detail
- Glossary
- For more information

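The output line below is the tail of generating a password-less SSH key for the ceph user on the admin node. A minimal sketch of that step, assuming an RSA key written to the default path (adjust the path to match your environment):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa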
Your public key has been saved in /ceph-client/.ssh/id_rsa.pub.
Copy the key to each Ceph Node
ssh-copy-id ceph@<node01>
…
ssh-copy-id ceph@<nodexx>
Modify the ~/.ssh/config file on the ceph-deploy admin node so that ceph-deploy logs in to the Ceph nodes as the user you created (e.g., ceph).
Host <node01>
Hostname <node01 fully qualified domain name>
User ceph
…
Host <nodexx>
Hostname <nodexx fully qualified domain name>
User ceph
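With these entries in place, a plain ssh to each short host name should log in as the ceph user without a password prompt; for example (the host name is a placeholder):
ssh <node01> hostname -s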
Ensure connectivity using ping with short hostnames (hostname -s).
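A quick way to confirm that short names resolve and the nodes are reachable (the node name is a placeholder):
ping -c 3 <node01>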
Create a Cluster
Start cluster installation
Create the cluster staging directory, then set up the initial config file and monitor keyring.
mkdir cluster-stage; cd cluster-stage
ceph-deploy new <initial-monitor-node(s) fully qualified domain names>
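For example, with three initial monitor nodes (the host names below are placeholders, not the reference configuration):
ceph-deploy new mon1.example.com mon2.example.com mon3.example.com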
Initial configuration modification
Making configuration changes at this step avoids having to restart the affected services later in the install. It is recommended to make ceph.conf changes in this staging directory rather than in /etc/ceph/ceph.conf, so that configuration updates can be pushed to all nodes with 'ceph-deploy --overwrite-conf config push <nodes>' or 'ceph-deploy --overwrite-conf admin <nodes>'.
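For example, after editing ceph.conf in the staging directory, the updated file can be pushed to a set of nodes like this (node names are placeholders):
ceph-deploy --overwrite-conf config push <node01> <node02> <node03>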
Set the replica count to 3 and the minimum replica count for writes to 2 so that pools operate at enterprise reliability levels. Replication at this level consumes more disk capacity and network bandwidth, but it allows repair without risking data loss from additional device failures. An odd replica count greater than 1 also allows a majority quorum on object coherency.
<cluster creation dir>/ceph.conf
osd_pool_default_size = 3
osd_pool_default_min_size = 2
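These defaults apply only to pools created after the change. For pools that already exist, the same levels can be set per pool with the ceph CLI; a short sketch, with a placeholder pool name:
ceph osd pool set <pool-name> size 3
ceph osd pool set <pool-name> min_size 2
ceph osd dump | grep 'replicated size'    # verify the per-pool settings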
If the object gateway is installed per the default Ceph instructions, the related pools are created automatically on demand as the object gateway is used, which means they start with default settings. The default of 8 placement groups (PGs) per pool is low, although it may be appropriate for very lightly utilized pools. To boost the defaults based on cluster size, set the following configuration parameters.
<cluster creation dir>/ceph.conf
[global]
…
osd_pool_default_pg_num = <default_pool_placement_group_count>
osd_pool_default_pgp_num = <default_pool_placement_group_count>
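As a rough sizing illustration only (the OSD count below is hypothetical, not taken from the reference configuration), a common rule of thumb is (number of OSDs * 100) / replica count, rounded up to the next power of two. For 180 OSDs with 3 replicas that gives (180 * 100) / 3 = 6000, which rounds up to 8192:
osd_pool_default_pg_num = 8192
osd_pool_default_pgp_num = 8192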
If you want to offload replication traffic onto a separate cluster network, as the sample reference configuration does, specify both the public (data) and cluster network settings in ceph.conf using network/netmask slash notation.
<cluster creation dir>/ceph.conf
[global]
…
public_network = <public network>/<netmask>
cluster_network = <cluster network>/<netmask>
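For illustration only, with placeholder subnets (substitute the networks used in your environment):
public_network = 192.168.10.0/24
cluster_network = 192.168.20.0/24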