Your public key has been saved in /ceph-client/.ssh/id_rsa.pub.
Copy the key to each Ceph Node
ssh-copy-id ceph@<node01>
ssh-copy-id ceph@<nodexx>
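For example, with hypothetical node names node01 through node03, the key can be copied to all nodes in a single loop:
for node in node01 node02 node03; do ssh-copy-id ceph@${node}; done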
Modify the ~/.ssh/config file of the ceph-deploy admin node so that it logs in to the Ceph nodes as the user you created (e.g., ceph).
Host <node01>
    Hostname <node01 fully qualified domain name>
    User ceph
Host <nodexx>
    Hostname <nodexx fully qualified domain name>
    User ceph
Ensure connectivity using ping with short hostnames (hostname -s).
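For example, with a hypothetical node named node01:
hostname -s
ping -c 3 node01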
Create a Cluster
Start cluster installation
Create the cluster staging directory, then set up the initial config file and monitor keyring.
mkdir cluster-stage; cd cluster-stage
ceph-deploy new <initial-monitor-node(s) fully qualified domain names>
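For example, with three hypothetical monitor nodes, the command might look like:
ceph-deploy new mon01.example.com mon02.example.com mon03.example.com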
Initial Configuration modification
Making some configuration modifications at this step avoids restarting the affected services later in the install. It is
recommended to make ceph.conf changes in this staging directory rather than in /etc/ceph/ceph.conf, so that new configuration
updates can be pushed to all nodes with ‘ceph-deploy --overwrite-conf config push <nodes>’ or ‘ceph-deploy --overwrite-conf
admin <nodes>’.
Set the replica count to 3 and the minimum count for writes to 2 so that pools are at enterprise reliability levels. Replication at this level
consumes more disk and network bandwidth, but it allows repair without risking data loss from additional device failures. It
also allows a quorum on object coherency, since odd counts greater than 1 can agree on a majority.
<cluster creation dir>/ceph.conf
osd_pool_default_size = 3
osd_pool_default_min_size = 2
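Note that these defaults only apply to pools created after the change takes effect. As an illustration (the pool name mypool is hypothetical), an existing pool can be adjusted later with the standard pool commands:
ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 2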
If the object gateway is installed per the Ceph default instructions, the related pools will be created automatically on demand as
the object gateway is used, which means they start with the defaults. The default of 8 PGs is low, although it may be
appropriate for object counts in very lightly utilized pools. To boost the defaults based on cluster size, use the following
configuration parameters.
<cluster creation dir>/ceph.conf
[global]
osd_pool_default_pg_num = <default_pool_placement_group_count>
osd_pool_default_pgp_num = <default_pool_placement_group_count>
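As a rough illustration only, a commonly used rule of thumb for this value is (number of OSDs × 100) ÷ replica count, rounded up to the next power of two. For example, a hypothetical cluster with 60 OSDs and 3 replicas gives 60 × 100 / 3 = 2000, which rounds up to 2048 for both parameters.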
If you want to offload cluster network traffic, as the sample reference configuration did, you’ll need to specify both public
(data) and cluster network settings in ceph.conf using network/netmask slash notation.
<cluster creation dir>/ceph.conf
[global]
public_network = <public network>/<netmask>
cluster_network = <cluster network>/<netmask>
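For example, with hypothetical subnets:
public_network = 192.168.10.0/24
cluster_network = 192.168.20.0/24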