3.2. Deploying Object Storage
5. Start the object storage configuration service and enable it to start automatically on boot:
# systemctl start ostor-cfgd.service
# systemctl enable ostor-cfgd.service
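To confirm that the configuration service started successfully, you can check its status (systemctl status is a standard systemd command; the example only assumes the service name used above):
# systemctl status ostor-cfgd.service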
6. Initialize the new object storage on the first node. The ostor_dir directory will be created in the root of your cluster.
# ostor-ctl init-storage -n <IP_addr> -s <cluster_mount_point>
You will need to provide the IP address and the object storage password specified in step 3.
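For example, assuming the first node's IP address is 10.29.1.95 and the cluster is mounted at /mnt/vstorage (both values are illustrative placeholders, not defaults):
# ostor-ctl init-storage -n 10.29.1.95 -s /mnt/vstorage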
7. Add the public IP addresses of the nodes that will run GW services to the DNS. You can configure the DNS to enable access to your object storage via a hostname, and to have the S3 endpoint receive virtual hosted-style REST API requests with URIs like http://bucketname.s3.example.com/objectname.
After configuring the DNS, make sure that the DNS resolver for your S3 access point works from client machines.
Note: Only buckets with DNS-compatible names can be accessed with virtual hosted-style requests. For more details, see Bucket and Key Naming Policies on page 36.
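For example, you can check resolution from a client machine with nslookup (a standard DNS utility; bucketname is an arbitrary example label that the wildcard record in the zone file below will match):
# nslookup bucketname.s3.example.com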
Below is an example of a DNS zone configuration file for the BIND DNS server:
;$Id$
$TTL 1h
@       IN      SOA     ns.example.com. s3.example.com. (
                        2013052112      ; serial
                        1h              ; refresh
                        30m             ; retry
                        7d              ; expiration
                        1h )            ; minimum
        NS      ns.example.com.
$ORIGIN s3.example.com.
h1      IN      A       10.29.1.95      ; S3 GW endpoints, served round-robin
        A       10.29.0.142
        A       10.29.0.137
*       IN      CNAME   @               ; wildcard for virtual hosted-style bucket names
This configuration instructs the DNS server to redirect all requests with URIs like http://bucketname.s3.example.com/ to one of the endpoints listed in the resource record h1 (10.29.1.95, 10.29.0.142, or 10.29.0.137) in a cyclic (round-robin) manner.
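To observe the round-robin behavior, you can query the h1 record repeatedly with dig (+short prints only the answer section); the addresses are the example values from the zone file above:
# dig +short h1.s3.example.com
Each of the three addresses should be returned, and a DNS server configured for round-robin rotates their order between queries.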
8. Add the nodes where object storage services will run to the configuration. To do this, run the ostor-ctl add-host