
3 Test Setup and Methodology
This section describes the Red Hat Ceph Storage on Dell PowerEdge R730xd testbed and the testing
performed on it. The following subsections cover:
Testbed hardware configuration
Installation of Red Hat Ceph Storage software
Benchmarking procedure
3.1 Physical setup
Figure 6 illustrates the testbed for Red Hat Ceph Storage on Dell PowerEdge R730xd. The
benchmarking testbed consists of five Ceph Storage nodes based on Dell PowerEdge R730xd servers
with up to sixteen 3.5” drives; these serve as the OSD tier. The MON servers are based on three Dell
PowerEdge R630 servers. The load generators are Dell PowerEdge R220 servers, providing a
total of 10 clients that execute various load patterns.
Each Ceph Storage node and MON server has two 10GbE links. One link is connected to the front-end
network shown in Figure 5. The other link is connected to the back-end network. The load generator
servers have a single 10GbE link connected to the front-end network. The ToR switching layer is provided
by Dell Force10 S4048 Ethernet switches.
The network configuration uses two separate IP subnets: one for front-end Ceph client traffic (shown in
orange) and one for back-end Ceph cluster traffic (shown in blue). A separate 1 GbE management
network, not shown in Figure 6, is used for administrative access to all nodes through SSH.
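In Ceph terms, the front-end and back-end subnets correspond to the public and cluster networks. As a
minimal illustration, they could be declared in ceph.conf as shown below; the address ranges are
placeholders for this sketch, not the subnets used on this testbed:

    # /etc/ceph/ceph.conf (network-related excerpt, illustrative only)
    [global]
    # Front-end subnet carrying Ceph client traffic (placeholder range)
    public network = 192.168.10.0/24
    # Back-end subnet carrying OSD replication and recovery traffic (placeholder range)
    cluster network = 192.168.20.0/24

With this split, client I/O stays on the front-end links while replication and recovery traffic between OSD
nodes is confined to the back-end links, so the two traffic types do not compete for the same 10GbE
bandwidth.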