Specifications

Project Goals
I wanted to consolidate several Linux file servers that I use for
disk-to-disk backups. These were all in the 3–4TB range and
were constantly running out of space, requiring me either to
adjust which systems were being backed up to which server or
to reduce the number of previous backups that I could keep
on hand. My overall goal for this project was to create a
system with a large amount of cheap, fast and reliable disk
space. This system would be the destination for a number of
daily disk-to-disk backups from a mix of Solaris, Linux and
Windows servers. I am familiar with Linux’s software RAID and
LVM2 features, but I specifically wanted hardware RAID, so the
OS would be “unaware” of the underlying array and would see
only a single large disk. These features
certainly cost more than a software-based RAID system, and
this article is not about creating the cheapest possible solution
for a given amount of disk space.
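To illustrate the difference (a hedged sketch; the device names
are illustrative, not from this article): a hardware RAID unit
appears to Linux as one ordinary disk, while software RAID and
its member disks remain visible to the kernel:

    # Hardware RAID: the controller exports the array as a single
    # block device, so only one large disk appears here:
    cat /proc/partitions
    # Software RAID, by contrast, is managed by the kernel, and its
    # md devices and member disks are visible to the OS:
    cat /proc/mdstat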
The hardware RAID controller would make it as simple as
possible for a non-Linux administrator to replace a failed disk.
The RAID controller would send an e-mail message warning
about a disk failure, and the administrator typically would
respond by identifying the location of the failed disk and
replacing it, all with no downtime and no Linux administration
skills required. The entire disk replacement experience would
be limited to the Web interface of the RAID controller card.
In reality, a hot spare disk would replace any failed disk
automatically, but use of the RAID Web interface still would
be required to designate any newly inserted disk as the
replacement hot spare. For my company, I had specific concerns
about the availability of Linux administration skills that
justified the expense of hardware RAID.
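For an administrator who does have shell access, 3ware also
ships a command-line tool, tw_cli, that exposes the same status
information as the Web interface. A minimal sketch (the
controller and port IDs are illustrative):

    # Show all units, drive ports and the BBU on the first controller:
    tw_cli /c0 show
    # Inspect an individual drive port (for example, port 5):
    tw_cli /c0/p5 show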
Hardware Choices
For me, the above requirements meant using hot-swappable
1TB SATA drives with a fast RAID controller in a system with a
decent CPU, adequate memory and redundant power supplies.
The chassis had to be rack-mountable and easy to service.
Noise was not a factor, as this system would be in a dedicated
machine room with more than one hundred other servers.
I decided to build the system around the 3ware 9650SE
16-port RAID controller, which requires a motherboard that
has a PCI Express slot with enough “lanes” (eight in this
instance). Other than this, I did not care too much about the
CPU choice or integrated motherboard features (other than
Gigabit Ethernet). As I had decided on 16 disks, this choice
pretty much dictated a 3U or larger chassis for front-mounted
hot-swap disks. This also meant there was plenty of room for
a full-height PCI card in the chassis.
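The usable capacity depends on the RAID level, which I haven’t
specified at this point; assuming RAID 5 with one hot spare, a
quick back-of-the-envelope calculation looks like this:

    # 16 x 1TB drives: one hot spare, plus one drive's worth of
    # parity for RAID 5. The RAID level and spare count here are
    # assumptions for illustration.
    DRIVES=16 SPARES=1 PARITY=1
    echo "usable: $(( DRIVES - SPARES - PARITY ))TB"   # -> usable: 14TB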
I have built the vast majority of my rackmount servers
(more than a hundred) using Supermicro hardware, so I
am quite comfortable with its product line. In the past, I
have always used Supermicro’s “bare-bones” units, which
had the motherboard, power supply, fans and chassis
already integrated.
For this project, I could not find a prebuilt bare-bones
model with the exact feature set I required. I was looking for a
system that had lots of cheap disk capacity, but did not require
lots of CPU power and memory capacity—most high-end
configurations seemed to assume quad-core CPUs, lots of memory
and SAS disks. The Supermicro SC836TQ-R800B chassis looked
like a good fit to me, as it provided 16 hot-swap SATA drive
bays in a 3U enclosure and had redundant power supplies (the B
suffix indicates a black-colored front panel).
Next, I selected the X7DBE motherboard. This model would
allow me to use a relatively inexpensive dual-core Xeon CPU
and have eight slots available for memory. I could put in 8GB
of RAM using cheap 1GB modules. I chose to use a single
1.6GHz Intel dual-core Xeon for the processor, as I didn’t think
I could justify the cost of multiple CPUs or top-of-the-line
quad-core models for the file server role.
I double-checked the description of the Supermicro chassis
to see whether the CPU heat sink was included. For the
SC836TQ-R800B, the heat sink had to be ordered separately.
Figure 1. Front View of the Server Chassis
RAID Card Battery
I wanted the best possible RAID performance, which means
using the “write-back” setting in the RAID controller, as
opposed to “write-through”. The advantage of write-back
caching is that it improves write performance by acknowledging
writes once they reach the controller’s RAM and flushing them
to disk later; the disadvantage is that cached data could be
lost if the system loses power before it is actually written
to disk.
The battery backup unit (BBU) option for the 3ware 9650SE
RAID controllers protects this cached data from being lost by
preserving the controller’s cache memory across power failures
and reboots until the data can be written to disk.
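These settings can also be inspected and changed from the Linux
side with tw_cli; a hedged sketch, assuming controller /c0 and
unit /u0:

    # Check that the battery is present and healthy before trusting
    # write-back mode (the controller and unit IDs are examples):
    tw_cli /c0/bbu show all
    # Enable write-back caching on the first unit:
    tw_cli /c0/u0 set cache=on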
Ordering Process
I had no problems finding all the hardware using the various
price-comparison Web sites, although I was unable to find a
single vendor that had every component I needed in stock.
Beware that the in-stock indications on those price-comparison
Web sites are unreliable. I followed up with a phone call for
the big-ticket items to make sure they actually were in stock
before ordering on-line. Table 1 shows the details.
As you can see from Table 1, the hardware RAID components
are about $1,000 of the total system cost.
Hardware Assembly
The chassis is pretty much pre-assembled. I had to insert some
additional motherboard stand-offs and put on the rackmounting
rails. I also snapped off some of the material on the plastic