Specifications

cable was labeled with the first port at 0 (and ending at 3),
and the other three had the first port at 1 (and ending at 4),
while the backplane ports were numbered starting at 0 (and
ending at 15), with the lowest numbered port at the bottom
left (as viewed from the front). This seems to be a chassis-
specific scheme, as other Supermicro chassis models number
the ports from the top left down. SATA ports are numbered
starting at 0 within the 3ware administrative interfaces. The
3ware administrative tools have a feature to “blink” a drive
LED for locating a specific drive, but that is not supported in
this particular Supermicro chassis.
The 3ware BBU is typically mounted on the controller card,
but I have found that the controller starts complaining about
battery temperature being too high unless there is generous
airflow over the battery. I purchased the remote BBU option,
which is a dummy PCI card that carries the battery and an
extension cable that runs from the remote BBU to the main
RAID controller card. I mounted the battery a couple of PCI
slots away from the RAID controller so it would be as cool
as possible.
Figure 4. 3ware RAID Controller with Remote Battery Option
RAID Array Configuration
I planned to create a conventional set of partitions for the
operating system and then a single giant partition for the
storage of system backup files (named backup).
Some RAID vendors allow you to create an arbitrary num-
ber of virtual disks from a given physical array of disks. The
3ware interface allows only two virtual disks (or volumes) per
physical array. When you are creating a physical array, you
typically will end up with a single virtual disk using the entire
capacity of the array (for example, creating a virtual disk from
a RAID 1 array of two 1TB disks gives you a 1TB /dev/sda disk).
If you want two virtual disks, specify a nonzero size for the
boot partition. The first virtual disk will be created in the
size you specify for the boot partition, and the second will
be the physical array capacity minus the size of the boot
partition (for example, using 1TB disks, specifying a 150GB
boot partition yields a 150GB /dev/sda disk and an 850GB
/dev/sdb disk).
You can perform the entire RAID configuration from the
3ware controller's 3BM BIOS interface before the OS boots
(press Alt-3), or use the tw_cli command line or 3dm Web
interface (or a combination of these) after the OS is running.
For example, you could use the BIOS interface to set up just
the minimal array you need for the OS installation and then
use the 3dm Web interface to set up additional arrays and hot
sparing after the OS is running.
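As a sketch of the command-line route, the post-boot setup might look something like the following tw_cli session. The controller ID (/c0) and port assignments here are assumptions for a 16-port controller; run tw_cli show first to confirm your actual controller number:

```shell
# List controllers, then show the drives attached to controller 0
tw_cli show
tw_cli /c0 show

# Create a RAID 5 unit from the drives on ports 0 through 14
tw_cli /c0 add type=raid5 disk=0-14

# Dedicate the drive on port 15 as a hot spare
tw_cli /c0 add type=spare disk=15

# Check the status and initialization progress of the new unit
tw_cli /c0/u0 show
```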
For my 16-drive system, I decided to use 15 drives in a
RAID 5 array. The remaining 16th drive is a hot spare. With
this scheme, three disks would have to fail before any data
was lost: the system could tolerate the loss of one array
member disk and then the loss of the hot spare that had
replaced it, provided each rebuild completed before the
next failure. I used the 3ware BIOS interface to create a
100GB boot partition, which gave me a virtual sda disk
and a virtual sdb disk of about 12.64TB.
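The 12.64TB figure checks out once you account for "1TB" drives being decimal (10^12 bytes) while capacities are reported in binary units. A quick back-of-the-envelope check, assuming 14 data drives' worth of usable RAID 5 capacity (15 drives minus one drive's worth of parity):

```shell
# 15-drive RAID 5 leaves 14 drives' worth of data capacity;
# each "1TB" drive is 10^12 bytes, but sizes are reported in
# binary terabytes (2^40 bytes), hence the apparent shrinkage.
awk 'BEGIN {
    tib   = 2 ^ 40          # one binary terabyte (TiB)
    array = 14 * 10 ^ 12    # usable RAID 5 capacity in bytes
    boot  = 100 * 10 ^ 9    # 100GB boot volume in bytes
    printf "array: %.2f TiB, data volume: %.2f TiB\n",
           array / tib, (array - boot) / tib
}'
# Prints: array: 12.73 TiB, data volume: 12.64 TiB
```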
I found the system to be very slow during the RAID array
initialization. I did not record the time, but the initialization
of the RAID 5 array seemed to take at least a day, maybe
longer. I suggest starting the process and working on
something else until it finishes, or you will be frustrated
with the poor interactive performance.
OS Installation
I knew I had to use something other than ext3 for the giant
data partition, and XFS looked like the best solution according
to the information I could find on the Web. Most of my Linux
www.linuxjournal.com august 2008 | 65