TABLE I
Comparison of Large EIDE Disks for a RAID-5 Array

Disk Model (datasheet)    GB    RPM   Cost per GB   GB per platter   Spin-Up Current at 12V
Maxtor D540X [15]          80   5400     $2.11            20                 2.00 A
Maxtor D536X [16]         100   5400     $2.27            33                 0.64 A
Maxtor D540X [15]         160   5400     $1.85            40                 1.80 A
IBM 75GXP [17]             75   7200     $3.19            15                 2.00 A
IBM 120GXP [18]           120   7200     $2.91            40                 2.00 A
components from other manufacturers also work. We have
measured the wall power consumption for the whole disk
array box in Table II. It uses 276 watts at startup and 156
watts during normal sustained running.
TABLE II
700 GB RAID-5 Configuration System Unit

Component                                  Price (per unit)
100GB Maxtor system disk [16]                   $227
8 100GB Maxtor RAID-5 disks [16]                $227
2 Promise ATA/100 PCI cards [9]                  $27
4 StarTech 24” ATA/100 cables [6]                 $3
AMD Athlon 1.4 GHz/266 CPU [19]                 $120
Asus A7A266 motherboard, audio [20]             $132
2 256MB DDR PC2100 DIMMs                         $35
In-Win Q500P Full Tower Case [21]                $77
Sparkle 15A @ 12V power supply [22]              $34
2 Antec 80mm ball bearing case fans               $8
110 Alert temperature alarm [23]                 $15
Pine 8MB AGP video card [24], [25]               $20
SMC EZ Card 10/100 ethernet [26], [27]           $12
Toshiba 16x DVD, 48x CDROM                       $54
Sony 1.44 MB floppy drive                        $12
KeyTronic 104-key PS/2 keyboard                   $7
DEXXA 3-button PS/2 mouse                         $4
Total                                         $2,682
To install the second power supply we had to modify our
tower case with a jigsaw and a hand drill. We also had to
use a jumper to ground the green wire in the 20-pin ATXPWR
block connector to fake the power-on switch.
When installing the two disk controller cards, care had
to be taken that they did not share interrupts with other
highly utilized hardware such as the video card and the
ethernet card. We also tried to make sure that they did
not share interrupts with each other. There are 16 possible
interrupt requests (IRQs) that allow the various devices,
such as EIDE controllers, video cards, mice, and serial and
parallel ports, to communicate with the CPU. Most PC
operating systems allow sharing of IRQs, but one would
naturally want to avoid overburdening any one IRQ. There
is also a special class of IRQs used by the PCI bus, called
PCI IRQs (PIRQs). Each PCI card slot has four interrupt
numbers, which means each slot shares some IRQs with
the other slots; therefore, we had to juggle which slots held
the cards we used (the video card, the two EIDE controllers,
and the ethernet card).
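On a running Linux system the resulting assignments can be checked by reading /proc/interrupts. The short Python sketch below is our own illustration, not part of the original setup; it assumes the classic 2.4-era layout of that file (one controller column after the per-CPU counts) and simply flags any IRQ line claimed by more than one device:

    # Sketch: flag shared IRQ lines by parsing /proc/interrupts.
    # Assumes the 2.4-era layout: "IRQ:  count(s)  controller  device[, device...]".

    def shared_irqs(path="/proc/interrupts"):
        shared = {}
        with open(path) as f:
            cpu_count = len(f.readline().split())  # header row: CPU0 CPU1 ...
            for line in f:
                fields = line.split()
                if not fields or not fields[0].rstrip(":").isdigit():
                    continue  # skip NMI:, ERR: and other summary rows
                irq = int(fields[0].rstrip(":"))
                # devices follow the per-CPU counts and one controller column
                devices = " ".join(fields[cpu_count + 2:]).split(", ")
                if len(devices) > 1:
                    shared[irq] = devices
        return shared

    for irq, devices in sorted(shared_irqs().items()):
        print("IRQ %2d is shared by: %s" % (irq, ", ".join(devices)))

A card can then be moved to another slot until the two EIDE controllers, the video card, and the ethernet card all report distinct lines.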
When we tried to use a disk as a “Slave” on a mother-
board EIDE bus, we found that it would not run at the
full speed of the bus and slowed down the access speed of
the entire RAID-5 array. This was a problem of either the
motherboard’s basic input/output system (BIOS) or the
EIDE controller. This problem was not in evidence when
using the disk controller cards. Therefore, rather than take
a factor of 10 hit in access speed, we decided to use 8
instead of 9 hard disks.
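The size of such a hit is easy to quantify with a raw sequential-read timing on each drive, much as the hdparm -t utility does. A minimal Python sketch of our own (the device path /dev/hda is an assumption; it needs root access to the raw device and a cold cache, e.g. right after boot, for a meaningful figure):

    # Sketch: estimate a drive's sustained sequential-read rate by timing
    # raw reads from its block device.  /dev/hda is an assumed device path;
    # substitute the drive under test.
    import time

    def read_rate(device="/dev/hda", megabytes=64):
        block = 1024 * 1024
        done = 0
        with open(device, "rb") as disk:
            start = time.time()
            for _ in range(megabytes):
                chunk = disk.read(block)
                if not chunk:       # device smaller than expected
                    break
                done += len(chunk)
            elapsed = time.time() - start
        return (done / float(block)) / elapsed  # MB/s

    print("%.1f MB/s" % read_rate())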
B. Software
For the actual tests we used Linux kernel 2.4.5 with the
RedHat 7 distribution (see http://www.redhat.com/); we
had to upgrade the kernel to this level. The latest stable
kernel version is 2.4.18 (see http://www.lwn.net/). We
needed the 2.4.x kernel for full support of “Journaling”
file systems. Journaling file systems provide rapid recovery
from crashes. A computer can finish its boot-up at normal
speed, rather than waiting to perform a file system check
(FSCK) on the entire RAID array; the check is then con-
ducted in the background, allowing the user to continue to
use the RAID array. There are now four different Journaling
file systems: XFS, a port from SGI [28]; JFS, a port from
IBM [29]; ext3 [30], a Journalized version of the standard
ext2 file system; and ReiserFS from Namesys [31]. Com-
parisons of these Journaling file systems have been done
elsewhere [32]. When we tested our RAID-5 arrays, only
ext3 and ReiserFS were easily available for the 2.4.x kernel;
therefore, we tested those two Journaling file systems. We
opted to use ext3 for two reasons: 1) at the time there were
stability problems with ReiserFS and NFS (since resolved
with kernel 2.4.7), and 2) it is an extension of the standard
ext2fs (it was originally developed for the 2.2 kernel) and,
if synced properly, can be mounted as ext2. Ext3 is the only
one that allows direct upgrading from ext2, which is why it
is now the default for RedHat 7.2.
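Because ext3 is simply ext2 with a journal added, the upgrade can be done in place with the standard e2fsprogs tool tune2fs. A minimal sketch, assuming the array appears as the md device /dev/md0 and is mounted at /raid (both names are assumptions):

    # Sketch: convert an ext2 file system to ext3 in place by adding a
    # journal, then mount it as ext3.  /dev/md0 and /raid are assumed
    # names for the RAID-5 md device and its mount point; run as root
    # with the volume unmounted.
    import subprocess

    DEVICE = "/dev/md0"
    MOUNT_POINT = "/raid"

    # tune2fs -j adds a journal without touching the existing data
    subprocess.run(["tune2fs", "-j", DEVICE], check=True)

    # mount as ext3; after a clean unmount the same volume can still be
    # mounted as plain ext2 by an older kernel
    subprocess.run(["mount", "-t", "ext3", DEVICE, MOUNT_POINT], check=True)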
NFS is a very flexible system that allows one to manage
files on several computers inside a network as if they were
on the local hard disk: there is no need to know what file
system the files are stored under, or where they are phys-
ically located, in order to access them. We therefore use
NFS to connect these disk arrays to computers that cannot
run Linux 2.4. We have successfully used NFS to mount
this disk array on the following types of computers: a DEC-
station 5000/150 running Ultrix 4.3A, a Sun UltraSparc 10
running Solaris 7, a Macintosh G3 running MacOS X, and
various Linux boxes with both the 2.2 and 2.4 kernels. We
are currently using two of these RAID-5 boxes to run anal-
ysis software with the BaBar KANGA code and the CMS
CMSIM/ORCA code.
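For illustration, a minimal sketch of the two steps involved, with assumed names throughout (server host raid1, export path /raid, client mount point /mnt/raid):

    # Sketch: serve the array over NFS and mount it on a client.
    # "raid1", "/raid", and "/mnt/raid" are assumed names; each half
    # runs as root on its own machine.
    import subprocess

    # On the server, /etc/exports holds a line such as
    #     /raid  192.168.1.0/255.255.255.0(rw,sync)
    # after which the export table is reloaded:
    subprocess.run(["exportfs", "-ra"], check=True)

    # On a client, the array then mounts like any other NFS volume:
    subprocess.run(["mount", "-t", "nfs", "raid1:/raid", "/mnt/raid"], check=True)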