Install guide

Chapter 2. Hardware Installation and Configuration
A cluster is a complex arrangement of bits and pieces that, once combined with the software
configuration, produces a highly available platform for mission-critical Oracle databases. The hardware
configuration requires some knowledge of the application, or at a minimum, its expected
performance. The goal is always to produce a reliable Red Hat Cluster Suite HA platform, but rarely at
the expense of performance or scalability. Oracle uses the term MAA, or Maximum Availability
Architecture, but whatever the term, optimizing a platform for availability, scalability, and reliability often
feels like juggling chainsaws.
2.1. Server Node
Most servers that are configured to run Oracle must provide a large amount of memory and processing
power, and our sample cluster is no exception. Each node is an HP ProLiant DL585 with 32GB of RAM
and multi-core processors.
The server comes standard with HP’s Integrated Lights-Out (iLO) processor management, which will be
used as the Red Hat Cluster Suite fencing mechanism. It also has two built-in GbE NICs. This
configuration also includes an additional dual-ported GbE NIC used by Red Hat Cluster Suite and
Oracle Clusterware (in the RAC install).
The local storage requirements on each server are minimal, and any basic configuration will have more
than adequate disk space. It is recommended that you configure the local array for reliable speed, not
space (i.e., not RAID 5). Oracle, especially Clusterware, can produce a heavy trace log load, which may
impact cluster recovery performance.
2.2. Storage Topology
Storage layout is very workload dependent, and some rudimentary knowledge of the workload is
necessary. Historically, database storage is provisioned by space, not speed. In the rare case where
performance is considered, topology bandwidth (MB/sec) is used as the metric. This is the wrong
performance metric for databases. All but the largest data warehouses require thousands of IOPS to
perform well, and IOPS come only from high numbers of spindles provisioned underneath the file system.
The easiest way to configure an array for both performance and reliability is to use a RAID set size of 8-
12 spindles (depending on the RAID algorithm). Many RAID sets can be combined to produce a single
large volume. It is recommended that you then stripe LUNs of the required sizes off this high-IOP
volume. This is often called the "block of cheese" model, where every stripe, independent of its size, has
full access to the IOP capacity of the large, single volume. This is the easiest way to produce
high-performance LUNs for a database.
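The "many spindles under one volume" argument can be sketched with back-of-envelope arithmetic. The following estimates the aggregate random-IOPS capacity of a volume built from several RAID sets; the per-drive seek time and spindle counts are illustrative assumptions, not vendor specifications or measured values:

```python
# Back-of-envelope IOPS estimate for a volume built from several RAID sets.
# All figures below are illustrative assumptions, not measured values.

def drive_iops(rpm, avg_seek_ms):
    """Estimate random IOPS for one spindle: seek plus half a rotation."""
    rotational_ms = 60_000.0 / rpm / 2      # average half-rotation, in ms
    service_time_ms = avg_seek_ms + rotational_ms
    return 1000.0 / service_time_ms

def volume_iops(raid_sets, spindles_per_set, rpm=15_000, avg_seek_ms=3.5):
    """Aggregate random-read IOPS of a volume combining several RAID sets."""
    return raid_sets * spindles_per_set * drive_iops(rpm, avg_seek_ms)

# Six 10-spindle RAID sets of 15K drives, combined into one large volume:
print(round(volume_iops(6, 10)))    # ~10900 IOPS shared by every LUN striped off it
```

Because every LUN is striped across the whole volume, each one sees the full spindle count, which is the point of the "block of cheese" model.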
Acquire as many 15K spindles as is practical or affordable. Resist the temptation to use large, low-RPM
drives (i.e., SATA). Resist the temptation to use drive technology (including controllers and arrays) that
doesn’t support tagged queuing (i.e., most SATA). Tagged queuing is critical to sustained high IOP rates.
In the SATA world, it is called NCQ (Native Command Queuing). In the FCP/SAS world, it is called
Tagged Queuing. It is usually implemented at the shelf level; insist on it.
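Tagged queuing pays off because a drive holding several outstanding commands can service the nearest one first, shrinking the average seek. A rough sketch of the effect, using assumed latency figures (not measurements from any particular drive):

```python
def iops(service_time_ms):
    """Random IOPS for one spindle at a given average service time."""
    return 1000.0 / service_time_ms

# Queue depth 1: every request pays the full average seek (~3.5 ms,
# an assumed figure) plus half-rotation latency (~2.0 ms at 15K RPM):
shallow = iops(5.5)     # ~182 IOPS
# A deep tagged queue: the drive reorders pending commands and services
# the nearest one, so the average seek might fall to ~1.5 ms (assumed):
deep = iops(3.5)        # ~286 IOPS
print(round(shallow), round(deep))
```

The win comes from shorter seeks under reordering, not from the spindle doing two things at once, which is why sustained high IOP rates depend on the queue being visible to the drive.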
Contrary to some detailed studies, a 15K 72GB drive generally has better performance than a 10K
300GB drive. Outer-track optimizations cannot be relied upon over the lifecycle of the application, nor
can they be relied upon with many storage array allocation algorithms. If you could ensure that only the
outer tracks were used, then larger-capacity drives would seek less. It is difficult to buy small, high-RPM
drives, but they will always have the best IOP price/performance ratio.
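The comparison can be made concrete with the usual seek-plus-half-rotation model. The seek times below are typical published figures for drives of these classes and should be treated as assumptions:

```python
def service_time_ms(rpm, avg_seek_ms):
    """Average random-access time: seek plus half a rotation."""
    return avg_seek_ms + 60_000.0 / rpm / 2.0

# Assumed average seeks: ~3.5 ms for a 15K drive, ~4.5 ms for a 10K drive.
small_15k = 1000.0 / service_time_ms(15_000, 3.5)   # ~182 IOPS per spindle
large_10k = 1000.0 / service_time_ms(10_000, 4.5)   # ~133 IOPS per spindle
print(round(small_15k), round(large_10k))
```

Per gigabyte the small drive looks expensive, but per IOP it is the cheaper device, which is the ratio that matters for database workloads.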
Software, or host-based, RAID is less reliable than array-based RAID, especially during reconstruction,