• Cluster arbitration: dual cluster lock disks (Logical Volume Manager (LVM) volume groups) or Quorum Server; a conceptual sketch of the arbitration rule follows this list.
– Note: A three-site configuration is supported using Quorum Server to prevent split-brain, the condition in which two equal-sized subgroups of cluster nodes re-form a cluster independently of each other.
• Nortel OPTera DWDM switches were used for initial testing and certification.
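The arbitration behavior can be pictured with the minimal Python sketch below. It illustrates only the general majority-plus-tie-breaker rule described above; it is not Serviceguard's actual algorithm, and the partition names, node counts, and tie-breaker outcome are hypothetical.

# Conceptual illustration only (not Serviceguard's actual algorithm):
# when a failure splits the cluster into subgroups, a subgroup with a strict
# majority of nodes re-forms the cluster on its own; an exact 50/50 split is
# resolved by an external tie-breaker (the Quorum Server or a cluster lock disk).

from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    node_count: int

def may_reform(partition: Partition, total_nodes: int, wins_tie_breaker: bool) -> bool:
    """Return True if this subgroup is allowed to re-form the cluster."""
    if 2 * partition.node_count > total_nodes:
        return True                      # strict majority: no arbitration needed
    if 2 * partition.node_count == total_nodes:
        return wins_tie_breaker          # 50/50 split: only the arbitration winner survives
    return False                         # minority subgroup must halt

# Example: a four-node extended cluster split evenly across two data centers,
# with the tie-breaker awarded to site A.
site_a = Partition("site A", 2)
site_b = Partition("site B", 2)
print(may_reform(site_a, total_nodes=4, wins_tie_breaker=True))    # True: site A re-forms the cluster
print(may_reform(site_b, total_nodes=4, wins_tie_breaker=False))   # False: site B halts (no split-brain)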
Supported software configuration
• Operating system: HP-UX 11.11 or HP-UX 11i v2.0
• Shared Logical Volume Manager (with physical volume links and HP MirrorDisk/UX; HP StorageWorks SecurePath with EVA storage)
• VERITAS VxVM/CVM 3.5 (with DMP and mirroring) for cluster volume management
• HP Serviceguard Quorum Server or dual cluster lock disks for cluster arbitration (When dual cluster lock disks are used, one lock disk is required at each data center to enable recovery from the failure of an entire data center.)
• HP Serviceguard and HP SGeRAC A.11.15 or later
• Oracle RAC 9.2 or later
• HP StorageWorks Extended Fabric on Fibre Channel switches (HP StorageWorks Extended Fabric
enables dynamically allocated long-distance configurations in a Fibre Channel switch.)
Test descriptions
The following categories of testing were performed:
• IPC (Inter-Process Communication) tests
• I/O tests
• Online transaction processing (OLTP)-like workload tests
• Failover tests
IPC test description
Raw IPC throughput was evaluated using CRTEST, a micro-level performance benchmark. The test first updates a set of blocks in a hash cluster table on instance A, and then an increasing number of clients running SELECT statements are started on instance B. These queries cause messages to be sent from instance B to instance A, and instance A returns a consistent read (CR) block for each request. CR Fairness Down Converts were disabled for this test to reduce the exchange to a fundamental dialog: send a message, receive a CR block in return.
The CRTEST (IPC) tests were performed first using one Gigabit Ethernet network as the cluster interconnect, and then repeated using two interconnects.
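CRTEST is the micro-level benchmark used in this testing; the Python sketch below is only a rough illustration of the access pattern it exercises, assuming a hypothetical two-instance RAC service (connect strings rac_inst_a and rac_inst_b), a hash cluster table t(id, val), and the cx_Oracle driver. It is not the actual benchmark code.

# Rough sketch of the CRTEST access pattern, not the HP/Oracle benchmark itself.
# Instance A dirties a set of blocks; an increasing number of SELECT clients on
# instance B then force consistent read (CR) blocks across the cluster interconnect.

import threading, time
import cx_Oracle

DSN_A = "scott/tiger@rac_inst_a"   # hypothetical connect strings for the two instances
DSN_B = "scott/tiger@rac_inst_b"
ROW_IDS = range(1, 1001)

def dirty_blocks(conn):
    """Update rows through instance A so its cache holds the current block versions."""
    cur = conn.cursor()
    cur.executemany("UPDATE t SET val = val + 1 WHERE id = :1", [(i,) for i in ROW_IDS])
    # The transaction is intentionally left open: SELECTs routed to instance B
    # now need CR copies of these blocks from instance A.

def reader(completed):
    """One SELECT client on instance B; each query triggers a CR block request."""
    with cx_Oracle.connect(DSN_B) as conn:
        cur = conn.cursor()
        for i in ROW_IDS:
            cur.execute("SELECT val FROM t WHERE id = :1", (i,))
            cur.fetchone()
            completed.append(1)

def run(clients):
    completed = []
    start = time.time()
    threads = [threading.Thread(target=reader, args=(completed,)) for _ in range(clients)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    rate = len(completed) / (time.time() - start)
    print(f"{clients:3d} clients on instance B: {rate:,.0f} CR requests/s")

if __name__ == "__main__":
    writer = cx_Oracle.connect(DSN_A)
    dirty_blocks(writer)                 # keep the updating transaction open
    for n in (1, 2, 4, 8, 16):           # ramp up the client count, as in the test
        run(n)
    writer.rollback()
    writer.close()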
I/O test description
The I/O tests were executed using the Diskbench (db) disk subsystem performance measurement tool.
The db tool measures the performance of a disk subsystem, host bus adapter, and driver in terms of throughput (for sequential operation) and I/O operations per second (for random operation). Diskbench can
evaluate the performance of kernel drivers and can be used on one-way or multiprocessor systems to
completely saturate the processors and effectively measure the efficiency of a disk subsystem and
associated drivers. The tests were performed using a mix of 60% reads and 40% writes to simulate
“real-world” traffic.
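Diskbench itself is not shown here; the simplified Python sketch below only illustrates the style of measurement described above: random I/Os against a test file with a 60% read / 40% write mix, reported as an I/O rate. The file path, working-set size, and 8 KB block size are illustrative assumptions, and unlike Diskbench this sketch runs through the file system cache rather than exercising raw devices and kernel drivers directly.

# Simplified mixed read/write micro-benchmark (illustrative stand-in for Diskbench).

import os, random, time

PATH       = "/tmp/iotest.dat"    # place this file on the device under test
FILE_SIZE  = 256 * 1024 * 1024    # 256 MB working set
BLOCK_SIZE = 8 * 1024             # 8 KB I/Os
DURATION   = 10                   # seconds per run
READ_PCT   = 0.60                 # 60% reads / 40% writes, as in the tests

def prepare():
    """Create a sparse test file of the working-set size."""
    with open(PATH, "wb") as f:
        f.truncate(FILE_SIZE)

def run():
    fd = os.open(PATH, os.O_RDWR)
    buf = os.urandom(BLOCK_SIZE)
    blocks = FILE_SIZE // BLOCK_SIZE
    ios, end = 0, time.time() + DURATION
    while time.time() < end:
        offset = random.randrange(blocks) * BLOCK_SIZE   # random access pattern
        if random.random() < READ_PCT:
            os.pread(fd, BLOCK_SIZE, offset)             # read path
        else:
            os.pwrite(fd, buf, offset)                   # write path
        ios += 1
    os.close(fd)
    print(f"{ios / DURATION:,.0f} I/Os per second "
          f"({BLOCK_SIZE // 1024} KB, {int(READ_PCT * 100)}% read / "
          f"{int((1 - READ_PCT) * 100)}% write)")

if __name__ == "__main__":
    prepare()
    run()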
OLTP test description
The industry-standard TPC-C benchmark was used to simulate OLTP transactions over the cluster
interconnects and DWDM network. TPC-C can emulate multiple transaction types and complex
database structures. The benchmark involves a mix of five concurrent transactions of varying type and