Performance analysis of the HP Serviceguard Storage Management Suite for Oracle Database

The most important factor in the performance improvement is Oracle's ODM (Oracle Disk Manager). ODM is a standard API
specified by Oracle for database I/O. When Oracle performs an I/O operation, it uses the vendor's
ODM library if one is provided. ODM improves both the performance and the manageability of the file system.
The HP Serviceguard Storage Management Suite for Oracle's implementation of ODM improves
performance by giving the database asynchronous, direct access to the underlying storage
without passing through the regular file system interface.
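To make the distinction concrete, here is a minimal, Linux-flavored Python sketch contrasting an ordinary buffered read with a direct read into a page-aligned buffer. The datafile path is purely illustrative, and the sketch shows only the "direct" part of the picture; an ODM library additionally queues such requests asynchronously through Oracle's ODM interface rather than issuing them one call at a time.

```python
import mmap
import os

BLOCK = 4096                       # direct I/O requires aligned, block-sized transfers
PATH = "/oradata/example01.dbf"    # hypothetical datafile path, for illustration only

# Buffered read: the data is staged through the operating system's file cache.
fd = os.open(PATH, os.O_RDONLY)
cached = os.pread(fd, BLOCK, 0)
os.close(fd)

# Direct read (Linux-style): a page-aligned buffer is handed straight to the
# storage stack, bypassing the file cache. An ODM library arranges I/O of this
# kind for Oracle, and also batches many such requests asynchronously instead
# of issuing them one system call at a time.
buf = mmap.mmap(-1, BLOCK)                      # anonymous mmap is page-aligned
fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
os.preadv(fd, [buf], 0)                         # read one aligned block at offset 0
os.close(fd)
```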
This paper compares database performance with ODM on CFS against Raw SLVM. We determined that
an OLTP application was an appropriate workload for this objective.
Important
As with any laboratory testing, the performance metrics quoted in this
paper are idealized. In a production environment, these metrics may be
impacted by a variety of factors.
HP recommends, as a matter of best practice for all application deployments,
proof-of-concept testing in a non-production environment using the actual
target application. Testing the actual target application in a test/staging
environment that is identical to, but isolated from, the production
environment is the most effective way to estimate system behavior.
Test methodology
The workload is an OLTP application with a read/write ratio that varies between 59/41 and
52/48. The workload was designed to put a reasonable load on both the storage and the database
servers in order to be representative of a typical customer environment. Nevertheless, we limited the
workload so that the storage was not stressed to the point of skewing the results. The fact that the I/O rate
scales nearly linearly validates our measurements. The database footprint (configured in RAID0+1 on
an HP StorageWorks 4000 Enterprise Virtual Array (EVA4000)) was about 250 GB.
To achieve proper scaling in a RAC environment, we partitioned two of the most transaction-intensive
tables; each instance accesses its own partition of those two tables.
For both CFS and Raw SLVM we conducted three series of tests (summarized in the sketch following the list):
Single node/instance database server: to minimize code-path changes and obtain proper
scaling numbers, Oracle was configured in RAC mode with only one cluster member (which adds
overhead compared to a true single-instance configuration). With one client system, we ran three distinct
load scenarios: 10, 20, and 40 users accessing one of the three database partitions.
Two node/instance RAC database servers: in this setup, two clients accessed two of the three
database partitions. Each client spawned 20, 40, and then 80 users, which simplifies the comparison
of results as the number of instances increases.
Three node/instance RAC database servers: three clients accessed all three database partitions. Each
client spawned 30, 60, and 120 users.
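For reference, the load matrix implied by these three series (number of client systems, and the users spawned per client at each load step) can be restated compactly; the values below simply repeat the scenarios described above.

```python
# Load matrix for the three test series: client systems used and the
# user counts spawned per client in each of the three load steps.
TEST_SERIES = {
    "single node":    {"clients": 1, "users_per_client": (10, 20, 40)},
    "two node RAC":   {"clients": 2, "users_per_client": (20, 40, 80)},
    "three node RAC": {"clients": 3, "users_per_client": (30, 60, 120)},
}
```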
We first ran the workload for 60 minutes to warm up the database. The measured runs
also lasted 60 minutes. Between tests the database was restored from a fresh backup.
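As an illustration of how each test case was sequenced, the hypothetical Python driver below strings these steps together. The callables run_workload and restore_database are placeholders, not part of the actual tooling; they stand in for the OLTP load client and the restore from backup.

```python
import time

WARMUP_MINUTES = 60     # warm-up period, results discarded
MEASURE_MINUTES = 60    # measured run

def run_test_case(clients, users_per_client, run_workload, restore_database):
    """Drive one test case: restore, warm up, then measure throughput.

    run_workload(clients, users_per_client, minutes) is assumed to drive the
    OLTP clients and return the number of transactions they completed;
    restore_database() is assumed to restore the database from backup.
    """
    restore_database()                                        # fresh database for every test
    run_workload(clients, users_per_client, WARMUP_MINUTES)   # warm-up run, not measured
    start = time.time()
    done = run_workload(clients, users_per_client, MEASURE_MINUTES)
    elapsed_minutes = (time.time() - start) / 60.0
    return done / elapsed_minutes                             # throughput in TPM
```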
When using CFS, the vendor-supplied ODM library must be linked in so that Oracle uses it
instead of the dummy library that ships with the product.
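The exact library names and locations vary with the Oracle release and the platform, so the Python sketch below uses purely illustrative paths. It shows only the general idea: preserve Oracle's dummy library and point the expected library name at the vendor-supplied ODM library, doing so while all Oracle instances on that node are shut down.

```python
import os
import shutil

# Illustrative paths only: the stub name depends on the Oracle release and the
# vendor ODM library location depends on the platform and product version.
ORACLE_STUB = "/opt/oracle/product/10.2.0/lib/libodm10.so"
VENDOR_ODM = "/opt/VRTSodm/lib/libodm.so"

def link_vendor_odm(stub: str, vendor: str) -> None:
    """Point Oracle's ODM library name at the vendor-supplied library.

    Run only while all Oracle instances on the node are shut down.
    """
    if not os.path.exists(vendor):
        raise FileNotFoundError(f"vendor ODM library not found: {vendor}")
    if os.path.exists(stub) and not os.path.islink(stub):
        shutil.move(stub, stub + ".orig")     # keep Oracle's dummy library as a backup
    elif os.path.islink(stub):
        os.remove(stub)                       # replace any existing link
    os.symlink(vendor, stub)

link_vendor_odm(ORACLE_STUB, VENDOR_ODM)
```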
We used a combination of Oracle and HP tools to collect statistics during the entire run of each test
case. We compared the results of the different runs using application throughput, measured in
transactions per minute (TPM).