Using its intelligent policy engine, the WLM component of the VSE allocates assets across the virtual resource pool in real time, yielding higher server utilization, increasing return on IT investment, and improving the enterprise's ability to accommodate business volatility.
If an entire data center is lost, the remaining data center continues to operate while users are rerouted to it, providing continuous availability across the two data centers. When the failed data center comes back online, resynchronization takes place automatically and is transparent to users.
Administration of the overall environment is greatly simplified because the application resides on a single data repository. Unnecessarily replicated databases, and the duplicated management chores that accompany them, are all but eliminated. Even when spread across a distance of a hundred kilometers, the Oracle9i database remains a single database, with inherent economies of system administration over solutions that maintain multiple separate copies.
The move to a 100-km separation thus represents a dramatic increase in overall application
resiliency. The ability to place such a large distance between two data centers ensures that only the
most widespread disaster will impact more than one of the installations.
The tests
Testing methodology
HP's depth of experience with extended clusters and SGeRAC was leveraged in developing the test plans for the extended-distance testing. Working with partners Oracle, AT&T, and Nortel, HP focused the tests on demonstrating a robust solution capable of maximizing resource utilization between remote data centers separated by up to 100 km. Particular attention was paid to high availability, disaster tolerance, and performance across this extended distance.
The underlying premise of the testing was to validate the remotely distributed cluster's ability to sustain full functionality while being subjected to failovers. Testing was performed at distances of 25, 50, and 100 km to validate availability and performance and to measure network latency.
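As a rough point of reference (not drawn from the test data), the propagation delay contributed by the fiber itself can be estimated from the distance alone; the sketch below assumes a typical figure of about 5 microseconds of one-way delay per kilometer of fiber.

```python
# Back-of-the-envelope estimate of fiber propagation delay for the tested distances.
# Assumes ~5 microseconds of one-way delay per km (typical for light in glass fiber);
# real-world latency also includes DWDM, switch, and protocol overheads.

ONE_WAY_US_PER_KM = 5.0  # assumed one-way propagation delay, microseconds per km

for distance_km in (25, 50, 100):
    one_way_ms = distance_km * ONE_WAY_US_PER_KM / 1000.0
    print(f"{distance_km:>3} km: ~{one_way_ms:.2f} ms one-way, ~{2 * one_way_ms:.2f} ms round trip")
```

At 100 km this amounts to roughly 0.5 ms one way (about 1 ms round trip) before any equipment or protocol overhead, which is the baseline the cluster interconnect and remote mirroring traffic must absorb.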
After the link between the data centers was established, a series of loading and failure scenarios was executed to validate the expected disaster-tolerant characteristics of the configuration. The scenarios simulated failure-inducing events on a loaded configuration, events that would traditionally prove catastrophic in a non-SGeRAC environment. Each test was designed to generate significant transaction-based traffic, trigger a set of failure-inducing events, and then validate data integrity during and after the failure.
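The test scripts themselves are not reproduced here; the sketch below is only a hypothetical outline of such a scenario, assuming a generic Python driver in which generate_transactions(), inject_failure(), and verify_integrity() stand in for the real load generator, failure trigger, and data-integrity check.

```python
# Hypothetical test-scenario outline (not the actual HP/SGeRAC test code):
# drive transaction load, inject a failure mid-run, then verify data integrity.
import threading
import time

def generate_transactions(stop_event, committed):
    """Placeholder load generator: records each committed transaction id."""
    txn_id = 0
    while not stop_event.is_set():
        txn_id += 1                # in a real test: commit an INSERT/UPDATE against the database
        committed.append(txn_id)   # keep a client-side record for later verification
        time.sleep(0.01)

def inject_failure():
    """Placeholder for the failure-inducing event (node crash, link cut, etc.)."""
    print("injecting simulated failure")

def verify_integrity(committed):
    """Placeholder integrity check: every recorded transaction id is present and in order."""
    # in a real test: re-query the surviving instance and reconcile against this record
    return committed == list(range(1, len(committed) + 1))

stop = threading.Event()
committed = []
loader = threading.Thread(target=generate_transactions, args=(stop, committed))
loader.start()

time.sleep(1)        # let the transaction load ramp up
inject_failure()     # failure occurs while the configuration is under load
time.sleep(1)        # allow failover/reconfiguration to complete

stop.set()
loader.join()
print("data integrity preserved:", verify_integrity(committed))
```

In the actual testing, the load, the failure-inducing event, and the integrity check would be directed at the RAC database across both data centers rather than at an in-memory record as in this sketch.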