Figure 3 is a variation of figure 2 in which the CSS-HB and RAC-DB1-IC reside on the same
subnet as the SG-HB. The RAC-DB2-IC serves a second database and is on a separate network, so it
does not affect the heartbeat and RAC-DB1-IC traffic. If the primary (LAN1) fails, SG performs a local
LAN failover to the standby. If both the primary (LAN1) and the standby (LAN2) fail, RAC IMR
reconfigures the instance membership and evicts the affected database instance.
The advantage of this configuration is that traffic for the second RAC database instance is separated
from the heartbeat traffic, so it cannot interfere with cluster heartbeats. An SG package can be
configured to monitor the RAC-DB2-IC subnet, using the cluster interconnect subnet monitoring feature,
so that a failure is detected before the IMR timeout. If the RAC-DB2-IC subnet fails (both primary and
standby), the SG package shuts down the RAC instance before the IMR timeout expires.
This configuration is intended for heavily loaded networks where RAC-DB-IC traffic would otherwise
interfere with heartbeats and other cluster communications. Placing the RAC-DB-IC traffic on a
separate network allows shorter SG-HB timeout values. The drawback is that the RAC-DB-IC subnet must
be monitored so that network failures are detected sooner than the IMR timeout and recovery can start
earlier.
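As an illustration, interconnect subnet monitoring of this kind might be expressed in a modular SG
package with a stanza along the following lines; the package name and subnet address are placeholders,
not values taken from this configuration:

    # Illustrative modular package stanza (name and address are placeholders)
    package_name        rac-db2-mnp
    package_type        multi_node

    # Monitor the subnet carrying the RAC-DB2-IC traffic. If the subnet
    # fails on a node (primary and standby), the package halts there,
    # shutting down the RAC instance before the IMR timeout expires.
    monitored_subnet    192.168.20.0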
Storage high availability
Storage HA is available at several levels, as follows:
• Redundant links to the same disk device through multipathing
• Storage arrays that provide redundancy at the disk level
• Volume Manager mirroring to multiple devices, for example using CVM
• Multiple copies on multiple disks
OC relies on the underlying platform to provide transparent redundant links to the same device. OC
provides redundancy for the Oracle Cluster Registry (OCR) through its own mirroring capability, so
disks from JBODs (“just a bunch of disks”) can be used. When JBODs are used and the disks themselves
are not highly available, OCR requires two physical disks to protect against link or disk failure. In
addition, three voting disks, each on a separate physical disk, are required.
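As a hedged example, with OCR and voting files placed on a cluster file system, additional OCR and
voting disk locations can typically be added after installation with commands along these lines (run
as root with Oracle Clusterware running; the file paths are placeholders):

    # Add a second OCR location on a separate physical disk (path is illustrative)
    ocrconfig -add /cfs/ocr2/ocr_mirror

    # Add voting disks, each on a separate physical disk (paths are illustrative)
    crsctl add css votedisk /cfs/vote2/vdsk2
    crsctl add css votedisk /cfs/vote3/vdsk3

    # Verify the resulting voting disk configuration
    crsctl query css votedisk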
When redundant links and storage arrays are used, OC configuration is simplified: a single OCR and a
single voting disk can be configured, with HA provided by CVM or the storage array. Dynamic
multipathing is enabled by default for CVM disk groups. The CVM disk monitor can be used to detect
storage and root disk failures and trigger a TOC (transfer of control), which leads to quicker recovery.
Multiple nodes
Multiple nodes protect against failures at the node level. For cluster HA, at least two nodes are
required.
Power considerations
Redundant components should be separately powered so that a single power failure does not impact
all nodes, all switches, or all storage.
Storage requirements
When OC starts, it assumes that the required storage is already available; OC does not perform any
storage activation itself. The platform or the user is expected to activate the required storage before
OC starts. In SGeRAC configurations, packages are the mechanism used to activate storage before OC
starts. In CFS configurations, shared storage activation is performed by the CFS multi-node packages,
so the SG package that starts OC must declare a dependency on the relevant multi-node packages.
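As a sketch, such a dependency might be declared in the modular package configuration roughly as
follows; the package name and the CFS mount-point package name (SG-CFS-MP-1) are placeholders for the
names used in an actual cluster:

    # Package that starts OC (name is illustrative)
    package_name            crs-pkg
    package_type            multi_node

    # Do not start OC on a node until the CFS mount-point multi-node
    # package is up on that same node
    dependency_name         cfs_mp1_dep
    dependency_condition    SG-CFS-MP-1 = up
    dependency_location     same_node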