Architecture considerations and best practices for architecting an Oracle RAC solution with Serviceguard and SGeRAC
The Oracle VIP must be configured on the public network for each RAC instance. Clients use the VIP to
connect to the RAC database. If a node fails, CRS fails the VIP over to another cluster node to provide
an immediate node-down response to clients' connection requests. This increases the availability of
the other RAC instances to clients, because clients no longer have to wait for a network timeout
before the connection request fails over to another instance in the cluster. When a RAC instance is
configured on a cluster node, the VIP specified for that node during the Oracle Clusterware
installation is used as the instance's dependent resource. Note that all RAC instances (for
different RAC databases) running on the same system depend on the same VIP configured for that
system. If the VIP fails, all RAC instances that use it will also fail.
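The VIP configuration and status on each node can be verified with Oracle's srvctl utility; a sketch, where "node1" is a placeholder for an actual cluster node name:

```shell
# Check the status of the node applications (VIP, GSD, ONS, listener) on a node.
srvctl status nodeapps -n node1
# Show the VIP configured for the node (-a displays the VIP details).
srvctl config nodeapps -n node1 -a
```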
Clients that are integrated with Oracle Fast Application Notification (FAN), or that use the FAN API,
can interrupt existing sessions and fail over. Remote VIP failover is useful for non-FAN clients
attempting to connect to the local node, because it avoids the TCP connect timeout.
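For non-FAN clients, connect-time failover across the node VIPs is typically configured in tnsnames.ora. The following is a hedged sketch; the net service name, VIP host names, and service name are placeholders:

```
RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = ON)
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
    )
  )
```

Because the VIP fails over immediately on node failure, a connection attempt to the failed node's VIP is refused quickly and the client moves on to the next address in the list instead of waiting for a TCP timeout.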
Previously, the Oracle VIP and a Serviceguard relocatable IP address (RIP) could not exist on the same
subnet because of potential collisions during IP address configuration. This issue was addressed in
Oracle 10.2.0.2 and later for the HP Integrity platform, and in Oracle 10.2.0.3 and later for the
HP 9000 platform.
It is preferable to configure all interconnect traffic for cluster communications on a single redundant
heartbeat network. This allows Serviceguard to monitor the network and quickly resolve
interconnect failures, by cluster re-formation if necessary. This is the recommended and most
common configuration.
The following are examples of cases where it is not possible to place all interconnect traffic on the
same network:
• RAC GCS (Cache Fusion) traffic may be very high, so a separate network for the RAC interconnect,
which carries GCS traffic, may be needed.[2]
• Some types of networks are not supported by CFS/CVM, so the RAC interconnect traffic may be on
a separate network.
• Traffic from one RAC interconnect may interfere with another RAC interconnect on the same
cluster.
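When RAC interconnect traffic must be placed on its own network, the CLUSTER_INTERCONNECTS initialization parameter (see footnote 2) can direct each instance's interconnect traffic to a specific private network. A hedged sketch; the IP addresses and instance SIDs are placeholders:

```sql
-- Pin each instance's interconnect traffic to a dedicated private network.
-- Addresses and SIDs below are illustrative placeholders.
ALTER SYSTEM SET CLUSTER_INTERCONNECTS = '192.168.10.1' SCOPE=SPFILE SID='rac1';
ALTER SYSTEM SET CLUSTER_INTERCONNECTS = '192.168.10.2' SCOPE=SPFILE SID='rac2';
-- The instances must be restarted for the new setting to take effect.
```

Note that setting this parameter overrides the interconnect selected by Oracle Clusterware, so it should be used only when the default placement is unsuitable.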
Storage
Starting with Oracle 10g, Oracle provides its own storage management with a new feature called
Automatic Storage Management (ASM). ASM provides some file system and volume management
capabilities for Oracle database files only.[3] These include DB control files, redo logs, archived redo
logs, data files, spfiles, and Oracle Recovery Manager (RMAN) backup files. ASM cannot be used for
Oracle executables and non-database files.
ASM does not have multi-pathing capability; it assumes the underlying OS will provide this
functionality. On HP-UX, multi-pathing is provided by a volume manager feature such as PVLinks in the
HP-UX Logical Volume Manager (LVM) or Dynamic Multipathing (DMP) in VERITAS Volume Manager
from Symantec (VxVM), or by other third-party software such as Securepath or Powerpath. Starting
with HP-UX 11i v3, Native Multipathing is built into the OS and can be used to provide multi-pathing
for ASM.
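As a hedged illustration, with HP-UX 11i v3 Native Multipathing an ASM disk group can be built directly on the agile (persistent) device special files, which present one path-independent device per LUN; the disk group and device names below are placeholders:

```sql
-- Create an ASM disk group on HP-UX 11i v3 agile DSFs (/dev/rdisk/diskN),
-- which the OS keeps multi-pathed transparently. Run in the ASM instance;
-- the device names are illustrative placeholders.
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '/dev/rdisk/disk10', '/dev/rdisk/disk11';
```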
On HP-UX, an ASM file system layer is implicitly created within a disk group. This file system is
transparent to users and is accessible only through the ASM instance, interfacing databases, and ASM
utilities. For example, database backups of ASM files can be performed only with RMAN.
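A minimal RMAN session backing up a database whose files reside in ASM might look like the following sketch; the "+FRA" flash recovery area disk group name is an assumption:

```
$ rman target /
RMAN> BACKUP DATABASE FORMAT '+FRA/%U';
RMAN> LIST BACKUP SUMMARY;
```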
2. See CLUSTER_INTERCONNECTS, page 5-11, Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide,
10g Release 2 (10.2) (http://download-west.oracle.com/docs/cd/B19306_01/rac.102/b14197.pdf); see also Administering
Multiple Cluster Interconnects on Linux and UNIX Platforms, page 3-16, Oracle Real Application Clusters Administration and Deployment Guide,
11g Release 1 (11.1), B28254-04, January 2009 (http://download.oracle.com/docs/cd/B28359_01/rac.111/b28254.pdf)
3. Starting with RAC 11gR2, Oracle introduced the ASM/ACFS Cluster File System for Linux, Solaris, and AIX, providing support for storing
non-database files. As of RAC 11gR2, ACFS is not supported on HP-UX.