Using Serviceguard Extension for RAC, 8th Edition, March 2009

For SLVM: F + SLVM timeout + 15 seconds
For CVM/CFS: (3 x F) + 15 seconds
When both SLVM and CVM/CFS are used, take the larger of the two values.
NOTE:
1. “F” is the Serviceguard failover time, as given by the max_reformation_duration
field in the output of cmviewcl -v -f line.
2. The SLVM timeout is documented in the whitepaper LVM link and Node Failure Recovery Time.
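As a rough illustration, the formulas above can be checked with a small shell calculation. The values of F and the SLVM timeout below are hypothetical; on a live cluster, F would come from the max_reformation_duration field of cmviewcl -v -f line output.

```shell
# Hypothetical inputs: F = 20 seconds, SLVM timeout = 60 seconds.
F=20
SLVM_TIMEOUT=60

slvm_val=$((F + SLVM_TIMEOUT + 15))   # SLVM formula: F + SLVM timeout + 15
cvm_val=$((3 * F + 15))               # CVM/CFS formula: (3 x F) + 15

# When both SLVM and CVM/CFS are in use, take the larger of the two values.
if [ "$slvm_val" -gt "$cvm_val" ]; then
    timeout=$slvm_val
else
    timeout=$cvm_val
fi
echo "$timeout"   # prints 95 for these inputs
```

With these sample values, the SLVM formula gives 20 + 60 + 15 = 95 seconds and the CVM/CFS formula gives 60 + 15 = 75 seconds, so 95 seconds is used.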
Limitations of Cluster Communication Network Monitor
The Cluster Interconnect Monitoring feature does not coordinate with any other feature that
handles subnet failures (including other instances of itself). When multiple subnets fail, this
uncoordinated failure handling may result in a loss of services. For example:
A double switch failure causes the CSS-HB subnet and the SG-HB subnet to fail simultaneously
on all nodes of a two-node cluster (assuming the CSS-HB subnet is different from the SG-HB
subnet). Serviceguard may choose to retain one node, while the failure handling for the
interconnect subnets might choose to retain the other node to handle the CSS-HB network failure.
As a result, both nodes go down.
NOTE: To reduce the risk of failure of multiple subnets simultaneously, each subnet must
have its own networking infrastructure (including networking switches).
A double switch failure causing the simultaneous failure of the CSS-HB subnet and the RAC-IC
network on all nodes may result in a loss of services (assuming the CSS-HB subnet is different
from the RAC-IC network). The failure handling for the interconnect subnets might choose to
retain one node for the CSS-HB subnet failure and to retain the RAC instance on a different node
for the RAC-IC subnet failure. Because a database instance can run only on a node where the
clusterware is running, the database instance ends up running on no node at all.
Cluster Interconnect Monitoring Restrictions
In addition to the above limitations, the Cluster Interconnect Monitoring feature has the following
restriction:
A cluster lock (cluster lock device, Quorum Server, or lock LUN) must be configured in the cluster.
Creating a Storage Infrastructure with LVM
In addition to configuring the cluster, you create the appropriate logical volume infrastructure
to provide access to data from different nodes. This is done with Logical Volume Manager (LVM),
Veritas Cluster Volume Manager (CVM), or Veritas Volume Manager (VxVM). LVM and VxVM
configuration is done before cluster configuration; CVM configuration is done after cluster
configuration.
This section describes how to create LVM volume groups for use with Oracle data. Separate
procedures are given for the following:
• Building Volume Groups for RAC on Mirrored Disks
• Building Mirrored Logical Volumes for RAC with LVM Commands
• Creating RAC Volume Groups on Disk Arrays
• Creating Logical Volumes for RAC on Disk Arrays
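As a minimal sketch of the kind of LVM commands these procedures use (the device file names, volume group name, minor number, and sizes below are hypothetical, and the exact options for mirrored disks versus disk arrays are given in the sections listed above), a shared volume group for RAC data might be built roughly as follows:

```shell
# On the configuration node: initialize the disk and create the volume group.
# /dev/rdsk/c1t2d0 and vg_rac are hypothetical names.
pvcreate -f /dev/rdsk/c1t2d0
mkdir /dev/vg_rac
mknod /dev/vg_rac/group c 64 0x080000   # minor number must be unique per volume group
vgcreate /dev/vg_rac /dev/dsk/c1t2d0

# Create a logical volume for Oracle data (size in MB).
lvcreate -L 1024 -n lvol_oradata /dev/vg_rac

# Deactivate the volume group and export a map file for the other nodes.
vgchange -a n /dev/vg_rac
vgexport -p -s -m /tmp/vg_rac.map /dev/vg_rac

# On each of the other nodes: create the group file, then import the volume group.
# mkdir /dev/vg_rac
# mknod /dev/vg_rac/group c 64 0x080000
# vgimport -s -m /tmp/vg_rac.map /dev/vg_rac
```

This is a configuration sketch only; follow the appropriate procedure above for your disk hardware, and follow the Oracle documentation for the logical volume layout and ownership required by RAC.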
The Event Monitoring Service HA Disk Monitor provides the capability to monitor the health
of LVM disks. If you intend to use this monitor for your mirrored disks, you should configure
them in physical volume groups. For more information, refer to the manual Using HA Monitors.