
- log - Log a message. A message is logged to the package
log every time a hang is detected. If the MONITOR_INTERVAL
attribute is set to 30 seconds, a message is logged to the
package log file every 30 seconds.
- alert - Send an alert mail. An alert mail is sent to
the email address specified with the ALERT_MAIL_ID attribute.
The mail is sent only the first time a database hang is detected.
- failover - Fail over the package to the adoptive node.
The default value for ACTION is 'failover'.
The syntax of the service command is as follows:
service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.sh
oracle_hang_monitor <TIMEOUT> <ACTION>"
The following is an example in which TIMEOUT is set to 40 seconds and
ACTION is set to 'alert':
service_name db_hang_check
service_cmd "$SGCONF/scripts/ecmt/oracle/tkit_module.sh
oracle_hang_monitor 40 alert"
service_restart none
service_fail_fast_enabled no
service_halt_timeout 300
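If the 'alert' action is configured, the destination address is taken
from the ALERT_MAIL_ID attribute in the same package configuration
file. The following is a hedged sketch only; the attribute's fully
qualified name and the address shown are illustrative and may differ
in the shipped attribute definition file:
ecmt/oracle/oracle/ALERT_MAIL_ID   dba@example.com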
- Module Script (tkit_module.sh)
This script is called by the Master Control Script and acts as an
interface between the Master Control Script and the Toolkit interface
script (toolkit.sh). It is also responsible for calling the Toolkit
Configuration File Generator Script (described below).
- Toolkit Configuration File Generator Script (tkit_gen.sh)
This script is called by the Module Script when the package
configuration is applied using 'cmapplyconf' to generate the user
configuration file in the package directory (TKIT_DIR).
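For reference, a hedged sketch of how these scripts come into play
(the module name, file path, and package name below are illustrative
and may differ in your installation): the Oracle module is included
in the modular package configuration file, and the package file is
then checked and applied with the standard Serviceguard commands.
module_name     ecmt/oracle/oracle
# cmcheckconf -P /etc/cmcluster/pkg/oracle_pkg.conf
# cmapplyconf -P /etc/cmcluster/pkg/oracle_pkg.conf
Running 'cmapplyconf' is the point at which the Module Script calls
the Toolkit Configuration File Generator Script to write the user
configuration file into TKIT_DIR.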
iii. Oracle Package Configuration Example
- Package Setup and Configuration
1. Assuming Oracle is already installed in its default home
directory (for example, /home/oracle), perform the following steps
to make the necessary directories shareable by all clustered nodes.
If you are using LVM/VxVM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Follow the instructions in the chapter "Building an HA Cluster
Configuration" in the manual "Managing Serviceguard" to create a
logical volume infrastructure on a shared disk. The disk must be
available to all clustered nodes that will be configured to run