Using the Oracle Toolkit in a HP Serviceguard Cluster README Revision: B.06.00, August 2010

2. Make sure that the 'oracle' user has the same user id and group
id on all nodes in the cluster.
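As an illustrative check (node names here are placeholders, not part of this toolkit), the uid/gid can be compared across nodes with:

```shell
# Hypothetical node names; replace with the actual cluster members.
# The uid and gid printed for 'oracle' must match on every node.
for node in node1 node2; do
    remsh $node id oracle
done
```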
3. Some of the possible configurations:
Configuration of shared file system using LVM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create a volume group, logical volume and file system to hold the
necessary configuration information and symbolic links to the Oracle
executables. This file system will be defined as ORACLE_HOME in the
package control scripts. Since the volume group and file system must
be uniquely named within the cluster, use the name of the database
instance (SID_NAME) in the names:
Assuming that the name of the database is 'ORACLE_TEST0', create the
following:
A volume group: /dev/vg0_ORACLE_TEST0
A logical volume: /dev/vg0_ORACLE_TEST0/lvol1
A file system: /dev/vg0_ORACLE_TEST0/lvol1 mounted at
/ORACLE_TEST0
After the volume group, logical volume and file system have been
created on one node, the volume group must be imported to the other
nodes that will run this database. Create the directory /ORACLE_TEST0
on all nodes so that /dev/vg0_ORACLE_TEST0/lvol1 can be mounted on
whichever node the package is to run on.
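The LVM steps above might be sketched as follows. This is a hedged example only: the physical device files, volume size and minor numbers are assumptions that must be adjusted to the actual hardware.

```shell
# On the node where the storage is first configured (run as root).
# /dev/dsk/c1t2d0 is an example device; substitute your own.
pvcreate /dev/rdsk/c1t2d0
mkdir /dev/vg0_ORACLE_TEST0
mknod /dev/vg0_ORACLE_TEST0/group c 64 0x010000   # minor number must be unique
vgcreate /dev/vg0_ORACLE_TEST0 /dev/dsk/c1t2d0
lvcreate -L 1024 /dev/vg0_ORACLE_TEST0            # size in MB; adjust as needed
newfs -F vxfs /dev/vg0_ORACLE_TEST0/rlvol1
mkdir /ORACLE_TEST0
mount /dev/vg0_ORACLE_TEST0/lvol1 /ORACLE_TEST0

# Export a map file, then import the volume group on each other node:
vgexport -p -s -m /tmp/vg0_ORACLE_TEST0.map /dev/vg0_ORACLE_TEST0
# Copy the map file to the other node, then on that node:
mkdir /dev/vg0_ORACLE_TEST0
mknod /dev/vg0_ORACLE_TEST0/group c 64 0x020000
vgimport -s -m /tmp/vg0_ORACLE_TEST0.map /dev/vg0_ORACLE_TEST0
mkdir /ORACLE_TEST0
```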
Configuration of shared file system using VxVM
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create a disk group, logical volume and file system to hold the
necessary configuration information and symbolic links to the Oracle
executables. This file system will be defined as ORACLE_HOME in the
package control scripts. Since the disk group and file system must
be uniquely named within the cluster, use the name of the database
instance (SID_NAME) in the names:
Assuming that the name of the database is 'ORACLE_TEST0', create the
following:
A disk group: /dev/vx/dsk/DG0_ORACLE_TEST0
A logical volume: /dev/vx/dsk/DG0_ORACLE_TEST0/lvol1
A file system: /dev/vx/dsk/DG0_ORACLE_TEST0/lvol1 mounted at
/ORACLE_TEST0
After the disk group, logical volume and file system have been
created on one node, the disk group must be deported.
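The VxVM creation and deport steps might be sketched as follows. The disk access name and volume size are assumptions; substitute values appropriate to the environment.

```shell
# On the node where the storage is first configured (run as root).
# c1t2d0 is an example disk access name; substitute your own.
vxdg init DG0_ORACLE_TEST0 c1t2d0
vxassist -g DG0_ORACLE_TEST0 make lvol1 1024m     # size is an example
newfs -F vxfs /dev/vx/rdsk/DG0_ORACLE_TEST0/lvol1
mkdir /ORACLE_TEST0
mount /dev/vx/dsk/DG0_ORACLE_TEST0/lvol1 /ORACLE_TEST0

# Deport the disk group so that other cluster nodes can import it:
umount /ORACLE_TEST0
vxdg deport DG0_ORACLE_TEST0
```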
Issue the following command on all cluster nodes to allow them to
access the disk group: