
export ORACLE_BASE=/opt/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
export ORA_CRS_HOME=/opt/crs/oracle/product/10.2.0/crs
LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:$ORACLE_HOME/rdbms/lib
SHLIB_PATH=$ORACLE_HOME/lib32:$ORACLE_HOME/rdbms/lib32
export LD_LIBRARY_PATH SHLIB_PATH
export PATH=$PATH:$ORACLE_HOME/bin:$ORA_CRS_HOME/bin:/usr/local/bin:
CLASSPATH=$ORACLE_HOME/jre:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
export CLASSPATH
export ORACLE_SID=<set RAC database instance SID>
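These settings are typically placed in the Oracle software owner's shell profile on every node at the site (the ~oracle/.profile location is an assumption here). A minimal check, after logging in as the oracle user, is to confirm that the variables are exported:
$ env | grep -e ORACLE -e ORA_CRS -e SHLIB
$ echo $ORACLE_HOME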
Configuring the storage device for installing Oracle Clusterware
At each site, Oracle Clusterware is installed only on a local file system on the Clusterware
sub-cluster nodes of that site. Complete the following steps on all nodes at the site; a sample
command sequence follows the list:
1. Create a directory path for Oracle Clusterware Home, set an owner, and specify appropriate
permissions.
2. Create an Oracle directory to save installation logs, set an owner, and specify appropriate
permissions.
3. Create mount points on all nodes in the site for a CFS file system where the Clusterware
sub-cluster OCR and Voting files are stored.
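The following sketch shows one possible command sequence for these steps. It assumes the paths from the environment settings above, the oracle user and oinstall group, and the /cfs/sfo_crs mount point used later in this procedure; adjust names, owners, and permissions to match your installation.
Create the Clusterware home (this matches ORA_CRS_HOME set earlier):
# mkdir -p /opt/crs/oracle/product/10.2.0/crs
# chown -R oracle:oinstall /opt/crs/oracle
# chmod -R 775 /opt/crs/oracle
Create a directory for installation logs (the oraInventory path here is an assumption):
# mkdir -p /opt/app/oracle/oraInventory
# chown oracle:oinstall /opt/app/oracle/oraInventory
# chmod 775 /opt/app/oracle/oraInventory
Create the CFS mount point used later in this procedure:
# mkdir -p /cfs/sfo_crs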
Setting up CRS OCR and VOTING directories
The shared storage for OCR and VOTING data can be configured using SLVM, CVM, or CFS.
When using SLVM or CVM, a separate SLVM volume group or CVM disk group, with all the
required raw volumes, must be configured on non-replicated disks. For more information about
using raw devices for OCR and VOTING storage, see the Oracle® Clusterware Installation Guide
available at the Oracle documentation website. This CRS storage does not need to be replicated
in SADTA.
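Before configuring this storage, it can help to confirm that the chosen disk is visible on the nodes at the site and is not already in use by another disk group. The commands below are a sketch using the c4t0d3 device from the example that follows; whether the disk is replicated must be verified with the array's replication tools (for example, RAID Manager):
# ioscan -fnC disk
# vxdisk -o alldgs list | grep c4t0d3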
NOTE: The following example shows how to configure CFS for OCR and VOTING data using the
legacy package style; a verification sketch follows the steps. It is recommended to use the modular
style of packaging wherever possible.
1. Initialize the disk that is used for the CFS file system from the CVM master node at the site.
# /etc/vx/bin/vxdisksetup -i c4t0d3
NOTE: This disk must be a non-replicated shared disk that is connected only to the nodes
in the Clusterware sub-cluster site.
2. From the site CVM master node, create the CRS disk group.
# vxdg -s init sfo_crsdg c4t0d3
3. Create the Serviceguard Disk Group MNP packages for the disk group.
# cfsdgadm add sfo_crsdg sfo_crs_dg all=sw SFO_1 SFO_2
4. Activate the CVM DG in the site CFS sub-cluster.
# cfsdgadm activate sfo_crsdg
5. Create a volume for the CRS disk group.
# vxassist -g sfo_crsdg make crs_vol 500m
6. Create a file system using the created volume.
# newfs -F vxfs /dev/vx/rdsk/sfo_crsdg/crs_vol
7. Create Serviceguard Mount Point MNP packages for the clustered file system.
# cfsmntadm add sfo_crsdg crs_vol /cfs/sfo_crs sfo_crs_mp all=rw \
SFO_1 SFO_2
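After the disk group and mount point packages are created, the CFS configuration can be reviewed from any node in the site. The commands below are a sketch using the package names from this example:
# cfsdgadm display
# cfsmntadm display
# cmviewcl -v -p sfo_crs_mp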