Owner's Manual
Deployment Guide
Configuring Shared Storage Using OCFS
If you are using Oracle Cluster File System (OCFS) for CRS, quorum, or database files, ensure
that the new nodes can access the cluster file systems in the same way as the existing nodes.
1 Edit the /etc/fstab file on the new node and add the OCFS volume information exactly as it
appears on the existing nodes. For example:

/dev/emcpowera1 /u01 ocfs _netdev 0 0
/dev/emcpowerb1 /u02 ocfs _netdev 0 0
/dev/emcpowerc1 /u03 ocfs _netdev 0 0
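Rather than editing the file by hand, the entries above can be appended with a short script. This is only a sketch (the device names and mount points are the examples from this guide), shown here against a scratch copy of the file; on the new node, FSTAB would point at /etc/fstab.

```shell
# Sketch: append each OCFS entry only if it is not already present.
# FSTAB is a scratch copy here; on the node it would be /etc/fstab.
FSTAB=$(mktemp)
echo "/dev/sda1 / ext3 defaults 1 1" > "$FSTAB"

for entry in \
    "/dev/emcpowera1 /u01 ocfs _netdev 0 0" \
    "/dev/emcpowerb1 /u02 ocfs _netdev 0 0" \
    "/dev/emcpowerc1 /u03 ocfs _netdev 0 0"
do
    # grep -qF: quiet, fixed-string match, so the entry is added once
    grep -qF "$entry" "$FSTAB" || echo "$entry" >> "$FSTAB"
done
```

Because of the `grep` guard, running the loop a second time leaves the file unchanged.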
2 Create the OCFS mount points on the new node as they exist on the existing nodes
(for example, /u01, /u02, and /u03).
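Creating the mount points is a single mkdir call per directory; a minimal sketch, using a temporary directory as the root so it can run anywhere (on the real node you would create /u01, /u02, and /u03 directly):

```shell
# Sketch: create the OCFS mount points. ROOT is a scratch directory so
# the example is self-contained; on the new node the paths are absolute
# (/u01, /u02, /u03).
ROOT=$(mktemp -d)
for mp in u01 u02 u03; do
    mkdir -p "$ROOT/$mp"
done
```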
3 Run ocfstool to generate the OCFS configuration file /etc/ocfs.conf by performing
the following steps:
a Type startx to start the X Window System.
b Open a terminal window and type:

ocfstool
c From the menu, click Tools, and then click Generate Config.
d Enter the private IP address and private host name of the node and click OK.
e Click Exit.
4 Type the following commands to load the OCFS module and mount all volumes listed
in the /etc/fstab file:
/sbin/load_ocfs
mount -a -t ocfs
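mount -a -t ocfs mounts every fstab entry whose file-system type is ocfs. To preview which volumes that will be, you can filter the file on its third field; a sketch against a sample fstab (on the node, read /etc/fstab instead):

```shell
# Sketch: print the mount points that `mount -a -t ocfs` would act on.
# A sample fstab is written here; on the node, point awk at /etc/fstab.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/sda1 / ext3 defaults 1 1
/dev/emcpowera1 /u01 ocfs _netdev 0 0
/dev/emcpowerb1 /u02 ocfs _netdev 0 0
/dev/emcpowerc1 /u03 ocfs _netdev 0 0
EOF
# Field 3 of each fstab line is the file-system type; field 2 is the
# mount point.
awk '$3 == "ocfs" { print $2 }' "$FSTAB"
```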
Adding a New Node to the Clusterware Layer
1 Log in as oracle.
2 From the /opt/oracle/product/10.1.0/crs_1/oui/bin directory on one of the existing
nodes, type addNode.sh to start the Oracle Universal Installer.
3 In the Welcome window, click Next.
4 In the Specify Cluster Nodes for Node Addition window, enter the public and private
node names for the new node and click Next.
If all the network and storage verification checks pass, the Node Addition Summary
window appears.
5 Click Next.
The Cluster Node Addition Progress window displays the status of the cluster node
addition process.
6 When prompted, run rootaddnode.sh on the local node.
When rootaddnode.sh finishes running, click OK.