Deployment Guide

Start Oracle ASM library driver on boot (y/n) [n]: y
A message appears prompting you to fix permissions of Oracle ASM disks on boot. Type y as shown below:
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
The following messages appear on the screen:
Writing Oracle ASM library driver configuration: [ OK ]
Creating /dev/oracleasm mount point: [ OK ]
Loading module "oracleasm": [ OK ]
Mounting ASMlib driver filesystem: [ OK ]
Scanning system for ASM disks: [ OK ]
3. Scan the ASM disks by typing:
/etc/init.d/oracleasm scandisks
Scanning system for ASM disks: [ OK ]
4. Verify that all the ASM disks are visible by typing:
/etc/init.d/oracleasm listdisks
A list of all the configured ASM disks appears.
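As a quick sanity check, you can compare the listdisks output against the disk labels configured on the existing nodes. The sketch below uses example labels (ASM1, ASM2) and a simulated output string; on a real node you would capture the live output of /etc/init.d/oracleasm listdisks instead.

```shell
#!/bin/sh
# Sketch: verify that the expected ASM disk labels appear in the
# listdisks output. ASM1/ASM2 are example labels; substitute the
# labels configured on the existing nodes.
expected="ASM1 ASM2"

# On a real node, capture the live output instead:
#   actual=$(/etc/init.d/oracleasm listdisks)
# Simulated output is used here for illustration only.
actual="ASM1
ASM2"

for disk in $expected; do
    if printf '%s\n' "$actual" | grep -qx "$disk"; then
        echo "$disk: visible"
    else
        echo "$disk: MISSING" >&2
    fi
done
```

If any label is reported missing, re-run the scandisks step before continuing.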
Configuring Shared Storage Using Raw Devices
Log in as root on the new node and perform the following procedure:
1. Edit the /etc/sysconfig/rawdevices file and add the following lines for a Fibre Channel cluster:
/dev/raw/ASM1 /dev/emcpowerb1
/dev/raw/ASM2 /dev/emcpowerc1
2. Restart the raw devices service by typing:
service rawdevices restart
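The rawdevices file maps each raw device to its underlying block device, one "raw-device block-device" pair per line. As an illustrative sketch (working on a temporary copy rather than the live /etc/sysconfig/rawdevices, and using the example EMC PowerPath device names from above), you could add and validate the entries like this:

```shell
#!/bin/sh
# Sketch: append raw-device bindings for ASM to a scratch copy of
# /etc/sysconfig/rawdevices and check that each entry is well formed.
# The emcpowerb1/emcpowerc1 device names are examples from a Fibre
# Channel cluster; adjust them to match the existing nodes.
rawconf=$(mktemp)

cat >> "$rawconf" <<'EOF'
/dev/raw/ASM1 /dev/emcpowerb1
/dev/raw/ASM2 /dev/emcpowerc1
EOF

# Each line must contain exactly two fields:
# "<raw device> <block device>".
awk 'NF != 2 { print "bad entry on line " NR; exit 1 }' "$rawconf" \
  && echo "rawdevices entries look well formed"

# On the real node you would edit /etc/sysconfig/rawdevices itself
# and then run:
#   service rawdevices restart
```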
Configuring Shared Storage Using OCFS2
If you are using OCFS2 for the CRS, quorum, or database files, ensure that the new node can access the cluster file systems in the same way as the existing nodes.
1. Edit the /etc/fstab file on the new node and add the OCFS2 volume information exactly as it appears on the existing nodes. For example:
/dev/emcpowera1 /u01 ocfs2 _netdev,datavolume,nointr 0 0
/dev/emcpowerb1 /u02 ocfs2 _netdev,datavolume,nointr 0 0
2. Create the OCFS2 mount points on the new node as they exist on the existing nodes (for example, /u01, /u02, and /u03).
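Steps 1 and 2 can be sketched together: read the OCFS2 entries from an fstab-style file and create the corresponding mount points. The sketch below works on a temporary file and a scratch directory standing in for the root of the new node, so it does not touch the real /etc/fstab; the devices and paths mirror the examples above.

```shell
#!/bin/sh
# Sketch: create the mount points for every ocfs2 entry in an
# fstab-style file. $root stands in for / on the new node so the
# example is safe to run anywhere.
fstab=$(mktemp)
root=$(mktemp -d)

cat > "$fstab" <<'EOF'
/dev/emcpowera1 /u01 ocfs2 _netdev,datavolume,nointr 0 0
/dev/emcpowerb1 /u02 ocfs2 _netdev,datavolume,nointr 0 0
EOF

# fstab fields: device, mount point, fs type, options, dump, pass.
while read -r dev mnt fstype opts rest; do
    [ "$fstype" = "ocfs2" ] || continue
    mkdir -p "$root$mnt"            # e.g. /u01, /u02
    echo "created $root$mnt with options $opts"
done < "$fstab"
```

On the real node you would omit the $root prefix so the mount points are created directly under /, then mount the volumes to confirm access.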