Using Serviceguard Extension for RAC Version A.11.20 - (August 2011)

13. Start up the Oracle RAC instances on all nodes.
14. Activate automatic cluster startup.
NOTE: As you add new disks to the system, update the planning worksheets (described in
Appendix B: “Blank Planning Worksheets”) to record the exact configuration you are using.
Replacing Disks
The procedure for replacing a faulty disk mechanism depends on the type of disk configuration
you are using and on the type of Volume Manager software. For a description of replacement
procedures using CVM, refer to the chapter on “Administering Hot-Relocation” in the VERITAS
Volume Manager Administrator’s Guide. Additional information is found in the VERITAS Volume
Manager Troubleshooting Guide.
The following sections describe how to replace disks that are configured with LVM. Separate
descriptions are provided for replacing a disk in an array and replacing a disk in a high-availability
enclosure.
Replacing a Mechanism in a Disk Array Configured with LVM
With any HA disk array configured in RAID 1 or RAID 5, refer to the array’s documentation for
instructions on how to replace a faulty mechanism. After the replacement, the device itself
automatically rebuilds the missing data on the new disk. No LVM activity is needed. This process
is known as hot swapping the disk.
NOTE: If your LVM installation requires online replacement of disk mechanisms, the use of disk
arrays may be required, because software mirroring of JBODs with MirrorDisk/UX does not permit
hot swapping for disks that are activated in shared mode.
Replacing a Mechanism in an HA Enclosure Configured with Exclusive LVM
Non-Oracle data that is used by packages may be configured in volume groups that use exclusive
(one-node-at-a-time) activation. If you are using exclusive activation and software mirroring with
MirrorDisk/UX and the mirrored disks are mounted in a high-availability disk enclosure, you can
use the following steps to hot plug a disk mechanism.
1. Identify the physical volume name of the failed disk and the name of the volume group in
which it was configured. In the following examples, the volume group name is shown as
/dev/vg_sg01 and the physical volume name is shown as /dev/dsk/c2t3d0. Substitute the
volume group and physical volume names that are correct for your system.
2. Identify the names of any logical volumes that have extents defined on the failed physical
volume.
3. On the node where the volume group is currently activated, use the following command for
each logical volume that has extents on the failed physical volume:
# lvreduce -m 0 /dev/vg_sg01/lvolname /dev/dsk/c2t3d0
4. Remove the failed disk and insert a new one. The new disk will have the same HP-UX device
name as the old one.
5. On the node from which you issued the lvreduce command, issue the following command
to restore the volume group configuration data to the newly inserted disk:
# vgcfgrestore -n /dev/vg_sg01 /dev/dsk/c2t3d0
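The replacement sequence above can be sketched as a short shell script. This is a minimal, hedged sketch, not part of the product documentation: it reuses the example names /dev/vg_sg01 and /dev/dsk/c2t3d0 from the text, the logical volume names lvol1 and lvol2 are assumed placeholders for whatever lvdisplay reports on your system, and the script only echoes each command (a dry run) so the sequence can be reviewed before it is run for real on an HP-UX node.

```shell
# Dry-run sketch of the hot-plug replacement sequence (steps 1-5 above).
# VG and PV are the example names from the text; substitute your own.
VG=/dev/vg_sg01
PV=/dev/dsk/c2t3d0

# Echo instead of executing, so the sequence can be inspected first.
run() {
    echo "$@"
}

# Steps 2-3: for each logical volume that has extents on the failed
# physical volume, remove the mirror copies that lived on that disk.
# lvol1 and lvol2 are placeholders; list the names lvdisplay reports.
for LV in lvol1 lvol2; do
    run lvreduce -m 0 "$VG/$LV" "$PV"
done

# Step 4: physically replace the disk here; the new disk keeps the
# same HP-UX device name as the old one.

# Step 5: restore the LVM configuration data onto the new disk.
run vgcfgrestore -n "$VG" "$PV"
```

Because every command goes through run(), replacing `echo "$@"` with `"$@"` turns the reviewed dry run into the live sequence.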