Using Serviceguard Extension for RAC, 2nd Edition, February 2005 Update

Maintenance and Troubleshooting
Replacing Disks
1. Make a note of the physical volume name of the failed mechanism
(e.g., /dev/dsk/c2t3d0).
2. Deactivate the volume group on all nodes of the cluster:
# vgchange -a n vg_ops
3. Replace the bad disk mechanism with a good one.
4. From one node, initialize the volume group information on the good
mechanism using vgcfgrestore(1M), specifying the name of the
volume group and the name of the physical volume that is being
replaced:
# vgcfgrestore /dev/vg_ops /dev/dsk/c2t3d0
5. Activate the volume group on one node in exclusive mode:
# vgchange -a e vg_ops
Activation synchronizes the stale logical volume mirrors. This step can
be time-consuming, depending on hardware characteristics and the
amount of data.
6. Deactivate the volume group:
# vgchange -a n vg_ops
7. Activate the volume group on all the nodes in shared mode using
vgchange -a s:
# vgchange -a s vg_ops
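The steps above can be collected into a small helper that prints the whole
command sequence as a reviewable plan before you run anything by hand. This
is a sketch, not part of the Serviceguard documentation: the function name
replace_disk_plan is invented for illustration, and the volume group and
device names are the examples used in this procedure; substitute your own.

```shell
#!/bin/sh
# Sketch: emit the data-disk replacement sequence from the steps above
# as a printed plan, so it can be reviewed before execution.

replace_disk_plan() {
    vg=$1    # volume group, e.g. /dev/vg_ops
    pv=$2    # replaced physical volume, e.g. /dev/dsk/c2t3d0

    echo "vgchange -a n $vg"                    # step 2: deactivate on all nodes
    echo "# replace the failed disk mechanism"  # step 3: hardware swap
    echo "vgcfgrestore $vg $pv"                 # step 4: restore LVM config to the new disk
    echo "vgchange -a e $vg"                    # step 5: exclusive activation; mirrors resync
    echo "vgchange -a n $vg"                    # step 6: deactivate
    echo "vgchange -a s $vg"                    # step 7: shared activation on all nodes
}

replace_disk_plan /dev/vg_ops /dev/dsk/c2t3d0
```

Note that the vgchange -a n step must be run on every node of the cluster,
while vgcfgrestore and the exclusive activation are run from one node only.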
Replacing a Lock Disk
Replacing a failed lock disk mechanism is the same as replacing a data
disk. If you are using a dedicated lock disk (one with no user data on it),
then you need to issue only one LVM command:
# vgcfgrestore /dev/vg_lock /dev/dsk/c2t1d0
After doing this, wait at least an hour, then review the syslog file for a
message showing that the lock disk is healthy again.