Replacing a Lock Disk
You can replace an unusable lock disk while the cluster is running, provided you do not change the device special file (DSF) name.
IMPORTANT If you need to replace a disk under the HP-UX 11i v3 agile addressing scheme (see “About Device File Names (Device Special Files)” on page 112), you may need to use the io_redirect_dsf(1M) command to reassign the existing DSF to the new device, depending on whether the operation changes the WWID of the device. See the section Replacing a Bad Disk in the Logical Volume Management volume of the HP-UX System Administrator’s Guide, posted at http://docs.hp.com -> 11i v3 -> System Administration. See also the section on io_redirect_dsf in the white paper The Next Generation Mass Storage Stack under Network and Systems Management -> Storage Area Management on docs.hp.com.
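For example, a minimal sketch of reassigning the existing DSF to the replacement device, assuming the lock disk’s persistent DSF is /dev/disk/disk14 and the replacement device was discovered as /dev/disk/disk28 (both instance numbers are illustrative):

# Reassign the existing persistent DSF to the new device
# (disk14 and disk28 are hypothetical instance numbers)
io_redirect_dsf -d /dev/disk/disk14 -n /dev/disk/disk28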
If, for any reason, you are not able to use the existing DSF for the new device, you must halt the cluster and change the name of the DSF in the cluster configuration file; see “Updating the Cluster Lock Disk Configuration Offline” on page 362.
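A minimal sketch of that offline change, assuming the cluster configuration file is /etc/cmcluster/cmclconfig.ascii and the lock disk is defined by the FIRST_CLUSTER_LOCK_PV parameter (substitute your own file name and lock parameters):

# Halt the cluster on all nodes
cmhaltcl -f
# Edit the configuration file so that FIRST_CLUSTER_LOCK_PV names the
# new DSF, then verify, apply, and restart the cluster:
cmcheckconf -C /etc/cmcluster/cmclconfig.ascii
cmapplyconf -C /etc/cmcluster/cmclconfig.ascii
cmruncl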
CAUTION Before you start, make sure that all nodes have logged a message in syslog saying that the lock disk is corrupt or unusable.
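For example, a quick check you might run on each node (the exact message text varies by release, so the search pattern is only illustrative):

grep -i "lock disk" /var/adm/syslog/syslog.log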
Replace a failed LVM lock disk in the same way as you replace a data disk. If you are using a dedicated lock disk (one with no user data on it), you need only one LVM command, for example:
vgcfgrestore -n /dev/vg_lock /dev/dsk/c2t3d0
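If you want to confirm the saved configuration before restoring it to the new disk, vgcfgrestore can also list the contents of the backup; a sketch using the same example volume group:

# List the saved LVM configuration for the lock volume group
vgcfgrestore -l -n /dev/vg_lock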
Serviceguard checks the lock disk every 75 seconds. After using the vgcfgrestore command, watch the syslog file on an active cluster node; within 75 seconds you should see a message showing that the lock disk is healthy again.
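For example, you might watch the log while waiting for the next check (the log path is the HP-UX default; the wording of the recovery message varies by release):

tail -f /var/adm/syslog/syslog.log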