
7. Finally, use the lvsync command for each logical volume that has extents on the failed
physical volume. This synchronizes the extents of the new disk with the extents of
the other mirror.
lvsync /dev/vg_sg01/lvolname
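One way to identify the logical volumes that have extents on the failed physical volume is to list the extent mapping with pvdisplay; for example (the device file name here is illustrative):
pvdisplay -v /dev/dsk/c2t3d0 | more
The detailed output includes a section listing each logical volume that uses extents on that disk; run lvsync for each of those logical volumes.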
Replacing a Lock Disk
You can replace an unusable lock disk while the cluster is running. You can do this
without any cluster reconfiguration if you do not change the device file name (Device
Special File, or DSF); if you do need to change the DSF, you can do the necessary
reconfiguration while the cluster is running.
IMPORTANT: If you need to replace a disk under the HP-UX 11i v3 agile addressing
scheme, also used by cDSFs (see “About Device File Names (Device Special Files)”
(page 106) and “About Cluster-wide Device Special Files (cDSFs)” (page 135)), and you
use the same DSF, you may need to use the io_redirect_dsf(1M) command to
reassign the existing DSF to the new device (see the example following this note),
depending on whether the operation
changes the WWID of the device. See the section Replacing a Bad Disk in the Logical
Volume Management volume of the HP-UX System Administrator’s Guide, posted at
www.hp.com/go/hpux-core-docs. See also the section on io_redirect_dsf at
the same address.
If you do not use the existing DSF for the new device, you must change the name of
the DSF in the cluster configuration file and re-apply the configuration; see “Updating
the Cluster Lock Disk Configuration Online” (page 364). Do this after running
vgcfgrestore as described below.
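As an illustration only, reassigning an existing persistent DSF to the replacement device might look like the following; the device file names are hypothetical, and you should verify the exact options against the io_redirect_dsf(1M) manpage on your system:
io_redirect_dsf -d /dev/disk/disk14 -n /dev/disk/disk28
Here /dev/disk/disk14 stands for the existing DSF you want to keep, and /dev/disk/disk28 for the DSF assigned to the replacement device.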
CAUTION: Before you start, make sure that all nodes have logged a message in syslog
saying that the lock disk is corrupt or unusable.
Replace a failed LVM lock disk in the same way as you replace a data disk. If you are
using a dedicated lock disk (one with no user data on it), then you need to use only one
LVM command, for example:
vgcfgrestore -n /dev/vg_lock /dev/dsk/c2t3d0
Serviceguard checks the lock disk every 75 seconds. After using the vgcfgrestore
command, watch the syslog file on an active cluster node; within 75 seconds you should
see a message showing that the lock disk is healthy again.
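For example, on an active node you can watch the log as the check completes (the path shown is the default HP-UX syslog location):
tail -f /var/adm/syslog/syslog.log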