HP-UX System Administrator's Guide: Logical Volume Management (762803-001, March 2014)

4.7.5 Step 5: Removing a Bad Disk
You can elect to remove the failing disk from the system instead of replacing it if you are certain
that another valid copy of the data exists or the data can be moved to another disk.
4.7.5.1 Removing a Mirror Copy from a Disk
If you have a mirror copy of the data already, you can stop LVM from using the copy on the failing
disk by reducing the number of mirrors. To remove the mirror copy from a specific disk, use
lvreduce, and specify the disk from which to remove the mirror copy.
For example (if you have a single mirror copy):
# lvreduce -m 0 -A n /dev/vgname/lvname bad_disk_path
Or, if you have two mirror copies:
# lvreduce -m 1 -A n /dev/vgname/lvname bad_disk_path
The -A n option prevents the lvreduce command from performing an automatic
vgcfgbackup operation, which might hang while accessing a defective disk.
If you have only a single mirror copy and want to maintain redundancy, create a second
mirror of the data on a different, functional disk as soon as possible, before you run
lvreduce. Follow the mirroring guidelines described in “Step 1: Preparing for Disk
Recovery” (page 124).
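As a sketch of this sequence (the device paths and volume names are placeholders; substitute the paths for your configuration):

```shell
# Add a second mirror copy on a known-good disk first. The good disk
# must already belong to the volume group (e.g., added with vgextend).
# -m 2 sets the new total number of mirror copies.
lvextend -m 2 /dev/vgname/lvname good_disk_path

# After the new copy has synchronized, drop the copy on the failing
# disk. -A n skips the automatic vgcfgbackup, which might hang on a
# defective disk.
lvreduce -m 1 -A n /dev/vgname/lvname bad_disk_path
```

Because -A n suppresses the automatic configuration backup, run vgcfgbackup on the volume group manually once the failing disk is no longer part of the configuration.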
4.7.5.2 Removing a Mirror Copy from a Ghost Disk
You might encounter a situation where you must remove from the volume group a failed physical
volume, or a physical volume that is no longer connected to the system but is still recorded in the
LVM configuration file. Such a physical volume is sometimes called a ghost disk or phantom disk.
A ghost disk can appear if the disk failed before volume group activation, for example because
the system was rebooted after the failure.
A ghost disk is usually indicated by vgdisplay reporting more current physical volumes than
active ones. Additionally, LVM commands might complain about the missing physical volumes as
follows:
# vgdisplay vg01
vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c5t5d5":
The specified path does not correspond to physical volume attached
to this volume group
vgdisplay: Couldn't query the list of physical volumes.
--- Volume groups ---
VG Name /dev/vg01
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 3
Open LV 3
Max PV 16
Cur PV 2 (#No. of PVs configured in vg01)
Act PV 1 (#No. of PVs currently attached in the kernel)
Max PE per PV 4350
VGDA 2
PE Size (Mbytes) 8
Total PE 4341
Alloc PE 4340
Free PE 1
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
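To confirm which path is the ghost disk, you can query each physical volume path directly with pvdisplay; the missing disk fails the query. (The first device path below is hypothetical; the second is the failing path from the vgdisplay output above.)

```shell
# An attached physical volume: pvdisplay reports PV status,
# the owning volume group, and physical extent counts.
pvdisplay /dev/dsk/c5t5d4

# The ghost disk: pvdisplay cannot query the device and
# reports an error instead of PV information.
pvdisplay /dev/dsk/c5t5d5
```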
In these situations where the disk was not available at boot time, or the disk has failed before
volume group activation (pvdisplay failed), the lvreduce command fails with an error that it