HP-UX System Administrator's Guide: Logical Volume Management HP-UX 11i Version 3 Abstract This document describes how to configure, administer, and troubleshoot the Logical Volume Manager (LVM) product on the HP-UX 11i Version 3 platform. The HP-UX System Administrator's Guide is written for administrators of all skill levels who manage HP-UX systems, beginning with HP-UX 11i Version 3.
© Copyright 2011 Hewlett-Packard Development Company, L.P. Legal Notices Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Contents
1 Introduction...............................................................................8
    LVM Features.............................................................................8
    LVM Architecture.........................................................................9
    Physical versus Logical Extents.........................................................10
    ...
    Physical Volume Groups..................................................................30
    Snapshots and Performance...............................................................30
    Increasing Performance Through Disk Striping............................................31
    Determining Optimum Stripe Size
    ...
    Moving and Reconfiguring Your Disks.....................................................70
    Moving Disks Within a System............................................................71
    Moving Disks Between Systems............................................................72
    Moving Data to a Different Physical Volume
    ...
    Volume Group Activation Failures.......................................................111
    Quorum Problems with a Nonroot Volume Group............................................111
    Quorum Problems with a Root Volume Group...............................................112
    Version 2.x Volume Group Activation Failures...........................................112
    Root Volume Group Scanning
    ...
D Striped and Mirrored Logical Volumes....................................................157
    Summary of Hardware RAID Configuration.................................................157
    LVM Implementation of RAID Levels in HP-UX.............................................157
    LVM Striped and Mirrored Logical Volume Configuration..................................158
    Examples
1 Introduction This chapter addresses the following topics: • “LVM Features” (page 8) • “LVM Architecture” (page 9) • “Physical versus Logical Extents” (page 10) • “LVM Volume Group Versions” (page 11) • “LVM Device File Usage” (page 12) • “LVM Disk Layout” (page 16) • “LVM Limitations” (page 18) • “Shared LVM” (page 18) LVM Features Logical Volume Manager (LVM) is a storage management system that lets you allocate and manage disk space for file systems or raw data.
LVM Architecture An LVM system starts by initializing disks for LVM usage. An LVM disk is known as a physical volume (PV). A disk is marked as an LVM physical volume using either the HP System Management Homepage (HP SMH) or the pvcreate command. Physical volumes use the same device special files as traditional HP-UX disk devices. LVM divides each physical volume into addressable units called physical extents (PEs).
Figure 1 Disk Space Partitioned Into Logical Volumes Physical versus Logical Extents When LVM allocates disk space to a logical volume, it automatically creates a mapping of the logical extents to physical extents. This mapping depends on the policy chosen when creating the logical volume. Logical extents are allocated sequentially, starting at zero, for each logical volume. LVM uses this mapping to access the data, regardless of where it physically resides.
Figure 2 Physical Extents and Logical Extents As shown in Figure 2, the contents of the first logical volume are contained on all three physical volumes in the volume group. Because the second logical volume is mirrored, each logical extent is mapped to more than one physical extent. In this case, each logical extent has two physical extents containing the data, one on the second disk and one on the third disk within the volume group.
Versions 2.0, 2.1, and 2.2 enable the configuration of larger volume groups, logical volumes, physical volumes, and other parameters. Version 2.1 is identical to Version 2.0, except that it allows a greater number of volume groups, physical volumes, and logical volumes. Version 2.x volume groups are managed exactly like Version 1.0 volume groups, with the following exceptions: • Version 2.x volume groups have simpler options to the vgcreate command. When creating a Version 2.
Legacy device files were the only type of mass storage device files in releases prior to HP-UX 11i Version 3. They have hardware path information such as SCSI bus, target, and LUN encoded in the device file name and minor number. For example, the legacy device file /dev/dsk/c3t2d0 represents the disk at card instance 3, target address 2, and LUN address 0. Persistent device files are not tied to the physical hardware path to a disk, but instead map to the disk's unique worldwide identifier (WWID).
Table 1 Physical Volume Naming Conventions

Device File Name       Type of Device
/dev/disk/diskn        Persistent block device file
/dev/disk/diskn_p2     Persistent block device file, partition 2
/dev/rdisk/diskn       Persistent character device file
/dev/rdisk/diskn_p2    Persistent character device file, partition 2
/dev/dsk/cntndn        Legacy block device file
/dev/dsk/cntndns2      Legacy block device file, partition 2
/dev/rdsk/cntndn       Legacy character device file
/dev/rdsk/cntndns2     Legacy character device file, partition 2
When assigned by default, these names take the form /dev/vgnn/lvolN (the block device file form) and /dev/vgnn/rlvolN (the character device file form). The number N starts at 1 and increments in the order that logical volumes are created within each volume group. When LVM creates a logical volume, it creates both block and character device files. LVM then places the device files for a logical volume in the appropriate volume group directory.
Version 1.0 Device Number Format
Table 2 lists the format of the device file number for Version 1.0 volume groups.

Table 2 Version 1.0 Device Number Format

Major Number    Volume Group Number    Reserved    Logical Volume Number
64              0–0xff                 0           0–0xff (0=group file)

For Version 1.0 volume groups, the major number for LVM device files is 64. The volume group number is encoded into the top eight bits of the minor number, and the logical volume number is encoded into the low eight bits.
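For example, under this encoding the block device file for logical volume 3 in volume group number 1 carries the minor number 0x010003. You can confirm the numbers with ll; the volume group and listing below are illustrative:

# ll /dev/vg01/lvol3
brw-r-----   1 root   sys   64 0x010003 Jan  1 12:00 /dev/vg01/lvol3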
Swap: lvol2     on:  /dev/dsk/c4t0d0
                     /dev/dsk/c3t0d0
Dump: lvol2     on:  /dev/dsk/c4t0d0
                     /dev/dsk/c3t0d0
The physical volumes designated "Boot Disk" are bootable, having been initialized with mkboot and pvcreate -B. Multiple lines for lvol1 and lvol2 indicate that the root and swap logical volumes are mirrored. Logical Interface Format Area LVM boot disks contain a Logical Interface Format (LIF) area, which stores a LABEL file.
LVM Limitations LVM is a sophisticated subsystem. It requires time to learn, it requires maintenance, and in rare cases, things can go wrong. HP recommends using logical volumes as the preferred method for managing disks. Use LVM on file and application servers. On servers that have only a single disk and are used only to store the operating system and for swap, a “whole-disk” approach is simpler and easier to manage. LVM is not necessary on such systems.
Volume groups of Version 2.2 and higher with snapshot logical volumes configured cannot be activated in shared mode. Also, snapshots cannot be created of logical volumes belonging to shared volume groups. Synchronization of mirrors is slower in shared volume groups.
2 Configuring LVM By default, the LVM commands are already installed on your system. This chapter discusses issues to consider when setting up your logical volumes. It addresses the following topics: • “Planning Your LVM Configuration” (page 20) • “Setting Up Different Types of Logical Volumes” (page 20) • “Planning for Availability” (page 24) • “Planning for Performance” (page 29) • “Planning for Recovery” (page 34) Planning Your LVM Configuration Using logical volumes requires some planning.
and /etc/lvmtab_p, which means that data of a logical volume might not be evenly distributed over all the physical volumes within your volume group. As a result, when I/O access to the logical volumes occurs, one or more disks within the volume group might be heavily used, while the others might be lightly used, or not used at all. This arrangement does not provide optimum I/O performance.
TIP: Because increasing the size of a file system is usually easier than reducing its size, be conservative in estimating how large to create a file system. An exception is the root file system. As a contiguous logical volume, the root file system is difficult to extend.
When using LVM, set up secondary swap areas within logical volumes that are on different disks using lvextend. If you have only one disk and must increase swap space, try to move the primary swap area to a larger region. • Similar-sized device swap areas work best. Device swap areas must have similar sizes for best performance. Otherwise, when all space in the smaller device swap area is used, only the larger swap area is available, making interleaving impossible.
Planning for Availability This section describes LVM features that can improve the availability and redundancy of your data. It addresses the following topics: • “Increasing Data Availability Through Mirroring” (page 24) • “Increasing Disk Redundancy Through Disk Sparing” (page 27) • “Increasing Hardware Path Redundancy Through Multipathing” (page 28) Increasing Data Availability Through Mirroring NOTE: Mirroring requires an optional product, HP MirrorDisk/UX.
on the same physical volume. The -s y and -s n options to the lvcreate or lvchange commands set strict or nonstrict allocation. CAUTION: Using nonstrict allocation can reduce the redundancy created by LVM mirroring because a logical extent can be mirrored to different physical extents on the same disk. Therefore, the failure of this one disk makes both copies of the data unavailable.
database data or file systems with few or infrequently written large files (greater than 256K) must not use the MWC when runtime performance is more important than crash recovery time. The -M option to the lvcreate or lvchange command controls the MWC. Synchronization Using Mirror Consistency Recovery When the Mirror Consistency Recovery is enabled, LVM does not impact runtime I/O performance.
TIP: The vgchange, lvmerge, and lvextend commands support the -s option to suppress the automatic synchronization of stale extents. If you are performing multiple mirror-related tasks, you can suppress the extent synchronization until you have finished all the tasks, then run lvsync with -T to synchronize all the mirrored volumes in parallel. For example, you can use vgchange -s with lvsync -T to reduce the activation time for volume groups with mirrored logical volumes.
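For example, assuming a volume group named vg01 whose logical volumes are mirrored, the following sequence activates the volume group without synchronizing stale extents, then synchronizes all of its logical volumes in parallel:

# vgchange -a y -s /dev/vg01
# lvsync -T /dev/vg01/lvol*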
Increasing Hardware Path Redundancy Through Multipathing Your hardware might provide the capability for dual cabling (dual controllers) to the same physical volume. If so, LVM can be configured with multiple paths to the same physical volume. If the primary link fails, an automatic switch to an alternate link occurs. Using this type of multipathing (also called pvlinks) increases availability. NOTE: As of HP-UX 11i Version 3, the mass storage stack supports native multipathing without using LVM pvlinks.
Planning for Performance This section describes strategies to obtain the best possible performance using LVM. It addresses the following topics: • “General Performance Factors” (page 29) • “Internal Performance Factors” (page 29) • “Increasing Performance Through Disk Striping” (page 31) • “Increasing Performance Through I/O Channel Separation” (page 33) General Performance Factors The following factors affect overall system performance, but not necessarily the performance of LVM.
On each write request to a mirrored logical volume that uses MWC, LVM potentially introduces one extra serial disk write to maintain the MWC. Whether this condition occurs depends on the degree to which accesses are random. The more random the accesses, the higher the probability of missing the MWC. Getting an MWC entry can involve waiting for one to be available.
The larger the unshare unit, the greater the latency when data must be unshared, but the less metadata space is needed on disk to track the sharing relationship between snapshots and the original logical volume. If the application performs large and occasional writes, a larger unshare unit is recommended. If the writes are small and occasional, a smaller unshare unit is recommended.
Figure 4 Interleaving Disks Among Buses • Increasing the number of disks might not improve performance because the maximum efficiency that can be achieved by combining disks in a striped logical volume is limited by the maximum throughput of the file system itself and by the buses to which the disks are attached. • Disk striping is highly beneficial for applications with few users and large, sequential transfers.
You might need to experiment to determine the optimum stripe size for your particular situation. To change the stripe size, re-create the logical volume. Interactions Between Mirroring and Striping Mirroring a striped logical volume improves the read I/O performance in the same way that it does for a nonstriped logical volume. Simultaneous read I/O requests targeting a single logical extent are served by two or three different physical volumes instead of one.
Planning for Recovery Flexibility in configuration, one of the major benefits of LVM, can also be a source of problems in recovery. The following are guidelines to help create a configuration that minimizes recovery time: • Keep the number of disks in the root volume group to a minimum; HP recommends using three disks, even if the root volume group is mirrored.
Preparing for LVM System Recovery To ensure that the system data and configuration are recoverable in the event of a system failure, follow these steps: 1. Load any patches for LVM. 2. Use Ignite-UX to create a recovery image of your root volume group. Although Ignite-UX is not intended to be used to back up all system data, you can use it with other data recovery applications to create a method of total system recovery. 3. Perform regular backups of the other important data on your system.
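As an illustration of step 2, a recovery image can be written to a local tape drive with the Ignite-UX make_tape_recovery command; the tape device file here is an assumption:

# make_tape_recovery -a /dev/rmt/0mn -x inc_entire=vg00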
NOTE: For volume group Version 2.2 and higher, for volume groups that have snapshots on which data unsharing is occurring, the LVM configuration backup file might not always be in sync with the LVM metadata on disk. LVM ensures that the configuration for volume group Version 2.2 and higher is the latest by automatically backing it up during deactivation, unless backup has been disabled by the -A n option.
Example Script for LVM Configuration Recording The following example script captures the current LVM and I/O configurations. If they differ from the previously captured configuration, the script prints the updated configuration files and notifies the system administrator.
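A minimal sketch of such a script follows, assuming the captured files are kept under /var/adm/lvmcfg and that root receives the notification (both are assumptions, not part of the original script):

#!/usr/bin/sh
# Sketch: record LVM and I/O configuration; report changes to the administrator.
# The directory and the mail recipient are assumptions.
DIR=/var/adm/lvmcfg
mkdir -p $DIR

vgdisplay -v > $DIR/lvm.new 2>&1    # current LVM configuration
ioscan -fn  > $DIR/io.new  2>&1     # current I/O configuration

for f in lvm io
do
    if [ -f $DIR/$f.old ]
    then
        # If the configuration changed, print the differences and notify root.
        cmp -s $DIR/$f.old $DIR/$f.new ||
            diff $DIR/$f.old $DIR/$f.new | mailx -s "$f configuration changed" root
    fi
    mv $DIR/$f.new $DIR/$f.old
done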
3 Administering LVM This section contains information on the day-to-day operation of LVM.
Management Commands” (page 39), and “Logical Volume Management Commands” (page 40). The following tables provide an overview of which commands perform a given task. For more information, see the LVM individual manpages.
Table 5 Volume Group Management Commands (continued) Task Command Migrating a volume group to a different volume group version vgversion Migrating a volume group to new disks vgmove 1 To convert the cDSFs of a volume group in a particular node back to their corresponding persistent DSFs, use the vgscan -f command. For example: # vgscan -f vgtest *** LVMTAB has been updated successfully. Repeat the above command on all the nodes in the cluster where this shared volume group is present (i.e.
NOTE: For volume group Version 2.2 or higher, when snapshots are involved, additional fields are displayed by these commands. See the individual command manpages for a full description of the fields displayed. The “Creating and Administering Snapshot Logical Volumes” (page 103) section provides a summary of additional data displayed for snapshots. Information on Volume Groups Use the vgdisplay command to show information about volume groups.
# pvdisplay -v /dev/disk/disk47
--- Physical volumes ---
PV Name                     /dev/disk/disk47
VG Name                     /dev/vg00
PV Status                   available
Allocatable                 yes
VGDA                        2
Cur LV                      9
PE Size (Mbytes)            4
Total PE                    1023
Free PE                     494
Allocated PE                529
Stale PE                    0
IO Timeout (Seconds)        default
Autoswitch                  On
Proactive Polling           On

--- Distribution of physical volume ---
LV Name             LE of LV    PE for LV
/dev/vg00/lvol1     25          25
/dev/vg00/lvol2     25          25
/dev/vg00/lvol3     50          50

--- Physical extents ---
PE      Status    LV
0000    current   /dev/vg00/lvol1
0001    current   /dev/vg00/lvol1
0002
Common LVM Tasks The section addresses the following topics: • “Initializing a Disk for LVM Use” (page 43) • “Creating a Volume Group” (page 44) • “Migrating a Volume Group to a Different Version: vgversion” (page 46) • “Adding a Disk to a Volume Group” (page 51) • “Removing a Disk from a Volume Group” (page 51) • “Creating a Logical Volume” (page 52) • “Extending a Logical Volume” (page 53) • “Reducing a Logical Volume” (page 55) • “Adding a Mirror to a Logical Volume” (page 55) • “Remov
6. Initialize the disk as a physical volume using the pvcreate command. For example: # pvcreate /dev/rdisk/disk3 Use the character device file for the disk. If you are initializing a disk for use as a boot device, add the -B option to pvcreate to reserve an area on the disk for a LIF volume and boot utilities. If you are creating a boot disk on an HP Integrity server, make sure the device file specifies the HP-UX partition number (2). For example: # pvcreate -B /dev/rdisk/disk3_p2 NOTE: Version 2.
Use the block device file to include each disk in your volume group. You can assign all the physical volumes to the volume group with one command, or create the volume group with a single physical volume. No physical volume can already be part of an existing volume group. You can set volume group attributes using the following options: -V 1.0 Version 1.
specified, it cannot be changed. The default unshare unit size is 1024 KB if the -U option is not used. Below is an example of vgcreate with the -U option: # vgcreate -V 2.2 -S 4t -s 8 -U 2048 /dev/vg01 /dev/disk/disk20 Migrating a Volume Group to a Different Version: vgversion Beginning with the HP-UX 11i v3 March 2009 Update, LVM offers the new command vgversion, which enables you to migrate the current version of an existing volume group to any other version, except for migrating to Version 1.0.
Command Syntax The vgversion syntax is:
vgversion [-r] [-v] -V vg_version_new [-U unshare_unit] vg_name
where
-r is Review mode. This allows you to review the operation before performing the actual volume group version migration.
-v is Verbose mode.
-U unshare_unit sets the unit at which data will be unshared between a logical volume and its snapshots, in the new volume group. This is only applicable for migration to volume group Version 2.2 or higher.
Version 2.1 requires more metadata. Thus, it is possible that there is not enough space in the LUN for the increase in metadata. In this example, vgversion -r should display the following:
# vgversion -V 2.1 -r -v vg01
Performing "vgchange -a r -l -p -s vg01" to collect data
Activated volume group
Volume group "vg01" has been successfully activated.
The space required for Volume Group version 2.1 metadata on Physical Volume /dev/disk/disk12 is 8448 KB, but available free space is 1024 KB.
relocation. The bad block relocation policy of all logical volumes will be set to NONE. Volume Group version can be successfully changed to 2.1 Review complete. Volume group not modified 3. After messages from the review indicate a successful migration, you can begin the actual migration: a. Unlike in Review mode, the target volume group must meet certain conditions at execution time, including being deactivated.
CAUTION: The recovery script should be run only in cases where the migration unexpectedly fails, such as an interruption during migration execution. • The recovery script should not be used to “undo” a successful migration. For a successful vgversion migration, you should use only a subsequent vgversion execution (and not the recovery script) to reach the newly desired volume group version. Note that a migration to 1.0 is not supported, so no return path is available once a migration from Version 1.0 has been performed.
NOTE: Once the recovery is complete using the restore script, an immediate vgversion operation in review mode will fail. You need to deactivate the volume group and activate it again before running vgversion in review mode. This reset of the volume group is not needed for a vgversion operation not in review mode. Adding a Disk to a Volume Group Often, as new disks are added to a system, they must be added to an existing volume group rather than creating a whole new volume group.
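For example, assuming the new disk is /dev/disk/disk5 and the volume group is /dev/vg01:

# pvcreate /dev/rdisk/disk5
# vgextend /dev/vg01 /dev/disk/disk5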
Check that the number of free physical extents (Free PE) matches the total number of physical extents (Total PE). If they are not the same, do one of the following tasks:
• Move the physical extents onto another physical volume in the volume group. See “Moving Data to a Different Physical Volume” (page 73).
• Remove the logical volumes from the disk, as described in “Removing a Logical Volume” (page 57).
NOTE: When you stripe across multiple disks, the striped volume size cannot exceed the capacity of the smallest disk multiplied by the number of disks used in the striping. Creating a Mirrored Logical Volume To create a mirrored logical volume, use lvcreate with the -m option to select the number of mirror copies.
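A hedged example, assuming a 100 MB logical volume with one mirror copy in volume group vg01 (all names and sizes here are illustrative):

# lvcreate -L 100 -m 1 -n datalv /dev/vg01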
Act PV                      1
Max PE per PV               2000
VGDA                        2
PE Size (Mbytes)            4
Total PE                    249
Alloc PE                    170
Free PE                     79
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 1082g
VG Max Extents              69248
The Free PE entry indicates the number of 4 MB extents available, in this case, 79 (316 MB).
3. Extend the logical volume. For example:
# lvextend -L 332 /dev/vg00/lvol7
This increases the size of this volume to 332 MB.
Reducing a Logical Volume CAUTION: Before you reduce a logical volume, you must notify the users of that logical volume. For example, before reducing a logical volume that contains a file system, back up the file system. Even if the file system currently occupies less space than the new (reduced) size of the logical volume, you will almost certainly lose data when you reduce the logical volume.
3. Use the lvextend command with the -m option to add the number of additional copies you want. For example:
# lvextend -m 1 /dev/vg00/lvol1
This adds a single mirror copy of the given logical volume. To force the mirror copy onto a specific physical volume, add it at the end of the command line.
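For example, assuming the mirror copy should reside on /dev/disk/disk3:

# lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk3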
3. Update all references to the old name in any other files on the system. These include /etc/fstab for mounted file systems or swap devices and existing mapfiles from a vgexport command. Removing a Logical Volume CAUTION: Removing a logical volume makes its contents unavailable and likely to be overwritten. In particular, any file system contained in the logical volume is destroyed. To remove a logical volume, follow these steps: 1.
When vgexport completes, all information about the volume group has been removed from the system. The disks can now be moved to a different system, and the volume group can be imported there. Importing a Volume Group To import a volume group, follow these steps: 1. Connect the disks to the system. 2.
For version 2.x volume groups, you can use vgmodify to do the following:
• Detect and handle physical volume size changes.
• Modify the maximum volume group size.
• Handle physical volume LUN expansion. See “Modifying Physical Volume Characteristics” (page 77) for more details.
• Prepare a physical volume for a LUN contraction. See “Modifying Physical Volume Characteristics” (page 77) for more details.
New configuration requires "max_pes" are increased from 1016 to 6652 The current and new Volume Group parameters differ. An update to the Volume Group IS required New Volume Group settings: Max LV Max PV Max PE per PV PE Size (Mbytes) VGRA Size (Kbytes) Review complete.
Volume Group configuration for /dev/vg32 has been saved in /etc/lvmconf/vg32.conf # pvmove /dev/disk/disk6:0 /dev/disk/disk6 Transferring logical extents of logical volume "/dev/vg32/lvol1"... Physical volume "/dev/disk/disk6" has been successfully moved. Volume Group configuration for /dev/vg32 has been saved in /etc/lvmconf/vg32.conf 4.
# vgdisplay vg32
--- Volume groups ---
VG Name                     /dev/vg32
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      0
Open LV                     0
Max PV                      255
Cur PV                      2
Act PV                      2
Max PE per PV               15868
VGDA                        4
PE Size (Mbytes)            32
Total PE                    1084
Alloc PE                    0
Free PE                     1084
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
vgmodify for a Version 2.x Volume Group If the maximum volume group size (chosen when the Version 2.
2. If review mode indicates that the maximum VG size can be increased, perform the actual reprovisioning reconfiguration. This operation reconfigures every PV in the VG to the new (larger) maximum VG size. This operation also automatically adds new extents to PVs where the new extents were previously not added because the volume group was at its maximum number of extents.
# vgmodify -a -E -S 64t vg1
# vgdisplay -v vg1
--- Volume groups ---
VG Name                     /dev/vg1
...
VG Version                  2.1
VG Max Size                 500 GB
VG Max Extents              64000
...
2. Run diskinfo on the physical volumes to display their size.
# diskinfo /dev/rdisk/disk46
# diskinfo /dev/rdisk/disk47
3. Run online vgmodify in review mode to verify that all physical volumes can be reconfigured to the new (larger) maximum VG size of 8 TB.
5. Note that the number of extents for /dev/disk/disk47 increased from 25602 to 38396 after the maximum VG size was increased from 500 GB to 8 TB, as shown by the vgdisplay command below.
# vgdisplay -v vg1
--- Volume groups ---
VG Name                     /dev/vg1
...
PE Size (Mbytes)
Total PE
Alloc PE
Free PE
...
NOTE: Individual physical volumes or logical volumes cannot be quiesced using this feature. To temporarily quiesce a physical volume to disable or replace it, see “Disabling a Path to a Physical Volume” (page 87). To quiesce a logical volume, quiesce or deactivate the volume group. To provide a stable image of a logical volume without deactivating the volume group, mirror the logical volume, then split off one of the mirrors, as described in “Backing Up a Mirrored Logical Volume” (page 68).
# rm /etc/lvmconf/vg01.conf
Note: if your volume group is Version 2.x and does not have any bootable physical volumes, and if you have configured a new path for the configuration file using the LVMP_CONF_PATH_NON_BOOT variable in the /etc/lvmrc file, you need to remove the configuration file from the new path. 9. Update all references to the old name in any other files on the system. These include /etc/fstab for mounted file systems or swap devices, and existing mapfiles from a vgexport command.
# vgreduce -f vgold # vgreduce -f vgnew 11. Enable quorum checks for the old volume group as follows: # vgchange -a y -q y /dev/vgold On completion, the original volume group contains three logical volumes (lvol1, lvol2, and lvol3) with physical volumes /dev/disk/disk0 and /dev/disk/disk1. The new volume group vgnew contains three logical volumes (lvol4, lvol5, and lvol6) across physical volumes /dev/disk/disk2, /dev/disk/disk3, /dev/disk/disk4, and /dev/disk/disk5.
1. Split the logical volume /dev/vg00/lvol1 into two separate logical volumes as follows:
# lvsplit /dev/vg00/lvol1
This creates the new logical volume /dev/vg00/lvol1b. The original logical volume /dev/vg00/lvol1 remains online.
2. Perform a file system consistency check on the logical volume to be backed up as follows:
# fsck /dev/vg00/lvol1b
3. Mount the file system as follows:
# mkdir /backup_dir
# mount /dev/vg00/lvol1b /backup_dir
4. Perform the backup using the utility of your choice.
By default, vgcfgbackup saves the configuration of a volume group to the file volume_group_name.conf in the default directory /etc/lvmconf/. You can override this default directory setting for volume group Version 2.x by configuring a new path in the LVMP_CONF_PATH_NON_BOOT variable in the /etc/lvmrc file.
• “Mirroring the Boot Disk on HP 9000 Servers” (page 90) • “Mirroring the Boot Disk on HP Integrity Servers” (page 92) You might need to do the following tasks: • Move the disks in a volume group to different hardware locations on a system. • Move entire volume groups of disks from one system to another. CAUTION: Moving a disk that is part of your root volume group is not recommended. For more information, see Configuring HP-UX for Peripherals.
5. 6. Physically move your disks to their desired new locations. To view the new locations, enter the following command: # vgscan -v 7. If you are using an HP-UX release before March 2008, or if you want to retain the minor number of the volume group device file, create it using the procedure in “Creating the Volume Group Device File” (page 44).
6. 7. If you are using an HP-UX release before March 2008, create the volume group device file using the procedure in “Creating the Volume Group Device File” (page 44). To get device file information about the disks, run the ioscan command: # ioscan -funN -C disk 8. To preview the import operation, run the vgimport command with the -p option: # vgimport -p -N -v -s -m /tmp/vg_planning.map /dev/vg_planning 9.
5. To reduce the source disk from the volume group, enter:
# vgreduce /dev/vg00 /dev/disk/disk1
6. To shut down and reboot from the new root disk in maintenance mode, enter:
ISL> hpux -lm (;0)/stand/vmunix
7. In maintenance mode, to update the BDRA and the LABEL file, enter:
# vgchange -a y /dev/vg00
# lvlnboot -b /dev/vg00/lvol1
# lvlnboot -s /dev/vg00/lvol2
# lvlnboot -r /dev/vg00/lvol3
# lvlnboot -Rv
# vgchange -a n /dev/vg00
8. Reboot the system normally.
NOTE: The pvmove command is not an atomic operation; it moves data extent by extent. The following might happen upon abnormal pvmove termination by a system crash or kill -9: For Version 1.0 volume groups prior to the September 2009 Update, the volume group can be left in an inconsistent configuration showing an additional pseudomirror copy for the extents being moved.
Example Consider a volume group having the configuration below: • Three physical volumes each of size 1245 extents: /dev/disk/disk10, /dev/disk/disk11, and /dev/disk/disk12. • Three logical volumes all residing on same disk, /dev/disk/disk10: /dev/vg_01/lvol1 (Size = 200 extents), /dev/vg_01/lvol2 (Size = 300 extents), /dev/vg_01/lvol3 (Size = 700 extents).
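One way to spread this load, sketched below with illustrative destination choices, is to move individual logical volumes off the crowded disk with pvmove:

# pvmove -n /dev/vg_01/lvol2 /dev/disk/disk10 /dev/disk/disk11
# pvmove -n /dev/vg_01/lvol3 /dev/disk/disk10 /dev/disk/disk12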
1. Use the pvcreate command to initialize the disk as an LVM disk. NOTE: Do not use the -B option because spare physical volumes cannot contain boot information. # pvcreate /dev/rdisk/disk3 2. Ensure the volume group has been activated, as follows: # vgchange -a y /dev/vg01 3. Use the vgextend command with -z y to designate one or more physical volumes as spare physical volumes within the volume group.
Handling Size Increases From the LUN Side: Disk arrays typically allow a LUN to be resized. If the volume group is activated during size increase, this is known as Dynamic LUN Expansion (DLE). If you increase the size of a LUN, follow these steps to incorporate the additional space into the volume group: 1. Increase the LUN size using the instructions for the array. 2. Verify that the LUN is expanded by running the diskinfo command. From the LVM Side (for Version 1.
3. Run vgmodify in review mode (with the -r option). If no PVs are specified on the command line, vgmodify checks all PVs in the volume group. Optionally, you can specify the PVs you want to check on the command line after the VG name.
4. Perform the actual DLE reconfiguration (run vgmodify without the -r review option). If no PVs are specified, vgmodify attempts reconfiguration on all PVs in the volume group (if it detects them as having been expanded). Optionally, you can list the specific PVs you want to reconfigure on the command line after the VG name.
28      3836    122753
30      3580    114561
32      3324    106369
35      3068     98177
38      2812     89985
...
255      252      8065
The table shows that without renumbering physical extents, a max_pv of 35 or lower permits a max_pe sufficient to accommodate the increased physical volume size.
# vgmodify -v -t -n vg32
Volume Group configuration for /dev/vg32 has been saved in /etc/lvmconf/vg32.conf
6. Commit the new values as follows:
# vgmodify -p 10 -e 10748 vg32
Current Volume Group settings:
Max LV                      255
Max PV                      16
Max PE per PV               1016
PE Size (Mbytes)            32
VGRA Size (Kbytes)          176
The current and new Volume Group parameters differ.
1. Run online vgmodify in review mode to verify that the physical volumes require reconfiguration for the DLE and to preview the number of new extents to be added: # vgmodify -r -a -E vg1 Physical volume "/dev/disk/disk46" requires reconfiguration for expansion. Current number of extents: 12790 Number of extents after reconfiguration: 25590 Physical volume "/dev/disk/disk46" was not changed. Physical volume "/dev/disk/disk47" requires reconfiguration for expansion.
3. Verify that the physical volumes were reconfigured and that there are new extents available with the vgdisplay -v command:
# vgdisplay -v vg1
--- Volume groups ---
VG Name                     /dev/vg1
...
PE Size (Mbytes)            8
Total PE                    51180
Alloc PE                    25580
Free PE                     25600
...
VG Version                  2.
VG Max Size
VG Max Extents
...
--- Logical volumes ---
LV Name
LV Status
LV Size (Mbytes)
Current LE
Allocated PE
Used PV
--- Physical volumes ---
PV Name
PV Status
Total PE
Free PE
...
PV Name
PV Status
Total PE
Free PE
Handling Size Decreases
CAUTION: A similar procedure can also be used when the size of a physical volume is decreased. However, there are limitations:
• Sequence: The sequence must be reversed to avoid data corruption.
For an increase in size, the sequence is:
1. Increase the LUN size from the array side.
2. Increase the volume group size from the LVM side.
For a decrease in size, the sequence is:
1. Decrease the volume group size from the LVM side.
2. Decrease the LUN size from the array side.
information. If a physical volume was accidentally initialized as bootable, you can convert the disk to a nonbootable disk and reclaim LVM metadata space. CAUTION: The boot volume group requires at least one bootable physical volume. Do not convert all of the physical volumes in the boot volume group to nonbootable, or your system will not boot. To change a disk type from bootable to nonbootable, follow these steps: 1. Use vgcfgrestore to determine if the volume group contains any bootable disks. 2.
1        65535    2097120
2        45820    1466240
...
255        252       8064
If you change the disk type, the VGRA space available increases from 768 KB to 2784 KB (if physical extents are not renumbered) or 32768 KB (if physical extents are renumbered). Changing the disk type also permits a larger range of max_pv and max_pe. For example, if max_pv is 255, the bootable disk can only accommodate a disk size of 8064 MB, but after conversion to nonbootable, it can accommodate a disk size of 40834 MB. 3.
"/etc/lvmconf/vg01.conf.old" Volume group "vg01" has been successfully changed. 6. Activate the volume group and verify the changes as follows: # vgchange -a y vg01 Activated volume group Volume group "vg01" has been successfully changed. # vgcfgbackup vg01 Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf # vgcfgrestore -l -v -n vg01 Volume Group Configuration information in "/etc/lvmconf/vg01.
Detaching a link does not disable sparing. That is, if all links to a physical volume are detached and a suitable spare physical volume is available in the volume group, LVM uses it to reconstruct the detached disk. For more information on sparing, see “Increasing Disk Redundancy Through Disk Sparing” (page 27). You can view the LVM status of all links to a physical volume using vgdisplay with the -v option.
1. Create a bootable physical volume. a. On an HP Integrity server, partition the disk using the idisk command and a partition description file, then run insf as described in “Mirroring the Boot Disk on HP Integrity Servers” (page 92). b. Run pvcreate with the -B option. On an HP Integrity server, use the device file denoting the HP-UX partition: # pvcreate -B /dev/rdisk/disk6_p2 On an HP 9000 server, use the device file for the entire disk: # pvcreate -B /dev/rdisk/disk6 2.
        /dev/disk/disk6 -- Boot Disk
Boot: bootlv     on:  /dev/disk/disk6
Root: rootlv     on:  /dev/disk/disk6
Swap: swaplv     on:  /dev/disk/disk6
Dump: swaplv     on:  /dev/disk/disk6, 0
15. Once the boot and root logical volumes are created, create file systems for them. For example:
# mkfs -F hfs /dev/vgroot/rbootlv
# mkfs -F vxfs /dev/vgroot/rrootlv
NOTE: On HP Integrity servers, the boot file system can be VxFS.
2. Create a bootable physical volume as follows:
# pvcreate -B /dev/rdisk/disk4
3. Add the physical volume to your existing root volume group as follows:
# vgextend /dev/vg00 /dev/disk/disk4
4. Place boot utilities in the boot area as follows:
# mkboot /dev/rdisk/disk4
5. Add an autoboot file to the disk boot area as follows:
# mkboot -a "hpux" /dev/rdisk/disk4
NOTE: If you expect to boot from this disk only when you lose quorum, you can use the alternate string hpux -lq to disable quorum checking.
TIP: To shorten the time required to synchronize the mirror copies, use the lvextend and lvsync command options introduced in the September 2007 release of HP-UX 11i Version 3. These options enable you to resynchronize logical volumes in parallel rather than serially. For example:
# lvextend -m 1 -s /dev/vg00/lvol1 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol2 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol3 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol4 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol5 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol6 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol7 /dev/disk/disk4
# lvextend -m 1 -s /dev/vg00/lvol8 /dev/disk/disk4
# lvsync -T /dev/vg00/lvol*
8.
For this example, the disk to be added is at hardware path 0/1/1/0.0x1.0x0, with device special files named /dev/disk/disk2 and /dev/rdisk/disk2. Follow these steps: 1. Partition the disk using the idisk command and a partition description file. a. Create a partition description file.
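A typical three-partition description file and the idisk call to write it might look like the following; the sizes, the file name /tmp/idf, and the use of this exact layout are illustrative:

# cat /tmp/idf
3
EFI 500MB
HPUX 100%
HPSP 400MB
# idisk -wf /tmp/idf /dev/rdisk/disk2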
00271 current   /dev/vg00/lvol7   00000
00408 current   /dev/vg00/lvol8   00000
8. Mirror each logical volume in vg00 (the root volume group) onto the specified physical volume. For example:
# lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk2_p2
The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait ....
# lvextend -m 1 /dev/vg00/lvol2 /dev/disk/disk2_p2
The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait ....
12. Add a line to /stand/bootconf for the new boot disk using vi or another text editor as follows:
# vi /stand/bootconf
l /dev/disk/disk2_p2
where the literal “l” (lowercase L) represents LVM. Migrating a Volume Group to New Disks: vgmove Beginning with the September 2009 Update, LVM provides a new vgmove command to migrate data in a volume group from an old set of disks to a new set of disks.
1. Instead of manually creating a diskmap file, mapping the old source to new destination disks, the -i option is used to generate a diskmap file for the migration. The user provides a list of destination disks, called newdiskfile in this example.
# cat newdiskfile
/dev/disk/disk10
/dev/disk/disk11
# vgmove -i newdiskfile -f diskmap.txt /dev/vg00
2. The resulting diskmap.txt file contains the mapping of old source disks to new destination disks:
# cat diskmap.txt
NOTE: For volume groups that support bootable PVs (Version 1.0 and Version 2.2 or higher), lvmove will not move root, boot, swap, or dump PVs. The lvmove migration will fail for a space efficient snapshot that has pre-allocated extents. For more information about snapshots, see “Creating and Administering Snapshot Logical Volumes” (page 103) Administering File System Logical Volumes This section describes special actions you must take when working with file systems inside logical volumes.
5. Create the file system using the character device file. For example:
# newfs -F fstype /dev/vg02/rlvol1
If you do not use the -F fstype option, then newfs creates a file system based on the content of your /etc/fstab file. If there is no entry for the file system in /etc/fstab, then the file system type is determined from the file /etc/default/fs. For information on additional options, see newfs(1M). When creating a VxFS file system, long file names are enabled automatically.
1. If the file system must be unmounted, unmount it. a. Be sure no one has files open in any file system mounted to this logical volume and that it is no user's current working directory. For example: # fuser -cu /work/project5 If the logical volume is in use, confirm that the underlying applications no longer need it. If necessary, stop the applications. NOTE: If the file system is exported using NFS to other systems, verify that no one is using those other systems, then unmount it on those systems. b.
Reducing a File System Created with OnlineJFS Using the fsadm command shrinks the file system, provided the blocks it attempts to deallocate are not currently in use; otherwise, it fails. If sufficient free space is currently unavailable, file system defragmentation of both directories and extents might consolidate free space toward the end of the file system, allowing the contraction process to succeed when retried. For example, suppose your VxFS file system is currently 6 GB.
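A sketch of shrinking it to 4 GB, assuming the file system is mounted at /work on /dev/vg01/lvol4 and that the new size is given in 1 KB sectors (mount point, volume name, and units are assumptions; see fsadm_vxfs(1M)). Shrink the file system first, then reduce the logical volume:

# fsadm -F vxfs -b 4194304 /work
# lvreduce -L 4096 /dev/vg01/lvol4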
Backing Up a VxFS Snapshot File System NOTE: Creating and backing up a VxFS snapshot file system requires that you have the optional HP OnlineJFS product installed on your system. For more information, see HP-UX System Administrator's Guide: Configuration Management. VxFS enables you to perform backups without taking the file system offline by making a snapshot of the file system, a read-only image of the file system at a moment in time. The primary file system remains online and continues to change.
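A hedged sketch of mounting a snapshot, assuming /dev/vg00/lvol4 holds the primary file system and /dev/vg00/lvsnap is a logical volume set aside to hold blocks changed while the snapshot exists (both names are illustrative):

# mkdir /snapmount
# mount -F vxfs -o snapof=/dev/vg00/lvol4 /dev/vg00/lvsnap /snapmount

The backup utility then reads from /snapmount while users continue to work in the primary file system.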
If you plan device swap areas in addition to primary swap, you get the best performance when the device swap areas are on different physical volumes. This configuration allows for the interleaving of I/O to the physical volumes when swapping occurs. To create interleaved swap, create multiple logical volumes for swap, with each logical volume on a separate disk. You must use HP-UX commands to help you obtain this configuration. HP SMH does not allow you to create a logical volume on a specific disk.
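For example, a sketch of creating a 512 MB secondary swap area on a specific disk (names, the size, and the priority are assumptions):

# lvcreate -n swap2 /dev/vg01
# lvextend -L 512 /dev/vg01/swap2 /dev/disk/disk4
# swapon -p 1 /dev/vg01/swap2

Repeating these commands with a different disk for each additional swap logical volume gives the interleaving described above.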
After creating a logical volume to be used as a dump device, use the lvlnboot command with the -d option to update the dump information used by LVM. If you created a logical volume /dev/ vg00/lvol2 for use as a dump area, update the boot information by entering the following: # lvlnboot -d /dev/vg00/lvol2 Removing a Dump Logical Volume To discontinue the use of a currently configured logical volume as a dump device, use the lvrmboot command with the -d option.
Beginning with the HP-UX 11i v3 September 2010 Update, LVM can be enabled to automatically increase the pre-allocated extents of a snapshot when the threshold is reached. With this update, space efficient snapshots' threshold value can also be configured by the user. Administrators can use the lvcreate or lvchange command to enable automatic increase of pre-allocated extents (via the -e option) and specify the threshold value (via the -P option).
• The current and the maximum snapshot capacity for the volume group. • The listing of snapshot logical volumes and some snapshot related attributes for each. The lvdisplay command now additionally displays the following data for a logical volume which has snapshots associated with it: • The number of snapshots associated with the logical volume • A listing of the names of the snapshot logical volumes associated with the logical volume.
START_LVMPUD=1
To stop the daemon manually, use the following command:
# /sbin/init.d/lvm stop
To start the daemon manually again, use the following command:
# /sbin/init.d/lvm start
To start the lvmpud daemon even if START_LVMPUD is not set to 1, enter:
# lvmpud
See the lvmpud(1M) manpage for full details of this daemon. Hardware Issues This section describes hardware-specific issues dealing with LVM. Integrating Cloned LUNs Certain disk arrays can create clones of their LUNs.
4 Troubleshooting LVM This chapter provides conceptual troubleshooting information as well as detailed procedures to help you plan for LVM problems, troubleshoot LVM, and recover from LVM failures.
Max PV Size (Tbytes)        16
Max VGs                     2048
Max LVs                     2047
Max PVs                     2048
Max Mirrors                 5
Max Stripes                 511
Max Stripe Size (Kbytes)    262144
Max LXs per LV              33554432
Max PXs per PV              16777216
Max Extent Size (Mbytes)    256
If your release does not support Version 2.1 volume groups, it displays the following:
# lvmadm -t -V 2.1
Error: 2.1 is an invalid volume group version.
• To display the contents of the /etc/lvmtab and /etc/lvmtab_p files in a human-readable fashion.
A maintenance mode boot differs from a standard boot as follows: • The system is booted in single-user mode. • No volume groups are activated. • Primary swap and dump are not available. • Only the root file system and boot file system are available. • If the root file system is mirrored, only one copy is used. Changes to the root file system are not propagated to the mirror copies, but those mirror copies are marked stale and will be synchronized when the system boots normally.
Temporarily Unavailable Device By default, LVM retries I/O requests with recoverable errors until they succeed or the system is rebooted. Therefore, if an application or file system stalls, your troubleshooting must include checking the console log for problems with your disk drives and taking action to restore the failing devices to service.
Media Errors If an I/O request fails because of a media error, LVM typically prints a message to the console log file (/var/adm/syslog/syslog.log) when the error occurs. In the event of a media error, you must replace the disk (see “Disk Troubleshooting and Recovery Procedures” (page 116)). If your disk hardware supports automatic bad block relocation (usually known as hardware sparing), enable it, because it minimizes media errors seen by LVM. NOTE: LVM does not perform software relocation of bad blocks.
or is not configured into the kernel. vgchange: Couldn't activate volume group "/dev/vg01": Either no physical volumes are attached or no valid VGDAs were found on the physical volumes. If a nonroot volume group does not activate because of a failure to meet quorum, follow these steps: 1. Check the power and data connections (including Fibre Channel zoning and security) of all the disks that are part of the volume group that you cannot activate.
# vgchange -a y /dev/vgtest vgchange: Error: The "lvmp" driver is not loaded. Here is another possible error message: # vgchange -a y /dev/vgtest vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/disk/disk1": Illegal byte sequence vgchange: Couldn't activate volume group "/dev/vgtest": Quorum not present, or some physical volume(s) are missing.
Max VG Size (Tbytes)        2048
Max LV Size (Tbytes)        256
Max PV Size (Tbytes)        16
Max VGs                     512
Max LVs                     511
Max PVs                     511
Max Mirrors                 6
Max Stripes                 511
Max Stripe Size (Kbytes)    262144
Max LXs per LV              33554432
Max PXs per PV              16777216
Max Extent Size (Mbytes)    256
TIP: If your system has no Version 2.x volume groups, you can free up system resources associated with lvmp by unloading it from the kernel.
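For example, assuming no Version 2.x volume groups are configured and that lvmp is managed as a dynamically loadable kernel module (state names per kcmodule(1M)):

# kcmodule lvmp=unused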
LVM Boot Failures There are several reasons why an LVM configuration cannot boot. In addition to the problems associated with boots from non-LVM disks, the following problems can cause an LVM-based system not to boot. Insufficient Quorum In this scenario, not enough disks are present in the root volume group to meet the quorum requirements.
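If you have confirmed that the missing disks can be safely ignored, one common workaround on an HP 9000 server is to boot with quorum checking disabled, as in this illustrative example:

ISL> hpux -lq (;0)/stand/vmunix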
successfully backed up in this step will be recoverable, but some or all of your data might not be successfully backed up because of file corruption.
3. Immediately unmount the corrupted file system if it is mounted.
4. Use the logical volume for swap space or raw data storage, or use HP SMH or the newfs command to create a new file system in the logical volume. This new file system now matches the current reduced size of the logical volume.
The LVM OLR feature uses a new option (-a) in the pvchange command. The -a option disables or re-enables a specified path to an LVM disk, as used to halt LVM access to the disk under “Step 6: Replacing a Bad Disk (Persistent DSFs)” (page 129) or “Step 7: Replacing a Bad Disk (Legacy DSFs)” (page 138). For more information, see the pvchange(1M) manpage.
Step 2: Recognizing a Failing Disk This section explains how to look for signs that one of your disks is having problems, and how to determine which disk it is. I/O Errors in the System Log Often an error message in the system log file, /var/adm/syslog/syslog.log, is your first indication of a disk problem.
this volume group vgdisplay: Warning: couldn't query all of the physical volumes.
# vgchange -a y /dev/vg01
vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c0t3d0": A component of the path of the physical volume does not exist.
Volume group "/dev/vg01" has been successfully changed.
Another sign of a disk problem is seeing stale extents in the lvdisplay command output.
Step 3: Confirming Disk Failure Once you suspect a disk has failed or is failing, make certain that the suspect disk is indeed failing. Replacing or removing the incorrect disk makes the recovery process take longer. It can even cause data loss. For example, in a mirrored configuration, if you were to replace the wrong disk—the one holding the current good copy rather than the failing disk—the mirrored data on the good disk is lost. It is also possible that the suspect disk is not failing.
# dd if=/dev/rdsk/c0t5d0 of=/dev/null bs=1024k count=64 64+0 records in 64+0 records out NOTE: If the dd command hangs or takes a long time, Ctrl+C stops the read on the disk. To run dd on the background, add & at the end of the command. The following command shows an unsuccessful read of the whole disk: # dd if=/dev/rdsk/c1t3d0 of=/dev/null bs=1024k dd read error: I/O error 0+0 records in 0+0 records out 4.
Note the value calculated is used in the skip argument. The count is obtained by multiplying the PE size by 1024.
Step 4: Determining Action for Disk Removal or Replacement Once you know which disk is failing, you can decide how to deal with it. You can choose to remove the disk if your system does not need it, or you can choose to replace it. Before deciding on your course of action, you must gather some information to help guide you through the recovery process.
command shows '???' for the physical volume if it is unavailable. The issue with this approach is that it does not show precisely how many disks are unavailable. To ensure that multiple simultaneous disk failures have not occurred, run vgdisplay to check the difference between the number of active and the number of current physical volumes. For example, a difference of one means only one disk is failing.
NOTE: There might be an instance where you see that only the failed physical volume holds the current copy of a given extent (and all other mirror copies of the logical volume hold the stale data for that given extent), and LVM does not permit you to remove that physical volume from the volume group. In this case, use the lvunstale command (available from your HP support representative) to mark one of the mirror copies as “nonstale” for that given extent.
Step 5: Removing a Bad Disk You can elect to remove the failing disk from the system instead of replacing it if you are certain that another valid copy of the data exists or the data can be moved to another disk. Removing a Mirror Copy from a Disk If you have a mirror copy of the data already, you can stop LVM from using the copy on the failing disk by reducing the number of mirrors. To remove the mirror copy from a specific disk, use lvreduce, and specify the disk from which to remove the mirror copy.
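For example, assuming the failing disk is /dev/disk/disk5 and the logical volume has one mirror copy:

# lvreduce -m 0 /dev/vg01/lvol1 /dev/disk/disk5

This removes the copy of lvol1 residing on /dev/disk/disk5 and leaves the copy on the healthy disk in place.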
The physical volume key of a disk indicates its order in the volume group. The first physical volume has the key 0, the second has the key 1, and so on. This need not be the order of appearance in /etc/lvmtab file although it is usually the case, at least when a volume group is initially created. You can use the physical volume key to address a physical volume that is not attached to the volume group.
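For example, to display physical volume keys instead of device file names in the extent listing (volume names here are illustrative; see lvdisplay(1M) for the -k option):

# lvdisplay -v -k /dev/vg01/lvol1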
Total PE                    1023
Free PE                     1023
Allocated PE                0
Stale PE                    0
IO Timeout (Seconds)        default
Autoswitch                  On
In this example, there are two entries for PV Name. Use the vgreduce command to reduce each path as follows:
# vgreduce vgname /dev/dsk/c0t5d0
# vgreduce vgname /dev/dsk/c1t6d0
If the disk is unavailable, the vgreduce command fails. You can still forcibly reduce it, but you must then rebuild the lvmtab, which has two side effects.
Step 6: Replacing a Bad Disk (Persistent DSFs) If instead of removing the disk, you need to replace the faulty disk, this section provides a step-by-step guide to replacing a faulty LVM disk, for systems configured with persistent DSFs. For systems using legacy DSFs, refer to the next step “Step 7: Replacing a Bad Disk (Legacy DSFs)” (page 138) If you have any questions about the recovery process, contact your local HP Customer Response Center for assistance.
If the disk is hot-swappable, replace it. If the disk is not hot-swappable, shut down the system, turn off the power, and replace the disk. Reboot the system. 4. Notify the mass storage subsystem that the disk has been replaced. If the system was not rebooted to replace the failed disk, then run scsimgr before using the new disk as a replacement for the old disk.
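For example, using the device special file from this procedure:

# scsimgr replace_wwid -D /dev/rdisk/disk14

This tells the mass storage subsystem to accept the new disk's WWID in place of the old one.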
8. Restore LVM access to the disk. If you did not reboot the system in Step 2, “Halt LVM access to the disk,” reattach the disk as follows: # pvchange –a y /dev/disk/disk14 If you did reboot the system, reattach the disk by reactivating the volume group as follows: # vgchange -a y /dev/vgnn NOTE: The vgchange command with the -a y option can be run on a volume group that is deactivated or already activated.
b. If fuser reports process IDs using the logical volume, use the ps command to map the list of process IDs to processes, and then determine whether you can halt those processes. For example, look up processes 27815 and 27184 as follows:
# ps -fp27815 -p27184
UID       PID   PPID  C    STIME TTY       TIME COMMAND
root    27815  27184  0 09:04:05 pts/0     0:00 vi test.c
root    27184  27182  0 08:26:24 pts/0     0:00 -sh
c. If so, use fuser with the -k option to kill all processes accessing the logical volume.
NOTE: If the system was rebooted to replace the failed disk, then ioscan –m lun does not display the old disk. 6. Assign the old instance number to the replacement disk. For example: # io_redirect_dsf -d /dev/disk/disk14 -n /dev/disk/disk28 This assigns the old LUN instance number (14) to the replacement disk. In addition, the device special files for the new disk are renamed to be consistent with the old LUN instance number.
9. Recover any lost data. LVM recovers all the mirrored logical volumes on the disk, and starts that recovery when the volume group is activated. For all the unmirrored logical volumes that you identified in Step 2, “Halt LVM access to the disk,” restore the data from backup and reenable user access as follows: • For raw volumes, restore the full raw volume using the utility that was used to create your backup. Then restart the application. • For file systems, you must re-create the file systems first.
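For example, a sketch for one file system logical volume; the file system type, mount point, and backup utility are assumptions:

# newfs -F vxfs /dev/vg01/rlvol4
# mount /dev/vg01/lvol4 /work
# frecover -x -i /work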
NOTE: On an HP 9000 server, the boot disk is not partitioned so the physical volume refers to the entire disk, not the HP-UX partition. Use the following command: # pvchange -a N /dev/disk/disk14 3. Replace the disk. For the hardware details on how to replace the disk, see the hardware administrator’s guide for the system or disk array. If the disk is hot-swappable, replace it. If the disk is not hot-swappable, shut down the system, turn off the power, and replace the disk. Reboot the system.
NOTE: If the system was rebooted to replace the failed disk, then ioscan –m lun does not display the old disk. 6. (HP Integrity servers only) Partition the replacement disk. Partition the disk using the idisk command and a partition description file, and create the partition device files using insf, as described in “Mirroring the Boot Disk on HP Integrity Servers” (page 92). 7. Assign the old instance number to the replacement disk.
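For step 6, the partitioning commands typically look like the following (a sketch; the partition description file and the device special file of the new disk are illustrative):
# idisk -wf /tmp/partition_description /dev/rdisk/disk28
# insf -eC disk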
NOTE: The vgchange command with the -a y option can be run on a volume group that is deactivated or already activated. It attaches all paths for all disks in the volume group and resumes automatically recovering any disks in the volume group that had been offline or any disks in the volume group that were replaced. Therefore, run vgchange only after all work has been completed on all disks and paths in the volume group, and it is necessary to attach them all. 10. Initialize boot information on the disk.
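For step 10, the boot information is typically rewritten with mkboot and lvlnboot (a sketch; the device file, autoboot string, and volume group name are illustrative, and the exact mkboot options depend on your server architecture; see mkboot(1M)):
# mkboot -e -l /dev/rdisk/disk14
# mkboot -a "hpux -lq" /dev/rdisk/disk14
# lvlnboot -R /dev/vg00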
Step 7: Replacing a Bad Disk (Legacy DSFs)
Follow these steps to replace a bad disk if your system is configured with only legacy DSFs.
NOTE: HP recommends using persistent device special files, because they support a greater variety of load-balancing options. For replacing a disk with persistent device special files, see “Step 6: Replacing a Bad Disk (Persistent DSFs)” (page 129).
To replace a bad disk, follow these steps.
1. Halt LVM Access to the Disk.
3. Initialize the Disk for LVM. This step copies LVM configuration information onto the disk and marks it as owned by LVM, so that it can subsequently be attached to the volume group. If you replaced a mirror of the root disk on an Integrity server, first run the idisk and insf commands as described in “Mirroring the Boot Disk on HP Integrity Servers” (page 92); for PA-RISC servers or non-root disks, those commands are unnecessary.
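The configuration copy itself is written with vgcfgrestore, and the disk is then reattached; a typical sequence, assuming volume group vg01 and legacy device file c2t5d0 (illustrative names), looks like this:
# vgcfgrestore -n /dev/vg01 /dev/rdsk/c2t5d0
# vgchange -a y /dev/vg01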
NOTE: The ITRC resource forums at http://www.itrc.hp.com offer peer-to-peer support to solve problems and are free to users after registration. If this is a new problem or if you need additional help, log your problem with the HP Response Center, either online through the support case manager at http://www.itrc.hp.com, or by calling HP Support.
5 Support and Other Resources New and Changed Information in This Edition The eighth edition of HP-UX System Administrator's Guide: Logical Volume Management addresses the following new topics: • Added information about converting cDSFs back to their corresponding persistent DSFs, in Table 5 (page 39). • Provided new information on LVM I/O timeout parameters, see “Configuring LVM I/O Timeout Parameters” (page 33) and “LVM I/O Timeout Parameters” (page 161).
Key                       The name of a keyboard key. Return and Enter both refer to the same key.
Term                      The defined use of an important word or phrase.
User input                Commands and other text that you type.
Variable or Replaceable   The name of a placeholder in a command, function, or other syntax display that you replace with an actual value.
-chars                    One or more grouped command options, such as -ikx. The chars are usually a string of literal characters that each represent a specific option.
Related Information
HP-UX technical documentation can be found on HP's documentation website at http://www.hp.com/go/hpux-core-docs. In particular, LVM documentation is provided at http://www.hp.com/go/hpux-LVM-VxVM-docs. See the HP-UX Logical Volume Manager and MirrorDisk/UX Release Notes for information about new features and defect fixes in each release.
HP-UX 11i Release Names and Operating System Version Identifiers With HP-UX 11i, HP delivers a highly available, secure, and manageable operating system that meets the demands of end-to-end Internet-critical computing. HP-UX 11i supports enterprise, mission-critical, and technical computing environments. HP-UX 11i is available on both HP 9000 systems and HP Integrity systems. Each HP-UX 11i release has an associated release name and release identifier.
Document updates can be issued between editions to correct errors or document product changes. To ensure that you receive the updated or new editions, subscribe to the appropriate product support service. See your HP sales representative for details. You can find the latest version of this document online at http://www.hp.com/go/hpux-LVM-VxVM-docs.
HP Encourages Your Comments HP encourages your comments concerning this document. We are committed to providing documentation that meets your needs. Send any errors found, suggestions for improvement, or compliments to: http://www.hp.com/bizsupport/feedback/ww/webfeedback.html Include the document title, manufacturing part number, and any comment, error found, or suggestion for improvement you have concerning this document.
A LVM Specifications and Limitations This appendix discusses LVM product specifications. NOTE: Do not infer that a system configured to these limits is automatically usable. Table 13 Volume Group Version Maximums Version 1.0 Volume Groups Version 2.0 Volume Groups Version 2.1 Volume Groups Version 2.
1 The limit of 2048 volume groups is shared among all Version 2.x volume groups. Version 2.x volume groups can be created with volume group numbers ranging from 0 to 2047. However, the maximum number of Version 2.0 volume groups that can be created is 512.
2 For volume group Version 2.2 or higher, the total number of logical volumes includes normal logical volumes as well as snapshot logical volumes.

Table 14 Version 1.
Table 15 Version 2.x Volume Group Limits

Parameter                                      Command to Set/      Minimum   Default           Maximum
                                               Change Parameter
Number of volume groups                        n/a                  0         n/a               2048¹
Number of physical volumes in a volume group   n/a                  511       511 (2.0)         511 (2.0)
                                                                              2048 (2.1, 2.2)   2048 (2.1, 2.2)
Number of logical volumes in a volume group    n/a                  511       511 (2.0)         511 (2.0)
                                                                              2047 (2.1, 2.2)   2047 (2.1, 2.2)²
Determining LVM’s Maximum Limits on a System
The March 2008 update to HP-UX 11i v3 (11.31) introduced a new command that enables the system administrator to determine the maximum LVM limits supported on the target system for a given volume group version. The lvmadm command displays the implemented limits for Version 1.0 and Version 2.x volume groups. It is not possible to create a volume group that exceeds these limits.
Max PXs per PV              16777216
Max Extent Size (Mbytes)    256
Min Unshare unit (Kbytes)   512
Max Unshare unit (Kbytes)   4096
Max Snapshots per LV        255
B LVM Command Summary
This appendix contains a summary of the LVM commands and descriptions of their use.

Table 16 LVM Command Summary

extendfs    Extends a file system:
            # extendfs /dev/vg00/rlvol3
lvmadm      Displays the limits associated with a volume group version:
            # lvmadm -t -V 2.
Table 16 LVM Command Summary (continued)

pvchange    Changes the characteristics of a physical volume:
            # pvchange -a n /dev/disk/disk2
pvck        Performs a consistency check on a physical volume:
            # pvck /dev/disk/disk47_p2
pvcreate    Creates a physical volume to be used as part of a volume group:
            # pvcreate /dev/rdisk/disk2
pvdisplay   Displays information about a physical volume:
            # pvdisplay -v /dev/disk/disk2
pvmove      Moves extents from one physical volume to another:
            # pvmove
Table 16 LVM Command Summary (continued)

vgscan      Scans the system disks for volume groups:
            # vgscan -v
vgreduce    Reduces a volume group by removing one or more physical volumes from it:
            # vgreduce /dev/vg00 /dev/disk/disk2
vgremove    Removes the definition of a volume group from the system and the disks:
            # vgremove /dev/vg00 /dev/disk/disk2
vgsync      Synchronizes all mirrored logical volumes in the volume group:
            # vgsync vg00
vgversion   Migrates a volume group to a different volume group version:
C Volume Group Provisioning Tips This appendix contains recommendations for parameters to use when creating your volume groups. Choosing an Optimal Extent Size for a Version 1.0 Volume Group When creating a Version 1.0 volume group, the vgcreate command may fail and display a message that the extent size is too small or that the VGRA is too big. In this situation, you must choose a larger extent size and run vgcreate again.
/* Assumption: the assignment to 'length' and the outer roundup() call
   reconstruct the start of this expression, which is missing here. */
length = roundup((roundup(16 * lvs, BS) +
                  roundup(16 + 4 * pxs, BS) * pvs) / BS, 8);
if (length > 768) {
        printf("Warning: A bootable PV cannot be added to a VG \n"
               "created with the specified argument values. \n"
               "The metadata size %d Kbytes, must be less \n"
               "than 768 Kbytes.\n"
               "If the intention is not to have a boot disk in this \n"
               "VG then do not use '-B' option during pvcreate(1M) \n"
               "for the PVs to be part of this VG.\n", length);
}
D Striped and Mirrored Logical Volumes
This appendix provides more details on striped and mirrored logical volumes. It describes the difference between standard hardware-based RAID and the LVM implementation of RAID.
Summary of Hardware RAID Configuration
RAID 0, commonly referred to as striping, refers to the segmentation of logical sequences of data across disks. RAID 1, commonly referred to as mirroring, refers to creating exact copies of logical sequences of data.
set, the logical extents are striped and mirrored to obtain the data layout displayed in Figure 6 (page 157). Striping and mirroring in LVM combines the advantages of the hardware implementations of RAID 1+0 and RAID 0+1, and provides the following benefits:
• Better write performance. Write operations take place in parallel, and each physical write operation is directed to a different physical volume.
• Excellent read performance.
NOTE: Striping with mirroring always uses a strict allocation policy, where copies of data do not reside on the same physical disk. This results in a configuration similar to RAID 0+1, as illustrated in Figure 7 (page 158).
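For reference, a striped and mirrored logical volume can be created in a single command (a sketch with illustrative names and sizes; -i sets the number of stripes, -I the stripe size in kilobytes, -m the number of mirror copies, and mirroring requires the MirrorDisk/UX product):
# lvcreate -i 2 -I 64 -m 1 -L 512 -n lvsm /dev/vg01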
Compatibility Note
Releases prior to HP-UX 11i v3 support only striped or only mirrored logical volumes; they do not support the combination of striping and mirroring. If a logical volume using simultaneous mirroring and striping is created on HP-UX 11i v3, attempts to import or activate its associated volume group fail on a previous HP-UX release.
E LVM I/O Timeout Parameters When LVM receives an I/O to a logical volume, it converts this logical I/O to physical I/Os to one or more physical volumes from which the logical volume is allocated. There are two LVM timeout values which affect this operation: • Logical volume timeout (LV timeout). • Physical volume timeout (PV timeout). Logical Volume Timeout (LV timeout) LV timeout controls how long LVM retries a logical I/O after a recoverable physical I/O error.
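Both timeouts can be inspected and changed with standard LVM commands; a sketch, with illustrative device names and values (lvchange -t sets the LV timeout in seconds, and pvchange -t sets the PV timeout):
# lvchange -t 60 /dev/vg01/lvol1
# pvchange -t 90 /dev/disk/disk14
# lvdisplay /dev/vg01/lvol1 | grep -i timeout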
Timeout Differences: 11i v2 and 11i v3
Because native multipathing is included in the 11i v3 mass storage stack and is enabled by default, LVM timeout behavior can differ between 11i v2 and 11i v3 in certain cases.
• Meaning of PV timeout. In 11i v2, LVM applies the configured PV timeout fully to the particular PV link on which it is set. If an I/O fails, LVM retries the I/O on the next available PV link to the same physical volume with a new PV timeout budget.
F Warning and Error Messages
This appendix lists some of the warning and error messages reported by LVM. For each message, the cause is described and an action is recommended.
Matching Error Messages to Physical Disks and Volume Groups
Often an error message contains the device number of a device rather than the device file name. For example, you might see the following message in /var/adm/syslog/syslog.log.
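One way to map the device number in such a message to a volume group name is to list the volume group device files and match the major and minor numbers (an illustrative listing; the names, numbers, and dates on your system will differ):
# ll /dev/*/group
crw-r--r--   1 root   sys     64 0x010000 Apr  3  2010 /dev/vg00/group
crw-r--r--   1 root   sys    128 0x004000 Apr  3  2010 /dev/vgtest2/group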
The example error message refers to the Version 2.x volume group vgtest2.

Messages For All LVM Commands

Message Text
vgcfgbackup: /etc/lvmtab is out of date with the running kernel:
Kernel indicates # disks for "/dev/vgname"; /etc/lvmtab has # disks.
Cannot proceed with backup.

Cause
The number of current physical volumes (Cur PV) and active physical volumes (Act PV) are not the same. Cur PV and Act PV must always agree for the volume group.
Max PE per PV            4350
VGDA                     2
PE Size (Mbytes)         4
Total PE                 4340
Alloc PE                 3740
Free PE                  600
Total PVG                0
Total Spare PVs          0
Total Spare PVs in use   0
VG Version               1.0
VG Max Size              1082g
VG Max Extents           69248

In this example, the total free space is 600 physical extents of 4 MB, or 2400 MB.
2.
3. The logical volume is mirrored with a strict allocation policy, and there are not enough extents on a separate disk to comply with the allocation policy.
pvchange(1M)

Message Text
Unable to detach the path or physical volume via the pathname provided.
Either use pvchange(1M) -a N to detach the PV using an attached path
or detach each path to the PV individually using pvchange(1M) -a n

Cause
The specified path is not part of any volume group, because the path has not been successfully attached to the otherwise active volume group it belongs to.

Recommended Action
Check the specified path name to make sure it is correct.
Cause
The vgcfgrestore command was used to initialize a disk that already belongs to an active volume group.

Recommended Action
Detach the physical volume or deactivate the volume group before attempting to restore the physical volume. If the disk may be corrupted, detach the disk and mark it using vgcfgrestore, then attach it again without replacing the disk. This causes LVM to reinitialize the disk and synchronize any mirrored user data mapped there.
1. The disk was missing when the volume group was activated, but was later restored. This typically occurs when a system is rebooted or the volume group is activated with a disk missing, uncabled, or powered down.
2. The disk LVM header was overwritten with the wrong volume group information. If the disk is shared between two systems, one system might not be aware that the disk was already in a volume group.
# mkdir /dev/vgname
# mknod /dev/vgname/group c 64 unique_minor_number
# vgimport -m vgname.map -v -f vgname.file /dev/vgname

vgcreate(1M)

Message Text
vgcreate: "/dev/vgname/group": not a character device.

Cause
The volume group device file does not exist, and this version of the vgcreate command does not automatically create it.

Recommended Action
Create the directory for the volume group and create a group file, as described in “Creating the Volume Group Device File” (page 44).
vgdisplay(1M)

Message Text
vgdisplay: Couldn't query volume group "/dev/vgname".
Possible error in the Volume Group minor number;
Please check and make sure the group minor number is unique.
vgdisplay: Cannot display volume group "/dev/vgname".

Cause
This error has the following possible causes:
1. There are multiple LVM group files with the same minor number.
2. Serviceguard was previously installed on the system and the /dev/slvmvg device file still exists.

Recommended Action
1.
Recommended Action
See the recommended actions under the “vgchange(1M)” (page 167) error messages.

vgextend(1M)

Message Text
vgextend: Not enough physical extents per physical volume. Need: #, Have: #.

Cause
The disk size exceeds the volume group maximum disk size. This limitation is defined when the volume group is created, as the product of the extent size specified with the -s option of vgcreate and the maximum number of physical extents per disk specified with the -e option.
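As a worked example, a volume group created with vgcreate -s 4 -e 1016 (4 MB extents and at most 1016 extents per physical volume, the Version 1.0 defaults) can address at most 4 MB × 1016, roughly 4 GB, on each disk; space beyond that on a larger replacement disk is unusable unless the parameters are changed with vgmodify or the volume group is re-created.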
vgmodify(1M)

Message Text
Error: Cannot reduce max_pv below n+1 when the volume group is activated because the highest pvkey in use is n.

Cause
The command is trying to reduce max_pv below the highest pvkey in use. This is disallowed while a Version 1.0 volume group is activated, since it requires the compacting of pvkeys.

Recommended Action
Try executing the vgmodify operation with n+1 PVs. The permissible max_pv values can be obtained using the vgmodify -t option (used with and without the -a option).
Cause
The pvkey of a physical volume can range from 0 to one less than the maximum supported number of physical volumes for a volume group version. If the pvkey of the physical volume is not in this range for the target Version 2.x volume group, vgversion fails the migration.

Recommended Action
1. Use lvmadm to determine the maximum supported number of physical volumes for the target volume group version.
2.
Message Text
LVM: Begin: Contiguous LV (VG mmm 0x00n000, LV Number: p) movement:
LVM: End: Contiguous LV (VG mmm 0x00n000, LV Number: p) movement:

Cause
This message is advisory. It is generated whenever the extents of a contiguous logical volume belonging to a Version 2.x volume group are moved using pvmove. This message is generated beginning with the September 2009 Update.

Recommended Action
None, if both the Begin and End messages appear for a particular contiguous LV.
Recommended Action
None.

Message Text
LVM: vg 64 0xnnnnnn: Unable to register for event notification on device 0xnnnnnnnn (1)

Cause
This message can be displayed on the first system boot after upgrading to HP-UX 11i Version 3. It is a transient message caused by updates to the I/O configuration. Later in the boot process, LVM registers for event notification again, and succeeds.

Recommended Action
None.

Message Text
LVM: WARNING: Snapshot LV (VG mmm 0x00n000, LV Number: p) threshold value reached.
Recommended Action
Start the lvmpud daemon. Refer to the “Managing the lvmpud Daemon” (page 105) section.

Message Text
vmunix: LVM: ERROR: The task posted for increasing the pre-allocated extents failed for the snapshot LV (VG 128 0x004000, LV Number: 48).

Cause
This message is logged when LVM fails to increase the pre-allocated extents automatically. It should not appear frequently; if it does, it likely indicates an LVM software or configuration issue.
Glossary Agile Addressing The ability to address a LUN with the same device special file regardless of the physical location of the LUN or the number of paths leading to it. In other words, the device special file for a LUN remains the same even if the LUN is moved from one Host Bus Adaptor (HBA) to another, moved from one switch/hub port to another, presented via a different target port to the host, or configured with multiple hardware paths. Also referred to as persistent binding.
Mirroring Simultaneous replication of data, ensuring a greater degree of data availability. LVM can map identical logical volumes to multiple LVM disks, thus providing the means to recover easily from the loss of one copy (or multiple copies in the case of multi-way mirroring) of data. Mirroring can provide faster access to data for applications using more data reads than writes. Mirroring requires the MirrorDisk/UX product.
Index Symbols /etc/default/fs, 98 /etc/fstab, 35, 57, 67, 98 /etc/lvmconf/ directory, 18, 35, 70 /etc/lvmpvg, 33 /etc/lvmtab, 11, 21, 39, 57, 71, 72, 108, 115, 163, 164 /stand/bootconf, 92, 95 /stand/rootconf, 108 /stand/system, 23 /var/adm/syslog/syslog.
backing up via mirroring, 68 boot file system see boot logical volume creating, 97 determining who is using, 57, 68, 99, 100, 131 extending, 98 guidelines, 22 in /etc/fstab, 98 initial size, 21 OnlineJFS, 98 overhead, 21 performance considerations, 22 reducing, 99, 115 HFS or VxFS, 100 OnlineJFS, 100 resizing, 22 root file system see root logical volume short or long file names, 98 stripe size for HFS, 32 stripe size for VxFS, 32 unresponsive, 109 finding logical volumes using a disk, 123 fsadm command, 99,
for root logical volume, 89 for swap logical volume, 89, 102 updating boot information, 36, 92, 94, 115 lvmadm command, 12, 39, 107, 108, 112, 150, 152, 163 lvmchk command, 39 lvmerge command, 40, 68, 152 synchronization, 26 lvmove command, 40, 152 lvmpud, 105 lvmpud command, 39, 152 lvreduce command, 40 and pvmove failure, 75 reducing a file system, 100, 115 reducing a logical volume, 55, 152 reducing a swap device, 102 removing a mirror, 56, 152 removing a mirror from a specific disk, 56 lvremove command,
policies for allocating, 24 policies for writing, 25 size, 9, 17 synchronizing, 26 physical volume groups, 30, 33 naming convention, 15 Physical Volume Reserved Area see PVRA physical volumes adding, 51 auto re-balancing, 75 commands for, 39 converting from bootable to nonbootable, 84, 169 creating, 43 defined, 9 device file, 13, 15, 163 disabling a path, 87 disk layout, 16 displaying information, 41 moving, 71, 72 moving data between, 73 naming convention, 13 removing, 51 resizing, 77 pre-allocated extents
creating a spare disk, 76 defined, 8, 27 detaching links, 87 reinstating a spare disk, 77 requirements, 27 splitting a mirrored logical volume, 68 splitting a volume group, 67 stale data, 26 strict allocation policy, 24 stripe size, 32 striping, 31 and mirroring, 33 benefits, 31 creating a striped logical volume, 52 defined, 8 interleaved disks, 31 performance considerations, 31 selecting stripe size, 32 setting up, 31 swap logical volume, 22, 23, 101 see also primary swap logical volume creating, 89, 102 e
splitting a volume group, 67 with multipathed disks, 58 vgmodify command, 12, 39, 58, 153 changing physical volume type, 84, 169 collecting information, 59, 62 errors, 172 modifying volume group parameters, 59, 171 resizing physical volumes, 77, 165 vgmove command, 95, 153 VGRA and vgmodify, 59, 63 area on disk, 17 size dependency on extent size, 17, 169 vgreduce command, 39, 51, 154 with multipathed disks, 58 vgremove command, 39, 68, 154 vgscan command, 39, 154 moving disks, 72 recreating /etc/lvmtab, 115