HP-UX System Administrator's Guide: Logical Volume Management HP-UX 11i Version 3 Abstract This document describes how to configure, administer, and troubleshoot the Logical Volume Manager (LVM) product on the HP-UX 11i Version 3 platform. The HP-UX System Administrator's Guide is written for administrators of all skill levels who need to administer HP-UX systems beginning with HP-UX Release 11i Version 3.
© Copyright 2011, 2014 Hewlett-Packard Development Company, L.P. Legal Notices Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
HP secure development lifecycle Starting with the HP-UX 11i v3 March 2013 update release, the HP secure development lifecycle provides the ability to authenticate HP-UX software. Software delivered through this release has been digitally signed using HP's private key. You can now verify the authenticity of software delivered through this release before installing the products. To verify the software signatures in a signed depot, the following products must be installed on your system: • B.11.31.
1 Introduction This chapter addresses the following topics: • “LVM features” (page 11) • “LVM architecture” (page 12) • “Physical versus logical extents” (page 13) • “LVM volume group versions” (page 14) • “LVM device file usage” (page 15) • “LVM Disk Layout” (page 19) • “LVM Limitations” (page 21) • “Shared LVM” (page 21) 1.1 LVM features Logical Volume Manager (LVM) is a storage management system that lets you allocate and manage disk space for file systems or raw data.
1.2 LVM architecture An LVM system starts by initializing disks for LVM usage. An LVM disk is known as a physical volume (PV). A disk is marked as an LVM physical volume using either the HP System Management Homepage (HP SMH) or the pvcreate command. Physical volumes use the same device special files as traditional HP-UX disk devices. LVM divides each physical volume into addressable units called physical extents (PEs).
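For example, a minimal sketch of this flow follows; the device file, volume group, and logical volume names are illustrative only, and on releases before March 2008 you must first create the volume group's group device file:
# pvcreate /dev/rdisk/disk3
# vgcreate /dev/vg01 /dev/disk/disk3
# lvcreate -L 100 -n lvdata /dev/vg01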
Figure 1 Disk Space Partitioned Into Logical Volumes 1.3 Physical versus logical extents When LVM allocates disk space to a logical volume, it automatically creates a mapping of the logical extents to physical extents. This mapping depends on the policy chosen when creating the logical volume. Logical extents are allocated sequentially, starting at zero, for each logical volume. LVM uses this mapping to access the data, regardless of where it physically resides.
Figure 2 Physical Extents and Logical Extents As shown in Figure 2, the contents of the first logical volume are contained on all three physical volumes in the volume group. Because the second logical volume is mirrored, each logical extent is mapped to more than one physical extent. In this case, there are two physical extents containing the data, each on both the second and third disks within the volume group.
Version 1.0 is the version supported on all current and previous versions of HP-UX 11i. The procedures and command syntax for managing Version 1.0 volume groups are unchanged from previous releases. When creating a new volume group, vgcreate defaults to Version 1.0. Version 2.0, 2.1, and 2.2 enable the configuration of larger volume groups, logical volumes, physical volumes, and other parameters. Version 2.1 is identical to Version 2.0 except that it supports more volume groups, logical volumes, and physical volumes (compare the lvmadm -t output for each version).
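For example, a sketch of creating a Version 2.1 volume group with a 4 TB maximum size (the names, extent size, and maximum size are illustrative):
# vgcreate -V 2.1 -s 8 -S 4t /dev/vg02 /dev/disk/disk5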
link) information. The multipathed disk has a single persistent DSF regardless of the number of physical paths to it. The legacy view, represented by the legacy DSF, continues to exist. 1.5.1 Legacy device files versus persistent device files As of HP-UX 11i Version 3, disk devices can be represented by two different types of device files in the /dev directory, legacy and persistent. Legacy device files were the only type of mass storage device files in releases prior to HP-UX 11i Version 3.
1.5.3 Naming Conventions for LVM You must refer to LVM devices or volume groups by name when using them within HP SMH or with HP-UX commands. By default, the LVM device files created by both HP SMH and HP-UX commands follow a standard naming convention. However, you can choose customized names for volume groups and logical volumes.
vg, HP recommends using this prefix. By default, HP SMH uses names of the form /dev/vgnn. The number nn starts at 00 and is incremented in the order that volume groups are created. By default, the root volume group is vg00. Logical Volume Names Logical volumes are identified by their device file names, which can either be assigned by you or assigned by default when you create a logical volume using the lvcreate command. If you assign the name, you can choose any name of up to 255 characters.
crw-r-----   1 root   root   64 0x010001 Mar 28  2004 rlvol1
crw-r-----   1 root   root   64 0x010002 Mar 28  2004 rlvol2
By default, volume group numbering begins with zero (vg00), while logical volumes begin with one (lvol1). This is because the logical volume number corresponds to the minor number and the volume group's group file is assigned minor number 0. Physical volumes use the device files associated with their disk. LVM does not create device files for physical volumes. 1.5.4.1 Version 1.
Information about the LVM disk data structures in the BDRA is maintained with the lvlnboot and lvrmboot commands.
A smaller-than-default extent size or number of physical extents might be preferable for small disks. However, a high-capacity physical volume might be unusable in a volume group whose extent size is small or whose number of physical extents per disk is set low. 1.6.5 User Data Area The user data area is the region of the LVM disk used to store all user data, including file systems, virtual memory system (swap), or user applications. 1.7 LVM Limitations LVM is a sophisticated subsystem.
• For volume group Version 1.0 or 2.0, lvchange, lvcreate, lvextend, lvmerge, lvmove, lvreduce, lvremove, lvsplit, vgextend, vgmodify, vgmove, and vgreduce cannot be used in shared mode. • The pvmove command cannot be used in shared mode for volume group Version 1.0. • For pvchange, only the -a option (for attaching or detaching a path to the specified physical volume) is allowed in shared mode; no other pvchange options are supported in shared mode.
2 Configuring LVM By default, the LVM commands are already installed on your system. This chapter discusses issues to consider when setting up your logical volumes. It addresses the following topics: • “Planning Your LVM Configuration” (page 23) • “Setting Up Different Types of Logical Volumes” (page 23) • “Planning for Availability” (page 27) • “Planning for Performance” (page 32) • “Planning for Recovery” (page 37) 2.1 Planning Your LVM Configuration Using logical volumes requires some planning.
same manner. LVM uses the physical volumes in the order in which they appear in /etc/lvmtab and /etc/lvmtab_p, which means that data of a logical volume might not be evenly distributed over all the physical volumes within your volume group. As a result, when I/O access to the logical volumes occurs, one or more disks within the volume group might be heavily used, while the others might be lightly used, or not used at all. This arrangement does not provide optimum I/O performance.
Although these estimates are not precise, they suffice for planning a file system size. Create your file system large enough to be useful for some time before increasing its size. TIP: Because increasing the size of a file system is usually easier than reducing its size, be conservative in estimating how large to create a file system. An exception is the root file system. As a contiguous logical volume, the root file system is difficult to extend.
2.2.3.1 Swap Logical Volume Guidelines Use the following guidelines when configuring swap logical volumes: • Interleave device swap areas for better performance. Two swap areas on different disks perform better than one swap area with the equivalent amount of space. This configuration allows interleaved swapping, which means the swap areas are written to concurrently, thus enhancing performance. When using LVM, set up secondary swap areas within logical volumes that are on different disks using lvextend.
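For example, a sketch of placing a 2 GB secondary swap area on a chosen second disk; the volume group, logical volume name, disk, and size are illustrative:
# lvcreate -n swap2 -C y /dev/vg01
# lvextend -L 2048 /dev/vg01/swap2 /dev/disk/disk5
# swapon /dev/vg01/swap2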
unlike the lvsplit approach, does not require the user to reduce a mirror copy from the original logical volume. Refer to the LVM Snapshot Logical Volumes white paper for more details on backups using snapshot logical volumes. 2.3 Planning for Availability This section describes LVM features that can improve the availability and redundancy of your data.
Strict and Nonstrict Allocation Strict allocation requires logical extents to be mirrored to physical extents on different physical volumes. Nonstrict allocation allows logical extents to be mirrored to physical extents that may be on the same physical volume. The -s y and -s n options to the lvcreate or lvchange commands set strict or nonstrict allocation.
disk that is already recorded, the performance is not impaired. Upon system reboot after crash, the operating system uses the MWC to resynchronize inconsistent data blocks quickly. The frequency of extra disk writes is small for sequentially accessed logical volumes (such as database logs), but increases when access is more random.
Parallel Synchronization By default, the lvsync command synchronizes logical volumes serially. In other words, it acts on the logical volumes specified on the command line one at a time, waiting until a volume finishes synchronization before starting the next. Starting with the September 2007 release of HP-UX 11i Version 3, you can use the -T option to synchronize logical volumes in parallel.
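For example, to synchronize three mirrored logical volumes in parallel (the volume names are illustrative):
# lvsync -T /dev/vg00/lvol1 /dev/vg00/lvol2 /dev/vg00/lvol3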
The pvdisplay and vgdisplay commands provide information on whether a given physical volume is an empty standby spare or currently holding data as a spare in use, along with information on any physical volume that is currently unavailable but had data spared. 2.3.3 Increasing Hardware Path Redundancy Through Multipathing Your hardware might provide the capability for dual cabling (dual controllers) to the same physical volume. If so, LVM can be configured with multiple paths to the same physical volume.
2.4 Planning for Performance This section describes strategies to obtain the best possible performance using LVM. It addresses the following topics: • “General Performance Factors” (page 32) • “Internal Performance Factors” (page 32) • “Increasing Performance Through Disk Striping” (page 34) • “Increasing Performance Through I/O Channel Separation” (page 36) 2.4.1 General Performance Factors The following factors affect overall system performance, but not necessarily the performance of LVM. 2.4.1.
in the MWC from one of the good copies to all the other copies. This process ensures that the mirrors are consistent but does not guarantee the quality of the data. On each write request to a mirrored logical volume that uses MWC, LVM potentially introduces one extra serial disk write to maintain the MWC. Whether this condition occurs depends on the degree to which accesses are random. The more random the accesses, the higher the probability of missing the MWC.
to the snapshots or the original logical volume, and also consider the tradeoff between performance requirements and the metadata space that must be provisioned on disk. The larger the unshare unit, the higher the latency when data must be unshared, but the less metadata space is needed on disk to track the sharing relationship between snapshots and the original logical volume.
Figure 4 Interleaving Disks Among Buses • Increasing the number of disks might not improve performance because the maximum efficiency that can be achieved by combining disks in a striped logical volume is limited by the maximum throughput of the file system itself and by the buses to which the disks are attached. • Disk striping is highly beneficial for applications with few users and large, sequential transfers.
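For example, to create a 240 MB logical volume striped across three disks with a 64 KB stripe size (the names and sizes are illustrative):
# lvcreate -i 3 -I 64 -L 240 -n lvstripe /dev/vg01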
2.4.3.2 Interactions Between Mirroring and Striping Mirroring a striped logical volume improves the read I/O performance in the same way that it does for a nonstriped logical volume. Simultaneous read I/O requests targeting a single logical extent are served by two or three different physical volumes instead of one. A striped and mirrored logical volume follows a strict allocation policy; that is, the data is always mirrored on different physical volumes.
2.6 Planning for Recovery Flexibility in configuration, one of the major benefits of LVM, can also be a source of problems in recovery. The following are guidelines to help create a configuration that minimizes recovery time: • Keep the number of disks in the root volume group to a minimum; HP recommends using three disks, even if the root volume group is mirrored.
2.6.1 Preparing for LVM System Recovery To ensure that the system data and configuration are recoverable in the event of a system failure, follow these steps:
1. Load any patches for LVM.
2. Use Ignite-UX to create a recovery image of your root volume group. Although Ignite-UX is not intended to be used to back up all system data, you can use it with other data recovery applications to create a method of total system recovery.
3. Perform regular backups of the other important data on your system.
NOTE: For Version 2.2 and higher volume groups that have snapshots on which data unsharing is occurring, the LVM configuration backup file might not always be in sync with the LVM metadata on disk. LVM ensures that the configuration for a Version 2.2 or higher volume group with snapshots configured is the latest by automatically backing it up during deactivation, unless backup has been disabled by the -A n option.
2.6.1.1 Example Script for LVM Configuration Recording The following example script captures the current LVM and I/O configurations. If they differ from the previously captured configuration, the script prints the updated configuration files and notifies the system administrator.
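A minimal sketch of such a script follows. The saved-copy location (/var/adm/lvm) and the mail recipient (root) are illustrative assumptions; adapt the commands and paths to your site:
#!/usr/bin/sh
# Capture the current LVM and I/O configurations and compare them
# with the previously saved copies.
DIR=/var/adm/lvm                     # example location for saved copies
mkdir -p $DIR
vgdisplay -v > $DIR/vgdisplay.new 2>&1   # current LVM configuration
ioscan -f -n  > $DIR/ioscan.new    2>&1   # current I/O configuration
for f in vgdisplay ioscan
do
    if ! cmp -s $DIR/$f.new $DIR/$f.save 2>/dev/null
    then
        cat $DIR/$f.new                           # print the updated file
        mailx -s "LVM/IO configuration changed" root < $DIR/$f.new
        mv $DIR/$f.new $DIR/$f.save               # keep as the new baseline
    else
        rm $DIR/$f.new                            # unchanged; discard
    fi
done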
3 Administering LVM This section contains information on the day-to-day operation of LVM.
For help using HP SMH, see the HP SMH online help. • LVM command-line interface: LVM has a number of low-level user commands to perform LVM tasks, described in “Physical Volume Management Commands” (page 42), “Volume Group Management Commands” (page 42), and “Logical Volume Management Commands” (page 43). The following tables provide an overview of which commands perform a given task. For more information, see the LVM individual manpages.
Table 5 Volume Group Management Commands (continued)
Task                                                            Command
Handling online shared LVM reconfiguration, and pre-allocation
of extents for space-efficient snapshots                        lvmpud
Migrating a volume group to a different volume group version   vgversion
Migrating a volume group to new disks                           vgmove
1 To convert the cDSFs of a volume group in a particular node back to their corresponding persistent DSFs, use the vgscan -f command. For example: # vgscan -f vgtest *** LVMTAB has been updated successfully.
3.2 Displaying LVM Information To display information about volume groups, logical volumes, or physical volumes, use one of three commands. Each command supports the -v option to display detailed output and the -F option to help with scripting. NOTE: For volume group Version 2.2 or higher, when snapshots are involved, additional fields are displayed by these commands. See the individual command manpages for full description of the fields displayed.
3.2.2 Information on Physical Volumes Use the pvdisplay command to show information about physical volumes.
0000 /dev/disk/disk42 0000 current
0001 /dev/disk/disk42 0001 current
0002 /dev/disk/disk42 0002 current
3.
# ioscan -f -n -N -C disk For more information, see ioscan(1M). 6. Initialize the disk as a physical volume using the pvcreate command. For example: # pvcreate /dev/rdisk/disk3 Use the character device file for the disk. If you are initializing a disk for use as a boot device, add the -B option to pvcreate to reserve an area on the disk for a LIF volume and boot utilities. If you are creating a boot disk on an HP Integrity server, make sure the device file specifies the HP-UX partition number (2).
3.3.2.2 Creating a Version 1.0 Volume Group To create a Version 1.0 volume group, use the vgcreate command, specifying each physical volume to be included. For example: # vgcreate /dev/vgname /dev/disk/disk3 Use the block device file to include each disk in your volume group. You can assign all the physical volumes to the volume group with one command, or create the volume group with a single physical volume. No physical volume can already be part of an existing volume group.
Conversely, to display the minimum physical extent size for a given volume group size, use the -E option to vgcreate with -S. For example: # vgcreate -V 2.0 -E -S 2t Max_VG_size=2t:extent_size=1m For volume group Version 2.2 or higher, a new vgcreate -U option is introduced to configure the size of the unshare unit for snapshots in the volume group. Once the unshare unit size is specified, it cannot be changed. The default unshare unit size is 1024 KB if the -U option is not used.
VG Version      1.0
VG Max Size     81856m
VG Max Extents  20464
The VG Version field indicates that the target volume group is Version 1.0. 3.3.3.2 Command Syntax The vgversion syntax is: vgversion [-r] [-v] -V [-U unshare_unit] vg_version_new vg_name where -r is Review mode. This allows you to review the operation before performing the actual volume group version migration. -v is Verbose mode.
(For a list of all conditions, see the latest vgversion(1M) manpage.) When the migration would fail, the vgversion -r output messages should indicate the problem and possible solutions. For example, if you are migrating a volume group from Version 1.0 to 2.1, Version 2.1 requires more metadata. Thus, it is possible that there is not enough space in the LUN for the increase in metadata. In this example, vgversion -r should display the following: # vgversion -V 2.
is 1024 KB. 8 free user extents from the end of the Physical Volume /dev/disk/disk12 will be utilized to accommodate the Volume Group version 2.1 metadata. Warning: Volume Group version 2.1 does not support bad block relocation. The bad block relocation policy of all logical volumes will be set to NONE. Volume Group version can be successfully changed to 2.1 Review complete. Volume group not modified 3. After messages from the review indicate a successful migration, you can begin the actual migration: a.
3.3.3.5 Migration Recovery When running vgversion, recovery configuration files and a recovery script are created so that you can restore the target volume group to its original version in case there are problems during the actual migration. CAUTION: The recovery script should be run only in cases where the migration unexpectedly fails, such as an interruption during migration execution. • The recovery script should not be used to “undo” a successful migration.
NOTE: Once the recovery is complete using the restore script, an immediate vgversion operation in review mode will fail. You need to deactivate the volume group and activate it again before running vgversion in review mode. This reset of the volume group is not needed when vgversion is run without review mode. 3.3.4 Adding a Disk to a Volume Group Often, as new disks are added to a system, they must be added to an existing volume group rather than creating a whole new volume group.
• Move the physical extents onto another physical volume in the volume group. See “Moving Data to a Different Physical Volume” (page 76).
• Remove the logical volumes from the disk, as described in “Removing a Logical Volume” (page 60). The logical volumes with physical extents on the disk are shown at the end of the pvdisplay listing.
2. After the disk no longer holds any physical extents, use the vgreduce command to remove it from the volume group.
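For example, assuming the emptied disk is /dev/disk/disk5 in volume group vg01:
# pvdisplay -v /dev/disk/disk5     # verify that no extents remain allocated
# vgreduce /dev/vg01 /dev/disk/disk5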
Strict, Nonstrict, or PVG-strict Extent Allocation
-s y  Strict allocation (default)
-s n  Nonstrict allocation
-s g  PVG-strict allocation
Contiguous or Noncontiguous Extent Allocation
-C y  Contiguous allocation
-C n  Noncontiguous allocation (default)
Mirror Scheduling Policy
-d p  Parallel scheduling (default)
-d s  Sequential scheduling
Mirror Consistency Policy
-M y  MWC enabled (default; optimal mirror resynchronization during crash recovery)
-M n -c y  MCR enabled (full mirror resynchronization)
VG Version      1.0
VG Max Size     1082g
VG Max Extents  69248
The Free PE entry indicates the number of 4 MB extents available, in this case, 79 (316 MB). 3. Extend the logical volume. For example: # lvextend -L 332 /dev/vg00/lvol7 This increases the size of this volume to 332 MB. NOTE: On the HP-UX 11i v3 March 2010 Update, the size of a logical volume cannot be extended if it has snapshots associated with it.
1. To find out what applications are using the logical volume, use the fuser command. For example: # fuser -cu /dev/vg01/lvol5 2. If the logical volume is in use, ensure the underlying applications can handle the size reduction. You might have to stop the applications. Decide on the new size of the logical volume. For example, if the logical volume is mounted to a file system, the new size must be greater than the space the data in the file system currently occupies.
3. Use the lvextend command with the -m option to add the number of additional copies you want. For example: # lvextend -m 1 /dev/vg00/lvol1 This adds a single mirror copy of the given logical volume. To force the mirror copy onto a specific physical volume, add it at the end of the command line.
3. Update all references to the old name in any other files on the system. These include /etc/fstab for mounted file systems or swap devices and existing mapfiles from a vgexport command. 3.3.12 Removing a Logical Volume CAUTION: Removing a logical volume makes its contents unavailable and likely to be overwritten. In particular, any file system contained in the logical volume is destroyed. To remove a logical volume, follow these steps: 1.
When vgexport completes, all information about the volume group has been removed from the system. The disks can now be moved to a different system, and the volume group can be imported there. 3.3.14 Importing a Volume Group To import a volume group, follow these steps:
1. Connect the disks to the system.
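A sketch of the remaining steps follows, assuming the map file /tmp/vgnn.map was created with vgexport -s -m on the original system and copied over (names are illustrative):
# vgimport -v -s -m /tmp/vgnn.map /dev/vgnn
# vgchange -a y /dev/vgnn
# vgcfgbackup /dev/vgnn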
• Detect and handle physical volume size changes • Modify the maximum volume group size. • Handle physical volume LUN expansion. See “Modifying Physical Volume Characteristics” (page 80) for more details • Prepare a physical volume for a LUN contraction. See “Modifying Physical Volume Characteristics” (page 80) for more details. Beginning with the March 2009 Update, the vgmodify command can be run online (volume group activated and applications running) for Version 1.0 and 2.x volume groups.
The current and new Volume Group parameters differ. An update to the Volume Group IS required.
New Volume Group settings:
Max LV
Max PV
Max PE per PV
PE Size (Mbytes)
VGRA Size (Kbytes)
Review complete.
/etc/lvmconf/vg32.conf # pvmove /dev/disk/disk6:0 /dev/disk/disk6 Transferring logical extents of logical volume "/dev/vg32/lvol1"... Physical volume "/dev/disk/disk6" has been successfully moved. Volume Group configuration for /dev/vg32 has been saved in /etc/lvmconf/vg32.conf 4.
# vgdisplay vg32
--- Volume groups ---
VG Name                  /dev/vg32
VG Write Access          read/write
VG Status                available
Max LV                   255
Cur LV                   0
Open LV                  0
Max PV                   255
Cur PV                   2
Act PV                   2
Max PE per PV            15868
VGDA                     4
PE Size (Mbytes)         32
Total PE                 1084
Alloc PE                 0
Free PE                  1084
Total PVG                0
Total Spare PVs          0
Total Spare PVs in use   0
VG Version               1.0
3.3.15.3 vgmodify for a Version 2.x Volume Group If the maximum volume group size (chosen when the Version 2.
2. If review mode indicates that the maximum VG size can be increased, perform the actual reprovisioning reconfiguration. This operation reconfigures every PV in the VG to the new (larger) maximum VG size. This operation also automatically adds new extents to PVs where the new extents were previously not added because the volume group was at its maximum number of extents. # vgmodify -a -E -S 64t vg1 3. 4. 5.
# vgdisplay -v vg1
--- Volume groups ---
VG Name          /dev/vg1
...
VG Version       2.1
VG Max Size      500 GB
VG Max Extents   64000
...
2. Run diskinfo on the physical volumes to display their size. # diskinfo /dev/rdisk/disk46 # diskinfo /dev/rdisk/disk47 3. Run online vgmodify in review mode to verify that all physical volumes can be reconfigured to the new (larger) maximum VG size of 8 TB.
/dev/disk/disk47 (an increase from 25602 to 25604) before the maximum volume group size was increased. 5. Note that the number of extents for /dev/disk/disk47 increased from 25604 to 38396 after the maximum VG size was increased. The vgdisplay -v command below confirms that the maximum VG size was increased from 500 GB to 8 TB and shows the increased number of extents for /dev/disk/disk47.
NOTE: Individual physical volumes or logical volumes cannot be quiesced using this feature. To temporarily quiesce a physical volume to disable or replace it, see “Disabling a Path to a Physical Volume” (page 90). To quiesce a logical volume, quiesce or deactivate the volume group. To provide a stable image of a logical volume without deactivating the volume group, mirror the logical volume, then split off one of the mirrors, as described in “Backing Up a Mirrored Logical Volume” (page 71).
8. Remove saved configuration information based on the old volume group name as follows: # rm /etc/lvmconf/vg01.conf NOTE: If your volume group is Version 2.x and does not have any bootable physical volumes, and if you have configured a new path for the configuration file using the LVMP_CONF_PATH_NON_BOOT variable in the /etc/lvmrc file, you need to remove the configuration file from the new path. 9. Update all references to the old name in any other files on the system.
10. The physical volumes are currently defined in both volume groups. Remove the missing physical volumes from both volume groups as follows: # vgreduce -f vgold # vgreduce -f vgnew 11. Enable quorum checks for the old volume group as follows: # vgchange -a y -q y /dev/vgold On completion, the original volume group contains three logical volumes (lvol1, lvol2, and lvol3) with physical volumes /dev/disk/disk0 and /dev/disk/disk1.
To back up a mirrored logical volume containing a file system, using lvsplit and lvmerge, follow these steps: 1. Split the logical volume /dev/vg00/lvol1 into two separate logical volumes as follows: # lvsplit /dev/vg00/lvol1 This creates the new logical volume /dev/vg00/lvol1b. The original logical volume /dev/ vg00/lvol1 remains online. 2. Perform a file system consistency check on the logical volume to be backed up as follows: # fsck /dev/vg00/lvol1b 3.
When snapshots are involved, the volume group configuration changes with respect to the snapshot data unsharing while writes are occurring on the snapshot tree. So, it is recommended that the automatic backup of the volume group configuration not be overridden by the —A n option during volume group deactivation. You can display LVM configuration information previously backed up with vgcfgbackup or restore it using vgcfgrestore.
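For example (the names are illustrative; the volume group must be deactivated, or the physical volume detached, before restoring to it):
# vgcfgbackup /dev/vg01
# vgcfgrestore -l -n /dev/vg01
# vgcfgrestore -n /dev/vg01 /dev/rdisk/disk5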
• “Modifying Physical Volume Characteristics” (page 80) • “Disabling a Path to a Physical Volume” (page 90) • “Creating an Alternate Boot Disk” (page 91) • “Mirroring the Boot Disk” (page 93) • “Mirroring the Boot Disk on HP 9000 Servers” (page 93) • “Mirroring the Boot Disk on HP Integrity Servers” (page 95) You might need to do the following tasks: • Move the disks in a volume group to different hardware locations on a system. • Move entire volume groups of disks from one system to another.
# ls -l /dev/vgnn/group
crw-r--r-- 1 root sys 64 0x010000 Mar 28 2004 /dev/vgnn/group
For this example, the volume group major number is 64, and the minor number is 0x010000.
4. Remove the volume group device files and its entry from the LVM configuration files by entering the following command: # vgexport -v -s -m /tmp/vgnn.map /dev/vgnn
5. Physically move your disks to their desired new locations.
6. To view the new locations, enter the following command: # vgscan -v
7.
# vgexport -v -s -m /tmp/vg_planning.map /dev/vg_planning The vgexport command removes the volume group from the system and creates the /tmp/vg_planning.map file.
5. Connect the disks to the new system.
6. Copy the /tmp/vg_planning.map file to the new system.
7. If you are using an HP-UX release before March 2008, create the volume group device file using the procedure in “Creating the Volume Group Device File” (page 47).
# mkboot /dev/rdisk/disk4
# mkboot -a "hpux -a (;0)/stand/vmunix" /dev/rdisk/disk4
3. To extend your root volume group with the destination disk, enter: # vgextend /dev/vg00 /dev/disk/disk4
4. To move all physical extents from the source disk to the destination disk, enter: # pvmove /dev/disk/disk1 /dev/disk/disk4
5. To reduce the source disk from the volume group, enter: # vgreduce /dev/vg00 /dev/disk/disk1
6.
NOTE: The pvmove command is not an atomic operation; it moves data extent by extent. The following might happen upon abnormal pvmove termination by a system crash or kill -9: For Version 1.0 volume groups prior to the September 2009 Update, the volume group can be left in an inconsistent configuration showing an additional pseudomirror copy for the extents being moved.
Example Consider a volume group having the configuration below: • Three physical volumes each of size 1245 extents: /dev/disk/disk10, /dev/disk/disk11, and /dev/disk/disk12. • Three logical volumes all residing on same disk, /dev/disk/disk10: /dev/vg_01/lvol1 (Size = 200 extents), /dev/vg_01/lvol2 (Size = 300 extents), /dev/vg_01/lvol3 (Size = 700 extents).
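A sketch of one way to move these logical volumes off /dev/disk/disk10 with the pvmove command; the destination disks are chosen so that each has enough free extents:
# pvmove -n /dev/vg_01/lvol1 /dev/disk/disk10 /dev/disk/disk11
# pvmove -n /dev/vg_01/lvol2 /dev/disk/disk10 /dev/disk/disk11
# pvmove -n /dev/vg_01/lvol3 /dev/disk/disk10 /dev/disk/disk12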
1. Use the pvcreate command to initialize the disk as an LVM disk. NOTE: Do not use the -B option because spare physical volumes cannot contain boot information. # pvcreate /dev/rdisk/disk3 2. Ensure the volume group has been activated, as follows: # vgchange -a y /dev/vg01 3. Use the vgextend command with -z y to designate one or more physical volumes as spare physical volumes within the volume group.
3.4.7.1 Handling Size Increases
From the LUN Side: Disk arrays typically allow a LUN to be resized. If the volume group is activated during the size increase, this is known as Dynamic LUN Expansion (DLE). If you increase the size of a LUN, follow these steps to incorporate the additional space into the volume group:
1. Increase the LUN size using the instructions for the array.
2. Verify that the LUN is expanded by running the diskinfo command.
From the LVM Side (for Version 1.
3. Run vgmodify in review mode. If no PVs are specified on the command line, vgmodify checks all PVs in the volume group. Optionally, you can specify the PVs you want to check on the command line after the VG name.
4. Perform the actual DLE reconfiguration (run vgmodify without the -r review option). If no PVs are specified, vgmodify attempts reconfiguration on all PVs in the volume group (if it detects them as having been expanded). Optionally, you can list the specific PVs you want to reconfigure in the command line after the VG name.
28    3836   122753
30    3580   114561
32    3324   106369
35    3068    98177
38    2812    89985
...
255    252     8065
The table shows that without renumbering physical extents, a max_pv of 35 or lower permits a max_pe sufficient to accommodate the increased physical volume size. # vgmodify -v -t -n vg32 Volume Group configuration for /dev/vg32 has been saved in /etc/lvmconf/vg32.
6. Commit the new values as follows: # vgmodify -p 10 -e 10748 vg32
Current Volume Group settings:
Max LV              255
Max PV              16
Max PE per PV       1016
PE Size (Mbytes)    32
VGRA Size (Kbytes)  176
The current and new Volume Group parameters differ.
1. Run online vgmodify in review mode to verify that the physical volumes require reconfiguration for the DLE and to preview the number of new extents to be added: # vgmodify -r -a -E vg1 Physical volume "/dev/disk/disk46" requires reconfiguration for expansion. Current number of extents: 12790 Number of extents after reconfiguration: 25590 Physical volume "/dev/disk/disk46" was not changed. Physical volume "/dev/disk/disk47" requires reconfiguration for expansion.
3. Verify that the physical volumes were reconfigured and that there are new extents available with the vgdisplay -v command:
# vgdisplay -v vg1
--- Volume groups ---
VG Name           /dev/vg1
...
PE Size (Mbytes)  8
Total PE          51180
Alloc PE          25580
Free PE           25600
…
VG Version        2.1
VG Max Size
VG Max Extents
…
--- Logical volumes ---
LV Name
LV Status
LV Size (Mbytes)
Current LE
Allocated PE
Used PV
--- Physical volumes ---
PV Name
PV Status
Total PE
Free PE
…
PV Name
PV Status
Total PE
Free PE
3.4.7.2 Handling Size Decreases CAUTION: A similar procedure can also be used when the size of a physical volume is decreased. However, there are limitations:
• Sequence: The sequence must be reversed to avoid data corruption.
For an increase in size, the sequence is:
1. Increase the LUN size from the array side.
2. Then, increase the volume group size from the LVM side.
For a decrease in size, the sequence is:
1. Decrease the volume group size from the LVM side.
2. Then, decrease the LUN size from the array side.
information. If a physical volume was accidentally initialized as bootable, you can convert the disk to a nonbootable disk and reclaim LVM metadata space. CAUTION: The boot volume group requires at least one bootable physical volume. Do not convert all of the physical volumes in the boot volume group to nonbootable, or your system will not boot. To change a disk type from bootable to nonbootable, follow these steps: 1. 2. 3. 4. 5. 6.
1     65535   2097120
2     45820   1466240
...
255     252      8064
If you change the disk type, the VGRA space available increases from 768 KB to 2784 KB (if physical extents are not renumbered) or 32768 KB (if physical extents are renumbered). Changing the disk type also permits a larger range of max_pv and max_pe. For example, if max_pv is 255, the bootable disk can only accommodate a disk size of 8064 MB, but after conversion to nonbootable, it can accommodate a disk size of 40834 MB. 3.
"/etc/lvmconf/vg01.conf.old" Volume group "vg01" has been successfully changed. 6. Activate the volume group and verify the changes as follows: # vgchange -a y vg01 Activated volume group Volume group "vg01" has been successfully changed. # vgcfgbackup vg01 Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf # vgcfgrestore -l -v -n vg01 Volume Group Configuration information in "/etc/lvmconf/vg01.
Detaching a link does not disable sparing. That is, if all links to a physical volume are detached and a suitable spare physical volume is available in the volume group, LVM uses it to reconstruct the detached disk. For more information on sparing, see “Increasing Disk Redundancy Through Disk Sparing” (page 30). You can view the LVM status of all links to a physical volume using vgdisplay with the -v option.
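For example, to detach a path and later reattach it (the device file is illustrative):
# pvchange -a n /dev/disk/disk14    # disable LVM access through this path
# pvchange -a y /dev/disk/disk14    # re-enable the path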
1. Create a bootable physical volume. a. On an HP Integrity server, partition the disk using the idisk command and a partition description file, then run insf as described in “Mirroring the Boot Disk on HP Integrity Servers” (page 95). b. Run pvcreate with the -B option. On an HP Integrity server, use the device file denoting the HP-UX partition: # pvcreate -B /dev/rdisk/disk6_p2 On an HP 9000 server, use the device file for the entire disk: # pvcreate -B /dev/rdisk/disk6 2.
Physical Volumes belonging in Root Volume Group: /dev/disk/disk6 -- Boot Disk Boot: bootlv on: /dev/disk/disk6 Root: rootlv on: /dev/disk/disk6 Swap: swaplv on: /dev/disk/disk6 Dump: swaplv on: /dev/disk/disk6, 0 15. Once the boot and root logical volumes are created, create file systems for them. For example: # mkfs -F hfs /dev/vgroot/rbootlv # mkfs -F vxfs /dev/vgroot/rrootlv NOTE: On HP Integrity servers, the boot file system can be VxFS.
1. Make sure the device files are in place. For example: # insf -e -H 0/1/1/0.0x1.0x0 The following device files now exist for this disk: /dev/[r]disk/disk4 2. Create a bootable physical volume as follows: # pvcreate -B /dev/rdisk/disk4 3. Add the physical volume to your existing root volume group as follows: # vgextend /dev/vg00 /dev/disk/disk4 4. Place boot utilities in the boot area as follows: # mkboot /dev/rdisk/disk4 5.
TIP: To shorten the time required to synchronize the mirror copies, use the lvextend and lvsync command options introduced in the September 2007 release of HP-UX 11i Version 3. These options enable you to resynchronize logical volumes in parallel rather than serially. For example, assuming the root volume group contains lvol1 through lvol8 and the mirror disk is /dev/disk/disk4:
# lvextend -s -m 1 /dev/vg00/lvol1 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol2 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol3 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol4 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol5 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol6 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol7 /dev/disk/disk4
# lvextend -s -m 1 /dev/vg00/lvol8 /dev/disk/disk4
# lvsync -T /dev/vg00/lvol*
8.
For this example, the disk to be added is at hardware path 0/1/1/0.0x1.0x0, with device special files named /dev/disk/disk2 and /dev/rdisk/disk2. Follow these steps: 1. Partition the disk using the idisk command and a partition description file. a. Create a partition description file.
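For example, a typical partition description file defines three partitions; the file name and sizes here are illustrative, so adjust them to your disk:
# cat /tmp/idf
3
EFI 500MB
HPUX 100%
HPSP 400MB
b. Write the partition table to the disk and create the partition device files:
# idisk -wf /tmp/idf /dev/rdisk/disk2
# insf -e -C disk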
/dev/vg00/lvol7 00000 00271 current
/dev/vg00/lvol8 00000 00408 current
8. Mirror each logical volume in vg00 (the root volume group) onto the specified physical volume. For example: # lvextend -m 1 /dev/vg00/lvol1 /dev/disk/disk2_p2 The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait .... # lvextend -m 1 /dev/vg00/lvol2 /dev/disk/disk2_p2 The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait ....
12. Add a line to /stand/bootconf for the new boot disk using vi or another text editor as follows: # vi /stand/bootconf l /dev/disk/disk2_p2 Where the literal “l” (lower case L) represents LVM. 3.5 Migrating a Volume Group to New Disks: vgmove Beginning with September 2009 Update, LVM provides a new vgmove command to migrate data in a volume group from an old set of disks to a new set of disks.
1. Instead of manually creating a diskmap file, mapping the old source to new destination disks, the -i option is used to generate a diskmap file for the migration. The user provides a list of destination disks, called newdiskfile in this example. # cat newdiskfile /dev/disk/disk10 /dev/disk/disk11 # vgmove -i newdiskfile -f diskmap.txt /dev/vg00 2. The resulting diskmap.txt file contains the mapping of old source disks to new destination disks: # cat diskmap.
3.7 Administering File System Logical Volumes This section describes special actions you must take when working with file systems inside logical volumes. It addresses the following topics: • “Creating a File System” (page 100) • “Extending a File System” (page 101) • “Reducing the Size of a File System” (page 102) • “Backing Up a VxFS Snapshot File System” (page 104) TIP: When dealing with file systems, you can use HP SMH or a sequence of HP-UX commands.
3.7.2 Extending a File System Extending a file system inside a logical volume is a two-step task: extending the logical volume, then extending the file system. The first step is described in “Extending a Logical Volume” (page 56). The second step, extending the file system itself, depends on the following factors: • What type of file system is involved? Is it HFS or VxFS? HFS requires the file system to be unmounted to be extended. Check the type of file system.
2. Extend the logical volume. For example: # /sbin/lvextend -L 332 /dev/vg01/lvol2 This increases the size of this volume to 332 MB. 3. Extend the file system size to the logical volume size. If the file system is unmounted, use the extendfs command as follows: # /sbin/extendfs /dev/vg01/rlvol2 If you did not have to unmount the file system, use the fsadm command instead. The new size is specified in terms of the block size of the file system.
Reducing a File System Created with OnlineJFS Using the fsadm command shrinks the file system, provided the blocks it attempts to deallocate are not currently in use; otherwise, it fails. If sufficient free space is currently unavailable, file system defragmentation of both directories and extents might consolidate free space toward the end of the file system, allowing the contraction process to succeed when retried. For example, suppose your VxFS file system is currently 6 GB.
3.7.4 Backing Up a VxFS Snapshot File System NOTE: Creating and backing up a VxFS snapshot file system requires that you have the optional HP OnlineJFS product installed on your system. For more information, see HP-UX System Administrator's Guide: Configuration Management. VxFS enables you to perform backups without taking the file system offline by making a snapshot of the file system, a read-only image of the file system at a moment in time. The primary file system remains online and continues to change.
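For example, a sketch of creating and mounting a snapshot with OnlineJFS; the names, size, and mount point are illustrative, and the snapshot logical volume must be large enough to hold the blocks that change during the backup:
# lvcreate -L 512 -n snaplv /dev/vg01
# mkdir -p /snapmnt
# mount -F vxfs -o snapof=/dev/vg01/lvol4 /dev/vg01/snaplv /snapmnt
You can then back up the stable image mounted at /snapmnt with your usual backup utility and unmount the snapshot when the backup completes.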
3.8.1 Administering Primary Swap Logical Volumes NOTE: Version 2.0 and 2.1 volume groups do not support configuring a primary swap logical volume through the lvlnboot(1M) command. However, they do support configuring a swap logical volume through the swapon(1M) command. See the section “Administering secondary Swap Logical Volumes” (page 106). When you enable a swap area within a logical volume, HP-UX determines how large the area is, and it uses no more space than that.
3.8.1.3 Reducing the Size of a Swap Device If you are using a logical volume for swap, you must reduce the swap size before reducing the size of the logical volume. You can reduce the size of the logical volume using lvreduce or HP SMH. NOTE: Changes to the primary swap configuration, such as reconfiguring another logical volume as swap or changing its size, take effect in the swap subsystem only after a reboot. The swapinfo(1M) command displays the current swap devices and their sizes. 3.8.
For more information, see lvcreate(1M). After creating a logical volume to be used as a dump device, use the lvlnboot command with the -d option to update the dump information used by LVM. If you created a logical volume /dev/vg00/lvol2 for use as a dump area, update the boot information by entering the following: # lvlnboot -d /dev/vg00/lvol2 3.9.1.
When you configure a non-root logical volume as a swap or dump device, ensure that the AUTO_VG_ACTIVATE setting in /etc/lvmrc is turned on:
# grep "AUTO_VG_ACTIVATE=" /etc/lvmrc
AUTO_VG_ACTIVATE=1
Without this setting, the logical volumes from a non-root VG will not be configured as a swap/dump device after the reboot because the corresponding volume group stays deactivated. NOTE: The root volume group is activated on every reboot regardless of the AUTO_VG_ACTIVATE setting.
3.11.1 Types of Snapshots Snapshots can be of two types: fully-allocated and space-efficient. • When a fully allocated snapshot is created, the number of extents required for the snapshot is allocated immediately, just like for a normal logical volume. However, the data contained in the original logical volume is not copied over to these extents. The copying of data occurs through the data unsharing process.
# lvremove -F /dev/vg01/lvol1_S5 Refer to the lvremove(1M) manpage for more details and full syntax for deleting logical volumes on the snapshot tree. 3.11.4 Displaying Snapshot Information The vgdisplay, lvdisplay, and pvdisplay commands will now display additional information when snapshots are involved. A summary of the additional fields displayed by these commands is listed here. See the respective manpages and the LVM Snapshot Logical Volumes white paper for more detailed information.
NOTE: The value of “Pre-allocated LE” should be the sum of “Current pre-allocated LE” and “Unshared LE.” However, in some instances, this might not appear to be the case while an operation that changes the logical volume size is running or while unsharing of extents is in progress. The correct information is displayed once the operation is complete. For more information about the LVM snapshot feature and limitations when snapshots are involved, see the lvm(7) manpage and the LVM Snapshot Logical Volumes white paper.
3.13.1 Administration of boot disks of size greater than 2 TB There is no change in LVM command interfaces when administering boot disks of size greater than 2 TB. Administration of boot disks greater than 2 TB is done in the same way as for smaller disks. However, there are a few compatibility constraints that are discussed in the next section.
• SAS HBAs: 51378-B21(P711m), AM311A(P411), AM312A(P812), Internal HBA P410i • Fiber Channel HBAs: 403619-B21, 403621-B21, 451871-B21, 456972-B21, AD193A, AD194A, AD221A, AD222A, AD393A, AH400A, AH401A, AH402A, AH403A, AT094A NOTE: For an updated list of cards that support this feature, see the support matrixes at http://www.hp.com/go/hpux-iocards-docs 3.14 Hardware Issues This section describes hardware-specific issues dealing with LVM. 3.14.
4 Troubleshooting LVM This chapter provides conceptual troubleshooting information as well as detailed procedures to help you plan for LVM problems, troubleshoot LVM, and recover from LVM failures.
Max LV Size (Tbytes)       256
Max PV Size (Tbytes)       16
Max VGs                    2048
Max LVs                    2047
Max PVs                    2048
Max Mirrors                5
Max Stripes                511
Max Stripe Size (Kbytes)   262144
Max LXs per LV             33554432
Max PXs per PV             16777216
Max Extent Size (Mbytes)   256
If your release does not support Version 2.1 volume groups, it displays the following: # lvmadm -t -V 2.1 Error: 2.1 is an invalid volume group version. • To display the contents of the /etc/lvmtab and /etc/lvmtab_p files in a human-readable fashion.
A maintenance mode boot differs from a standard boot as follows: • The system is booted in single-user mode. • No volume groups are activated. • Primary swap and dump are not available. • Only the root file system and boot file system are available. • If the root file system is mirrored, only one copy is used. Changes to the root file system are not propagated to the mirror copies, but those mirror copies are marked stale and will be synchronized when the system boots normally.
4.2.1.1 Temporarily Unavailable Device By default, LVM retries I/O requests with recoverable errors until they succeed or the system is rebooted. Therefore, if an application or file system stalls, your troubleshooting must include checking the console log for problems with your disk drives and taking action to restore the failing devices to service. 4.2.1.
4.2.2.1 Media Errors If an I/O request fails because of a media error, LVM typically prints a message to the console log file (/var/adm/syslog/syslog.log) when the error occurs. In the event of a media error, you must replace the disk (see “Disk Troubleshooting and Recovery Procedures” (page 124)). If your disk hardware supports automatic bad block relocation (usually known as hardware sparing), enable it, because it minimizes media errors seen by LVM.
or is not configured into the kernel. vgchange: Couldn't activate volume group "/dev/vg01": Either no physical volumes are attached or no valid VGDAs were found on the physical volumes. If a nonroot volume group does not activate because of a failure to meet quorum, follow these steps:
1. Check the power and data connections (including Fibre Channel zoning and security) of all the disks that are part of the volume group that you cannot activate. Return the disks to service if you can, and try the activation again.
2. If you cannot return the missing disks to service, you can override the quorum check, as shown below.
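For example (the volume group name is illustrative; use this override with care, because LVM activates the volume group with whatever disks are available):
# vgchange -a y -q n /dev/vg01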
# vgchange -a y /dev/vgtest vgchange: Error: The "lvmp" driver is not loaded. Here is another possible error message: # vgchange -a y /dev/vgtest vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/disk/disk1": Illegal byte sequence vgchange: Couldn't activate volume group "/dev/vgtest": Quorum not present, or some physical volume(s) are missing.
VG Version                 2.0
Max VG Size (Tbytes)       2048
Max LV Size (Tbytes)       256
Max PV Size (Tbytes)       16
Max VGs                    512
Max LVs                    511
Max PVs                    511
Max Mirrors                6
Max Stripes                511
Max Stripe Size (Kbytes)   262144
Max LXs per LV             33554432
Max PXs per PV             16777216
Max Extent Size (Mbytes)   256
TIP: If your system has no Version 2.x volume groups, you can free up system resources associated with lvmp by unloading it from the kernel.
LVM: WARNING: BDRA lists the number of PV(s) for the root VG as nn, but rootvgscan found only nn. Proceeding with root VG activation. 4.5 LVM Boot Failures There are several reasons why an LVM configuration cannot boot. In addition to the problems associated with boots from non-LVM disks, the following problems can cause an LVM-based system not to boot. 4.5.1 Insufficient Quorum In this scenario, not enough disks are present in the root volume group to meet the quorum requirements.
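If you are certain the remaining disks hold a consistent copy of the root volume group, you can boot with the quorum check overridden. A sketch for an HP 9000 server follows; the boot device specification varies by system:
ISL> hpux -lq (;0)/stand/vmunix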
1. Reboot your system in single-user mode.
2. If you already have a good current backup of the data in the now corrupt file system, skip this step. If you do not have backup data and if that data is critical, try to recover whatever part of the data that might remain intact by attempting to back up the files on the file system. Before you attempt any current backup, consider the following:
3. 4. 5. • When your backup program accesses the corrupt part of the file system, your system will crash again.
will lose time while you restore data from backup media, and you will lose any data changed since your last backup.
• Initializing from scratch: If you do not mirror or back up a logical volume, be aware that you will lose data if the underlying hard disk fails. This can be acceptable in some cases, such as a temporary or scratch volume. 4.7.1.2 Using LVM Online Disk Replacement (LVM OLR) LVM online disk replacement (LVM OLR) simplifies the replacement of disks under LVM.
4.7.2 Step 2: Recognizing a Failing Disk This section explains how to look for signs that one of your disks is having problems, and how to determine which disk it is. 4.7.2.1 I/O Errors in the System Log Often an error message in the system log file, /var/adm/syslog/syslog.log, is your first indication of a disk problem.
this volume group vgdisplay: Warning: couldn't query all of the physical volumes. # vgchange -a y /dev/vg01 vgchange: Warning: Couldn't attach to the volume group physical volume "/dev/dsk/c0t3d0": A component of the path of the physical volume does not exist. Volume group "/dev/vg01" has been successfully changed. Another sign of a disk problem is seeing stale extents in the lvdisplay command output.
4.7.3 Step 3: Confirming Disk Failure Once you suspect a disk has failed or is failing, make certain that the suspect disk is indeed failing. Replacing or removing the incorrect disk makes the recovery process take longer. It can even cause data loss. For example, in a mirrored configuration, if you were to replace the wrong disk—the one holding the current good copy rather than the failing disk—the mirrored data on the good disk is lost. It is also possible that the suspect disk is not failing.
read of the first 64 megabytes of the disk: When you enter the following command, look for the solid blinking green LED on the disk: # dd if=/dev/rdsk/c0t5d0 of=/dev/null bs=1024k count=64 64+0 records in 64+0 records out NOTE: If the dd command hangs or takes a long time, Ctrl+C stops the read on the disk. To run dd in the background, add & at the end of the command.
# dd bs=1k skip=66560 count=32768 if=/dev/rdsk/c0t3d0 of=/dev/null # dd bs=1k skip=66560 count=32768 if=/dev/rdsk/c1t3d0 of=/dev/null Note that the calculated value is used in the skip argument. The count is obtained by multiplying the PE size by 1024.
4.7.4 Step 4: Determining Action for Disk Removal or Replacement Once you know which disk is failing, you can decide how to deal with it. You can choose to remove the disk if your system does not need it, or you can choose to replace it. Before deciding on your course of action, you must gather some information to help guide you through the recovery process.
command shows '???' for the physical volume if it is unavailable. The issue with this approach is that it does not show precisely how many disks are unavailable. To ensure that multiple simultaneous disk failures have not occurred, run vgdisplay to check the difference between the number of active and number of current physical volumes. For example, a difference of one means only one disk is failing.
• “Step 5: Removing a Bad Disk” (page 134) • “Step 6: Replacing a Bad Disk (Persistent DSFs)” (page 137) • “Step 7: Replacing a Bad Disk (Legacy DSFs)” (page 145)
4.7.5 Step 5: Removing a Bad Disk You can elect to remove the failing disk from the system instead of replacing it if you are certain that another valid copy of the data exists or the data can be moved to another disk. 4.7.5.1 Removing a Mirror Copy from a Disk If you have a mirror copy of the data already, you can stop LVM from using the copy on the failing disk by reducing the number of mirrors.
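For example, to remove the mirror copy that resides on the failing disk (the names are illustrative; the copy on the surviving disk is retained):
# lvreduce -m 0 /dev/vg01/lvol1 /dev/disk/disk5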
could not query the physical volume. You can still remove the mirror copy, but you must specify the physical volume key rather than the name. The physical volume key of a disk indicates its order in the volume group. The first physical volume has the key 0, the second has the key 1, and so on. This need not be the order of appearance in the /etc/lvmtab file, although it usually is, at least when a volume group is initially created.
VGDA                   2
Cur LV                 0
PE Size (Mbytes)       4
Total PE               1023
Free PE                1023
Allocated PE           0
Stale PE               0
IO Timeout (Seconds)   default
Autoswitch             On
In this example, there are two entries for PV Name. Use the vgreduce command to reduce each path as follows: # vgreduce vgname /dev/dsk/c0t5d0 # vgreduce vgname /dev/dsk/c1t6d0 If the disk is unavailable, the vgreduce command fails. You can still forcibly reduce it, but you must then rebuild the lvmtab, which has two side effects.
4.7.6 Step 6: Replacing a Bad Disk (Persistent DSFs) If instead of removing the disk, you need to replace the faulty disk, this section provides a step-by-step guide to replacing a faulty LVM disk, for systems configured with persistent DSFs. For systems using legacy DSFs, refer to the next step “Step 7: Replacing a Bad Disk (Legacy DSFs)” (page 145) If you have any questions about the recovery process, contact your local HP Customer Response Center for assistance.
If the disk is hot-swappable, replace it. If the disk is not hot-swappable, shut down the system, turn off the power, and replace the disk. Reboot the system. 4. Notify the mass storage subsystem that the disk has been replaced. If the system was not rebooted to replace the failed disk, then run scsimgr before using the new disk as a replacement for the old disk.
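For example (assuming the disk's device special file is /dev/rdisk/disk14):
# scsimgr replace_wwid -D /dev/rdisk/disk14
This tells the mass storage subsystem that the new disk, which has a different worldwide identifier, is a valid replacement and may use the old device special files.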
8. Restore LVM access to the disk. If you did not reboot the system in Step 2, “Halt LVM access to the disk,” reattach the disk as follows:
# pvchange -a y /dev/disk/disk14
If you did reboot the system, reattach the disk by reactivating the volume group as follows:
# vgchange -a y /dev/vgnn
NOTE: The vgchange command with the -a y option can be run on a volume group that is deactivated or already activated.
b. If fuser reports process IDs using the logical volume, use the ps command to map the list of process IDs to processes, and then determine whether you can halt those processes. For example, look up processes 27815 and 27184 as follows:
# ps -fp27815 -p27184
     UID   PID  PPID  C    STIME TTY       TIME COMMAND
    root 27815 27184  0 09:04:05 pts/0     0:00 vi test.c
    root 27184 27182  0 08:26:24 pts/0     0:00 -sh
c. If so, use fuser with the -k option to kill all processes accessing the logical volume.
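For example, a command of the following form kills the processes and reports the owning users (the logical volume name is illustrative):
# fuser -ku /dev/vg01/lvol1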
6. Assign the old instance number to the replacement disk. For example:
# io_redirect_dsf -d /dev/disk/disk14 -n /dev/disk/disk28
This assigns the old LUN instance number (14) to the replacement disk. In addition, the device special files for the new disk are renamed to be consistent with the old LUN instance number.
4.7.6.3 Replacing a Mirrored Boot Disk
There are two additional operations you must perform when replacing a mirrored boot disk:
1. You must initialize boot information on the replacement disk.
2. If the replacement requires rebooting the system, and the primary boot disk is being replaced, you must boot from the alternate boot disk.
In this example, the disk to be replaced is at lunpath hardware path 0/1/1/1.0x3.0x0, with device special files named /dev/disk/disk14 and /dev/rdisk/disk14.
For information on the boot process and how to select boot options, see HP-UX System Administrator's Guide: Configuration Management. 4. Notify the mass storage subsystem that the disk has been replaced. If the system was not rebooted to replace the failed disk, then run scsimgr before using the new disk as a replacement for the old disk.
8. Restore LVM configuration information to the new disk. For example:
# vgcfgrestore -n /dev/vg00 /dev/rdisk/disk14_p2
NOTE: On an HP 9000 server, the boot disk is not partitioned, so the physical volume refers to the entire disk, not the HP-UX partition. Use the following command:
# vgcfgrestore -n /dev/vg00 /dev/rdisk/disk14
9. Restore LVM access to the disk.
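For example, if you did not reboot the system, reattach the physical volume; on an Integrity server the boot physical volume is the partition device file (names are illustrative and mirror step 8 of the non-boot procedure):
# pvchange -a y /dev/disk/disk14_p2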
4.7.7 Step 7: Replacing a Bad Disk (Legacy DSFs)
Follow these steps to replace a bad disk if your system is configured with only legacy DSFs.
NOTE: HP recommends the use of persistent device special files, because they support a greater variety of load balancing options. To replace a disk using persistent device special files, see “Step 6: Replacing a Bad Disk (Persistent DSFs)” (page 137).
To replace a bad disk, follow these steps.
1. Halt LVM Access to the Disk.
3. Initialize the Disk for LVM
This step copies LVM configuration information onto the disk and marks it as owned by LVM so that it can subsequently be attached to the volume group. If you replaced a mirror of the root disk on an Integrity server, run the idisk and insf commands as described in “Mirroring the Boot Disk on HP Integrity Servers” (page 95). For PA-RISC servers or non-root disks, running idisk and insf is unnecessary.
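For example, a command of the following form restores the LVM configuration to the replacement disk (volume group and legacy device file names are illustrative):
# vgcfgrestore -n /dev/vgname /dev/rdsk/c0t5d0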
NOTE: The ITRC resource forums at http://www.itrc.hp.com offer peer-to-peer support to solve problems and are free to users after registration. If this is a new problem or if you need additional help, log your problem with the HP Response Center, either online through the support case manager at http://www.itrc.hp.com, or by calling HP Support.
5 Support and Other Resources 5.1 New and Changed Information in This Edition The eighth edition of HP-UX System Administrator's Guide: Logical Volume Management addresses the following new topics: • Added information about converting cDSFs back to their corresponding persistent DSFs, in Table 5 (page 42). • Provided new information on LVM I/O timeout parameters, see “Configuring LVM I/O Timeout Parameters” (page 36) and “LVM I/O Timeout Parameters” (page 175).
Key            The name of a keyboard key. Return and Enter both refer to the same key.
Term           The defined use of an important word or phrase.
User input     Commands and other text that you type.
Variable or    The name of a placeholder in a command, function, or other syntax
Replaceable    display that you replace with an actual value.
-chars         One or more grouped command options, such as -ikx. The chars are
               usually a string of literal characters that each represent a specific option.
5.4 Related Information
HP-UX technical documentation can be found on HP's documentation website at http://www.hp.com/go/hpux-core-docs. In particular, LVM documentation is provided on this web page: http://www.hp.com/go/hpux-LVM-VxVM-docs. See the HP-UX Logical Volume Manager and MirrorDisk/UX Release Notes for information about new features and defect fixes in each release.
5.6 HP-UX 11i Release Names and Operating System Version Identifiers With HP-UX 11i, HP delivers a highly available, secure, and manageable operating system that meets the demands of end-to-end Internet-critical computing. HP-UX 11i supports enterprise, mission-critical, and technical computing environments. HP-UX 11i is available on both HP 9000 systems and HP Integrity systems. Each HP-UX 11i release has an associated release name and release identifier.
changing the printing date. The document part number changes when extensive changes are made. Document updates can be issued between editions to correct errors or document product changes. To ensure that you receive the updated or new editions, subscribe to the appropriate product support service. See your HP sales representative for details. You can find the latest version of this document online at http://www.hp.com/go/hpux-LVM-VxVM-docs.
Select Insight Remote Support from the menu on the right.
5.10 HP Encourages Your Comments
HP encourages your comments concerning this document. We are committed to providing documentation that meets your needs. Send any errors found, suggestions for improvement, or compliments to:
http://www.hp.com/bizsupport/feedback/ww/webfeedback.html
Include the document title, manufacturing part number, and any comment, error found, or suggestion for improvement you have concerning this document.
6 Documentation feedback HP is committed to providing documentation that meets your needs. To help us improve the documentation, send any errors, suggestions, or comments to Documentation Feedback (docsfeedback@hp.com). Include the document title and part number, version number, or the URL when submitting your feedback.
A LVM Specifications and Limitations
This appendix discusses LVM product specifications.
NOTE: Do not infer that a system configured to these limits is automatically usable.
Table 14 Volume Group Version Maximums
                  Version 1.0      Version 2.0      Version 2.1      Version 2.2
                  Volume Groups    Volume Groups    Volume Groups    Volume Groups
1 The limit of 2048 volume groups is shared among Version 2.x volume groups. Volume groups of Versions 2.x can be created with volume group numbers ranging from 0 to 2047. However, the maximum number of Version 2.0 volume groups that can be created is 512.
2 For volume group Version 2.2 or higher, the total number of logical volumes includes normal logical volumes as well as snapshot logical volumes.
Table 15 Version 1.0 Volume Group Limits
Table 16 Version 2.x Volume Group Limits

Parameter                        Command to Set/     Minimum   Default            Maximum
                                 Change Parameter    Value     Value              Value
Number of Version 2.x volume     n/a                 0         n/a                2048¹
groups on a system
Number of physical volumes       n/a                 511       511 (2.0)          511 (2.0)
in a volume group                                              2048 (2.1, 2.2)    2048 (2.1, 2.2)
Number of logical volumes        n/a                 511       511 (2.0)          511 (2.0)
in a volume group                                              2047 (2.1, 2.2)    2047 (2.1, 2.2)
A.1 Determining LVM’s Maximum Limits on a System
The March 2008 update to HP-UX 11i v3 (11.31) introduced a new command that enables the system administrator to determine the maximum LVM limits supported on the target system for a given volume group version. The lvmadm command displays the implemented limits for Version 1.0 and Version 2.x volume groups. You cannot create a volume group that exceeds these limits.
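For example, a command of the following form displays the limits for a Version 2.1 volume group (the version argument is illustrative):
# lvmadm -t -V 2.1
The output includes entries such as the following: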
Max LXs per LV               33554432
Max PXs per PV               16777216
Max Extent Size (Mbytes)     256
Min Unshare unit (Kbytes)    512
Max Unshare unit (Kbytes)    4096
Max Snapshots per LV         255
B LVM Command Summary
This appendix contains a summary of the LVM commands and descriptions of their use.
Table 17 LVM Command Summary

Command      Description and Example
extendfs     Extends a file system:
             # extendfs /dev/vg00/rlvol3
lvmadm       Displays the limits associated with a volume group version:
             # lvmadm -t -V 2.
lvsync       Synchronizes stale logical volume mirrors:
             # lvsync /dev/vg00/lvol1
pvchange     Changes the characteristics of a physical volume:
             # pvchange -a n /dev/disk/disk2
pvck         Performs a consistency check on a physical volume:
             # pvck /dev/disk/disk47_p2
pvcreate     Creates a physical volume to be used as part of a volume group:
             # pvcreate /dev/rdisk/disk2
pvdisplay    Displays information about a physical volume:
             # pvdisplay -v /dev/di
vgmodify     Modifies the configuration parameters of a volume group:
             # vgmodify -v -t -n -r vg32
vgmove       Migrates a volume group to different disks:
             # vgmove -f diskmap.
C Volume Group Provisioning Tips This appendix contains recommendations for parameters to use when creating your volume groups. C.1 Choosing an Optimal Extent Size for a Version 1.0 Volume Group When creating a Version 1.0 volume group, the vgcreate command may fail and display a message that the extent size is too small or that the VGRA is too big. In this situation, you must choose a larger extent size and run vgcreate again.
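For example, if vgcreate fails with the default extent size, you might retry with a larger one (the volume group name, disk name, and extent size are illustrative):
# vgcreate -s 16 /dev/vg01 /dev/disk/disk14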
        roundup(16 * lvs, BS) +
        roundup(16 + 4 * pxs, BS) * pvs) / BS, 8);

    if (length > 768) {
        printf("Warning: A bootable PV cannot be added to a VG \n"
               "created with the specified argument values. \n"
               "The metadata size %d Kbytes, must be less \n"
               "than 768 Kbytes.\n"
               "If the intention is not to have a boot disk in this \n"
               "VG then do not use '-B' option during pvcreate(1M) \n"
               "for the PVs to be part of this VG.
D Striped and Mirrored Logical Volumes
This appendix provides more details on striped and mirrored logical volumes. It describes the difference between standard hardware-based RAID and the LVM implementation of RAID.
D.1 Summary of Hardware RAID Configuration
RAID 0, commonly referred to as striping, refers to the segmentation of logical sequences of data across disks. RAID 1, commonly referred to as mirroring, refers to creating exact copies of logical sequences of data.
set, the logical extents are striped and mirrored to obtain the data layout displayed in Figure 6 (page 171). Striping and mirroring in LVM combines the advantages of the hardware implementations of RAID 1+0 and RAID 0+1, and provides the following benefits: • Better write performance. Write operations take place in parallel, and each physical write operation is directed to a different physical volume. • Excellent read performance.
NOTE: Striping with mirroring always uses a strict allocation policy, in which copies of the data do not reside on the same physical disk. This results in a configuration similar to RAID 0+1, as illustrated in Figure 7 (page 172).
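For example, a command of the following form creates a logical volume that is both striped across two disks and mirrored (all names and sizes are illustrative):
# lvcreate -i 2 -I 64 -m 1 -L 1024 -n lvstripemir /dev/vg01
This is a sketch: -i and -I set the stripe count and stripe size in kilobytes, -m 1 requests one mirror copy, and the strict allocation policy keeps each copy on separate disks.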
D.3.2 Compatibility Note
Releases prior to HP-UX 11i v3 support only striped or only mirrored logical volumes; they do not support the combination of striping and mirroring. If a logical volume using simultaneous mirroring and striping is created on HP-UX 11i v3, attempts to import or activate its associated volume group fail on a previous HP-UX release.
E LVM I/O Timeout Parameters When LVM receives an I/O to a logical volume, it converts this logical I/O to physical I/Os to one or more physical volumes from which the logical volume is allocated. There are two LVM timeout values which affect this operation: • Logical volume timeout (LV timeout). • Physical volume timeout (PV timeout). E.1 Logical Volume Timeout (LV timeout) LV timeout controls how long LVM retries a logical I/O after a recoverable physical I/O error.
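For example, commands of the following form set an LV timeout of 60 seconds and a PV timeout of 30 seconds (device names and values are illustrative):
# lvchange -t 60 /dev/vg01/lvol1
# pvchange -t 30 /dev/disk/disk14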
E.3 Timeout Differences: 11i v2 and 11i v3
Because native multi-pathing is included in the 11i v3 mass storage stack and is enabled by default, LVM timeout behavior differs between 11i v2 and 11i v3 in certain cases. • Meaning of PV timeout. In 11i v2, LVM applies the configured PV timeout fully to the particular PV link on which it is set. If an I/O fails, LVM retries the I/O on the next available PV link to the same physical volume with a new PV timeout budget.
F Warning and Error Messages
This appendix lists some of the warning and error messages reported by LVM. For each message, the cause is described and an action is recommended.
F.1 Matching Error Messages to Physical Disks and Volume Groups
Often an error message contains the device number of a device rather than the device file name. For example, you might see the following message in /var/adm/syslog/syslog.log:
The example error message refers to the Version 2.x volume group vgtest2.
F.2 Messages For All LVM Commands
Message Text
vgcfgbackup: /etc/lvmtab is out of date with the running kernel: Kernel indicates # disks for "/dev/vgname"; /etc/lvmtab has # disks. Cannot proceed with backup.
Cause
The number of current physical volumes (Cur PV) and active physical volumes (Act PV) are not the same. Cur PV and Act PV must always agree for the volume group.
Max PE per PV             4350
VGDA                      2
PE Size (Mbytes)          4
Total PE                  4340
Alloc PE                  3740
Free PE                   600
Total PVG                 0
Total Spare PVs           0
Total Spare PVs in use    0
VG Version                1.0
VG Max Size               1082g
VG Max Extents            69248

In this example, the total free space is 600 physical extents of 4 MB, or 2400 MB.
2. The logical volume is mirrored with a strict allocation policy, and there are not enough extents on a separate disk to comply with the allocation policy.
3.
F.6 pvchange(1M)
Message Text
Unable to detach the path or physical volume via the pathname provided. Either use pvchange(1M) -a N to detach the PV using an attached path or detach each path to the PV individually using pvchange(1M) -a n
Cause
The specified path is not part of any volume group, because the path has not been successfully attached to the otherwise active volume group it belongs to.
Recommended Action
Check the specified path name to make sure it is correct.
Cause
The vgcfgrestore command was used to initialize a disk that already belongs to an active volume group.
Recommended Action
Detach the physical volume or deactivate the volume group before attempting to restore the physical volume. If the disk may be corrupted, detach the disk and mark it using vgcfgrestore, then attach it again without replacing the disk. This causes LVM to reinitialize the disk and synchronize any mirrored user data mapped there.
1. The disk was missing when the volume group was activated, but was later restored. This typically occurs when a system is rebooted or the volume group is activated with a disk missing, uncabled, or powered down.
2. The disk LVM header was overwritten with the wrong volume group information. If the disk is shared between two systems, one system might not be aware that the disk was already in a volume group.
# mkdir /dev/vgname
# mknod /dev/vgname/group c 64 unique_minor_number
# vgimport -m vgname.map -v -f vgname.file /dev/vgname
F.10 vgcreate(1M)
Message Text
vgcreate: "/dev/vgname/group": not a character device.
Cause
The volume group device file does not exist, and this version of the vgcreate command does not automatically create it.
Recommended Action
Create the directory for the volume group and create a group file, as described in “Creating the Volume Group Device File” (page 47).
F.11 vgdisplay(1M)
Message Text
vgdisplay: Couldn't query volume group "/dev/vgname". Possible error in the Volume Group minor number; Please check and make sure the group minor number is unique.
vgdisplay: Cannot display volume group "/dev/vgname".
Cause
This error has the following possible causes:
1. There are multiple LVM group files with the same minor number.
2. Serviceguard was previously installed on the system and the /dev/slvmvg device file still exists.
Recommended Action
1.
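For example, to check for duplicate minor numbers, you can list the group device files for all volume groups (output is illustrative):
# ll /dev/*/group
crw-r--r--   1 root   sys    64 0x010000 Apr 15 10:00 /dev/vg01/group
crw-r--r--   1 root   sys    64 0x010000 Apr 15 10:05 /dev/vg02/group
Here both group files share minor number 0x010000, which must be made unique.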
Recommended Action
See the recommended actions under the “vgchange(1M)” (page 181) error messages.
F.12 vgextend(1M)
Message Text
vgextend: Not enough physical extents per physical volume. Need: #, Have: #.
Cause
The disk size exceeds the volume group maximum disk size. This limitation is defined when the volume group is created, as a product of the extent size specified with the -s option of vgcreate and the maximum number of physical extents per disk specified with the -e option.
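For example, with an extent size of 4 MB (-s 4) and 1016 maximum physical extents per disk (-e 1016), both illustrative values, the maximum disk size the volume group can fully address is 4 MB × 1016 = 4064 MB.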
F.14 vgmodify(1M)
Message Text
Error: Cannot reduce max_pv below n+1 when the volume group is activated because the highest pvkey in use is n.
Cause
The command is trying to reduce max_pv below the highest pvkey in use. This is disallowed while a Version 1.0 volume group is activated, because it requires compacting the pvkeys.
Recommended Action
Retry the vgmodify operation specifying a max_pv value of at least n+1.
Cause
The pvkey of a physical volume can range from 0 to one less than the maximum supported number of physical volumes for a volume group version. If the pvkey of a physical volume is not in this range for the target Version 2.x volume group, vgversion fails the migration.
Recommended Action
1. Use lvmadm to determine the maximum supported number of physical volumes for the target volume group version.
2.
Message Text
LVM: Begin: Contiguous LV (VG mmm 0x00n000, LV Number: p) movement:
LVM: End: Contiguous LV (VG mmm 0x00n000, LV Number: p) movement:
Cause
This message is advisory. It is generated whenever the extents of a contiguous logical volume belonging to a Version 2.x volume group are moved using pvmove. This message is generated beginning with the September 2009 Update.
Recommended Action
None, if both the Begin and End messages appear for a particular contiguous LV.
Message Text
LVM: vg[nn] pv[nn] No valid MCR, resyncing all mirrored MWC LVs on the PV
Cause
This message appears when you import a volume group from a previous release of HP-UX. The format of the MWC changed at HP-UX 11i Version 3, so if the volume group contains mirrored logical volumes using MWC, LVM converts the MWC at import time. It also performs a complete resynchronization of all mirrored logical volumes, which can take substantial time.
Recommended Action
None.
Message Text
vmunix: LVM: ERROR: The task to increase the pre-allocated extents could not be posted for this snapshot LV (VG 128 0x000000, LV Number 3). Please check if lvmpud is running.
Cause
If the automatic increase of pre-allocated extents is enabled for a space-efficient snapshot, the number of pre-allocated extents is automatically increased when the threshold value is reached. The lvmpud daemon must be running for this to succeed. When lvmpud is not running, the above message is logged.
Recommended Action
Make sure that the disk devices being used by the entire snapshot tree (the original logical volume and all of its snapshots) are available and healthy before retrying the delete operation.
F.16 Log Files and Trace Files: /var/adm/syslog/syslog.log
Glossary Agile Addressing The ability to address a LUN with the same device special file regardless of the physical location of the LUN or the number of paths leading to it. In other words, the device special file for a LUN remains the same even if the LUN is moved from one Host Bus Adaptor (HBA) to another, moved from one switch/hub port to another, presented via a different target port to the host, or configured with multiple hardware paths. Also referred to as persistent binding.
Mirroring Simultaneous replication of data, ensuring a greater degree of data availability. LVM can map identical logical volumes to multiple LVM disks, thus providing the means to recover easily from the loss of one copy (or multiple copies in the case of multi-way mirroring) of data. Mirroring can provide faster access to data for applications that perform more reads than writes. Mirroring requires the MirrorDisk/UX product.
Index Symbols /etc/default/fs, 100 /etc/fstab, 38, 60, 70, 100 /etc/lvmconf/ directory, 21, 38, 73 /etc/lvmpvg, 36 /etc/lvmtab, 14, 24, 42, 60, 74, 75, 116, 123, 177, 178 /stand/bootconf, 95, 98 /stand/rootconf, 116 /stand/system, 26 /var/adm/syslog/syslog.
backing up via mirroring, 71 boot file system see boot logical volume creating, 100 determining who is using, 60, 71, 101, 103, 139 extending, 101 guidelines, 25 in /etc/fstab, 100 initial size, 24 OnlineJFS, 101 overhead, 24 performance considerations, 25 reducing, 102, 123 HFS or VxFS, 103 OnlineJFS, 103 resizing, 25 root file system see root logical volume short or long file names, 100 stripe size for HFS, 35 stripe size for VxFS, 35 unresponsive, 117 finding logical volumes using a disk, 131 fsadm comma
for root logical volume, 92 for swap logical volume, 92, 105 updating boot information, 39, 95, 97, 123 lvmadm command, 15, 42, 115, 116, 120, 162, 165, 177 lvmchk command, 42 lvmerge command, 43, 71, 165 synchronization, 29 lvmove command, 43, 165 lvmpud, 111 lvmpud command, 43, 165 lvreduce command, 43 and pvmove failure, 78 reducing a file system, 103, 123 reducing a logical volume, 58, 165 reducing a swap device, 106 removing a mirror, 59, 165 removing a mirror from a specific disk, 59 lvremove command,
policies for allocating, 27 policies for writing, 28 size, 12, 21 synchronizing, 29 physical volume groups, 33, 36 naming convention, 18 Physical Volume Reserved Area see PVRA physical volumes adding, 54 auto re-balancing, 78 commands for, 42 converting from bootable to nonbootable, 87, 183 creating, 46 defined, 12 device file, 17, 18, 177 disabling a path, 90 disk layout, 19 displaying information, 45 moving, 74, 75 moving data between, 76 naming convention, 17 removing, 54 resizing, 80 pre-allocated exten
detaching links, 90 reinstating a spare disk, 80 requirements, 30 splitting a mirrored logical volume, 71 splitting a volume group, 70 stale data, 29 strict allocation policy, 28 stripe size, 35 striping, 34 and mirroring, 36 benefits, 34 creating a striped logical volume, 55 defined, 11 interleaved disks, 34 performance considerations, 34 selecting stripe size, 35 setting up, 34 swap logical volume, 25, 26, 105 see also primary swap logical volume creating, 92, 105 extending, 105 guidelines, 26 information
vgmodify command, 15, 42, 61, 167 changing physical volume type, 87, 183 collecting information, 62, 65 errors, 186 modifying volume group parameters, 62, 185 resizing physical volumes, 80, 179 vgmove command, 98, 167 VGRA and vgmodify, 62, 66 area on disk, 20 size dependency on extent size, 21, 183 vgreduce command, 42, 54, 167 with multipathed disks, 61 vgremove command, 42, 71, 167 vgscan command, 42, 167 moving disks, 75 recreating /etc/lvmtab, 123 vgsync command, 29, 42, 167 vgversion command, 43, 49,