VERITAS Volume Manager 3.1 Administrator’s Guide
for HP-UX 11i and HP-UX 11i Version 1.5

June, 2001
Manufacturing Part Number: B7961-90018 E0601
United States
© Copyright 1983-2000 Hewlett-Packard Company. All rights reserved.
Legal Notices

The information in this document is subject to change without notice. Hewlett-Packard makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be held liable for errors contained herein, or for direct, indirect, special, incidental, or consequential damages in connection with the furnishing, performance, or use of this material.
Copyright 1979, 1980, 1983, 1985-93 Regents of the University of California. This software is based in part on the Fourth Berkeley Software Distribution under license from the Regents of the University of California.
Copyright 2000 VERITAS Software Corporation
Copyright 1988 Carnegie Mellon University
Copyright 1990-1995 Cornell University
Copyright 1986 Digital Equipment Corporation
Copyright 1997 Isogon Corporation
Copyright 1985, 1986, 1988 Massachusetts Institute of Technology
Publication History The manual publication date and part number indicate its current edition. The publication date will change when a new edition is released. The manual part number will change when extensive changes are made. To ensure that you receive the new editions, you should subscribe to the appropriate product support service. See your HP sales representative for details.
Contents

1. Introduction to Volume Manager
Introduction
How Data is Stored
Volume Manager Overview
Physical Objects
Physical Disks and Disk Naming

Volume Manager Layouts
Volume Administration Tasks

2. Initialization and Setup
Introduction
Volume Manager Initialization
Volume Manager Daemons

Task Monitor Options
Performing Online Backup
Exiting the Volume Manager Support Tasks
Online Relayout
Storage Layout

Common Volume Manager Commands
vxedit Command
vxtask Utility
vxassist Command
vxdctl Daemon
vxmake Command

Partial Disk Failure
Complete Disk Failure
Disabling a Disk
Replacing a Disk
Replacing a Failed or Removed Disk

Creating a Disk Group
Renaming a Disk Group
Importing a Disk Group
Deporting a Disk Group
Upgrading a Disk Group

Creating a Volume with Dirty Region Logging Enabled
Mirroring an Existing Volume
Mirroring All Volumes
Mirroring Volumes on a VM Disk
Backing Up Volumes Using Mirroring
Removing a Mirror

Moving Hot-Relocate Subdisks Back to a Disk
Splitting Subdisks
Joining Subdisks
Associating Subdisks
Associating Log Subdisks

FastResync (Fast Mirror Resynchronization)
Upgrading Volume Manager Cluster Functionality
Cluster-related Volume Manager Utilities and Daemons
vxclustd Daemon
vxconfigd Daemon
vxdctl Utility

RAID-5 Logs
Creating RAID-5 Volumes
vxassist Command and RAID-5 Volumes
vxmake Command and RAID-5 Volumes
Initializing RAID-5 Volumes
Failures and RAID-5 Volumes

Getting Performance Data
Using Performance Data
Tuning the Volume Manager
General Tuning Guidelines
Tunables
Tuning for Large Systems
Preface
Introduction

The VERITAS Volume Manager 3.1 Administrator’s Guide provides information on how to use Volume Manager.
Audience and Scope

This guide is intended for system administrators responsible for installing, configuring, and maintaining systems under the control of the VERITAS Volume Manager.
Organization

This guide is organized with the following chapters:
• Chapter 1, “Introduction to Volume Manager”
• Chapter 2, “Initialization and Setup”
• Chapter 3, “Volume Manager Operations”
• Chapter 4, “Disk Tasks”
• Chapter 5, “Disk Group Tasks”
• Chapter 6, “Volume Tasks”
• Chapter 7, “Cluster Functionality”
• Chapter 8, “Recovery”
• Chapter 9, “Performance Monitoring”
• “Glossary”
Using This Guide

This guide contains instructions for performing Volume Manager system administration functions. Volume Manager administration functions can be performed through one or more of the following interfaces:
• a set of complex commands
• a single automated command (vxassist)
• a menu-driven interface (vxdiskadm)
• the Storage Administrator (graphical user interface)
This guide describes how to use the various Volume Manager command line interfaces for Volume Manager administration.
Related Documents

The following documents provide information related to the Volume Manager:
• VERITAS Volume Manager 3.1 Migration Guide
• VERITAS Volume Manager 3.1 Reference Guide
• VERITAS Volume Manager 3.1 Storage Administrator Administrator’s Guide
• VERITAS Volume Manager 3.
Conventions

We use the following typographical conventions.

audit (5): An HP-UX manpage. audit is the name and 5 is the section in the HP-UX Reference. On the web and on the Instant Information CD, it may be a hot link to the manpage itself. From the HP-UX command line, you can enter “man audit” or “man 5 audit” to view the manpage. See man (1).

Book Title: The title of a book. On the web and on the Instant Information CD, it may be a hot link to the book itself.

KeyCap: The name of a keyboard key.
1 Introduction to Volume Manager
Introduction

This chapter describes what VERITAS Volume Manager is, how it works, how you can communicate with it through the user interfaces, and Volume Manager concepts. Related documents that provide specific information about Volume Manager are listed in the Preface.
How Data is Stored

There are several methods used to store data on physical disks. These methods organize data on the disk so the data can be stored and retrieved efficiently. The basic method of disk organization is called formatting. Formatting prepares the hard disk so that files can be written to and retrieved from the disk by using a prearranged storage pattern.
Volume Manager Overview

The Volume Manager uses objects to do storage management. The two types of objects used by Volume Manager are physical objects and virtual objects.
• physical objects: Volume Manager uses two physical objects, physical disks and partitions. Partitions are created on the physical disks (on systems that use partitions).
• virtual objects: Volume Manager creates virtual objects, called volumes.
Physical Objects

This section describes the physical objects (physical disks and partitions) used by Volume Manager.

Physical Disks and Disk Naming

A physical disk is the basic storage device (media) where the data is ultimately stored. You can access the data on a physical disk by using a device name (devname) to locate the disk. The physical disk device name varies with the computer system you use. Not all parameters are used on all systems.
Volumes and Virtual Objects

The connection between physical objects and Volume Manager objects is made when you place a physical disk under Volume Manager control. Volume Manager creates virtual objects and makes logical connections between the objects. The virtual objects are then used by Volume Manager to do storage management tasks.

NOTE: The vxprint command displays detailed information on existing Volume Manager objects.
default name that typically takes the form disk##. See Figure 1-1, Example of a VM Disk, which shows a VM disk with a media name of disk01 that is assigned to the disk devname.

Figure 1-1: Example of a VM Disk

Disk Groups

A disk group is a collection of VM disks that share a common configuration.
Figure 1-2: Example of a Subdisk

A VM disk can contain multiple subdisks, but subdisks cannot overlap or share the same portions of a VM disk.
Figure 1-4: Example Plex With Two Subdisks

You can organize data on the subdisks to form a plex by using these methods:
• concatenation
• striping (RAID-0)
• striping with parity (RAID-5)
• mirroring (RAID-1)
Concatenation, striping (RAID-0), RAID-5, and mirroring (RAID-1) are described in “Virtual Object Data Organization (Volume Layouts)”.
select meaningful names for their volumes. A volume can consist of up to 32 plexes, each of which contains one or more subdisks. A volume must have at least one associated plex that has a complete set of the data in the volume with at least one associated subdisk. Note that all subdisks within a volume must belong to the same disk group. See Figure 1-5, Example of a Volume with One Plex.
• it contains two plexes named vol06-01 and vol06-02
• each plex contains one subdisk
• each subdisk is allocated from a different VM disk (disk01 and disk02)

Connection Between Volume Manager Virtual Objects

Volume Manager virtual objects are combined to build volumes. The virtual objects contained in volumes are: VM disks, disk groups, subdisks, and plexes.
Figure 1-7: Connection Between Volume Manager Objects
Virtual Object Data Organization (Volume Layouts)

Data in virtual objects is organized to create volumes by using the following layout methods:
• concatenation
• striping (RAID-0)
• RAID-5 (striping with parity)
• mirroring (RAID-1)
• mirroring plus striping
• striping plus mirroring
The following sections describe each layout method.
Figure 1-8: Example of Concatenation (B = block of data)

You can use concatenation with multiple subdisks when there is insufficient contiguous space for the plex on any one disk. This form of concatenation can be used for load balancing between disks, and for head movement optimization on a particular disk.
In this example, subdisks disk02-02 and disk02-03 are available for other disk management tasks. Figure 1-10, Example of Spanning, shows data spread over two subdisks in a spanned plex.
to one column. Each column contains one or more subdisks and can be derived from one or more physical disks. The number and sizes of subdisks per column can vary. Additional subdisks can be added to columns, as necessary.

CAUTION: Striping a volume, or splitting a volume across multiple disks, increases the chance that a disk failure will result in failure of that volume.
Figure 1-11: Striping Across Three Disks (Columns) (SU = stripe unit)

A stripe consists of the set of stripe units at the same positions across all columns. In Figure 1-11, Striping Across Three Disks (Columns), stripe units 1, 2, and 3 constitute a single stripe.
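The stripe-unit arithmetic behind this layout can be sketched in a few lines. This is a conceptual model of striping, not VxVM code; the 64 KB stripe unit is an assumed size used here only for illustration, and the three columns match Figure 1-11.

```python
# Conceptual model of striping: logical addresses are laid out in
# stripe-unit-sized pieces that alternate across the columns.
STRIPE_UNIT = 64 * 1024   # bytes per stripe unit (assumed, for illustration)
NCOLS = 3                 # number of columns, as in Figure 1-11

def locate(offset):
    """Map a logical byte offset to (stripe, column, offset within unit)."""
    su = offset // STRIPE_UNIT        # index of the stripe unit overall
    return su // NCOLS, su % NCOLS, offset % STRIPE_UNIT

# Stripe units 1, 2, and 3 land on columns 0, 1, and 2 of stripe 0;
# stripe unit 4 wraps back to column 0, beginning the second stripe.
print(locate(0))                # (0, 0, 0)
print(locate(64 * 1024))        # (0, 1, 0)
print(locate(3 * 64 * 1024))    # (1, 0, 0)
```

Because consecutive stripe units sit on different disks, consecutive I/O requests tend to be spread across spindles, which is the load-balancing effect the text describes.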
multi-user applications across multiple disks. Figure 1-12, Example of a Striped Plex with One Subdisk per Column, shows a striped plex with three equal-sized, single-subdisk columns. There is one column per physical disk.
Figure 1-13: Example of a Striped Plex with Concatenated Subdisks per Column (SU = stripe unit)
RAID-5 volumes keep a copy of the data and calculated parity in a plex that is “striped” across multiple disks. In the event of a disk failure, a RAID-5 volume uses parity to reconstruct the data. It is possible to mix concatenation and striping in the layout. RAID-5 volumes can do logging to minimize recovery time. RAID-5 volumes use RAID-5 logs to keep a copy of the data and parity currently being written.
redundancy. When striping or spanning across a large number of disks, failure of any one of those disks can make the entire plex unusable. The chance of one out of several disks failing is sufficient to make it worthwhile to consider mirroring in order to improve the reliability (and availability) of a striped or spanned volume.
stripe-mirror layout, only the failing subdisk must be detached, and only that portion of the volume loses redundancy. When the disk is replaced, only a portion of the volume needs to be recovered. Compared to mirroring plus striping, striping plus mirroring offers a volume that is more tolerant of disk failure. If a disk failure occurs, the recovery time is shorter for striping plus mirroring.
Volume Manager and RAID-5

NOTE: You may need an additional license to use this feature.

This section describes how Volume Manager implements RAID-5. For general information on RAID-5, see “RAID-5”. Although both mirroring (RAID-1) and RAID-5 provide redundancy of data, they use different methods. Mirroring provides data redundancy by maintaining multiple complete copies of the data in a volume.
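A quick back-of-the-envelope comparison makes the capacity difference between the two methods concrete. The disk count and size below are made-up illustrations, not recommendations: a two-way mirror stores every block twice, while RAID-5 gives up only one column's worth of capacity to parity.

```python
disk_gb = 100   # hypothetical disk size
ndisks = 5      # hypothetical number of disks

# Two-way mirroring: every block is stored twice, so half the raw
# capacity is usable.
mirror_usable = (ndisks * disk_gb) // 2

# RAID-5 across the same disks: one stripe unit per stripe holds parity,
# which over the whole array amounts to one column's worth of capacity.
raid5_usable = (ndisks - 1) * disk_gb

print(mirror_usable)   # 250
print(raid5_usable)    # 400
```

The trade-off runs the other way for redundancy: the mirror survives the loss of any disk holding one copy, while RAID-5 survives the loss of only a single column at a time.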
Figure 1-15: Traditional RAID-5 Array

This traditional array structure supports growth by adding more rows per column. Striping is accomplished by applying the first stripe across the disks in Row 0, then the second stripe across the disks in Row 1, then the third stripe across the Row 0 disks, and so on.
Figure 1-16: Volume Manager RAID-5 Array (SD = subdisk)

With the Volume Manager RAID-5 array structure, each column can consist of a different number of subdisks. The subdisks in a given column can be derived from different physical disks. Additional subdisks can be added to the columns as necessary.
is not as critical as the number of columns and the stripe unit size selection. The left-symmetric layout stripes both data and parity across columns, placing the parity in a different column for every stripe of data. The first parity stripe unit is located in the rightmost column of the first stripe. Each successive parity stripe unit is located in the next stripe, shifted left one column from the previous parity stripe unit location.
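The placement rule just described (parity starts in the rightmost column and shifts left by one column each stripe) reduces to a one-line formula. This is a sketch of the left-symmetric pattern, not VxVM source:

```python
def parity_column(stripe, ncols):
    """Column holding the parity stripe unit for a given stripe."""
    return (ncols - 1 - stripe) % ncols

# With five columns, parity occupies column 4 in stripe 0, column 3 in
# stripe 1, and so on, wrapping back to column 4 after five stripes.
print([parity_column(s, 5) for s in range(6)])   # [4, 3, 2, 1, 0, 4]
```

Rotating the parity this way spreads parity updates evenly over all columns, so no single disk becomes a parity bottleneck.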
Each parity stripe unit contains the result of an exclusive OR (XOR) procedure done on the data in the data stripe units within the same stripe. If data on a disk corresponding to one column is inaccessible due to hardware or software failure, data can be restored by XORing the contents of the remaining columns’ data stripe units against their respective parity stripe units (for each stripe).
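The XOR rule can be demonstrated with toy values. The numbers below are arbitrary; the point is that XORing the surviving data stripe units with the parity stripe unit reproduces the lost one:

```python
data = [0b1011, 0b0110, 0b1100]   # data stripe units in one stripe (toy values)

parity = 0
for unit in data:
    parity ^= unit                 # parity is the XOR of all data units

# Simulate losing the middle column, then rebuild it from the
# remaining columns and the parity.
rebuilt = parity ^ data[0] ^ data[2]
print(rebuilt == data[1])          # True
```

This works because XOR is its own inverse: removing a value from an XOR sum is the same as XORing it in again.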
Figure 1-18: Incomplete Write

This failure case can be avoided by logging all data writes before committing them to the array. In this way, the log can be replayed, causing the data and parity updates to be completed before the reconstruction of the failed drive takes place.
Layered Volumes

Another Volume Manager virtual object is the layered volume. A layered volume is built on top of volume(s). The layered volume structure tolerates failure better and has greater redundancy than the standard volume structure. For example, in a striped and mirrored layered volume, each mirror (plex) covers a smaller area of storage space, so recovery is quicker than with a standard mirror volume.
Figure 1-19: Example of a Striped-Mirrored Layered Volume

System administrators may need to manipulate the layered volume structure for troubleshooting or other operations
Volume Manager User Interfaces

This section briefly describes the VERITAS Volume Manager user interfaces.

User Interface Overview

The Volume Manager supports the following user interfaces:
• Volume Manager Storage Administrator: The Storage Administrator is a graphical user interface to the Volume Manager. The Storage Administrator provides visual elements such as icons, menus, and dialog boxes to manipulate Volume Manager objects.
Volume Manager objects created by one interface are compatible with those created by the other interfaces.
Volume Manager Conceptual Overview

This section describes key terms and relationships between Volume Manager concepts. Figure 1-20, Volume Manager System Concepts, illustrates the terms and concepts discussed in this section.

Why You Should Use Volume Manager

Volume Manager provides enhanced data storage service by separating the physical and logical aspects of data management.
Figure 1-20: Volume Manager System Concepts
After installing Volume Manager on a host system, perform the following procedure before you can configure and use Volume Manager objects:
• bring the contents of physical disks under Volume Manager control

To bring a physical disk under VxVM control, the disk must not be under LVM control. For more information on LVM and VxVM disk co-existence or how to convert LVM disks to VxVM disks, see the VERITAS Volume Manager Migration Guide.
Dynamic Multipathing (DMP)

NOTE: You may need an additional license to use this feature.

A multipathing condition can exist when a physical disk can be accessed by more than one operating system device handle. Each multipath operating system device handle permits data access and control through alternate host-to-device pathways. Volume Manager is configured with its own DMP system to organize access to multipathed devices.
• How does the operating system view the paths?
• How does Volume Manager DMP view the paths?
• How does the multipathing target deal with its paths?

References

Additional information about DMP can be found in this document in the following sections:
• “Dynamic Multipathing (DMP)”
• “vxdctl Daemon”
• Chapter 4, Disk Tasks, “Displaying Multipaths Under a VM Disk”
Related information also appears in the following sections of the Volume Manager
In the 3.0 or higher releases of Volume Manager, “layered volumes” can be constructed by permitting the subdisk to map either to a VM disk as before, or to a new logical object called a storage volume. A storage volume provides a recursive level of mapping with layouts similar to the top-level volume. Eventually, the “bottom” of the mapping requires an association to a VM disk, and hence to attached physical storage.
Volume Administration Tasks

Volume Manager can be used to perform system and configuration management tasks on its objects: disks, disk groups, subdisks, plexes, and volumes. Volumes contain two types of objects:
• subdisk: a region of a physical disk
• plex: a series of subdisks linked together in an address space
Disks and disk groups must be initialized and defined to the Volume Manager before volumes can be created.
• create subdisks
• create plexes
• associate subdisks and plexes
• create volumes
• associate volumes and plexes
• initialize volumes
Before you create volumes, you should determine which volume layout best suits your system needs.
2 Initialization and Setup
Introduction

This chapter briefly describes the steps needed to initialize Volume Manager and the daemons that must be running for Volume Manager to operate. This chapter also provides guidelines to help you set up a system with storage management. See the VERITAS Volume Manager 3.1 for HP-UX Release Notes for detailed information on how to install and set up the Volume Manager and the Storage Administrator on HP-UX 11i systems.

NOTE: On HP-UX 11i Version 1.
Volume Manager Initialization

You initialize the Volume Manager by using the vxinstall program. The vxinstall program places specified disks under Volume Manager control. By default, these disks are placed in the rootdg disk group. You must use the vxinstall program to initialize at least one disk into rootdg. You can then use the vxdiskadm interface or the Storage Administrator to initialize additional disks into other disk groups.
# vxinstall

The vxinstall program then does the following:
• examines and lists all controllers attached to the system
• describes the installation process: Quick or Custom installation
Quick installation gives you the option of initializing all disks. If you wish to initialize only some of the disks for VxVM, use the custom installation process.
Volume Manager Daemons

Two daemons must be running for the Volume Manager to operate properly:
• vxconfigd
• vxiod

Configuration Daemon vxconfigd

The Volume Manager configuration daemon (vxconfigd) maintains Volume Manager disk and disk group configurations. The vxconfigd daemon communicates configuration changes to the kernel and modifies configuration information stored on disk.
By default, the vxconfigd daemon issues errors to the console. However, the vxconfigd daemon can be configured to issue errors to a log file. For more information, see the vxconfigd(1M) and vxdctl(1M) manual pages.

Volume I/O Daemon vxiod

The volume extended I/O daemon (vxiod) allows for extended I/O operations without blocking calling processes. For more information, see the vxiod(1M) manual page.
System Setup

This section has information to help you set up your system for efficient storage management. For additional information on system setup tasks, see the Volume Manager Storage Administrator Administrator’s Guide. The following system setup sequence is typical and should be used as an example. Your system requirements may differ. The system setup guidelines provide helpful information for specific setup configurations.
System Setup Guidelines

These general guidelines can help you to understand and plan an efficient storage management system. See the cross-references in each section for more information about the featured guideline.

Hot-Relocation Guidelines

NOTE: You may need an additional license to use this feature.

Follow these general guidelines when using hot-relocation. See “Hot-Relocation” for more information.
one disk that does not already contain one of the mirrors of the volume or another subdisk in the striped plex. This disk should either be a spare disk with some available space, or a regular disk with some free space that is not excluded from hot-relocation use.
• For a RAID-5 volume, the disk group must have at least one disk that does not already contain the RAID-5 plex (or one of its log plexes) of the volume.
NOTE: Many modern disk drives have “variable geometry,” which means that the track size differs between cylinders (i.e., outer disk tracks have more sectors than inner tracks). It is therefore not always appropriate to use the track size as the stripe unit size. For these drives, use a moderate stripe unit size (such as 64 kilobytes), unless you know the I/O pattern of the application.
Mirroring Guidelines

Follow these general guidelines when using mirroring. See “Mirroring (RAID-1)” for more information.
• Do not place subdisks from different plexes of a mirrored volume on the same physical disk. This action compromises the availability benefits of mirroring and degrades performance. Use of the vxassist command precludes this from happening.
Dirty Region Logging (DRL) Guidelines

NOTE: You must license the VERITAS Volume Manager product to use this feature.

Dirty Region Logging (DRL) can speed up recovery of mirrored volumes following a system crash. When DRL is enabled, Volume Manager keeps track of the regions within a volume that have changed as a result of writes to a plex. Volume Manager maintains a bitmap and stores this information in a log subdisk.
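The mechanism can be sketched as a bitmap over fixed-size regions. The region and volume sizes below are illustrative only, not VxVM defaults, but the logic mirrors the description above: a region's bit is set before its data is written, so after a crash only the flagged regions need resynchronizing.

```python
REGION_SIZE = 64 * 1024              # illustrative region size, in bytes
volume_size = 1024 * 1024            # illustrative 1 MB volume
dirty = [False] * (volume_size // REGION_SIZE)

def write(offset, length):
    """Mark every region touched by a write as dirty before writing data."""
    first = offset // REGION_SIZE
    last = (offset + length - 1) // REGION_SIZE
    for region in range(first, last + 1):
        dirty[region] = True

write(0, 4096)                       # touches region 0
write(200 * 1024, 4096)              # touches region 3
print([i for i, d in enumerate(dirty) if d])   # [0, 3]
```

After a crash, a full-volume resynchronization is replaced by resynchronizing only the regions whose bits are set, which is why DRL shortens recovery time.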
the vxassist command is recommended.
• The log subdisk should not be placed on a heavily-used disk, if possible.
• Persistent (non-volatile) storage disks must be used for log subdisks.

Mirroring and Striping Guidelines

NOTE: You may need an additional license to use this feature.

Follow these general guidelines when using mirroring and striping together. For more information, see “Mirroring Plus Striping (RAID-1 + RAID-0)”.
• Follow the mirroring guidelines described in “Mirroring Guidelines”.

RAID-5 Guidelines

NOTE: You may need an additional license to use this feature.

Follow these general guidelines when using RAID-5. See “RAID-5” for more information. In general, the guidelines for mirroring and striping together also apply to RAID-5.
Protecting Your System

Disk failures can cause two types of problems: loss of data on the failed disk and loss of access to your system. Loss of access can be due to the failure of a key disk (a disk used for system operations). The VERITAS Volume Manager can protect your system from these problems. To maintain system availability, data important to running and booting your system must be mirrored.
Initialization and Setup Protecting Your System if all copies of a volume are lost or corrupted in some way. For example, a power surge could damage several (or all) disks on your system. Also, typing a command in error can remove critical files or damage a file system directly.
3 Volume Manager Operations
Volume Manager Operations Introduction Introduction This chapter provides information about the VERITAS Volume Manager command line interface (CLI). The Volume Manager command set (for example, the vxassist command) ranges from commands requiring minimal user input to commands requiring detailed user input. Many of the Volume Manager commands require an understanding of Volume Manager concepts. For more information on Volume Manager concepts, see “Volume Manager Conceptual Overview”.
Volume Manager Operations Introduction • “Dynamic Multipathing (DMP)” • “VxSmartSync Recovery Accelerator” • “Common Volume Manager Commands” NOTE Your system can use a device name that differs from the examples. For more information on device names, see “Disk Devices”.
Volume Manager Operations Displaying Disk Configuration Information Displaying Disk Configuration Information You can display disk configuration information from the command line. Output listings available include: available disks, Volume Manager objects, and free space in disk groups.
dg rootdg default default 0 962910960.1025.
For example, to display the free space in the default disk group, rootdg, use the following command:

# vxdg -g rootdg free

The following is an example of vxdg -g rootdg free command output:

DISK    DEVICE   TAG      OFFSET  LENGTH   FLAGS
disk01  c0t10d0  c0t10d0  0       4444228  -
disk02  c0t11d0  c0t11d0  0       4443310  -

The free space is measured in 1024-byte sectors.
Volume Manager Operations Displaying Subdisk Information Displaying Subdisk Information The vxprint command displays information about Volume Manager objects.
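As a hedged illustration of typical vxprint usage for subdisks (the option letters follow the vxprint manual page; the subdisk name is an example and will differ on your system):

```shell
# vxprint -st
# vxprint -l disk01-01
```

The first command displays one-line records for all subdisks; the second displays complete information about a single subdisk.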
Volume Manager Operations Creating Volumes Creating Volumes Volumes are created to take advantage of the Volume Manager concept of virtual disks. Once a volume exists, a file system can be placed on the volume to organize the disk space with files and directories. Also, applications such as databases can be used to organize data on volumes.
traffic areas exist on certain subdisks. • RAID-5—A volume that uses striping to spread data and parity evenly across multiple disks in an array. Each stripe contains a parity stripe unit and data stripe units. Parity can be used to reconstruct data if one of the disks fails. In comparison to the performance of striped volumes, write throughput of RAID-5 volumes is lower because parity information must be updated each time data is written.
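As a hedged sketch of the volume layouts described above, each can be created with the vxassist command (volume names and sizes are examples only; the default disk group, rootdg, is assumed):

```shell
# vxassist make concatvol 100m
# vxassist make stripevol 500m layout=stripe nstripe=3
# vxassist make raidvol 1g layout=raid5
```

When attributes are omitted, vxassist applies its defaults (for example, the stripe counts and stripe unit sizes listed in the defaults file shown later in this chapter).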
Volume Manager Operations Volume Manager Task Monitor Volume Manager Task Monitor The Volume Manager Task Monitor tracks the progress of system recovery by monitoring task creation, maintenance, and completion. The Task Monitor allows you to monitor task progress and to modify characteristics of tasks, such as pausing and recovery rate (for example, to reduce the impact on system performance). You can also monitor and modify the progress of the Online Relayout feature.
Volume Manager Operations Performing Online Backup Performing Online Backup Volume Manager provides snapshot backups of volume devices. This is done through the vxassist command and other commands. There are various procedures for doing backups, depending upon the requirements for integrity of the volume contents. These procedures have the same starting requirement: a plex that is large enough to store the complete contents of the volume.
Volume Manager Operations Performing Online Backup The online backup procedure is completed by running the vxassist snapshot command on a volume with a SNAPDONE mirror. This task detaches the finished snapshot (which becomes a normal mirror), creates a new normal volume and attaches the snapshot mirror to the snapshot volume. The snapshot then becomes a normal, functioning mirror and the state of the snapshot is set to ACTIVE.
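The backup procedure described above can be sketched as the following command sequence (volume names are illustrative):

```shell
# vxassist snapstart voldef
# vxassist snapshot voldef snapvoldef
```

The snapstart step creates the snapshot mirror, whose state eventually becomes SNAPDONE; the snapshot step then detaches it into the new snapshot volume. When the backup of the snapshot volume is complete, the snapshot volume can be removed with the vxedit -rf rm command.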
Volume Manager Operations Exiting the Volume Manager Support Tasks Exiting the Volume Manager Support Tasks When you have completed all of your disk administration activities, exit the Volume Manager Support Operations by selecting q from the vxdiskadm main menu. For a description of the vxdiskadm main menu and explanations of the menu options, see “vxdiskadm Main Menu” and “vxdiskadm Menu Description”.
Volume Manager Operations Online Relayout Online Relayout NOTE You may need an additional license to use this feature. Online Relayout allows you to convert any supported storage layout in the Volume Manager to any other, in place, with uninterrupted data access. You usually change the storage layout in the Volume Manager to change the redundancy or performance characteristics of the storage.
Volume Manager Operations Online Relayout How Online Relayout Works The VERITAS Online Relayout feature allows you to change storage layouts that you have already created in place, without disturbing data access. You can change the performance characteristics of a particular layout to suit changed requirements. For example, if a striped layout with a 128K stripe unit size is not providing optimal performance, change the stripe unit size of the layout by using the Relayout feature.
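For example, such a stripe unit size change might be made as follows (a sketch; the volume name and the 256K target size are assumptions, not values from this guide):

```shell
# vxassist relayout vol03 stripeunit=256k
```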
To be eligible for layout transformation, mirrored volume plexes must be identical in layout, with the same stripe width and number of columns. See Table 3-1, “Supported Layout Transformations.”
3. Not a Relayout, but a convert operation.
4. Changes mirroring to RAID-5 and/or stripe width/column changes.
5. Changes mirroring to RAID-5 and/or stripe width/column changes.
6. Changes stripe width/column and removes a mirror.
7. Adds columns.
8. Not a Relayout operation.
9. A convert operation.
10. Changes mirroring to RAID-5. See the vxconvert procedure.
11. Removes a mirror; not a Relayout operation.
12. Removes a mirror and adds striping.
13.
A striped-mirror plex is a striped plex on top of a mirrored volume, resulting in a single plex that has both mirroring and striping. A concatenated-mirror plex can be created in the same way. Online Relayout supports transformations to and from striped-mirror and concatenated-mirror plexes. NOTE Changing the number of mirrors during a transformation is not currently supported.
Volume Manager Operations Online Relayout be rendered sparse by Online Relayout. NOTE Online Relayout can be used only with volumes created with the vxassist command. Transformations Not Supported Transformation of log plexes is not supported. A snapshot of a volume when there is an Online Relayout operation running in the volume is not supported.
Volume Manager Operations Hot-Relocation Hot-Relocation NOTE You may need an additional license to use this feature. Hot-relocation allows a system to automatically react to I/O failures on redundant (mirrored or RAID-5) Volume Manager objects and restore redundancy and access to those objects. The Volume Manager detects I/O failures on objects and relocates the affected subdisks. The subdisks are relocated to disks designated as spare disks and/or free space within the disk group.
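Disks are designated as hot-relocation spares with the vxedit command; a hedged example (the disk name is illustrative):

```shell
# vxedit set spare=on disk01
# vxedit set spare=off disk01
```

The first command marks disk01 as a spare for its disk group; the second removes the designation.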
When a failure occurs, it triggers a hot-relocation attempt. A successful hot-relocation process involves:
Step 1. Detecting Volume Manager events resulting from the failure of a disk, plex, or RAID-5 subdisk.
Step 2. Notifying the system administrator (and other designated users) of the failure and identifying the affected Volume Manager objects. This is done through electronic mail.
Step 3.
Volume Manager Operations Hot-Relocation When selecting space for relocation, hot-relocation preserves the redundancy characteristics of the Volume Manager object that the relocated subdisk belongs to. For example, hot-relocation ensures that subdisks from a failed plex are not relocated to a disk containing a mirror of the failed plex. If redundancy cannot be preserved using any available spare disks and/or free space, hot-relocation does not take place.
Volume Manager Operations Hot-Relocation Volume Manager provides the vxunreloc utility, which can be used to restore the system to the same configuration that existed before the disk failure. The vxunreloc utility allows you to move the hot-relocated subdisks back onto a disk that was replaced due to a disk failure.
Volume Manager Operations Hot-Relocation want to move the hot-relocated subdisks to disk05 where some subdisks already reside. Use the force option to move the hot-relocated subdisks to disk05, but not to the exact offsets: # vxunreloc -g newdg -f -n disk05 disk01 Example 4: If a subdisk was hot-relocated more than once due to multiple disk failures, it can still be unrelocated back to its original location.
unrelocate operation will fail and none of the subdisks will be moved. When the vxunreloc program moves the hot-relocated subdisks, it moves them to the original offsets. However, if subdisks already exist that occupy part or all of the area on the destination disk, the vxunreloc utility fails.
Volume Manager Operations Hot-Relocation records is still marked as UNRELOC because the cleanup phase is never executed. If the system goes down after the new subdisks are made on the destination, but before they are moved back, the vxunreloc program can be executed again after the system comes back. As described above, when a new subdisk is created, the vxunreloc program sets the comment field of the subdisk as UNRELOC.
Volume Manager Operations Volume Resynchronization Volume Resynchronization When storing data redundantly, using mirrored or RAID-5 volumes, the Volume Manager ensures that all copies of the data match exactly. However, under certain conditions (usually due to complete system failures), some redundant data on a volume can become inconsistent or unsynchronized. The mirrored data is not exactly the same as the original data.
Volume Manager Operations Volume Resynchronization by placing the volume in recovery mode (also called read-writeback recovery mode). Resynchronization of data in the volume is done in the background. This allows the volume to be available for use while recovery is taking place. The process of resynchronization can be expensive and can impact system performance. The recovery process reduces some of this impact by spreading the recoveries to avoid stressing a specific disk or controller.
Volume Manager Operations Dirty Region Logging Dirty Region Logging NOTE You must license the VERITAS Volume Manager product to use this feature. Dirty Region Logging (DRL) is an optional property of a volume, used to provide a speedy recovery of mirrored volumes after a system failure. DRL keeps track of the regions that have changed due to I/O writes to a mirrored volume. DRL uses this information to recover only the portions of the volume that need to be recovered.
Volume Manager Operations Dirty Region Logging also be created manually by creating a log subdisk and associating it with a plex. Then the plex can contain both a log subdisk and data subdisks. Only a limited number of bits can be marked dirty in the log at any time. The dirty bit for a region is not cleared immediately after writing the data to the region. Instead, it remains marked as dirty until the corresponding volume region becomes the least recently used.
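As an illustrative sketch, a log subdisk is most easily managed with vxassist (the volume name and size are examples):

```shell
# vxassist make vol03 500m layout=mirror,log
# vxassist addlog vol03
```

The first command creates a mirrored volume with a DRL log from the start; the second adds a log to an already-existing volume.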
Volume Manager Operations FastResync (Fast Mirror Resynchronization) FastResync (Fast Mirror Resynchronization) NOTE You may need an additional license to use this feature. The FastResync (also called Fast Mirror Resynchronization, which is abbreviated as FMR) feature performs quick and efficient resynchronization of stale mirrors by increasing the efficiency of the VxVM snapshot mechanism to better support operations such as backup and decision support.
Volume Manager Operations FastResync (Fast Mirror Resynchronization) Resynchronization), and the second is to extend the snapshot model (Fast Mirror Reconnect) to provide a method by which snapshots can be refreshed and re-used rather than discarded. Fast Mirror Resynchronization Component The former enhancement, Fast Mirror Resynchronization (FMR), requires keeping track of data store updates missed by mirrors that were unavailable at the time the updates were applied.
Volume Manager Operations FastResync (Fast Mirror Resynchronization) with Release 3.1, the snapshot command behaves as before with the exception that it creates an association between the original volume and the snap volume. A new command, vxassist snapback, leverages this association to expediently return the snapshot plex (MSnap) to the volume from which it was snapped (in this example, VPri).
missed when a mirror is offline, detached, or snapshotted, and then applying only those updates when the mirror returns, considerably reduces the time to resynchronize the volume. The basis for this change tracking is the use of a bitmap. Each bit in the bitmap represents a contiguous region (an extent) of a volume’s address space. The size of this contiguous region is called the region size.
Volume Manager Operations FastResync (Fast Mirror Resynchronization) is used to reattach the snapshot plex. If FMR is enabled before the snapshot is taken and is not disabled at any time before the snapshot is complete, then the FMR delta changes reflected in the FMR bitmap are used to resynchronize the volume during the snapback. To make it easier to create snapshots of several volumes at the same time, the snapshot option has been enhanced to accept more than one volume and a naming scheme has been added.
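The snapshot and snapback sequence described above can be sketched as follows (volume names are illustrative; the fmr attribute follows the VxVM 3.1 vxvol manual page and assumes an FMR license):

```shell
# vxvol set fmr=on vol01
# vxassist snapstart vol01
# vxassist snapshot vol01 snapvol01
# vxassist snapback snapvol01
```

Because FMR was enabled before the snapshot was taken, the snapback step resynchronizes using only the delta changes recorded in the FMR bitmap.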
FMR and Writable Snapshots
One of two options is used to track changes to a writable snapshot:
• create a separate map that tracks changes to the snapshot volume
• update the map of the parent of the snapshot volume. Use this shortcut method only if there are few updates to the snapshot volume, such as in backup and DSS (decision support system) applications.
For VxVM 3.1, the latter method is implemented; i.e.
Volume Manager Operations Volume Manager Rootability Volume Manager Rootability Rootability is the term used to indicate that the logical volumes containing the root file system and the system swap area are under Volume Manager control. Normally the Volume Manager is started following a successful boot after the operating system has passed control to the initial user mode process.
Volume Manager Operations Volume Manager Rootability Boot Time Volume Restrictions The volumes that need to be available at boot time have some very specific restrictions on their configuration. These restrictions include their names, the disk group they are in, their volume usage types, and they must be single subdisk, contiguous volumes. These restrictions are detailed below: • Disk Group All volumes on the boot disk must be in the rootdg disk group.
Volume Manager Operations Dynamic Multipathing (DMP) Dynamic Multipathing (DMP) NOTE You may need an additional license to use this feature. On some systems, the Volume Manager supports multiported disk arrays. It automatically recognizes multiple I/O paths to a particular disk device within the disk array. The Dynamic Multipathing feature of the Volume Manager provides greater reliability by providing a path failover mechanism.
Volume Manager Operations Dynamic Multipathing (DMP) Load Balancing To provide load balancing across paths, DMP follows the balanced path mechanism for active/active disk arrays. Load balancing makes sure that I/O throughput can be increased by utilizing the full bandwidth of all paths to the maximum. Sequential IOs starting within a certain range will be sent down the same path in order to optimize IO throughput by utilizing the effect of disk track caches.
Step 5. Run the following script:

# /etc/vx/bin/vxdmpdis

If all the above steps complete successfully, reboot the system. When the system comes up, DMP should be removed completely from the system. Verify that DMP was removed by running the vxdmpadm command. The following message is displayed: vxvm:vxdmpadm: ERROR: vxdmp module is not loaded on the system. Command invalid.
Volume Manager Operations Dynamic Multipathing (DMP) maintenance of controllers attached to the host or a disk array supported by the Volume Manager. I/O operations to the host I/O controller can be turned on after the maintenance task is completed. This once again enables I/Os to go through this controller. This operation can be accomplished using the vxdmpadm(1M) utility provided with Volume Manager.
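For example, I/O through a controller can be disabled before maintenance and re-enabled afterwards (a sketch; the controller name c2 is an assumption):

```shell
# vxdmpadm disable ctlr=c2
# vxdmpadm enable ctlr=c2
```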
VxSmartSync Recovery Accelerator
The VxSmartSync™ Recovery Accelerator is available for some systems. VxSmartSync for Mirrored Oracle® Databases is a collection of features that speed up the resynchronization process (known as resilvering) for volumes used with the Oracle Universal Database™.
Volume Manager Operations VxSmartSync Recovery Accelerator Because the database keeps its own logs, it is not necessary for Volume Manager to do logging. Data volumes should therefore be configured as mirrored volumes without dirty region logs. In addition to improving recovery time, this avoids any run-time I/O overhead due to DRL, which improves normal database write access. Redo Log Volume Configuration A redo log is a log of changes to the database data.
Volume Manager Operations Common Volume Manager Commands Common Volume Manager Commands vxedit Command The common command used to remove or rename Volume Manager objects (volumes, plexes, and subdisks) is the vxedit command. The vxedit command has two functions: • it allows you to modify certain records in the volume management databases. Only fields that are not volume usage-type-dependent can be modified. • it can remove or rename Volume Manager objects.
Volume Manager Operations Common Volume Manager Commands started, the size and progress of the operation, and the state and rate of progress of the operation. The administrator can change the state of a task, giving coarse-grained control over the progress of the operation. For those operations that support it, the rate of progress of the task can be changed, giving more fine-grained control over the task. Every task is given a unique task identifier.
Volume Manager Operations Common Volume Manager Commands far. set The set operation is used to change modifiable parameters of a task. Currently, there is only one modifiable parameter for tasks: the slow attribute, which represents a throttle on the task progress. The larger the slow value, the slower the progress of the task and the fewer system resources it consumes in a given time.
To list all tasks on the system that are currently paused, use the following command:

# vxtask -p list

The vxtask -p list command is used as follows:

# vxtask pause 167
# vxtask -p list
TASKID  PTID  TYPE/STATE  PCT   PROGRESS
167           ATCOPY/P    27.
Volume Manager Operations Common Volume Manager Commands The vxassist command typically takes the following form: # vxassist keyword volume_name [attributes...] Select the specific action to perform by specifying an operation keyword as the first argument on the command line. For example, the keyword for creating a new volume is make. You can create a new volume by entering: # vxassist make volume_name length The first argument after any vxassist command keyword is always a volume name.
Some of the advantages of using the vxassist command include:
• The use of the vxassist command involves only one step (command) on the part of the user.
• You are required to specify only minimal information to the vxassist command, yet you can optionally specify additional parameters to modify its actions.
• The vxassist command tasks result in a set of configuration changes that either succeed or fail as a group, rather than individually.
Volume Manager Operations Common Volume Manager Commands they are not listed on the command line. Tunables listed on the command line override those specified elsewhere. Tunable parameters are as follows: • Internal defaults—The built-in defaults are used when the value for a particular tunable is not specified elsewhere (on the command line or in a defaults file). • System-wide defaults file—The system-wide defaults file contains default values that you can alter.
min_nstripe=2
# for RAID-5, by default create between 3 and 8 stripe columns
max_nraid5stripe=8
min_nraid5stripe=3
# create 1 log copy for both mirroring and RAID-5 volumes, by default
nregionlog=1
nraid5log=1
# by default, limit mirroring log lengths to 32Kbytes
max_regionloglen=32k
# use 64K as the default stripe unit size for regular volumes
stripe_stwid=64k
# use 16K as the default stripe unit size for RAID-5 volumes
raid5_stwid=16k

vxdctl Daemon
active/passive disk arrays. You can change the path type from primary to secondary and vice versa through the utilities provided by disk array vendors. For more information, see the vxdctl (1M) manual page. vxmake Command You can use the vxmake command to add a new volume, plex, or subdisk to the set of objects managed by Volume Manager. The vxmake command adds a new record for that object to the Volume Manager configuration database.
disk4-02:1/8000,disk4-03:1/16000
sd ramd1-01 disk=ramd1 len=640 comment="Hot spot for dbvol"
plex db-02 sd=ramd1-01:40320
vol db usetype=gen plex=db-01,db-02 readpol=prefer prefname=db-02 comment="Uses mem1 for hot spot in last 5m"

This description file specifies a volume with two plexes. The first plex has five subdisks on physical disks. The second plex is preferred and has one subdisk on a volatile memory disk.
• move the contents of a subdisk to another subdisk
• split one subdisk into two subdisks that occupy the same space as the original
• join two contiguous subdisks into one

NOTE Some vxsd tasks can take a large amount of time to complete. For more information, see the vxsd (1M) manual page.

vxmend Command
The vxmend command performs Volume Manager usage-type-specific tasks on volumes, plexes, and subdisks.
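The vxsd subdisk operations listed above can be sketched as follows (subdisk names and the 4000-sector split size are illustrative):

```shell
# vxsd mv disk02-01 disk03-01
# vxsd -s 4000 split disk02-01 disk02-02 disk02-03
# vxsd join disk02-02 disk02-03 disk02-01
```

The first command moves the contents of disk02-01 to disk03-01; the second splits disk02-01 into two subdisks; the third joins them back into one.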
Volume Manager Operations Common Volume Manager Commands starting at system startup time. You can stop hot-relocation at any time by killing the vxrelocd process (this should not be done while a hot-relocation attempt is in progress). You can make some minor changes to the way the vxrelocd program behaves by either editing the vxrelocd line in the startup file that invokes the vxrelocd program (/sbin/rc2.d/S095vxvm-recover) or killing the existing vxrelocd process and restarting it with different options.
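For example, the vxrelocd invocation in the startup file can be edited along these lines (user1 and the slow value are illustrative assumptions):

```shell
# vxrelocd root &
# vxrelocd root user1 &
# vxrelocd -o slow=500 root &
```

The first form is the default, mailing failure notifications to root; the second adds another mail recipient; the third inserts a delay between recovery operations to reduce the impact on system performance.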
Volume Manager Operations Common Volume Manager Commands Before the vxrelocd program attempts relocation, a snapshot of the current configuration is saved in /etc/vx/saveconfig.d. This option specifies the maximum number of configurations to keep for each disk group. The default is 32. vxstat Command The vxstat command prints statistics about Volume Manager objects and block devices under Volume Manager control.
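A hedged sketch of typical vxstat usage (the volume name is illustrative):

```shell
# vxstat -g rootdg
# vxstat -g rootdg -r
# vxstat -g rootdg vol01
```

The first command displays statistics for all volumes in rootdg; the second resets the counters; the third then reports on a single volume so that only activity since the reset is shown.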
Volume Manager Operations Common Volume Manager Commands original dm name and the original offset are saved in the subdisk records.
vxunreloc command is run immediately after the replacement. Restarting the vxunreloc Program After Errors Internally, the vxunreloc program moves subdisks in three phases. The first phase creates as many subdisks on the specified destination disk as there are subdisks to be unrelocated. When the subdisks are made, the vxunreloc program fills in the comment field in the subdisk record with the string UNRELOC for identification.
Volume Manager Operations Common Volume Manager Commands The cleanup phase is performed with one transaction. The vxunreloc command resets the comment field to a NULL string for all subdisks marked as UNRELOC that reside on the destination disk. This includes clean-up for those subdisks that were unrelocated in any previous invocation of the vxunreloc command. vxvol Command The vxvol command performs Volume Manager tasks on volumes.
4 Disk Tasks
Disk Tasks Introduction Introduction This chapter describes the operations for managing disks used by the Volume Manager. NOTE Most Volume Manager commands require superuser or other appropriate privileges.
• “Reserving Disks”
• “Displaying Disk Information”
Disk Devices
Two classes of disk devices can be used with the Volume Manager: standard devices and special devices. In Volume Manager, special devices are physical disks connected to the system that are represented as metadevices with one or more physical access paths. The access paths depend on whether the disk is a single disk or part of a multiported disk array connected to the system.
Disk Tasks Disk Devices • sliced—the public and private regions are on different disk partitions. • simple—the public and private regions are on the same disk area (with the public area following the private area). • nopriv—there is no private region (only a public region for allocating subdisks). NOTE For HP-UX 11i, disks (except the root disk) are treated and accessed as entire physical disks, so a device name of the form c#t#d# is used. On HP-UX 11i Version 1.
Disk Tasks Disk Utilities Disk Utilities The Volume Manager provides four interfaces that you can use to manage disks: • the graphical user interface • a set of command-line utilities • the vxdiskadm menu-based interface • the unrelocate utility Utilities discussed in this chapter include: • vxdiskadm—the vxdiskadm utility is the Volume Manager Support Operations menu interface. The vxdiskadm utility provides a menu of disk operations.
Disk Tasks Disk Utilities subdisk level and then take necessary action to make the object available again. This mechanism detects I/O failures in a subdisk, relocates the subdisk, and recovers the plex associated with the subdisk. After the disk has been replaced, Volume Manager provides a utility, vxunreloc, that allows you to restore the system back to the configuration that existed before the disk failure.
vxdiskadm Main Menu
The vxdiskadm main menu is as follows:

Volume Manager Support Operations
Menu: VolumeManager/Disk
 1  Add or initialize one or more disks
 2  Remove a disk
 3  Remove a disk for replacement
 4  Replace a failed or removed disk
 5  Mirror volumes on a disk
 6  Move volumes from a disk
 7  Enable access to (import) a disk group
 8  Remove access to (deport) a disk group
 9  Enable (online) a disk device
 10 Disable (offline) a disk device
 11 Mark a disk as a spare for a disk group
Disk Tasks vxdiskadm Menu Description vxdiskadm Menu Description The vxdiskadm menu provides access to the following tasks. The numbers correspond to the items listed in the main menu: 1. Add or initialize one or more disks. You can add formatted disks to the system. SCSI disks are already formatted. For other disks, see the manufacturer’s documentation for formatting instructions. You are prompted for the disk device(s).
Disk Tasks vxdiskadm Menu Description device to use as a replacement. You can choose an uninitialized disk, in which case the disk will be initialized, or you can choose a disk that you have already initialized using the Add or initialize a disk menu operation. 5. Mirror volumes on a disk. You can mirror volumes on a disk. These volumes can be mirrored to another disk with available space. Creating mirror copies of volumes in this way protects against data loss in case of disk failure.
Disk Tasks vxdiskadm Menu Description removing the disk. 9. Enable (online) a disk device. If you move a disk from one system to another during normal system operation, the Volume Manager will not recognize the disk automatically. Use this menu task to tell the Volume Manager to scan the disk to identify it, and to determine if this disk is part of a disk group.
Disk Tasks vxdiskadm Menu Description 14. Unrelocate subdisks back to a disk. VxVM hot-relocation allows the system to automatically react to IO failures on a redundant VxVM object at the subdisk level and take necessary action to make the object available again. This mechanism detects I/O failures in a subdisk, relocates the subdisk, and recovers the plex associated with the subdisk.
Disk Tasks Initializing Disks Initializing Disks There are two levels of initialization for disks in the Volume Manager: Step 1. Formatting of the disk media itself. This must be done outside of the Volume Manager. Step 2. Storing identification and configuration information on the disk for use by the Volume Manager. Volume Manager interfaces are provided to step through this level of disk initialization.
Disk Tasks Initializing Disks NOTE If you are adding an uninitialized disk, warning and error messages are displayed on the console during the vxdiskadd command. Ignore these messages. These messages should not appear after the disk has been fully initialized; the vxdiskadd command displays a success message when the initialization completes. At the following prompt, enter y (or press Return) to continue: Add or initialize disks Menu: VolumeManager/Disk/AddDisks Here is the disk selected.
Disk Tasks Initializing Disks enter a special disk name). At the following prompt, enter n to indicate that this disk should not be used as a hot-relocation spare: Add disk as a spare disk for rootdg? [y,n,q,?] (default: n) n When the vxdiskadm program prompts whether to exclude this disk from hot-relocation use, enter n (or press Return).
Adding a Disk to Volume Manager
You must place a disk under Volume Manager control, or add it to a disk group, before you can use the disk space for volumes. If the disk was previously in use, but not under Volume Manager control, you can preserve existing data on the disk while still letting the Volume Manager take control of the disk. If you want to bring all non-LVM disks under Volume Manager control, they are treated as fresh disks.
Disk Tasks Adding a Disk to Volume Manager disk group, or leave the disk available for use by future add or replacement operations. To create a new disk group, select a disk group name that does not yet exist. To leave the disk available for future use, specify a disk group name of “none”. Which disk group [,none,list,q,?] (default: rootdg) Step 4.
Disk Tasks Placing Disks Under Volume Manager Control Placing Disks Under Volume Manager Control When you add a disk to a system that is running Volume Manager, you need to put the disk under Volume Manager control so that the Volume Manager can control the space allocation on the disk. Unless another disk group is specified, Volume Manager places new disks in the default disk group, rootdg. When you are asked to name a disk group, enter none instead of selecting rootdg or typing in a disk group name.
Disk Tasks Placing Disks Under Volume Manager Control The sections that follow provide detailed examples of how to use the vxdiskadm utility to place disks under Volume Manager control in various ways and circumstances. NOTE A disk must be formatted (using the mediainit command, for example) or added to the system (using the diskadd command) before it can be placed under Volume Manager control.
Disk Tasks Placing Disks Under Volume Manager Control More than one disk or pattern may be entered at the prompt.
Disk Tasks Placing Disks Under Volume Manager Control yet been added to Volume Manager control. These disks may or may not have been initialized before. The disks that are listed with a disk name and disk group cannot be used for this task, as they are already under Volume Manager control. Step 3. To continue with the operation, enter y (or press Return) at the following prompt: Here is the disk selected. Output format: [Device_Name] c1t2d0 Continue operation? [y,n,q,?] (default: y) y Step 4.
Disk Tasks Placing Disks Under Volume Manager Control Step 8. To continue with the operation, enter y (or press Return) at the following prompt: The selected disks will be added to the disk group rootdg with default disk names. c1t2d0 Continue with operation? [y,n,q,?] (default: y) y Step 9. If the disk was used for the file system earlier, then the vxdiskadm program gives you the following choices: The following disk device appears to contain a currently unmounted file system.
Disk Tasks Placing Disks Under Volume Manager Control When initializing multiple disks at one time, it is possible to exclude certain disks or certain controllers. To exclude disks, list the names of the disks to be excluded in the file /etc/vx/disks.exclude before the initialization. You can exclude all disks on specific controllers from initialization by listing those controllers in the file /etc/vx/cntrls.exclude. Place multiple disks under Volume Manager control at one time as follows: Step 1.
Disk Tasks Placing Disks Under Volume Manager Control If you do not know the address (device name) of the disk to be added, enter list at the prompt for a complete listing of available disks. Step 3. To continue the operation, enter y (or press Return) at the following prompt: Here are the disks selected. Output format: [Device_Name] c3t0d0 c3t1d0 c3t2d0 c3t3d0 Continue operation? [y,n,q,?] (default: y) y Step 4.
Disk Tasks Placing Disks Under Volume Manager Control Step 8. To continue with the operation, enter y (or press Return) at the following prompt: The selected disks will be added to the disk group rootdg with default disk names. c3t0d0 c3t1d0 c3t2d0 c3t3d0 Continue with operation? [y,n,q,?] (default: y) y Step 9. If the disk was used for the file system earlier, then the vxdiskadm program gives you the following choices: The following disk device appears to contain a currently unmounted file system.
Disk Tasks Placing Disks Under Volume Manager Control Migration Guide—HP-UX for details.
Moving Disks To move a disk between disk groups, remove the disk from one disk group and add it to the other. For example, to move the physical disk c0t3d0 (attached with the disk name disk04) from disk group rootdg and add it to disk group mktdg, use the following commands:
# vxdg rmdisk disk04
# vxdg -g mktdg adddisk mktdg02=c0t3d0
NOTE This procedure does not save the configurations or data on the disks. You can also move a disk by using the vxdiskadm command.
Disk Tasks Enabling a Physical Disk Enabling a Physical Disk If you move a disk from one system to another during normal system operation, the Volume Manager does not recognize the disk automatically. The enable disk task enables Volume Manager to identify the disk and to determine if this disk is part of a disk group. Also, this task re-enables access to a disk that was disabled by either the disk group deport task or the disk device disable (offline) task.
device (y) or return to the vxdiskadm main menu (n): Enable another device? [y,n,q,?] (default: n)
Disk Tasks Detecting Failed Disks Detecting Failed Disks NOTE The Volume Manager hot-relocation feature automatically detects disk failures and notifies the system administrator of the failures by electronic mail. If hot-relocation is disabled or you miss the electronic mail, you can see disk failures through the output of the vxprint command or by using the graphical user interface to look at the status of the disks. You can also see driver error messages on the console or in the system messages file.
# vxstat -s -ff home-02 src-02
A typical output display is as follows:
                      FAILED
TYP NAME         READS  WRITES
sd  disk01-04        0       0
sd  disk01-06        0       0
sd  disk02-03        1       0
sd  disk02-04        1       0
This display indicates that the failures are on disk02 (and that subdisks disk02-03 and disk02-04 are affected). Hot-relocation automatically relocates the affected subdisks and initiates any necessary recovery procedures.
Disk Tasks Detecting Failed Disks disk02 This message shows that disk02 was detached by a failure. When a disk is detached, I/O cannot get to that disk. The plexes home-02, src-02, and mkting-01 are also detached because of the disk failure. Again, the problem can be a cabling error. If the problem is not a cabling error, replace the disk.
Disk Tasks Disabling a Disk Disabling a Disk You can take a disk offline. If the disk is corrupted, you need to take it offline and remove it. You may be moving the physical disk device to another location to be connected to another system. To take a disk offline, first remove it from its disk group, and then use the following procedure: Step 1. Select menu item 10 (Disable (offline) a disk device) from the vxdiskadm main menu. Step 2.
Disk Tasks Replacing a Disk Replacing a Disk If a disk fails, you need to replace that disk with another. This task requires disabling and removing the failed disk and installing a new disk in its place. If a disk was replaced due to a disk failure and you wish to move hot-relocate subdisks back to this replaced disk, see Chapter 6, Volume Tasks, for information on moving hot-relocate subdisks. To replace a disk, use the following procedure: Step 1.
disk. Choose a device, or select “none” [,none,q,?] (default: c1t1d0)
Step 4. At the following prompt, press Return to continue: The requested operation is to use the initialized device c1t0d0 to replace the removed or failed disk disk02 in disk group rootdg. Continue with operation? [y,n,q,?] (default: y)
The vxdiskadm program displays the following success messages: Replacement of disk disk02 in group rootdg with disk device c1t0d0 completed successfully. Step 5.
Disk Tasks Replacing a Disk You can choose an uninitialized disk, in which case the disk will be initialized, or you can choose a disk that you have already initialized using the Add or initialize a disk menu operation. Select a removed or failed disk [,list,q,?] disk02 Step 3. The vxdiskadm program displays the device names of the disk devices available for use as replacement disks. Your system may use a device name that differs from the examples.
Disk Tasks Removing Disks Removing Disks You can remove a disk from a system and move it to another system if the disk is failing or has failed. Before removing the disk from the current system, you must: Step 1. Unmount any file systems on the volumes. Step 2. Stop the volumes on the disk. Step 3. Move the volumes to other disks or back up the volumes. To move a volume, mirror the volume on one or more other disks, then remove the original copy of the volume.
Disk Tasks Removing Disks Step 7. At the following verification prompt, press Return to continue: Requested operation is to remove disk disk01 from group rootdg. Continue with operation? [y,n,q,?] (default: y) The vxdiskadm utility removes the disk from the disk group and displays the following success message: Removal of disk disk01 is complete. You can now remove the disk or leave it on your system as a replacement. Step 8.
Disk Tasks Removing Disks remove it from Volume Manager control completely, as follows: # vxdisk rm devicename To remove c1t0d0 from Volume Manager control, use one of the following commands. On systems without a bus, use the command: # vxdisk rm c1t0d0 On systems with a bus, use the command: # vxdisk rm c1b0t0d0 Removing a Disk with Subdisks You can remove a disk on which some subdisks are defined. For example, you can consolidate all the volumes onto one disk.
Disk Tasks Removing Disks as free space for the creation of Volume Manager objects within its disk group. If necessary, you can free a spare disk for general use by removing it from the pool of hot-relocation disks. To determine which disks are currently designated as spares, use the following command: # vxdisk list The output of this command lists any spare disks with the spare flag.
Disk Tasks Taking a Disk Offline Taking a Disk Offline There are instances when you must take a disk offline. If a disk is corrupted, you must disable it and remove it. You must also disable a disk before moving the physical disk device to another location to be connected to another system.
Disk Tasks Adding a Disk to a Disk Group Adding a Disk to a Disk Group You can add a new disk to an already established disk group. For example, the current disks have insufficient space for the application or work group requirements, especially if these requirements have changed. To add an initialized disk to a disk group, use the following command: # vxdiskadd devname For example, to add device c1t1d0 to rootdg, use the following procedure: Step 1.
Disk Tasks Adding a Disk to a Disk Group Step 3. At the following prompt, either press Return to accept the default disk name or enter a disk name: Use a default disk name for the disk? [y,n,q,?] (default: y) Step 4. When the vxdiskadd program asks whether this disk should become a hot-relocation spare, enter n (or press Return): Add disk as a spare disk for rootdg? [y,n,q,?] (default: n) n Step 5.
Disk Tasks Adding a Disk to a Disk Group Manager, then you do not need to reinitialize the disk device. Output format: [Device_Name] c1t1d0 Reinitialize this device? [y,n,q,?] (default: y) y Messages similar to the following now confirm that this disk is being reinitialized for Volume Manager use. You may also be given the option of performing surface analysis on some systems. Initializing device c1t1d0.
Disk Tasks Adding a VM Disk to the Hot-Relocation Pool Adding a VM Disk to the Hot-Relocation Pool NOTE You may need an additional license to use this feature. Hot-relocation allows the system to automatically react to I/O failure by relocating redundant subdisks to other disks. Hot-relocation then restores the affected Volume Manager objects and data. If a disk has already been designated as a spare in the disk group, the subdisks from the failed disk are relocated to the spare disk.
Disk Tasks Adding a VM Disk to the Hot-Relocation Pool Menu: VolumeManager/Disk/MarkSpareDisk Use this operation to mark a disk as a spare for a disk group. This operation takes, as input, a disk name. This is the same name that you gave to the disk when you added the disk to the disk group. Enter disk name [,list,q,?] disk01 Step 3.
Disk Tasks Removing a VM Disk From the Hot-Relocation Pool Removing a VM Disk From the Hot-Relocation Pool NOTE You may need an additional license to use this feature. While a disk is designated as a spare, the space on that disk is not used as free space for the creation of Volume Manager objects within its disk group. If necessary, you can free a spare disk for general use by removing it from the pool of hot-relocation disks.
Turn-off spare flag on another disk? [y,n,q,?] (default: n)
Excluding a Disk from Hot-Relocation Use NOTE You may need an additional license to use this feature. To exclude a disk from hot-relocation use, use the following command:
# vxedit -g disk_group set nohotuse=on disk_name
Alternatively, from the vxdiskadm main menu, use the following procedure: Step 1. Select menu item 15 (Exclude a disk from hot-relocation use) from the vxdiskadm main menu. Step 2.
Including a Disk for Hot-Relocation Use NOTE You may need an additional license to use this feature. Hot-relocation automatically uses free space in case the spare space is not sufficient to relocate failed subdisks. You can limit this use of free space by specifying which free disks hot-relocation should not touch.
Step 3.
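The command-line equivalent of this vxdiskadm task is to turn off the nohotuse flag with vxedit; the disk group and disk names below are illustrative:
# vxedit -g rootdg set nohotuse=off disk01
This reverses an exclusion set with nohotuse=on and makes the disk’s free space available to hot-relocation again.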
Disk Tasks Reinitializing a Disk Reinitializing a Disk This section describes how to reinitialize a disk that has previously been initialized for Volume Manager use. If the disk you want to add has been used before, but not with Volume Manager, use one of the following procedures: • Convert the LVM disk and preserve its information (see the VERITAS Volume Manager Migration Guide for more details.) • Reinitialize the disk, allowing the Volume Manager to configure the disk for Volume Manager.
4, target 2 c3t4d0: a single disk
Select disk devices to add: [<pattern-list>,all,list,q,?] c1t3d0
where <pattern-list> can be a single disk, or a series of disks and/or controllers (with optional targets). If <pattern-list> consists of multiple items, those items must be separated by white space. If you do not know the address (device name) of the disk to be added, enter l or list at the prompt for a complete listing of available disks. Step 3.
n) n Step 7. When the vxdiskadm program prompts whether to exclude this disk from hot-relocation use, enter n (or press Return). Exclude disk from hot-relocation use? [y,n,q,?] (default: n) n Step 8. To continue with the operation, enter y (or press Return) at the following prompt: The selected disks will be added to the disk group rootdg with default disk names. c1t3d0 Continue with operation? [y,n,q,?] (default: y) y Step 9.
Disk Tasks Renaming a Disk Renaming a Disk If you do not specify a Volume Manager name for a disk, the Volume Manager gives the disk a default name when you add the disk to Volume Manager control. The Volume Manager name is used by the Volume Manager to identify the location of the disk or the disk type.
Disk Tasks Reserving Disks Reserving Disks By default, the vxassist command allocates space from any disk that has free space. You can reserve a set of disks for special purposes, such as to avoid general use of a particularly slow or a particularly fast disk.
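For example, to reserve a disk so that vxassist does not allocate from it unless the disk is named explicitly, set the reserve flag with the vxedit command (the disk name here is illustrative):
# vxedit -g rootdg set reserve=on disk03
After this, vxassist ignores disk03 for general space allocation, but a command that names the disk directly can still use it:
# vxassist make vol03 20m disk03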
Disk Tasks Displaying Disk Information Displaying Disk Information Before you use a disk, you need to know if it has been initialized and placed under Volume Manager control. You also need to know if the disk is part of a disk group, because you cannot create volumes on a disk that is not part of a disk group. The vxdisk list command displays device names for all recognized disks, the disk names, the disk group names associated with each disk, and the status of each disk.
Disk Tasks Displaying Disk Information disk: name=disk04 id=962923652.362193.zort timeout: 30 group: name=rootdg id=962212937.1025.zort info: privoffset=128 flags: online ready private autoconfig autoimport imported pubpaths: block=/dev/vx/dmp/c1t0d3 char=/dev/vx/rdmp/c1t0d3 version: 2.1 iosize: min=1024 (bytes) max=64 (blocks) public: slice=0 offset=1152 len=4101723 private: slice=0 offset=128 len=1024 update: time=962923719 seqno=0.
Disk Tasks Displaying Disk Information # vxdisk list c8t15d0 The output is as follows: Device: c8t15d0 devicetag: c8t15d0 type: simple hostid: coppy disk: name=disk06 id=963453688.1097.coppy timeout: 30 group: name=rootdg id=963453659.1025.coppy info: privoffset=128 flags: online ready private autoimport imported pubpaths: block=/dev/vx/dmp/c8t15d0 char=/dev/vx/rdmp/c8t15d0 version: 2.
Disk Tasks Displaying Disk Information The type information is not present for disks on active/active type disk arrays because there is no concept of primary and secondary paths to disks on these disk arrays. Displaying Disk Information with the vxdiskadm Program Displaying disk information shows you which disks are initialized, to which disk groups they belong, and the disk status.
main menu.
5 Disk Group Tasks
Introduction This chapter describes the operations for managing disk groups. NOTE Most Volume Manager commands require superuser or other appropriate privileges.
Disk Group Tasks Disk Groups Disk Groups Disks are organized by the Volume Manager into disk groups. A disk group is a named collection of disks that share a common configuration. Volumes are created within a disk group and are restricted to using disks within that disk group. A system with the Volume Manager installed has the default disk group, rootdg. By default, operations are directed to the rootdg disk group. The system administrator can create additional disk groups as necessary.
Disk Group Tasks Disk Groups changed. You can add a disk to a disk group by following the steps required to add a disk. See Chapter 4, Disk Tasks,.
Disk Group Tasks Disk Group Utilities Disk Group Utilities The Volume Manager provides a menu interface, vxdiskadm, and a command line utility, vxdg, to manage disk groups. These utilities are described as follows: • The vxdiskadm utility is the Volume Manager Support Operations menu interface. This utility provides a menu of disk operations. Each entry in the main menu leads you through a particular task by providing you with information and prompts.
Disk Group Tasks Creating a Disk Group Creating a Disk Group Data related to a particular set of applications or a particular group of users may need to be made accessible on another system. Examples of this are: • A system has failed and its data needs to be moved to other systems. • The work load must be balanced across a number of systems.
Disk Group Tasks Creating a Disk Group the operation. The selected disks may also be added to a disk group as spares. The selected disks may also be initialized without adding them to a disk group leaving the disks available for use as replacement disks. More than one disk or pattern may be entered at the prompt.
Disk Group Tasks Creating a Disk Group disk available for future use, specify a disk group name of “none”. Which disk group [,none,list,q,?] (default: rootdg) anotherdg Step 5. The vxdiskadm utility confirms that no active disk group currently exists with the same name and prompts for confirmation that you really want to create this new disk group: There is no active disk group named anotherdg. Create a new group named anotherdg? [y,n,q,?] (default: y) y Step 6.
Disk Group Tasks Creating a Disk Group utility gives you the following choices: The following disk device appears to have already been initialized for vxvm use. c1t2d0 Are you sure you want to re-initialize this disk [y,n,q,?] (default: n) Messages similar to the following confirm that this disk is being reinitialized for Volume Manager use: Initializing device c1t2d0. Creating a new disk group named anotherdg containing the disk device c1t2d0 with the name another01. Step 11.
Disk Group Tasks Renaming a Disk Group Renaming a Disk Group Only one disk group of a given name can exist per system. It is not possible to import or deport a disk group when the target system already has a disk group of the same name. To avoid this problem, the Volume Manager allows you to rename a disk group during import or deport. For example, because every system running the Volume Manager must have a single rootdg default disk group, importing or deporting rootdg across systems is a problem.
Disk Group Tasks Renaming a Disk Group this command: # vxdg -tC -n newdg_name import diskgroup where -t indicates a temporary import name; C clears import locks; -n specifies a temporary name for the rootdg to be imported (so it does not conflict with the existing rootdg); and diskgroup is the disk group ID of the disk group being imported (for example, 774226267.1025.tweety). If a reboot or crash occurs at this point, the temporarily imported disk group becomes unimported and requires a reimport. Step 3.
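For example, the temporary import in Step 2 might look like the following, using the disk group ID quoted above and an arbitrary temporary name:
# vxdg -tC -n tmprootdg import 774226267.1025.tweety
Because -t makes the import temporary, the name tmprootdg does not persist across a reboot, which is why a crash at this point leaves the disk group unimported.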
Disk Group Tasks Importing a Disk Group Importing a Disk Group Use this menu task to enable access by this system to a disk group. To move a disk group from one system to another, first disable (deport) the disk group on the original system, and then move the disk between systems and enable (import) the disk group. To import a disk group, use the following procedure: Step 1. Select menu item 7 (Enable access to (import) a disk group) from the vxdiskadm main menu. Step 2.
Disk Group Tasks Importing a Disk Group Once the import is complete, the vxdiskadm utility displays the following success message: The import of newdg was successful. Step 3.
Disk Group Tasks Deporting a Disk Group Deporting a Disk Group Use the deport disk group task to disable access to a disk group that is currently enabled (imported) by this system. Deport a disk group if you intend to move the disks in a disk group to another system. Also, deport a disk group if you want to use all of the disks remaining in a disk group for a new purpose. To deport a disk group, use the following procedure: Step 1.
Disk Group Tasks Deporting a Disk Group The requested operation is to disable access to the removable disk group named newdg. This disk group is stored on the following disks: newdg01 on device c1t1d0 You can choose to disable access to (also known as “offline”) these disks. This may be necessary to prevent errors if you actually remove any of the disks from the system. Disable (offline) the indicated disks? [y,n,q,?] (default: n) y Step 4.
Disk Group Tasks Upgrading a Disk Group Upgrading a Disk Group Prior to the release of Volume Manager 3.0, the disk group version was automatically upgraded (if needed) when the disk group was imported. Upgrading the disk group makes it incompatible with previous Volume Manager releases. The Volume Manager 3.0 disk group upgrade feature separates the two operations of importing a disk group and upgrading its version. You can import a disk group of down-level version and use it without upgrading it.
Disk Group Tasks Moving a Disk Group Moving a Disk Group A disk group can be moved between systems, along with its Volume Manager objects (except for rootdg). This relocates the disk group configuration to a new system. To move a disk group across systems, use the following procedure: Step 1. Unmount and stop all volumes in the disk group on the first system. Step 2. Deport (disable local access to) the disk group to be moved with the following command: # vxdg deport diskgroup Step 3.
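As a sketch, the whole move might look like the following, assuming a disk group named mktdg whose file systems have already been unmounted. On the original system:
# vxvol -g mktdg stopall
# vxdg deport mktdg
Move the disks to the second system, then on that system:
# vxdg import mktdg
# vxrecover -g mktdg -sb
The vxrecover -sb options start the volumes and run any needed recovery in the background.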
Disk Group Tasks Moving Disk Groups Between Systems Moving Disk Groups Between Systems An important feature of disk groups is that they can be moved between systems. If all disks in a disk group are moved from one system to another, then the disk group can be used by the second system. You do not have to specify the configuration again. To move a disk group between systems, use the following procedure: Step 1.
Disk Group Tasks Moving Disk Groups Between Systems the disk group. CAUTION The purpose of the lock is to ensure that dual-ported disks (disks that can be accessed simultaneously by two systems) are not used by both systems at the same time. If two systems try to manage the same disks at the same time, configuration information stored on the disk is corrupted. The disk and its data become unusable.
Disk Group Tasks Moving Disk Groups Between Systems configuration copies The following message indicates a recoverable error. vxdg : Disk group groupname: import failed: Disk for disk group not found If some of the disks in the disk group have failed, force the disk group to be imported with the command: # vxdg -f import diskgroup NOTE Be careful when using the -f option. It can cause the same disk group to be imported twice from different sets of disks, causing the disk group to become inconsistent.
Using Disk Groups Most Volume Manager commands allow you to specify a disk group using the -g option. For example, to create a volume in disk group mktdg, use the following command:
# vxassist -g mktdg make mktvol 50m
The (block) volume device for this volume is: /dev/vx/dsk/mktdg/mktvol
The disk group does not have to be specified. Most Volume Manager commands use object names specified on the command line to determine the disk group for the operation.
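For example, to display the configuration of the mktdg disk group only, name the disk group with the -g option:
# vxprint -g mktdg -ht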
Disk Group Tasks Removing a Disk Group Removing a Disk Group To remove a disk group, unmount and stop any volumes in the disk group, and then use the following command: # vxdg deport diskgroup Deporting a disk group does not actually remove the disk group. It disables use of the disk group by the system. However, disks that are in a deported disk group can be reused, reinitialized, or added to other disk groups, or imported to use on other systems.
Disk Group Tasks Destroying a Disk Group Destroying a Disk Group The vxdg command provides a destroy option that removes a disk group from the system and frees the disks in that disk group for reinitialization so they can be used in other disk groups. Remove disk groups that are not needed with the vxdg destroy command so that the disks can be used by other disk groups.
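For example, to remove the disk group mktdg from the system and free its disks for reuse:
# vxdg destroy mktdg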
Disk Group Tasks Reserving Minor Numbers for Disk Groups Reserving Minor Numbers for Disk Groups Disk groups can be moved between systems. When you allocate volume device numbers in separate ranges for each disk group, all disk groups in a group of machines can be moved without causing device number collisions. Volume Manager allows you to select a range of minor numbers for a specified disk group. You use this range of numbers during the creation of a volume.
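For example, a disk group can be created with a chosen base minor number for its volume devices; the base value and names below are illustrative:
# vxdg init anotherdg minor=30000 another01=c1t2d0
An existing disk group can be renumbered with the vxdg reminor operation; see the vxdg(1M) manual page for the exact syntax on your release.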
Disk Group Tasks Displaying Disk Group Information Displaying Disk Group Information To use disk groups, you need to know their names and what disks belong to each group. To display information on existing disk groups, use the following command: # vxdg list The Volume Manager returns the following listing of current disk groups: NAME rootdg newdg STATE enabled enabled ID 730344554.1025.tweety 731118794.1213.
Disk Group Tasks Displaying Disk Group Information Disk: c0t12d0 type: simple flags: online ready private autoconfig autoimport imported diskid: 963504891.1070.bass dgname: newdg dgid: 963504895.1075.
6 Volume Tasks
Introduction This chapter describes how to create and maintain a system configuration under Volume Manager control. It includes information about creating, removing, and maintaining Volume Manager objects. Volume Manager objects include: • Volumes Volumes are logical devices that appear to data management systems as physical disk partition devices. Volumes provide enhanced recovery, data availability, performance, and storage configuration options.
• “Removing a Volume”
• “Mirroring a Volume”
• “Displaying Volume Configuration Information”
• “Preparing a Volume to Restore From Backup”
• “Recovering a Volume”
• “Moving Volumes from a VM Disk”
• “Adding a RAID-5 Log”
• “Removing a RAID-5 Log”
• “Adding a DRL Log”
• “Removing a DRL Log”
Plex Tasks
• “Creating Plexes”
• “Associating Plexes”
• “Dissociating and Removing Plexes”
• “Displaying Plex Information”
• “Changing Plex Attributes”
• “Changing Plex Status: Detaching and Attaching Plexes”
• “Changing Subdisk Attributes”
Performing Online Backup
• “FastResync (Fast Mirror Resynchronization)”
• “Mirroring Volumes on a VM Disk”
NOTE Some Volume Manager commands require superuser or other appropriate privileges.
Volume Tasks Creating a Volume Creating a Volume Volumes are created to take advantage of the Volume Manager concept of virtual disks. A file system can be placed on the volume to organize the disk space with files and directories. In addition, applications such as databases can be used to organize data on volumes.
Volume Tasks Creating a Volume that can be used to access the volume: • /dev/vx/dsk/volume_name (the block device node for the volume) • /dev/vx/rdsk/volume_name (the raw device node for the volume) For volumes in rootdg and disk groups other than rootdg, these names include the disk group name, as follows: • /dev/vx/dsk/diskgroup_name/volume_name • /dev/vx/rdsk/diskgroup_name/volume_name The following section, “Creating a Concatenated Volume” describes the simplest way to create a (default) volume.
For example, to create the volume voldefault with a length of 10 megabytes, use the following command:
# vxassist make voldefault 10m
Creating a Concatenated Volume on a Specific Disk The Volume Manager automatically selects the disk(s) each volume resides on, unless you specify otherwise. If you want a volume to reside on a specific disk, you must designate that disk for the Volume Manager. More than one disk can be specified.
Volume Tasks Creating a Volume disk05 Creating a RAID-5 Volume NOTE You may need an additional license to use this feature. A RAID-5 volume contains a RAID-5 plex that consists of two or more subdisks located on two or more physical disks. Only one RAID-5 plex can exist per volume. A RAID-5 volume can also contain one or more RAID-5 log plexes, which are used to log information about data and parity being written to the volume. For more information on RAID-5 volumes, see “RAID-5”.
Volume Tasks Starting a Volume Starting a Volume Starting a volume affects its availability to the user. Starting a volume changes its state, makes it available for use, and changes the volume state from DISABLED or DETACHED to ENABLED. The success of this task depends on the ability to enable a volume. If a volume cannot be enabled, it remains in its current state. To start a volume, use the following command: # vxrecover -s volume_name ...
Volume Tasks Stopping a Volume Stopping a Volume Stopping a volume renders it unavailable to the user. In a stopped volume, the volume state is changed from ENABLED or DETACHED to DISABLED. If the command cannot stop it, the volume remains in its current state. To stop a volume, use the following command: # vxvol stop volume_name ...
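To stop all volumes in a disk group at once, use the stopall operation, for example:
# vxvol -g mktdg stopall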
Volume Tasks Resizing a Volume Resizing a Volume Resizing a volume changes the volume size. To resize a volume, use either the vxassist, vxvol, or vxresize commands. If the volume is not large enough for the amount of data that needs to be stored in it, extend the length of the volume. If a volume is increased in size, the vxassist command automatically locates available disk space. When you resize a volume, you can specify the length of a new volume in sectors, kilobytes, megabytes, or gigabytes.
Volume Tasks Resizing a Volume Extending by a Given Length To extend a volume by a specific length, use the following command: # vxassist growby volume_name length For example, to extend volcat by 100 sectors, use the following command: # vxassist growby volcat 100 Shrinking to a Given Length To shrink a volume to a specific length, use the following command: # vxassist shrinkto volume_name length For example, to shrink volcat to 1300 sectors, use the following command: # vxassist shrinkto volcat 1300 Do n
Volume Tasks Resizing a Volume NOTE The vxvol set len command cannot increase the size of a volume unless the needed space is available in the plexes of the volume. When the size of a volume is reduced using the vxvol set len command, the freed space is not released into the free space pool. Changing the Volume Read Policy Volume Manager offers the choice of the following read policies: • round—reads each plex in turn in “round-robin” fashion for each nonsequential I/O detected.
Resizing Volumes with the vxresize Command
Use the vxresize command to resize a volume containing a file system. Although other commands can resize such volumes, the vxresize command offers the advantage of automatically resizing the file system as well as the volume. For details on how to use the vxresize command, see the vxresize(1M) manual page. Note that only vxfs and hfs file systems can be resized with the vxresize command.
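As a sketch, resizing a volume and its file system in one step might look as follows; the disk group and volume names and the 2-gigabyte target length are illustrative assumptions:

```shell
# Grow the volume to 2 GB; vxresize also grows the vxfs or hfs file
# system it contains, so no separate file system step is needed.
vxresize -g datadg volhome 2g
```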
Removing a Volume
Once a volume is no longer needed (for example, it is inactive and archived), you can remove it and free the disk space for other uses. Before removing a volume, use the following procedure: Step 1. Remove all references to the volume. Step 2. If the volume is mounted as a file system, unmount it with the command: # umount /dev/vx/dsk/volume_name Step 3. If the volume is listed in /etc/fstab, remove its entry. Step 4.
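The whole procedure might be sketched as follows; the volume name volarchive and the final vxedit -rf rm step are assumptions (the removal command itself is cut off in the procedure above):

```shell
# Unmount the file system and remove its /etc/fstab entry first.
umount /dev/vx/dsk/volarchive

# Stop the volume, then remove it together with its plexes and
# subdisks (-r is recursive, -f forces removal).
vxvol stop volarchive
vxedit -rf rm volarchive
```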
Mirroring a Volume
NOTE You may need an additional license to use this feature.
A mirror is a copy of a volume. The mirror copy is not stored on the same disk(s) as the original copy of the volume. Mirroring a volume ensures that the data in that volume is not lost if one of your disks fails.
NOTE To mirror the root disk, use vxrootmir(1M). See the manual page for details.
# vxplex att volume_name plex_name
Mirroring All Volumes
To mirror all existing volumes on the system to available disk space, use the following command: # /etc/vx/bin/vxmirror -g diskgroup -a To configure the Volume Manager to create mirrored volumes by default, use the following command: # /etc/vx/bin/vxmirror -d yes If you make this change, you can still create unmirrored volumes by specifying nmirror=1 as an attribute to the vxassist command.
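A single volume can also be mirrored with the vxassist command; the volume name volhome and the target disk disk03 are illustrative:

```shell
# Add one mirror of volhome, placing the new plex on disk03.
vxassist mirror volhome disk03
```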
Volumes can be mirrored onto another disk or onto any available disk space. Volumes are not mirrored if they are already mirrored, or if they are composed of more than one subdisk. Enter disk name [,list,q,?] disk02 Step 3.
Backing Up Volumes Using Mirroring
If a volume is mirrored, you can back it up by taking one of its mirrors offline for a period of time. This removes the need for extra disk space used only for backup, but it also removes the volume's redundancy for the duration of the backup.
NOTE The information in this section does not apply to RAID-5.
Step 9. Remove the temporary volume, using the following command: # vxedit rm tempvol For information on an alternative online backup method using the vxassist command, see “Performing Online Backup”.
Removing a Mirror
When a mirror is no longer needed, you can remove it. Removal of a mirror is required in the following instances: • to provide free disk space • to reduce the number of mirrors in a volume so you can increase the length of another mirror and its associated volume
You can first dissociate the plex and subdisks, then remove them with the following commands: # vxplex dis plex_name # vxedit -r rm plex_name Together, these commands accomplish the same result as vxplex -o rm dis.
Displaying Volume Configuration Information
You can use the vxprint command to display information about how a volume is configured.
Preparing a Volume to Restore From Backup
It is important to make backup copies of your volumes. A backup preserves the data as it stands at the time the backup is made, and can be used to restore volumes lost due to disk failure or data destroyed by human error. The Volume Manager allows you to back up volumes with minimal interruption to users. To back up a volume with the vxassist command, use the following procedure: Step 1.
To create a snapshot volume, use the following command: # vxassist snapshot volume_name new_volume_name For example, to create a snapshot volume of voldef, use the following command: # vxassist snapshot voldef snapvol The snapshot volume can now be used by backup utilities, while the original volume continues to be available for applications and users. The snapshot volume occupies as much space as the original volume.
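Put together, an online backup session might be sketched as follows; the mount point and the backup step are placeholders:

```shell
# Create and synchronize the snapshot mirror (this can take a while).
vxassist snapstart voldef

# At a quiet moment, turn the synchronized mirror into a snapshot
# volume.
vxassist snapshot voldef snapvol

# Back up from the snapshot while voldef remains in service.
mount /dev/vx/dsk/snapvol /backup
# ... run the backup utility against /backup ...
umount /backup

# Remove the snapshot volume when the backup is complete.
vxedit -rf rm snapvol
```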
Recovering a Volume
A system crash or an I/O error can corrupt one or more plexes of a volume and leave no plex CLEAN or ACTIVE. You can mark one of the plexes CLEAN and instruct the system to use that plex as the source for reviving the others.
Moving Volumes from a VM Disk
Before you disable or remove a disk, you can move the data from that disk to other disks on the system. To do this, ensure that the target disks have sufficient space, and then use the following procedure: Step 1. Select menu item 6 (Move volumes from a disk) from the vxdiskadm main menu. Step 2.
the disk group. Step 3. At the following prompt, press Return to move the volumes:
Requested operation is to move all volumes from disk disk01 in group rootdg.
NOTE: This operation can take a long time to complete.
Continue with operation? [y,n,q,?] (default: y)
As the volumes are moved from the disk, the vxdiskadm program displays the status of the operation:
Move volume voltest ...
Move volume voltest-bk00 ...
Adding a RAID-5 Log
NOTE You may need an additional license to use this feature.
Only one RAID-5 plex can exist per RAID-5 volume. Any additional plexes become RAID-5 log plexes, which are used to log information about data and parity being written to the volume. When a RAID-5 volume is created using the vxassist command, a log plex is created for that volume by default.
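Adding a log plex to an existing RAID-5 volume can be sketched as follows; the volume name volr5 is illustrative:

```shell
# Attach an additional RAID-5 log plex to the volume.
vxassist addlog volr5
```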
Removing a RAID-5 Log
NOTE You may need an additional license to use this feature.
To remove a RAID-5 log, first dissociate the log from its volume and then remove the log and any associated subdisks completely.
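The two steps can be sketched as follows; the log plex name volr5-02 is an assumed example:

```shell
# Dissociate the log plex from its volume, then remove the plex
# and its subdisks recursively.
vxplex dis volr5-02
vxedit -r rm volr5-02
```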
Adding a DRL Log
To put Dirty Region Logging into effect for a volume, a log subdisk must be added to that volume and the volume must be mirrored. Only one log subdisk can exist per plex.
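As a sketch (the volume name is illustrative; vxassist addlog adds a log appropriate to the volume, which for a mirrored volume is a dirty region log):

```shell
# Add a dirty region log to the mirrored volume volhome.
vxassist addlog volhome
```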
Removing a DRL Log
You can also remove a log with the vxassist command as follows: # vxassist remove log volume_name Use the nlog= attribute to specify the number of logs to be removed. By default, the vxassist command removes one log.
Creating Plexes
The vxmake command creates Volume Manager objects, such as plexes. When you create a plex, you identify subdisks and associate them with the plex that you want to create. To create a plex from existing subdisks, use the following command: # vxmake plex plex_name sd=subdisk_name,...
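For example (the plex and subdisk names are illustrative):

```shell
# Build a plex named vol01-02 from two existing subdisks.
vxmake plex vol01-02 sd=disk02-01,disk02-02
```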
Associating Plexes
A plex becomes a participating plex for a volume by associating the plex with the volume.
Dissociating and Removing Plexes
When a plex is no longer needed, you can remove it. Remove a plex for the following reasons: • to provide free disk space • to reduce the number of mirrors in a volume so you can increase the length of another mirror and its associated volume
Displaying Plex Information
Listing plexes helps identify free plexes for building volumes. Using the vxprint command with the plex (-p) option lists information about all plexes. To display detailed information about all plexes in the system, use the following command: # vxprint -lp To display detailed information about a specific plex, use the following command: # vxprint -l plex_name The -t option prints a single line of information about the plex.
Changing Plex Attributes
CAUTION Change plex attributes with extreme care, and only if necessary.
The vxedit command changes the attributes of plexes and other Volume Manager objects. To change plex attributes, use the following command: # vxedit set field=value ... plex_name ... The comment field and the putil and tutil fields are used by Volume Manager commands after plex creation.
vol01-02 The vxedit command is used to modify attributes as follows: • set the comment field (identifying what the plex is used for) to my plex • set tutil2 to u to indicate that the plex is in use • change the user ID to admin To prevent a particular plex from being associated with a volume, set the putil0 field to a non-null string, as shown in the following command: # vxedit set putil0="DO-NOT-USE" vol01-02
Changing Plex Status: Detaching and Attaching Plexes
Once a volume has been created and placed online (ENABLED), Volume Manager can temporarily disconnect plexes from the volume. This is useful, for example, when the hardware on which the plex resides needs repair or when a volume has been left unstartable and a source plex for the volume revive must be chosen manually.
at system reboot. The plex state is set to STALE, so that if a vxvol start command is run on the appropriate volume (for example, on system reboot), the contents of the plex are recovered and made ACTIVE.
In this case, the state of vol01-02 is set to STALE. When the volume is next started, the data on the plex is revived from the other plex, and incorporated into the volume with its state set to ACTIVE. To manually change the state of a plex, see “Recovering a Volume”. See the vxmake(1M) and vxmend(1M) manual pages for more information about these commands.
Moving Plexes
Moving a plex copies the data content from the original plex onto a new plex. To move data from one plex to another, use the following command: # vxplex mv original_plex new_plex For a move task to be successful, the following criteria must be met: • The old plex must be an active part of an active (ENABLED) volume. • The new plex must be at least as large as the old plex. • The new plex must not be associated with another volume.
Copying Plexes
This task copies the contents of a volume onto a specified plex. The volume to be copied must not be enabled, and the plex must not be associated with any other volume. To copy a plex, use the following command: # vxplex cp volume_name new_plex After the copy task is complete, new_plex is not associated with the specified volume volume_name. The plex contains a complete copy of the volume data.
Creating Subdisks
You can use the vxmake command to create Volume Manager objects, such as subdisks.
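For example (the subdisk name, disk, offset, and length are illustrative; vxmake sd takes a disk,offset,length specification):

```shell
# Create an 8000-sector subdisk at offset 0 on disk02.
vxmake sd disk02-01 disk02,0,8000
```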
Removing Subdisks
To remove a subdisk, use the following command: # vxedit rm subdisk_name For example, to remove a subdisk named disk02-01, use the following command: # vxedit rm disk02-01
Moving Subdisks
Moving a subdisk copies the disk space contents of a subdisk onto another subdisk. If the subdisk being moved is associated with a plex, then the data stored on the original subdisk is copied to the new subdisk. The old subdisk is dissociated from the plex, and the new subdisk is associated with the plex. The association is at the same offset within the plex as the source subdisk.
The available plex home-01 will be used to recover the data.
This message contains information about the subdisk before relocation and can be used to decide where to move the subdisk after relocation. The following message example shows the new location for the relocated subdisk:
To: root
Subject: Attempting VxVM relocation on host teal
Volume home Subdisk disk02-03 relocated to disk05-01, but not yet recovered.
If no hot-relocated subdisks reside in the system, the vxdiskadm program returns the following message:
Currently there are no hot-relocated disks
hit RETURN to continue
Step 3. Move the subdisks to a different disk from the original disk by entering y at the following prompt; otherwise, enter n or press Return: Unrelocate to a new disk [y,n,q,?] (default: n) Step 4.
Splitting Subdisks
Splitting a subdisk divides an existing subdisk into two subdisks. To split a subdisk, use the following command: # vxsd -s size split subdisk_name newsd1 newsd2 where: • subdisk_name is the name of the original subdisk • newsd1 is the name of the first of the two subdisks to be created • newsd2 is the name of the second subdisk to be created The -s option is required to specify the size of the first of the two subdisks to be created.
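For example (names and sizes are illustrative):

```shell
# Split a 10000-sector subdisk into a 4000-sector first piece
# and a 6000-sector remainder.
vxsd -s 4000 split disk02-01 disk02-02 disk02-03
```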
Joining Subdisks
Joining subdisks combines two or more existing subdisks into one subdisk. To join subdisks, the subdisks must be contiguous on the same disk. If the selected subdisks are associated, they must be associated with the same plex, and be contiguous in that plex.
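For example (names are illustrative):

```shell
# Combine two contiguous subdisks into a single new subdisk.
vxsd join disk02-02 disk02-03 disk02-01
```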
Associating Subdisks
Associating a subdisk with a plex places the amount of disk space defined by the subdisk at a specific offset within the plex. The entire area that the subdisk fills must not be occupied by any portion of another subdisk. There are several ways that subdisks can be associated with plexes, depending on the overall state of the configuration.
a size that fits the hole in the sparse plex exactly. Then, to associate the subdisk with the plex by specifying the offset of the beginning of the hole in the plex, use the following command: # vxsd -l offset assoc sparse_plex_name exact_size_subdisk
NOTE The subdisk must be exactly the right size because Volume Manager does not allow the space defined by two subdisks to overlap within a single plex.
where subdisk is the name to be used as a log subdisk. The plex must be associated with a mirrored volume before DRL takes effect.
Dissociating Subdisks
To break an established connection between a subdisk and the plex to which it belongs, dissociate the subdisk from the plex. A subdisk is dissociated when it is removed or used in another plex.
Changing Subdisk Attributes
CAUTION Change subdisk attributes with extreme care, and only if necessary.
The vxedit command changes attributes of subdisks and other Volume Manager objects. To change information relating to a subdisk, use the following command: # vxedit set field=value ...
Performing Online Backup
NOTE You may need an additional license to use this feature.
Volume Manager provides snapshot backups of volume devices through vxassist and other commands. There are various backup procedures, depending on the requirements for integrity of the volume contents. These procedures have the same starting requirement: a plex that is large enough to store the complete contents of the volume.
also ask users to refrain from using the system during the brief time required to perform the snapshot (typically less than a minute). Creating the snapshot mirror takes a long time, in contrast to the brief time needed to create the snapshot volume. The online backup procedure is completed by running the vxassist snapshot command on a volume with a SNAPDONE mirror.
FastResync (Fast Mirror Resynchronization)
NOTE You may need an additional license to use this feature.
The FastResync feature (also called Fast Mirror Resynchronization, which is abbreviated as FMR) performs quick and efficient resynchronization of stale mirrors by increasing the efficiency of the VxVM snapshot mechanism to better support operations such as backup and decision support.
attached to the original volume. The snapshot volume is removed. This task resynchronizes the data in the volume so that the plexes are consistent. To merge a snapshot with its original volume, use the following command: # vxassist snapback replica-volume where replica-volume is the snapshot copy of the volume. By default, the data in the original plex is used for the merged volume.
Mirroring Volumes on a VM Disk
Mirroring the volumes on a VM disk gives you one or more copies of your volumes in another disk location. By creating mirror copies of your volumes, you protect your system against loss of data in case of a disk failure. You can use this task on your root disk to make a second copy of the boot information available on an alternate disk. This allows you to boot your system even if your root disk is corrupted.
You can choose to mirror volumes from disk disk02 onto any available disk space, or you can choose to mirror onto a specific disk. To mirror to a specific disk, select the name of that disk. To mirror to any available disk space, select "any".
Enter destination disk [,list,q,?] (default: any) disk01
NOTE Be sure to always specify the destination disk when you are creating an alternate root disk.
Step 1. Select menu item 7 (Move volumes from a disk) from the vxdiskadm main menu. Step 2. At the following prompt, enter the disk name of the disk whose volumes you wish to move:
Move volumes from a disk
Menu: VolumeManager/Disk/Evacuate
Use this menu operation to move any volumes that are using a disk onto other disks. Use this menu immediately prior to removing a disk, either permanently or for replacement.
When the volumes have all been moved, vxdiskadm displays the following success message:
Evacuation of disk disk01 is complete. Step 3.
7. Cluster Functionality
Introduction
This chapter discusses the cluster functionality provided with the VERITAS™ Volume Manager (VxVM). The Volume Manager includes an optional cluster feature that enables VxVM to be used in a cluster environment. The cluster functionality in the Volume Manager is a separately licensable feature.
Cluster Functionality Overview
The cluster functionality in the Volume Manager allows multiple hosts to simultaneously access and manage a given set of disks under Volume Manager control (VM disks). A cluster is a set of hosts sharing a set of disks; each host is referred to as a node in the cluster. The nodes are connected across a network. If one node fails, the other node(s) can still access the disks.
A shared disk group must be activated on a node in order for the volumes in the disk group to become accessible for application I/O from that node. The ability of applications to read or write to volumes is dictated by the activation mode of the disk group. Valid activation modes for a shared disk group are exclusive-write, shared-write, read-only, shared-read, and off (or inactive), as shown in Table 7-1, “Activation Modes for Shared Disk Group.”
How Cluster Volume Management Works
The Volume Manager cluster feature works together with an externally-provided cluster manager, which is a daemon that informs VxVM of changes in cluster membership. Each node starts up independently and has its own copies of the operating system, VxVM with cluster support, and the cluster manager. When a node joins a cluster, it gains access to shared disks.
Figure 7-1, “Example of a 4-Node Cluster,” shows one master node (Node 1) and three slave nodes (Nodes 2, 3, and 4) connected by a network and sharing a cluster-shareable disk group on cluster-shareable disks.
The system administrator designates a disk group as cluster-shareable using the vxdg utility (see “vxdg Utility” for more information). Once a disk group is imported as cluster-shareable for one node, the disk headers are marked with the cluster ID.
Any reconfiguration of a shared disk group is performed with the cooperation of all nodes. Configuration changes to the disk group happen simultaneously on all nodes and the changes are identical. These changes are atomic: they either occur simultaneously on all nodes or do not occur at all. All members of the cluster can have simultaneous read and write access to any cluster-shareable disk group, depending on the activation mode.
configuration of VxVM is required, apart from the cluster configuration requirements of MC/ServiceGuard. Node initialization is effected through the cluster manager startup procedure, which brings up the various cluster components (such as VxVM with cluster support, the cluster manager, and a distributed lock manager) on the node. Once initialization is complete, applications may be started.
Volume Reconfiguration
Volume reconfiguration is the process of creating, changing, and removing Volume Manager objects in the configuration (such as disk groups, volumes, and mirrors). In a cluster, this process is performed with the cooperation of all nodes. Volume reconfiguration is distributed to all nodes; identical configuration changes occur on all nodes simultaneously.
If a node attempts to join the cluster while a volume reconfiguration is being performed, the results depend on how far the reconfiguration has progressed. If the kernel is not yet involved, the volume reconfiguration is suspended and restarted when the join is complete. If the kernel is involved, the join waits until the reconfiguration is complete.
• If any volume in a shared disk group is open, the VxVM shutdown procedure returns failure. The shutdown procedure can be retried repeatedly until it succeeds. There is no timeout checking in this operation—it is intended as a service that verifies that the clustered applications are no longer active.
NOTE Once shutdown succeeds, the node has left the cluster. It is not possible to access the shared volumes until the node joins the cluster again.
Disks in VxVM Clusters
The nodes in a cluster must always agree on the status of a disk. In particular, if one node cannot write to a given disk, all nodes must stop accessing that disk before the results of the write operation are returned to the caller. Therefore, if a node cannot contact a disk, it should contact another node to check on the disk’s status. If the disk fails, no node can access it and the nodes can agree to detach the disk.
Table 7-2 Allowed and Conflicting Activation Modes

Attempted          Disk group already activated in cluster as:
activation         Exclusive write   Shared write   Read only   Shared read
Exclusive write    Fail              Fail           Fail        Succeed
Shared write       Fail              Succeed        Fail        Succeed
Read only          Fail              Fail           Succeed     Succeed
Shared read        Succeed           Succeed        Succeed     Succeed

Shared disk groups can be automatically activated in any mode during disk group creation or during manual or auto-import.
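For example, a shared disk group's activation mode might be set as follows; the disk group name acctdg is illustrative, and the set activation syntax is an assumption based on the modes listed above:

```shell
# Activate the shared disk group read-only on this node.
vxdg -g acctdg set activation=readonly
```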
Dirty Region Logging and Cluster Environments
Dirty Region Logging (DRL) is an optional property of a volume that provides speedy recovery of mirrored volumes after a system failure. Dirty Region Logging is supported in cluster-shareable disk groups. This section provides a brief overview of DRL and describes how DRL behaves in a cluster environment.
The clustered dirty region log size is typically larger than a VxVM dirty region log, as it must accommodate active maps for all nodes in the cluster plus a recovery map. The size of each map within the dirty region log is one or more whole blocks. vxassist automatically takes care of allocating a sufficiently large dirty region log. The log size depends on the volume size and the number of nodes.
sufficient size.
How DRL Works in a Cluster Environment
When one or more nodes in a cluster crash, DRL needs to be able to handle the recovery of all volumes in use by those nodes when the crash(es) occurred. On initial cluster startup, all active maps are incorporated into the recovery map during the volume start operation. Nodes that crash (i.e.
Dynamic Multipathing (DMP)
NOTE You may need an additional license to use this feature.
In a clustered environment where Active/Passive type disk arrays are shared by multiple hosts, all hosts in the cluster should access the disks via the same physical path. If a disk from an Active/Passive type shared disk array is accessed via multiple paths simultaneously, severe degradation of I/O performance can result.
You can also obtain all paths connected to a particular controller, for example c2, using the following command: # vxdmpadm getsubpaths ctlr=c2 To get the node that controls a path to a disk array, use the following command: # vxdmpadm getdmpnode nodename=c3t2d1
Assigning a User Friendly Name
You can assign a new name to a disk array using the following command: # vxdmpadm setattr enclosure nike0 name=VMGRP_1 Assigning a logical name to the enclosure makes the disk array easier to identify.
List Controllers
The vxdmpadm listctlr command lists specified disk controllers connected to the host.
FastResync (Fast Mirror Resynchronization)
NOTE You may need an additional license to use this feature.
The FastResync feature (also called Fast Mirror Resynchronization, which is abbreviated as FMR) is supported for shared volumes. The update maps (FMR maps) are distributed across the cluster.
Figure 7-2, “Bitmap Clusterization,” illustrates the protocol: the requestor node sends a dirty-map request to the master node; the master broadcasts a prepare-to-dirty-map message to all nodes; each node responds to the master; the master waits for all nodes to respond, broadcasts a commit of the map to all nodes, and finally replies to the original requestor. Broadcasts use a ring mechanism.
Upgrading Volume Manager Cluster Functionality
The rolling upgrade feature allows an administrator to upgrade the version of Volume Manager running in a cluster without shutting down the entire cluster. To install the new version of Volume Manager on a cluster, the system administrator can pull one node out of the cluster, upgrade it, and then join the node back into the cluster. This is done for each node in the cluster.
Once all nodes have the new release installed, the vxdctl upgrade command must be run on the master node to switch to the higher cluster protocol version. See “vxdctl Utility” for more information.
Cluster-related Volume Manager Utilities and Daemons
The following utilities and daemons have been created or modified for use with the Volume Manager in a cluster environment:
• “vxclustd Daemon”
• “vxconfigd Daemon”
• “vxdctl Utility”
• “vxdg Utility”
• “vxdisk Utility”
• “vxdmpadm Utility”
• “vxrecover Utility”
• “vxstat Utility”
The following sections describe how each of these utilities and daemons is used in a cluster environment.
• cluster ID and cluster name
• node IDs and hostnames of all configured nodes
• IP addresses of the network interfaces through which the nodes communicate with each other
Registration also provides a callback mechanism for the cluster manager to notify the vxclustd daemon when cluster membership changes. After initializing kernel cluster variables, the vxclustd daemon waits for a callback from the cluster manager.
vxconfigd daemon receives cluster-related instructions from the kernel. A separate copy of the vxconfigd daemon resides on each node; these copies communicate with each other through networking facilities. For each node in a cluster, Volume Manager utilities communicate with the vxconfigd daemon running on that particular node; utilities do not attempt to connect with vxconfigd daemons on other nodes.
restarted at any time. While the vxconfigd daemon is stopped, volume reconfigurations cannot take place and other nodes cannot join the cluster until the vxconfigd daemon is restarted. In the cluster, the vxconfigd daemons on the slaves are always connected to the vxconfigd daemon on the master. It is therefore not advisable to stop the vxconfigd daemon on any clustered node.
vxconfigd daemon does not start.
vxdctl Utility
When the vxdctl utility is executed as vxdctl enable, if DMP identifies a DISABLED primary path of a shared disk in an active/passive type disk array as physically accessible, this path is marked as ENABLED. However, I/Os continue to use the current path and are not routed through the path that has been marked ENABLED.
by the current VxVM release using the following command:
# vxdctl protocolrange
minprotoversion: 10, maxprotoversion: 20
The vxdctl list command displays the cluster protocol version running on a node. The vxdctl list command produces the following output:
Volboot file
version: 3/1
seqno: 0.
• accessname is the disk access name (or device name).
Importing disk groups
Disk groups can be imported as shared using the vxdg -s import command. If the disk groups were set up before the cluster software was run, the disk groups can be imported into the cluster arrangement using the following command: # vxdg -s import diskgroup where diskgroup is the disk group name or ID.
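For example (the disk group name acctdg is illustrative):

```shell
# Import an existing disk group as shared, then confirm that the
# shared flag appears in the disk group listing.
vxdg -s import acctdg
vxdg list
```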
if the system administrator is fully aware of the possible consequences. When a cluster is restarted, VxVM may refuse to auto-import a disk group for one of the following reasons: • A disk in that disk group is no longer accessible because of hardware errors on the disk.
The output from this command is as follows:
NAME     STATE            ID
rootdg   enabled          774215886.1025.teal
group2   enabled,shared   774575420.1170.teal
group1   enabled,shared   774222028.1090.teal
Shared disk groups are designated with the flag shared.
vxdisk Utility
The vxdisk utility manages Volume Manager disks. To use the vxdisk utility to determine whether a disk is part of a cluster-shareable disk group, use the following command: # vxdisk list accessname where accessname is the disk access name (or device name). The output from this command (for the device c4t1d0) is as follows:
Device:    c4t1d0
devicetag: c4t1d0
type:      simple
hostid:    hpvm2
disk:      name=disk01 id=963616090.1034.
Note that the clusterid: field is set to cvm (the name of the cluster) and the flags: field includes an entry for shared. When a node is not joined, the flags: field contains the autoimport flag instead of imported.

vxdmpadm Utility

The vxdmpadm utility is a command-line administrative interface for the Dynamic Multipathing feature of VxVM.
host.

vxrecover Utility

The vxrecover utility recovers plexes and volumes after disk replacement. When a node leaves the cluster, it can leave some mirrors in an inconsistent state. The vxrecover utility performs recovery on all volumes in this state. The -c option causes the vxrecover utility to perform recovery for all volumes in cluster-shareable disk groups.
              OPERATIONS       BLOCKS          AVG TIME(ms)
TYP NAME      READ   WRITE     READ    WRITE   READ   WRITE
vol vol1      2421   0         600000  0       99.0   0.0

To obtain and display statistics for the entire cluster, use the following command:

# vxstat -b

The statistics for all nodes are added together. For example, if node 1 did 100 I/Os and node 2 did 200 I/Os, the vxstat -b command returns 300 I/Os.
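The summation that vxstat -b performs across nodes can be sketched in a few lines. This is an illustrative model only, not VxVM code; the node names and counter values are assumptions for the example.

```python
# Conceptual sketch: cluster-wide statistics are the per-counter sum of
# each node's statistics, as in the vxstat -b example above.
def aggregate_cluster_stats(per_node_stats):
    """Sum per-node I/O counters into one cluster-wide total."""
    total = {}
    for node_stats in per_node_stats.values():
        for counter, value in node_stats.items():
            total[counter] = total.get(counter, 0) + value
    return total

stats = {
    "node1": {"reads": 100, "writes": 40},
    "node2": {"reads": 200, "writes": 60},
}
cluster = aggregate_cluster_stats(stats)  # reads: 300, writes: 100
```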
Cluster Terminology

The following is a list of cluster-related terms and definitions:

clean node shutdown
The ability of a node to leave the cluster gracefully when all access to shared volumes has ceased.

cluster
A set of hosts that share a set of disks.

cluster manager
An externally-provided daemon that runs on each node in a cluster. The cluster managers on each node communicate with each other and inform VxVM of changes in cluster membership.
hosts (also referred to as a cluster-shareable disk group).

shared volume
A volume that belongs to a shared disk group and is open on more than one node at the same time.

shared VM disk
A VM disk that belongs to a shared disk group.
8. Recovery
Introduction

The VERITAS Volume Manager protects systems from disk failures and helps you recover from them. This chapter describes recovery procedures and information to help you prevent loss of data or system access due to disk failures. It also describes possible plex and volume states.
Reattaching Disks

You can perform a disk reattach operation if a disk has a full failure and hot-relocation is not possible, or if the Volume Manager is started with some disk drivers unloaded and unloadable (causing disks to enter the failed state). If the problem is fixed, you can use the vxreattach command to reattach the disks without plexes being flagged as stale. However, the reattach must occur before any volumes on the disk are started.
VxVM Boot Disk Recovery

If there is a failure to boot from the VxVM boot disk on HP-UX 11i Version 1.5, use one of the following methods to recover.
Reinstallation Recovery

Reinstallation is necessary if all copies of your root (boot) disk are damaged, or if certain critical files are lost due to file system damage. On HP-UX 11i Version 1.5, first use the recovery methods described in "VxVM Boot Disk Recovery". Follow the procedures below only if those methods fail. If these types of failures occur, attempt to preserve as much of the original Volume Manager configuration as possible.
having the root disk under Volume Manager control increases the possibility of a reinstallation being necessary. By having the root disk under Volume Manager control and creating mirrors of the root disk contents, you can eliminate many of the problems that require system reinstallation. When reinstallation is necessary, the only volumes saved are those that reside on, or have copies on, disks that are not directly involved with the failure and reinstallation.
Preparing the System for Reinstallation

To prevent the loss of data on disks not involved in the reinstallation, involve only the root disk in the reinstallation procedure.

NOTE Several of the automatic options for installation access disks other than the root disk without requiring confirmation from the administrator. Disconnect all other disks containing volumes from the system prior to reinstalling the operating system.
instructions for loading the Volume Manager (from CD-ROM) in the VERITAS Volume Manager 3.1 for HP-UX Release Notes. To reconstruct the Volume Manager configuration left on the nonroot disks, do not initialize the Volume Manager (using the vxinstall command) after the reinstallation.

Recovering the Volume Manager Configuration

Once the Volume Manager package has been loaded, recover the Volume Manager configuration using the following procedure:

Step 1.
# vxdctl initdmp

Step 12. Enable vxconfigd using the following command:

# vxdctl enable

The configuration preserved on the disks not involved with the reinstallation has now been recovered. However, because the root disk has been reinstalled, it does not appear to the Volume Manager as a Volume Manager disk. The configuration of the preserved disks does not include the root disk as part of the Volume Manager configuration.
• hot-relocation startup

Volume Cleanup

After completing the rootability cleanup, you must determine which volumes need to be restored from backup. The volumes to be restored include those with all mirrors (all copies of the volume) residing on disks that have been reinstalled or removed. These volumes are invalid and must be removed, recreated, and restored from backup. If only some mirrors of a volume exist on reinitialized or removed disks, these mirrors must be removed.
listed in error state and a VM disk listed as not associated with a device.

Step 2. Once you know which disks have been removed or replaced, locate all the mirrors on failed disks using the following command:

# vxprint -sF "%vname" -e'sd_disk = "disk"'

where disk is the name of a disk with a failed status. Be sure to enclose the disk name in quotes in the command. Otherwise, the command returns an error message.
It is possible that only part of a plex is located on the failed disk. If the volume has a striped plex associated with it, the volume is divided between several disks. For example, the volume named v02 has one striped plex striped across three disks, one of which is the reinstalled disk disk01.
This volume has two plexes, v03-01 and v03-02. The first plex (v03-01) does not use any space on the invalid disk, so it can still be used. The second plex (v03-02) uses space on invalid disk disk01 and has a state of NODEVICE. Plex v03-02 must be removed. However, the volume still has one valid plex containing valid data. If the volume needs to be mirrored, another plex can be added later. Note the name of the volume to create another plex later.

Step 6.
normal backup/restore procedures. Any volumes that had plexes removed as part of the volume cleanup can have these mirrors recreated by following the instructions for mirroring a volume with the vxassist command as described in "Mirroring Guidelines".
Detecting and Replacing Failed Disks

This section describes how to detect disk failures and replace failed disks. It begins with the hot-relocation feature, which automatically attempts to restore redundant Volume Manager objects when a failure occurs.

Hot-Relocation

NOTE You may need an additional license to use this feature.
I/O error in the plex (which affects subdisks within the plex). For mirrored volumes, the plex is detached.

• RAID-5 subdisk failure—this is normally detected as a result of an uncorrectable I/O error. The subdisk is detached.

When such a failure is detected, the vxrelocd daemon informs the system administrator by electronic mail of the failure and which Volume Manager objects are affected.
disk group as hot-relocation spares. For information on how to designate a disk as a spare, see "Placing Disks Under Volume Manager Control". If no spares are available at the time of a failure, or if there is not enough space on the spares, free space is automatically used. By designating spare disks, you have control over which space is used for relocation in the event of a failure.
# vxrelocd root user_name1 user_name2 &

• To reduce the impact of recovery on system performance, you can instruct the vxrelocd process to increase the delay between the recovery of each region of the volume using the following command:

# vxrelocd -o slow[=IOdelay] root &

where the optional IOdelay indicates the desired delay (in milliseconds). The default value for the delay is 250 milliseconds. See the vxrelocd(1M) manual page for more information.
the following example:

To: root
Subject: Volume Manager failures on host teal

Attempting to relocate subdisk disk02-03 from plex home-02.
Dev_offset 0 length 1164 dm_name disk02 da_name c0t5d0.
The available plex home-01 will be used to recover the data.

This message contains information about the subdisk before relocation that can be used to decide where to move the subdisk after relocation.
vxunreloc, the information is erased. The original dm name and the original offset are saved in the subdisk records.
disk02 does not affect the hot-relocated subdisk disk01-01. However, a replacement of disk01, followed by the unrelocate operation, moves disk01-01 back to disk01 when the vxunreloc utility is run immediately after the replacement.

Restart vxunreloc After Errors

Internally, the vxunreloc utility moves the subdisks in three phases. The first phase creates as many subdisks on the specified destination disk as there are subdisks to be unrelocated.
The cleanup phase requires one transaction. The vxunreloc utility resets the comment field to a NULL string for all the subdisks marked UNRELOC that reside on the destination disk. This includes cleanup of subdisks that were unrelocated in a previous invocation of the vxunreloc utility that did not successfully complete.
To determine which disk is causing the failures in the above example, use the following command:

# vxstat -s -ff home-02 src-02

The following is a typical output display:

               FAILED
TYP NAME       READS  WRITES
sd  disk01-04  0      0
sd  disk01-06  0      0
sd  disk02-03  1      0
sd  disk02-04  1      0

This display indicates that the failures are on disk02 (and that subdisks disk02-03 and disk02-04 are affected).
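The step from per-subdisk failure counts to a per-disk diagnosis can be modeled with a short script. This is an illustrative sketch, not VxVM code; it assumes only the diskNN-MM subdisk naming convention shown in the output above.

```python
# Illustrative only: summing per-subdisk failures (as reported by
# "vxstat -s -ff") into per-disk totals to spot the failing disk.
from collections import Counter

def failures_by_disk(subdisk_failures):
    """Sum failed reads and writes per disk; subdisk names are diskNN-MM."""
    counts = Counter()
    for subdisk, (failed_reads, failed_writes) in subdisk_failures.items():
        disk = subdisk.rsplit("-", 1)[0]   # disk02-03 -> disk02
        counts[disk] += failed_reads + failed_writes
    return counts

observed = {
    "disk01-04": (0, 0), "disk01-06": (0, 0),
    "disk02-03": (1, 0), "disk02-04": (1, 0),
}
# disk02 accumulates all the failures, matching the display above.
print(failures_by_disk(observed).most_common(1))
```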
src-02
mkting-01

failing disks:
disk02

This message shows that disk02 was detached by a failure. When a disk is detached, I/O cannot get to that disk. The plexes home-02, src-02, and mkting-01 were also detached (probably because of the failure of the disk). Again, the problem can be a cabling error. If the problem is not a cabling error, replace the disk (see "Replacing Disks").
Step 1. Detach the disk from its disk group.

Step 2. Replace the disk with a new one.

To detach the disk, run the vxdiskadm utility and select item 3 (Remove a disk for replacement) from the main menu. If initialized disks are available as replacements, specify the disk as part of this operation. Otherwise, specify the replacement disk later by selecting item 4 (Replace a failed or removed disk) from the main menu.
Plex and Volume States

The following sections describe plex and volume states.

Plex States

Plex states reflect whether or not plexes are complete and are consistent copies (mirrors) of the volume contents. Volume Manager utilities automatically maintain the plex state. However, if changes to a volume should not be written to a plex associated with that volume, you can modify the state of the plex.
• OFFLINE
• TEMP
• TEMPRM
• TEMPRMSD
• IOFAIL

A Dirty Region Logging or RAID-5 log plex is a special case, as its state is always set to LOG.

EMPTY Plex State

Volume creation sets all plexes associated with the volume to the EMPTY state to indicate that the plex is not yet initialized.

CLEAN Plex State

A plex is in a CLEAN state when it is known to contain a consistent copy (mirror) of the volume contents and an operation has disabled the volume.
STALE Plex State

If there is a possibility that a plex does not have the complete and current volume contents, that plex is placed in the STALE state. Also, if an I/O error occurs on a plex, the kernel stops using and updating the contents of that plex, and an operation sets the state of the plex to STALE. A vxplex att operation recovers the contents of a STALE plex from an ACTIVE plex. Atomic copy operations copy the contents of the volume to the STALE plexes.
If the system fails for any reason, the TEMPRM state indicates that the operation did not complete successfully. A later operation dissociates and removes TEMPRM plexes.

TEMPRMSD Plex State

The TEMPRMSD plex state is used by vxassist when attaching new plexes. If the operation does not complete, the plex and its subdisks are removed.

IOFAIL Plex State

The IOFAIL plex state is associated with persistent state logging.
NOTE No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all plexes are enabled.

Volume States

Some volume states are similar to plex states. The following are volume states:

CLEAN—The volume is not started (kernel state is DISABLED) and its plexes are synchronized.
the volume is marked CLEAN.

RAID-5 Volume States

RAID-5 volumes have their own set of volume states, as follows:

CLEAN—The volume is not started (kernel state is DISABLED) and its parity is good. The RAID-5 plex stripes are consistent.

ACTIVE—The volume has been started (kernel state is currently ENABLED) or was in use (kernel state was ENABLED) when the system was rebooted.
RAID-5 Volume Layout

NOTE You may need an additional license to use this feature.

A RAID-5 volume consists of one or more plexes, each of which consists of one or more subdisks. Unlike mirrored volumes, not all plexes in a RAID-5 volume serve to keep a mirror copy of the volume data.
stored. Any other plexes associated with the volume are used to log information about data and parity being written to the volume. These plexes are referred to as RAID-5 log plexes or RAID-5 logs. RAID-5 logs can be concatenated or striped plexes, and each RAID-5 log associated with a RAID-5 volume has a complete copy of the logging information for the volume. It is suggested that you have a minimum of two RAID-5 log plexes for each RAID-5 volume.
Creating RAID-5 Volumes

NOTE You may need an additional license to use this feature.

You can create RAID-5 volumes by using either the vxassist command (recommended) or the vxmake command. Both approaches are described in this section. A RAID-5 volume contains a RAID-5 plex that consists of two or more subdisks located on two or more physical disks. Only one RAID-5 plex can exist per volume.
\sd=disk00-01,disk01-00,disk02-00,disk03-00

Note that because four subdisks are specified with no specification of columns, the vxmake command assumes a four-column RAID-5 plex and places one subdisk in each column. Striped plexes are created using this same method.
Initializing RAID-5 Volumes

NOTE You may need an additional license to use this feature.

A RAID-5 volume must be initialized if it was created by the vxmake command and has not yet been initialized, or if it has been set to an uninitialized state. If a RAID-5 volume is created using the vxassist command with default options, then the volume is initialized by the vxassist command.
Failures and RAID-5 Volumes

NOTE You may need an additional license to use this feature.

Failures are seen in two varieties: system failures and disk failures. A system failure means that the system has abruptly ceased to operate due to an operating system panic or power failure. Disk failures imply that the data on some number of disks has become unavailable due to a hardware failure (such as a head crash, electronics failure on the disk, or disk controller failure).
failure. The process of resynchronization consists of reading that data and parity from the logs and writing it to the appropriate areas of the RAID-5 volume. This greatly reduces the amount of time needed for a resynchronization of data and parity. It also means that the volume never becomes truly stale. The data and parity for all stripes in the volume are known at all times, so the failure of a single disk cannot result in the loss of the data within the volume.
pl r5vol-03  r5vol    ENABLED  LOG  1440  CONCAT  -       RW
sd disk05-01 r5vol-03 disk05   0    1440  0       c2t14d0 ENA

The volume r5vol is in degraded mode, as shown by the volume STATE, which is listed as DEGRADED. The failed subdisk is disk01-00, as shown by the flags in the last column; the d indicates that the subdisk is detached, and the S indicates that the subdisk contents are stale. A disk containing a RAID-5 log can also have a failure. This has no direct effect on the operation of the volume.
RAID-5 Recovery

NOTE You may need an additional license to use this feature.

Here are the types of recovery typically needed for RAID-5 volumes:

• parity resynchronization
• stale subdisk recovery
• log plex recovery

These types of recovery are described in the sections that follow.
Parity Recovery

In most cases, a RAID-5 array does not have stale parity. Stale parity only occurs after all RAID-5 log plexes for the RAID-5 volume have failed, and then only if there is a system failure. Even if a RAID-5 volume has stale parity, it is usually repaired as part of the volume start process. If a volume without valid RAID-5 logs is started and the process is killed before the volume is resynchronized, the result is an active volume with stale parity.
regeneration must be kept across reboots; otherwise, the process has to start all over again. To avoid the restart process, parity regeneration is checkpointed. This means that the offset up to which the parity has been regenerated is saved in the configuration database. The -o checkpt=size option controls how often the checkpoint is saved. If the option is not specified, the default checkpoint size is used.
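The checkpointing idea above can be sketched as a loop that persists its progress periodically, so a restart resumes from the last saved offset rather than from zero. This is a conceptual model only; the function names, the unit of work, and the save callback are assumptions, not VxVM internals.

```python
# Sketch of checkpointed parity regeneration: progress is saved every
# `checkpt` units, so a crash loses at most one checkpoint interval.
def regenerate_parity(volume_len, checkpt, saved_offset=0, save=lambda off: None):
    """Regenerate parity from saved_offset to volume_len, checkpointing."""
    offset = saved_offset
    while offset < volume_len:
        # ... regenerate parity for the region at `offset` ...
        offset += 1
        if offset % checkpt == 0:
            save(offset)   # persist progress (the configuration database's role)
    return offset

saved = []
regenerate_parity(volume_len=10, checkpt=4, save=saved.append)
# Progress was saved at offsets 4 and 8; a crash after offset 8 would
# resume from 8 instead of starting over.
```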
Miscellaneous RAID-5 Operations

NOTE You may need an additional license to use this feature.

Many operations exist for manipulating RAID-5 volumes and associated objects. These operations are usually performed by other commands, such as the vxassist command and the vxrecover command, as part of larger operations, such as evacuating disks. These command-line operations are not necessary for light usage of the Volume Manager.
Manipulating RAID-5 Subdisks

As with other subdisks, subdisks of the RAID-5 plex of a RAID-5 volume are manipulated using the vxsd command. Association is done by using the assoc keyword in the same manner as for striped plexes.
marked as STALE and then recovered using VOL_R5_RECOVER operations. Recovery is done either by the vxsd utility or (if the volume is not active) when the volume is started. This means that the RAID-5 volume is degraded for the duration of the operation. Another failure in the stripes involved in the move makes the volume unusable. The RAID-5 volume can also become invalid if the parity of the volume becomes stale.
Unstartable RAID-5 Volumes

A RAID-5 volume is unusable if some part of the RAID-5 plex does not map the volume length:

• the RAID-5 plex cannot be sparse in relation to the RAID-5 volume length

• the RAID-5 plex does not map a region where two subdisks have failed within a stripe, either because they are stale or because they are built on a failed disk

When this occurs, the vxvol start command returns the following error message:

vxvm:vxvol: ERROR: Volume r5vol is not startable
Figure 8-1 Invalid RAID-5 Volume

[The figure shows a RAID-5 plex built from subdisks disk00-00 through disk05-00, with four stripes (W, X, Y, and Z) of data and parity stripe units.]

This example shows four stripes in the RAID-5 array. All parity is stale and subdisk disk05-00 has failed. This makes stripes X and Y unusable because two failures have occurred within those stripes.
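The "two failures within a stripe" rule can be modeled concretely. The sketch below is illustrative only; the stripe layouts and names are assumptions made for the example, and stale parity is counted as one failure, so a stripe that also loses a data column becomes unrecoverable.

```python
# Hypothetical check mirroring the Figure 8-1 discussion: a stripe's
# lost data can be rebuilt only if at most one thing is wrong with it.
def unusable_stripes(stripes, failed_columns, parity_stale):
    """Return stripes whose data cannot be reconstructed: a failed data
    column combined with stale/failed parity, or two failed data columns."""
    bad = []
    for name, roles in stripes.items():   # roles: column -> "data"/"parity"
        lost_data = sum(1 for col, role in roles.items()
                        if role == "data" and col in failed_columns)
        parity_bad = parity_stale or any(
            role == "parity" and col in failed_columns
            for col, role in roles.items())
        if lost_data >= 2 or (lost_data >= 1 and parity_bad):
            bad.append(name)
    return bad

# Example: parity stale everywhere and disk05-00 failed. Stripe X loses
# data on disk05-00 and cannot rebuild it; stripe Z only loses parity,
# which can be regenerated from its intact data.
stripes = {
    "X": {"disk03-00": "data", "disk04-00": "data",
          "disk05-00": "data", "disk00-00": "parity"},
    "Z": {"disk00-00": "data", "disk01-00": "data",
          "disk02-00": "data", "disk05-00": "parity"},
}
print(unusable_stripes(stripes, {"disk05-00"}, parity_stale=True))
```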
have multiple valid RAID-5 logs associated with the array. However, this is not always possible. To start a RAID-5 volume with stale subdisks, you can use the -f option with the vxvol start command. This causes all stale subdisks to be marked as nonstale. Marking takes place before the start operation evaluates the validity of the RAID-5 volume and what is needed to start it.
contents of the volume unusable. It is therefore not recommended.

Step 2. Any existing logging plexes are zeroed and enabled. If all logs fail during this process, the start process is aborted.

Step 3. If no stale subdisks exist, or those that exist are recoverable, the volume is put in the ENABLED kernel state and the volume state is set to ACTIVE. The volume is now started.

Step 4.
Step 6. When all subdisks have been recovered, the volume is placed in the ENABLED kernel state and marked as ACTIVE. It is now started.

Changing RAID-5 Volume Attributes

You can change several attributes of RAID-5 volumes. For RAID-5 volumes, the volume length and RAID-5 log length can be changed by using the vxvol set command.
Step 2. Parity is updated to reflect the contents of the new data region. First, the contents of the old data undergo an exclusive OR (XOR) with the parity (logically removing the old data). The new data is then XORed into the parity (logically adding the new data). The new data and new parity are written to a log.

Step 3. The new parity is written to the parity stripe unit. The new data is written to the data stripe units.
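The two XOR steps above can be demonstrated with a few lines of code. This is a sketch of the arithmetic only, not VxVM code; the one-byte stripe units are an assumption made to keep the example small.

```python
# Minimal model of the read-modify-write parity update: XOR the old
# data out of the parity, then XOR the new data in.
def read_modify_write(old_data, old_parity, new_data):
    removed = bytes(p ^ d for p, d in zip(old_parity, old_data))   # remove old data
    return bytes(r ^ n for r, n in zip(removed, new_data))         # add new data

old_data   = bytes([0b1010])   # stripe unit being rewritten
other_data = bytes([0b0110])   # untouched stripe unit in the same stripe
old_parity = bytes(a ^ b for a, b in zip(old_data, other_data))
new_data   = bytes([0b1111])

new_parity = read_modify_write(old_data, old_parity, new_data)
# The incremental update matches a full recomputation of parity from
# the new data and the untouched data.
assert new_parity == bytes(a ^ b for a, b in zip(new_data, other_data))
```

The payoff is that only the affected data units and the parity unit need to be read and written, rather than the whole stripe.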
Figure 8-2 Read-Modify-Write

[The figure shows a five-column stripe (disks 1 through 5) plus a log; SU = stripe unit. Step 1: reads data from parity stripe unit P0 and data stripe units 0 and 1. Step 2: performs XORs between data and parity to calculate the new parity, then logs the new data and new parity.]
written to the data stripe units. The entire stripe is written in a single write. See Figure 8-3, Full-Stripe Write.

Figure 8-3 Full-Stripe Write

[The figure shows new data for disks 1 through 4 and parity for disk 5 across a five-column stripe plus a log; SU = stripe unit. Step 1: performs XORs between data and parity to calculate the new parity, then logs the new data and new parity.]
Step 2. The new data is XORed with the old, unaffected data to generate a new parity stripe unit. The new data and resulting parity are logged.

Step 3. The new parity is written to the parity stripe unit. The new data is written to the data stripe units. All stripe units are written in a single write. See Figure 8-4, Reconstruct-Write.
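Step 2 of the reconstruct-write above amounts to XORing every data unit in the stripe, old and new, to produce the parity. The sketch below models only that arithmetic; the byte-sized stripe units are an assumption for the example.

```python
# Sketch of reconstruct-write parity: the new parity is the XOR of all
# data stripe units (unaffected old data plus the new data).
from functools import reduce

def reconstruct_write(unaffected_units, new_units):
    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))
    return reduce(xor, unaffected_units + new_units)

parity = reconstruct_write([bytes([0b0011])],
                           [bytes([0b0101]), bytes([0b1001])])
```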
9. Performance Monitoring
Introduction

Logical volume management is a tool that can improve overall system performance. This chapter provides performance management and configuration guidelines that can help you benefit from the advantages provided by Volume Manager.
Performance Guidelines

This section contains information on Volume Manager features. Volume Manager provides flexibility in configuring storage to improve system performance.
By striping this "high traffic" data across portions of multiple disks, you can increase access bandwidth to this data. Figure 9-1, Use of Striping for Optimal Data Access, is an example of a single volume (Hot Vol) that has been identified as a data access bottleneck. This volume is striped across four disks, leaving the remainder of those four disks free for use by less-heavily used volumes.
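The way striping spreads consecutive regions across disks can be made concrete with a small address-mapping sketch. The stripe-unit size and column count below are example parameters, not VxVM defaults.

```python
# Hypothetical mapping of a volume block offset to (column, offset in
# column) for a striped plex: consecutive stripe units rotate round-robin
# across the columns, which is what spreads "high traffic" data out.
def stripe_map(offset, stripe_unit, columns):
    unit_index = offset // stripe_unit
    column = unit_index % columns
    stripe_number = unit_index // columns
    offset_in_column = stripe_number * stripe_unit + offset % stripe_unit
    return column, offset_in_column

# With a 64-block stripe unit across 4 disks, consecutive 64-block
# chunks land on successive disks before wrapping around.
assert stripe_map(0, 64, 4) == (0, 0)
assert stripe_map(64, 64, 4) == (1, 0)
assert stripe_map(256, 64, 4) == (0, 64)
```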
30 percent writes), then mirroring can result in somewhat reduced performance. To provide optimal performance for different types of mirrored volumes, Volume Manager supports these read policies:

• The round-robin read policy (round), where read requests to the volume are satisfied in a round-robin manner from all plexes in the volume.
decrease of effective disk space utilization. Performance can also be improved by striping across half of the available disks to form one plex and across the other half to form another plex. When feasible, this is usually the best way to configure the Volume Manager on a set of disks for best performance with reasonable reliability.

Mirroring and Striping

NOTE You may need an additional license to use this feature.
streams can operate concurrently on separate devices. Since mirroring is most often used to protect against loss of data due to disk failures, it may sometimes be necessary to use mirroring for write-intensive workloads. In these instances, mirroring can be combined with striping to deliver both high availability and performance. See "Layered Volumes".

Using RAID-5

NOTE You may need an additional license to use this feature.
Performance Monitoring

There are two sets of priorities for a system administrator. One set is physical, concerned with the hardware. The other set is logical, concerned with managing the software and its operations.

Performance Priorities

The physical performance characteristics address the balance of the I/O on each drive and the concentration of the I/O within a drive to minimize seek time.
reports statistics that reflect the activity levels of Volume Manager objects since boot time. Statistics for a specific Volume Manager object, or all objects, can be displayed at one time. A disk group can also be specified, in which case statistics for objects in that disk group only are displayed. If no disk group is specified, rootdg is assumed. The amount of information displayed depends on what options are specified to the vxstat utility.
vol testvol 0 0 0 0 0.0 0.0

Additional volume statistics are available for RAID-5 configurations. See the vxstat(1M) manual page for more information.

Tracing I/O (vxtrace Command)

The vxtrace command traces operations on volumes. The vxtrace command either prints kernel I/O errors or I/O trace records to the standard output or writes the records to a file in binary format.
Performance Monitoring Performance Monitoring vol vol vol vol local 49477 49230 rootvol 102906 342664 src 79174 23603 swapvol 22751 32364 507892 1085520 425472 182001 204975 1962946 139302 258905 28.5 28.1 22.4 25.3 33.5 25.6 30.9 323.2 This output helps to identify volumes with an unusually large number of operations or excessive read or write times. To display disk statistics, use the vxstat -d command.
following command:

# vxassist move archive !disk03 disk04

This command indicates that the volume is to be reorganized so that no part remains on disk03.

NOTE The graphical user interface provides an easy way to move pieces of volumes between disks and may be preferable to using the command line.

If there are two busy volumes (other than the root volume), move them so that each is on a different disk.
CAUTION Striping a volume, or splitting a volume across multiple disks, increases the chance that a disk failure results in failure of that volume. For example, if five volumes are striped across the same five disks, then failure of any one of the five disks requires that all five volumes be restored from a backup. If each volume were on a separate disk, only one volume would need to be restored.
Tuning the Volume Manager

This section describes the mechanisms for controlling the resources used by the Volume Manager. Adjustments may be required for some of the tunable values to obtain best performance (depending on the type of system resources available).

General Tuning Guidelines

The Volume Manager is tuned for most configurations ranging from small systems to larger servers.
vol_subdisk_num

This tunable controls the maximum number of subdisks that can be attached to a single plex. There is no theoretical limit to this number, but for practical purposes it has been limited to a default value of 4096. This default can be changed if required.

vol_maxioctl

This value controls the maximum size of data that can be passed into the Volume Manager via an ioctl call.
which prevents full-stripe reads and writes. This throttles the volume I/O throughput for sequential or larger I/O. This tunable limits the size of an I/O at the top of the Volume Manager, not at the individual disk. For example, if you have an 8x64K stripe, then a 256K value only allows I/Os that use half the disks in the stripe, and thus cuts potential throughput in half.
region. The Volume Manager kernel currently sets the default value for this tunable to 1024 sectors. Larger region sizes tend to improve the cache hit ratio for regions. This improves write performance, but it also prolongs recovery time.

voldrl_max_drtregs

This tunable specifies the maximum number of dirty regions that can exist on the system at any time.
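The trade-off between region size and recovery time comes from how writes map to dirty-region bits. The sketch below models only that bookkeeping; it is illustrative, not VxVM code, though the 1024-sector default is taken from the text above.

```python
# Illustrative dirty region logging bookkeeping: a write marks one bit
# per region of `region_size` sectors it touches. Larger regions mean
# fewer bits set (better cache hit ratio) but more data to resynchronize
# per dirty bit after a crash.
def mark_dirty(dirty_regions, sector, length, region_size=1024):
    """Mark every region touched by a write of `length` sectors."""
    first = sector // region_size
    last = (sector + length - 1) // region_size
    dirty_regions.update(range(first, last + 1))

dirty = set()
mark_dirty(dirty, sector=1000, length=100)   # straddles regions 0 and 1
# After a crash, only the regions recorded here need recovery.
```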
voliot_iobuf_limit

This value sets a limit on the amount of memory that can be used for storing tracing buffers in the kernel. Tracing buffers are used by the Volume Manager kernel to store tracing event records. As trace buffers are requested to be stored in the kernel, the memory for them is drawn from this pool. Increasing this size can allow additional tracing to be performed at the expense of system memory usage.
expense of system memory. Decreasing the size of the buffer could lead to a situation where an error cannot be detected via the tracing device. Applications that depend on error tracing to perform some responsive action are dependent on this buffer.

voliot_max_open

This value controls the maximum number of tracing channels that can be open simultaneously. Tracing channels are clone entry points into the tracing device driver.
voliomem_maxpool_sz This tunable defines the maximum memory the Volume Manager can draw from the system for its internal purposes. The default value is 4 megabytes. This tunable has a direct impact on VxVM performance. voliomem_chunk_size System memory is allocated to and released from the Volume Manager in chunks of this size.
The Number of Configuration Copies for a Disk Group Choosing the number of configuration copies for a disk group is a trade-off between redundancy and performance. As a general rule, the fewer configuration copies in a disk group, the quicker the group can be initially accessed, the faster the initial start of vxconfigd(1M) can proceed, and the quicker transactions can be performed on the disk group.
Glossary Active/Active disk arrays This type of multipathed disk array allows you to access a disk in the disk array through all the paths to the disk simultaneously, without any performance degradation. Active/Passive disk arrays This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time. Using a path other than the designated active path results in severe performance degradation in some disk arrays.
communicate with each other and inform VxVM of changes in cluster membership. cluster-shareable disk group A disk group in which the disks are shared by multiple hosts (also referred to as a shared disk group). column A set of one or more subdisks within a striped plex. Striping is achieved by allocating data alternately and evenly across the columns within a plex. concatenation A layout style characterized by subdisks that are arranged sequentially and contiguously.
term device name can also be used to refer to the disk access name. disk controller The controller that is represented as the parent node of the disk by the Operating System is called the disk controller by the multipathing subsystem of Volume Manager. disk access records Configuration records used to specify the access path to particular disks.
disk group ID A unique identifier used to identify a disk group. disk ID A universally unique identifier that is given to each disk and can be used to identify the disk, even if it is moved. disk media name A logical or administrative name chosen for the disk, such as disk03. The term disk name is also used to refer to the disk media name. disk media record A configuration record that identifies a particular disk, by disk ID, and gives that disk a logical (or administrative) name.
hostid A string that identifies a host to the Volume Manager. The hostid for a host is stored in its volboot file, and is used in defining ownership of disks and disk groups. hot-relocation A technique of automatically restoring redundancy and access to mirrored and RAID-5 volumes when a disk fails. This is done by relocating the affected subdisks to disks designated as spares and/or free space in the same disk group.
node join The process through which a node joins a cluster and gains access to shared disks. object An entity that is defined to and recognized internally by the Volume Manager. The VxVM objects are: volume, plex, subdisk, disk, and disk group. There are actually two types of disk objects—one for the physical aspect of the disk and the other for the logical aspect. parity A calculated value that can be used to reconstruct data after a failure.
primary path In Active/Passive type disk arrays, a disk can be bound to one particular controller on the disk array or owned by a controller. The disk can then be accessed using the path through this particular controller. See “path”, “secondary path”. private disk group A disk group in which the disks are accessed by only one specific host. private region A region of a physical disk used to store private, structured Volume Manager information.
root partition The disk region on which the root file system resides. root volume The VxVM volume that contains the root file system, if such a volume is designated by the system configuration. rootability The ability to place the root file system and the swap device under Volume Manager control. The resulting volumes can then be mirrored to provide redundancy and allow recovery in the event of disk failure.
stripe size The sum of the stripe unit sizes comprising a single stripe across all columns being striped. stripe unit Equally-sized areas that are allocated alternately on the subdisks (within columns) of each striped plex. In an array, this is a set of logically contiguous blocks that exist on each disk before allocations are made from the next disk in the array. swap area A disk region used to hold copies of memory pages swapped out by the system pager process.
volume A virtual disk, representing an addressable range of disk blocks used by applications such as file systems or databases. A volume is a collection of one to 32 plexes. volume configuration daemon The volume configuration daemon (vxconfigd) must be running before VxVM operations can be performed. volume configuration device The volume configuration device (/dev/vx/config) is the interface through which all configuration changes to the volume device driver are performed.
clean node shutdown The ability of a node to leave the cluster gracefully when all access to shared volumes has ceased. cluster A set of hosts that share a set of disks. cluster-shareable disk group A disk group in which the disks are shared by more than one host. In a cluster, an atomic operation takes place either on all nodes or not at all.
A adding a disk, 153, 160 a disk to a disk group, 187, 209 a DRL log, 262 a mirror to a volume, 248 a RAID-5 log, 260 adding disks, 157 format, 157 associating log subdisks, 283 vxsd, 283 associating mirrors vxmake, 265 associating plexes vxmake, 265 associating subdisks vxmake, 282 vxsd, 282 B backing up a volume, 255 backups, 82, 94, 287 mirrors, 251 vxassist, 94, 287 Boot Disk recovery, 336 C changing volume attributes, 382 checkpoint, 374 cli disk device names, 85 cluster disks, 306 shared objects, 297
copying mirrors vxplex, 274 creating a concatenated volume, 238 a RAID-5 volume, 240 a spanned volume, 238 a striped volume, 239 a volume, 131, 132, 133, 134, 237 a volume on a VM disk, 239 creating mirrors vxmake, 264 creating RAID-5 volumes, 367 creating subdisks, 275 vxmake, 275 creating volumes, 366, 367 manually, 64 vxassist, 64 D daemons, 71 configuration, 135 hot-relocation, 103 Volume Manager, 71 vxrelocd, 105, 139, 349 data preserving, 81 redundancy, 45 data assignment, 391 defaults file vxassist,
displaying information, 231 enabling, 154 importing, 218, 223, 224, 225 initializing, 212 moving, 223, 224, 225 moving between systems, 224 removing, 228 renaming, 216 using, 227 disk information, displaying, 205 disk media name, 31, 149 disk names, 209 disk utilities, 150, 211 disks, 149 adding, 157 detached, 175, 354 disabling, 178 displaying information, 205 enabling, 173 failure, 347, 370 and hot-relocation, 103 and recovery, 334 hot-relocation spares, 350 in a cluster, 306 initialization, 157 mirroring
volume configuration, 254 displaying subdisks vxprint, 89 dissociating mirrors vxplex, 252, 266 DMP dynamic multipathing, 122 load balancing, 123 path failover mechanism, 122 DMP configuration, 135 DRL, 262 dynamic multipathing DMP, 122 E enabling a disk, 155, 173 a disk group, 154 access to a disk group, 218 exiting vxdiskadm, 253 F failed disks, 347 detecting, 175, 354 failures, 374 disk, 370 system, 369 Fast Mirror Resynchronization see FastResync, 114, 289, 313, 314 FastResync, 114, 288, 313 FMR see FastResync
multiple, 297 hot-relocation, 103, 155 designating spares, 104 modifying vxrelocd, 138, 349 removing spares, 185, 192 I I/O statistics, 398 obtaining, 396 tracing, 398, 401 I/O daemon, 72 importing disk groups, 224 importing a disk group, 154, 218 increasing volume size, 243 information, 286 initializing disks, 157 J joining subdisks vxsd, 281 L layout left-symmetric, 50 listing mirrors vxprint, 267 load balancing DMP, 123 log adding, 262 RAID-5, 260 log plexes, 364 log subdisks, 78, 112, 283, 308 associati
mirrors, 33, 35 adding to a volume, 248 backup using, 251 creating, 264 displaying, 267 dissociating, 252, 266 offline, 241, 242, 270 online, 241, 242 recover, 176, 355 removing, 252, 266 moving volumes from a disk, 154, 258, 292 moving disk groups vxdg, 223, 224 vxrecover, 223, 224 moving disks, 172 moving mirrors, 273 vxplex, 273 moving subdisks vxsd, 277 N name disk access, 30 disk media, 31, 149 nodes, 297 O OFFLINE, 270 offlining a disk, 155, 178 online backup, 94, 287 online relayout, 97 failure recov
management, 390 monitoring, 396 optimizing, 391 priorities, 396 performance data, 396 getting, 396 using, 398 plex kernel states, 361 DETACHED, 361 DISABLED, 361 ENABLED, 361 plex states, 358 ACTIVE, 359 CLEAN, 359 EMPTY, 358 IOFAIL, 361 OFFLINE, 360 STALE, 360 TEMP, 360 TEMPRM, 360 plex states cycle, 361 plexes, 33, 364 and volume, 35 as mirrors, 35 attach, 137 attaching, 270, 271 changing information, 268 copying, 274 creating, 264 definition, 33 detach, 137 detaching, 270 displaying, 267 listing, 267 mov
subdisk moves, 376 RAID-5 log, 260 RAID-5 plexes, 364 RAID-5 volume creating, 240 RAID-5 volumes, 380 read policies, 393 reattaching disks, 335 reconfiguration procedures, 338 reconstructing-read, 370 Recovery VxVM Boot Disk, 336 recovery, 334, 380 logs, 374 RAID-5 volumes, 372, 380 volumes, 257 reinitializing disks, 197 reinstallation, 338, 340 removing a disk, 153, 182 a DRL, 262 a physical disk, 182 a RAID-5 log, 261 a volume, 247 removing disk groups, 228 vxdg, 228 removing disks vxdg, 183 removing mirro
snapshot, 94, 95, 287, 288 RAID-5, 94, 287 spanned volume creating, 238 spanning, 38 splitting subdisks vxsd, 280 standard disk devices, 148 starting vxdiskadm, 205 starting volumes, 377 states plex, 358 volume, 362 Storage Administrator, 56 storage layout conversion, 97 stripe column, 40 stripe units, 41 striped plex, 40 striped volume creating, 239 striping, 40, 391, 394 subdisk moves RAID-5, 376 subdisks, 64 associating, 89, 282 changing information, 286 creating, 275 displaying, 89 dissociating, 285 joi
using I/O statistics, 398 using performance data, 398 utility descriptions vxassist, 131 vxdctl, 135 vxedit, 128 vxmake, 136 vxmend, 138 vxplex, 137 vxprint, 138 vxsd, 138 vxstat, 140 vxvol, 143 V VM disks, 31 definition, 31 volume kernel states, 363 DETACHED, 363 DISABLED, 363 ENABLED, 363 Volume Manager, 56, 58, 84 and operating system, 60 daemons, 71 layouts, 62 objects, 58 Volume Manager graphical user interface, 150 volume reconfiguration, 302 volume resynchronization, 110 volume states, 362 ACTIVE, 36
mirroring on disk, 249, 291 moving from disk, 258, 292 operations, 140 RAID-5 creating, 240 read policy, 245 recovery, 257 removing, 247 removing a RAID-5 log from, 261 spanned creating, 238 starting, 241, 242 stopping, 241, 242 striped creating, 239 vxassist, 81, 94, 95, 131, 132, 133, 134, 227, 243, 248, 287, 288, 357 backup, 94, 287 creating volumes, 64 defaults, 133 description of, 131 growby, 243 growto, 243 shrinkby, 243 shrinkto, 243 vxassist addlog, 260 vxassist growby, 244 vxassist growto, 243 vxas
vxinfo, 356 vxiod, 71, 72 vxmake, 136, 248, 264, 265, 275, 282, 366, 367 associating mirrors, 265 associating subdisks, 282 creating mirrors, 264 creating subdisks, 275 description of, 136 vxmend, 138, 241, 242, 257, 270, 271 vxplex, 137, 248, 251, 252, 265, 270, 271, 273 copying mirrors, 274 description of, 137 dissociating mirrors, 252, 266 moving mirrors, 273 vxprint, 86, 89, 138, 175, 254, 267, 354 description of, 138 displaying subdisks, 89 listing mirrors, 267 vxreattach, 335 vxrecover, 176, 329, 355