VERITAS Volume Manager 4.1 Administrator's Guide
Disclaimer

The information contained in this publication is subject to change without notice. VERITAS Software Corporation makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software Corporation shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.
Preface

The VERITAS Volume Manager Administrator's Guide provides information on how to use VERITAS Volume Manager (VxVM) from the command line. This guide is intended for system administrators responsible for installing, configuring, and maintaining systems under the control of VERITAS Volume Manager.
How This Guide Is Organized

This guide is organized as follows:

◆ Understanding VERITAS Volume Manager
◆ Administering Disks
◆ Administering Dynamic Multipathing (DMP)
◆ Creating and Administering Disk Groups
◆ Creating and Administering Subdisks
◆ Creating and Administering Plexes
◆ Creating Volumes
◆ Administering Volumes
◆ Administering Volume Snapshots
◆ Creating and Administering Volume Sets
◆ Configuring Off-Host Processing
◆ Administering Hot-Relocation
Conventions

◆ monospace: Used for path names, commands, output, directory and file names, functions, and parameters. Example: Read tunables from the /etc/vx/tunefstab file.
◆ monospace (bold): Indicates user input. Examples: # ls pubs and C:\> dir pubs
◆ italic: Identifies book titles, new terms, emphasized text, and variables replaced with a name or value. Examples: See the ls(1) manual page for more information. See the User's Guide for details.
Getting Help

For technical assistance, visit http://support.veritas.com and select phone or email support. This site also provides access to resources such as TechNotes, product alerts, software downloads, hardware compatibility lists, and the VERITAS customer email notification service. Use the Knowledge Base Search feature to access additional product information, including current and past releases of product documentation.
1 Understanding VERITAS Volume Manager

VERITAS Volume Manager (VxVM) is a storage management subsystem that allows you to manage physical disks as logical devices called volumes. A volume is a logical device that appears to data management systems as a physical disk. VxVM provides easy-to-use online disk storage management for computing environments and Storage Area Network (SAN) environments. Through support of Redundant Array of Independent Disks (RAID), VxVM protects against disk and hardware failure.
Further information on administering VERITAS Volume Manager may be found in the following documents: ◆ VERITAS Storage Foundation Cross-Platform Data Sharing Administrator’s Guide Provides more information on using the Cross-platform Data Sharing (CDS) feature of VERITAS Volume Manager, which allows you to move VxVM disks and objects between machines that are running under different operating systems. Note CDS requires a VERITAS Storage Foundation license.
VxVM and the Operating System

VxVM operates as a subsystem between your operating system and your data management systems, such as file systems and database management systems. VxVM is tightly coupled with the operating system. Before a disk can be brought under VxVM control, the disk must be accessible through the operating system device interface.
How VxVM Handles Storage Management

VxVM uses two types of objects to handle storage management: physical objects and virtual objects.

◆ Physical Objects—Physical disks or other hardware with block and raw operating system device interfaces that are used to store data.
◆ Virtual Objects—When one or more physical disks are brought under the control of VxVM, it creates virtual objects called volumes on those physical disks.
How VxVM Handles Storage Management For HP-UX 11.x, all the disks are treated and accessed by VxVM as entire physical disks using a device name such as c#t#d#. Disk Arrays Performing I/O to disks is a relatively slow process because disks are physical devices that require time to move the heads to the correct position on the disk before reading or writing. If all of the read or write operations are done to individual disks, one at a time, the read-write time can become unmanageable.
How VxVM Handles Storage Management Multipathed Disk Arrays Some disk arrays provide multiple ports to access their disk devices. These ports, coupled with the host bus adaptor (HBA) controller and any data bus or I/O processor local to the array, make up multiple hardware paths to access the disk devices. Such disk arrays are called multipathed disk arrays.
Example Configuration for Disk Enclosures Connected via a Fibre Channel Hub/Switch (figure: a host controller c1 connects through a Fibre Channel hub or switch to the disk enclosures enc0, enc1 and enc2)

In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure. For example, the device names for the disks in enclosure enc0 are named enc0_0, enc0_1, and so on.
How VxVM Handles Storage Management same name to VxVM for all of the paths over which it can be accessed. For example, the disk device enc0_0 represents a single disk for which two different paths are known to the operating system, such as c1t99d0 and c2t99d0. To take account of fault domains when configuring data redundancy, you can control how mirrored volumes are laid out across enclosures as described in “Mirroring across Targets, Controllers or Enclosures” on page 216.
How VxVM Handles Storage Management After installing VxVM on a host system, you must bring the contents of physical disks under VxVM control by collecting the VM disks into disk groups and allocating the disk group space to create logical volumes. Note To bring the physical disk under VxVM control, the disk must not be under LVM control.
How VxVM Handles Storage Management The figure, “Connection Between Objects in VxVM” on page 10, shows the connections between VERITAS Volume Manager virtual objects and how they relate to physical disks. The disk group contains three VM disks which are used to create two volumes. Volume vol01 is simple and has a single plex. Volume vol02 is a mirrored volume with two plexes.
How VxVM Handles Storage Management The various types of virtual objects (disk groups, VM disks, subdisks, plexes and volumes) are described in the following sections. Other types of objects exist in VERITAS Volume Manager, such as data change objects (DCOs), and cache objects, to provide extended functionality. These objects are discussed later in this chapter. Disk Groups A disk group is a collection of disks that share a common configuration, and which are managed by VxVM (see “VM Disks” on page 11).
"VM Disk Example" on page 12 shows a VM disk with a media name of disk01 that is assigned to the physical disk devname.

Subdisks

A subdisk is a set of contiguous disk blocks. A block is a unit of space on the disk. VxVM allocates disk space using subdisks. A VM disk can be divided into one or more subdisks.
A VM disk can contain multiple subdisks, but subdisks cannot overlap or share the same portions of a VM disk. "Example of Three Subdisks Assigned to One VM Disk" on page 13 shows a VM disk with three subdisks. (The VM disk is assigned to one physical disk.)
How VxVM Handles Storage Management Plexes VxVM uses subdisks to build virtual objects called plexes. A plex consists of one or more subdisks located on one or more physical disks. For example, see the plex vol01-01 shown in “Example of a Plex with Two Subdisks.
A volume may be created under the following constraints:

◆ Its name can contain up to 31 characters.
◆ It can consist of up to 32 plexes, each of which contains one or more subdisks.
◆ It must have at least one associated plex that has a complete copy of the data in the volume with at least one associated subdisk.
◆ All subdisks within a volume must belong to the same disk group.
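As a simple, hypothetical illustration of how these objects are used together, the following vxassist command creates a 10-gigabyte concatenated volume; the disk group name mydg and the volume name vol01 are examples only:

# vxassist -g mydg make vol01 10g

Volume creation is described in detail in the "Creating Volumes" chapter.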
In "Example of a Volume with Two Plexes," a volume, vol06, with two data plexes is mirrored. Each plex of the mirror contains a complete copy of the volume data.

Example of a Volume with Two Plexes (figure: volume vol06 with plexes vol06-01 and vol06-02, built from subdisks disk01-01 and disk02-01)

Volume vol06 has the following characteristics:

◆ It contains two plexes named vol06-01 and vol06-02.
◆ Each plex contains one subdisk.
Volume Layouts in VxVM

A VxVM virtual device is defined by a volume. A volume has a layout defined by the association of a volume to one or more plexes, each of which maps to subdisks. The volume presents a virtual device interface that is exposed to other applications for data access. These logical building blocks re-map the volume address space through which I/O is re-directed at run-time. Different volume layouts each provide different levels of storage service.
Layout Methods

Data in virtual objects is organized to create volumes by using the following layout methods:

◆ Concatenation and Spanning
◆ Striping (RAID-0)
◆ Mirroring (RAID-1)
◆ Striping Plus Mirroring (Mirrored-Stripe or RAID-0+1)
◆ Mirroring Plus Striping (Striped-Mirror, RAID-1+0 or RAID-10)
◆ RAID-5 (Striping with Parity)

The following sections describe each layout method.
Example of Concatenation (figure: data blocks n, n+1, n+2 and n+3 are mapped through a plex with concatenated subdisks disk01-01 and disk01-03 on VM disk disk01, which is assigned to the physical disk devname)

You can use concatenation with multiple subdisks when there is insufficient contiguous space for the plex on any one disk.
Volume Layouts in VxVM The remaining free space in the subdisk disk02-02 on VM disk disk02 can be put to other uses.
Volume Layouts in VxVM Striping (RAID-0) Note You need a full license to use this feature. Striping (RAID-0) is useful if you need large amounts of data written to or read from physical disks, and performance is important. Striping is also helpful in balancing the I/O load from multi-user applications across multiple disks. By using parallel data transfer to and from multiple disks, striping significantly improves data-access performance.
Striping Across Three Columns (figure: a plex with three columns, each built from one subdisk; stripe 1 is made up of stripe units su1, su2 and su3, and stripe 2 of su4, su5 and su6)

A stripe consists of the set of stripe units at the same positions across all columns. In the figure, stripe units 1, 2, and 3 constitute a single stripe.
"Example of a Striped Plex with One Subdisk per Column" on page 23 shows a striped plex with three equal-sized, single-subdisk columns. There is one column per physical disk. This example shows three subdisks that occupy all of the space on the VM disks. It is also possible for each subdisk in a striped plex to occupy only a portion of the VM disk, which leaves free space for other disk management tasks.
Volume Layouts in VxVM “Example of a Striped Plex with Concatenated Subdisks per Column” on page 24 illustrates a striped plex with three columns containing subdisks of different sizes. Each column contains a different number of subdisks. There is one column per physical disk. Striped plexes can be created by using a single subdisk from each of the VM disks being striped across.
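For example, a striped volume with three columns and a 64-kilobyte stripe unit size could be created with a command such as the following; the disk group, volume name, and size shown are illustrative:

# vxassist -g mydg make stripevol 2g layout=stripe ncol=3 stripeunit=64k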
Volume Layouts in VxVM Mirroring (RAID-1) Mirroring uses multiple mirrors (plexes) to duplicate the information contained in a volume. In the event of a physical disk failure, the plex on the failed disk becomes unavailable, but the system continues to operate using the unaffected mirrors. Note Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy.
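For example, the following command creates a volume with two mirrored plexes on separate disks; the names and size are illustrative:

# vxassist -g mydg make mirrorvol 1g layout=mirror nmirror=2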
Mirrored-Stripe Volume Laid out on Six Disks (figure: two striped plexes, each with three columns, are combined into a mirror to form the mirrored-stripe volume)

See "Creating a Mirrored-Stripe Volume" on page 215 for information on how to create a mirrored-stripe volume. The layout type of the data plexes in a mirror can be concatenated or striped. Even if only one is striped, the volume is still termed a mirrored-stripe volume.
Striped-Mirror Volume Laid out on Six Disks (figure: three underlying mirrored volumes, one per column, are combined by a striped plex to form the striped-mirror volume)

See "Creating a Striped-Mirror Volume" on page 215 for information on how to create a striped-mirror volume.
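For example, the following commands illustrate the difference between the two layouts; the names and sizes are illustrative. The first creates a mirrored-stripe volume and the second a striped-mirror (layered) volume:

# vxassist -g mydg make volms 2g layout=mirror-stripe ncol=3
# vxassist -g mydg make volsm 2g layout=stripe-mirror ncol=3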
How the Failure of a Single Disk Affects Mirrored-Stripe and Striped-Mirror Volumes (figure: in the mirrored-stripe volume, failure of a single disk detaches an entire striped plex and leaves the volume with no redundancy; in the striped-mirror volume, the same failure removes redundancy only from the affected mirror, leaving the volume with partial redundancy)

Compared to mirrored-stripe volumes, striped-mirror volumes are more tolerant of disk failure, and recovery time is shorter.
Volume Layouts in VxVM RAID-5 (Striping with Parity) Note VxVM supports RAID-5 for private disk groups, but not for shareable disk groups in a cluster environment. In addition, VxVM does not support the mirroring of RAID-5 volumes that are configured using VERITAS Volume Manager software. Disk devices that support RAID-5 in hardware may be mirrored. You need a full license to use this feature. Although both mirroring (RAID-1) and RAID-5 provide redundancy of data, they use different methods.
Volume Layouts in VxVM The implementation of RAID-5 in VxVM is described in “VERITAS Volume Manager RAID-5 Arrays” on page 30. Traditional RAID-5 Arrays A traditional RAID-5 array is several disks organized in rows and columns. A column is a number of disks located in the same ordinal position in the array. A row is the minimal number of disks necessary to support the full width of a parity stripe. The figure, “Traditional RAID-5 Array,” shows the row and column arrangement of a traditional RAID-5 array.
below that, and so on for the length of the columns. Equal-sized stripe units are used for each column. For RAID-5, the default stripe unit size is 16 kilobytes. See "Striping (RAID-0)" on page 21 for further information about stripe units.

VERITAS Volume Manager RAID-5 Array (figure: four columns, each built from subdisks, with stripes 1 and 2 laid across columns 0 through 3)

Note Mirroring of RAID-5 volumes is not supported.
Volume Layouts in VxVM next stripe, shifted left one column from the previous parity stripe unit location. If there are more stripes than columns, the parity stripe unit placement begins in the rightmost column again. The figure, “Left-Symmetric Layout,” shows a left-symmetric parity layout with five disks (one per column).
Volume Layouts in VxVM Note Failure of more than one column in a RAID-5 plex detaches the volume. The volume is no longer allowed to satisfy read or write requests. Once the failed columns have been recovered, it may be necessary to recover user data from backups. RAID-5 Logging Logging is used to prevent corruption of data during recovery by immediately recording changes to data and parity to a log area on a persistent device such as a volume on disk or in non-volatile RAM.
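For example, the following command creates a RAID-5 volume with four columns and two RAID-5 logs; the names and size are illustrative:

# vxassist -g mydg make r5vol 10g layout=raid5 ncol=4 nlog=2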
Volume Layouts in VxVM Layered Volumes A layered volume is a virtual VERITAS Volume Manager object that is built on top of other volumes. The layered volume structure tolerates failure better and has greater redundancy than the standard volume structure. For example, in a striped-mirror layered volume, each mirror (plex) covers a smaller area of storage space, so recovery is quicker than with a standard mirrored volume.
Volume Layouts in VxVM The figure, “Example of a Striped-Mirror Layered Volume” on page 34, illustrates the structure of a typical layered volume. It shows subdisks with two columns, built on underlying volumes with each volume internally mirrored. The volume and striped plex in the “Managed by User” area allow you to perform normal tasks in VxVM. User tasks can be performed only on the top-level volume of a layered volume.
Online Relayout

Note You need a full license to use this feature.

Online relayout allows you to convert between storage layouts in VxVM, with uninterrupted data access. Typically, you would do this to change the redundancy or performance characteristics of a volume. VxVM adds redundancy to storage either by duplicating the data (mirroring) or by adding parity (RAID-5).
Online Relayout The default size of the temporary area used during the relayout depends on the size of the volume and the type of relayout. For volumes larger than 50MB, the amount of temporary space that is required is usually 10% of the size of the volume, from a minimum of 50MB up to a maximum of 1GB. For volumes smaller than 50MB, the temporary space required is the same as the size of the volume.
Online Relayout The following are examples of operations that you can perform using online relayout: ◆ Change a RAID-5 volume to a concatenated, striped, or layered volume (remove parity). See “Example of Relayout of a RAID-5 Volume to a Striped Volume” on page 38. Note that removing parity (shown by the shaded area) decreases the overall storage space that the volume requires.
◆ Change the column stripe width in a volume. See "Example of Increasing the Stripe Width for the Columns in a Volume" on page 39.

For details of how to perform online relayout operations, see "Performing Online Relayout" on page 254. For information about the relayout transformations that are possible, see "Permitted Relayout Transformations" on page 255.
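For example, assuming a concatenated volume named vol04 in the disk group mydg, the following command uses online relayout to convert it to a two-column striped layout while the volume remains online:

# vxassist -g mydg relayout vol04 layout=stripe ncol=2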
Online Relayout ◆ Online relayout cannot transform sparse plexes, nor can it make any plex sparse. (A sparse plex is not the same size as the volume, or has regions that are not mapped to any subdisk.) ◆ The number of mirrors in a mirrored volume cannot be changed using relayout. ◆ Only one relayout may be applied to a volume at a time. Transformation Characteristics Transformation of data from one layout to another involves rearrangement of data in the existing layout to the new layout.
Volume Resynchronization

When storing data redundantly and using mirrored or RAID-5 volumes, VxVM ensures that all copies of the data match exactly. However, under certain conditions (usually due to complete system failures), some redundant data on a volume can become inconsistent or unsynchronized. The mirrored data is not exactly the same as the original data.
Dirty Region Logging (DRL) The process of resynchronization can impact system performance. The recovery process reduces some of this impact by spreading the recoveries to avoid stressing a specific disk or controller. For large volumes or for a large number of volumes, the resynchronization process can take time. These effects can be addressed by using dirty region logging (DRL) and FastResync (fast mirror resynchronization) for mirrored volumes, or by ensuring that RAID-5 volumes have valid RAID-5 logs.
Dirty Region Logging (DRL) On restarting a system after a crash, VxVM recovers only those regions of the volume that are marked as dirty in the dirty region log. Log Subdisks and Plexes DRL log subdisks store the dirty region log of a mirrored volume that has DRL enabled. A volume with DRL has at least one log subdisk; multiple log subdisks can be used to mirror the dirty region log. Each log subdisk is associated with one plex of the volume. Only one log subdisk can exist per plex.
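For example, DRL can be added to an existing mirrored volume with a command such as the following; the disk group and volume names are illustrative:

# vxassist -g mydg addlog mirrorvol logtype=drl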
SmartSync Recovery Accelerator

Note The SmartSync Recovery Accelerator feature is not supported by VxVM 4.1 on the HP-UX 11i v3 platform.

The SmartSync feature of VERITAS Volume Manager increases the availability of mirrored volumes by only resynchronizing changed data. (The process of resynchronizing mirrored databases is also sometimes referred to as resilvering.)
Volume Snapshots Redo Log Volume Configuration A redo log is a log of changes to the database data. Because the database does not maintain changes to the redo logs, it cannot provide information about which sections require resilvering. Redo logs are also written sequentially, and since traditional dirty region logs are most useful with randomly-written data, they are of minimal use for reducing recovery time for redo logs.
Volume Snapshot as a Point-In-Time Image of a Volume (figure: at time T2 a snapshot volume is created from the original volume; at T3 the snapshot still retains the image taken at T2; at T4 the snapshot is updated by resynchronizing it from the original volume)

The traditional type of volume snapshot in VxVM is of the third-mirror break-off type.
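As an illustration of the third-mirror break-off model, the following commands add a snapshot mirror to a volume and then break it off as a separate snapshot volume; the names used are illustrative:

# vxassist -g mydg snapstart vol01
# vxassist -g mydg snapshot vol01 snapvol01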
Volume Snapshots Release 4.0 of VxVM introduced full-sized instant snapshots and space-optimized instant snapshots, which offer advantages over traditional third-mirror snapshots such as immediate availability and easier configuration and administration. You can also use a third-mirror break-off usage model with full-sized snapshots, where this is necessary for write-intensive applications. For more information, see the following sections: ◆ “Full-Sized Instant Snapshots” on page 265.
Comparison of Snapshot Features for Supported Snapshot Types

The values below are given for, in order, third-mirror break-off snapshots (vxassist or vxsnap), full-sized instant snapshots (vxsnap), and space-optimized instant snapshots (vxsnap):

◆ Immediately available for use on creation: No / Yes / Yes
◆ Requires less storage space than original volume: No / No / Yes
◆ Can be reattached to original volume: Yes / Yes / No
◆ Can be used to restore contents of original volume: Yes / Yes / Yes
◆ Can quickly be refreshed without being reattached: No / Yes / Yes
FastResync

Note You need a VERITAS FlashSnap or FastResync license to use this feature.

Note The FastResync feature is not supported by VxVM 4.1 on the HP-UX 11i v3 platform.

The FastResync feature (previously called Fast Mirror Resynchronization or FMR) performs quick and efficient resynchronization of stale mirrors (a mirror that is not synchronized).
FastResync Non-Persistent FastResync Non-Persistent FastResync allocates its change maps in memory. If Non-Persistent FastResync is enabled, a separate FastResync map is kept for the original volume and for each snapshot volume. Unlike a dirty region log (DRL), they do not reside on disk nor in persistent store. This has the advantage that updates to the FastResync map have little impact on I/O performance, as no disk updates needed to be performed.
FastResync DCO Volume Versioning The internal layout of the DCO volume changed in VxVM 4.0 to support new features such as full-sized and space-optimized instant snapshots. Because the DCO volume layout is versioned, VxVM software continues to support the version 0 layout for legacy volumes. However, you must configure a volume to have a version 20 DCO volume if you want to take instant snapshots of the volume. Future releases of VERITAS Volume Manager may introduce new versions of the DCO volume layout.
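For example, assuming an existing volume vol01 in the disk group mydg, a command such as the following adds a version 20 DCO and DCO volume so that instant snapshots can be taken; the ndcomirs and regionsize values shown are illustrative:

# vxsnap -g mydg prepare vol01 ndcomirs=2 regionsize=64k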
FastResync Each bit in a map represents a region (a contiguous number of blocks) in a volume’s address space. A region represents the smallest portion of a volume for which changes are recorded in a map. A write to a single byte of storage anywhere within a region is treated in the same way as a write to the entire region.
FastResync How Persistent FastResync Works with Snapshots Persistent FastResync uses a map in a DCO volume on disk to implement change tracking. As for Non-Persistent FastResync, each bit in the map represents a contiguous number of blocks in a volume’s address space. “Mirrored Volume with Persistent FastResync Enabled” on page 53 shows an example of a mirrored volume with two plexes on which Persistent FastResync is enabled. Associated with the volume are a DCO object and a DCO volume with two plexes.
Mirrored Volume After Completion of a snapstart Operation (figure: a mirrored volume with two data plexes and a disabled snapshot plex, together with a data change object and a DCO volume containing three DCO plexes)

Multiple snapshot plexes and associated DCO plexes may be created in the volume by re-running the vxassist snapstart command for traditional snapshots, or the vxsnap make command for space-optimized snapshots. You can create up to a total of 32 plexes (data and log) in a volume.
FastResync Associated with both the original volume and the snapshot volume are snap objects. The snap object for the original volume points to the snapshot volume, and the snap object for the snapshot volume points to the original volume. This allows VxVM to track the relationship between volumes and their snapshots even if they are moved into different disk groups.
Mirrored Volume and Snapshot Volume After Completion of a snapshot Operation (figure: the original mirrored volume retains two data plexes, a data change object, a snap object and a DCO volume with two DCO log plexes; the snapshot volume has its own data plex, data change object, snap object and DCO volume with a single DCO log plex)

Effect of Growing a Volume on the FastResync Map

It is possible to grow the replica volume, or the original volume, and still use FastResync.
FastResync In either case, the part of the map that corresponds to the grown area of the volume is marked as “dirty” so that this area is resynchronized. The snapback operation fails if it attempts to create an incomplete snapshot plex. In such cases, you must grow the replica volume, or the original volume, before invoking any of the commands vxsnap reattach, vxsnap restore, or vxassist snapback.
Hot-Relocation

Note You need a full license to use this feature.

Hot-relocation is a feature that allows a system to react automatically to I/O failures on redundant objects (mirrored or RAID-5 volumes) in VxVM and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks. The subdisks are relocated to disks designated as spare disks and/or free space within the disk group.
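For example, the following commands designate a disk as a hot-relocation spare, and later remove that designation; the disk group and disk media names are illustrative:

# vxedit -g mydg set spare=on mydg05
# vxedit -g mydg set spare=off mydg05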
2 Administering Disks

This chapter describes the operations for managing disks used by the VERITAS Volume Manager (VxVM). This includes placing disks under VxVM control, initializing disks, mirroring the root disk, and removing and replacing disks.

Note Most VxVM commands require superuser or equivalent privileges.
Disk Devices accessed by more than one physical path, usually via different controllers. The number of access paths that are available depends on whether the disk is a single disk, or is part of a multiported disk array that is connected to a system. You can use the vxdisk utility to display the paths subsumed by a metadevice, and to display the status of each path (for example, whether it is enabled or disabled). For more information, see “Administering Dynamic Multipathing (DMP)” on page 101.
Disk Devices ◆ All fabric or non-fabric disks in supported disk arrays are named using the enclosure_name_# format. For example, disks in the supported disk array, enggdept are named enggdept_0, enggdept_1, enggdept_2 and so on. (You can use the vxdmpadm command to administer enclosure names. See “Administering DMP Using vxdmpadm” on page 112 and the vxdmpadm(1M) manual page for more information.) ◆ Disks in the DISKS category (JBOD disks) are named using the Disk_# format.
Disk Devices Each disk that has a private region holds an entire copy of the configuration database for the disk group. The size of the configuration database for a disk group is limited by the size of the smallest copy of the configuration database on any of its member disks. public region An area that covers the remainder of the disk, and which is used for the allocation of storage space to subdisks.
Discovering and Configuring Newly Added Disk Devices described in “Displaying and Changing Default Disk Layout Attributes” on page 77. See the vxdisk(1M) manual page for details of the usage of this file, and for more information about disk types and their configuration.
Discovering and Configuring Newly Added Disk Devices The next command discovers devices that are connected to the specified physical controller: # vxdisk scandisks pctlr=8/12.8.0.255.0 Note The items in a list of physical controllers are separated by + characters. You can use the command vxdmpadm getctlr all to obtain a list of physical controllers. You can specify only one selection argument to the vxdisk scandisks command. Specifying multiple options results in an error.
Discovering and Configuring Newly Added Disk Devices The new disk array does not need to be already connected to the system when the package is installed. If any of the disks in the new disk array are subsequently connected, first run the ioscan command, and then run either the vxdisk scandisks or the vxdctl enable command to include the devices in the VxVM device list.
Discovering and Configuring Newly Added Disk Devices Administering the Device Discovery Layer Dynamic addition of disk arrays is possible because of the existence of the Device Discovery Layer (DDL) which is a facility for discovering disks and their attributes that are required for VxVM and DMP operations. The DDL is administered using the vxddladm utility, which can be used to perform the following tasks: ◆ List the types of arrays that are supported. ◆ Add support for an array to DDL.
Discovering and Configuring Newly Added Disk Devices ARRAY_NAME FJ_GR710, FJ_GR720, FJ_GR730 FJ_GR740, FJ_GR820, FJ_GR840 Excluding Support for a Disk Array Library To exclude all arrays that depend on a particular array library from participating in device discovery, use the following command: # vxddladm excludearray libname=libvxenc.sl This example excludes support for disk arrays that depends on the library libvxenc.sl.
Discovering and Configuring Newly Added Disk Devices Adding Unsupported Disk Arrays to the DISKS Category Caution The procedure in this section ensures that Dynamic Multipathing (DMP) is set up correctly on an array that is not supported by VERITAS Volume Manager. Otherwise, VERITAS Volume Manager treats the independent paths to the disks as separate devices, which can result in data corruption. To add an unsupported disk array: 1.
Discovering and Configuring Newly Added Disk Devices 4. To verify that the array is now supported, enter the following command: # vxddladm listjbod The following is sample output from this command for the example array: VID PID Opcode Page Code Page Offset SNO length ================================================================= SEAGATE ALL PIDs 18 -1 36 12 5.
Discovering and Configuring Newly Added Disk Devices Adding Foreign Devices DDL may not be able to discover some devices that are controlled by third-party drivers, such as those that provide multipathing or RAM disk capabilities. For these devices it may be preferable to use the multipathing capability that is provided by the third-party drivers for some arrays rather than using the Dynamic Multipathing (DMP) feature.
Placing Disks Under VxVM Control Placing Disks Under VxVM Control When you add a disk to a system that is running VxVM, you need to put the disk under VxVM control so that VxVM can control the space allocation on the disk. Unless you specify a disk group, VxVM places new disks in a default disk group according to the rules given in “Rules for Determining the Default Disk Group” on page 133.
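For example, a single disk can be brought under VxVM control by specifying its device address to the vxdiskadd command; the device name shown is illustrative:

# vxdiskadd c1t2d0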
Changing the Disk-Naming Scheme

Note Devices with very long device names (for example, Fibre Channel devices that include worldwide name (WWN) identifiers) are always represented by enclosure-based names. The operation in this section has no effect on such devices.

You can either use enclosure-based naming for disks or the operating system's naming scheme (such as c#t#d#).
Changing the Disk-Naming Scheme The use of this command to change between TPD and operating system-based naming is illustrated in the following example for the enclosure named EMC0: # vxdisk list DEVICE emcpower10 emcpower11 emcpower12 emcpower13 emcpower14 emcpower15 emcpower16 emcpower17 emcpower18 emcpower19 TYPE auto:hpdisk auto:hpdisk auto:hpdisk auto:hpdisk auto:hpdisk auto:hpdisk auto:hpdisk auto:hpdisk auto:hpdisk auto:hpdisk DISK disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 disk10 GROU
Changing the Disk-Naming Scheme For example, to find the physical device that is associated with disk ENC0_21, the appropriate commands would be: # vxdisk list ENC0_21 # vxdmpadm getsubpaths dmpnodename=ENC0_21 To obtain the full pathname for the block and character disk device from these commands, append the displayed device name to /dev/dsk or /dev/rdsk. Regenerating the Persistent Device Name Database The persistent device naming feature, introduced in VxVM 4.
Changing the Disk-Naming Scheme Issues Regarding Persistent Simple/Nopriv Disks with Enclosure-Based Naming If you change from c#t#d# based naming to enclosure-based naming, persistent simple or nopriv disks may be put in the “error” state and cause VxVM objects on those disks to fail.
Installing and Formatting Disks 3. If you want to use enclosure-based naming, use vxdiskadm to add a non-persistent simple disk to the bootdg disk group, change back to the enclosure-based naming scheme, and then run the following command: # /etc/vx/bin/vxdarestore Note If not all the disks in bootdg go into the error state, you need only run vxdarestore to restore the disks that are in the error state and the objects that they contain.
Displaying and Changing Default Disk Layout Attributes

To display or change the default values for initializing disks, select menu item 21 (Change/display the default disk layout) from the vxdiskadm main menu. For disk initialization, you can change the default format and the default length of the private region. The attribute settings for initializing disks are stored in the file /etc/default/vxdisk.
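For example, the /etc/default/vxdisk file might contain entries such as the following to set the default format and private region length; the values shown are illustrative, and the full set of supported attributes is described in the vxdisk(1M) manual page:

format=hpdisk
privlen=2048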
Adding a Disk to VxVM More than one disk or pattern may be entered at the prompt.
Adding a Disk to VxVM Enter the device name or pattern of the disks that you want to initialize at the prompt and press Return. 3. To continue with the operation, enter y (or press Return) at the following prompt: Here are the disks selected. Output format: [Device] list of device names Continue operation? [y,n,q,?] (default: y) y 4.
Adding a Disk to VxVM 8. When prompted whether to exclude the disks from hot-relocation use, enter n (or press Return). Exclude disk from hot-relocation use? [y,n,q,?} (default: n) n 9. To continue with the operation, enter y (or press Return) at the following prompt: The selected disks will be added to the disk group disk group name with default disk names. list of device names Continue with operation? [y,n,q,?] (default: y) y 10.
Adding a Disk to VxVM Note To bring LVM disks under VxVM control, use the Migration Utilities. See the VERITAS Volume Manager Migration Guide for details. 13. At the following prompt, indicate whether you want to continue to initialize more disks (y) or return to the vxdiskadm main menu (n): Add or initialize other disks? [y,n,q,?] (default: n) See “Displaying and Changing Default Disk Layout Attributes” on page 77 for details of how to change the default layout that is used to initialize disks.
Rootability

Rootability indicates that the volumes containing the root file system and the system swap area are under VxVM control. Without rootability, VxVM is usually started after the operating system kernel has passed control to the initial user mode process at boot time. However, if the volume containing the root file system is under VxVM control, the kernel starts portions of VxVM before starting the first user mode process.
Rootability Sharing (CDS) feature, cannot be used. The vxcp_lvmroot and vxrootmir commands automatically configure a suitable disk type on the physical disks that you specify are to be used as VxVM root disks and mirrors. ◆ The volumes on the root disk cannot use dirty region logging (DRL). Root Disk Mirrors All the volumes on a VxVM root disk may be mirrored. The simplest way to achieve this is to mirror the VxVM root disk onto an identically sized or larger physical disk.
Rootability Setting up a VxVM Root Disk and Mirror Note These procedures should be carried out at init level 1. To set up a VxVM root disk and a bootable mirror of this disk, use the vxcp_lvmroot utility.
Rootability Note The target disk for a mirror that is added using the vxrootmir command must be large enough to accommodate the volumes from the VxVM root disk.
Rootability As these operations can take some time, the verbose option, -v, is specified to indicate how far the operation has progressed. For more information, see the vxres_lvmroot (1M) manual page. Adding Swap Disks to a VxVM Rootable System On occasion, you may need to increase the amount of swap space for an HP-UX system. If your system has a VxVM root disk, use the procedure described below. 1. Create a new volume that is to be used for the swap area.
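For example, the additional swap volume might be created and enabled as follows; the disk group, volume name, and size are illustrative:

# vxassist -g rootdg make swapvol2 1g
# swapon /dev/vx/dsk/rootdg/swapvol2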
Removing Disks

Note You must disable a disk group as described in "Disabling a Disk Group" on page 168 before you can remove the last disk in that group. Alternatively, you can destroy the disk group as described in "Destroying a Disk Group" on page 168.

You can remove a disk from a system and move it to another system if the disk is failing or has failed. Before removing the disk from the current system, you must: 1.
Removing Disks 4. At the following verification prompt, press Return to continue: VxVM NOTICE V-5-2-284 Requested operation is to remove disk mydg01 from group mydg. Continue with operation? [y,n,q,?] (default: y) The vxdiskadm utility removes the disk from the disk group and displays the following success message: VxVM INFO V-5-2-268 Removal of disk mydg01 is complete. You can now remove the disk or leave it on your system as a replacement. 5.
Removing a Disk from VxVM Control Removing a Disk with No Subdisks To remove a disk that contains no subdisks from its disk group, run the vxdiskadm program and select item 2 (Remove a disk) from the main menu, and respond to the prompts as shown in this example to remove mydg02: Enter disk name [,list,q,?] mydg02 VxVM NOTICE V-5-2-284 Requested operation is to remove disk mydg02 from group mydg. Continue with operation? [y,n,q,?] (default: y) y VxVM INFO V-5-2-268 Removal of disk mydg02 is complete.
Removing and Replacing Disks

Note A replacement disk should have the same disk geometry as the disk that failed. That is, the replacement disk should have the same bytes per sector, sectors per track, tracks per cylinder and sectors per cylinder, same number of cylinders, and the same number of accessible cylinders.

If failures are starting to occur on a disk, but the disk has not yet failed completely, you can replace the disk.
Removing and Replacing Disks To remove the disk, causing the named volumes to be disabled and data to be lost when the disk is replaced, enter y or press Return. To abandon removal of the disk, and back up or move the data associated with the volumes that would otherwise be disabled, enter n or q and press Return.
Removing and Replacing Disks 6. You can now choose whether the disk is to be formatted as a CDS disk that is portable between different operating systems, or as a non-portable hpdisk-format disk: Enter the desired format [cdsdisk,hpdisk,q,?] (default: cdsdisk) Enter the format that is appropriate for your needs. In most cases, this is the default format, cdsdisk. 7. At the following prompt, vxdiskadm asks if you want to use the default private region size of 2048 blocks.
Removing and Replacing Disks Replacing a Failed or Removed Disk Note You may need to run commands that are specific to the operating system or disk array when replacing a physical disk. Use the following procedure after you have replaced a failed or removed disk with a new disk: 1. Select menu item 4 (Replace a failed or removed disk) from the vxdiskadm main menu. 2.
Removing and Replacing Disks ❖ If the disk has not previously been initialized, press Return at the following prompt to replace the disk: VxVM INFO V-5-2-378 The requested operation is to initialize disk device c0t1d0 and to then use that device to replace the removed or failed disk mydg02 in disk group mydg.
Enabling a Physical Disk Enabling a Physical Disk If you move a disk from one system to another during normal system operation, VxVM does not recognize the disk automatically. The enable disk task enables VxVM to identify the disk and to determine if this disk is part of a disk group. Also, this task re-enables access to a disk that was disabled by either the disk group deport task or the disk device disable (offline) task. To enable a disk, use the following procedure: 1.
Taking a Disk Offline

There are instances when you must take a disk offline. If a disk is corrupted, you must disable the disk before removing it. You must also disable a disk before moving the physical disk device to another location to be connected to another system.

Note Taking a disk offline is only useful on systems that support hot-swap removal and insertion of disks without needing to shut down and reboot the system.

To take a disk offline, use the vxdiskadm command: 1.
Renaming a Disk

If you do not specify a VM disk name, VxVM gives the disk a default name when you add the disk to VxVM control. The VM disk name is used by VxVM to identify the location of the disk or the disk type.
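For example, the following command renames the disk mydg01 in the disk group mydg to mydg03; the names are illustrative:

# vxedit -g mydg rename mydg01 mydg03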
Reserving Disks

By default, the vxassist command allocates space from any disk that has free space. You can reserve a set of disks for special purposes, such as to avoid general use of a particularly slow or a particularly fast disk.
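For example, the following command reserves the disk mydg04 so that vxassist does not allocate space from it unless the disk is explicitly named; setting reserve=off removes the reservation (the names are illustrative):

# vxedit -g mydg set reserve=on mydg04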
Displaying Disk Information Note The phrase online invalid in the STATUS line indicates that a disk has not yet been added to VxVM control. These disks may or may not have been initialized by VxVM previously. Disks that are listed as online are already under VxVM control.
Displaying Disk Information 100 VERITAS Volume Manager Administrator’s Guide
3 Administering Dynamic Multipathing (DMP)

Note You need a full license to use this feature.

The Dynamic Multipathing (DMP) feature of VERITAS Volume Manager (VxVM) provides greater availability, reliability, and performance by using path failover and load balancing. This feature is available for multiported disk arrays from various vendors. See the VERITAS Volume Manager Hardware Notes for information about supported disk arrays.
How DMP Works Active/Passive arrays in explicit failover mode (or non-autotrespass mode) are termed A/PF arrays. DMP issues the appropriate low-level command to make the LUNs fail over to the secondary path. A/P-C, A/PF-C and A/PG-C arrays are variants of the A/P, AP/F and A/PG array types that support concurrent I/O and load balancing by having multiple primary paths into a controller.
How DMP Represents Multiple Physical Paths to a Disk as One Node (figure: a host with controllers c1 and c2 has multiple physical paths to a single disk; DMP maps these paths to a single DMP node that is presented to VxVM)

As described in "Enclosure-Based Naming" on page 6, VxVM implements a disk device naming scheme that allows you to recognize to which array a disk belongs.
How DMP Works Note The persistent device naming feature, introduced in VxVM 4.1, makes the names of disk devices (DMP node names) persistent across system reboots. If operating system-based naming is selected, each disk name is usually set to the name of one of the paths to the disk. After hardware reconfiguration and a subsequent reboot, the operating system may generate different names for the paths to the disks.
How DMP Works Note Both paths of an Active/Passive array are not considered to be on different controllers when mirroring across controllers (for example, when creating a volume using vxassist make specified with the mirror=ctlr attribute). For A/P-C, A/PF-C and A/PG-C arrays, load balancing is performed across all the currently active paths as is done for Active/Active arrays.
How DMP Works Coexistence of DMP with Native Multipathing in HP-UX 11i v3 HP-UX 11i v3 provides native multipathing as part of its mass storage stack. Two kinds of device special file are supported: ◆ Legacy device special files in the /dev/dsk and /dev/rdsk directories. These device files are only created for the first 32,768 paths on a system. Such device files can be discovered by DMP, and coexistence is supported.
Disabling and Enabling Multipathing for Specific Devices at least one primary path is enabled. When all the primary paths have become disabled, all the paths are shown as SECONDARY. Any path attributes are only changed when device discovery is performed. ◆ DMP does not perform failover when a path to a LUN fails. DMP still shows the state of the path as active until HP-UX native multipathing fails an I/O or ioctl request on that path.
Disabling and Enabling Multipathing for Specific Devices ? ?? q Display help about menu Display help about the menuing system Exit from menus Help text and examples are provided onscreen for all the menu items. ❖ Select option 1 to exclude all paths through the specified controller from the view of VxVM. These paths remain in the disabled state until the next reboot, or until the paths are re-included. ❖ Select option 2 to exclude specified paths from the view of VxVM.
Disabling and Enabling Multipathing for Specific Devices 2 3 4 5 6 7 8 Unsuppress a path from VxVM’s view Unsuppress disks from VxVM’s view by specifying a VID:PID combination Remove a pathgroup definition Allow multipathing of all disks on a controller by VxVM Allow multipathing of a disk by VxVM Allow multipathing of disks by specifying a VID:PID combination List currently suppressed/non-multipathed devices ? Display help about menu ?? Display help about the menuing system q Exit from menus ❖ Select o
Enabling and Disabling Input/Output (I/O) Controllers Enabling and Disabling Input/Output (I/O) Controllers DMP allows you to turn off I/O to a host I/O controller so that you can perform administrative operations. This feature can be used for maintenance of controllers attached to the host or of disk arrays supported by VxVM. I/O operations to the host I/O controller can be turned back on after the maintenance task is completed.
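For example, the following commands disable and later re-enable I/O through the controller c2; the controller name is illustrative:

# vxdmpadm disable ctlr=c2
# vxdmpadm enable ctlr=c2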
Displaying Multipaths to a VM Disk Displaying Multipaths to a VM Disk The vxdisk command is used to display the multipathing information for a particular metadevice. The metadevice is a device representation of a particular physical disk having multiple physical paths from the I/O controller of the system. In VxVM, all the physical disks in the system are represented as metadevices with one or more physical paths.
Administering DMP Using vxdmpadm update: time=962923719 seqno=0.
Administering DMP Using vxdmpadm ◆ Control the operation of the DMP restore daemon. The following sections cover these tasks in detail along with sample output. For more information, see the vxdmpadm(1M) manual page.
Administering DMP Using vxdmpadm Displaying All Paths Controlled by a DMP Node The following command displays the paths controlled by the specified DMP node: # vxdmpadm getsubpaths dmpnodename=c2t1d0 NAME STATE[-] PATH-TYPE[-] CTLR-NAME ENCLR-TYPE ENCLR-NAME ATTRS ================================================================== c2t1d0 ENABLED PRIMARY c2 ACME enc0 c3t2d0 ENABLED SECONDARY c3 ACME enc0 - The specified DMP node must be a valid node in the /dev/vx/rdmp directory.
Administering DMP Using vxdmpadm Listing Information About Host I/O Controllers The following command lists attributes of all host I/O controllers on the system: # vxdmpadm listctlr all CTLR-NAME ENCLR-TYPE STATE ENCLR-NAME ===================================================== c1 OTHER ENABLED other0 c2 X1 ENABLED jbod0 c3 ACME ENABLED enc0 c4 ACME ENABLED enc0 This form of the command lists controllers belonging to a specified enclosure and enclosure type: # vxdmpadm listctlr enclosure=enc0 type=ACME CTL
Administering DMP Using vxdmpadm Displaying Information About TPD-Controlled Devices The third-party driver (TPD) coexistence feature allows I/O that is controlled by third-party multipathing drivers to bypass DMP while retaining the monitoring capabilities of DMP.
Administering DMP Using vxdmpadm Gathering and Displaying I/O Statistics You can use the vxdmpadm iostat command to gather and display I/O statistics for a specified DMP node, enclosure or path. To enable the gathering of statistics, enter this command: # vxdmpadm iostat start [memory=size] To reset the I/O counters to zero, use this command: # vxdmpadm iostat reset The memory attribute can be used to limit the maximum amount of memory that is used to record I/O statistics for each CPU.
The next command displays the current statistics including the accumulated total numbers of read and write operations and kilobytes read and written, on all paths:

# vxdmpadm iostat show all
cpu usage = 7952us    per cpu memory = 8192b
                    OPERATIONS            KBYTES              AVG TIME(ms)
PATHNAME        READS     WRITES     READS      WRITES     READS       WRITES
c0t0d0          1088      0          557056     0          0.009542    0.000000
c2t118d0        87        0          44544      0          0.001194    0.000000
c3t118d0        0         0          0          0          0.000000    0.000000
c2t122d0        87        0          44544      0          0.007265    0.
# vxdmpadm iostat show dmpnodename=c0t0d0
cpu usage = 8501us    per cpu memory = 4096b
                    OPERATIONS            BYTES               AVG TIME(ms)
PATHNAME        READS     WRITES     READS      WRITES     READS       WRITES
c0t0d0          1088      0          557056     0          0.009542    0.000000

# vxdmpadm iostat show enclosure=Disk
cpu usage = 8626us    per cpu memory = 4096b
                    OPERATIONS            BYTES               AVG TIME(ms)
PATHNAME        READS     WRITES     READS      WRITES     READS       WRITES
c0t0d0          1088      0          557056     0          0.009542    0.
Administering DMP Using vxdmpadm ◆ nopreferred Restores the normal priority of a path. The following example restores the default priority to a path: # vxdmpadm setattr path c1t20d0 pathtype=nopreferred ◆ preferred [priority=N] Specifies a path as preferred, and optionally assigns a priority number to it. If specified, the priority number must be an integer that is greater than or equal to one. Higher priority numbers indicate that a path is able to carry a greater I/O load.
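For example, a path could be assigned the preferred path type with a priority of 2 using a command of this form (the path name and priority value are illustrative):

# vxdmpadm setattr path c1t20d0 pathtype=preferred priority=2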
Administering DMP Using vxdmpadm Displaying the I/O Policy To display the current and default settings of the I/O policy for an enclosure, array or array type, use the vxdmpadm getattr command.
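For example, the current and default I/O policy settings for an enclosure could be displayed as follows (the enclosure name enc0 is illustrative):

# vxdmpadm getattr enclosure enc0 iopolicy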
Administering DMP Using vxdmpadm In this example, the adaptive I/O policy is set for the enclosure enc1: # vxdmpadm setattr enclosure enc1 iopolicy=adaptive ◆ balanced [partitionsize=size] This policy is designed to optimize the use of caching in disk drives and RAID controllers, and is the default policy for A/A arrays. The size of the cache typically ranges from 120KB to 500KB or more, depending on the characteristics of the particular hardware.
Administering DMP Using vxdmpadm Note The benefit of this policy is lost if the value is set larger than the cache size. The default value can be changed by adjusting the value of a tunable parameter (see “dmp_pathswitch_blks_shift” on page 411) and rebooting the system.
Administering DMP Using vxdmpadm ◆ singleactive This policy routes I/O down one single active path. This is the default policy for A/P arrays with one active path per controller, where the other paths are used in case of failover. If configured for A/A arrays, there is no load balancing across the paths, and the alternate paths are only used to provide high availability (HA). If the currently active path fails, I/O is switched to an alternate active path.
By running the vxdmpadm iostat command to display the DMP statistics for the device, it can be seen that all I/O is being directed to one path, c5t4d15:

# vxdmpadm iostat show dmpnodename=c3t2d15 interval=5 count=2
...
cpu usage = 11294us    per cpu memory = 32768b
                    OPERATIONS            KBYTES              AVG TIME(ms)
PATHNAME        READS     WRITES     READS      WRITES     READS       WRITES
c2t0d15         0         0          0          0          0.000000    0.000000
c2t1d15         0         0          0          0          0.000000    0.000000
c3t1d15         0         0          0          0          0.000000    0.000000
c3t2d15         0         0          0          0          0.000000    0.
With the workload still running, the effect of changing the I/O policy to balance the load across the primary paths can now be seen.

# vxdmpadm iostat show dmpnodename=c3t2d15 interval=5 count=2
...
cpu usage = 14403us    per cpu memory = 32768b
                    OPERATIONS            KBYTES              AVG TIME(ms)
PATHNAME        READS     WRITES     READS      WRITES     READS       WRITES
c2t0d15         1021      0          1021       0          0.396670    0.000000
c2t1d15         947       0          947        0          0.391763    0.000000
c3t1d15         1004      0          1004       0          0.393426    0.000000
c3t2d15         1027      0          1027       0          0.402142    0.
Administering DMP Using vxdmpadm Enabling a Controller Note This operation is not supported for controllers that are used to access disk arrays on which cluster-shareable disk groups are configured. Enabling a controller allows a previously disabled host disk controller to accept I/O. This operation succeeds only if the controller is accessible to the host and I/O can be performed on it.
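For example, a controller that was previously disabled for maintenance could be re-enabled with a command of this form (the controller name is illustrative):

# vxdmpadm enable ctlr=c2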
Administering DMP Using vxdmpadm Starting the DMP Restore Daemon The DMP restore daemon re-examines the condition of paths at a specified interval. The type of analysis it performs on the paths depends on the specified checking policy. Note The DMP restore daemon does not change the disabled state of the path through a controller that you have disabled using vxdmpadm disable.
Administering DMP Using vxdmpadm The interval attribute specifies how often the restore daemon examines the paths. For example, after stopping the restore daemon, the polling interval can be set to 400 seconds using the following command: # vxdmpadm start restore interval=400 Note The default interval is 300 seconds. Decreasing this interval can adversely affect system performance. To change the interval or policy, you must first stop the restore daemon, and then restart it with new attributes.
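For example, assuming the daemon is already running, it could be stopped and then restarted with a different checking policy and polling interval as follows (the policy and interval values are illustrative):

# vxdmpadm stop restore
# vxdmpadm start restore policy=check_disabled interval=400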
Administering DMP Using vxdmpadm Configuring Array Policy Modules An array policy module (APM) is a dynamically loadable kernel module that may be provided by some vendors for use in conjunction with an array. An APM defines procedures to: ◆ Select an I/O path when multiple paths to a disk within the array are available. ◆ Select the path failover mechanism. ◆ Select the alternate path in the case of a path failure. ◆ Put a path change into effect.
4 Creating and Administering Disk Groups This chapter describes how to create and manage disk groups. Disk groups are named collections of disks that share a common configuration. Volumes are created within a disk group and are restricted to using disks within that disk group. Note In releases of VERITAS Volume Manager (VxVM) prior to 4.0, a system installed with VxVM was configured with a default disk group, rootdg, that had to contain at least one disk.
Specifying a Disk Group to Commands target address or to a different controller, the name mydg02 continues to refer to it. Disks can be replaced by first associating a different physical disk with the name of the disk to be replaced and then recovering any volume data that was stored on the original disk (from mirrors or backup copies). Having disk groups that contain many disks and VxVM objects causes the private region to fill.
Specifying a Disk Group to Commands System-Wide Reserved Disk Groups The following disk group names are reserved, and cannot be used to name any disk groups that you create: bootdg Specifies the boot disk group. This is an alias for the disk group that contains the volumes that are used to boot the system. VxVM sets bootdg to the appropriate disk group if it takes control of the root disk. Otherwise, bootdg is set to nodg (no disk group; see below).
Specifying a Disk Group to Commands If none of these rules succeeds, the requested operation fails. Caution In releases of VxVM prior to 4.0, a subset of commands attempted to deduce the disk group by searching for the object name that was being operated upon by a command. This functionality is no longer supported. Scripts that rely on deducing the disk group from an object name may fail.
Displaying Disk Group Information

To display information on existing disk groups, enter the following command:

# vxdg list
NAME         STATE      ID
rootdg       enabled    730344554.1025.tweety
newdg        enabled    731118794.1213.tweety

To display more detailed information on a specific disk group, use the following command:

# vxdg list diskgroup

The output from this command is similar to the following:

Group:     mydg
dgid:      962910960.1025.bass
import-id: 0.
This command provides output that includes the following information for the specified disk. For example, the output for disk c0t12d0 is as follows:

Disk:    c0t12d0
type:    simple
flags:   online ready private autoconfig autoimport imported
diskid:  963504891.1070.bass
dgname:  newdg
dgid:    963504895.1075.
Creating a Disk Group Creating a Disk Group Data related to a particular set of applications or a particular group of users may need to be made accessible on another system. Examples of this are: ◆ A system has failed and its data needs to be moved to other systems. ◆ The work load must be balanced across a number of systems. Disks must be placed in one or more disk groups before VxVM can use the disks for volumes.
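For example, a new disk group named mktdg containing a single initialized disk might be created with a command of the following form (the disk group and disk names are illustrative):

# vxdg init mktdg mktdg01=c1t0d0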
Adding a Disk to a Disk Group Adding a Disk to a Disk Group To add a disk to an existing disk group, use menu item 1 (Add or initialize one or more disks) of the vxdiskadm command. For details of this procedure, see “Adding a Disk to VxVM” on page 77. You can also use the vxdiskadd command to add a disk to a disk group, for example: # vxdiskadd c1t1d0 where c1t1d0 is the device name of a disk that is not currently assigned to a disk group.
Deporting a Disk Group You can remove a disk on which some subdisks are defined. For example, you can consolidate all the volumes onto one disk. If you use vxdiskadm to remove a disk, you can choose to move volumes off that disk. To do this, run vxdiskadm and select item 2 (Remove a disk) from the main menu.
Deporting a Disk Group 3. Select menu item 8 (Remove access to (deport) a disk group) from the vxdiskadm main menu. 4. At the following prompt, enter the name of the disk group to be deported (in this example, newdg): Remove access to (deport) a disk group Menu: VolumeManager/Disk/DeportDiskGroup Use this menu operation to remove access to a disk group that is currently enabled (imported) by this system. Deport a disk group if you intend to move the disks in a disk group to another system.
Importing a Disk Group 7. At the following prompt, indicate whether you want to disable another disk group (y) or return to the vxdiskadm main menu (n): Disable another disk group? [y,n,q,?] (default: n) Alternatively, you can use the vxdg command to deport a disk group: # vxdg deport diskgroup Importing a Disk Group Importing a disk group enables access by the system to a disk group.
Renaming a Disk Group Once the import is complete, the vxdiskadm utility displays the following success message: VxVM INFO V-5-2-374 The import of newdg was successful. 4.
Renaming a Disk Group Note You cannot use this method to rename the active boot disk group because it contains volumes that are in use by mounted file systems (such as /). To rename the boot disk group, boot the system from an LVM root disk instead of from the VxVM root disk. You can then use the above methods to rename the boot disk group. See the sections under “Rootability” on page 82 for more information.
Moving Disks Between Disk Groups Here hostname is the name of the system whose rootdg is being returned (the system name can be confirmed with the command uname -n). This command removes the imported disk group from the importing host and returns locks to its original host. The original host can then automatically import its boot disk group at the next reboot. Moving Disks Between Disk Groups To move a disk between disk groups, remove the disk from one disk group and add it to the other.
Moving Disk Groups Between Systems 3. Import (enable local access to) the disk group on the second system with this command: # vxdg import diskgroup Caution All disks in the disk group must be moved to the other system. If they are not moved, the import fails. 4. After the disk group is imported, start all volumes in the disk group with this command: # vxrecover -g diskgroup -sb You can also move disks from a system that has crashed. In this case, you cannot deport the disk group from the first system.
Moving Disk Groups Between Systems Caution Be careful when using the vxdisk clearimport or vxdg -C import command on systems that have dual-ported disks. Clearing the locks allows those disks to be accessed at the same time from multiple hosts and can result in corrupted data. You may want to import a disk group when some disks are not available. The import operation fails if some disks for the disk group cannot be found among the disk drives attached to the system.
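For example, subject to the caution above about clearing locks, the locks on a disk group could be cleared during an import, or the import forced when some of the disk group's disks are unavailable, with commands along these lines (the disk group name is illustrative):

# vxdg -C import mydg
# vxdg -f import mydg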
Moving Disk Groups Between Systems Reserving Minor Numbers for Disk Groups A device minor number uniquely identifies some characteristic of a device to the device driver that controls that device. It is often used to identify some characteristic mode of an individual device, or to identify separate devices that are all under the control of a single controller. VxVM assigns unique device minor numbers to each object (volume, plex, subdisk, disk, or disk group) that it controls.
For example, the following command creates the disk group, newdg, that includes the specified disks, and has a base minor number of 30000:

# vxdg init newdg minor=30000 c1t0d0 c1t1d0

If a disk group already exists, you can use the vxdg reminor command to change its base minor number:

# vxdg -g diskgroup reminor new_base_minor

For example, the following command changes the base minor number to 30000 for the disk group, mydg:

# vxdg -g mydg reminor 30000

If a volume i
Moving Disk Groups Between Systems limits on the number of DMP and volume devices that can be configured on such a system are 65,536 and 1,048,576, respectively. However, in practice, the number of VxVM devices that can be configured in a single disk group is limited by the size of the private region. When a CDS-compatible disk group is imported on a Linux system with a pre-2.6 kernel, VxVM attempts to reassign the minor numbers of the volumes, and fails if this is not possible.
Handling Conflicting Configuration Copies in a Disk Group Handling Conflicting Configuration Copies in a Disk Group If an incomplete disk group is imported on several different systems, this can create inconsistencies in the disk group configuration copies that you may need to resolve manually. This section and following sections describe how such a condition can occur, and how to correct it.
Typical Arrangement of a 2-node Campus Cluster

(Figure: Node 0 and Node 1 are connected by a redundant private network, and attach through Fibre Channel switches to disk enclosures enc0 and enc1 located in Building A and Building B.)

A serial split brain condition typically arises in a cluster when a private (non-shared) disk group is imported on Node 0 with Node 1 configured as the failover node. If the network connections between the nodes are severed, both nodes think that the other node has died.
Handling Conflicting Configuration Copies in a Disk Group separately on that host. When the disks are subsequently re-imported into the original shared disk group, the actual serial IDs on the disks do not agree with the expected values from the configuration copies on other disks in the disk group.
Handling Conflicting Configuration Copies in a Disk Group ◆ If the other disks were also imported on another host, no disk can be considered to have a definitive copy of the configuration database. The figure below illustrates how this condition can arise for two disks.
Handling Conflicting Configuration Copies in a Disk Group Correcting Conflicting Configuration Information Note This procedure requires that the disk group has a version number of at least 110. See “Upgrading a Disk Group” on page 169 for more information about disk group version numbers. To resolve conflicting configuration information, you must decide which disk contains the correct version of the disk group configuration database.
You can specify the -c option to vxsplitlines to print detailed information about each of the disk IDs from the configuration copy on a disk specified by its disk access name:

# vxsplitlines -g newdg -c c2t6d0
DANAME(DMNAME)     || Actual SSB || Expected SSB
c2t5d0( c2t5d0 )   || 0.1        || 0.0   ssb ids don’t match
c2t6d0( c2t6d0 )   || 0.1        || 0.1   ssb ids match
c2t7d0( c2t7d0 )   || 0.1        || 0.1   ssb ids match
c2t8d0( c2t8d0 )   || 0.1        || 0.
Reorganizing the Contents of Disk Groups ◆ To isolate volumes or disks from a disk group, and process them independently on the same host or on a different host. This allows you to implement off-host processing solutions for the purposes of backup or decision support. This is discussed further in “Configuring Off-Host Processing” on page 315. You can use either the VERITAS Enterprise Administrator (VEA) or the vxdg command to reorganize your disk groups.
Reorganizing the Contents of Disk Groups ◆ split—removes a self-contained set of VxVM objects from an imported disk group, and moves them to a newly created target disk group. This operation fails if it would remove all the disks from the source disk group, or if an imported disk group exists with the same name as the target disk group. An existing deported disk group is destroyed if it has the same name as the target disk group (as is the case for the vxdg init command).
Reorganizing the Contents of Disk Groups ◆ join—removes all VxVM objects from an imported disk group and moves them to an imported target disk group. The source disk group is removed when the join is complete. The join operation is illustrated in “Disk Group Join Operation” on page 158.
Reorganizing the Contents of Disk Groups If the system crashes or a hardware subsystem fails, VxVM attempts to complete or reverse an incomplete disk group reconfiguration when the system is restarted or the hardware subsystem is repaired, depending on how far the reconfiguration had progressed.
Reorganizing the Contents of Disk Groups ◆ When used with objects that have been created using the VERITAS Intelligent Storage Provisioning (ISP) feature, only complete storage pools may be split or moved from a disk group. Individual objects such as application volumes within storage pools may not be split or moved. See the VERITAS Storage Foundation Intelligent Storage Provisioning Administrator’s Guide for a description of ISP and storage pools.
Reorganizing the Contents of Disk Groups Moving DCO Volumes Between Disk Groups When you move the parent volume (such as a snapshot volume) to a different disk group, its DCO volume must accompany it. If you use the vxassist addlog, vxmake or vxdco commands to set up a DCO for a volume, you must ensure that the disks that contain the plexes of the DCO volume accompany their parent volume during the move. You can use the vxprint command on a volume to examine the configuration of its associated DCO volume.
Examples of Disk Groups That Can and Cannot be Split

(Figure: two example configurations showing volume data plexes, a snapshot plex, volume DCO plexes and a snapshot DCO plex.)

In the first example, the disk group can be split as the DCO plexes are on dedicated disks, and can therefore accompany the disks that contain the volume data.

In the second example, the disk group cannot be split as the DCO plexes cannot accompany their volumes. One solution is to relocate the DCO plexes.
Reorganizing the Contents of Disk Groups Moving Objects Between Disk Groups To move a self-contained set of VxVM objects from an imported source disk group to an imported target disk group, use the following command: # vxdg [-o expand] [-o override|verify] move sourcedg targetdg \ object ... The -o expand option ensures that the objects that are actually moved include all other disks containing subdisks that are associated with the specified objects or with objects that they contain.
Reorganizing the Contents of Disk Groups sd mydg01-01 vol1-01 pl vol1-02 vol1 sd mydg05-01 vol1-02 ENABLED 3591 ENABLED 3591 ENABLED 3591 0 0 ACTIVE - - The following command moves the self-contained set of objects implied by specifying disk mydg01 from disk group mydg to rootdg: # vxdg -o expand move mydg rootdg mydg01 The moved volumes are initially disabled following the move.
Reorganizing the Contents of Disk Groups Splitting Disk Groups To remove a self-contained set of VxVM objects from an imported source disk group to a new target disk group, use the following command: # vxdg [-o expand] [-o override|verify] split sourcedg targetdg \ object ... For a description of the -o expand, -o override, and -o verify options, see “Moving Objects Between Disk Groups” on page 163. See “Splitting Disk Groups” on page 376 for more information on splitting shared disk groups in clusters.
The output from vxprint after the split shows the new disk group, mydg:

# vxprint
Disk group: rootdg

TY NAME          ASSOC      KSTATE    LENGTH     PLOFFS   STATE    TUTIL0  PUTIL0
dg rootdg        rootdg     -         -          -        -        -       -
dm rootdg01      c0t1d0     -         17678493   -        -        -       -
dm rootdg02      c1t97d0    -         17678493   -        -        -       -
dm rootdg03      c1t112d0   -         17678493   -        -        -       -
dm rootdg04      c1t114d0   -         17678493   -        -        -       -
dm rootdg05      c1t96d0    -         17678493   -        -        -       -
dm rootdg06      c1t98d0    -         17678493   -        -        -       -
v  vol1          fsgen      ENABLED   2048       -        ACTIVE   -       -
pl vol1-01       vol1       ENABLED   3591       -        ACTIVE   -       -
sd rootdg01-01   vol1-01    ENABLED   3591       0        -        -       -
pl vol1-02       vol1       ENABLED   3591       -        ACTIVE   -       -
sd rootdg05-01   vol1-02    ENABLED   3591       0        -        -       -
Reorganizing the Contents of Disk Groups dm dm dm dm rootdg03 rootdg04 rootdg07 rootdg08 c1t112d0 c1t114d0 c1t99d0 c1t100d0 Disk group: mydg TY NAME ASSOC dg mydg mydg dm mydg05 c1t96d0 dm mydg06 c1t98d0 v vol1 fsgen pl vol1-01 vol1 sd mydg01-01 vol1-01 pl vol1-02 vol1 sd mydg05-01 vol1-02 - 17678493 17678493 17678493 17678493 - - - - KSTATE ENABLED ENABLED ENABLED ENABLED ENABLED LENGTH 17678493 17678493 2048 3591 3591 3591 3591 PLOFFS 0 0 STATE TUTIL0 ACTIVE ACTIVE ACTIVE - PUTIL0 - The fo
Disabling a Disk Group Disabling a Disk Group To disable a disk group, unmount and stop any volumes in the disk group, and then use the following command to deport it: # vxdg deport diskgroup Deporting a disk group does not actually remove the disk group. It disables use of the disk group by the system. Disks in a deported disk group can be reused, reinitialized, added to other disk groups, or imported for use on other systems. Use the vxdg import command to re-enable access to the disk group.
Upgrading a Disk Group Upgrading a Disk Group Note On some platforms, the first release of VERITAS Volume Manager was 3.0 or 3.2. Prior to the release of VERITAS Volume Manager 3.0, the disk group version was automatically upgraded (if needed) when the disk group was imported. From release 3.0 of VERITAS Volume Manager, the two operations of importing a disk group and upgrading its version are separate. You can import a disk group from a previous version and use it without upgrading it.
The table, "Disk Group Version Assignments," summarizes the VERITAS Volume Manager releases that introduce and support specific disk group versions:

Disk Group Version Assignments

VxVM Release    Introduces Disk Group Version    Supports Disk Group Versions
1.2             10                               10
1.3             15                               15
2.0             20                               20
2.2             30                               30
2.3             40                               40
2.5             50                               50
3.0             60                               20-40, 60
3.1             70                               20-70
3.1.1           80                               20-80
3.2, 3.5        90                               20-90
4.0             110                              20-110
4.1             120                              20-120
Features Supported by Disk Group Versions

Disk Group Version    New Features Supported
120                   ◆ Automatic Cluster-wide Failback for A/P arrays
                      ◆ DMP Co-existence with Third-Party Drivers
                      ◆ Migration of Volumes to ISP
                      ◆ Persistent DMP Policies
                      ◆ Shared Disk Group Failure Policy
                      ◆ Cross-platform Data Sharing (CDS)
                      ◆ Device Discovery Layer (DDL) 2.
Features Supported by Disk Group Versions

Disk Group Version    New Features Supported                     Previous Version Features Supported
20                    ◆ Dirty Region Logging (DRL)
                      ◆ Disk Group Configuration Copy Limiting
                      ◆ Mirrored Volumes Logging
                      ◆ New-Style Stripes
                      ◆ RAID-5 Volumes
                      ◆ Recovery Checkpointing

To list the version of a disk group, use this command:

# vxdg list dgname

You can also determine the disk group version by using the vxprint command with the -l format option.
Managing the Configuration Daemon in VxVM Managing the Configuration Daemon in VxVM The VxVM configuration daemon (vxconfigd) provides the interface between VxVM commands and the kernel device drivers. vxconfigd handles configuration change requests from VxVM utilities, communicates the change requests to the VxVM kernel, and modifies configuration information stored on disk. vxconfigd also initializes VxVM when the system is booted. The vxdctl command is the command-line interface to the vxconfigd daemon.
Backing Up and Restoring Disk Group Configuration Data Backing Up and Restoring Disk Group Configuration Data The disk group configuration backup and restoration feature allows you to back up and restore all configuration data for disk groups, and for VxVM objects such as volumes that are configured within the disk groups. The vxconfigbackupd daemon monitors changes to the VxVM configuration and automatically records any configuration changes that occur.
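For example, the configuration of a disk group might be backed up, and later staged and committed for restoration, with commands along these lines (the disk group name is illustrative; see the vxconfigbackup(1M) and vxconfigrestore(1M) manual pages for the supported options):

# vxconfigbackup mydg
# vxconfigrestore -p mydg
# vxconfigrestore -c mydg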
5 Creating and Administering Subdisks This chapter describes how to create and maintain subdisks. Subdisks are the low-level building blocks in a VERITAS Volume Manager (VxVM) configuration that are required to create plexes and volumes. Note Most VxVM commands require superuser or equivalent privileges. Creating Subdisks Note Subdisks are created automatically if you use the vxassist command or the VERITAS Enterprise Administrator (VEA) to create volumes.
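For example, a subdisk named mydg02-01 that starts at offset 0 on disk mydg02 and has a length of 8000 could be created with a command of the following form (the names and length are illustrative):

# vxmake -g mydg sd mydg02-01 mydg02,0,8000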
Displaying Subdisk Information Displaying Subdisk Information The vxprint command displays information about VxVM objects. To display general information for all subdisks, use this command: # vxprint -st The -s option specifies information about subdisks. The -t option prints a single-line output record that depends on the type of object being listed.
Splitting Subdisks For the subdisk move to work correctly, the following conditions must be met: ◆ The subdisks involved must be the same size. ◆ The subdisk being moved must be part of an active plex on an active (ENABLED) volume. ◆ The new subdisk must not be associated with any other plex. See “Configuring Hot-Relocation to Use Only Spare Disks” on page 339 for information about manually relocating subdisks after hot-relocation.
Associating Subdisks with Plexes For example, to join the contiguous subdisks mydg03-02, mydg03-03, mydg03-04 and mydg03-05 as subdisk mydg03-02 in the disk group, mydg, use the following command: # vxsd -g mydg join mydg03-02 mydg03-03 mydg03-04 mydg03-05 \ mydg03-02 Associating Subdisks with Plexes Associating a subdisk with a plex places the amount of disk space defined by the subdisk at a specific offset within the plex.
Associating Log Subdisks size that fits the hole in the sparse plex exactly. Then, associate the subdisk with the plex by specifying the offset of the beginning of the hole in the plex, using the following command: # vxsd [-g diskgroup] -l offset assoc sparse_plex exact_size_subdisk Note The subdisk must be exactly the right size. VxVM does not allow the space defined for two subdisks to overlap within a plex.
Dissociating Subdisks from Plexes Note Only one log subdisk can be associated with a plex. Because this log subdisk is frequently written, care should be taken to position it on a disk that is not heavily used. Placing a log subdisk on a heavily-used disk can degrade system performance. To add a log subdisk to an existing plex, use the following command: # vxsd [-g diskgroup] aslog plex subdisk where subdisk is the name to be used for the log subdisk.
Removing Subdisks removing the subdisk makes the volume unusable, because another subdisk in the same stripe is unusable or missing and the volume is not DISABLED and empty, the operation is not allowed.
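For example, a subdisk that is no longer needed could first be dissociated from its plex and then removed, using commands of this form (the subdisk name is illustrative):

# vxsd -g mydg dis mydg02-01
# vxedit -g mydg rm mydg02-01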
Changing Subdisk Attributes Changing Subdisk Attributes Caution Change subdisk attributes with extreme care. The vxedit command changes attributes of subdisks and other VxVM objects. To change subdisk attributes, use the following command: # vxedit [-g diskgroup] set attribute=value ... subdisk ...
6 Creating and Administering Plexes This chapter describes how to create and maintain plexes. Plexes are logical groupings of subdisks that create an area of disk space independent of physical disk size or other restrictions. Replication (mirroring) of disk data is set up by creating multiple data plexes for a single volume. Each data plex in a mirrored volume contains an identical copy of the volume data.
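For example, a concatenated plex built from two existing subdisks might be created with a command along these lines (the plex and subdisk names are illustrative):

# vxmake -g mydg plex vol01-02 sd=mydg02-01,mydg02-02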
Creating a Striped Plex Creating a Striped Plex To create a striped plex, you must specify additional attributes. For example, to create a striped plex named pl-01 in the disk group, mydg, with a stripe width of 32 sectors and 2 columns, use the following command: # vxmake -g mydg plex pl-01 layout=stripe stwidth=32 ncolumn=2 \ sd=mydg01-01,mydg02-01 To use a plex to build a volume, you must associate the plex with the volume.
Displaying Plex Information VxVM utilities use plex states to: ◆ indicate whether volume contents have been initialized to a known state ◆ determine if a plex contains a valid copy (mirror) of the volume contents ◆ track whether a plex was in active use at the time of a system failure ◆ monitor operations on plexes This section explains the individual plex states in detail.
Displaying Plex Information EMPTY Plex State Volume creation sets all plexes associated with the volume to the EMPTY state to indicate that the plex is not yet initialized. IOFAIL Plex State The IOFAIL plex state is associated with persistent state logging. When the vxconfigd daemon detects an uncorrectable I/O failure on an ACTIVE plex, it places the plex in the IOFAIL state to exclude it from the recovery selection process at volume start time.
Displaying Plex Information SNAPDONE Plex State The SNAPDONE plex state indicates that a snapshot plex is ready for a snapshot to be taken using vxassist snapshot. SNAPTMP Plex State The SNAPTMP plex state is used during a vxassist snapstart operation when a snapshot is being prepared on a volume. STALE Plex State If there is a possibility that a plex does not have the complete and current volume contents, that plex is placed in the STALE state.
Displaying Plex Information TEMPRMSD Plex State The TEMPRMSD plex state is used by vxassist when attaching new data plexes to a volume. If the synchronization operation does not complete, the plex and its subdisks are removed. Plex Condition Flags vxprint may also display one of the following condition flags in the STATE field: IOFAIL Plex Condition The plex was detached as a result of an I/O failure detected during normal volume I/O.
Attaching and Associating Plexes Plex Kernel States The plex kernel state indicates the accessibility of the plex to the volume driver which monitors it. Note No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all plexes are enabled. The following plex kernel states are defined: DETACHED Plex Kernel State Maintenance is being performed on the plex. Any write request to the volume is not reflected in the plex.
Taking Plexes Offline For example, to create a mirrored, fsgen-type volume named home, and to associate two existing plexes named home-1 and home-2 with home, use the following command: # vxmake -g mydg -U fsgen vol home plex=home-1,home-2 Note You can also use the command vxassist mirror volume to add a data plex as a mirror to an existing volume. Taking Plexes Offline Once a volume has been created and placed online (ENABLED), VxVM can temporarily disconnect plexes from the volume.
Detaching Plexes Detaching Plexes To temporarily detach one data plex in a mirrored volume, use the following command: # vxplex [-g diskgroup] det plex For example, to temporarily detach a plex named vol01-02 in the disk group, mydg, and place it in maintenance mode, use the following command: # vxplex -g mydg det vol01-02 This command temporarily detaches the plex, but maintains the association between the plex and its volume. However, the plex is not used for I/O.
Moving Plexes If the vxinfo command shows that the volume is unstartable (see “Listing Unstartable Volumes” in the section “Recovery from Hardware Failure” in the VERITAS Volume Manager Troubleshooting Guide), set one of the plexes to CLEAN using the following command: # vxmend [-g diskgroup] fix clean plex Start the volume using the following command: # vxvol [-g diskgroup] start volume Moving Plexes Moving a plex copies the data content from the original plex onto a new plex.
Copying Plexes Copying Plexes This task copies the contents of a volume onto a specified plex. The volume to be copied must not be enabled. The plex cannot be associated with any other volume. To copy a plex, use the following command: # vxplex [-g diskgroup] cp volume new_plex After the copy task is complete, new_plex is not associated with the specified volume volume. The plex contains a complete copy of the volume data. The plex that is being copied should be the same size or larger than the volume.
Changing Plex Attributes Alternatively, you can first dissociate the plex and subdisks, and then remove them with the following commands: # vxplex [-g diskgroup] dis plex # vxedit [-g diskgroup] -r rm plex When used together, these commands produce the same result as the vxplex -o rm dis command. The -r option to vxedit rm recursively removes all objects from the specified object downward. In this way, a plex and its associated subdisks can be removed by a single vxedit command.
7 Creating Volumes This chapter describes how to create volumes in VERITAS Volume Manager (VxVM). Volumes are logical devices that appear as physical disk partition devices to data management systems. Volumes enhance recovery from hardware failure, data availability, performance, and storage configuration. Note You can also use the VERITAS Intelligent Storage Provisioning (ISP) feature to create and administer application volumes.
Types of Volume Layouts VxVM allows you to create volumes with the following layout types: ◆ Concatenated—A volume whose subdisks are arranged both sequentially and contiguously within a plex. Concatenation allows a volume to be created from multiple regions of one or more disks if there is not enough space for an entire volume on a single region of a disk. For more information, see "Concatenation and Spanning" on page 18.
Types of Volume Layouts A striped-mirror volume is created by configuring several mirrored volumes as the columns of a striped volume. This layout offers the same benefits as a non-layered mirrored-stripe volume. In addition it provides faster recovery as the failure of single disk does not force an entire striped plex offline. For more information, see “Mirroring Plus Striping (Striped-Mirror, RAID-1+0 or RAID-10)” on page 26.
Creating a Volume See “Creating a RAID-5 Volume” on page 217 for information on creating a RAID-5 volume together with RAID-5 logs. Creating a Volume You can create volumes using an advanced approach, an assisted approach, or the rule-based storage allocation approach that is provided by the Intelligent Storage Provisioning (ISP) feature. Each method uses different tools. You may switch between the advanced and the assisted approaches at will.
Using vxassist Assisted Approach The assisted approach takes information about what you want to accomplish and then performs the necessary underlying tasks. This approach requires only minimal input from you, but also permits more detailed specifications. Assisted operations are performed primarily through the vxassist command or the VERITAS Enterprise Administrator (VEA). vxassist and the VEA create the required plexes and subdisks using only the basic attributes of the desired volume as input.
Using vxassist The vxassist utility helps you perform the following tasks: ◆ Creating volumes. ◆ Creating mirrors for existing volumes. ◆ Growing or shrinking existing volumes. ◆ Backing up volumes online. ◆ Reconfiguring a volume’s layout online. vxassist obtains most of the information it needs from sources other than your input. vxassist obtains information about the existing objects and their layouts from the objects themselves.
Using vxassist Setting Default Values for vxassist The default values that the vxassist command uses may be specified in the file /etc/default/vxassist. The defaults listed in this file take effect if you do not override them on the command line, or in an alternate defaults file that you specify using the -d option. A default value specified on the command line always takes precedence. vxassist also has a set of built-in defaults that it uses if it cannot find a value defined elsewhere.
# by default, create 1 log copy for both mirroring and RAID-5 volumes
nregionlog=1
nraid5log=1

# by default, limit mirroring log lengths to 32Kbytes
max_regionloglen=32k

# use 64K as the default stripe unit size for regular volumes
stripe_stwid=64k

# use 16K as the default stripe unit size for RAID-5 volumes
raid5_stwid=16k

Discovering the Maximum Size of a Volume

To find out how large a volume you can create within a disk group, use the following form of the vxassist command:
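# vxassist [-g diskgroup] maxsize layout=layout [attributes]

For example, assuming a disk group named mydg, the maximum size of a RAID-5 volume with two logs could be found as follows (the disk group name and attribute values are illustrative):

# vxassist -g mydg maxsize layout=raid5 nlog=2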
Creating a Volume on Any Disk If you specify the attribute dgalign_checking=strict to vxassist, the command fails with an error if you specify a volume length or attribute size value that is not a multiple of the alignment value for the disk group. Creating a Volume on Any Disk By default, the vxassist make command creates a concatenated volume that uses one or more sections of disk space.
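For example, a 10-gigabyte concatenated volume could be created on any available disks with a command of this form (the volume name, size and disk group are illustrative):

# vxassist -b -g mydg make voldefault 10g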
Creating a Volume on Specific Disks The vxassist command allows you to specify storage attributes. These give you control over the devices, including disks, controllers and targets, which vxassist uses to configure a volume.
Creating a Volume on Specific Disks Specifying Ordered Allocation of Storage to Volumes Ordered allocation gives you complete control of space allocation. It requires that the number of disks that you specify to the vxassist command must match the number of disks that are required to create a volume. The order in which you specify the disks to vxassist is also significant.
Creating a Volume on Specific Disks For layered volumes, vxassist applies the same rules to allocate storage as for non-layered volumes.
Example of Using Concatenated Disk Space to Create a Mirrored-Stripe Volume

(Figure: a striped plex whose column 1 concatenates subdisks mydg01-01 and mydg02-01 and whose column 2 concatenates mydg03-01 and mydg04-01, mirrored by a second striped plex whose columns concatenate mydg05-01, mydg06-01 and mydg07-01, mydg08-01, together forming the mirrored-stripe volume.)

Other storage specification classes for controllers, enclosures, targets and trays can be used with ordered allocation.
Example of Storage Allocation Used to Create a Mirrored-Stripe Volume Across Controllers

(Figure: a striped plex whose columns 1, 2 and 3 are allocated on controllers c1, c2 and c3, mirrored by a second striped plex whose columns are allocated on controllers c4, c5 and c6, together forming the mirrored-stripe volume.)

For other ways in which you can control how vxassist lays out mirrored volumes across controllers, see "Mirroring across Targets, Controllers or Enclosures" on page 216.
Creating a Mirrored Volume Creating a Mirrored Volume A mirrored volume provides data redundancy by containing more than one copy of its data. Each copy (or mirror) is stored on different disks from the original copy of the volume and from other mirrors. Mirroring a volume ensures that its data is not lost if a disk in one of its component mirrors fails. Note A mirrored volume requires space to be available on at least as many disks in the disk group as the number of mirrors in the volume.
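For example, a mirrored volume with two plexes could be created with a command along these lines (the volume name, size and disk group are illustrative):

# vxassist -b -g mydg make volmir 5g layout=mirror nmirror=2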
Creating a Volume with a Version 0 DCO Volume Creating a Volume with a Version 0 DCO Volume If a data change object (DCO) and DCO volume are associated with a volume, this allows Persistent FastResync to be used with the volume. (See “How Persistent FastResync Works with Snapshots” on page 53 for details of how Persistent FastResync performs fast resynchronization of snapshot mirrors when they are returned to their original volume.
Creating a Volume with a Version 0 DCO Volume 2. Use the following command to create the volume (you may need to specify additional attributes to create a volume with the desired characteristics): # vxassist [-g diskgroup] make volume length layout=layout \ logtype=dco [ndcomirror=number] [dcolen=size] [fastresync=on] \ [other attributes] For non-layered volumes, the default number of plexes in the mirrored DCO volume is equal to the lesser of the number of plexes in the data volume or 2.
Creating a Volume with a Version 20 DCO Volume Creating a Volume with a Version 20 DCO Volume To create a volume with an attached version 20 DCO object and volume, use the following procedure: 1. Ensure that the disk group has been upgraded to the latest version.
Creating a Volume with Dirty Region Logging Enabled Creating a Volume with Dirty Region Logging Enabled Note The procedure in this section is applicable to volumes that are created in disk groups with a version number of less than 110.
Creating a Striped Volume Creating a Striped Volume Note You need a full license to use this feature. A striped volume contains at least one plex that consists of two or more subdisks located on two or more physical disks. For more information on striping, see “Striping (RAID-0)” on page 21. Note A striped volume requires space to be available on at least as many disks in the disk group as the number of columns in the volume.
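For example, a three-column striped volume with a 64-kilobyte stripe unit size could be created as follows (the volume name, size and attribute values are illustrative):

# vxassist -b -g mydg make stripevol 10g layout=stripe ncol=3 stripeunit=64k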
Creating a Striped Volume Creating a Mirrored-Stripe Volume A mirrored-stripe volume mirrors several striped data plexes. Note A mirrored-stripe volume requires space to be available on at least as many disks in the disk group as the number of mirrors multiplied by the number of columns in the volume.
Mirroring across Targets, Controllers or Enclosures Mirroring across Targets, Controllers or Enclosures To create a volume whose mirrored data plexes lie on different controllers (also known as disk duplexing) or in different enclosures, use the vxassist command as described in this section. In the following command, the attribute mirror=target specifies that volumes should be mirrored between identical target IDs on different controllers.
Creating a RAID-5 Volume Creating a RAID-5 Volume Note VxVM supports this feature for private disk groups, but not for shareable disk groups in a cluster environment. Note You need a full license to use this feature. You can create RAID-5 volumes by using either the vxassist command (recommended) or the vxmake command. Both approaches are described below. Note A RAID-5 volume requires space to be available on at least as many disks in the disk group as the number of columns in the volume.
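For example, a four-column RAID-5 volume with two RAID-5 logs might be created with a command of the following form (the volume name, size and column count are illustrative):

# vxassist -b -g mydg make volraid 10g layout=raid5 ncol=4 nlog=2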
Creating a Volume Using vxmake It is suggested that you configure a minimum of two RAID-5 log plexes for each RAID-5 volume. These log plexes should be located on different disks. Having two RAID-5 log plexes for each RAID-5 volume protects against the loss of logging information due to the failure of a single disk. If you use ordered allocation when creating a RAID-5 volume on specified storage, you must use the logdisk attribute to specify on which disks the RAID-5 log plexes should be created.
Creating a Volume Using vxmake Note that because four subdisks are specified, but the number of columns is not specified, the vxmake command assumes a four-column RAID-5 plex and places one subdisk in each column. Striped plexes are created using the same method except that the layout is specified as stripe.
Creating a Volume Using vxmake After creating a volume using vxmake, you must initialize it before it can be used. The procedure is described in “Initializing and Starting a Volume” on page 221. Creating a Volume Using a vxmake Description File You can use the vxmake command to add a new volume, plex or subdisk to the set of objects managed by VxVM. vxmake adds a record for each new object to the VxVM configuration database.
Initializing and Starting a Volume After creating a volume using vxmake, you must initialize it before it can be used. The procedure is described in “Initializing and Starting a Volume Created Using vxmake” on page 222. Initializing and Starting a Volume If you create a volume using the vxassist command, vxassist initializes and starts the volume automatically unless you specify the attribute init=none.
Accessing a Volume Initializing and Starting a Volume Created Using vxmake A volume may be initialized by running the vxvol command if the volume was created by the vxmake command and has not yet been initialized, or if the volume has been set to an uninitialized state.
8 Administering Volumes This chapter describes how to perform common maintenance tasks on volumes in VERITAS Volume Manager (VxVM). This includes displaying volume information, monitoring tasks, adding and removing logs, resizing volumes, removing mirrors, removing volumes, and changing the layout of volumes without taking them offline. Note You can also use the VERITAS Intelligent Storage Provisioning (ISP) feature to create and administer application volumes.
Displaying Volume Information Displaying Volume Information You can use the vxprint command to display information about how a volume is configured.
Displaying Volume Information This is example output from this command: V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE v voldef - ACTIVE 20480 SELECT - fsgen ENABLED Note If you enable enclosure-based naming, and use the vxprint command to display the structure of a volume, it shows enclosure-based disk device names (disk access names) rather than c#t#d# names.
Displaying Volume Information INVALID Volume State The contents of an instant snapshot volume no longer represent a true point-in-time image of the original volume. NEEDSYNC Volume State The volume requires a resynchronization operation the next time it is started. For a RAID-5 volume, a parity resynchronization operation is required. REPLAY Volume State The volume is in a transient state as part of a log replay. A log replay occurs when it becomes necessary to use logged parity and data.
Monitoring and Controlling Tasks Volume Kernel States The volume kernel state indicates the accessibility of the volume. The volume kernel state allows a volume to have an offline (DISABLED), maintenance (DETACHED), or online (ENABLED) mode of operation. Note No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all volumes are ENABLED.
Monitoring and Controlling Tasks The vxassist, vxevac, vxplex, vxmirror, vxrecover, vxrelayout, vxresize, vxsd, and vxvol utilities allow you to specify a tag using the -t option.
Monitoring and Controlling Tasks monitor Prints information continuously about a task or group of tasks as task information changes. This allows you to track the progression of tasks. Specifying -l causes a long listing to be printed. By default, short one-line listings are printed. In addition to printing task information when a task state changes, output is also generated when the task completes. When this occurs, the state of the task is printed as EXITED.
Stopping a Volume This command causes VxVM to attempt to reverse the progress of the operation so far. For an example of how to use vxtask to monitor and modify the progress of the Online Relayout feature, see “Controlling the Progress of a Relayout” on page 259. Stopping a Volume Stopping a volume renders it unavailable to the user, and changes the volume kernel state from ENABLED or DETACHED to DISABLED. If the volume cannot be disabled, it remains in its current state.
Starting a Volume Running the vxvol start command on the volume then revives the plex as described in the next section. Starting a Volume Starting a volume makes it available for use, and changes the volume state from DISABLED or DETACHED to ENABLED. To start a DISABLED or DETACHED volume, use the following command: # vxvol [-g diskgroup] start volume ... If a volume cannot be enabled, it remains in its current state.
Adding a Mirror to a Volume Mirroring All Volumes To mirror all volumes in a disk group to available disk space, use the following command: # /etc/vx/bin/vxmirror -g diskgroup -a To configure VxVM to create mirrored volumes by default, use the following command: # /etc/vx/bin/vxmirror -d yes If you make this change, you can still make unmirrored volumes by specifying nmirror=1 as an attribute to the vxassist command.
Removing a Mirror You can choose to mirror volumes from disk mydg02 onto any available disk space, or you can choose to mirror onto a specific disk. To mirror to a specific disk, select the name of that disk. To mirror to any available disk space, select "any". Enter destination disk [,list,q,?] (default: any) mydg01 4. At the following prompt, press Return to make the mirror: The requested operation is to mirror all volumes on disk mydg02 in disk group mydg onto available disk space on disk mydg01.
Adding Logs and Maps to Volumes For example, to dissociate and remove a mirror named vol01-02 from the disk group, mydg, use the following command: # vxplex -g mydg -o rm dis vol01-02 This command removes the mirror vol01-02 and all associated subdisks.
Preparing a Volume for DRL and Instant Snapshots See “Adding a RAID-5 Log” on page 243 for information on adding RAID-5 logs to a RAID-5 volume. Preparing a Volume for DRL and Instant Snapshots Note This procedure describes how to add a version 20 data change object (DCO) and DCO volume to a volume that you previously created in a disk group with a version number of 110 or greater.
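For example, a volume could be prepared for DRL and instant snapshot operations with a command of the following form (the volume name, number of DCO plexes and region size are illustrative):

# vxsnap -g mydg prepare vol1 ndcomirs=2 regionsize=128k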
Preparing a Volume for DRL and Instant Snapshots You can also specify vxassist-style storage attributes to define the disks that can and/or cannot be used for the plexes of the DCO volume. See “Specifying Storage for Version 20 DCO Plexes” on page 236 for details. Note The vxsnap prepare command automatically enables Persistent FastResync on the volume. Persistent FastResync is also set automatically on any snapshots that are generated from a volume on which this feature is enabled.
Preparing a Volume for DRL and Instant Snapshots In this output, the DCO object is shown as vol1_dco, and the DCO volume as vol1_dcl with 2 plexes, vol1_dcl-01 and vol1_dcl-02. If required, you can use the vxassist move command to relocate DCO plexes to different disks.
Preparing a Volume for DRL and Instant Snapshots Determining the DCO Version Number The instant snapshot and DRL-enabled DCO features require that a version 20 DCO be associated with a volume, rather than an earlier version 0 DCO. To find out the version number of a DCO that is associated with a volume: 1. Use the vxprint command on the volume to discover the name of its DCO: # DCONAME=‘vxprint [-g diskgroup] -F%dco_name volume‘ 2.
Preparing a Volume for DRL and Instant Snapshots Determining if DRL Logging is Active on a Volume To determine if DRL logging is active on a mirrored volume: 1. Use the following vxprint commands to discover the name of the volume’s DCO volume: # DCONAME=‘vxprint [-g diskgroup] -F%dco_name volume‘ # DCOVOL=‘vxprint [-g diskgroup] -F%parent_vol $DCONAME‘ 2.
Upgrading Existing Volumes to Use Version 20 DCOs Upgrading Existing Volumes to Use Version 20 DCOs The procedure described in this section describes how to upgrade a volume created before VxVM 4.0 so that it can take advantage of new features such as instant snapshots, and DRL logs that are configured within the DCO volume.
Upgrading Existing Volumes to Use Version 20 DCOs 3. Repeat the following steps to upgrade each volume within the disk group as required: a. If the volume to be upgraded has a traditional DRL plex or subdisk (that is, the DRL logs are not held in a version 20 DCO volume), use the following command to remove this: # vxassist [-g diskgroup] remove log volume [nlog=n] Use the optional attribute nlog=n to specify the number, n, of logs to be removed. By default, the vxassist command removes one log. b.
Adding Traditional DRL Logging to a Mirrored Volume You can also specify vxassist-style storage attributes to define the disks that can or cannot be used for the plexes of the DCO volume. Note The vxsnap prepare command automatically enables FastResync on the volume and on any snapshots that are generated from it. If the volume is a RAID-5 volume, it is converted to a layered volume that can be used with snapshots and FastResync.
Adding a RAID-5 Log Once created, the plex containing a log subdisk can be treated as a regular plex. Data subdisks can be added to the log plex. The log plex and log subdisk can be removed using the procedure described in “Removing a Traditional DRL Log” on page 243. Removing a Traditional DRL Log Note The procedure described in this section removes a DRL log that is configured within a dedicated DRL plex. The version 20 DCO volume layout includes space for a DRL log.
Adding a RAID-5 Log Adding a RAID-5 Log using vxplex As an alternative to using vxassist, you can add a RAID-5 log using the vxplex command. For example, to attach a RAID-5 log plex, r5log, to a RAID-5 volume, r5vol, in the disk group, mydg, use the following command: # vxplex -g mydg att r5vol r5log The attach operation can only proceed if the size of the new log is large enough to hold all of the data on the stripe.
Resizing a Volume Note When removing the log leaves the volume with less than two valid logs, a warning is printed and the operation is not allowed to continue. The operation may be forced by additionally specifying the -f option to vxplex or vxassist. Resizing a Volume Resizing a volume changes the volume size. For example, you might need to increase the length of a volume if it is no longer large enough for the amount of data to be stored on it.
Resizing a Volume Resizing Volumes using vxresize Use the vxresize command to resize a volume containing a file system. Although other commands can be used to resize volumes containing file systems, the vxresize command offers the advantage of automatically resizing certain types of file system as well as the volume.
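For example, a volume containing a VxFS file system could be grown to 10 gigabytes as follows (the volume name, disk group and new size are illustrative):

# vxresize -b -F vxfs -g mydg homevol 10g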
Resizing a Volume Resizing Volumes using vxassist The following modifiers are used with the vxassist command to resize a volume: ◆ growto—increase volume to a specified length ◆ growby—increase volume by a specified amount ◆ shrinkto—reduce volume to a specified length ◆ shrinkby—reduce volume by a specified amount Extending to a Given Length To extend a volume to a specific length, use the following command: # vxassist [-b] [-g diskgroup] growto volume length Note If specified, the -b option make
Resizing a Volume Shrinking to a Given Length To shrink a volume to a specific length, use the following command: # vxassist [-g diskgroup] shrinkto volume length For example, to shrink volcat to 1300 sectors, use the following command: # vxassist -g mydg shrinkto volcat 1300 Caution Do not shrink the volume below the current size of the file system or database using the volume. The vxassist shrinkto command can be safely used on empty volumes.
Changing the Read Policy for Mirrored Volumes If a volume is active and its length is being reduced, the operation must be forced using the -o force option to vxvol. This prevents accidental removal of space from applications using the volume. The length of logs can also be changed using the following command: # vxvol [-g diskgroup] set loglen=length log_volume Note Sparse log plexes are not valid. They must map the entire length of the log.
Removing a Volume To set the read policy to prefer, use the following command: # vxvol [-g diskgroup] rdpol prefer volume preferred_plex For example, to set the policy for vol01 to read preferentially from the plex vol01-02, use the following command: # vxvol -g mydg rdpol prefer vol01 vol01-02 To set the read policy to select, use the following command: # vxvol [-g diskgroup] rdpol select volume For more information about how read policies affect performance, see “Volume Read Policies” on page 402.
Moving Volumes from a VM Disk Moving Volumes from a VM Disk Before you disable or remove a disk, you can move the data from that disk to other disks on the system. To do this, ensure that the target disks have sufficient space, and then use the following procedure: 1. Select menu item 6 (Move volumes from a disk) from the vxdiskadm main menu. 2.
Enabling FastResync on a Volume As the volumes are moved from the disk, the vxdiskadm program displays the status of the operation: VxVM vxevac INFO V-5-2-24 Move volume voltest ... When the volumes have all been moved, the vxdiskadm program displays the following success message: VxVM INFO V-5-2-188 Evacuation of disk mydg02 is complete. 3.
Enabling FastResync on a Volume ◆ Non-Persistent FastResync holds the FastResync maps in memory. These do not survive on a system that is rebooted. By default, FastResync is not enabled on newly created volumes. Specify the fastresync=on attribute to the vxassist make command if you want to enable FastResync on a volume that you are creating. Note It is not possible to configure both Persistent and Non-Persistent FastResync on a volume. Persistent FastResync is used if a DCO is associated with the volume.
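For example, FastResync (Persistent or Non-Persistent, depending on whether a DCO is attached) could be enabled on an existing volume with a command of this form (the volume name is illustrative):

# vxvol -g mydg set fastresync=on vol01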
Performing Online Relayout Disabling FastResync Use the vxvol command to turn off Persistent or Non-Persistent FastResync for an existing volume, as shown here: # vxvol [-g diskgroup] set fastresync=off volume Turning FastResync off releases all tracking maps for the specified volume. All subsequent reattaches will not use the FastResync facility, but perform a full resynchronization of the volume. This occurs even if FastResync is later turned on.
Performing Online Relayout Permitted Relayout Transformations The tables below give details of the relayout operations that are possible for each type of source storage layout. Supported Relayout Transformations for Unmirrored Concatenated Volumes Relayout to From concat concat No. concat-mirror No. Add a mirror, and then use vxassist convert instead. mirror-concat No. Add a mirror instead. mirror-stripe No. Use vxassist convert after relayout to striped-mirror volume instead. raid5 Yes.
Performing Online Relayout Supported Relayout Transformations for RAID-5 Volumes Relayout to From raid5 concat Yes. concat-mirror Yes. mirror-concat No. Use vxassist convert after relayout to concatenated-mirror volume instead. mirror-stripe No. Use vxassist convert after relayout to striped-mirror volume instead. raid5 Yes. The stripe width and number of columns may be changed. stripe Yes. The stripe width and number of columns may be changed. stripe-mirror Yes.
Performing Online Relayout Supported Relayout Transformations for Unmirrored Stripe, and Layered Striped-Mirror Volumes Relayout to From stripe, or stripe-mirror concat Yes. concat-mirror Yes. mirror-concat No. Use vxassist convert after relayout to concatenated-mirror volume instead. mirror-stripe No. Use vxassist convert after relayout to striped-mirror volume instead. raid5 Yes. The stripe width and number of columns may be changed. stripe Yes.
Performing Online Relayout Specifying a Plex for Relayout Any layout can be changed to RAID-5 if there are sufficient disks and space in the disk group. If you convert a mirrored volume to RAID-5, you must specify which plex is to be converted. All other plexes are removed when the conversion has finished, releasing their space for other purposes. If you convert a mirrored volume to a layout other than RAID-5, the unconverted plexes are not removed.
Performing Online Relayout Controlling the Progress of a Relayout You can use the vxtask command to stop (pause) the relayout temporarily, or to cancel it altogether (abort). If you specified a task tag to vxassist when you started the relayout, you can use this tag to specify the task to vxtask.
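A sketch of this usage, assuming a task tag myconv and a volume vol04 in the disk group mydg (all names are illustrative): # vxassist -g mydg -t myconv relayout vol04 layout=raid5 # vxtask pause myconv # vxtask resume myconv # vxtask abort myconv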
Converting Between Layered and Non-Layered Volumes Converting Between Layered and Non-Layered Volumes The vxassist convert command transforms volume layouts between layered and non-layered forms: # vxassist [-b] [-g diskgroup] convert volume [layout=layout] \ [convert_options] Note If specified, the -b option makes conversion of the volume a background task.
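For example, to convert a layered striped-mirror volume to a non-layered mirror-stripe layout, a command of the following form might be used (the disk group and volume names are illustrative): # vxassist -g mydg convert vol01 layout=mirror-stripe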
9 Administering Volume Snapshots Note The FastResync feature is not supported by VxVM 4.1 on the HP-UX 11i v3 platform. VERITAS Volume Manager (VxVM) provides the capability for taking an image of a volume at a given point in time. Such an image is referred to as a volume snapshot. You can also take a snapshot of a volume set as described in “Creating Instant Snapshots of Volume Sets” on page 284. Volume snapshots allow you to make backup copies of your volumes online with minimal interruption to users.
For databases, a suitable mechanism must additionally be used to ensure the integrity of tablespace data when the volume snapshot is taken. The facility to temporarily suspend file system I/O is provided by most modern database software. For ordinary files in a file system, which may be open to a wide variety of different applications, there may be no way to ensure the complete integrity of the file data other than by shutting down the applications and temporarily unmounting the file system.
Traditional Third-Mirror Break-Off Snapshots Traditional Third-Mirror Break-Off Snapshots The traditional third-mirror break-off volume snapshot model that is supported by the vxassist command is shown in “Third-Mirror Snapshot Creation and Usage.” This figure also shows the transitions that are supported by the snapback and snapclear commands to vxassist.
Traditional Third-Mirror Break-Off Snapshots The command, vxassist snapback, can be used to return snapshot plexes to the original volume from which they were snapped, and to resynchronize the data in the snapshot mirrors from the data in the original volume. This enables you to refresh the data in a snapshot after each time that you use it to make a backup.
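A sketch of the snapback operation, assuming a snapshot volume named SNAP-vol01 in the disk group mydg: # vxassist -g mydg snapback SNAP-vol01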
Full-Sized Instant Snapshots Full-Sized Instant Snapshots Full-sized instant snapshots are a variation on the third-mirror volume snapshot model that make a snapshot volume available for access as soon as the snapshot plexes have been created. The full-sized instant volume snapshot model is illustrated in “Full-Sized Instant Snapshot Creation and Usage in a Backup Cycle.
Full-Sized Instant Snapshots volume is preserved on the snapshot volume before the write proceeds. As time goes by, and the contents of the volume are updated, its original contents are gradually relocated to the snapshot volume. If desired, you can additionally select to perform either a background (non-blocking) or foreground (blocking) synchronization of the snapshot volume.
Space-Optimized Instant Snapshots Space-Optimized Instant Snapshots Volume snapshots, such as those described in “Traditional Third-Mirror Break-Off Snapshots” on page 263 and “Full-Sized Instant Snapshots” on page 265, require the creation of a complete copy of the original volume, and use as much storage space as the original volume. Instead of requiring a complete copy of the original volume’s storage space, space-optimized instant snapshots use a storage cache.
Emulation of Third-Mirror Break-Off Snapshots As for instant snapshots, space-optimized snapshots use a copy-on-write mechanism to make them immediately available for use when they are first created, or when their data is refreshed. Unlike instant snapshots, however, you cannot enable synchronization on space-optimized snapshots, reattach them to their original volume, or turn them into independent volumes.
Cascaded Snapshots See “Creating Instant Snapshots” on page 275 for details of the procedures for creating and using this type of snapshot. For information about how to add snapshot mirrors to a volume, see “Adding Snapshot Mirrors to a Volume” on page 285. Cascaded Snapshots A snapshot hierarchy known as a snapshot cascade can improve write performance for some applications.
Cascaded Snapshots ◆ The reliability of a snapshot in the cascade depends on all the newer snapshots in the chain. Thus the oldest snapshot in the cascade is the most vulnerable. ◆ Reading from a snapshot in the cascade may require data to be fetched from one or more other snapshots in the cascade. For these reasons, it is recommended that you do not attempt to use a snapshot cascade with applications that need to remove or split snapshots from the cascade.
Cascaded Snapshots Using a Snapshot of a Snapshot to Restore a Database
1. Create instant snapshot S1 of volume V (vxsnap make source=V). The configuration now comprises the original volume V and the snapshot S1 of V.
2. Create instant snapshot S2 of snapshot S1 (vxsnap make source=S1). The configuration now comprises V, the snapshot S1 of V, and the snapshot S2 of S1.
3. After the contents of volume V have gone bad, apply the database redo logs to snapshot S2.
4.
Cascaded Snapshots If you have configured snapshots in this way, you may wish to make one or more of the snapshots into independent volumes. There are two vxsnap commands that you can use to do this: ◆ vxsnap dis dissociates a snapshot volume and turns it into an independent volume. The volume to be dissociated must have been fully synchronized from its parent. If a snapshot volume has a child snapshot volume, the child must also have been fully synchronized.
Creating Multiple Snapshots ◆ vxsnap split dissociates a snapshot and its dependent snapshots from its parent volume. The snapshot volume that is to be split must have been fully synchronized from its parent volume. This operation is illustrated in “Splitting Snapshots.
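Sketches of the two operations, assuming a snapshot volume named snapvol in the disk group mydg: # vxsnap -g mydg dis snapvol # vxsnap -g mydg split snapvol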
Restoring the Original Volume from a Snapshot Restoring the Original Volume from a Snapshot For traditional snapshots, the snapshot plex is resynchronized from the data in the original volume during a vxassist snapback operation. Alternatively, you can choose the snapshot plex as the preferred copy of the data when performing a snapback as illustrated in “Resynchronizing an Original Volume from a Snapshot.
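Choosing the snapshot plex as the preferred copy is done with the -o resyncfromreplica option; a sketch, assuming a snapshot volume named SNAP-vol01 in the disk group mydg: # vxassist -g mydg -o resyncfromreplica snapback SNAP-vol01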
Creating Instant Snapshots Note You need a full license and a VERITAS FlashSnapTM license to use this feature. VxVM allows you to make instant snapshots of volumes by using the vxsnap command. Note The information in this section also applies to RAID-5 volumes that have been converted to a special layered volume layout by the addition of a DCO and DCO volume. See “Using a DCO and DCO Volume with a RAID-5 Volume” on page 237 for details.
Creating Instant Snapshots To back up a volume with the vxsnap command, use the following procedure: 1. If you intend to take a space-optimized instant snapshot of the volume, you may wish to consider first setting up a shared cache object in the same disk group as the volume. This cache object can be maintained by using the vxcache command. See “Creating a Shared Cache Object” on page 294 for details of how to set up a shared cache object.
Creating Instant Snapshots power of 2, and be greater than or equal to 16KB. A smaller value requires more disk space for the change maps, but the finer granularity provides faster resynchronization. For space-optimized instant snapshots that share a cache object, the specified region size must be greater than or equal to the region size specified for the cache object. See “Creating a Shared Cache Object” on page 294 for details.
Creating Instant Snapshots # vxsnap [-b] [-g diskgroup] addmir volume [nmirror=N] \ [alloc=storage_attributes] By default, the vxsnap addmir command adds one snapshot mirror to a volume unless you use the nmirror attribute to specify a different number of mirrors. The mirrors remain in the SNAPATT state until they are fully synchronized. The -b option can be used to perform the synchronization in the background. Once synchronized, the mirrors are placed in the SNAPDONE state.
Creating Instant Snapshots The first form of the command specifies an existing volume, snapvol, that is to be used as the snapshot volume. See “Creating a Volume for Use as a Full-Sized Instant Snapshot” on page 293 for details.
Creating Instant Snapshots The second form of the vxsnap make command uses one of the following attributes to create the new snapshot volume, snapvol, by breaking off one or more existing plexes in the original volume: plex Specifies the plexes in the existing volume that are to be broken off. This attribute can only be used with plexes that are in the ACTIVE state. nmirror Specifies how many plexes are to be broken off. This attribute can only be used with plexes that are in the SNAPDONE state.
Creating Instant Snapshots For details of how to create a shared cache object, see “Creating a Shared Cache Object” on page 294. ❖ To create a space-optimized instant snapshot, snapvol, and also create a cache object for it to use: # vxsnap [-g diskgroup] make source=vol/newvol=snapvol\ [/cachesize=size][/autogrow=yes][/ncachemirror=number]\ [alloc=storage_attributes] The cachesize attribute determines the size of the cache relative to the size of the volume.
Creating Instant Snapshots You have the following choices of what to do with an instant snapshot: ◆ Refresh the contents of the snapshot. This creates a new point-in-time image of the original volume ready for another backup. If synchronization was already in progress on the snapshot, this operation may result in large portions of the snapshot having to be resynchronized. See “Refreshing an Instant Snapshot (vxsnap refresh)” on page 287 for details.
Creating Instant Snapshots Creating Multiple Instant Snapshots To make it easier to create snapshots of several volumes at the same time, the vxsnap make command accepts multiple tuples that define the source and snapshot volume names as their arguments.
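The following sketch shows the form that such a command might take for a database whose redo logs and table data are held on separate volumes; the disk group, volume, snapshot, and cache object names are illustrative, and it is assumed that the redo log volumes have snapshot plexes available to break off and that the cache object has already been created: # vxsnap -g dbdg make \ source=logv1/newvol=snplogv1/drl=sequential/nmirror=1 \ source=logv2/newvol=snplogv2/drl=sequential/nmirror=1 \ source=datav1/newvol=snpdatav1/cache=dbdgcache/drl=on \ source=datav2/newvol=snpdatav2/cache=dbdgcache/drl=on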
Creating Instant Snapshots In this example, sequential DRL is enabled for the snapshots of the redo log volumes, and normal DRL is applied to the snapshots of the volumes that contain the database tables. The two space-optimized snapshots are configured to share the same cache object in the disk group. Also note that break-off snapshots are used for the redo logs as such volumes are write intensive.
Creating Instant Snapshots The following example shows how to prepare a source volume set, vset1, and an identical volume set, snapvset1, which is then used to create the snapshot: # vxsnap -g mydg prepare vset1 # vxsnap -g mydg prepare snapvset1 # vxsnap -g mydg make source=vset1/snapvol=snapvset1 To create a full-sized third-mirror break-off snapshot, you must ensure that each volume in the source volume set contains sufficient plexes.
Creating Instant Snapshots By default, the vxsnap addmir command adds one snapshot mirror to a volume unless you use the nmirror attribute to specify a different number of mirrors. The mirrors remain in the SNAPATT state until they are fully synchronized. The -b option can be used to perform the synchronization in the background. Once synchronized, the mirrors are placed in the SNAPDONE state.
Creating Instant Snapshots Similarly, the next snapshot that is taken, fri_bu, is placed in front of thurs_bu: # vxsnap -g dbdg make source=dbvol/newvol=fri_bu/\ infrontof=thurs_bu/cache=dbdgcache For more information on the application of cascaded snapshots, see “Cascaded Snapshots” on page 269. Refreshing an Instant Snapshot (vxsnap refresh) Refreshing an instant snapshot replaces it with another point-in-time copy of a parent volume.
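The general form of the refresh operation is along these lines (the snapshot and volume names are illustrative): # vxsnap [-g diskgroup] refresh snapvol source=vol [syncing=yes|no] For example: # vxsnap -g mydg refresh SNAP-vol01 source=vol01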
Creating Instant Snapshots By default, all the plexes are reattached, which results in the removal of the snapshot. If required, the number of plexes to be reattached may be specified as the value assigned to the nmirror attribute. Note The snapshot being reattached must not be open to any application. For example, any file system configured on the snapshot volume must first be unmounted. It is possible to reattach a volume to an unrelated volume provided that their sizes are compatible.
Creating Instant Snapshots For a space-optimized instant snapshot, the cached data is used to recreate the contents of the specified volume. The space-optimized instant snapshot remains unchanged by the restore operation. Note For this operation to succeed, the volume that is being restored and the snapshot volume must not be open to any application. For example, any file systems that are configured on either volume must first be unmounted. It is not possible to restore a volume from an unrelated volume.
Creating Instant Snapshots Note When applied to a volume set or to a component volume of a volume set, this operation can result in inconsistencies in the snapshot hierarchy in the case of a system crash or hardware failure. If the operation is applied to a volume set, the -f (force) option must be specified.
Creating Instant Snapshots Displaying Instant Snapshot Information (vxsnap print) The vxsnap print command may be used to display information about the snapshots that are associated with a volume. # vxsnap [-g diskgroup] print [vol] This command shows the percentage progress of the synchronization of a snapshot or volume. If no volume is specified, information about the snapshots for all the volumes in a disk group is displayed.
Creating Instant Snapshots Controlling Instant Snapshot Synchronization Note Synchronization of the contents of a snapshot with its original volume is not possible for space-optimized instant snapshots. By default, synchronization is enabled for the vxsnap reattach, refresh and restore operations on instant snapshots. Otherwise, synchronization is disabled unless you specify the syncing=yes attribute to the vxsnap command.
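Synchronization of an instant snapshot can subsequently be started, paused, resumed, stopped, or waited upon using the following vxsnap keywords (a sketch; snapvol is the snapshot volume): # vxsnap [-g diskgroup] syncstart snapvol # vxsnap [-g diskgroup] syncpause snapvol # vxsnap [-g diskgroup] syncresume snapvol # vxsnap [-g diskgroup] syncstop snapvol # vxsnap [-g diskgroup] syncwait snapvol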
Creating a Volume for Use as a Full-Sized Instant Snapshot volume. The default size of 1m (1MB) is suggested as the minimum value for high-performance array and controller hardware. The specified value is rounded to a multiple of the volume’s region size. slow=iodelay Specifies the delay in milliseconds between synchronizing successive sets of regions as specified by the value of iosize. This can be used to change the impact of synchronization on system performance.
Creating a Shared Cache Object 4. Use the vxassist command to create a volume, snapvol, of the required size and redundancy, together with a version 20 DCO volume with the correct region size: # vxassist [-g diskgroup] make snapvol $LEN \ [layout=mirror nmirror=number] logtype=dco drl=off \ dcoversion=20 [ndcomirror=number] regionsz=$RSZ init=active \ [storage_attributes] Specify the same number of DCO mirrors (ndcomirror) as the number of mirrors in the volume (nmirror).
Creating a Shared Cache Object 2. Having decided on its characteristics, use the vxassist command to create the volume that is to be used for the cache volume. The following example creates a mirrored cache volume, cachevol, with size 1GB in the disk group, mydg, on the disks mydg16 and mydg17: # vxassist -g mydg make cachevol 1g layout=mirror init=active \ mydg16 mydg17 The attribute init=active is specified to make the cache volume immediately available for use. 3.
Creating a Shared Cache Object Listing the Snapshots Created on a Cache To list the space-optimized instant snapshots that have been created on a cache object, use the following command: # vxcache [-g diskgroup] listvol cache_object The snapshot names are printed as a space-separated list ordered by timestamp. If two or more snapshots have the same timestamp, these snapshots are sorted in order of decreasing size.
Creating a Shared Cache Object increase the size of the cache manually as described in “Growing and Shrinking a Cache” on page 297, or use the vxcache set command to reduce the value of highwatermark as shown in this example: # vxcache -g mydg set highwatermark=60 cobjmydg You can use the maxautogrow attribute to limit the maximum size to which a cache can grow.
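For example, a command of the following form might be used (the size limit shown is illustrative): # vxcache -g mydg set maxautogrow=20g cobjmydg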
Creating a Shared Cache Object Removing a Cache To remove a cache completely, including the cache object, its cache volume and all space-optimized snapshots that use the cache: 1. Run the following command to find out the names of the top-level snapshot volumes that are configured on the cache object: # vxprint -g diskgroup -vne \ "v_plex.pl_subdisk.sd_dm_name ~ /cache_object/" where cache_object is the name of the cache object. 2.
Creating Traditional Third-Mirror Break-Off Snapshots Creating Traditional Third-Mirror Break-Off Snapshots VxVM provides third-mirror break-off snapshot images of volume devices using vxassist and other commands. Note It is recommended that you use the instant snapshot mechanism for backup. Support for traditional third-mirror break-off snapshots created using the vxassist command may be removed in a future release.
Creating Traditional Third-Mirror Break-Off Snapshots Once the snapshot mirror is synchronized, it continues being updated until it is detached. You can then select a convenient time at which to create a snapshot volume as an image of the existing volume. You can also ask users to refrain from using the system during the brief time required to perform the snapshot (typically less than a minute).
Creating Traditional Third-Mirror Break-Off Snapshots Use the nmirror attribute to create as many snapshot mirrors as you need for the snapshot volume. For a backup, you should usually only require the default of one. It is also possible to make a snapshot plex from an existing plex in a volume. See “Converting a Plex into a Snapshot Plex” on page 302 for details. 2. Choose a suitable time to create a snapshot.
Creating Traditional Third-Mirror Break-Off Snapshots ◆ Dissociate the snapshot volume entirely from the original volume as described in “Dissociating a Snapshot Volume (vxassist snapclear)” on page 305. This may be useful if you want to use the copy for other purposes such as testing or report generation.
Creating Traditional Third-Mirror Break-Off Snapshots Here the DCO plex trivol_dco_03 is specified as the DCO plex for the new snapshot plex. To convert an existing plex into a snapshot plex in the SNAPDONE state for a volume on which Non-Persistent FastResync is enabled, use the following command: # vxplex [-g diskgroup] convert state=SNAPDONE plex A converted plex is in the SNAPDONE state, and can be used immediately to create a snapshot volume.
Creating Traditional Third-Mirror Break-Off Snapshots Reattaching a Snapshot Volume (vxassist snapback) Note The information in this section does not apply to RAID-5 volumes unless they have been converted to a special layered volume layout by the addition of a DCO and DCO volume. See “Adding a Version 0 DCO and DCO Volume” on page 307 for details. Snapback merges a snapshot copy of a volume with the original volume.
Creating Traditional Third-Mirror Break-Off Snapshots Adding Plexes to a Snapshot Volume If you want to retain the existing plexes in a snapshot volume after a snapback operation, create additional snapshot plexes that are to be used for the snapback: 1. Use the following vxprint commands to discover the names of the snapshot volume’s data change object (DCO) and DCO volume: # DCONAME=‘vxprint [-g diskgroup] -F%dco_name snapshot‘ # DCOVOL=‘vxprint [-g diskgroup] -F%log_vol $DCONAME‘ 2.
Creating Traditional Third-Mirror Break-Off Snapshots Displaying Snapshot Information (vxassist snapprint) The vxassist snapprint command displays the associations between the original volumes and their respective replicas (snapshot copies): # vxassist snapprint [volume] Output from this command is shown in the following examples:
# vxassist -g mydg snapprint v1
V  NAME         USETYPE   LENGTH
SS SNAPOBJ      NAME      LENGTH   %DIRTY
DP NAME         VOLUME    LENGTH   %DIRTY

v  v1                     20480
ss SNAP-v1_snp  v1        20480    4
dp                        20480    0
dp                        20480    0
Adding a Version 0 DCO and DCO Volume Adding a Version 0 DCO and DCO Volume Note The procedure described in this section adds a DCO log volume that has a version 0 layout as introduced in VxVM 3.2. The version 0 layout supports traditional (third-mirror break-off) snapshots, but not full-sized or space-optimized instant snapshots. See “Version 0 DCO Volume Layout” on page 51 and “Version 20 DCO Volume Layout” on page 51 for a description of the differences between the old and new DCO volume layouts.
Adding a Version 0 DCO and DCO Volume 3. Use the following command to add a DCO and DCO volume to the existing volume: # vxassist [-g diskgroup] addlog volume logtype=dco \ [ndcomirror=number] [dcolen=size] [storage_attributes] For non-layered volumes, the default number of plexes in the mirrored DCO volume is equal to the lesser of the number of plexes in the data volume or 2. For layered volumes, the default number of DCO plexes is always 2.
Adding a Version 0 DCO and DCO Volume To view the details of the DCO object and DCO volume that are associated with a volume, use the vxprint command.
Adding a Version 0 DCO and DCO Volume To dissociate, but not remove, the DCO object, DCO volume and any snap objects from the volume, myvol, in the disk group, mydg, use the following command: # vxdco -g mydg dis myvol_dco This form of the command dissociates the DCO object from the volume but does not destroy it or the DCO volume. If the -o rm option is specified, the DCO object, DCO volume and its plexes, and any snap objects are also removed.
10 Creating and Administering Volume Sets This chapter describes how to use the vxvset command to create and administer volume sets in VERITAS Volume Manager (VxVM). Volume sets enable the use of the Multi-Volume Support feature with VERITAS File System (VxFS). It is also possible to use the VERITAS Enterprise Administrator (VEA) to create and administer volume sets. For more information, see the VEA online help. For full details of the usage of the vxvset command, see the vxvset(1M) manual page.
Creating a Volume Set Creating a Volume Set To create a volume set for use by VERITAS File System (VxFS), use the following command: # vxvset [-g diskgroup] -t vxfs make volset volume Here volset is the name of the volume set, and volume is the name of the first volume in the volume set. The -t option defines the content handler subdirectory for the application that is to be used with the volume. This subdirectory contains utilities that an application uses to operate on the volume set.
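For example, to create a volume set named myvset that contains the volume vol1 in the disk group mydg (the names are illustrative), a command of the following form might be used: # vxvset -g mydg -t vxfs make myvset vol1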
Listing Details of Volume Sets To list the details of the component volumes of a volume set, use the following command: # vxvset [-g diskgroup] list [volset] If the name of a volume set is not specified, the command lists the details of all volume sets in a disk group, as shown in the following example:
# vxvset -g mydg list
NAME   GROUP   NVOLS   CONTEXT
set1   mydg    3       -
set2   mydg    2       -
To list the details of each volume in a volume set, specify the name of the volume set as an argument
Removing a Volume from a Volume Set For the example given previously, the effect of running these commands on the component volumes is shown below:
# vxvset -g mydg stop set1
# vxvset -g mydg list set1
VOLUME   INDEX   LENGTH     KSTATE     CONTEXT
vol1     0       12582912   DISABLED   -
vol2     1       12582912   DISABLED   -
vol3     2       12582912   DISABLED   -
# vxvset -g mydg start set1
# vxvset -g mydg list set1
VOLUME   INDEX   LENGTH     KSTATE     CONTEXT
vol1     0       12582912   ENABLED    -
vol2     1       12582912   ENABLED    -
vol3     2       12582912   ENABLED    -
Removing a Volume from a Volume Set
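To remove a component volume from a volume set, a command of the following form can be used (a sketch, reusing the names from the example above): # vxvset -g mydg rmvol set1 vol3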
11 Configuring Off-Host Processing Note The FastResync feature is not supported by VxVM 4.1 on the HP-UX 11i v3 platform. Off-host processing allows you to implement the following activities: ◆ Data Backup—As the requirement for 24 x 7 availability becomes essential for many businesses, organizations cannot afford the downtime involved in backing up critical data offline.
FastResync of Volume Snapshots FastResync of Volume Snapshots Note You need a VERITAS FlashSnapTM or FastResync license to use this feature. VxVM allows you to take multiple snapshots of your data at the level of a volume. A snapshot volume contains a stable copy of a volume’s data at a given moment in time that you can use for online backup or decision support. If FastResync is enabled on a volume, VxVM uses a FastResync map to keep track of which blocks are updated in the volume and in the snapshot.
Implementing Off-Host Processing Solutions As shown in “Example Implementation of Off-Host Processing,” by accessing snapshot volumes from a lightly loaded host (shown here as the OHP host), CPU- and I/O-intensive operations for online backup and decision support do not degrade the performance of the primary host that is performing the main production activity (such as running a database).
Implementing Off-Host Processing Solutions Note A volume snapshot represents the data that exists in a volume at a given point in time. As such, VxVM does not have any knowledge of data that is cached by the overlying file system, or by applications such as databases that have files open in the file system.
Implementing Off-Host Processing Solutions Note If the volume was created under VxVM 4.0 or a later release, and it is not associated with a new-style DCO object and DCO volume, follow the procedure described in “Preparing a Volume for DRL and Instant Snapshots” on page 235. If the volume was created before release 4.0 of VxVM, and it has any attached snapshot plexes or it is associated with any snapshot volumes, follow the procedure given in “Upgrading Existing Volumes to Use Version 20 DCOs” on page 240.
Implementing Off-Host Processing Solutions placed on disks which are used to hold the plexes of other volumes, this may cause problems when you subsequently attempt to move a snapshot volume into another disk group as described in “Moving DCO Volumes Between Disk Groups” on page 161. To override the default storage allocation policy, you can use storage attributes to specify explicitly which disks to use for the snapshot plexes. See “Creating a Volume on Specific Disks” on page 203 for more information.
Implementing Off-Host Processing Solutions If required, you can use the following command to verify whether the V_PFLAG_INCOMPLETE flag is set on a volume: # vxprint [-g diskgroup] -F%incomplete snapvol This command returns the value off if synchronization of the volume, snapvol, is complete; otherwise, it returns the value on. You can also use the vxsnap print command to check on the progress of synchronization as described in “Displaying Instant Snapshot Information (vxsnap print)” on page 291. 9.
Implementing Off-Host Processing Solutions 15. On the primary host, re-import the snapshot volume’s disk group using the following command: # vxdg import snapvoldg 16. On the primary host, use the following command to rejoin the snapshot volume’s disk group with the original volume’s disk group: # vxdg join snapvoldg volumedg 17. The snapshot volume is initially disabled following the join.
Implementing Off-Host Processing Solutions Note If the volume was created under VxVM 4.0 or a later release, and it is not associated with a new-style DCO object and DCO volume, follow the procedure described in “Preparing a Volume for DRL and Instant Snapshots” on page 235. If the volume was created before release 4.0 of VxVM, and has any attached snapshot plexes, or is associated with any snapshot volumes, follow the procedure given in “Upgrading Existing Volumes to Use Version 20 DCOs” on page 240. 2.
Implementing Off-Host Processing Solutions If you created a new volume, snapvol, for use as the snapshot volume in step 4, use the following version of the vxsnap command to create the snapshot on this volume: # vxsnap -g volumedg make source=volume/snapvol=snapvol Note By default, VxVM attempts to avoid placing snapshot mirrors on a disk that already holds any plexes of a data volume. However, this may be impossible if insufficient space is available in the disk group.
Implementing Off-Host Processing Solutions 8. On the primary host, if you temporarily suspended updates to the volume by a database in step 6, release all the tables from hot backup mode. 9. The snapshot volume must be completely synchronized before you can move it into another disk group.
Implementing Off-Host Processing Solutions 15. On the OHP host, use the appropriate database commands to recover and start the replica database for its decision support role. When you want to resynchronize the snapshot volume's data with the primary database, you can refresh the snapshot plexes from the original volume as described below: 1. On the OHP host, shut down the replica database, and use the following command to unmount the snapshot volume: # umount mount_point 2.
12 Administering Hot-Relocation If a volume has a disk I/O failure (for example, the disk has an uncorrectable error), VERITAS Volume Manager (VxVM) can detach the plex involved in the failure. I/O stops on that plex but continues on the remaining plexes of the volume. If a disk fails completely, VxVM can detach the disk from its disk group. All plexes on the disk are disabled. If there are any unmirrored volumes on a disk when it is detached, those volumes are also disabled.
How Hot-Relocation Works How Hot-Relocation Works Hot-relocation allows a system to react automatically to I/O failures on redundant (mirrored or RAID-5) VxVM objects, and to restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks to disks designated as spare disks or to free space within the disk group. VxVM then reconstructs the objects that existed before the failure and makes them redundant and accessible again.
How Hot-Relocation Works 3. If no spare disks are available or additional space is needed, vxrelocd uses free space on disks in the same disk group, except those disks that have been excluded for hot-relocation use (marked nohotuse). When vxrelocd has relocated the subdisks, it reattaches each relocated subdisk to its plex. 4. Finally, vxrelocd initiates appropriate recovery procedures. For example, recovery includes mirror resynchronization for mirrored volumes or data recovery for RAID-5 volumes.
How Hot-Relocation Works Example of Hot-Relocation for a Subdisk in a RAID-5 Volume a) Disk group contains five disks. Two RAID-5 volumes are configured across four of the disks. One spare disk is available for hot-relocation. mydg01 mydg02 mydg03 mydg04 mydg01-01 mydg02-01 mydg03-01 mydg04-01 mydg05 Spare Disk mydg02-02 mydg03-02 b) Subdisk mydg02-01 in one RAID-5 volume fails.
How Hot-Relocation Works Partial Disk Failure Mail Messages If hot-relocation is enabled when a plex or disk is detached by a failure, mail indicating the failed objects is sent to root. If a partial disk failure occurs, the mail identifies the failed plexes.
How Hot-Relocation Works Complete Disk Failure Mail Messages If a disk fails completely and hot-relocation is enabled, the mail message lists the disk that failed and all plexes that use the disk.
Configuring a System for Hot-Relocation When selecting space for relocation, hot-relocation preserves the redundancy characteristics of the VxVM object to which the relocated subdisk belongs. For example, hot-relocation ensures that subdisks from a failed plex are not relocated to a disk containing a mirror of the failed plex. If redundancy cannot be preserved using any available spare disks and/or free space, hot-relocation does not take place.
Displaying Spare Disk Information Depending on the locations of the relocated subdisks, you can choose to move them elsewhere after hot-relocation occurs (see “Configuring Hot-Relocation to Use Only Spare Disks” on page 339). After a successful relocation, remove and replace the failed disk as described in “Removing and Replacing Disks” on page 90.
Marking a Disk as a Hot-Relocation Spare Marking a Disk as a Hot-Relocation Spare Hot-relocation allows the system to react automatically to I/O failure by relocating redundant subdisks to other disks. Hot-relocation then restores the affected VxVM objects and data. If a disk has already been designated as a spare in the disk group, the subdisks from the failed disk are relocated to the spare disk. Otherwise, any suitable free space in the disk group is used.
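To designate a disk as a hot-relocation spare from the command line, a command of the following form can be used (the disk group and disk media names are illustrative): # vxedit -g mydg set spare=on mydg01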
Removing a Disk from Use as a Hot-Relocation Spare Any VM disk in this disk group can now use this disk as a spare in the event of a failure. If a disk fails, hot-relocation should automatically occur (if possible). You should be notified of the failure and relocation through electronic mail. After successful relocation, you may want to replace the failed disk.
Excluding a Disk from Hot-Relocation Use Excluding a Disk from Hot-Relocation Use To exclude a disk from hot-relocation use, use the following command: # vxedit [-g diskgroup] set nohotuse=on diskname where diskname is the disk media name. Alternatively, using vxdiskadm: 1. Select menu item 15 (Exclude a disk from hot-relocation use) from the vxdiskadm main menu. 2.
Making a Disk Available for Hot-Relocation Use Making a Disk Available for Hot-Relocation Use Free space is used automatically by hot-relocation in case spare space is not sufficient to relocate failed subdisks. You can limit this free space usage by hot-relocation by specifying which free disks should not be touched by hot-relocation. If a disk was previously excluded from hot-relocation use, you can undo the exclusion and add the disk back to the hot-relocation pool.
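To make a previously excluded disk available to hot-relocation again from the command line, a command of the following form can be used (the disk group and disk media names are illustrative): # vxedit -g mydg set nohotuse=off mydg01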
Configuring Hot-Relocation to Use Only Spare Disks Configuring Hot-Relocation to Use Only Spare Disks If you want VxVM to use only spare disks for hot-relocation, add the following line to the file /etc/default/vxassist: spare=only If not enough storage can be located on disks marked as spare, the relocation fails. Any free space on non-spare disks is not used.
Moving and Unrelocating Subdisks Moving and Unrelocating Subdisks Using vxdiskadm To move the hot-relocated subdisks back to the disk where they originally resided after the disk has been replaced following a failure, use the following procedure: 1. Select menu item 14 (Unrelocate subdisks back to a disk) from the vxdiskadm main menu. 2. This option prompts for the original disk media name first.
Moving and Unrelocating Subdisks Moving and Unrelocating Subdisks Using vxassist You can use the vxassist command to move and unrelocate subdisks. For example, to move the relocated subdisks on mydg05 belonging to the volume home back to mydg02, enter the following command: # vxassist -g mydg move home !mydg05 mydg02 Here, !mydg05 specifies the current location of the subdisks, and mydg02 specifies where the subdisks should be relocated.
Moving and Unrelocating Subdisks Moving Hot-Relocated Subdisks back to their Original Disk Assume that mydg01 failed and all the subdisks were relocated. After mydg01 is replaced, vxunreloc can be used to move all the hot-relocated subdisks back to mydg01. # vxunreloc -g mydg mydg01 Moving Hot-Relocated Subdisks to a Different Disk The vxunreloc utility provides the -n option to move the subdisks to a different disk from where they were originally relocated.
Moving and Unrelocating Subdisks Examining Which Subdisks Were Hot-Relocated from a Disk If a subdisk was hot relocated more than once due to multiple disk failures, it can still be unrelocated back to its original location. For instance, if mydg01 failed and a subdisk named mydg01-01 was moved to mydg02, and then mydg02 experienced disk failure, all of the subdisks residing on it, including the one which was hot-relocated to it, will be moved again.
Modifying the Behavior of Hot-Relocation If the system goes down after the new subdisks are created on the destination disk, but before all the data has been moved, re-execute vxunreloc when the system has been rebooted. Caution Do not modify the string UNRELOC in the comment field of a subdisk record. Modifying the Behavior of Hot-Relocation Hot-relocation is turned on as long as vxrelocd is running. You leave hot-relocation turned on so that you can take advantage of this feature if a failure occurs.
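❖ To send the failure notification mail to users other than root, specify additional user names when vxrelocd is started, for example (the user names are illustrative): nohup vxrelocd root user1 user2 &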
Modifying the Behavior of Hot-Relocation ❖ To reduce the impact of recovery on system performance, you can instruct vxrelocd to increase the delay between the recovery of each region of the volume, as shown in the following example: nohup vxrelocd -o slow[=IOdelay] root & where the optional IOdelay value indicates the desired delay in milliseconds. The default value for the delay is 250 milliseconds.
Modifying the Behavior of Hot-Relocation 346 VERITAS Volume Manager Administrator’s Guide
13 Administering Cluster Functionality Note The cluster functionality feature is not supported by VxVM 4.1 on the HP-UX 11i v3 platform. A cluster consists of a number of hosts or nodes that share a set of disks. The main benefits of cluster configurations are: ◆ Availability—If one node fails, the other nodes can still access the shared disks. When configured with suitable software, mission-critical applications can continue running by transferring their execution to a standby node in the cluster.
Overview of Cluster Volume Management For additional information about using the Dynamic Multipathing (DMP) feature of VxVM in a clustered environment, see “DMP in a Clustered Environment” on page 105. Overview of Cluster Volume Management In recent years, tightly coupled cluster systems have become increasingly popular in the realm of enterprise-scale mission-critical data processing. The primary advantage of clusters is protection against hardware failure.
Overview of Cluster Volume Management The private network allows the nodes to share information about system resources and about each other’s state. Using the private network, any node can recognize which other nodes are currently active, which are joining or leaving the cluster, and which have failed. The private network requires at least two communication channels to provide redundancy against one of the channels failing.
Overview of Cluster Volume Management VxVM determines that the first node to join a cluster performs the function of master node. If the master node leaves a cluster, one of the slave nodes is chosen to be the new master. In “Example of a 4-Node Cluster,” node 0 is the master node and nodes 1, 2 and 3 are slave nodes. Private and Shared Disk Groups Two types of disk groups are defined: ◆ Private disk groups—belong to only one node. A private disk group is only imported by one system.
Overview of Cluster Volume Management Whether all members of the cluster have simultaneous read and write access to a cluster-shareable disk group depends on its activation mode setting as discussed in “Activation Modes of Shared Disk Groups.” The data contained in a cluster-shareable disk group is available as long as at least one node is active in the cluster. The failure of a cluster node does not affect access by the remaining active nodes.
Overview of Cluster Volume Management Activation Modes for Shared Disk Groups off The node has neither read nor write access to the disk group. Query operations on the disk group are permitted. The following table summarizes the allowed and conflicting activation modes for shared disk groups: Allowed and Conflicting Activation Modes Disk group activated in cluster as... Attempt to activate disk group on another node as...
Overview of Cluster Volume Management Note The activation mode of a disk group controls volume I/O from different nodes in the cluster. It is not possible to activate a disk group on a given node if it is activated in a conflicting mode on another node in the cluster. When enabling activation using the defaults file, it is recommended that this file be made identical on all nodes in the cluster. Otherwise, the results of activation are unpredictable.
Overview of Cluster Volume Management The practical implication of this design is that I/O failure on any node results in the configuration of all nodes being changed. This is known as the global detach policy. However, in some cases, it is not desirable to have all nodes react in this way to I/O failure. To address this, an alternate way of responding to I/O failures, known as the local detach policy, was introduced in release 3.2 of VxVM.
Overview of Cluster Volume Management Local Detach Policy Caution Do not use the local detach policy if you use the VCS agents that monitor the cluster functionality of VERITAS Volume Manager, and which are provided with VERITAS Storage FoundationTM for Cluster File System HA and VERITAS Storage Foundation for databases HA. These agents do not notify VCS about local failures.
Overview of Cluster Volume Management Cluster Behavior Under I/O Failure to a Mirrored Volume for Different Disk Detach Policies
Type of I/O Failure: Failure of one or more disks in a volume for all nodes.
Local (diskdetpolicy=local): The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.
Global (diskdetpolicy=global): The plex is detached, and I/O from/to the volume continues. An I/O error is generated if no plexes remain.
Overview of Cluster Volume Management ◆ When an array is seen by DMP as Active/Passive. The local detach policy causes unpredictable behavior for Active/Passive arrays. ◆ For clusters with four or fewer nodes. With a small number of nodes in a cluster, it is preferable to keep all nodes actively using the volumes, and to keep the applications running on all the nodes. ◆ If only non-mirrored, small mirrored, or hardware mirrored volumes are configured.
Cluster Initialization and Configuration The detach policy does not change the requirement that a node joining a cluster must have access to all the disks in all shared disk groups. Similarly, a node that is removed from the cluster because of an I/O failure cannot rejoin the cluster until this requirement is met. Limitations of Shared Disk Groups Note The boot disk group (usually aliased as bootdg) cannot be made cluster-shareable. It must be private.
Cluster Initialization and Configuration Note To make effective use of the cluster functionality of VxVM requires that you configure a cluster monitor (such as provided by GAB (Group Membership and Atomic Broadcast) in VCS). The cluster monitor startup procedure effects node initialization, and brings up the various cluster components (such as VxVM with cluster support, the cluster monitor, and a distributed lock manager) on the node. Once this is complete, applications may be started.
Cluster Initialization and Configuration The startnode keyword to vxclustadm starts cluster functionality on a cluster node by passing cluster configuration information to the VxVM kernel. In response to this command, the kernel and the VxVM configuration daemon, vxconfigd, perform initialization. The stopnode keyword stops cluster functionality on a node. It waits for all outstanding I/O to complete and for all applications to close shared volumes.
Cluster Initialization and Configuration Reasons and descriptions:
◆ join timed out during reconfiguration: Join of a node has timed out due to reconfiguration taking place in the cluster.
◆ klog update failed: Cannot update kernel log copies during the join of a node.
◆ master aborted during join: Master node aborted while another node was joining the cluster.
◆ minor number conflict: Minor number conflicts exist between private disk groups and shared disk groups that are being imported.
Cluster Initialization and Configuration new disk group with the same name as an existing disk group. The vxconfigd daemon on the master node then sends details of the changes to the vxconfigd daemons on the slave nodes. The vxconfigd daemons on the slave nodes then perform their own checking. For example, each slave node checks that it does not have a private disk group with the same name as the one being created; if the operation involves a new disk, each node checks that it can access that disk.
Cluster Initialization and Configuration When a node is initialized for cluster operation, the vxconfigd daemon is notified that the node is about to join the cluster and is provided with the following information from the cluster monitor configuration database: ◆ cluster ID ◆ node IDs ◆ master node ID ◆ role of the node ◆ network address of the vxconfigd daemon on each node (if applicable) On the master node, the vxconfigd daemon sets up the shared configuration by importing shared disk groups,
Cluster Initialization and Configuration information about the shared configuration. (Neither the kernel view of the shared configuration nor access to shared disks is affected.) Until the vxconfigd daemon on the slave node has successfully reconnected to the vxconfigd daemon on the master node, it has very little information about the shared configuration and any attempts to display or modify the shared configuration can fail.
Cluster Initialization and Configuration The cluster functionality of VxVM maintains global state information for each volume. This enables VxVM to determine which volumes need to be recovered when a node crashes. When a node leaves the cluster due to a crash or by some other means that is not clean, VxVM determines which volumes may have writes that have not completed and the master node resynchronizes these volumes.
Upgrading Cluster Functionality Cluster Shutdown If all nodes leave a cluster, shared volumes must be recovered when the cluster is next started if the last node did not leave cleanly, or if resynchronization from previous nodes leaving uncleanly is incomplete. Upgrading Cluster Functionality The rolling upgrade feature allows you to upgrade the version of VxVM running in a cluster without shutting down the entire cluster.
Dirty Region Logging (DRL) in Cluster Environments Once you have installed the new release on all nodes, run the vxdctl upgrade command on the master node to switch the cluster to the higher cluster protocol version. See “Upgrading the Cluster Protocol Version” on page 379 for more information. Dirty Region Logging (DRL) in Cluster Environments Dirty region logging (DRL) is an optional property of a volume that provides speedy recovery of mirrored volumes after a system failure.
Multi-Host Failover Configurations How DRL Works in a Cluster Environment When one or more nodes in a cluster crash, DRL must handle the recovery of all volumes that were in use by those nodes when the crashes occurred. On initial cluster startup, all active maps are incorporated into the recovery map during the volume start operation.
Multi-Host Failover Configurations Specifically, when a host imports a disk group, the import normally fails if any disks within the disk group appear to be locked by another host. This allows automatic re-importing of disk groups after a reboot (autoimporting) and prevents imports by another host, even while the first host is shut down.
Multi-Host Failover Configurations which can be fed to vxmake to restore the layouts. There are typically numerous configuration copies for each disk group, but corruption nearly always affects all configuration copies, so redundancy does not help in this case. Disk group configuration corruption usually shows up as missing or duplicate records in the configuration databases.
Administering VxVM in Cluster Environments Administering VxVM in Cluster Environments The following sections describe the administration of VxVM’s cluster functionality. Note Most VxVM commands require superuser or equivalent privileges. Requesting Node Status and Discovering the Master Node The vxdctl utility controls the operation of the vxconfigd volume configuration daemon. The -c option can be used to request cluster information and to find out which node is the master.
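For example, the following command, run on any cluster node, reports whether that node is currently the master or a slave (the exact wording of the output depends on the cluster state): # vxdctl -c mode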
Administering VxVM in Cluster Environments Determining if a Disk is Shareable The vxdisk utility manages VxVM disks. To use the vxdisk utility to determine whether a disk is part of a cluster-shareable disk group, use the following command: # vxdisk list accessname where accessname is the disk access name (or device name). A portion of the output from this command (for the device c4t1d0) is shown here: Device: devicetag: type: clusterid: disk: timeout: group: flags: ...
Administering VxVM in Cluster Environments To display information about one specific disk group, use the following command: # vxdg list diskgroup The following is example output for the command vxdg list group1 on the master: Group: group1 dgid: 774222028.1090.teal import-id: 32768.
Administering VxVM in Cluster Environments the disk is accessible is the only node in the cluster. However, this means that other nodes cannot join the cluster. Furthermore, if you attempt to add the same disk to different disk groups (private or shared) on two nodes at the same time, the results are undefined. Perform all configuration on one node only, and preferably on the master node.
Administering VxVM in Cluster Environments When a cluster is restarted, VxVM can refuse to auto-import a disk group for one of the following reasons: ◆ A disk in the disk group is no longer accessible because of hardware errors on the disk. In this case, use the following command to forcibly reimport the disk group: # vxdg -s -f import diskgroup ◆ Some of the nodes to which disks in the disk group are attached are not currently in the cluster, so the disk group cannot access all of its disks.
Administering VxVM in Cluster Environments Splitting Disk Groups As described in “Splitting Disk Groups” on page 165, you can use the vxdg split command to remove a self-contained set of VxVM objects from an imported disk group, and move them to a newly created disk group. Splitting a private disk group creates a private disk group, and splitting a shared disk group creates a shared disk group. You can split a private disk group on any cluster node where that disk group is imported.
Administering VxVM in Cluster Environments Setting the Disk Detach Policy on a Shared Disk Group Note The disk detach policy for a shared disk group can only be set on the master node. The vxdg command may be used to set either the global or local disk detach policy for a shared disk group: # vxdg -g diskgroup set diskdetpolicy=global|local The default disk detach policy is global. See “Connectivity Policy of Shared Disk Groups” on page 353 for more information.
Administering VxVM in Cluster Environments Setting Exclusive Open Access to a Volume by a Node Note Exclusive open access on a volume can only be set on the master node. Ensure that none of the nodes in the cluster have the volume open when setting this attribute. You can set the exclusive=on attribute with the vxvol command to specify that an existing volume may only be opened by one node in the cluster at a time.
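A sketch, assuming a shared disk group mydg and a volume vol01: # vxvol -g mydg set exclusive=on vol01 Specifying exclusive=off instead allows the volume to be opened by more than one node at a time.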
Administering VxVM in Cluster Environments Displaying the Supported Cluster Protocol Version Range The following command displays the maximum and minimum protocol version supported by the node and the current protocol version: # vxdctl support This command produces output similar to the following:
Support information:
  vxconfigd_vrsn:     21
  dg_minimum:         20
  dg_maximum:         120
  kernel:             15
  protocol_minimum:   40
  protocol_maximum:   60
  protocol_current:   60
You can also use the following command to display the maximum a
Administering VxVM in Cluster Environments Recovering Volumes in Shared Disk Groups Note Volumes can only be recovered on the master node. The vxrecover utility is used to recover plexes and volumes after disk replacement. When a node leaves a cluster, it can leave some mirrors in an inconsistent state. The vxrecover utility can be used to recover such volumes. The -c option to vxrecover causes it to recover all volumes in shared disk groups.
14 Using VERITAS Storage Expert System administrators often find that gathering and interpreting data about large and complex configurations can be a difficult task. VERITAS Storage Expert (vxse) is designed to help in diagnosing configuration problems with VxVM. Storage Expert consists of a set of simple commands that collect VxVM configuration data and compare it with “best practice.
How Storage Expert Works How Storage Expert Works Storage Expert components include a set of rule scripts and a rules engine. The rules engine runs the scripts and produces ASCII output, which is organized and archived by Storage Expert’s report generator. This output contains information about areas of VxVM configuration that do not meet the set criteria. By default, output is sent to the screen, but you can send it to a file using standard output redirection.
Running Storage Expert The following options may be specified:
◆ -d defaults_file Specify an alternate defaults file.
◆ -g diskgroup Specify the disk group to be examined.
◆ -v Specify verbose output format.
One of the following keywords must be specified:
◆ check List the default values used by the rule’s attributes.
◆ info Describe what the rule does.
◆ list List the attributes of the rule that you can set.
◆ run Run the rule.
Running Storage Expert To see the default values of a specified rule’s attributes, use the check keyword as shown here:
# vxse_stripes2 check
vxse_stripes2 - TUNEABLES
---------------------------------------------------------
VxVM vxse:vxse_stripes2 INFO V-5-1-5546
too_wide_stripe - (16) columns in a striped volume
too_narrow_stripe - (3) columns in a striped volume
Storage Expert lists the default value of each of the rule’s attributes.
Running Storage Expert Rule Result Types Running a rule generates output that shows the status of the objects that have been examined against the rule: INFO Information about the specified object; for example “RAID-5 does not have a log.” PASS The object met the conditions of the rule. VIOLATION The object did not meet the conditions of the rule.
Identifying Configuration Problems Using Storage Expert Identifying Configuration Problems Using Storage Expert Storage Expert provides a large number of rules that help you to diagnose configuration issues that might cause problems for your storage environment. Each rule describes the issues involved, and suggests remedial actions.
Identifying Configuration Problems Using Storage Expert Checking for Large Mirror Volumes Without a DRL (vxse_drl1) To check whether large mirror volumes (larger than 1GB) have an associated dirty region log (DRL), run rule vxse_drl1. Creating a DRL speeds recovery of mirrored volumes after a system crash. A DRL tracks those regions that have changed and uses the tracking information to recover only those portions of the volume that need to be recovered.
Identifying Configuration Problems Using Storage Expert Checking for Non-Mirrored RAID-5 Logs (vxse_raid5log3) To check that the RAID-5 log of a large volume is mirrored, run the vxse_raid5log3 rule. A mirror of the RAID-5 log protects against loss of data due to the failure of a single disk. You are strongly advised to mirror the log if vxse_raid5log3 reports that the log of a large RAID-5 volume does not have a mirror. For information on adding a RAID-5 log mirror, see “Adding a RAID-5 Log” on page 243.
Identifying Configuration Problems Using Storage Expert Checking Version Number of Disk Groups (vxse_dg4) To check the version number of a disk group, run rule vxse_dg4. For optimum results, your disk groups should have the latest version number that is supported by the installed version of VxVM. If a disk group is not at the latest version number, see the section “Upgrading a Disk Group” on page 169 for information about upgrading it.
Identifying Configuration Problems Using Storage Expert Checking States of Plexes and Volumes (vxse_volplex) To check whether your disk groups contain unused objects (such as plexes and volumes), run rule vxse_volplex.
Identifying Configuration Problems Using Storage Expert Checking the Number of Columns in RAID-5 Volumes (vxse_raid5) To check whether RAID-5 volumes have too few or too many columns, run rule vxse_raid5. By default, this rule assumes that a RAID-5 plex should have more than 4 columns and fewer than 8 columns. See “Performing Online Relayout” on page 254 for information on changing the number of columns.
Identifying Configuration Problems Using Storage Expert Hardware Failures Checking for Disk and Controller Failures (vxse_dc_failures) Rule vxse_dc_failures can be used to discover if the system has any failed disks or disabled controllers. Rootability Checking the Validity of Root Mirrors (vxse_rootmir) Rule vxse_rootmir can be used to confirm that the root mirrors are set up correctly.
Rule Definitions and Attributes

The tables in this section list rule definitions, and rule attributes and their default values.

Note  You can use the info keyword to show a description of a rule. See “Discovering What a Rule Does” on page 383 for details.

Rule Definitions

vxse_dc_failures   Checks and points out failed disks and disabled controllers.

vxse_dg1           Checks for disk group configurations in which the disk group has become too large.
vxse_raid5log1     Checks for RAID-5 volumes that do not have an associated log.

vxse_raid5log2     Checks for recommended minimum and maximum RAID-5 log sizes.

vxse_raid5log3     Checks for large RAID-5 volumes that do not have a mirrored RAID-5 log.

vxse_redundancy    Checks the redundancy of volumes.

vxse_rootmir       Checks that all root mirrors are set up correctly.
Rule Attributes and Default Attribute Values

Rule              Attribute          Default Value  Description
vxse_dc_failures  -                  -              No user-configurable variables.
vxse_dg1          max_disks_per_dg   250            Maximum number of disks in a disk group. Warn if a disk group has more disks than this.
vxse_dg2          -                  -              No user-configurable variables.
vxse_dg3          -                  -              No user-configurable variables.
vxse_dg4          -                  -              No user-configurable variables.
vxse_dg5          -                  -              No user-configurable variables.
Rule              Attribute          Default Value  Description
vxse_raid5        too_narrow_raid5   4              Minimum number of RAID-5 columns. Warn if the actual number of RAID-5 columns is less than this.
                  too_wide_raid5     8              Maximum number of RAID-5 columns. Warn if the actual number of RAID-5 columns is greater than this.
vxse_raid5log1    -                  -              No user-configurable variables.
vxse_raid5log2    r5_max_size        1g (1GB)       Maximum RAID-5 log check size.
Rule              Attribute           Default Value  Description
vxse_stripes1     default_stripeunit  8k (8KB)       Stripe unit size for stripe volumes. Warn if a stripe does not have a stripe unit which is an integer multiple of this value.
vxse_stripes2     too_narrow_stripe   3              Minimum number of columns in a striped plex. Warn if a striped volume has fewer columns than this.
                  too_wide_stripe     16             Maximum number of columns in a striped plex.
15 Performance Monitoring and Tuning

VERITAS Volume Manager (VxVM) can improve overall system performance by optimizing the layout of data storage on the available hardware. This chapter contains guidelines for establishing performance priorities, for monitoring performance, and for configuring your system appropriately.

Performance Guidelines

VxVM allows you to optimize data storage performance using the following two strategies:

◆ Balance the I/O load among the available disk drives.
Performance Guidelines Striping Striping improves access performance by cutting data into slices and storing it on multiple devices that can be accessed in parallel. Striped plexes improve access performance for both read and write operations. Having identified the most heavily accessed volumes (containing file systems or databases), you can increase access bandwidth to this data by striping it across portions of multiple disks.
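For example, an existing volume could be converted online to a striped layout across four columns with a relayout operation of the following form (the names and column count are illustrative):

# vxassist -g mydg relayout busyvol layout=stripe ncol=4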
Performance Guidelines Combining Mirroring and Striping Note You need a full license to use this feature. Mirroring and striping can be used together to achieve a significant improvement in performance when there are multiple I/O streams. Striping provides better throughput because parallel I/O streams can operate concurrently on separate devices. Serial access is optimized when I/O exactly fits across all stripe units in one stripe.
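For example, a volume that combines mirroring and striping could be created with a command of the following form (the name, size, and column count are illustrative):

# vxassist -g mydg make mirstrvol 10g layout=mirror-stripe ncol=3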
Performance Guidelines Volume Read Policies To help optimize performance for different types of volumes, VxVM supports the following read policies on data plexes: ◆ round—a round-robin read policy, where all plexes in the volume take turns satisfying read requests to the volume. ◆ prefer—a preferred-plex read policy, where the plex with the highest performance usually satisfies read requests. If that plex fails, another plex is accessed.
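For example, the read policy of a volume could be changed with commands of the following form (the volume and plex names are illustrative):

# vxvol -g mydg rdpol prefer myvol myvol-02
# vxvol -g mydg rdpol round myvol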
Performance Monitoring

As a system administrator, you have two sets of priorities to consider when monitoring performance. One set is physical, concerned with hardware such as disks and controllers. The other set is logical, concerned with managing software and its operation.
Performance Monitoring If you do not specify any operands, vxtrace reports either all error trace data or all I/O trace data on all virtual disk devices. With error trace data, you can select all accumulated error trace data, wait for new error trace data, or both of these (this is the default action). Selection can be limited to a specific disk group, to specific VxVM kernel I/O object types, or to particular named objects or devices.
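For example, tracing could be limited to a single volume with a command of the following form (the disk group and volume names are illustrative; see the vxtrace(1M) manual page for the full set of options):

# vxtrace -g mydg myvol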
Performance Monitoring Additional volume statistics are available for RAID-5 configurations. For detailed information about how to use vxstat, refer to the vxstat(1M) manual page. Using Performance Data When you have gathered performance data, you can use it to determine how to configure your system to use resources most effectively. The following sections provide an overview of how you can use this data. Using I/O Statistics Examination of the I/O statistics can suggest how to reconfigure your system.
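For example, volume statistics could be sampled repeatedly to observe activity over time. In this sketch, the names, interval, and count are illustrative; six sets of statistics are collected at five-second intervals, after first resetting the counters:

# vxstat -g mydg -r
# vxstat -g mydg -i 5 -c 6 myvol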
To display disk statistics, use the vxstat -d command. The following is a typical display of disk statistics:

TYP  NAME      OPERATIONS          BLOCKS              AVG TIME(ms)
               READ      WRITE     READ      WRITE     READ    WRITE
dm   mydg01    40473     174045    455898    951379    29.5    35.4
dm   mydg02    32668     16873     470337    351351    35.2    102.9
dm   mydg03    55249     60043     780779    731979    35.3    61.2
dm   mydg04    11909     13745     114508    128605    25.0    30.
Performance Monitoring If two volumes (other than the root volume) on the same disk are busy, move them so that each is on a different disk. If one volume is particularly busy (especially if it has unusually large average read or write times), stripe the volume (or split the volume into multiple pieces, with each piece on a different disk). If done online, converting a volume to use striping requires sufficient free space to store an extra copy of the volume.
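For example, a volume could be moved off a particular disk with a command of the following form (the names are illustrative; VxVM chooses the destination storage unless a disk is specified):

# vxassist -g mydg move busyvol !mydg02 mydg05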
Tuning VxVM controllers can be used, and the speed of the system bus. If a particularly busy volume has a high ratio of reads to writes, it is likely that mirroring can significantly improve performance of that volume. Using I/O Tracing I/O statistics provide the data for basic performance analysis; I/O traces serve for more detailed analysis. With an I/O trace, focus is narrowed to obtain an event trace for a specific workload.
Tuning VxVM Tuning Guidelines for Large Systems On smaller systems (with less than a hundred disk drives), tuning is unnecessary and VxVM is capable of adopting reasonable defaults for all configuration parameters. On larger systems, configurations can require additional control over the tuning of these parameters, both for capacity and performance reasons. Generally, only a few significant decisions must be made when setting up VxVM on a large system.
Tuning VxVM You can also change the number of copies for an existing group by using the vxedit set command (see the vxedit(1M) manual page). For example, to configure five configuration copies for the disk group, bigdg, use the following command: # vxedit set nconfig=5 bigdg Changing Values of Tunables Tunables are modified by using SAM or the kctune utility. Changed tunables take effect only after relinking the kernel and booting the system from the new kernel.
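For example, a tunable could be examined and then changed with commands of the following form (the tunable name and value are illustrative, and the exact kctune syntax may vary between HP-UX releases):

# kctune vol_maxio
# kctune vol_maxio=2048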
Tuning VxVM Tunable Parameters The following sections describe specific tunable parameters. dmp_enable_restore_daemon Set to 1 to enable the DMP restore daemon; set to 0 to disable. dmp_failed_io_threshold The time limit for an I/O request in DMP. If the time exceeds this value, the usual result is to mark the disk as bad.
Tuning VxVM dmp_restore_daemon_policy The DMP restore policy, which can be set to 0 (CHECK_ALL), 1 (CHECK_DISABLED), 2 (CHECK_PERIODIC), or 3 (CHECK_ALTERNATE). dmp_retry_count If an inquiry succeeds on a path, but there is an I/O error, the number of retries to attempt on the path. vol_checkpt_default The interval at which utilities performing recoveries or resynchronization operations load the current offset into the kernel as a checkpoint.
Tuning VxVM also increases the load on the private network between the cluster members. This is because every other member of the cluster must be informed each time a bit in the map is marked. Since the region size must be the same on all nodes in a cluster for a shared volume, the value of the vol_fmr_logsz tunable on the master node overrides the tunable values on the slave nodes, if these values are different.
Tuning VxVM vol_maxioctl The maximum size of data that can be passed into VxVM via an ioctl call. Increasing this limit allows larger operations to be performed. Decreasing the limit is not generally recommended, because some utilities depend upon performing operations of a certain size and can fail unexpectedly if they issue oversized ioctl requests. The default value for this tunable is 32768 bytes (32KB). vol_maxkiocount The maximum number of I/O operations that can be performed by VxVM in parallel.
Tuning VxVM This tunable limits the size of an I/O request at a higher level in VxVM than the level of an individual disk. For example, for an 8 by 64KB stripe, a value of 256KB only allows I/O requests that use half the disks in the stripe; thus, it cuts potential throughput in half. If you have more columns or you have used a larger interleave factor, then your relative performance is worse. This tunable must be set, as a minimum, to the size of your largest stripe (RAID-0 or RAID-5).
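As a worked example (the stripe geometry here is an illustrative assumption): for an 8-column stripe with a 64KB stripe unit, the full stripe width is 8 x 64KB = 512KB, so vol_maxio should be set to a value equivalent to at least 512KB if I/O requests are to span the whole stripe.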
Tuning VxVM The VxVM kernel currently sets the default value for this tunable to 512 sectors. Note If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio. voliomem_chunk_size The granularity of memory chunks used by VxVM when allocating or releasing system memory. A larger granularity reduces CPU overhead due to memory allocation by allowing VxVM to retain hold of a larger amount of memory. The default size for this tunable is 64KB.
Tuning VxVM voliot_iobuf_default The default size for the creation of a tracing buffer in the absence of any other specification of desired kernel buffer size as part of the trace ioctl. The default size of this tunable is 8192 bytes (8KB). If trace data is often being lost due to this buffer size being too small, then this value can be tuned to a more generous amount. voliot_iobuf_limit The upper limit to the size of memory that can be used for storing tracing buffers in the kernel.
Tuning VxVM volpagemod_max_memsz The amount of memory, measured in kilobytes, that is allocated for caching FastResync and cache object metadata. This tunable has a default value of 6144KB (6MB) of physical memory. Note The memory allocated for this cache is exclusively dedicated to it. It is not available for other processes or applications.
Tuning VxVM Increasing this size improves the initial performance on the system when a failure first occurs and before a detach of a failing object is performed, but can lead to memory starvation.
A Commands Summary This appendix summarizes the usage and purpose of important commonly used commands in VERITAS Volume Manager (VxVM). References are included to longer descriptions in the remainder of this book. Most commands (excepting daemons, library commands and supporting scripts) are linked to the /usr/sbin directory from the /opt/VRTS/bin directory.
The following tables summarize the commonly used commands:

◆ “Obtaining Information About Objects in VxVM” on page 423
◆ “Administering Disks” on page 424
◆ “Creating and Administering Disk Groups” on page 426
◆ “Creating and Administering Subdisks” on page 428
◆ “Creating and Administering Plexes” on page 430
◆ “Creating Volumes” on page 432
◆ “Administering Volumes” on page 434
◆ “Monitoring and Controlling Tasks” on page 438
Obtaining Information About Objects in VxVM Command Description vxdctl license List licensed features of VxVM. vxdisk [-g diskgroup] list [diskname] Lists disks under control of VxVM. See “Displaying Disk Information” on page 98. Example: # vxdisk -g mydg list vxdg list [diskgroup] Lists information about disk groups. See “Displaying Disk Group Information” on page 135. Example: # vxdg list mydg vxdg -s list Lists information about shared disk groups. See “Listing Shared Disk Groups” on page 372.
Administering Disks Command Description vxdiskadm Administers disks in VxVM using a menu-based interface. vxdiskadd [devicename ...] Adds a disk specified by device name. See “Using vxdiskadd to Place a Disk Under Control of VxVM” on page 81. Example: # vxdiskadd c0t1d0 vxedit [-g diskgroup] rename olddisk \ Renames a disk under control of VxVM. See newdisk “Renaming a Disk” on page 97.
Administering Disks Command Description vxedit [-g diskgroup] set \ spare=on|off diskname Adds/removes a disk from the pool of hot-relocation spares. See “Marking a Disk as a Hot-Relocation Spare” on page 335 and “Removing a Disk from Use as a Hot-Relocation Spare” on page 336. Examples: # vxedit -g mydg set spare=on \ mydg04 # vxedit -g mydg set spare=off \ mydg04 vxdisk offline devicename Takes a disk offline. See “Taking a Disk Offline” on page 96.
Creating and Administering Disk Groups Command Description vxdg [-s] init diskgroup [diskname=]devicename Creates a disk group using a pre-initialized disk. See “Creating a Disk Group” on page 137 and “Creating a Shared Disk Group” on page 373. Example: # vxdg init mydg mydg01=c0t1d0 vxsplitlines -g diskgroup Reports conflicting configuration information. See “Handling Conflicting Configuration Copies in a Disk Group” on page 150.
Creating and Administering Disk Groups Command Description vxdg [-o expand] move sourcedg \ targetdg object ... Moves objects between disk groups. See “Moving Objects Between Disk Groups” on page 163. Example: # vxdg -o expand move mydg newdg \ myvol1 vxdg [-o expand] split sourcedg \ targetdg object ... Splits a disk group and moves the specified objects into the target disk group. See “Splitting Disk Groups” on page 165.
Creating and Administering Subdisks Command Description vxmake [-g diskgroup] sd subdisk \ diskname,offset,length Creates a subdisk. See “Creating Subdisks” on page 175. Example: # vxmake -g mydg sd mydg02-01 \ mydg02,0,8000 vxsd [-g diskgroup] assoc plex \ subdisk... Associates subdisks with an existing plex. See “Associating Subdisks with Plexes” on page 178. Example: # vxsd -g mydg assoc home-1 \ mydg02-01 mydg02-00 mydg02-01 vxsd [-g diskgroup] assoc plex \ subdisk1:0 ...
Creating and Administering Subdisks Command Description vxassist [-g diskgroup] move \ Relocates subdisks in a volume between disks. See “Moving and Unrelocating Subdisks Using vxassist” on page 341. volume !olddisk newdisk Example: # vxassist -g mydg move myvol !mydg02 mydg05 vxunreloc [-g diskgroup] original_disk Relocates subdisks to their original disks. See “Moving and Unrelocating Subdisks Using vxunreloc” on page 341.
Creating and Administering Plexes Command Description vxmake [-g diskgroup] plex plex \ sd=subdisk1[,subdisk2,...] Creates a concatenated plex. See “Creating Plexes” on page 183. Example: # vxmake -g mydg plex vol01-02 \ sd=mydg02-01,mydg02-02 vxmake [-g diskgroup] plex plex \ layout=stripe|raid5 stwidth=W \ ncolumn=N sd=subdisk1[,subdisk2,...] Creates a striped or RAID-5 plex. See “Creating a Striped Plex” on page 184.
Creating and Administering Plexes Command Description vxplex [-g diskgroup] cp volume newplex Copies a volume onto a plex. See “Copying Plexes” on page 193. Example: # vxplex -g mydg cp vol02 vol03-01 vxmend [-g diskgroup] fix clean plex Sets the state of a plex in an unstartable volume to CLEAN. See “Reattaching Plexes” on page 191. Example: # vxmend -g mydg fix clean vol02-02 vxplex [-g diskgroup] -o rm dis plex Dissociates and removes a plex from a volume.
Creating Volumes Command Description vxassist [-g diskgroup] maxsize \ layout=layout [attributes] Displays the maximum size of volume that can be created. See “Discovering the Maximum Size of a Volume” on page 202. Example: # vxassist -g mydg maxsize \ layout=raid5 nlog=2 vxassist -b [-g diskgroup] make \ volume length [layout=layout ] [attributes] Creates a volume. See “Creating a Volume on Any Disk” on page 203 and “Creating a Volume on Specific Disks” on page 203.
Creating Volumes Command Description vxassist -b [-g diskgroup] make \ volume length layout=mirror \ mirror=ctlr [attributes] Creates a volume with mirrored data plexes on separate controllers. See “Mirroring across Targets, Controllers or Enclosures” on page 216. Example: # vxassist -b -g mydg make mymcvol \ 20g layout=mirror mirror=ctlr vxmake -b [-g diskgroup] -Uusage_type \ Creates a volume from existing plexes. See vol volume [len=length] plex=plex,... “Creating a Volume Using vxmake” on page 218.
Administering Volumes Command Description vxassist [-g diskgroup] mirror volume \ [attributes] Adds a mirror to a volume. See “Adding a Mirror to a Volume” on page 231. Example: # vxassist -g mydg mirror myvol \ mydg10 vxassist [-g diskgroup] remove mirror \ volume [attributes] Removes a mirror from a volume. See “Removing a Mirror” on page 233.
Administering Volumes Command Description vxsnap [-g diskgroup] make \ source=volume/newvol=snapvol\ [/nmirror=number] Takes a full-sized instant snapshot of a volume by breaking off plexes of the original volume. See “Creating Instant Snapshots” on page 275. Example: # vxsnap -g mydg make \ source=myvol/newvol=mysnpvol\ /nmiror=2 vxsnap [-g diskgroup] make \ source=volume/snapvol=snapvol Takes a full-sized instant snapshot of a volume using a prepared empty volume.
Administering Volumes Command Description vxsnap [-g diskgroup] refresh snapshot Refreshes a snapshot from its original volume. See “Refreshing an Instant Snapshot (vxsnap refresh)” on page 287. Example: # vxsnap -g mydg refresh mysnpvol vxsnap [-g diskgroup] dis snapshot Turns a snapshot into an independent volume. See “Dissociating an Instant Snapshot (vxsnap dis)” on page 289.
Administering Volumes Command Description vxassist [-g diskgroup] convert volume \ [layout=layout] [convert_options] Converts between a layered volume and a non-layered volume layout. See “Converting Between Layered and Non-Layered Volumes” on page 260. Example: # vxassist -g mydg convert vol3 \ layout=stripe-mirror vxassist [-g diskgroup] remove volume \ volume Removes a volume. See “Removing a Volume” on page 250.
Monitoring and Controlling Tasks Command Description command [-g diskgroup] -t tasktag [options] [arguments] Specifies a task tag to a VxVM command. See “Specifying Task Tags” on page 227. Example: # vxrecover -g mydg -t mytask -b \ mydg05 vxtask [-h] [-g diskgroup] list Lists tasks running on a system. See “vxtask Usage” on page 229. Example: # vxtask -h -g mydg list vxtask monitor task Monitors the progress of a task. See “vxtask Usage” on page 229.
Online Manual Pages

Manual pages are organized into three sections:

◆ Section 1M — Administrative Commands
◆ Section 4 — File Formats
◆ Section 7 — Device Driver Interfaces

Section 1M — Administrative Commands

Manual pages in section 1M describe commands that are used to administer VERITAS Volume Manager.

Section 1M Manual Pages

dgcfgbackup    Create or update VxVM volume group configuration backup file.
dgcfgdaemon    Start the VxVM configuration backup daemon.
Online Manual Pages Section 1M Manual Pages 440 Name Description vxclustadm Start, stop, and reconfigure a cluster. vxcmdlog Administer command logging. vxconfigbackup Back up disk group configuration. vxconfigbackupd Disk group configuration backup daemon. vxconfigd VERITAS Volume Manager configuration daemon vxconfigrestore Restore disk group configuration. vxcp_lvmroot Copy LVM root disk onto new VERITAS Volume Manager root disk.
Online Manual Pages Section 1M Manual Pages Name Description vxevac Evacuate all volumes from a disk. vximportdg Import a disk group into the VERITAS Volume Manager configuration. vxinfo Print accessibility and usability of volumes. vxinstall Menu-driven VERITAS Volume Manager initial configuration. vxintro Introduction to the VERITAS Volume Manager utilities. vxiod Start, stop, and report on VERITAS Volume Manager kernel daemons. vxmake Create VERITAS Volume Manager configuration records.
Online Manual Pages Section 1M Manual Pages 442 Name Description vxresize Change the length of a volume containing a file system. vxrootmir Create a mirror of a VERITAS Volume Manager root disk. vxsd Perform VERITAS Volume Manager operations on subdisks. vxse Storage Expert rules. vxsnap Enable DRL on a volume, and create and administer instant snapshots. vxsplitlines Show disks with conflicting configuration copies in a cluster. vxstat VERITAS Volume Manager statistics management utility.
Section 4 — File Formats

Manual pages in section 4 describe the format of files that are used by VERITAS Volume Manager.

Section 4 Manual Pages

vol_pattern   Disk group search specifications.
vxmake        vxmake description file.

Section 7 — Device Driver Interfaces

Manual pages in section 7 describe the interfaces to VERITAS Volume Manager devices.
B Configuring VERITAS Volume Manager This appendix provides guidelines for setting up efficient storage management after installing the VERITAS Volume Manager software.
Adding Unsupported Disk Arrays as JBODs

▼ Optional Setup Tasks

◆ Place the root disk under VxVM control and mirror it to create an alternate boot disk.
◆ Designate hot-relocation spare disks in each disk group.
◆ Add mirrors to volumes.
◆ Configure DRL and FastResync on volumes.

▼ Maintenance Tasks

◆ Resize volumes and file systems.
◆ Add more disks, create new disk groups, and create new volumes.
◆ Create and maintain snapshots.
Guidelines for Configuring Storage

The following general guidelines help you understand and plan an efficient storage management system.

Guidelines for Protecting Your System and Data

A disk failure can cause loss of data on the failed disk and loss of access to your system. Loss of access is due to the failure of a key disk used for system operations. VERITAS Volume Manager can protect your system from these problems.
Guidelines for Configuring Storage ◆ Leave the VERITAS Volume Manager hot-relocation feature enabled. See “Hot-Relocation Guidelines” on page 450 for details. Mirroring Guidelines Refer to the following guidelines when using mirroring. ◆ Do not place subdisks from different plexes of a mirrored volume on the same physical disk. This action compromises the availability benefits of mirroring and degrades performance. Using the vxassist or vxdiskadm commands precludes this from happening.
Guidelines for Configuring Storage Note Using Dirty Region Logging can impact system performance in a write-intensive environment. For more information, see “Dirty Region Logging (DRL)” on page 42. Striping Guidelines Refer to the following guidelines when using striping. ◆ Do not place more than one column of a striped plex on the same physical disk. ◆ Calculate stripe-unit sizes carefully.
Guidelines for Configuring Storage For more information, see “Striping (RAID-0)” on page 21. RAID-5 Guidelines Refer to the following guidelines when using RAID-5. In general, the guidelines for mirroring and striping together also apply to RAID-5. The following guidelines should also be observed with RAID-5: ◆ Only one RAID-5 plex can exist per RAID-5 volume (but there can be multiple log plexes). ◆ The RAID-5 plex must be derived from at least three subdisks on three or more physical disks.
Guidelines for Configuring Storage ◆ Although hot-relocation does not require you to designate disks as spares, designate at least one disk as a spare within each disk group. This gives you some control over which disks are used for relocation. If no spares exist, VERITAS Volume Manager uses any available free space within the disk group. When free space is used for relocation purposes, it is possible to have performance degradation after the relocation.
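For example, a disk could be designated as a hot-relocation spare with a command of the following form (the disk group and disk names are illustrative):

# vxedit -g mydg set spare=on mydg04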
Controlling VxVM’s View of Multipathed Devices Accessing Volume Devices As soon as a volume has been created and initialized, it is available for use as a virtual disk partition by the operating system for the creation of a file system, or by application programs such as relational databases and other data management software.
Configuring Cluster Support Configuring Shared Disk Groups This section describes how to configure shared disks in a cluster. If you are installing VERITAS Volume Manager for the first time or adding disks to an existing cluster, you need to configure new shared disks. If you are setting up VERITAS Volume Manager for the first time, configure the shared disks using the following procedure: 1. Start the cluster on one node only to prevent access by other nodes. 2.
Reconfiguration Tasks To import disk groups to be shared, use the following command: # vxdg -s import diskgroup This procedure marks the disks in the shared disk groups as shared and stamps them with the ID of the cluster, enabling other nodes to recognize the shared disks. If dirty region logs exist, ensure they are active. If not, replace them with larger ones. To display the shared flag for all the shared disk groups, use the following command: # vxdg list The disk groups are now ready to be shared.
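Alternatively, a new disk group can be created as shared from the outset on the master node with a command of the following form (the disk group, disk, and device names are illustrative):

# vxdg -s init shareddg shareddg01=c1t2d0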
Glossary Active/Active disk arrays This type of multipathed disk array allows you to access a disk in the disk array through all the paths to the disk simultaneously, without any performance degradation. Active/Passive disk arrays This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time. Using a path other than the designated active path results in severe performance degradation in some disk arrays.
block The minimum unit of data transfer to or from a disk or array. boot disk A disk that is used for the purpose of booting a system. boot disk group A private disk group that contains the disks from which the system may be booted. bootdg A reserved disk group name that is an alias for the name of the boot disk group. clean node shutdown The ability of a node to leave a cluster gracefully when all access to shared volumes has ceased. cluster A set of hosts (each termed a node) that share a set of disks.
configuration database A set of records containing detailed information on existing VxVM objects (such as disk and volume attributes). data change object (DCO) A VxVM object that is used to manage information about the FastResync maps in the DCO volume. Both a DCO object and a DCO volume must be associated with a volume to implement Persistent FastResync on that volume. data stripe This represents the usable data portion of a stripe and is equal to the stripe minus the parity region.
disk A collection of read/write data blocks that are indexed and can be accessed fairly quickly. Each disk has a universally unique identifier. disk access name An alternative term for a device name. disk access records Configuration records used to specify the access path to particular disks. Each disk access record contains a name, a type, and possibly some type-specific information, which is used by VxVM in deciding how to access and manipulate the disk that is defined by the disk access record.
disk group ID A unique identifier used to identify a disk group. disk ID A universally unique identifier that is given to each disk and can be used to identify the disk, even if it is moved. disk media name An alternative term for a disk name. disk media record A configuration record that identifies a particular disk, by disk ID, and gives that disk a logical (or administrative) name. disk name A logical or administrative name chosen for a disk that is under the control of VxVM, such as disk03.
enclosure See disk enclosure. enclosure-based naming See device name. fabric mode disk A disk device that is accessible on a Storage Area Network (SAN) via a Fibre Channel switch. FastResync A fast resynchronization feature that is used to perform quick and efficient resynchronization of stale mirrors, and to increase the efficiency of the snapshot mechanism. Also see Persistent FastResync and Non-Persistent FastResync.
hot-swap Refers to devices that can be removed from, or inserted into, a system without first turning off the power supply to the system. initiating node The node on which the system administrator is running a utility that requests a change to VxVM objects. This node initiates a volume reconfiguration. JBOD The common name for an unintelligent disk array which may, or may not, support the hot-swapping of disks. The name is derived from “just a bunch of disks.” log plex A plex used to store a RAID-5 log.
node One of the hosts in a cluster. node abort A situation where a node leaves a cluster (on an emergency basis) without attempting to stop ongoing operations. node join The process through which a node joins a cluster and gains access to shared disks. Non-Persistent FastResync A form of FastResync that cannot preserve its maps across reboots of the system because it stores its change map in memory. object An entity that is defined to and recognized internally by VxVM.
path When a disk is connected to a host, the path to the disk consists of the HBA (Host Bus Adapter) on the host, the SCSI or fibre cable connector and the controller on the disk or disk array. These components constitute a path to a disk. A failure on any of these results in DMP trying to shift all I/O for that disk onto the remaining (alternate) paths. Also see Active/Passive disk arrays, primary path and secondary path.
private region A region of a physical disk used to store private, structured VxVM information. The private region contains a disk header, a table of contents, and a configuration database. The table of contents maps the contents of the disk. The disk header contains a disk ID. All data in the private region is duplicated for extra reliability. public region A region of a physical disk managed by VxVM that contains available space and is used for allocating subdisks.
rootability The ability to place the root file system and the swap device under VxVM control. The resulting volumes can then be mirrored to provide redundancy and allow recovery in the event of disk failure. secondary path In Active/Passive disk arrays, the paths to a disk other than the primary path are called secondary paths. A disk is supposed to be accessed only through the primary path until it fails, after which ownership of the disk is transferred to one of the secondary paths.
spanning A layout technique that permits a volume (and its file system or database) that is too large to fit on a single disk to be configured across multiple physical disks. sparse plex A plex that is not as long as the volume or that has holes (regions of the plex that do not have a backing subdisk). Storage Area Network (SAN) A networking paradigm that provides easily reconfigurable connectivity between any subset of computers, disk storage and interconnecting hardware such as switches, hubs and bridges.
swap area A disk region used to hold copies of memory pages swapped out by the system pager process. swap volume A VxVM volume that is configured for use as a swap area. transaction A set of configuration changes that succeed or fail as a group, rather than individually. Transactions are used internally to maintain consistent configurations. volboot file A small file that is used to locate copies of the boot disk group configuration.
vxconfigd The VxVM configuration daemon, which is responsible for making changes to the VxVM configuration. This daemon must be running before VxVM operations can be performed.
Index Symbols /dev/vx/dmp directory 102 /dev/vx/rdmp directory 102 /etc/default/vxassist file 201, 339 /etc/default/vxdg defaults file 352 /etc/default/vxdg file 137 /etc/default/vxdisk file 62, 77 /etc/default/vxse file 385 /etc/fstab file 250 /etc/volboot file 173 /etc/vx/darecs file 173 /etc/vx/disk.info file 74 /etc/vx/dmppolicy.info file 121 /etc/vx/volboot file 145 /sbin/init.
preferred priority 120 primary 120 putil 182, 194 secondary 120 sequential DRL 212 setting for paths 119 setting for rules 385 snapvol 278 source 278 standby 120 subdisk 182 syncing 275, 292 tutil 182, 194 auto disk type 62 autogrow tuning 296 autogrow attribute 281, 295 autogrowby attribute 295 autotrespass mode 101 B backups created using snapshots 275 creating for volumes 261 creating using instant snapshots 275 creating using third-mirror snapshots 299 for multiple volumes 283, 303 implementing online 3
converting shared disk groups to private 375 creating shared disk groups 373 designating shareable disk groups 350 detach policies 353 determining if disks are shared 372 forcibly adding disks to disk groups 374 forcibly importing disk groups 374 importing disk groups as shared 374 initialization 358 introduced 348 joining disk groups in 376 limitations of shared disk groups 358 listing shared disk groups 372 maximum number of nodes in 347 moving objects between disk groups 375 node abort 365 node shutdown
database replay logs and sequential DRL 43 databases resilvering 44 resynchronizing 44 DCO adding to RAID-5 volumes 237 adding version 0 DCOs to volumes 307 adding version 20 DCOs to volumes 235 calculating plex size for version 20 52 considerations for disk layout 161 creating volumes with version 0 DCOs attached 210 creating volumes with version 20 DCOs attached 212 data change object 51 determining version of 238 dissociating version 0 DCOs from volumes 310 effect on disk group split and join 161 log ple
category 67 multipathed 6 re-including support for 67 removing disks from DISKS category 69 removing vendor-supplied support package 65 disk drives variable geometry 449 disk duplexing 25, 216 disk groups activating shared 376 activation in clusters 352 adding disks to 138 avoiding conflicting minor numbers on import 147 boot disk group 133 bootdg 133 checking for non-imported 389 checking initialized disks 389 checking number of configuration copies in 389 checking number of configuration database copies 3
version 169, 171 disk media names 11, 59 disk names 59 configuring persistent 74 disk sparing Storage Expert rules 391 disk## 12 disk##-## 12 diskdetpolicy attribute 357 diskgroup## 59 disks 64 adding 81 adding to disk groups 138 adding to disk groups forcibly 374 adding to DISKS category 68 array support library 64 auto-configured 62 categories 64 CDS format 62 changing default layout attributes 77 changing naming scheme 72 checking for failed 392 checking initialized disks not in disk group 389 checking n
setting failure policies in clusters 377 simple 62 spare 332 specifying to vxassist 203 stripe unit size 449 taking offline 96 unreserving 98 VM 11 DISKS category 64 adding disks 68 listing supported disks 67 removing disks 69 DMP check_all restore policy 128 check_alternate restore policy 128 check_disabled restore policy 128 check_periodic restore policy 128 disabling controllers 126 disabling multipathing 107 displaying DMP database information 110 displaying DMP node for a path 113 displaying DMP node f
enabled paths, displaying 112 enclosure-based naming 6, 60, 72 displayed by vxprint 73 DMP 103 enclosures 6 discovering disk access names in 73 issues with nopriv disks 75 issues with simple disks 75 mirroring across 216 setting attributes of paths 119 error messages Association count is incorrect 370 Association not resolved 370 Cannot auto-import group 370 Configuration records are inconsistent 370 Disk for disk group not found 146 Disk group has no valid configuration copies 146, 370 Disk group version d
detecting RAID-5 subdisk failure 328 excluding free space on disks from use by 337 limitations 329 making free space on disks available for use by 338 marking disks as spare 335 modifying behavior of 344 notifying users other than root 344 operation of 327 partial failure messages 331 preventing from running 344 reducing performance impact of recovery 345 removing disks from spare pool 336 Storage Expert rules 391 subdisk relocation 333 subdisk relocation messages 339 unrelocating subdisks 339 unrelocating
changing for disks 77 layouts changing default used by vxassist 203 left-symmetric 31 specifying default 203 types of volume 196 leave failure policy 356 left-symmetric layout 31 len subdisk attribute 182 LIF area 82 LIF LABEL record 82 load balancing 102 across nodes in a cluster 348 displaying policy for 121 specifying policy for 121 load-balancing specifying policy for 121 local detach policy 355 lock clearing on disks 145 LOG plex state 186 log subdisks 448 associating with plexes 179 DRL 43 logdisk 211
checking configuration 390 converting to striped-mirror 260 creating 215 defined 196 performance 401 mirroring defined 25 guidelines 448 mirroring controllers 448 mirroring plus striping 26 mirrors adding to volumes 231 boot disk 83 creating of VxVM root disk 84 creating snapshot 300 defined 16 removing from volumes 233 specifying number of 209 multipathing disabling 107 displaying information about 111 enabling 108 in HP-UX 106 Multi-Volume Support 311 N names changing for disk groups 142 defining for snap
specifying task tags for 258 temporary area 37 transformation characteristics 40 transformations and volume length 40 types of transformation 255 viewing status of 258 online status 99 ordered allocation 205, 211, 218 OTHER_DISKS category 64 overlapped seeks 448 P parity in RAID-5 29 partial device discovery 63 partition size displaying the value of 121 specifying 122 path failover in DMP 104 pathgroups creating 108 paths setting attributes of 119 performance analyzing data 405 benefits of using VxVM 399 ch
DISABLED 189 ENABLED 189 plex states ACTIVE 185 CLEAN 185 DCOSNP 185 EMPTY 186 IOFAIL 186 LOG 186 OFFLINE 186 SNAPATT 186 SNAPDIS 186 SNAPDONE 187 SNAPTMP 187 STALE 187 TEMP 187 TEMPRM 187 TEMPRMSD 188 plexes adding to snapshots 305 associating log subdisks with 179 associating subdisks with 178 associating with volumes 189 attaching to volumes 189 changing attributes 194 changing read policies for 249 checking for detached 390 checking for disabled 390 comment attribute 194 complete failure messages 332 co
checking existence of log 387 checking log is mirrored 388 checking size of log 387 guidelines 450 hot-relocation limitations 329 logs 33, 41 parity 29 removing logs 244 specifying number of logs 217 subdisk failure handled by hot-relocation 328 volumes 29 RAID-5 volumes adding DCOs to 237 adding logs 243 changing number of columns 257 changing stripe unit size 257 checking existence of RAID-5 log 387 checking number of columns 391 checking RAID-5 log is mirrored 388 checking size of RAID-5 log 387 creating
databases 44 root disk creating mirrors 84 root disk group 11, 131 root disks creating LVM from VxVM 85 creating VxVM 84 removing LVM 85 root mirrors checking 392 root volumes booting 83 rootability 82 checking 392 rootdg 11 round-robin load balancing 123 performance of read policy 402 read policy 249 rules attributes 395 checking attribute values 384 checking disk group configuration copies 388 checking disk group version number 389 checking for full disk group configuration database 388 checking for initi
size units 175 slave nodes defined 349 SmartSync 44 disabling on shared disk groups 415 enabling on shared disk groups 415 snap objects 55 snap volume naming 273 snapabort 263 SNAPATT plex state 186 snapback defined 264 merging snapshot volumes 304 resyncfromoriginal 274 resyncfromreplica 274, 304 snapclear creating independent volumes 305 SNAPDIS plex state 186 SNAPDONE plex state 187 snapshot hierarchies creating 286 splitting 290 snapshot mirrors adding to volumes 285 removing from volumes 286 snapshots
check keyword 384 checking default values of rule attributes 384 command-line syntax 382 diagnosing configuration issues 386 info keyword 383 introduced 381 list keyword 383 listing rule attributes 383 obtaining a description of a rule 383 requirements 382 rule attributes 395 rule definitions 393 rule result types 385 rules 382 rules engine 382 run keyword 384 running a rule 384 setting values of rule attributes 385 vxse 381 storage processor 101 storage relayout 36 stripe columns 21 stripe unit size recomm
unrelocating after hot-relocation 339 unrelocating to different disks 342 unrelocating using vxassist 341 unrelocating using vxdiskadm 340 unrelocating using vxunreloc 341 swap space increasing for VxVM rootable system 86 SYNC volume state 226 synchronization controlling for instant snapshots 292 improving performance of 292 syncing attribute 275, 292 syncpause 292 syncresume 292 syncstart 292 syncstop 292 syncwait 292 system names checking 392 TEMPRM plex state 187 TEMPRMSD plex state 188 third-mirror sna
V V-5-1-2536 246 V-5-1-2829 169 V-5-1-552 138 V-5-1-569 370 V-5-1-587 145 V-5-2-3091 160 V-5-2-369 139 V-5-2-4292 160 version 0 of DCOs 51 version 20 of DCOs 51 versioning of DCOs 51 versions checking for disk group 389 disk group 169 displaying for disk group 172 upgrading 169 virtual objects 8 VM disks defined 11 determining if shared 372 displaying spare 334 excluding free space from hot-relocation use 337 initializing 71 making free space available for hot-relocation use 338 marking as spare 335 mirrori
adding version 20 DCOs to 235 advanced approach to creating 198 assisted approach to creating 199 associating plexes with 189 attaching plexes to 189 backing up 261 backing up online using snapshots 275 block device files 222, 452 booting VxVM-rootable 83 changing layout online 254 changing number of columns 257 changing read policies for mirrored 249 changing stripe unit size 257 character device files 222, 452 checking for disabled 390 checking for stopped 390 checking if FastResync is enabled 253 checkin
obtaining performance statistics 404 performance of mirrored 400 performance of RAID-5 401 performance of striped 400 performing online relayout 254 placing in maintenance mode 230 preparing for DRL and instant snapshot operations 235 preventing recovery on restarting 231 RAID-0 21 RAID-0+1 25 RAID-1 25 RAID-1+0 26 RAID-10 26 RAID-5 29, 196 raw device files 222, 452 reattaching plexes 191 reattaching version 0 DCOs to 310 reconfiguration in clusters 361 recovering after correctable hardware failure 331 remo
instant snapshots 294 creating volumes with DRL enabled 213 creating volumes with version 0 DCOs attached 211 creating volumes with version 20 DCOs attached 212 defaults file 201 defining layout on specified storage 203 discovering maximum volume size 202 displaying information about snapshots 306 dissociating snapshots from volumes 305 excluding storage from use 204 finding out how much volumes can grow 245 merging snapshots with volumes 304 mirroring across controllers 207, 216 mirroring across enclosures
listing excluded disk arrays 67 listing supported disk arrays 66 listing supported disks in DISKS category 67 removing disks from DISKS category 69, 70 used to exclude support for disk arrays 67 used to re-include support for disk arrays 67 vxdestroy_lvmroot used to remove LVM root disks 85 vxdg adding disks to disk groups forcibly 374 changing activation mode on shared disk groups 376 clearing locks on disks 145 controllingl CDS compatibility of new disk groups 137 converting shared disk groups to private
List disk information 99 listing spare disks 334 Make a disk available for hot-relocation use 338 making free space on disks available for hot-relocation use 338 Mark a disk as a spare for a disk group 335 marking disks as spare 335 Mirror volumes on a disk 232 mirroring volumes 232 Move volumes from a disk 251 moving disk groups between systems 146 moving disks between disk groups 144 moving subdisks after hot-relocation 340 moving subdisks from disks 139 moving volumes from VM disks 251 Remove a disk 87,
configuring VxVM default behavior 232 mirroring volumes 232 vxnotify monitoring configuration changes 174 vxplex adding RAID-5 logs 244 attaching plexes to volumes 189, 231 converting plexes to snapshots 302 copying plexes 193 detaching plexes temporarily 191 dissociating and removing plexes 193 dissociating plexes from volumes 194 moving plexes 192 reattaching plexes 191 removing mirrors 233 removing plexes 233 removing RAID-5 logs 244 vxprint checking if FastResync is enabled 253 determining if DRL is ena
a disk 386 vxse_drl1 rule to check for mirrored volumes without a DRL 387 vxse_drl2 rule to check for mirrored DRL 387 vxse_host rule to check system name 392 vxse_mirstripe rule to check mirrored-stripe volumes 390 vxse_raid5 rule to check number of RAID-5 columns 391 vxse_raid5log1 rule to check for RAID-5 log 387 vxse_raid5log2 rule to check RAID-5 log size 387 vxse_raid5log3 rule to check for non-mirrored RAID-5 log 388 vxse_redundancy rule to check volume redundancy 389 vxse_rootmir rule to check roota
benefits to performance 399 cluster functionality (CVM) 347 configuration daemon 173 configuring disk devices 63 configuring to create mirrored volumes 232 dependency on operating system 3 disk discovery 64 granularity of memory allocation by 416 limitations of shared disk groups 358 maximum number of data plexes per volume 402 maximum number of subdisks per plex 415 maximum number of volumes 413 maximum size of memory pool 416 minimum size of memory pool 418 objects in 8 operation in clusters 348 performan