VERITAS Volume Manager™ 3.
Disclaimer The information contained in this publication is subject to change without notice. VERITAS Software Corporation makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software Corporation shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.
Contents

Preface  xv
    Introduction  xv
    Audience  xv
    Scope
Concatenation and Spanning  15
Striping (RAID-0)  17
Mirroring (RAID-1)  21
Striping Plus Mirroring (Mirrored-Stripe or RAID-0+1)
Hot-Relocation  51
Chapter 2. Administering Disks  53
    Introduction  53
    Disk Devices
Replacing a Failed or Removed Disk  78
Enabling a Physical Disk  80
Taking a Disk Offline  81
Renaming a Disk
DMP in a Clustered Environment  104
    Enabling/Disabling Controllers with Shared Disk Groups  105
    Operation of the DMP Restore Daemon with Shared Disk Groups  105
Chapter 4. Creating and Administering Disk Groups  107
    Introduction
Displaying Subdisk Information  140
Moving Subdisks  140
Splitting Subdisks  141
Joining Subdisks
Advanced Approach  161
Assisted Approach  161
Using vxassist  162
    Setting Default Values for vxassist
Managing Tasks with vxtask  187
Stopping a Volume  189
Putting a Volume in Maintenance Mode  190
Starting a Volume
Backing Up Volumes Online Using Mirrors  212
Backing Up Volumes Online Using Snapshots  214
Converting a Plex into a Snapshot Plex  217
Backing Up Multiple Volumes Using Snapshots  218
Merging a Snapshot Volume (snapback)
Moving and Unrelocating Subdisks Using vxassist  240
Moving and Unrelocating Subdisks Using vxunreloc  240
Restarting vxunreloc After Errors  242
Modifying the Behavior of Hot-Relocation  243
Chapter 10. Administering Cluster Functionality
Converting a Disk Group from Shared to Private  264
Moving Objects Between Disk Groups  265
Splitting Disk Groups  265
Joining Disk Groups
Performance Monitoring  285
    Setting Performance Priorities  285
    Obtaining Performance Data  285
    Using Performance Data  287
Tuning VxVM
Preface

Introduction

The VERITAS Volume Manager™ Administrator's Guide provides information on how to use VERITAS Volume Manager (VxVM) and all of its features.

Audience

This guide is intended for system administrators responsible for installing, configuring, and maintaining systems under the control of VxVM.
Organization

This guide is organized as follows:

◆ Understanding VERITAS Volume Manager
◆ Administering Disks
◆ Creating and Administering Disk Groups
◆ Creating and Administering Subdisks
◆ Creating and Administering Plexes
◆ Creating Volumes
◆ Administering Volumes
◆ Administering Hot-Relocation
◆ Administering Dynamic Multipathing (DMP)
◆ Administering Cluster Functionality
◆ Configuring Off-Host Processing
◆ Performance Monitoring and Tuning

Using This Guide
Related Documents

The following documents provide information related to VxVM:

◆ VERITAS Volume Manager Installation Guide
◆ VERITAS Volume Manager Release Notes
◆ VERITAS Volume Manager Hardware Notes
◆ VERITAS Volume Manager Troubleshooting Guide
◆ VERITAS Volume Manager (UNIX) User’s Guide — VEA
◆ VERITAS Volume Manager manual pages
Conventions

The following table describes the typographic conventions used in this guide.

Typeface     Usage                                               Examples
monospace    Computer output, file contents, files,              Read tunables from the
             directories, software elements such as command      /etc/vx/tunefstab file.
             options, function names, and parameters
italic       New terms, book titles, emphasis, variables to      See the User’s Guide for
             be replaced by a name or value                      details.
Getting Help

If you have any comments or problems with VERITAS products, contact VERITAS Technical Support:

◆ U.S. and Canadian Customers: 1-800-342-0652
◆ International Customers: +1 (650) 527-8555
◆ Email: support@veritas.com

For license information (U.S. and Canadian Customers):

◆ Phone: 1-925-931-2464
◆ Email: license@veritas.com
◆ Fax: 1-925-931-2487

For software updates:

◆ Email: swupdate@veritas.com
1  Understanding VERITAS Volume Manager

Introduction

VERITAS Volume Manager (VxVM) is a storage management subsystem that allows you to manage physical disks as logical devices called volumes. A volume is a logical device that appears to data management systems as a physical disk. VxVM provides easy-to-use online disk storage management for computing environments and Storage Area Network (SAN) environments.
VxVM and the Operating System

VxVM operates as a subsystem between your operating system and your data management systems, such as file systems and database management systems. VxVM is tightly coupled with the operating system. Before a disk can be brought under VxVM control, the disk must be accessible through the operating system device interface.
How VxVM Handles Storage Management

VxVM uses two types of objects to handle storage management: physical objects and virtual objects.

◆ Physical objects—Physical disks or other hardware with block and raw operating system device interfaces that are used to store data.
◆ Virtual objects—When one or more physical disks are brought under the control of VxVM, it creates virtual objects called volumes on those physical disks.
Disk Arrays

Performing I/O to disks is a relatively slow process because disks are physical devices that require time to move the heads to the correct position on the disk before reading or writing. If all of the read or write operations are done to individual disks, one at a time, the read-write time can become unmanageable. Performing these operations on multiple disks can help to reduce this problem.
Multipathed Disk Arrays

Some disk arrays provide multiple ports to access their disk devices. These ports, coupled with the host bus adaptor (HBA) controller and any data bus or I/O processor local to the array, make up multiple hardware paths to access the disk devices. Such disk arrays are called multipathed disk arrays.
Figure: Example Configuration for Disk Enclosures Connected via a Fibre Channel Hub/Switch (a host controller c1 connects through a Fibre Channel hub or switch to disk enclosures enc0, enc1, and enc2).

In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure. For example, the device names for the disks in enclosure enc0 are named enc0_0, enc0_1, and so on. The main benefit of this scheme is that it allows you to quickly determine where a disk is physically located in a large SAN configuration.
A single device name is presented to VxVM for all of the paths over which a disk can be accessed. For example, the disk device enc0_0 represents a single disk for which two different paths are known to the operating system, such as c1t99d0 and c2t99d0. To take account of fault domains when configuring data redundancy, you can control how mirrored volumes are laid out across enclosures as described in “Mirroring across Targets, Controllers or Enclosures” on page 175.
After installing VxVM on a host system, you must bring the contents of physical disks under VxVM control by collecting the VM disks into disk groups and allocating the disk group space to create logical volumes.

Note  To bring the physical disk under VxVM control, the disk must not be under LVM control.
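For example, a disk can typically be brought under VxVM control and used for a new volume with a command sequence such as the following sketch (the device, disk group, and volume names here are illustrative, not taken from this guide):

# vxdiskadd c2t0d0
# vxdg -g newdg adddisk newdg02=c2t1d0
# vxassist -g newdg make vol01 2g

The first command interactively initializes a disk and optionally adds it to a disk group; the second adds an already-initialized disk to an existing disk group; the third creates a 2 GB volume from space in that disk group.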
Disk Groups

A disk group is a collection of VM disks that share a common configuration. A disk group configuration is a set of records with detailed information about related VxVM objects, their attributes, and their connections. The default disk group is rootdg (or root disk group). A disk group name can be up to 31 characters long.

Note  Even though rootdg is the default disk group, it does not necessarily contain the root disk.
Figure: Example of Three Subdisks Assigned to One VM Disk (a physical disk, devname, maps to VM disk disk01, which is divided into subdisks disk01-01, disk01-02, and disk01-03).

Any VM disk space that is not part of a subdisk is free space. You can use free space to create new subdisks. VxVM release 3.0 or higher supports the concept of layered volumes in which subdisks can contain volumes. For more information, see “Layered Volumes” on page 30.
Volumes

A volume is a virtual disk device that appears to applications, databases, and file systems like a physical disk device, but does not have the physical limitations of a physical disk device. A volume consists of one or more plexes, each holding a copy of the selected data in the volume. Due to its virtual nature, a volume is not restricted to a particular disk or a specific area of a disk. The configuration of a volume can be changed by using VxVM user interfaces.
Figure: Example of a Volume with Two Plexes (volume vol06 contains plexes vol06-01 and vol06-02, built from subdisks disk01-01 and disk02-01 respectively).

Each plex contains an identical copy of the volume data. For more information, see “Mirroring (RAID-1)” on page 21.
Figure: Connection Between Objects in VxVM (physical disks devname1 and devname2 map to VM disks disk01 and disk02; their subdisks build plexes vol01-01 and vol02-01, which form volumes vol01 and vol02 within a disk group).

Volume Layouts in VxVM

A VxVM virtual device is defined by a volume. A volume has a layout defined by the association of a volume to one or more plexes, each of which map to subdisks.
Implementation of Layered Volumes

A layered volume is constructed by mapping its subdisks to underlying volumes. The subdisks in the underlying volumes must map to VM disks, and hence to attached physical storage. Layered volumes allow for more combinations of logical compositions, some of which may be desirable for configuring a virtual device.
Concatenation and Spanning

Concatenation maps data in a linear manner onto one or more subdisks in a plex. To access all of the data in a concatenated plex sequentially, data is first accessed in the first subdisk from beginning to end. Data is then accessed in the remaining subdisks sequentially from beginning to end, until the end of the last subdisk. The subdisks in a concatenated plex do not have to be physically contiguous and can belong to more than one VM disk.
Volume Layouts in VxVM The figure, “Example of Spanning” on page 16 shows data spread over two subdisks in a spanned plex. In the figure, “Example of Spanning,” the first six blocks of data (B1 through B6) use most of the space on the disk to which VM disk disk01 is assigned. This requires space only on subdisk disk01-01 on disk01. However, the last two blocks of data, B7 and B8, use only a portion of the space on the disk to which VM disk disk02 is assigned.
Striping (RAID-0)

Note  You may need an additional license to use this feature.

Striping (RAID-0) is useful if you need large amounts of data written to or read from physical disks, and performance is important. Striping is also helpful in balancing the I/O load from multi-user applications across multiple disks. By using parallel data transfer to and from multiple disks, striping significantly improves data-access performance.
Figure: Striping Across Three Columns (stripe units su1, su2, and su3 form stripe 1, and su4, su5, and su6 form stripe 2, across subdisks 1, 2, and 3, one per column of the plex; SU = stripe unit).

A stripe consists of the set of stripe units at the same positions across all columns. In the figure, stripe units 1, 2, and 3 constitute a single stripe.
Volume Layouts in VxVM “Example of a Striped Plex with One Subdisk per Column” shows a striped plex with three equal sized, single-subdisk columns. There is one column per physical disk. This example shows three subdisks that occupy all of the space on the VM disks. It is also possible for each subdisk in a striped plex to occupy only a portion of the VM disk, which leaves free space for other disk management tasks.
Volume Layouts in VxVM “Example of a Striped Plex with Concatenated Subdisks per Column” illustrates a striped plex with three columns containing subdisks of different sizes. Each column contains a different number of subdisks. There is one column per physical disk. Striped plexes can be created by using a single subdisk from each of the VM disks being striped across.
Mirroring (RAID-1)

Mirroring uses multiple mirrors (plexes) to duplicate the information contained in a volume. In the event of a physical disk failure, the plex on the failed disk becomes unavailable, but the system continues to operate using the unaffected mirrors.

Note  Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy.
Figure: Mirrored-Stripe Volume Laid out on Six Disks (a mirror of two striped plexes, each striped across three columns).

See “Creating a Mirrored-Stripe Volume” on page 174 for information on how to create a mirrored-stripe volume. The layout type of the data plexes in a mirror can be concatenated or striped. Even if only one is striped, the volume is still termed a mirrored-stripe volume.
Figure: Striped-Mirror Volume Laid out on Six Disks (a striped plex of three columns, each column built from an underlying mirrored volume).

See “Creating a Striped-Mirror Volume” on page 175 for information on how to create a striped-mirror volume.
Figure: How the Failure of a Single Disk Affects Mirrored-Stripe and Striped-Mirror Volumes (failure of a disk detaches an entire striped plex, leaving a mirrored-stripe volume with no redundancy, whereas the same failure removes redundancy from only one mirror of a striped-mirror volume, which retains partial redundancy).

Compared to mirrored-stripe volumes, striped-mirror volumes are more tolerant of disk failure, and recovery time is shorter.
RAID-5 (Striping with Parity)

Note  VxVM supports RAID-5 for private disk groups, but not for shareable disk groups in a cluster environment.

Note  You may need an additional license to use this feature.

Although both mirroring (RAID-1) and RAID-5 provide redundancy of data, they use different methods. Mirroring provides data redundancy by maintaining multiple complete copies of the data in a volume. Data being written to a mirrored volume is reflected in all copies.
Traditional RAID-5 Arrays

A traditional RAID-5 array is several disks organized in rows and columns. A column is a number of disks located in the same ordinal position in the array. A row is the minimal number of disks necessary to support the full width of a parity stripe. The figure, “Traditional RAID-5 Array”, shows the row and column arrangement of a traditional RAID-5 array.
Figure: VERITAS Volume Manager RAID-5 Array (stripes 1 and 2 are laid out across subdisks in columns 0 through 3; SD = subdisk).

Note  Mirroring of RAID-5 volumes is not currently supported.

See “Creating a RAID-5 Volume” on page 176 for information on how to create a RAID-5 volume.

Left-Symmetric Layout

There are several layouts for data and parity that can be used in the setup of a RAID-5 array. The implementation of RAID-5 in VxVM uses a left-symmetric layout.
The figure, “Left-Symmetric Layout,” shows a left-symmetric parity layout with five disks (one per column).

Left-Symmetric Layout (P# entries are parity stripe units; the other entries are data stripe units):

    Column:    1    2    3    4    5
    Stripe 1:  0    1    2    3    P0
    Stripe 2:  5    6    7    P1   4
    Stripe 3:  10   11   P2   8    9
    Stripe 4:  15   P3   12   13   14
    Stripe 5:  P4   16   17   18   19

For each stripe, data is organized starting to the right of the parity stripe unit. In the figure, data organization for the first stripe begins at P0 and continues to stripe units 0-3.
RAID-5 Logging

Logging is used to prevent corruption of data during recovery by immediately recording changes to data and parity to a log area on a persistent device such as a volume on disk or in non-volatile RAM. The new data and parity are then written to the disks. Without logging, it is possible for data not involved in any active writes to be lost or silently corrupted if both a disk in a RAID-5 volume and the system fail.
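A RAID-5 log can be requested when the volume is created, or added later. As an illustrative sketch (the disk group and volume names are hypothetical, not taken from this guide):

# vxassist -g mydg make r5vol 10g layout=raid5,log
# vxassist -g mydg addlog r5vol

The first command creates a RAID-5 volume with a log plex; the second adds a log to an existing RAID-5 volume. See the vxassist(1M) manual page for the exact attributes supported by your release.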
Layered Volumes

A layered volume is a virtual VERITAS Volume Manager object that is built on top of other volumes. The layered volume structure tolerates failure better and has greater redundancy than the standard volume structure. For example, in a striped-mirror layered volume, each mirror (plex) covers a smaller area of storage space, so recovery is quicker than with a standard mirrored volume.
System administrators can manipulate the layered volume structure for troubleshooting or other operations (for example, to place data on specific disks). Layered volumes are used by VxVM to perform the following tasks and operations:

◆ Creating striped-mirrors. (See “Creating a Striped-Mirror Volume” on page 175, and the vxassist(1M) manual page.)
◆ Creating concatenated-mirrors. (See “Creating a Concatenated-Mirror Volume” on page 171, and the vxassist(1M) manual page.)
Online Relayout For example, if a striped layout with a 128KB stripe unit size is not providing optimal performance, you can use relayout to change the stripe unit size. File systems mounted on the volumes do not need to be unmounted to achieve this transformation provided that the file system (such as VERITAS File SystemTM) supports online shrink and grow operations. Online relayout reuses the existing storage space and has space allocation policies to address the needs of the new layout.
Figure: Example of Decreasing the Number of Columns in a Volume (a five-column layout is transformed into a three-column layout).

The following are examples of operations that you can perform using online relayout:

◆ Change a RAID-5 volume to a concatenated, striped, or layered volume (remove parity). See “Example of Relayout of a RAID-5 Volume to a Striped Volume” below. Note that removing parity (shown by the shaded area) decreases the overall storage space that the volume requires.
◆ Change the column stripe width in a volume. See “Example of Increasing the Stripe Width for the Columns in a Volume” below.

Figure: Example of Increasing the Stripe Width for the Columns in a Volume.

For details of how to perform online relayout operations, see “Performing Online Relayout” on page 221.
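An online relayout is requested with the vxassist relayout command. As a sketch, changing a volume to a three-column stripe with a 32 KB stripe unit might look like this (the disk group and volume names are hypothetical):

# vxassist -g mydg relayout vol04 layout=stripe ncol=3 stripeunit=32k

The operation runs in the background and can be monitored or reversed with vxrelayout and vxtask; see “Performing Online Relayout” on page 221 for the supported options.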
Permitted Relayout Transformations

The tables below give details of the relayout operations that are possible for each type of source storage layout.

Supported Relayout Transformations for Unmirrored Concatenated Volumes

    Relayout to      From concat
    concat           No.
    concat-mirror    No. Add a mirror, and then use vxassist convert instead.
    mirror-concat    No. Add a mirror instead.
    mirror-stripe    No. Use vxassist convert after relayout to striped-mirror volume instead.
    raid5            Yes.
Supported Relayout Transformations for RAID-5 Volumes

    Relayout to      From raid5
    concat           Yes.
    concat-mirror    Yes.
    mirror-concat    No. Use vxassist convert after relayout to concatenated-mirror volume instead.
    mirror-stripe    No. Use vxassist convert after relayout to striped-mirror volume instead.
    raid5            Yes. The stripe width and number of columns may be changed.
    stripe           Yes. The stripe width and number of columns may also be changed.
    stripe-mirror    Yes.
Supported Relayout Transformations for Unmirrored Stripe and Layered Striped-Mirror Volumes

    Relayout to      From stripe or stripe-mirror
    concat           Yes.
    concat-mirror    Yes.
    mirror-concat    No. Use vxassist convert after relayout to concatenated-mirror volume instead.
    mirror-stripe    No. Use vxassist convert after relayout to striped-mirror volume instead.
    raid5            Yes. The stripe width and number of columns may be changed.
    stripe           Yes. The stripe width or number of columns must be changed.
    stripe-mirror    Yes.
Transformation Characteristics

Transformation of data from one layout to another involves rearrangement of data in the existing layout to the new layout. During the transformation, online relayout retains data redundancy by mirroring any temporary space used. Read and write access to data is not interrupted during the transformation. Data is not corrupted if the system fails during a transformation.
Volume Resynchronization

VxVM needs to ensure that all mirrors contain exactly the same data and that the data and parity in RAID-5 volumes agree. This process is called volume resynchronization. For volumes that are part of disk groups that are automatically imported at boot time (such as rootdg), the resynchronization process takes place when the system reboots. Not all volumes require resynchronization after a system failure.
Dirty Region Logging (DRL)

Note  You may need an additional license to use this feature.

Dirty region logging (DRL), if enabled, speeds recovery of mirrored volumes after a system crash. DRL keeps track of the regions that have changed due to I/O writes to a mirrored volume. DRL uses this information to recover only those portions of the volume that need to be recovered.
If the vxassist command is used to create a dirty region log, it creates a log plex containing a single log subdisk by default. A dirty region log can also be set up manually by creating a log subdisk and associating it with a plex. The plex then contains both a log and data subdisks.

Sequential DRL

Some volumes, such as those that are used for database replay logs, are written sequentially and do not benefit from delayed cleaning of the DRL bits.
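As an illustrative sketch, a DRL log plex can be requested when a mirrored volume is created, or added to an existing mirrored volume (the disk group and volume names here are hypothetical):

# vxassist -g mydg make mirvol 5g layout=mirror,log
# vxassist -g mydg addlog datavol logtype=drl

For sequentially written volumes, the sequential DRL variant is typically selected with a logtype=drlseq attribute; check the vxassist(1M) manual page for the exact attribute names supported by your release.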
Volume Snapshots Volume Snapshots The volume snapshot model is shown in “Snapshot Creation and the Backup Cycle.” This figure also shows the transitions that are supported by the snapback and snapclear commands to vxassist. Snapshot Creation and the Backup Cycle START snapstart Original volume Refresh on snapback Original volume snapshot Snapshot mirror Backup Cycle Snapshot volume snapback Back up to disk, tape or other medium, or use to create replica database or file system.
Alternatively, you can use the vxassist snapclear command to break the association between the original volume and the snapshot volume. The snapshot volume then has an existence that is independent of the original volume. This is useful for applications that do not require the snapshot to be resynchronized with the original volume. For more information about taking snapshots of a volume, see “Backing Up Volumes Online Using Snapshots” on page 214, and the vxassist(1M) manual page.
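Put together, a typical snapshot backup cycle might look like the following sketch (the volume and disk group names are hypothetical; see the referenced section for the full procedure):

# vxassist -g mydg snapstart vol01
# vxassist -g mydg snapshot vol01 snapvol01
    ... back up snapvol01 to tape or disk ...
# vxassist -g mydg snapback snapvol01

Running snapclear instead of snapback at the last step makes snapvol01 permanently independent of vol01.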
FastResync

◆ FastResync allows you to refresh and re-use snapshots rather than discard them. You can quickly re-associate (snapback) snapshot plexes with their original volumes. This reduces the system overhead required to perform cyclical operations such as backups that rely on the snapshot functionality of VxVM.

Non-Persistent FastResync

Non-Persistent FastResync, introduced in VxVM 3.1, allocates its change maps in memory.
How Non-Persistent FastResync Works with Snapshots

The snapshot feature of VxVM takes advantage of FastResync change tracking to record updates to the original volume after a snapshot plex is created. After a snapshot is taken, the snapback option is used to reattach the snapshot plex.
FastResync When the snapstart operation is performed on the volume, this sets up a snapshot plex in the volume and associates a disabled DCO plex with it, as shown in “Mirrored Volume After Completion of a snapstart Operation.
FastResync See “Merging a Snapshot Volume (snapback)” on page 218, “Dissociating a Snapshot Volume (snapclear)” on page 219, and the vxassist(1M) manual page for more information.
FastResync To snapshot all the volumes in a single disk group, specify the option -o allvols to vxassist. However, this fails if any of the volumes in the disk group do not have a complete snapshot plex. It is also possible to take several snapshots of the same volume. A new FastResync change map is produced for each snapshot taken to minimize the resynchronization time for each snapshot. By default, the snapshot plex is resynchronized from the data in the original volume during a snapback operation.
FastResync Limitations

The following limitations apply to FastResync:

◆ Persistent FastResync is supported for RAID-5 volumes, but this prevents the use of the relayout or resize operations on the volume while a DCO is associated with it.
◆ Neither Non-Persistent nor Persistent FastResync can be used to resynchronize mirrors after a system crash. Dirty region logging (DRL), which can coexist with FastResync, should be used for this purpose.
SmartSync Recovery Accelerator You must configure volumes correctly to use SmartSync. For VxVM, there are two types of volumes used by the database, as follows: ◆ Redo log volumes contain redo logs of the database. ◆ Data volumes are all other volumes used by the database (control files and tablespace files). SmartSync works with these two types of volumes differently, and they must be configured correctly to take full advantage of the extended interfaces.
For details of how to configure sequential DRL, see “Adding DRL Logging to a Mirrored Volume” on page 197.

Hot-Relocation

Note  You may need an additional license to use this feature.

Hot-relocation is a feature that allows a system to react automatically to I/O failures on redundant objects (mirrored or RAID-5 volumes) in VxVM and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks.
2  Administering Disks

Introduction

This chapter describes the operations for managing disks used by the VERITAS Volume Manager (VxVM). This includes placing disks under VxVM control, initializing disks, mirroring the root disk, and removing and replacing disks.

Note  Most VxVM commands require superuser or equivalent privileges.

Note  Rootability, which puts the root disk under VxVM control and allows it to be mirrored, is supported for this release of VxVM for HP-UX.
Disk Devices

The device name (sometimes referred to as devname or disk access name) defines where the disk is located in a system.

Note  The full pathname of a device is /dev/vx/[r]dsk/devicename. In this document, only the device name is listed and /dev/vx/[r]dsk is assumed.

Disk Device Naming in VxVM

Prior to VxVM 3.2, all disks were named according to the c#t#d# format, and fabric mode disks were not supported by VxVM. From VxVM 3.2 onward, fabric mode disks are also supported.
Disk Devices ◆ Disks in the DISKS category (formerly known as JBOD disks) are named using the Disk_# format. ◆ Disks in the OTHER_DISKS category are named as follows: - Non-fabric disks are named using the c#t#d# format. - Fabric disks are named using the fabric_# format. See “Changing the Disk-Naming Scheme” on page 61 for details of how to switch between the two naming schemes.
Configuring Newly Added Disk Devices ◆ simple—the public and private regions are on the same disk area (with the public area following the private area). Typically, most or all disks on your system are configured as this disk type. ◆ nopriv—there is no private region (only a public region for allocating subdisks). This is the simplest disk type consisting only of space for allocating subdisks.
Configuring Newly Added Disk Devices You can also use the vxdisk scandisks command to scan devices in the operating system device tree and to initiate dynamic reconfiguration of multipathed disks. See the vxdisk(1M) manual page for more information. Discovering Disks and Dynamically Adding Disk Arrays You can dynamically add support for a new type of disk array which has been developed by a third-party vendor.
Administering the Device Discovery Layer

Dynamic addition of disk arrays is possible because of the existence of the Device Discovery Layer (DDL), which is a facility for discovering disks and their attributes that are required for VxVM and DMP operations. Administering the DDL is the role of the vxddladm utility, which is an administrative interface to the DDL. You can use vxddladm to perform the following tasks:

◆ List the types of arrays that are supported.
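For instance, a sketch of listing the currently supported arrays and JBOD entries (the output varies by system):

# vxddladm listsupport
# vxddladm listjbod

See the vxddladm(1M) manual page for the full set of keywords.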
For more information about excluding disk array support, see the vxddladm(1M) manual page.

Re-including Support for an Excluded Disk Array

If you have excluded support for a particular disk array, you can use the includearray keyword to remove the entry from the exclude list, as shown in the following example:

# vxddladm includearray libname=libvxenc.sl

This command adds the array library to the database so that the library can once again be used in device discovery.
Placing Disks Under VxVM Control To remove support for X1 disks from ACME, use the following command: # vxddladm rmjbod vid=ACME pid=X1 Placing Disks Under VxVM Control When you add a disk to a system that is running VxVM, you need to put the disk under VxVM control so that VxVM can control the space allocation on the disk. Unless another disk group is specified, VxVM places new disks in the default disk group, rootdg.
You can exclude all disks on specific controllers from initialization by listing those controllers in the file /etc/vx/cntrls.exclude. The following is an example of an entry in a cntrls.exclude file:

c0

You can exclude all disks in specific enclosures from initialization by listing those enclosures in the file /etc/vx/enclr.exclude.
Changing the Disk-Naming Scheme Using vxprint with Enclosure-Based Disk Names If you enable enclosure-based naming, and use the vxprint command to display the structure of a volume, it shows enclosure-based disk device names (disk access names) rather than c#t#d names.
Note  You cannot run vxdarestore if the c#t#d# naming scheme is in use. Additionally, vxdarestore does not handle failures on persistent simple/nopriv disks that are caused by renaming enclosures, by hardware reconfiguration that changes device names, or by removing support from the JBOD category for disks that belong to a particular vendor when enclosure-based naming is in use. For more information about the vxdarestore command, see the vxdarestore(1M) manual page.
# /usr/bin/vxvm/bin/vxdarestore

3. Re-import the disk group using the following command:

# vxdg import diskgroup

Installing and Formatting Disks

Depending on the hardware capabilities of your disks and of your system, you may either need to shut down and power off your system before installing the disks, or you may be able to hot-insert the disks into the live system. Many operating systems can detect the presence of the new disks on being rebooted.
Adding a Disk to VxVM Adding a Disk to VxVM Formatted disks being placed under VxVM control may be new or previously used outside VxVM. The set of disks can consist of all disks on the system, all disks on a controller, selected disks, or a combination of these. Depending on the circumstances, all of the disks may not be processed in the same way. Caution Initialization does not preserve data on disks.
The disk selection can be a single disk, or a series of disks and/or controllers (with optional targets). If the selection consists of multiple items, separate them using white space. For example, the following specifies four disks at separate target IDs on controller 3:

c3t0d0 c3t1d0 c3t2d0 c3t3d0
Adding a Disk to VxVM 4. At the following prompt, specify the disk group to which the disk should be added, none to reserve the disks for future use, or press Return to accept rootdg: You can choose to add these disks to an existing disk group, a new disk group, or you can leave these disks available for use by future add or replacement operations. To create a new disk group, select a disk group name that does not yet exist. To leave the disks available for future use, specify a disk group name of “none”.
Adding a Disk to VxVM vxdiskadm asks you to confirm that the devices are to be reinitialized before proceeding: Reinitialize these devices? [y,n,q,?] (default: n) y Initializing device device name. Adding disk device device name to disk group disk group name with disk name disk name. . . . Note To bring LVM disks under VxVM control, use the Migration Utilities. See the VERITAS Volume Manager Migration Guide for details. 11.
Rootability Note If you are adding an uninitialized disk, warning and error messages are displayed on the console during the vxdiskadd command. Ignore these messages. These messages should not appear after the disk has been fully initialized; the vxdiskadd command displays a success message when the initialization completes. The interactive dialog for adding a disk using vxdiskadd is similar to that for vxdiskadm, described in “Adding a Disk to VxVM” on page 65.
Rootability VxVM Root Disk Volume Restrictions Volumes on a bootable VxVM-root disk have the following configuration restrictions: ◆ All volumes on the root disk must be in the rootdg disk group. ◆ The names of the volumes with entries in the LIF LABEL record must be standvol, rootvol, swapvol, and dumpvol (if present). The names of the volumes for other file systems on the root disk are generated by appending vol to the name of their mount point under /.
Rootability volumes, and then loads this configuration into the VxVM kernel driver. At this point, I/O can take place for these temporary root and swap volumes by referencing the device number set up by the rootability code. When the kernel has passed control to the initial user procedure, the VxVM configuration daemon (vxconfigd) is started. vxconfigd reads the configuration of the volumes in the rootdg disk group and loads them into the kernel. The temporary root and swap volumes are then discarded.
The next example uses the same command and additionally specifies the -m option to set up a root mirror on disk c1t1d0:

# /etc/vx/bin/vxcp_lvmroot -m c1t1d0 -R 30 -v -b c0t4d0

In this example, the -b option to vxcp_lvmroot sets c0t4d0 as the primary boot device and c1t1d0 as the alternate boot device.
Creating an LVM Root Disk from a VxVM Root Disk

Note  These procedures should be carried out at init level 1.

In some circumstances, it may be necessary to boot the system from an LVM root disk. If an LVM root disk is no longer available or an existing LVM root disk is out-of-date, you can use the vxres_lvmroot command to create an LVM root disk on a spare physical disk that is not currently under LVM or VxVM control.
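A sketch of the basic invocation, assuming the spare disk is c0t4d0 (substitute the device on your system):

# /etc/vx/bin/vxres_lvmroot -v -b c0t4d0

Here -v requests verbose progress messages and -b makes the new LVM root disk the primary boot device; see the vxres_lvmroot(1M) manual page for the complete option list.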
4. Build a new kernel and reboot the system:

# mk_kernel -v -o /stand/vmunix
# kmupdate
# reboot -r

Removing Disks

You can remove a disk from a system and move it to another system if the disk is failing or has failed. Before removing the disk from the current system, you must:

1. Stop all activity by applications to volumes that are configured on the disk that is to be removed. Unmount file systems and shut down databases that are configured on the volumes.
Removing Disks 3. If there are any volumes on the disk, VxVM asks you whether they should be evacuated from the disk. If you wish to keep the volumes, answer y. Otherwise, answer n. 4. At the following verification prompt, press Return to continue: Requested operation is to remove disk disk01 from group rootdg. Continue with operation? [y,n,q,?] (default: y) The vxdiskadm utility removes the disk from the disk group and displays the following success message: Removal of disk disk01 is complete.
Removing and Replacing Disks Removing a Disk with No Subdisks To remove a disk that contains no subdisks from its disk group, run the vxdiskadm program and select item 2 (Remove a disk) from the main menu, and respond to the prompts as shown in this example to remove disk02: Enter disk name [,list,q,?] disk02 Requested operation is to remove disk disk02 from group rootdg. Continue with operation? [y,n,q,?] (default: y) y Removal of disk disk02 is complete.
Removing and Replacing Disks 3. When you select a disk to remove for replacement, all volumes that are affected by the operation are displayed, for example: The following volumes will lose mirrors as a result of this operation: home src No data on these volumes will be lost. The following volumes are in use, and will be disabled as a result of this operation: mkting Any applications using these volumes will fail future accesses. These volumes will require restoration from backup.
Removing and Replacing Disks 5. If you chose to replace the disk in step 4, press Return at the following prompt to confirm this: Requested operation is to remove disk02 from group rootdg. The removed disk will be replaced with disk device c0t1d0. Continue with operation? [y,n,q,?] (default: y) 6. vxdiskadm displays the following success messages: Replacement of disk disk02 in group rootdg with disk device c0t1d0 completed successfully.
Removing and Replacing Disks 3. The vxdiskadm program displays the device names of the disk devices available for use as replacement disks. Your system may use a device name that differs from the examples. Enter the device name of the disk or press Return to select the default device: The following devices are available as replacements: c0t1d0 c1t1d0 You can choose one of these disks to replace disk02. Choose "none" to initialize another disk to replace disk02.
Enabling a Physical Disk

If you move a disk from one system to another during normal system operation, VxVM does not recognize the disk automatically. The enable disk task enables VxVM to identify the disk and to determine if this disk is part of a disk group. Also, this task re-enables access to a disk that was disabled by either the disk group deport task or the disk device disable (offline) task. To enable a disk, use the following procedure:
Taking a Disk Offline Taking a Disk Offline There are instances when you must take a disk offline. If a disk is corrupted, you must disable the disk before removing it. You must also disable a disk before moving the physical disk device to another location to be connected to another system. Note Taking a disk offline is only useful on systems that support hot-swap removal and insertion of disks without needing to shut down and reboot the system. To take a disk offline, use the vxdiskadm command: 1.
To confirm that the name change took place, use the following command:

# vxdisk list

VxVM returns the following display:

DEVICE       TYPE      DISK      GROUP     STATUS
c0t0d0       simple    disk04    rootdg    online
c1t0d0       simple    disk03    rootdg    online
c1t1d0       simple    -         -         online

Note  By default, VxVM names subdisk objects after the VM disk on which they are located. Renaming a VM disk does not automatically rename the subdisks on that disk.
Displaying Disk Information Displaying Disk Information Before you use a disk, you need to know if it has been initialized and placed under VxVM control. You also need to know if the disk is part of a disk group because you cannot create volumes on a disk that is not part of a disk group. The vxdisk list command displays device names for all recognized disks, the disk names, the disk group names associated with each disk, and the status of each disk.
Displaying Disk Information Displaying Disk Information with vxdiskadm Displaying disk information shows you which disks are initialized, to which disk groups they belong, and the disk status. The list command displays device names for all recognized disks, the disk names, the disk group names associated with each disk, and the status of each disk. To display disk information, use the following procedure: 1. Start the vxdiskadm program, and select list (List disk information) from the main menu. 2.
3  Administering Dynamic Multipathing (DMP)

Introduction

Note  You may need an additional license to use this feature.

The Dynamic Multipathing (DMP) feature of VERITAS Volume Manager (VxVM) provides greater reliability and performance by using path failover and load balancing. This feature is available for multiported disk arrays from various vendors. See the VERITAS Volume Manager Hardware Notes for information about supported disk arrays.
Introduction VxVM uses DMP metanodes to access disk devices connected to the system. For each disk in a supported array, DMP maps one metanode to the set of paths that are connected to the disk. Additionally, DMP associates the appropriate multipathing policy for the disk array with the metanode. For disks in an unsupported array, DMP maps a separate metanode to each path that is connected to a disk.
Figure: Example of Multipathing for a Disk Enclosure in a SAN Environment (the host's controllers c1 and c2 connect through Fibre Channel hubs or switches to disk enclosure enc0; the disk is c1t99d0 or c2t99d0 depending on the path, and DMP maps both paths to the single device enc0_0).

See “Changing the Disk-Naming Scheme” on page 61 for details of how to change the naming scheme that VxVM uses for disk devices.

Note  The operation of DMP relies on the vxdmp device driver. Unlike prior releases, from VxVM 3.1.
Load Balancing

DMP uses the balanced path mechanism to provide load balancing across paths for active/active disk arrays. Load balancing maximizes I/O throughput by using the total bandwidth of all available paths. Sequential I/O starting within a certain range is sent down the same path in order to benefit from disk track caching.
Disabling and Enabling Multipathing for Specific Devices Do you want to continue ? [y,n,q,?] (default: y) 2.
Disabling and Enabling Multipathing for Specific Devices As a result of this operation, the specified paths will be excluded from the view of VxVM. This operation can be reversed using the vxdiskadm command. You can specify a pathname or a pattern at the prompt.
all       - Exclude all disks
aaa:123   - Exclude all disks having VID ‘aaa’ and PID ‘123’
aaa*:123  - Exclude all disks having VID starting with ‘aaa’ and PID ‘123’
aaa:123*  - Exclude all disks having VID ‘aaa’ and PID starting with ‘123’
aaa:*     - Exclude all disks having VID ‘aaa’ and any PID

Enter a VID:PID combination:[,all,list-exclude,q,?]

❖ Select option 4 to define a pathgroup for disks that are not multipathed by VxVM.
Disabling and Enabling Multipathing for Specific Devices You can specify a controller name at the prompt. A controller name is of the form c#, example c3, c11 etc. Enter ’all’ to exclude all paths on all the controllers on the host. To see the list of controllers on the system, type ’list’. Enter a controller name:[,all,list,list-exclude,q,?] ❖ Select option 6 to disable multipathing for specified paths.
Disabling and Enabling Multipathing for Specific Devices You can specify a VendorID:ProductID combination at the prompt. The specification can be as follows: VID:PID where VID stands for Vendor ID PID stands for Product ID (The command vxdmpinq in /etc/vx/diag.d can be used to obtain the Vendor ID and Product ID) Both VID and PID can have an optional ’*’ (asterisk) following them. If a ’*’ follows VID, it will result in the exclusion of all disks returning Vendor ID starting with the specified VID.
Disabling and Enabling Multipathing for Specific Devices The devices selected in this operation will become visible to VxVM and/or will be multipathed by vxdmp again. Only those devices which were previously excluded can be included again. Do you want to continue ? [y,n,q,?] (default: y) 2.
Disabling and Enabling Multipathing for Specific Devices ❖ Select option 2 to make specified paths visible to VxVM. Re-include paths in VxVM Menu: VolumeManager/Disk/IncludeDevices/PATH-VXVM Use this operation to make one or more paths visible to VxVM again. As a result of this operation,the specified paths will become visible to VxVM again. You can specify a pathname or a pattern at the prompt.
Some examples of VID:PID specification are:

all       - Include all disks
aaa:123   - Include all disks having VID ‘aaa’ and PID ‘123’
aaa*:123  - Include all disks having VID starting with ‘aaa’ and PID ‘123’
aaa:123*  - Include all disks having VID ‘aaa’ and PID starting with ‘123’
aaa:*     - Include all disks having VID ‘aaa’ and any PID

Enter a VID:PID combination:[,all,list,list-exclude,q,?]

All disks returning the specified Vendor ID and Product
Disabling and Enabling Multipathing for Specific Devices ❖ Select option 6 to enable multipathing for specified paths. Note This option requires a reboot of the system. Re-include paths in DMP Menu: VolumeManager/Disk/IncludeDevices/PATH-DMP Use this operation to make vxdmp multipath one or more disks again. As a result of this operation, all disks corresponding to the specified paths will be multipathed by vxdmp again. You can specify a pathname or a pattern at the prompt.
Enabling and Disabling Input/Output (I/O) Controllers If a ’*’ follows VID, it will result in the inclusion of all disks returning Vendor ID starting with the specified VID. The same is true for Product ID as well. Both VID and PID should be non NULL. The maximum allowed lengths for Vendor ID and Product ID are 8 and 16 characters respectively.
Displaying DMP Database Information

You can use the vxdmpadm command to list DMP database information and perform other administrative tasks. This command allows you to list all controllers that are connected to disks, and other related information that is stored in the DMP database. You can use this information to locate system hardware, and to help you decide which controllers need to be enabled or disabled.
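For example, a quick sketch of querying the DMP database (the output depends on your hardware, and the controller name c2 is illustrative):

# vxdmpadm listctlr all
# vxdmpadm getsubpaths ctlr=c2

The first command lists all controllers known to DMP; the second lists the paths connected through a particular controller.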
Administering DMP Using vxdmpadm configs: count=1 len=727 logs: count=1 len=110 Defined regions: config priv 000017-000247[000231]:copy=01 offset=000000 disabled config priv 000249-000744[000496]:copy=01 offset=000231 disabled log priv 000745-000854[000110]:copy=01 offset=000000 disabled lockrgn priv 000855-000919[000065]: part=00 offset=000000 Multipathing information: numpaths: 2 c1t0d3 state=enabled type=secondary c4t1d3 state=disabled type=primary In the Multipathing information section of this output
Retrieving Information About a DMP Node

The following command displays the DMP node that controls a particular physical path:

# vxdmpadm getdmpnode nodename=c3t2d1

The physical path can be specified as the nodename attribute, which must be a valid path listed in the /dev/rdsk directory. Use the enclosure attribute with getdmpnode to obtain a list of all DMP nodes for the specified enclosure.
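For example, assuming an enclosure named enc0, the enclosure form might look like this:

# vxdmpadm getdmpnode enclosure=enc0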
Enabling a Controller

Enabling a controller allows a previously disabled host disk controller to accept I/O. This operation succeeds only if the controller is accessible to the host and I/O can be performed on it. When connecting active/passive disk arrays in a non-clustered environment, the enable operation results in failback of I/O to the primary path. The enable operation can also be used to allow I/O to the controllers on a system board that was previously detached.
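As a sketch, disabling and later re-enabling a controller (the controller name c2 is illustrative):

# vxdmpadm disable ctlr=c2
# vxdmpadm enable ctlr=c2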
Administering DMP Using vxdmpadm Use the start restore command to start the restore daemon and specify one of the following policies: ◆ check_all The restore daemon analyzes all paths in the system and revives the paths that are back online, as well as disabling the paths that are inaccessible. The command to start the restore daemon with this policy is: # vxdmpadm start restore policy=check_all [interval=seconds] ◆ check_alternate The restore daemon checks that at least one alternate path is healthy.
DMP in a Clustered Environment Note To change the interval or policy, you must first stop the restore daemon, and then restart it with new attributes. See the vxdmpadm(1M) manual page for more information about DMP restore policies. Stopping the DMP Restore Daemon Use the following command to stop the DMP restore daemon: # vxdmpadm stop restore Note Automatic path failback stops if the restore daemon is stopped.
DMP in a Clustered Environment

For active/active type disk arrays, any disk can be simultaneously accessed through all available physical paths to it. Therefore, in a clustered environment, all hosts do not need to access a disk via the same physical path.

Note  If the vxdctl enable command is run, and DMP identifies a disabled primary path of a shared disk in an active/passive type disk array as physically accessible, it marks this path as enabled.
4  Creating and Administering Disk Groups

Introduction

This chapter describes how to create and manage disk groups. Disk groups are named collections of disks that share a common configuration. Volumes are created within a disk group and are restricted to using disks within that disk group. A system with VERITAS Volume Manager (VxVM) installed has a default disk group configured, rootdg. By default, operations are directed to the rootdg disk group.
Specifying a Disk Group to Commands the free parameter. An example is shown in “Displaying Disk Group Information” on page 109. One way to overcome the problem of running out of free space is to split the affected disk group into two separate disk groups. See “Reorganizing the Contents of Disk Groups” on page 121 for details. Caution Before making any changes to disk groups, use the commands vxprint -hrm and vxdisk list to record the current configuration.
Displaying Disk Group Information Displaying Disk Group Information To display information on existing disk groups, enter the following command: # vxdg list VxVM returns the following listing of current disk groups: NAME rootdg newdg STATE enabled enabled ID 730344554.1025.tweety 731118794.1213.
Displaying Free Space in a Disk Group

Before you add volumes and file systems to your system, make sure you have enough free disk space to meet your needs.
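One way to check, as a sketch, is with the free keyword of vxdg (the disk group name mydg is illustrative):

# vxdg free
# vxdg -g mydg free

The first form reports free space on the disks in the default disk group, rootdg; the second restricts the report to a named disk group.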
Note  VxVM commands create all volumes in the default disk group, rootdg, if no alternative disk group is specified using the -g option (see “Specifying a Disk Group to Commands” on page 108). All commands default to the rootdg disk group unless the disk group can be deduced from other information such as a disk name.

A disk group must have at least one disk associated with it.
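A new disk group can be created from the command line by naming it and giving it at least one initialized disk. As a hypothetical example:

# vxdg init mktdg mktdg01=c1t0d0

This creates the disk group mktdg containing the disk device c1t0d0 under the disk media name mktdg01.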
Removing a Disk from a Disk Group Removing a Disk from a Disk Group A disk that contains no subdisks can be removed from its disk group with this command: # vxdg [-g groupname] rmdisk diskname where the disk group name is only specified for a disk group other than the default, rootdg.
Deporting a Disk Group If you choose y, then all subdisks are moved off the disk, if possible. Some subdisks may not be movable. The most common reasons why a subdisk may not be movable are as follows: ◆ There is not enough space on the remaining disks. ◆ Plexes or striped subdisks cannot be allocated on different disks from existing plexes or striped subdisks in the volume.
Importing a Disk Group disable all access to the disk before removing the disk. Enter name of disk group [,list,q,?] (default: list) newdg 5. At the following prompt, enter y if you intend to remove the disks in this disk group: The requested operation is to disable access to the removable disk group named newdg. This disk group is stored on the following disks: newdg01 on device c1t1d0 You can choose to disable access to (also known as “offline”) these disks.
Importing a Disk Group 2. Select menu item 7 (Enable access to (import) a disk group) from the vxdiskadm main menu. 3. At the following prompt, enter the name of the disk group to import (in this example, newdg): Enable access to (import) a disk group Menu: VolumeManager/Disk/EnableDiskGroup Use this operation to enable access to a disk group. This can be used as the final part of moving a disk group from one system to another.
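Although the dialog above uses the vxdiskadm menus, deporting and importing can also be done directly with vxdg. A sketch, using a hypothetical disk group name:

# vxdg deport newdg
# vxdg import newdg
# vxrecover -g newdg -sb

The vxrecover command restarts (and, if necessary, resynchronizes) the volumes in the imported disk group in the background.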
Renaming a Disk Group Renaming a Disk Group Only one disk group of a given name can exist per system. It is not possible to import or deport a disk group when the target system already has a disk group of the same name. To avoid this problem, VxVM allows you to rename a disk group during import or deport. For example, because every system running VxVM must have a single rootdg default disk group, importing or deporting rootdg across systems is a problem.
Moving Disks between Disk Groups The -t option indicates a temporary import name, and the -C option clears import locks. The -n option specifies an alternate name for the rootdg being imported so that it does not conflict with the existing rootdg. diskgroup is the disk group ID of the disk group being imported (for example, 774226267.1025.tweety). If a reboot or crash occurs at this point, the temporarily imported disk group becomes unimported and requires a reimport. 3.
Moving Disk Groups Between Systems Moving Disk Groups Between Systems An important feature of disk groups is that they can be moved between systems. If all disks in a disk group are moved from one system to another, then the disk group can be used by the second system. You do not have to re-specify the configuration. To move a disk group between systems, use the following procedure: 1.
Moving Disk Groups Between Systems When you move disks from a system that has crashed or failed to detect the group before the disk is moved, the locks stored on the disks remain and must be cleared. The system returns the following error message: vxdg:disk group groupname: import failed: Disk is in use by another host To clear locks on a specific set of devices, use the following command: # vxdisk clearimport devicename ...
Moving Disk Groups Between Systems These operations can also be performed using the vxdiskadm utility. To deport a disk group using vxdiskadm, select menu item 8 (Remove access to (deport) a disk group). To import a disk group, select item 7 (Enable access to (import) a disk group). The vxdiskadm import operation checks for host import locks and prompts to see if you want to clear any that are found. It also starts volumes in the disk group.
Reorganizing the Contents of Disk Groups Reorganizing the Contents of Disk Groups Note You may need an additional license to use this feature. There are several circumstances under which you might want to reorganize the contents of your existing disk groups: ◆ To group volumes or disks differently as the needs of your organization change. For example, you might want to split disk groups to match the boundaries of separate departments, or to join disk groups when departments are merged.
Reorganizing the Contents of Disk Groups The vxdg command provides the following operations for reorganizing disk groups: ◆ move—moves a self-contained set of VxVM objects between imported disk groups. This operation fails if it would remove all the disks from the source disk group. Volume states are preserved across the move. The move operation is illustrated in “Disk Group Move Operation” below.
Reorganizing the Contents of Disk Groups ◆ split—removes a self-contained set of VxVM objects from an imported disk group, and moves them to a newly created target disk group. This operation fails if it would remove all the disks from the source disk group, or if an imported disk group exists with the same name as the target disk group. An existing deported disk group is destroyed if it has the same name as the target disk group (as is the case for the vxdg init command).
Reorganizing the Contents of Disk Groups ◆ join—removes all VxVM objects from an imported disk group and moves them to an imported target disk group. The source disk group is removed when the join is complete. The join operation is illustrated in “Disk Group Join Operation” below.
Reorganizing the Contents of Disk Groups by another host or because it no longer exists, you must recover the disk group manually as described in the section “Recovery from Incomplete Disk Group Moves” in the chapter “Recovery from Hardware Failure” of the VERITAS Volume Manager Troubleshooting Guide. The disk group move, split and join feature has the following limitations: ◆ Disk groups involved in a move, split or join must be version 90 or greater.
Reorganizing the Contents of Disk Groups Listing Objects Potentially Affected by a Move To display the VxVM objects that would be moved for a specified list of objects, use the following command: # vxdg [-o expand] listmove sourcedg targetdg object ...
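For example, assuming a volume named vol1 in disk group dg1 (names used here for illustration only), the following command lists the objects that would accompany a move of vol1 to rootdg:
# vxdg -o expand listmove dg1 rootdg vol1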
Reorganizing the Contents of Disk Groups
Examples of Disk Groups That Can and Cannot be Split
In the first example, the disk group can be split because the DCO plexes are on the same disks as the data plexes and can therefore accompany their volumes. In the second example, the disk group cannot be split because the DCO plexes have been separated from their data plexes and so cannot accompany their volumes. One solution is to relocate the DCO plexes.
Reorganizing the Contents of Disk Groups Moving Objects Between Disk Groups To move a self-contained set of VxVM objects from an imported source disk group to an imported target disk group, use the following command: # vxdg [-o expand] [-o override|verify] move sourcedg targetdg \ object ... The -o expand option ensures that the objects that are actually moved include all other disks containing subdisks that are associated with the specified objects or with objects that they contain.
Reorganizing the Contents of Disk Groups
sd disk01-01   vol1-01    ENABLED  3591   0   -       -   -
pl vol1-02     vol1       ENABLED  3591   -   ACTIVE  -   -
sd disk05-01   vol1-02    ENABLED  3591   0   -       -   -
The following command moves the self-contained set of objects implied by specifying disk disk01 from disk group dg1 to rootdg:
# vxdg -o expand move dg1 rootdg disk01
The moved volumes are initially disabled following the move. Use the following commands to recover and restart the volumes in the target disk group:
# vxrecover -g targetdg -m [volume ...]
Reorganizing the Contents of Disk Groups Splitting Disk Groups To remove a self-contained set of VxVM objects from an imported source disk group to a new target disk group, use the following command: # vxdg [-o expand] [-o override|verify] split sourcedg targetdg \ object ... For a description of the -o expand, -o override, and -o verify options, see “Moving Objects Between Disk Groups” on page 128. See “Splitting Disk Groups” on page 265 for more information on splitting shared disk groups in clusters.
Reorganizing the Contents of Disk Groups
The output from vxprint after the split shows the new disk group, dg1:
# vxprint
Disk group: rootdg
TY NAME        ASSOC      KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg rootdg      rootdg     -        -         -       -       -       -
dm disk01      c0t1d0     -        17678493  -       -       -       -
dm disk02      c1t97d0    -        17678493  -       -       -       -
dm disk03      c1t112d0   -        17678493  -       -       -       -
dm disk04      c1t114d0   -        17678493  -       -       -       -
dm disk05      c1t96d0    -        17678493  -       -       -       -
dm disk06      c1t98d0    -        17678493  -       -       -       -
v  vol1        fsgen      ENABLED  2048      -       ACTIVE  -       -
pl vol1-01     vol1       ENABLED  3591      -       ACTIVE  -       -
sd disk01-01   vol1-01    ENABLED  3591      0       -       -       -
pl vol1-02     vol1       ENABLED  3591      -       ACTIVE  -       -
sd disk05-01   vol1-02    ENABLED  3591      0       -       -       -
Reorganizing the Contents of Disk Groups
dm disk03      c1t112d0   -        17678493  -       -       -       -
dm disk04      c1t114d0   -        17678493  -       -       -       -
dm disk07      c1t99d0    -        17678493  -       -       -       -
dm disk08      c1t100d0   -        17678493  -       -       -       -

Disk group: dg1
TY NAME        ASSOC      KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
dg dg1         dg1        -        -         -       -       -       -
dm disk05      c1t96d0    -        17678493  -       -       -       -
dm disk06      c1t98d0    -        17678493  -       -       -       -
v  vol1        fsgen      ENABLED  2048      -       ACTIVE  -       -
pl vol1-01     vol1       ENABLED  3591      -       ACTIVE  -       -
sd disk01-01   vol1-01    ENABLED  3591      0       -       -       -
pl vol1-02     vol1       ENABLED  3591      -       ACTIVE  -       -
sd disk05-01   vol1-02    ENABLED  3591      0       -       -       -
The following command ...
Disabling a Disk Group Disabling a Disk Group To disable a disk group, unmount and stop any volumes in the disk group, and then use the following command to deport it: # vxdg deport diskgroup Deporting a disk group does not actually remove the disk group. It disables use of the disk group by the system. Disks in a deported disk group can be reused, reinitialized, added to other disk groups, or imported for use on other systems. Use the vxdg import command to re-enable access to the disk group.
Upgrading a Disk Group Until completion of the upgrade, the disk group can be used “as is” provided there is no attempt to use the features of the current version.
Upgrading a Disk Group
Features Supported by Disk Group Versions

Disk Group Version   New Features Supported                                  Previous Version Features Supported
90                   Cluster Support for Oracle Resilvering,                 20, 30, 40, 50, 60, 70, 80
                     Disk Group Move, Split and Join,
                     Device Discovery Layer (DDL),
                     Layered Volume ...
80                   ...                                                     ...
70                   Non-Persistent FastResync, ...                          ...
60                   ...                                                     ...
50                   SRVM (now known as VERITAS Volume Replicator or VVR)    20, 30, 40
40                   Hot-Relocation                                          20, 30
30                   VxSmartSync Recovery Accelerator                        20
20                   Dirty Region Logging                                    -
It may sometimes be necessary to create a disk group for an older version. The default disk group version for a disk group created on a system running VERITAS Volume Manager 3.5 is 90. Such a disk group would not be importable on a system running VERITAS Volume Manager 2.3, which only supports up to version 40. Therefore, to create a disk group on a system running VERITAS Volume Manager 3.5 that can be imported by a system running VERITAS Volume Manager 2.3, the disk group must be created with a version of 40 or less.
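For example (the disk group and disk names are illustrative), the -T option to vxdg init can be used to create a disk group with an older version number:
# vxdg -T 40 init newdg newdg01=c0t3d0
A disk group created in this way can then be imported by systems that support only the specified version.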
Managing the Configuration Daemon in VxVM If your system is configured to use Dynamic Multipathing (DMP), you can also use vxdctl to: ◆ reconfigure the DMP database to include disk devices newly attached to, or removed from the system ◆ create DMP device nodes in the directories /dev/vx/dmp and /dev/vx/rdmp ◆ update the DMP database with changes in path type for active/passive disk arrays.
5 Creating and Administering Subdisks Introduction This chapter describes how to create and maintain subdisks. Subdisks are the low-level building blocks in a VERITAS Volume Manager (VxVM) configuration that are required to create plexes and volumes. Note Most VxVM commands require superuser or equivalent privileges. Creating Subdisks Note Subdisks are created automatically if you use the vxassist command or the VERITAS Enterprise Administrator (VEA) to create volumes.
Displaying Subdisk Information The vxprint command displays information about VxVM objects. To display general information for all subdisks, use this command: # vxprint -st The -s option specifies information about subdisks. The -t option prints a single-line output record that depends on the type of object being listed.
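For example (the subdisk names shown are illustrative), complete information about a particular subdisk can be displayed, or a subdisk can be moved onto another subdisk that has been created to receive its contents, with commands of the following form:
# vxprint -l disk02-01
# vxsd mv disk02-01 disk05-01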
Splitting Subdisks For the subdisk move to work correctly, the following conditions must be met: ◆ The subdisks involved must be the same size. ◆ The subdisk being moved must be part of an active plex on an active (ENABLED) volume. ◆ The new subdisk must not be associated with any other plex. See “Configuring Hot-Relocation to Use Only Spare Disks” on page 238 for information about manually relocating subdisks after hot-relocation.
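For example (the subdisk names and size are illustrative), a subdisk can be split into two subdisks, and contiguous subdisks can later be rejoined, with commands of the following general form; the -s option specifies the size of the first of the two subdisks created by the split:
# vxsd -s 1000 split disk03-02 disk03-02 disk03-03
# vxsd join subdisk1 subdisk2 ... new_subdisk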
Associating Subdisks with Plexes For example, to join the contiguous subdisks disk03-02, disk03-03, disk03-04 and disk03-05 as subdisk disk03-02, use the following command: # vxsd join disk03-02 disk03-03 disk03-04 disk03-05 disk03-02 Associating Subdisks with Plexes Associating a subdisk with a plex places the amount of disk space defined by the subdisk at a specific offset within the plex. The entire area that the subdisk fills must not be occupied by any portion of another subdisk.
Associating Log Subdisks Note The subdisk must be exactly the right size. VxVM does not allow the space defined for two subdisks to overlap within a plex. For striped or RAID-5 plexes, use the following command to specify a column number and column offset for the subdisk to be added: # vxsd -l column_#/offset assoc plex subdisk ... If only one number is specified with the -l option for striped plexes, the number is interpreted as a column number and the subdisk is associated at the end of the column.
Dissociating Subdisks from Plexes For example, to associate a subdisk named disk02-01 with a plex named vol01-02, which is already associated with volume vol01, use the following command: # vxsd aslog vol01-02 disk02-01 You can also add a log subdisk to an existing volume with the following command: # vxassist addlog volume disk This command automatically creates a log subdisk within a log plex on the specified disk for the specified volume.
Changing Subdisk Attributes Changing Subdisk Attributes Caution Change subdisk attributes with extreme care. The vxedit command changes attributes of subdisks and other VxVM objects. To change subdisk attributes, use the following command: # vxedit set attribute=value ... subdisk ...
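For example, assuming a subdisk named disk02-01 (the name and comment text are illustrative), a comment can be set on the subdisk as follows:
# vxedit set comment="subdisk for database log" disk02-01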
6 Creating and Administering Plexes Introduction This chapter describes how to create and maintain plexes. Plexes are logical groupings of subdisks that create an area of disk space independent of physical disk size or other restrictions. Replication (mirroring) of disk data is set up by creating multiple data plexes for a single volume. Each data plex in a mirrored volume contains an identical copy of the volume data.
Creating a Striped Plex Creating a Striped Plex To create a striped plex, you must specify additional attributes. For example, to create a striped plex named pl-01 with a stripe width of 32 sectors and 2 columns, use the following command: # vxmake plex pl-01 layout=stripe stwidth=32 ncolumn=2 \ sd=disk01-01,disk02-01 To use a plex to build a volume, you must associate the plex with the volume. For more information, see the section, “Attaching and Associating Plexes” on page 153.
Displaying Plex Information VxVM utilities use plex states to: ◆ indicate whether volume contents have been initialized to a known state ◆ determine if a plex contains a valid copy (mirror) of the volume contents ◆ track whether a plex was in active use at the time of a system failure ◆ monitor operations on plexes This section explains the individual plex states in detail.
Displaying Plex Information EMPTY Plex State Volume creation sets all plexes associated with the volume to the EMPTY state to indicate that the plex is not yet initialized. IOFAIL Plex State The IOFAIL plex state is associated with persistent state logging. When the vxconfigd daemon detects an uncorrectable I/O failure on an ACTIVE plex, it places the plex in the IOFAIL state to exclude it from the recovery selection process at volume start time.
Displaying Plex Information SNAPDONE Plex State The SNAPDONE plex state indicates that a snapshot plex is ready for a snapshot to be taken using vxassist snapshot. SNAPTMP Plex State The SNAPTMP plex state is used during a vxassist snapstart operation when a snapshot is being prepared on a volume. STALE Plex State If there is a possibility that a plex does not have the complete and current volume contents, that plex is placed in the STALE state.
Displaying Plex Information TEMPRMSD Plex State The TEMPRMSD plex state is used by vxassist when attaching new data plexes to a volume. If the synchronization operation does not complete, the plex and its subdisks are removed. Plex Condition Flags vxprint may also display one of the following condition flags in the STATE field: IOFAIL Plex Condition The plex was detached as a result of an I/O failure detected during normal volume I/O.
Attaching and Associating Plexes Plex Kernel States The plex kernel state indicates the accessibility of the plex to the volume driver which monitors it. Note No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all plexes are enabled. The following plex kernel states are defined: DETACHED Plex Kernel State Maintenance is being performed on the plex. Any write request to the volume is not reflected in the plex.
Taking Plexes Offline For example, to create a mirrored, fsgen-type volume named home, and to associate two existing plexes named home-1 and home-2 with home, use the following command: # vxmake -U fsgen vol home plex=home-1,home-2 Note You can also use the command vxassist mirror volume to add a data plex as a mirror to an existing volume. Taking Plexes Offline Once a volume has been created and placed online (ENABLED), VxVM can temporarily disconnect plexes from the volume.
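For example (the plex name vol01-02 is illustrative), commands of the following form take a plex offline with vxmend, or temporarily detach it with vxplex:
# vxmend off vol01-02
# vxplex det vol01-02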
Reattaching Plexes This command temporarily detaches the plex, but maintains the association between the plex and its volume. However, the plex is not used for I/O. A plex detached with the preceding command is recovered at system reboot. The plex state is set to STALE, so that if a vxvol start command is run on the appropriate volume (for example, on system reboot), the contents of the plex is recovered and made ACTIVE.
Moving Plexes Moving Plexes Moving a plex copies the data content from the original plex onto a new plex. To move a plex, use the following command: # vxplex mv original_plex new_plex For a move task to be successful, the following criteria must be met: ◆ The old plex must be an active part of an active (ENABLED) volume. ◆ The new plex must be at least the same size or larger than the old plex. ◆ The new plex must not be associated with another volume.
Dissociating and Removing Plexes Dissociating and Removing Plexes When a plex is no longer needed, you can dissociate it from its volume and remove it as an object from VxVM. You might want to remove a plex for the following reasons: ◆ to provide free disk space ◆ to reduce the number of mirrors in a volume so you can increase the length of another mirror and its associated volume.
Changing Plex Attributes Changing Plex Attributes Caution Change plex attributes with extreme care. The vxedit command changes the attributes of plexes and other volume Manager objects. To change plex attributes, use the following command: # vxedit set attribute=value ... plex Plex fields that can be changed using the vxedit command include: ◆ name ◆ putiln ◆ tutiln ◆ comment The putiln field attributes are maintained on reboot; tutiln fields are temporary and are not retained on reboot.
7 Creating Volumes Introduction This chapter describes how to create volumes in VERITAS Volume Manager (VxVM). Volumes are logical devices that appear as physical disk partition devices to data management systems. Volumes enhance recovery from hardware failure, data availability, performance, and storage configuration. Volumes are created to take advantage of the VxVM concept of virtual disks. A file system can be placed on the volume to organize the disk space with files and directories.
Types of Volume Layouts ◆ Mirrored—A volume with multiple data plexes that duplicate the information contained in a volume. Although a volume can have a single data plex, at least two are required for true mirroring to provide redundancy of data. For the redundancy to be useful, each of these data plexes should contain disk space from different disks. For more information, see “Mirroring (RAID-1)” on page 21.
Creating a Volume Creating a Volume You can create volumes using either an advanced approach or an assisted approach. Each method uses different tools although you may switch from one set to another at will. Note Most VxVM commands require superuser or equivalent privileges. Advanced Approach The advanced approach consists of a number of commands that typically require you to specify detailed input.
Using vxassist Assisted operations are performed primarily through the vxassist command or the VERITAS Enterprise Administrator (VEA). vxassist and the VEA create the required plexes and subdisks using only the basic attributes of the desired volume as input. Additionally, they can modify existing volumes while automatically modifying any underlying or associated objects. Both vxassist and the VEA use default values for many volume attributes, unless you provide specific values.
Using vxassist For tasks requiring new disk space, vxassist seeks out available disk space and allocates it in the configuration that conforms to the layout specifications and that offers the best use of free space. The vxassist command takes this form: # vxassist [options] keyword volume [attributes...] where keyword selects the task to perform. The first argument after a vxassist keyword, volume, is a volume name, which is followed by a set of desired volume attributes.
Using vxassist The format of entries in a defaults file is a list of attribute-value pairs separated by new lines. These attribute-value pairs are the same as those specified as options on the vxassist command line. Refer to the vxassist(1M) manual page for details.
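As an illustration only (these are example entries, not the shipped defaults), a vxassist defaults file might contain lines such as the following:
# create unmirrored, unstriped volumes by default
layout=nomirror,nostripe
# create two mirrors when mirroring is requested
nmirror=2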
Discovering the Maximum Size of a Volume Discovering the Maximum Size of a Volume To find out how large a volume you can create within a disk group, use the following form of the vxassist command: # vxassist [-g diskgroup] maxsize layout=layout [attributes] For example, to discover the maximum size RAID-5 volume with 5 columns and 2 logs that you can create within the disk group dgrp, enter the following command: # vxassist -g dgrp maxsize layout=raid5 nlog=2 You can use storage attributes if you want to
Creating a Volume on Specific Disks Creating a Volume on Specific Disks VxVM automatically selects the disks on which each volume resides, unless you specify otherwise. If you want a volume to be created on specific disks, you must designate those disks to VxVM. More than one disk can be specified. To create a volume on a specific disk or disks, use the following command: # vxassist [-b] [-g diskgroup] make volume length [layout=layout] \ diskname ...
Creating a Volume on Specific Disks See the vxassist(1M) manual page for more information about using storage attributes. It is also possible to control how volumes are laid out on the specified storage as described in the next section “Specifying Ordered Allocation of Storage to Volumes.” Specifying Ordered Allocation of Storage to Volumes If you specify the -o ordered option to vxassist when creating a volume, any storage that you also specify is allocated in the following order: 1. Concatenate disks.
Creating a Volume on Specific Disks This command mirrors column 1 across disk01 and disk03, and column 2 across disk02 and disk04 as illustrated in “Example of using Ordered Allocation to Create a Striped-Mirror Volume” on page 168.
Creating a Volume on Specific Disks Other storage specification classes for controllers, enclosures, targets and trays can be used with ordered allocation.
Creating a Mirrored Volume Creating a Mirrored Volume A mirrored volume provides data redundancy by containing more than one copy of its data. Each copy (or mirror) is stored on different disks from the original copy of the volume and from other mirrors. Mirroring a volume ensures that its data is not lost if a disk in one of its component mirrors fails. Note A mirrored volume requires space to be available on at least as many disks in the disk group as the number of mirrors in the volume.
Creating a Mirrored Volume Creating a Concatenated-Mirror Volume Note You may need an additional license to use this feature. A concatenated-mirror volume is an example of a layered volume which concatenates several underlying mirror volumes. To create a concatenated-mirror volume, use the following command: # vxassist [-b] [-g diskgroup] make volume length \ layout=concat-mirror [nmirror=number] Note Specify the -b option if you want to make the volume immediately available for use.
Creating a Mirrored Volume To create a volume with an attached DCO object and DCO volume, use the following procedure: 1. Ensure that the disk group has been upgraded to at least version 90. Use the following command to check the version of a disk group: # vxdg list diskgroup To upgrade a disk group to the latest version, use the following command: # vxdg upgrade diskgroup For more information, see “Upgrading a Disk Group” on page 133. 2.
Creating a Striped Volume For a volume that will be written to sequentially, such as a database log volume, use the following command to specify that sequential DRL is to be used: # vxassist [-g diskgroup] make volume length layout=mirror \ logtype=drlseq To add DRL logging to a volume that has DCO enabled, or to change the number of DRL logs, follow the procedure that is described in “Adding DRL Logging to a Mirrored Volume” on page 197.
Creating a Striped Volume This creates a striped volume with the default stripe unit size (64 kilobytes) and the default number of stripes (2). You can specify the disks on which the volumes are to be created by including the disk names on the command line.
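For example (the volume size and disk names are illustrative), a 3-column striped volume can be created on three specific disks as follows:
# vxassist -b make stripevol 10g layout=stripe ncol=3 disk03 disk04 disk05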
Mirroring across Targets, Controllers or Enclosures Creating a Striped-Mirror Volume A striped-mirror volume is an example of a layered volume which stripes several underlying mirror volumes. Note A striped-mirror volume requires space to be available on at least as many disks in the disk group as the number of columns multiplied by the number of stripes in the volume.
Creating a RAID-5 Volume The attribute mirror=ctlr specifies that disks in one mirror should not be on the same controller as disks in other mirrors within the same volume. Note Both paths of an active/passive array are not considered to be on different controllers when mirroring across controllers.
Creating a RAID-5 Volume A RAID-5 volume contains a RAID-5 data plex that consists of three or more subdisks located on three or more physical disks. Only one RAID-5 data plex can exist per volume. A RAID-5 volume can also contain one or more RAID-5 log plexes, which are used to log information about data and parity being written to the volume. For more information on RAID-5 volumes, see “RAID-5 (Striping with Parity)” on page 25.
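A RAID-5 volume can generally be created with a command of the following form (the optional ncol and nlog attributes set the number of columns and the number of RAID-5 logs):
# vxassist [-b] [-g diskgroup] make volume length layout=raid5 [ncol=N] [nlog=N]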
Creating a Volume Using vxmake For example, the following command creates a 3-column RAID-5 volume with the default stripe unit size on disks disk04, disk05 and disk06. It also creates two RAID-5 logs on disks disk07 and disk08. # vxassist -b make volraid 10g layout=raid5 ncol=3 nlog=2 \ logdisk=disk07,disk08 disk04 disk05 disk06 Note The number of logs must equal the number of disks specified to logdisk.
Creating a Volume Using vxmake If each column in a RAID-5 plex is to be created from multiple subdisks which may span several physical disks, you can specify to which column each subdisk should be added.
Creating a Volume Using vxmake Creating a Volume Using a vxmake Description File You can use the vxmake command to add a new volume, plex or subdisk to the set of objects managed by VxVM. vxmake adds a record for each new object to the VxVM configuration database. You can create records either by specifying parameters to vxmake on the command line, or by using a file which contains plain-text descriptions of the objects. The file can also contain commands for performing a list of tasks.
Initializing and Starting a Volume Initializing and Starting a Volume A volume must be initialized if it was created by the vxmake command and has not yet been initialized, or if the volume has been set to an uninitialized state. Note If you create a volume using the vxassist command, vxassist initializes and starts the volume automatically unless you specify the attribute init=none.
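For example (the volume name is a placeholder), a volume created with vxmake can be started, or initialized by writing zeroes over its contents, with commands of the following form:
# vxvol start volume
# vxvol init zero volume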
Accessing a Volume This command writes zeroes to the entire length of the volume and to any log plexes. It then makes the volume active. You can also zero out a volume by specifying the attribute init=zero to vxassist, as shown in this example: # vxassist make volume length layout=raid5 init=zero Note You cannot use the -b option to make this operation a background task.
8 Administering Volumes Introduction This chapter describes how to perform common maintenance tasks on volumes in VERITAS Volume Manager (VxVM). This includes displaying volume information, monitoring tasks, adding and removing logs, resizing volumes, removing mirrors, removing volumes, backing up volumes using mirrors and snapshots, and changing the layout of volumes without taking them offline. Note Most VxVM commands require superuser or equivalent privileges.
Displaying Volume Information
v  voldef       fsgen        ENABLED  ACTIVE   20480    SELECT    -
pl voldef-01    voldef       ENABLED  ACTIVE   20480    CONCAT    -        RW
sd disk12-02    voldef-01    disk12   2288     20480    0         c1t1d0   ENA
where dg is a disk group, dm is a disk, v is a volume, pl is a plex, and sd is a subdisk. The top few lines indicate the headers that match each type of output line that follows. Each volume is listed along with its associated plexes and subdisks.
Displaying Volume Information CLEAN Volume State The volume is not started (kernel state is DISABLED) and its plexes are synchronized. For a RAID-5 volume, its plex stripes are consistent and its parity is good. EMPTY Volume State The volume contents are not initialized. The kernel state is always DISABLED when the volume is EMPTY. NEEDSYNC Volume State The volume requires a resynchronization operation the next time it is started. For a RAID-5 volume, a parity resynchronization operation is required.
Displaying Volume Information Volume Kernel States The volume kernel state indicates the accessibility of the volume. The volume kernel state allows a volume to have an offline (DISABLED), maintenance (DETACHED), or online (ENABLED) mode of operation. Note No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all volumes are enabled.
Monitoring and Controlling Tasks Monitoring and Controlling Tasks Note VxVM supports this feature for private disk groups, but not for shareable disk groups in a cluster environment. The VxVM task monitor tracks the progress of system recovery by monitoring task creation, maintenance, and completion. The task monitor allows you to monitor task progress and to modify characteristics of tasks, such as pausing and recovery rate (for example, to reduce the impact on system performance).
Monitoring and Controlling Tasks VxVM tasks represent long-term operations in progress on the system. Every task gives information on the time the operation started, the size and progress of the operation, and the state and rate of progress of the operation. The administrator can change the state of a task, giving coarse-grained control over the progress of the operation. For those operations that support it, the rate of progress of the task can be changed, giving more fine-grained control over the task.
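For example (the task ID 167 is illustrative; a task tag can also be used), a running task might be paused and later resumed with commands such as:
# vxtask pause 167
# vxtask resume 167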
Stopping a Volume vxtask Usage To list all tasks currently running on the system, use the following command: # vxtask list To print tasks hierarchically, with child tasks following the parent tasks, use the -h option, as follows: # vxtask -h list To trace all tasks in the disk group foodg that are currently paused, as well as any tasks with the tag sysstart, use the following command: # vxtask -G foodg -p -i sysstart list The vxtask -p list command lists all paused tasks; use vxtask resume to continue execution of a paused task.
Starting a Volume Putting a Volume in Maintenance Mode If all mirrors of a volume become STALE, you can place the volume in maintenance mode. Then you can view the plexes while the volume is DETACHED and determine which plex to use for reviving the others. To place a volume in maintenance mode, use the following command: # vxvol maint volume To assist in choosing the revival source plex, use vxprint to list the stopped volume and its plexes.
Adding a Mirror to a Volume Adding a Mirror to a Volume A mirror can be added to an existing volume with the vxassist command, as follows: # vxassist [-b] [-g diskgroup] mirror volume Note If specified, the -b option makes synchronizing the new mirror a background task.
Adding a Mirror to a Volume To mirror volumes on a disk, make sure that the target disk has an equal or greater amount of space as the originating disk and then do the following: 1. Select menu item 5 (Mirror volumes on a disk) from the vxdiskadm main menu. 2. At the following prompt, enter the disk name of the disk that you wish to mirror: Mirror volumes on a disk Menu: VolumeManager/Disk/Mirror This operation can be used to mirror volumes on a disk.
Removing a Mirror Removing a Mirror When a mirror is no longer needed, you can remove it to free up disk space. Note The last valid plex associated with a volume cannot be removed. To remove a mirror from a volume, use the following command: # vxassist remove mirror volume Additionally, you can use storage attributes to specify the storage to be removed.
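For example, assuming a volume vol01 with a mirror on disk02 (names are illustrative), the mirror on that disk can be removed as follows:
# vxassist remove mirror vol01 !disk02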
Adding a DCO and DCO Volume Note You may need an additional license to use the Persistent FastResync feature. Even if you do not have a license, you can configure a DCO object and DCO volume so that snap objects are associated with the original and snapshot volumes. For more information about snap objects, see “How Persistent FastResync Works with Snapshots” on page 45.
Adding a DCO and DCO Volume To view the details of the DCO object and DCO volume that are associated with a volume, use the vxprint command.
Removing a DCO and DCO Volume Specifying Storage for DCO Plexes If the disks that contain volumes and their snapshots are to be moved or split into different disk groups, the disks that contain their respective DCO plexes must be able to accompany them. By default, VxVM attempts to place the DCO plexes on the same disks as the data plexes of the parent volume. However, this may be impossible if there is insufficient space available on those disks.
Reattaching a DCO and DCO Volume This form of the command dissociates the DCO object from the volume but does not destroy it or the DCO volume. If the -o rm option is specified, the DCO object, DCO volume and its plexes, and any snap objects are also removed. Note Dissociating a DCO and DCO volume disables Persistent FastResync on the volume. A full resynchronization of any remaining snapshots is required when they are snapped back. For more information, see the vxassist(1M) and vxdco(1M) manual pages.
Removing a DRL Log For a volume that will be written to sequentially, such as a database log volume, include the logtype=drlseq attribute to specify that sequential DRL is to be used: # vxassist addlog volume logtype=drlseq [nlog=n] Once created, the plex containing a log subdisk can be treated as a regular plex. Data subdisks can be added to the log plex. The log plex and log subdisk can be removed using the same procedures as are used to remove ordinary plexes and subdisks.
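A DRL log can generally be removed with a command of the following form (if given, the nlog attribute specifies how many logs should remain after the removal):
# vxassist remove log volume [nlog=n]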
Removing a RAID-5 Log Adding a RAID-5 Log using vxplex As an alternative to using vxassist, you can add a RAID-5 log using the vxplex command. For example, to attach a RAID-5 log plex, r5log, to a RAID-5 volume, r5vol, use the following command: # vxplex att r5vol r5log The attach operation can only proceed if the size of the new log is large enough to hold all of the data on the stripe. If the RAID-5 volume already contains logs, the new log length is the minimum of each individual log length.
Resizing a Volume Note When removing the log leaves the volume with less than two valid logs, a warning is printed and the operation is not allowed to continue. The operation may be forced by additionally specifying the -f option to vxplex or vxassist. Resizing a Volume Resizing a volume changes the volume size. For example, you might need to increase the length of a volume if it is no longer large enough for the amount of data to be stored on it.
Resizing a Volume Resizing Volumes using vxresize Use the vxresize command to resize a volume containing a file system. Although other commands can be used to resize volumes containing file systems, the vxresize command offers the advantage of automatically resizing certain types of file system as well as the volume.
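For example (the disk group, volume name, and file system type are illustrative), a volume containing a VxFS file system might be grown to 10 gigabytes as a background task with a command such as:
# vxresize -b -F vxfs -g mydg homevol 10g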
Resizing a Volume Resizing Volumes using vxassist The following modifiers are used with the vxassist command to resize a volume: ◆ growto—increase volume to a specified length ◆ growby—increase volume by a specified amount ◆ shrinkto—reduce volume to a specified length ◆ shrinkby—reduce volume by a specified amount Extending to a Given Length To extend a volume to a specific length, use the following command: # vxassist [-b] growto volume length Note If specified, the -b option makes growing the v
Resizing a Volume # vxassist shrinkto volume length For example, to shrink volcat to 1300 sectors, use the following command: # vxassist shrinkto volcat 1300 Caution Do not shrink the volume below the current size of the file system or database using the volume. The vxassist shrinkto command can be safely used on empty volumes.
Changing the Read Policy for Mirrored Volumes Note Sparse log plexes are not valid. They must map the entire length of the log. If increasing the log length would make any of the logs invalid, the operation is not allowed. Also, if the volume is not active and is dirty (for example, if it has not been shut down cleanly), the log length cannot be changed.
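For example (volume and plex names are placeholders), the read policy of a mirrored volume can be set to round, or to prefer a particular plex, with commands of the following form:
# vxvol rdpol round volume
# vxvol rdpol prefer volume preferred_plex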
Removing a Volume To set the read policy to select, use the following command: # vxvol rdpol select volume For more information about how read policies affect performance, see “Volume Read Policies” on page 284. Removing a Volume Once a volume is no longer necessary (it is inactive and its contents have been archived, for example), it is possible to remove the volume and free up the disk space for other uses. Before removing a volume, use the following procedure to stop all activity on the volume: 1.
Moving Volumes from a VM Disk Moving Volumes from a VM Disk Before you disable or remove a disk, you can move the data from that disk to other disks on the system. To do this, ensure that the target disks have sufficient space, and then use the following procedure: 1. Select menu item 6 (Move volumes from a disk) from the vxdiskadm main menu. 2.
Enabling FastResync on a Volume As the volumes are moved from the disk, the vxdiskadm program displays the status of the operation: Move volume voltest ... Move volume voltest-bk00 ... When the volumes have all been moved, the vxdiskadm program displays the following success message: Evacuation of disk disk01 is complete. 3.
Enabling FastResync on a Volume Note It is not possible to configure both Persistent and Non-Persistent FastResync on a volume. Persistent FastResync is used if a DCO object and a DCO volume are associated with the volume. Otherwise, Non-Persistent FastResync is used.
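FastResync can be turned on for an existing volume with a command of the following form (this parallels the command used to disable it, shown in the next section):
# vxvol [-g diskgroup] set fastresync=on volume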
Disabling FastResync Disabling FastResync Use the vxvol command to turn off Persistent or Non-Persistent FastResync for an existing volume, as shown here: # vxvol [-g diskgroup] set fastresync=off volume Turning FastResync off releases all tracking maps for the specified volume. All subsequent reattaches will not use the FastResync facility, but perform a full resynchronization of the volume. This occurs even if FastResync is later turned on.
Enabling Persistent FastResync on Existing Volumes with Associated Snapshots Note The DCO plexes require persistent storage space on disk to be available for the FastResync maps. To make room for the DCO plexes, you may need to add extra disks to the disk group, or reconfigure existing volumes to free up space in the disk group. Another way to add disk space is to use the disk group move feature to bring in spare disks from a different disk group.
Enabling Persistent FastResync on Existing Volumes with Associated Snapshots 4. Use the following command on the original volume and on each of its snapshot volumes (if any) to add a DCO and DCO volume. # vxassist [-g diskgroup] addlog volume logtype=dco \ dcolen=loglen ndcomirror=number [storage_attribute ...] Set the value of ndcomirror equal to the number of data and snapshot plexes in the volume. The ndcomirror attribute specifies the number of DCO plexes that are created in the DCO volume.
Backing up Volumes Online For each plex in each volume, use the following command to set the plex’s dco_plex_rid attribute to refer to the corresponding plex in the DCO volume.
Backing up Volumes Online You can perform backup of a mirrored volume on an active system with these steps: 1. Dissociate one of the volume’s data plexes (vol01-01, for example) using the following command: # vxplex [-g diskgroup] dis plex Optionally, stop user activity during this time to improve the consistency of the backup. 2. Create a temporary volume, tempvol, that uses the dissociated plex, using the following command: # vxmake -g diskgroup -U gen vol tempvol plex=plex 3.
Backing up Volumes Online Backing Up Volumes Online Using Snapshots Note You can use the procedure described in this section to create a snapshot of a RAID-5 volume and to back it up. Note You may need an additional license to use this feature. VxVM provides snapshot images of volume devices using vxassist and other commands.
Backing up Volumes Online The online backup procedure is completed by running the vxassist snapshot command on a volume with a SNAPDONE mirror. This task detaches the finished snapshot (which becomes a normal mirror), creates a new normal volume and attaches the snapshot mirror to the snapshot volume. The snapshot then becomes a normal, functioning mirror and the state of the snapshot is set to ACTIVE.
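1. Create a snapshot mirror on the volume using the vxassist snapstart operation; a sketch of the command form (the nmirror attribute is optional and its value is illustrative) is:
# vxassist [-g diskgroup] snapstart [nmirror=N] volume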
Backing up Volumes Online 2. Choose a suitable time to create a snapshot. If possible, plan to take the snapshot at a time when users are accessing the volume as little as possible. 3. Create a snapshot volume using the following command: # vxassist [-g diskgroup] snapshot [nmirror=N] volume snapshot If required, use the nmirror attribute to specify the number of mirrors in the snapshot volume.
Backing up Volumes Online Note Dissociating or removing the snapshot volume loses the advantage of fast resynchronization if FastResync was enabled. If there are no further snapshot plexes available, any subsequent snapshots that you take require another complete copy of the original volume to be made. Converting a Plex into a Snapshot Plex In some circumstances, you may find it more convenient to convert an existing plex in a volume into a snapshot plex rather than running vxassist snapstart.
Backing up Volumes Online Backing Up Multiple Volumes Using Snapshots To make it easier to create snapshots of several volumes at the same time, the snapshot option accepts more than one volume name as its argument, for example: # vxassist [-g diskgroup] snapshot volume1 volume2 ... By default, each replica volume is named SNAPnumber-volume, where number is a unique serial number, and volume is the name of the volume for which the snapshot is being taken.
Backing up Volumes Online To merge a specified number of plexes from the snapshot volume with the original volume, use the following command: # vxassist snapback nmirror=number snapshot Here the nmirror attribute specifies the number of mirrors in the snapshot volume that are to be re-attached. Once the snapshot plexes have been reattached and their data resynchronized, they are ready to be used in another snapshot operation.
Backing up Volumes Online For example, if myvol1 and SNAP-myvol1 are in separate disk groups mydg1 and mydg2 respectively, the following commands stop the tracking on SNAP-myvol1 with respect to myvol1 and on myvol1 with respect to SNAP-myvol1: # vxassist -g mydg2 snapclear SNAP-myvol1 myvol1_snp # vxassist -g mydg1 snapclear myvol1 SNAP-myvol1_snp Displaying Snapshot Information (snapprint) The vxassist snapprint command displays the associations between the original volumes and their respective replicas
Performing Online Relayout that no snap objects are associated with volume v2 or with its snapshot volume SNAP-v2. See “How Persistent FastResync Works with Snapshots” on page 45 for more information about snap objects. If a volume is specified, the snapprint command displays an error message if no FastResync maps are enabled for that volume. Performing Online Relayout Note You may need an additional license to use this feature.
Performing Online Relayout
ncol=-number—specifies the number of columns to remove
stripeunit=size—specifies the stripe width
See the vxassist(1M) manual page for more information about relayout options.
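For example (the volume name and attribute values are illustrative), an existing volume might be converted to a 4-column striped layout as follows:
# vxassist relayout vol3 layout=stripe ncol=4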
Performing Online Relayout Viewing the Status of a Relayout Online relayout operations take some time to perform. You can use the vxrelayout command to obtain information about the status of a relayout operation. For example, the command: # vxrelayout status vol04 might display output similar to this: STRIPED, columns=5, stwidth=128--> STRIPED, columns=6, stwidth=128 Relayout running, 68.58% completed.
Converting Between Layered and Non-Layered Volumes The -o bg option restarts the relayout in the background. You can also specify the slow and iosize option modifiers to control the speed of the relayout and the size of each region that is copied.
Converting Between Layered and Non-Layered Volumes
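The vxassist convert operation is generally used for such conversions. For example (the volume name and target layout are illustrative), a mirrored-stripe volume might be converted to a striped-mirror layout as follows:
# vxassist convert vol1 layout=stripe-mirror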
9 Administering Hot-Relocation Introduction If a volume has a disk I/O failure (for example, because the disk has an uncorrectable error), VERITAS Volume Manager (VxVM) can detach the plex involved in the failure. I/O stops on that plex but continues on the remaining plexes of the volume. If a disk fails completely, VxVM can detach the disk from its disk group. All plexes on the disk are disabled. If there are any unmirrored volumes on a disk when it is detached, those volumes are also disabled.
How Hot-Relocation works How Hot-Relocation works Hot-relocation allows a system to react automatically to I/O failures on redundant (mirrored or RAID-5) VxVM objects, and to restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks to disks designated as spare disks or to free space within the disk group. VxVM then reconstructs the objects that existed before the failure and makes them redundant and accessible again.
How Hot-Relocation works 3. If no spare disks are available or additional space is needed, vxrelocd uses free space on disks in the same disk group, except those disks that have been excluded for hot-relocation use (marked nohotuse). When vxrelocd has relocated the subdisks, it reattaches each relocated subdisk to its plex. 4. Finally, vxrelocd initiates appropriate recovery procedures. For example, recovery includes mirror resynchronization for mirrored volumes or data recovery for RAID-5 volumes.
How Hot-Relocation works
Example of Hot-Relocation for a Subdisk in a RAID-5 Volume
a) Disk group contains five disks. Two RAID-5 volumes are configured across four of the disks. One spare disk (disk05) is available for hot-relocation.
b) Subdisk disk02-01 in one RAID-5 volume fails.
How Hot-Relocation works Partial Disk Failure Mail Messages If hot-relocation is enabled when a plex or disk is detached by a failure, mail indicating the failed objects is sent to root. If a partial disk failure occurs, the mail identifies the failed plexes.
How Hot-Relocation works Complete Disk Failure Mail Messages If a disk fails completely and hot-relocation is enabled, the mail message lists the disk that failed and all plexes that use the disk.
Configuring a System for Hot-Relocation When selecting space for relocation, hot-relocation preserves the redundancy characteristics of the VxVM object to which the relocated subdisk belongs. For example, hot-relocation ensures that subdisks from a failed plex are not relocated to a disk containing a mirror of the failed plex. If redundancy cannot be preserved using any available spare disks and/or free space, hot-relocation does not take place.
Displaying Spare Disk Information
Depending on the locations of the relocated subdisks, you can choose to move them elsewhere after hot-relocation occurs (see "Configuring Hot-Relocation to Use Only Spare Disks" on page 238). After a successful relocation, remove and replace the failed disk as described in "Removing and Replacing Disks" on page 76.
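For example (the disk name is illustrative), the disks that are currently designated as spares can be listed, and a disk can be marked as a hot-relocation spare, with commands of the following form:
# vxdg spare
# vxedit set spare=on disk01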
Removing a Disk from Use as a Hot-Relocation Spare Any VM disk in this disk group can now use this disk as a spare in the event of a failure. If a disk fails, hot-relocation automatically occurs (if possible). You are notified of the failure and relocation through electronic mail. After successful relocation, you may want to replace the failed disk. Alternatively, you can use vxdiskadm to designate a disk as a hot-relocation spare: 1.
Excluding a Disk from Hot-Relocation Use Alternatively, you can use vxdiskadm to remove a disk from the hot-relocation pool: 1. Select menu item 12 (Turn off the spare flag on a disk) from the vxdiskadm main menu. 2. At the following prompt, enter the name of a spare disk (such as disk01): Menu: VolumeManager/Disk/UnmarkSpareDisk Use this operation to turn off the spare flag on a disk. This operation takes, as input, a disk name.
Making a Disk Available for Hot-Relocation Use The vxdiskadm program displays the following confirmation: Excluding disk01 in rootdg from hot-relocation use is complete. 3.
Configuring Hot-Relocation to Use Only Spare Disks Configuring Hot-Relocation to Use Only Spare Disks If you want VxVM to use only spare disks for hot-relocation, add the following line to the file /etc/default/vxassist: spare=only If not enough storage can be located on disks marked as spare, the relocation fails. Any free space on non-spare disks is not used.
Moving and Unrelocating Subdisks Moving and Unrelocating Subdisks using vxdiskadm To move the hot-relocated subdisks back to the disk where they originally resided after the disk has been replaced following a failure, use the following procedure: 1. Select menu item 14 (Unrelocate subdisks back to a disk) from the vxdiskadm main menu. 2. This option prompts for the original disk media name first.
Moving and Unrelocating Subdisks Moving and Unrelocating subdisks using vxassist You can use the vxassist command to move and unrelocate subdisks. For example, to move the relocated subdisks on disk05 belonging to the volume home back to disk02, enter the following command: # vxassist -g rootdg move home !disk05 disk02 Here, !disk05 specifies the current location of the subdisks, and disk02 specifies where the subdisks should be relocated.
Moving and Unrelocating Subdisks Moving hot-relocated subdisks back to their original disk Assume that disk01 failed and all the subdisks were relocated. After disk01 is replaced, vxunreloc can be used to move all the hot-relocated subdisks back to disk01. # vxunreloc -g newdg disk01 Moving hot-relocated subdisks to a different disk The vxunreloc utility provides the -n option to move the subdisks to a different disk from where they were originally relocated.
Moving and Unrelocating Subdisks Examining which subdisks were hot-relocated from a disk If a subdisk was hot relocated more than once due to multiple disk failures, it can still be unrelocated back to its original location. For instance, if disk01 failed and a subdisk named disk01-01 was moved to disk02, and then disk02 experienced disk failure, all of the subdisks residing on it, including the one which was hot-relocated to it, will be moved again.
Modifying the Behavior of Hot-Relocation If the system goes down after the new subdisks are created on the destination disk, but before all the data has been moved, re-execute vxunreloc when the system has been rebooted. Caution Do not modify the string UNRELOC in the comment field of a subdisk record. Modifying the Behavior of Hot-Relocation Hot-relocation is turned on as long as vxrelocd is running. You leave hot-relocation turned on so that you can take advantage of this feature if a failure occurs.
Modifying the Behavior of Hot-Relocation When executing vxrelocd manually, either include /etc/vx/bin in your PATH or specify vxrelocd’s absolute pathname, for example: # PATH=/etc/vx/bin:$PATH # export PATH # nohup vxrelocd root & or # nohup /etc/vx/bin/vxrelocd root user1 user2 & See the vxrelocd(1M) manual page for more information.
10 Administering Cluster Functionality Introduction A cluster consists of a number of hosts or nodes that share a set of disks. The main benefits of cluster configurations are: ◆ Availability—If one node fails, the other nodes can still access the shared disks. When configured with suitable software, mission-critical applications can continue running by transferring their execution to a standby node in the cluster.
Overview of Cluster Volume Management Overview of Cluster Volume Management In recent years, tightly coupled cluster systems have become increasingly popular in the realm of enterprise-scale mission-critical data processing. The primary advantage of clusters is protection against hardware failure. Should the primary node fail or otherwise become unavailable, applications can continue to run by transferring their execution to standby nodes in the cluster.
Overview of Cluster Volume Management
Example of a 4-Node Cluster: node 0 is the master and nodes 1, 2 and 3 are slaves. The nodes communicate over a redundant private network, and all four nodes have redundant SCSI or Fibre Channel connectivity to the cluster-shareable disks.
Cluster-Shareable Disk Groups
To the cluster monitor, all nodes are the same. VxVM objects configured within shared disk groups can potentially be accessed by all nodes that join the cluster.
Overview of Cluster Volume Management Private and Shared Disk Groups Two types of disk groups are defined: ◆ Private disk groups—belong to only one node. A private disk group is only imported by one system. Disks in a private disk group may be physically accessible from one or more systems, but access is restricted to one system only. The root disk group (rootdg) is always a private disk group. ◆ Shared disk groups—shared by all nodes.
Overview of Cluster Volume Management Note Applications running on each node can access the data on the VM disks simultaneously. VxVM does not protect against simultaneous writes to shared volumes by more than one node. It is assumed that applications control consistency (by using a distributed lock manager, for example). Activation Modes of Shared Disk Groups A shared disk group must be activated on a node in order for the volumes in the disk group to become accessible for application I/O from that node.
Overview of Cluster Volume Management
The table “Allowed and Conflicting Activation Modes” summarizes the allowed and conflicting activation modes for shared disk groups. For each mode in which a disk group is already activated in the cluster, the table shows whether an attempt to activate the disk group on another node in each of the activation modes succeeds or fails.
Overview of Cluster Volume Management You can also use the vxdg command to change the activation mode on a shared disk group as described in “Changing the Activation Mode on a Shared Disk Group” on page 266. For a description of how to configure a volume so that it can only be opened by a single node in a cluster, see “Creating Volumes with Exclusive Open Access by a Node” on page 266 and “Setting Exclusive Open Access to a Volume by a Node” on page 267.
Cluster Initialization and Configuration Cluster Initialization and Configuration Before any nodes can join a new cluster for the first time, you must supply certain configuration information during cluster monitor setup. This information is normally stored in some form of cluster monitor configuration database. The precise content and format of this information depends on the characteristics of the cluster monitor.
Cluster Initialization and Configuration If other operations, such as VxVM operations or recoveries, are in progress, cluster reconfiguration can be delayed until those operations have completed. Volume reconfigurations (see “Volume Reconfiguration” on page 254) do not take place at the same time as cluster reconfigurations. Depending on the circumstances, an operation may be held up and restarted later. In most cases, cluster reconfiguration takes precedence.
Cluster Initialization and Configuration Note If MC/ServiceGuard is the cluster monitor, it expects the vxclustd daemon registration to complete within a given timeout period. If registration times out, MC/ServiceGuard aborts cluster initialization and fails cluster startup. Volume Reconfiguration Volume reconfiguration is the process of creating, changing, and removing VxVM objects such as disk groups, volumes and plexes. In a cluster, all nodes cooperate to perform such operations.
Cluster Initialization and Configuration vxconfigd Daemon The VxVM configuration daemon, vxconfigd, maintains the configuration of VxVM objects. It receives cluster-related instructions from the kernel. A separate copy of vxconfigd runs on each node, and these copies communicate with each other over a network. When invoked, a VxVM utility communicates with the vxconfigd daemon running on the same node; it does not attempt to connect with vxconfigd daemons on other nodes.
Cluster Initialization and Configuration ◆ If the vxconfigd daemon is stopped on the master node, the vxconfigd daemons on the slave nodes periodically attempt to rejoin to the master node. Such attempts do not succeed until the vxconfigd daemon is restarted on the master. In this case, the vxconfigd daemons on the slave nodes have not lost information about the shared configuration, so that any displayed configuration information is correct.
Cluster Initialization and Configuration Clean node shutdown must be used after, or in conjunction with, a procedure to halt all cluster applications. Depending on the characteristics of the clustered application and its shutdown procedure, a successful shutdown can require a lot of time (minutes to hours). For instance, many applications have the concept of draining, where they accept no new work, but complete any work in progress before exiting.
Upgrading Cluster Functionality Cluster Shutdown If all nodes leave a cluster, shared volumes must be recovered when the cluster is next started if the last node did not leave cleanly, or if resynchronization from previous nodes leaving uncleanly is incomplete. Upgrading Cluster Functionality The rolling upgrade feature allows you to upgrade the version of VxVM running in a cluster without shutting down the entire cluster.
Dirty Region Logging (DRL) in Cluster Environments Dirty Region Logging (DRL) in Cluster Environments Dirty region logging (DRL) is an optional property of a volume that provides speedy recovery of mirrored volumes after a system failure. DRL is supported in cluster-shareable disk groups. This section provides a brief overview of DRL and describes how DRL behaves in a cluster environment. For more information on DRL, see “Dirty Region Logging (DRL)” on page 40.
Administering VxVM in Cluster Environments The cluster functionality of VxVM can perform a DRL recovery on a non-shared volume. However, if such a volume is moved to a VxVM system with cluster support and imported as shared, the dirty region log is probably too small to accommodate maps for all the cluster nodes. VxVM then marks the log invalid and performs a full recovery anyway.
Administering VxVM in Cluster Environments Requesting the Status of a Cluster Node The vxdctl utility controls the operation of the vxconfigd volume configuration daemon. The -c option can be used to request cluster information.
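For example, the following command reports whether the node on which it is run is currently the cluster master or a slave (a minimal illustration; the exact wording of the output can vary between VxVM releases):
# vxdctl -c mode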
Administering VxVM in Cluster Environments Listing Shared Disk Groups vxdg can be used to list information about shared disk groups. To display information for all disk groups, use the following command: # vxdg list Example output from this command is displayed here:
NAME      STATE            ID
rootdg    enabled          774215886.1025.teal
group2    enabled,shared   774575420.1170.teal
group1    enabled,shared   774222028.1090.teal
Shared disk groups are designated with the flag shared.
Administering VxVM in Cluster Environments Creating a Shared Disk Group Note Shared disk groups can only be created on the master node. If the cluster software has been run to set up the cluster, a shared disk group can be created using the following command: # vxdg -s init diskgroup [diskname=]devicename where diskgroup is the disk group name, diskname is the administrative name chosen for a VM disk, and devicename is the device name (or disk access name).
Administering VxVM in Cluster Environments Importing Disk Groups as Shared Note Shared disk groups can only be imported on the master node. Disk groups can be imported as shared using the vxdg -s import command. If the disk groups are set up before the cluster software is run, the disk groups can be imported into the cluster arrangement using the following command: # vxdg -s import diskgroup where diskgroup is the disk group name or ID.
Administering VxVM in Cluster Environments Moving Objects Between Disk Groups As described in “Moving Objects Between Disk Groups” on page 128, you can use the vxdg move command to move a self-contained set of VxVM objects such as disks and top-level volumes between disk groups. In a cluster, you can move such objects between private disk groups on any cluster node where those disk groups are imported. Note You can only move objects between shared disk groups on the master node.
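As an illustration only (the disk group and volume names here are hypothetical), a self-contained set of objects headed by a volume could be moved from one shared disk group to another on the master node with a command of the form:
# vxdg move mydg1 mydg2 vol1    (example disk group and volume names)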
Administering VxVM in Cluster Environments Changing the Activation Mode on a Shared Disk Group Note The activation mode for access by a cluster node to a shared disk group is set on that node. The activation mode of a shared disk group can be changed using the following command: # vxdg -g diskgroup set activation=mode The activation mode is one of exclusive-write or ew, read-only or ro, shared-read or sr, shared-write or sw, or off.
Administering VxVM in Cluster Environments Setting Exclusive Open Access to a Volume by a Node Note Exclusive open access on a volume can only be set on the master node. Ensure that none of the nodes in the cluster have the volume open when setting this attribute. You can set the exclusive=on attribute with the vxvol command to specify that an existing volume may only be opened by one node in the cluster at a time.
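For example, assuming a hypothetical disk group dbdg and volume dbvol, the attribute might be set as follows on the master node (a sketch of the usage described above, not a definitive procedure):
# vxvol -g dbdg set exclusive=on dbvol    (example names; assumes no node has the volume open)
Setting the attribute to exclusive=off would be expected to remove the restriction.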
Administering VxVM in Cluster Environments Displaying the Supported Cluster Protocol Version Range The following command displays the maximum and minimum protocol version supported by the node and the current protocol version: # vxdctl support This command produces output similar to the following:
Support information:
  vold_vrsn:          11
  dg_minimum:         60
  dg_maximum:         70
  kernel:             10
  protocol_minimum:   30
  protocol_maximum:   40
  protocol_current:   40
You can also use the following command to display the maximum and mini
Administering VxVM in Cluster Environments Recovering Volumes in Shared Disk Groups Note Volumes can only be recovered on the master node. The vxrecover utility is used to recover plexes and volumes after disk replacement. When a node leaves a cluster, it can leave some mirrors in an inconsistent state. The vxrecover utility can be used to recover such volumes. The -c option to vxrecover causes it to recover all volumes in shared disk groups.
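For example, running the following on the master node recovers all volumes in shared disk groups (a minimal illustration of the -c option described above):
# vxrecover -c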
11 Configuring Off-Host Processing Introduction Off-host processing allows you to implement the following activities: ◆ Data Backup—As the requirement for 24 x 7 availability becomes essential for many businesses, organizations cannot afford the downtime involved in backing up critical data offline. By taking a snapshot of the data, and backing up from this snapshot, business-critical applications can continue to run without extended downtime or degraded performance.
Introduction FastResync of Volume Snapshots Note You may need an additional license to use this feature. VxVM allows you to take multiple snapshots of your data at the level of a volume. A snapshot volume contains a stable copy of a volume’s data at a given moment in time that you can use for online backup or decision support. If FastResync is enabled on a volume, VxVM uses a FastResync map to keep track of which blocks are updated in the volume and in the snapshot.
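As a brief illustration (the disk group and volume names are hypothetical, and whether the resulting tracking is Persistent or Non-Persistent depends on whether a DCO is attached), FastResync can typically be enabled on an existing volume with a command such as:
# vxvol -g mydg set fastresync=on vol01    (example names)
The current setting can be checked with the vxprint -F%fastresync command shown later in this chapter.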
Implementing Off-Host Processing Solutions Disk Group Split and Join Note You may need an additional license to use this feature. A volume, such as a snapshot volume, can be split off into a separate disk group and deported. It is then ready for importing on another host that is dedicated to off-host processing. This host need not be a member of a cluster but must have access to the disks.
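As an illustrative sketch only (snapvoldg, snapvol, and volumedg are hypothetical names consistent with the examples later in this chapter), the split and deport steps might look like this on the primary host:
# vxdg split volumedg snapvoldg snapvol
# vxdg deport snapvoldg
The deported disk group can then be imported on the off-host processing host with a corresponding vxdg import command.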
Implementing Off-Host Processing Solutions The following sections describe how you can apply off-host processing to implement regular online backup of a volume in a private disk group, and to set up a replica of a production database for decision support.
Implementing Off-Host Processing Solutions Note If the volume was created before release 3.2 of VxVM, and it has any attached snapshot plexes or it is associated with any snapshot volumes, follow the procedure given in “Enabling Persistent FastResync on Existing Volumes with Associated Snapshots” on page 209. 3.
Implementing Off-Host Processing Solutions 5. On the primary host, make a snapshot volume, snapvol, using the following command: # vxassist -g volumedg snapshot [nmirrors=N] volume snapvol If required, use the nmirrors attribute to specify the number of mirrors in the snapshot volume. If a database spans more than one volume, specify all the volumes and their snapshot volumes on the same line, for example: # vxassist -g dbasedg snapshot vol1 snapvol1 vol2 snapvol2 \ vol3 snapvol3 6.
Implementing Off-Host Processing Solutions 12. On the OHP host, use the following command to deport the snapshot volume’s disk group: # vxdg deport snapvoldg 13. On the primary host, re-import the snapshot volume’s disk group using the following command: # vxdg import snapvoldg 14. On the primary host, use the following command to rejoin the snapshot volume’s disk group with the original volume’s disk group: # vxdg join snapvoldg volumedg 15. The snapshot volume is initially disabled following the join.
Implementing Off-Host Processing Solutions If the volume is not associated with a DCO object and DCO volume, follow the procedure described in “Adding a DCO and DCO Volume” on page 193. 2. Use the following command on the primary host to check whether FastResync is enabled on a volume: # vxprint -g volumedg -F%fastresync volume This command returns on if FastResync is enabled; otherwise, it returns off.
Implementing Off-Host Processing Solutions 4. Prepare the OHP host to receive the snapshot volume that contains the copy of the database tables. This may involve setting up private volumes to contain any redo logs, and configuring any files that are used to initialize the database. 5. On the primary host, suspend updates to the volume that contains the database tables. The database may have a hot backup mode that allows you to do this by temporarily suspending writes to its tables. 6.
Implementing Off-Host Processing Solutions 13. On the OHP host, use the appropriate database commands to recover and start the replica database for its decision support role. When you no longer need the replica database, or you want to resynchronize its data with the primary database, you can reattach the snapshot plexes with the original volume as described below: 1. On the OHP host, shut down the replica database, and use the following command to unmount the snapshot volume: # umount mount_point 2.
12 Performance Monitoring and Tuning Introduction VERITAS Volume Manager (VxVM) can improve overall system performance by optimizing the layout of data storage on the available hardware. This chapter contains guidelines for establishing performance priorities, for monitoring performance, and for configuring your system appropriately. Performance Guidelines VxVM allows you to optimize data storage performance using the following two strategies: ◆ Balance the I/O load among the available disk drives.
Performance Guidelines Striping Striping improves access performance by cutting data into slices and storing it on multiple devices that can be accessed in parallel. Striped plexes improve access performance for both read and write operations. Having identified the most heavily accessed volumes (containing file systems or databases), you can increase access bandwidth to this data by striping it across portions of multiple disks.
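For example, a heavily accessed volume could be created as a four-column stripe with a 64KB stripe unit using vxassist (the disk group, volume name, and size here are hypothetical examples):
# vxassist -g datadg make webvol 10g layout=stripe ncol=4 stripeunit=64k    (example names and sizes)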
Performance Guidelines Combining Mirroring and Striping Note You may need an additional license to use this feature. Mirroring and striping can be used together to achieve a significant improvement in performance when there are multiple I/O streams. Striping provides better throughput because parallel I/O streams can operate concurrently on separate devices. Serial access is optimized when I/O exactly fits across all stripe units in one stripe.
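A combined layout can be requested directly from vxassist, for example (hypothetical names; mirrored-stripe and striped-mirror layouts are described elsewhere in this guide):
# vxassist -g datadg make dbvol 20g layout=mirror-stripe ncol=4    (example names and sizes)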
Performance Guidelines Volume Read Policies To help optimize performance for different types of volumes, VxVM supports the following read policies on data plexes: ◆ round—a round-robin read policy, where all plexes in the volume take turns satisfying read requests to the volume. ◆ prefer—a preferred-plex read policy, where the plex with the highest performance usually satisfies read requests. If that plex fails, another plex is accessed.
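For example, the read policy of a mirrored volume can be changed with the vxvol rdpol command (the volume and plex names here are examples only):
# vxvol -g datadg rdpol prefer vol01 vol01-02    (example names)
# vxvol -g datadg rdpol round vol01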
Performance Monitoring Performance Monitoring As a system administrator, you have two sets of priorities to balance when tuning for performance. One set is physical, concerned with hardware such as disks and controllers. The other set is logical, concerned with managing software and its operation.
Performance Monitoring If you do not specify any operands, vxtrace reports either all error trace data or all I/O trace data on all virtual disk devices. With error trace data, you can select all accumulated error trace data, wait for new error trace data, or both of these (this is the default action). Selection can be limited to a specific disk group, to specific VxVM kernel I/O object types, or to particular named objects or devices.
Performance Monitoring The following is an example of output produced using the vxstat command:
                   OPERATIONS         BLOCKS           AVG TIME(ms)
TYP NAME          READ    WRITE     READ      WRITE     READ   WRITE
vol blop             0        0        0          0      0.0     0.0
vol foobarvol        0        0        0          0      0.0     0.0
vol rootvol      73017   181735   718528    1114227     26.8    27.9
vol swapvol      13197    20252   105569     162009     25.8   397.0
vol testvol          0        0        0          0      0.0     0.0
Additional volume statistics are available for RAID-5 configurations. For detailed information about how to use vxstat, refer to the vxstat(1M) manual page.
Performance Monitoring To display volume statistics, enter the vxstat command with no arguments. The following is a typical display of volume statistics:
                   OPERATIONS          BLOCKS            AVG TIME(ms)
TYP NAME          READ    WRITE      READ       WRITE     READ   WRITE
vol archive        865      807      5722        3809     32.5    24.0
vol home          2980     5287      6504       10550     37.7   221.1
vol local        49477    49230    507892      204975     28.5    33.5
vol rootvol     102906   342664   1085520     1962946     28.1    25.6
vol src          79174    23603    425472      139302     22.4    30.9
vol swapvol      22751    32364    182001      258905     25.3   323.
Performance Monitoring where dest_disk is the disk to which you want to move the volume. It is not necessary to specify a dest_disk. If you do not specify a dest_disk, the volume is moved to an available disk with enough space to contain the volume. For example, to move the volume from disk03 to disk04, use the following command: # vxassist move archive !disk03 disk04 This command indicates that the volume is to be reorganized so that no part remains on disk03.
Performance Monitoring Caution Striping a volume, or splitting a volume across multiple disks, increases the chance that a disk failure results in failure of that volume. For example, if five volumes are striped across the same five disks, then failure of any one of the five disks requires that all five volumes be restored from a backup. If each volume were on a separate disk, only one volume would need to be restored.
Tuning VxVM Tuning VxVM This section describes how to adjust the tunable parameters that control the system resources used by VxVM. Depending on the system resources that are available, adjustments may be required to the values of some tunable parameters to optimize performance. General Tuning Guidelines VxVM is optimally tuned for most configurations ranging from small systems to larger servers.
Tuning VxVM Number of Configuration Copies for a Disk Group Selection of the number of configuration copies for a disk group is based on a trade-off between redundancy and performance. As a general rule, reducing the number of configuration copies in a disk group speeds up initial access of the disk group, initial startup of the vxconfigd daemon, and transactions performed within the disk group.
Tuning VxVM Tunable Parameters The following sections describe specific tunable parameters. dmp_pathswitch_blks_shift The number of contiguous I/O blocks (expressed as an integer power of 2) that are sent along a DMP path to an Active/Active array before switching to the next available path. The default value of this parameter is set to 10 so that 1024 blocks (1MB) of contiguous I/O are sent over a DMP path before switching.
Tuning VxVM vol_fmr_logsz The maximum size in kilobytes of the bitmap that Non-Persistent FastResync uses to track changed blocks in a volume. The number of blocks in a volume that are mapped to each bit in the bitmap depends on the size of the volume, and this value changes if the size of the volume is changed. For example, if the volume size is 1 gigabyte and the system block size is 1024 bytes, a vol_fmr_logsz value of 4 yields a map that contains 32,768 bits, each bit representing one region of 32 blocks.
Tuning VxVM vol_maxio The maximum size of logical I/O operations that can be performed without breaking up the request. I/O requests to VxVM that are larger than this value are broken up and performed synchronously. Physical I/O requests are broken up based on the capabilities of the disk device and are unaffected by changes to this maximum logical request limit. The default value for this tunable is 256 sectors (256KB).
Tuning VxVM vol_maxspecialio The maximum size of an I/O request that can be issued by an ioctl call. Although the ioctl request itself can be small, it can request a large I/O request be performed. This tunable limits the size of these I/O requests. If necessary, a request that exceeds this value can be failed, or the request can be broken up and performed synchronously. The default value for this tunable is 256 sectors (256KB).
Tuning VxVM voldrl_max_drtregs The maximum number of dirty regions that can exist for non-sequential DRL on a volume. A larger value may result in improved system performance at the expense of recovery time. This tunable can be used to regulate the worst-case recovery time for the system following a failure. The default value for this tunable is 2048. voldrl_max_seq_dirty The maximum number of dirty regions allowed for sequential DRL.
Tuning VxVM A write request to a mirrored volume that is greater than voliomem_maxpool_sz/2 is broken up and performed in chunks of size voliomem_maxpool_sz/2. The default value for this tunable is 4M. Note The value of voliomem_maxpool_sz must be at least 10 times greater than the value of vol_maxio. voliot_errbuf_default The default size of the buffer maintained for error tracing events. This buffer is allocated at driver load time and is not adjustable for size while VxVM is running.
Tuning VxVM voliot_iobuf_max The maximum buffer size that can be used for a single trace buffer. Requests of a buffer larger than this size are silently truncated to this size. A request for a maximal buffer size from the tracing interface results (subject to limits of usage) in a buffer of this size. The default size for this buffer is 65536 bytes (64KB). Increasing this buffer can provide for larger traces to be taken without loss for very heavily used volumes.
A Commands Summary This appendix summarizes the usage and purpose of important commands in VERITAS Volume Manager (VxVM). References are included to longer descriptions in the remainder of this book. For detailed information about an individual command, refer to the appropriate manual page in the 1M section. Obtaining Information About Objects in VxVM Command Description vxdisk list [diskname] Lists disks under control of VxVM. See “Displaying Disk Information” on page 83.
Administering Disks Command Description vxdiskadm Administers disks in VxVM using a menu-based interface. vxdiskadd [devicename] Adds a disk specified by device name. See “Using vxdiskadd to Place a Disk Under Control of VxVM” on page 68. vxedit rename olddisk newdisk Renames a disk under control of VxVM. See “Renaming a Disk” on page 81. vxedit set reserve=on|off diskname Sets aside/does not set aside a disk from use in a disk group. See “Reserving Disks” on page 82.
Creating and Administering Disk Groups Command Description vxdg init diskgroup [diskname=]devicename Creates a disk group using a pre-initialized disk. See “Creating a Disk Group” on page 110. vxdg -s init diskgroup \ [diskname=]devicename Creates a shared disk group in a cluster using a pre-initialized disk. See “Creating a Shared Disk Group” on page 263. vxdg [-n newname] deport diskgroup Deports a disk group and optionally renames it. See “Deporting a Disk Group” on page 113.
Creating and Administering Subdisks Command Description vxmake sd subdisk diskname,offset,length Creates a subdisk. See “Creating Subdisks” on page 139. vxsd assoc plex subdisk ... Associates subdisks with an existing plex. See “Associating Subdisks with Plexes” on page 142. vxsd assoc plex subdisk1:0 ... subdiskM:N-1 Adds subdisks to the ends of the columns in a striped or RAID-5 volume. See “Associating Subdisks with Plexes” on page 142.
Creating and Administering Plexes Command Description vxmake plex plex \ sd=subdisk1[,subdisk2,...] Creates a concatenated plex. See “Creating Plexes” on page 147. vxmake plex plex \ layout=stripe|raid5 stwidth=W ncolumn=N \ sd=subdisk1[,subdisk2,...] Creates a striped or RAID-5 plex. See “Creating a Striped Plex” on page 148. vxplex att volume plex Attaches a plex to an existing volume. See “Attaching and Associating Plexes” on page 153 and “Reattaching Plexes” on page 155.
Creating Volumes Command Description vxassist [-g diskgroup] maxsize \ layout=layout [attributes] Displays the maximum size of volume that can be created. See “Discovering the Maximum Size of a Volume” on page 165. vxassist make volume length \ [layout=layout ] [attributes] Creates a volume. See “Creating a Volume on Any Disk” on page 165 and “Creating a Volume on Specific Disks” on page 166 vxassist make volume length \ layout=mirror [nmirror=N] [attributes] Creates a mirrored volume.
Administering Volumes Command Description vxassist mirror volume [attributes] Adds a mirror to a volume. See “Adding a Mirror to a Volume” on page 191. vxassist remove mirror volume [attributes] Removes a mirror from a volume. See “Removing a Mirror” on page 193. vxassist addlog volume [attributes] Adds a log to a volume. See “Adding a DCO and DCO Volume” on page 193, “Adding DRL Logging to a Mirrored Volume” on page 197 and “Adding a RAID-5 Log” on page 198.
Administering Volumes 308 Command Description vxassist snapstart volume Prepares a snapshot mirror of a volume. See “Backing Up Volumes Online Using Snapshots” on page 214. vxassist snapshot volume snapshot Takes a snapshot of a volume. See “Backing Up Volumes Online Using Snapshots” on page 214. vxassist snapback volume snapshot Merges a snapshot with its original volume. See “Merging a Snapshot Volume (snapback)” on page 218. vxassist snapclear snapshot Makes the snapshot volume independent.
Monitoring and Controlling Tasks Command Description vxcommand -t tasktag [options] [arguments] Specifies a task tag to a command. See “Specifying Task Tags” on page 187. vxtask [-h] list Lists tasks running on a system. See “vxtask Usage” on page 189. vxtask monitor task Monitors the progress of a task. See “vxtask Usage” on page 189. vxtask pause task Suspends operation of a task. See “vxtask Usage” on page 189. vxtask -p list Lists all paused tasks.
Glossary active/active disk arrays This type of multipathed disk array allows you to access a disk in the disk array through all the paths to the disk simultaneously, without any performance degradation. active/passive disk arrays This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time. Using a path other than the designated active path results in severe performance degradation in some disk arrays.
block The minimum unit of data transfer to or from a disk or array. boot disk A disk that is used for the purpose of booting a system. clean node shutdown The ability of a node to leave a cluster gracefully when all access to shared volumes has ceased. cluster A set of hosts (each termed a node) that share a set of disks. cluster manager An externally-provided daemon that runs on each node in a cluster.
data change object (DCO) A VxVM object that is used to manage information about the FastResync maps in the DCO volume. Both a DCO object and a DCO volume must be associated with a volume to implement Persistent FastResync on that volume. data stripe This represents the usable data portion of a stripe and is equal to the stripe minus the parity region. DCO volume A special volume that is used to hold Persistent FastResync change maps.
disk access records Configuration records used to specify the access path to particular disks. Each disk access record contains a name, a type, and possibly some type-specific information, which is used by VxVM in deciding how to access and manipulate the disk that is defined by the disk access record. disk array A collection of disks logically arranged into an object. Arrays tend to provide benefits such as redundancy or improved performance. Also see disk enclosure and JBOD.
disk media record A configuration record that identifies a particular disk, by disk ID, and gives that disk a logical (or administrative) name. disk name A logical or administrative name chosen for a disk that is under the control of VxVM, such as disk03. The term disk media name is also used to refer to a disk name. dissociate The process by which any link that exists between two VxVM objects is removed.
FastResync A fast resynchronization feature that is used to perform quick and efficient resynchronization of stale mirrors, and to increase the efficiency of the snapshot mechanism. Also see Persistent FastResync and Non-Persistent FastResync. Fibre Channel A collective name for the fiber optic technology that is commonly used to set up a Storage Area Network (SAN). file system A collection of files organized together into a structure.
log plex A plex used to store a RAID-5 log. The term log plex may also be used to refer to a Dirty Region Logging plex. log subdisk A subdisk that is used to store a dirty region log. master node A node that is designated by the software to coordinate certain VxVM operations in a cluster. Any node is capable of being the master node. mastering node The node to which a disk is attached. This is also known as a disk owner.
Non-Persistent FastResync A form of FastResync that cannot preserve its maps across reboots of the system because it stores its change map in memory. object An entity that is defined to and recognized internally by VxVM. The VxVM objects are: volume, plex, subdisk, disk, and disk group. There are actually two types of disk objects—one for the physical aspect of the disk and the other for the logical aspect. parity A calculated value that can be used to reconstruct data after a failure.
persistent state logging A logging type that ensures that only active mirrors are used for recovery purposes and prevents failed mirrors from being selected for recovery. This is also known as kernel logging. physical disk The underlying storage device, which may or may not be under VxVM control. plex A plex is a logical grouping of subdisks that creates an area of disk space independent of physical disk size or other restrictions. Mirroring is set up by creating multiple data plexes for a single volume.
read-writeback mode A recovery mode in which each read operation recovers plex consistency for the region covered by the read. Plex consistency is recovered by reading data from blocks of one plex and writing the data to all other writable plexes. root configuration The configuration database for the root disk group. This is special in that it always contains records for other disk groups, which are used for backup purposes only. It also contains disk records that define all disk devices on the system.
sector A unit of size, which can vary between systems. Sector size is set per device (hard drive, CD-ROM, and so on). Although all devices within a system are usually configured to the same sector size for interoperability, this is not always the case. A sector is commonly 1024 bytes. shared disk group A disk group in which access to the disks is shared by multiple hosts (also referred to as a cluster-shareable disk group). Also see private disk group.
stripe size The sum of the stripe unit sizes comprising a single stripe across all columns being striped. stripe unit Equally-sized areas that are allocated alternately on the subdisks (within columns) of each striped plex. In an array, this is a set of logically contiguous blocks that exist on each disk before allocations are made from the next disk in the array. A stripe unit may also be referred to as a stripe element. stripe unit size The size of each stripe unit.
VM disk A disk that is both under VxVM control and assigned to a disk group. VM disks are sometimes referred to as VxVM disks or simply disks. volume A virtual disk, representing an addressable range of disk blocks used by applications such as file systems or databases. A volume is a collection of from one to 32 plexes. volume configuration device The volume configuration device (/dev/vx/config) is the interface through which all configuration changes to the volume device driver are performed.
Index Symbols /dev/vx/dsk block device files 182 /dev/vx/rdsk character device files 182 /etc/default/vxassist defaults file 163 /etc/default/vxassist file 238 /etc/default/vxdg defaults file 250 /etc/fstab file 205 /etc/vx/cntrls.exclude file 61 /etc/vx/disks.exclude file 60 /etc/vx/enclr.exclude file 61 /etc/vx/volboot file 136 /sbin/rc2.
initialization 252 interaction of MC/ServiceGuard and VxVM 257 introduced 246 joining disk groups in 265 limitations of shared disk groups 251 listing shared disk groups 262 local connectivity policy 251 maximum number of nodes in 245 moving objects between disk groups 265 node abort 257 node shutdown 256 nodes 246 operation of DRL in 259, 260 operation of vxconfigd in 255 operation of VxVM in 246 private disk groups 248 private networks 246 protection against simultaneous writes 249 reconfiguration daemon
moving log plexes 196 reattaching to volumes 197 removing from volumes 196 specifying storage for 196 dcolen attribute 45, 172, 194 DCOSNP plex state 149 DDL 5 Device Discovery Layer 58 description file with vxmake 180 DETACHED plex kernel state 153 volume kernel state 186 Device Discovery 5 Device Discovery Layer 58 Device Discovery Layer (DDL) 5 device files to access volumes 182 device names 3, 54 devices metadevices 56 pathname 54 special 56 standard 56 dirty bits in DRL 40 dirty flags set on volumes 39
reorganizing 121 reserving minor numbers 120 restarting moved volumes 129, 130, 132 root 9 rootdg 9, 107 setting connectivity policies in clusters 266 setting number of configuration copies 292 shared in clusters 248 specifying to commands 108 splitting 123, 130 splitting in clusters 265 upgrading version of 133, 135 version 133, 135 disk media names 8, 53 disk names 53 disk## 9, 53 disk##-## 9 disks adding 68 adding to disk groups 111 adding to disk groups forcibly 264 changing naming scheme 61 clearing lo
disabling controllers 101 displaying DMP database information 99 displaying DMP node for a path 101 displaying DMP node for an enclosure 101 displaying information about paths 99 displaying paths controlled by DMP node 101 displaying status of DMP error daemons 104 displaying status of DMP restore daemon 104 dynamic multipathing 85 enclosure-based naming 86 in a clustered environment 104 listing controllers 101 listing enclosures 102 load balancing 88 metanodes 86 path failover mechanism 87 path-switch tuna
detecting RAID-5 subdisk failure 228 excluding free space on disks from use by 236 limitations 229 making free space on disks available for use by 237 marking disks as spare 234 modifying behavior of 243 notifying users other than root 243 operation of 227 partial failure messages 231 preventing from running 243 reducing performance impact of recovery 243 removing disks from spare pool 235 subdisk relocation 233 subdisk relocation messages 238 unrelocating subdisks 238 unrelocating subdisks using vxassist 2
adding DRL logs 197 adding sequential DRL logs 197 changing read policies for 204 configuring VxVM to create by default 191 creating 170 creating across controllers 169, 175 creating across enclosures 175 creating across targets 167 creating with logging enabled 172 creating with sequential DRL enabled 173 defined 160 dirty region logging 39 DRL 39 FastResync 39 FR 39 logging 39 performance 282 removing DRL logs 198 removing sequential DRL logs 198 snapshots 43 mirrored-concatenated volumes converting to co
defining for snapshot volumes 218 device 3, 54 disk 53 disk media 8, 53 plex 11 plex attribute 158 renaming disks 81 subdisk 9 subdisk attribute 145 VM disk 9 volume 11 naming scheme changing for disks 61 naming schemes for disks 54 ndcomirror attribute 172, 194 NEEDSYNC volume state 185 NODAREC plex condition 152 nodes in clusters 246 maximum number in a cluster 245 node abort in clusters 257 shutdown in clusters 256 NODEVICE plex condition 152 non-layered volume conversion 224 Non-Persistent FastResync 44
determining failed 231 displaying information 83 displaying information about 109 displaying spare 234 enabling 80 excluding free space from hot-relocation use 236 failure handled by hot-relocation 228 initializing 60 installing 64 making available for hot-relocation 234 making free space available for hot-relocation use 237 marking as spare 234 moving between disk groups 117, 128 moving disk groups between systems 118 moving volumes from 206 partial failure messages 231 postponing replacement 76 releasing
specifying for online relayout 222 states 148 striped 17 taking offline 154, 190 tutil attribute 158 types 10 polling interval for DMP restore 103 preferred plex performance of read policy 284 read policy 204 primary path 85, 100 private disk groups converting from shared 264 in clusters 248 private network in clusters 246 private region configuration database 55 defined 55 effect of large disk groups on 107 public region 55 putil plex attribute 158 subdisk attribute 145 R RAID-0 17 RAID-0+1 21 RAID-1 21 RA
types of transformation 35 viewing status of 223 relocation automatic 227 complete failure messages 232 limitations 229 partial failure messages 231 REMOVED plex condition 152 removing disks 76 removing physical disks 74 replacing disks 76 replay logs and sequential DRL 41 REPLAY volume state 185 resilvering databases 49 restore policy check_all 103 check_alternate 103 check_disabled 103 check_periodic 103 restrictions VxVM-bootable volumes 70 resyncfromoriginal snapback 48 resyncfromreplica snapback 48 res
merging with original volumes 218 of RAID-5 volumes 214 on multiple volumes 48 removing 216 resynchronization on snapback 48 resynchronizing volumes from 219 used to back up volumes online 214 snapstart 42 SNAPTMP plex state 151 spanned volumes 15 spanning 15 spare disks displaying 234 marking disks as 234 used for hot-relocation 232 sparse plexes 37, 142 special disk devices 56 STALE plex state 151 standard disk devices 56 states for plexes 148 volume 184 storage ordered allocation of 167, 173, 177 storage
vol_maxspecialio 296 vol_subdisk_num 296 volcvm_smartsync 296 voldrl_max_drtregs 297 voldrl_max_seq_dirty 41, 297 voldrl_min_regionsz 297 voliomem_chunk_size 297 voliomem_maxpool_sz 297 voliot_errbuf_default 298 voliot_iobuf_dflt 298 voliot_iobuf_limit 298 voliot_iobuf_max 299 voliot_max_open 299 volraid_minpool_size 299 volraid_rsrtransmax 299 tutil plex attribute 158 subdisk attribute 145 unrelocating using vxassist 240 unrelocating using vxdiskadm 239 unrelocating using vxunreloc 240 swap space increasi
vol_max_vol tunable 294 vol_maxio tunable 295 vol_maxioctl tunable 295 vol_maxkiocount tunable 295 vol_maxparallelio tunable 295 vol_maxspecialio tunable 296 vol_subdisk_num tunable 296 volboot file 136 adding entry to 136 volcvm_smartsync tunable 296 voldrl_max_drtregs tunable 297 voldrl_max_seq_dirty tunable 41, 297 voldrl_min_regionsz tunable 297 voliomem_chunk_size tunable 297 voliomem_maxpool_sz tunable 297 voliot_errbuf_default tunable 298 voliot_iobuf_dflt tunable 298 voliot_iobuf_limit tunable 298 v
snapshots 220 dissociating DCO from 196 dissociating plexes from 157 enabling FastResync 209 enabling FastResync on 207 enabling FastResync on new 172 excluding storage from use by vxassist 166 finding maximum size of 165 finding out by how much can grow 200 flagged as dirty 39 initializing 181 initializing contents to zero 181 kernel states 186 layered 22, 30, 160 limit on number of plexes 11 limitations 11 making immediately available for use 181 maximum number of 294 maximum number of data plexes 284 mer
snapclear 43 snapshot 42 snapstart 42 used to add a log subdisk 144 used to add a RAID-5 log 198 used to add DCOs to volumes 194 used to add DRL logs 197 used to add mirrors to volumes 154, 191 used to add sequential DRL logs 198 used to change number of columns 222 used to change stripe unit size 222 used to configure exclusive access to a volume 266 used to convert between layered and non-layered volumes 224 used to create concatenated-mirror volumes 171 used to create mirrored volumes 170 used to create
used to add entry to volboot file 136 used to check cluster protocol version 267 used to manage vxconfigd 136 used to upgrade cluster protocol version 268 vxdctl enable used to configure new disks 56 used to invoke device discovery 57 vxddladm used to exclude support for disk arrays 58 used to list excluded disk arrays 59 used to list supported disk arrays 58 used to re-include support for disk arrays 59 vxdestroy_lvmroot used to remove LVM root disks 72 vxdg used to add disks to disk groups forcibly 263 us
used to add disks 65 used to add disks to disk groups 111 used to change disk-naming scheme 61 used to create disk groups 111 used to deport disk groups 113 used to exclude free space on disks from hot-relocation use 236 used to import disk groups 114 used to initialize disks 65 used to list spare disks 234 used to make free space on disks available for hot-relocation use 237 used to mark disks as spare 235 used to mirror volumes 192 used to move disk groups between systems 120 used to move disks between di
plexes 157 used to dissociate plexes from volumes 157 used to move plexes 156 used to reattach plexes 155 used to remove mirrors 193 used to remove plexes 193 used to remove RAID-5 logs 199 vxprint used to check if FastResync is enabled 208 used to display DCO information 195 used to display plex information 148 used to display subdisk information 140 used to display volume information 183 used to identify RAID-5 log plexes 199 used to list spare disks 234 used with enclosure-based disk names 62 vxrecover u
configuring disk devices 56 configuring to create mirrored volumes 191 dependency on operating system 2 disk discovery 57 granularity of memory allocation by 297 interaction with MC/ServiceGuard 257 limitations of shared disk groups 251 maximum number of data plexes per volume 284 maximum number of subdisks per plex 296 maximum number of volumes 294 maximum size of memory pool 297 minimum size of memory pool 299 objects in 7 obtaining system information xix operation in clusters 246 performance tuning 291 s