VERITAS Volume Manager 3.
Legal Notices

Disclaimer

The information contained in this publication is subject to change without notice. VERITAS Software makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software Corporation shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.
Contents

1. Understanding VERITAS Volume Manager
Introduction 1
VxVM and the Operating System 2
How Data is Stored 3
How VxVM Handles Storage Management
FastResync Limitations
SmartSync Recovery Accelerator
Data Volume Configuration
Redo Log Volume Configuration
Hot-Relocation
Taking a Disk Offline 99
Renaming a Disk 100
Reserving Disks 101
Displaying Disk Information
Adding a Disk to a Disk Group
Removing a Disk from a Disk Group
Deporting a Disk Group
Importing a Disk Group
Renaming a Disk Group
Plex States
Plex Condition Flags
Plex Kernel States
Attaching and Associating Plexes

8. Administering Volumes
Introduction
Displaying Volume Information
Volume States
Volume Kernel States
Backing up Volumes Online
Backing Up Volumes Online Using Mirrors
Backing Up Volumes Online Using Snapshots
Converting a Plex into a Snapshot Plex
Backing Up Multiple Volumes Using Snapshots
Overview of Cluster Volume Management
Private and Shared Disk Groups
Activation Modes of Shared Disk Groups
Connectivity Policy of Shared Disk Groups
Limitations of Shared Disk Groups

11. Configuring Off-Host Processing
Introduction
FastResync of Volume Snapshots
Disk Group Split and Join
Implementing Off-Host Processing Solutions
Preface
Introduction

The purpose of this guide is to demonstrate how to use VERITAS FlashSnap™ to implement point-in-time copy solutions on enterprise systems. FlashSnap offers you flexible solutions for the efficient management of multiple point-in-time copies of your data, and for reducing resource contention on your business-critical servers.

NOTE: This guide supersedes the VERITAS Off-Host Processing Using FastResync Administrator's Guide.
Audience and Scope The VERITAS® Point-In-Time Copy Solutions Administrator’s Guide provides information about how to implement solutions for online backup of databases and cluster-shareable file systems, for decision support on enterprise systems, and for Storage Rollback of databases to implement fast database recovery.
Organization

This guide is organized as follows:

• Chapter 1, "Understanding VERITAS Volume Manager," on page 1
• Chapter 2, "Administering Disks," on page 65
• Chapter 3, "Administering Dynamic Multipathing (DMP)," on page 105
• Chapter 4, "Creating and Administering Disk Groups," on page 131
• Chapter 5, "Creating and Administering Subdisks," on page 177
• Chapter 6, "Creating and Administering Plexes," on page 191
• Chapter 7, "Creating Volumes," on page 211
• Chapter 8, "Administering Volumes"
Related Documents

The following documents provide more information related to the installation, configuration, and administration of the products described in this guide:

• VERITAS NetBackup BusinesServer Installation Guide
• VERITAS NetBackup BusinesServer System Administrator's Guide
• VERITAS Cluster File System Installation and Configuration Guide
• VERITAS Database Edition for DB2 Database Administrator's Guide
• VERITAS Database Edition for DB2 Installation and Configuration Guide
Conventions

The following table describes the typographic conventions used in this guide.

Table 1: Typographic Conventions

• Monospace — computer output, file contents, files, directories, and software elements such as command options, function names, and parameters. Example: Read tunables from the /etc/vx/tunefstab file.
• Italic — new terms, book titles, emphasis, and variables to be replaced by a name or value. Example: See the User's Guide for details.
• [ ] — in a command synopsis, brackets indicate an optional argument. Example: ls [ -a ]
• | — in a command synopsis, a vertical bar separates mutually exclusive arguments. Example: mount [ suid | nosuid ]
Getting Help

If you have any comments or problems with the VERITAS products, contact VERITAS Technical Support:

• U.S. and Canadian Customers: 1-800-342-0652
• International Customers: +1 (650) 527-8555
• Email: support@veritas.com

For license information (U.S. and Canadian Customers):

• Phone: 1-925-931-2464
• Email: license@veritas.com
• Fax: 1-925-931-2487

For software updates:

• Email: swupdate@veritas.com
1 Understanding VERITAS Volume Manager

Introduction

The VERITAS Volume Manager (VxVM) is a storage management subsystem that allows you to manage physical disks as logical devices called volumes. A volume is a logical device that appears to data management systems as a physical disk. VxVM provides easy-to-use online disk storage management for computing environments and Storage Area Network (SAN) environments.
VxVM and the Operating System

VxVM operates as a subsystem between your operating system and your data management systems, such as file systems and database management systems. VxVM is tightly coupled with the operating system. Before a disk can be brought under VxVM control, the disk must be accessible through the operating system device interface.
How Data is Stored

There are several methods used to store data on physical disks. These methods organize data on the disk so the data can be stored and retrieved efficiently. The basic method of disk organization is called formatting. Formatting prepares the hard disk so that files can be written to and retrieved from the disk by using a prearranged storage pattern.
How VxVM Handles Storage Management

VxVM uses two types of objects to handle storage management: physical objects and virtual objects.

• Physical objects—Physical disks or other hardware with block and raw operating system device interfaces that are used to store data.
• Virtual objects—When one or more physical disks are brought under the control of VxVM, it creates virtual objects called volumes on those physical disks.
Understanding VERITAS Volume Manager How VxVM Handles Storage Management The figure, “Physical Disk Example,” shows how a physical disk and device name (devname) are illustrated in this document. For example, device name c0t0d0 is the entire hard disk connected to controller number 0 in the system, with a target ID of 0, and physical disk number 0. Figure 1-1 Physical Disk Example devname VxVM writes identification information on physical disks under VxVM control (VM disks).
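For example, the vxdisk list command shows each device name together with its VM disk name, disk group, and status. The output below is an illustrative sketch only; the device and disk names are hypothetical:

# vxdisk list
DEVICE   TYPE     DISK     GROUP    STATUS
c0t0d0   simple   disk01   rootdg   online
c1t0d0   simple   -        -        online invalid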
Understanding VERITAS Volume Manager How VxVM Handles Storage Management Data can be spread across several disks within an array to distribute or balance I/O operations across the disks. Using parallel I/O across multiple disks in this way improves I/O performance by increasing data transfer speed and overall throughput for the array.
Device Discovery

Device Discovery is the process of discovering the disks that are attached to a host. This feature is important for DMP, which needs to support a growing number of disk arrays from a number of vendors. In conjunction with the ability to discover the devices attached to a host, the Device Discovery service enables you to add support dynamically for new disk arrays.
In a typical SAN environment, host controllers are connected to multiple enclosures in a daisy chain or through a Fibre Channel hub or fabric switch, as illustrated in Figure 1-3.

Figure 1-3: Example Configuration for Disk Enclosures Connected via a Fibre Channel Hub/Switch (host controller c1 connects through a Fibre Channel hub/switch to disk enclosures enc0, enc1, and enc2)

In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure. Such a scheme also makes it possible to keep redundant copies of data in separate enclosures. For example, if a mirrored volume were
configured only on the disks in enclosure enc1, the failure of the cable between the hub and the enclosure would make the entire volume unavailable. If required, you can replace the default name that VxVM assigns to an enclosure with one that is more meaningful to your configuration. See "Renaming an Enclosure" on page 126 for details.
Understanding VERITAS Volume Manager Device Discovery See “Disk Device Naming in VxVM” on page 66 and “Changing the Disk-Naming Scheme” on page 76 for details of the standard and the enclosure-based naming schemes, and how to switch between them. Virtual Objects Virtual objects in VxVM include the following: • VM Disks • Disk Groups • Subdisks • Plexes • Volumes The connection between physical objects and VxVM objects is made when you place a physical disk under VxVM control.
Understanding VERITAS Volume Manager Device Discovery NOTE The vxprint command displays detailed information on existing VxVM objects. For additional information on the vxprint command, see “Displaying Volume Information” on page 249 and the vxprint(1M) manual page. VM Disks When you place a physical disk under VxVM control, a VM disk is assigned to the physical disk. A VM disk is under VxVM control and is usually in a disk group. Each VM disk corresponds to one physical disk.
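For example, the following illustrative invocations of vxprint display all VxVM objects on the system in hierarchical form, and the objects of a single volume (vol01 is a hypothetical volume name):

# vxprint -ht
# vxprint -ht vol01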
Understanding VERITAS Volume Manager Device Discovery NOTE Even though rootdg is the default disk group, it does not necessarily contain the root disk. In the current release, the root disk may be under VxVM or LVM control. You can create additional disk groups as necessary. Disk groups allow you to group disks into logical collections. A disk group and its components can be moved as a unit from one host machine to another.
Understanding VERITAS Volume Manager Device Discovery A VM disk can contain multiple subdisks, but subdisks cannot overlap or share the same portions of a VM disk. Figure 1-7, “Example of Three Subdisks Assigned to One VM Disk,” shows a VM disk with three subdisks. The VM disk is assigned to one physical disk.
Understanding VERITAS Volume Manager Device Discovery • concatenation • striping (RAID-0) • mirroring (RAID-1) • striping with parity (RAID-5) Concatenation, striping (RAID-0), mirroring (RAID-1) and RAID-5 are described in “Volume Layouts in VxVM” on page 17. Volumes A volume is a virtual disk device that appears to applications, databases, and file systems like a physical disk device, but does not have the physical limitations of a physical disk device.
Understanding VERITAS Volume Manager Device Discovery See Figure 1-9, “Example of a Volume with One Plex,”. Figure 1-9 Example of a Volume with One Plex Volume disk01-01 vol01-01 Plex Subdisk vol01 Volume vol01 has the following characteristics: • It contains one plex named vol01-01. • The plex contains one subdisk named disk01-01. • The subdisk disk01-01 is allocated from VM disk disk01. A volume with two or more data plexes is “mirrored” and contains mirror images of the data.
Combining Virtual Objects in VxVM

VxVM virtual objects are combined to build volumes. The virtual objects contained in volumes are VM disks, disk groups, subdisks, and plexes.
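As a minimal sketch of this combination, the following vxassist command builds the subdisk, plex, and volume objects for a 1-gigabyte volume in one step (vol01 is a hypothetical volume name):

# vxassist -g rootdg make vol01 1g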
Volume Layouts in VxVM

A VxVM virtual device is defined by a volume. A volume has a layout defined by the association of a volume to one or more plexes, each of which maps to subdisks. The volume presents a virtual device interface that is exposed to other applications for data access. These logical building blocks re-map the volume address space through which I/O is re-directed at run-time.
Understanding VERITAS Volume Manager Volume Layouts in VxVM To achieve the desired storage service from a set of virtual devices, it may be necessary to include an appropriate set of VM disks into a disk group, and to execute multiple configuration commands. To the extent that it can, VxVM handles initial configuration and on-line re-configuration with its set of layouts and administration interface to make this job easier and more deterministic.
The figure, "Example of Concatenation," shows concatenation with one subdisk.

Figure 1-12: Example of Concatenation (a plex maps blocks of data B1 through B4 onto subdisk disk01-01 of VM disk disk01, which resides on a single physical disk)

You can use concatenation with multiple subdisks when there is insufficient contiguous space for the plex on any one disk.
Understanding VERITAS Volume Manager Volume Layouts in VxVM subdisk disk01-01 on disk01. However, the last two blocks of data, B7 and B8, use only a portion of the space on the disk to which VM disk disk02 is assigned. The remaining free space on VM disk disk02 can be put to other uses. In this example, subdisks disk02-02 and disk02-03 are available for other disk management tasks.
Striping (RAID-0)

NOTE: You may need an additional license to use this feature.

Striping (RAID-0) is useful if you need large amounts of data written to or read from physical disks, and performance is important. Striping is also helpful in balancing the I/O load from multi-user applications across multiple disks. By using parallel data transfer to and from multiple disks, striping significantly improves data-access performance.
For example, if there are three columns in a striped plex and six stripe units, data is striped over the three columns, as illustrated in Figure 1-15, "Striping Across Three Columns."

Figure 1-15: Striping Across Three Columns (stripe 1 consists of stripe units su1, su2, and su3 in columns 1, 2, and 3; stripe 2 consists of su4, su5, and su6; each column is one subdisk)

A stripe consists of the set of stripe units at the same positions across all columns.
Understanding VERITAS Volume Manager Volume Layouts in VxVM Striping continues for the length of the columns (if all columns are the same length), or until the end of the shortest column is reached. Any space remaining at the end of subdisks in longer columns becomes unused space. Figure 1-16, “Example of a Striped Plex with One Subdisk per Column,” shows a striped plex with three equal sized, single-subdisk columns. There is one column per physical disk.
of the same disk or from another disk (for example, if the size of the plex is increased). Columns can also contain subdisks from different VM disks, as shown in the figure below.

Figure 1-17: Example of a Striped Plex with Concatenated Subdisks per Column (column 1 concatenates subdisks disk01-01, disk01-02, and disk01-03 from VM disk disk01; the other columns are built from the subdisks of other VM disks)
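As a minimal sketch, assuming enough free space on three disks in rootdg, the following command creates a striped volume over three columns with the default stripe unit size (stripevol is a hypothetical volume name):

# vxassist -g rootdg make stripevol 2g layout=stripe ncol=3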
NOTE: Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy.

When striping or spanning across a large number of disks, failure of any one of those disks can make the entire plex unusable.
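For example, assuming a volume vol01 already exists in rootdg and a second disk has enough free space, the following hedged sketch adds a second plex (a mirror) to it:

# vxassist -g rootdg mirror vol01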
Understanding VERITAS Volume Manager Volume Layouts in VxVM The figure, “Mirrored-Stripe Volume Laid out on Six Disks,” shows an example where two plexes, each striped across three disks, are attached as mirrors to the same volume to create a mirrored-stripe volume.
Understanding VERITAS Volume Manager Volume Layouts in VxVM As for a mirrored-stripe volume, a striped-mirror volume offers the dual benefits of striping to spread data across multiple disks, while mirroring provides redundancy of data. In addition, it enhances redundancy, and reduces recovery time after disk failure.
Understanding VERITAS Volume Manager Volume Layouts in VxVM vulnerable to being put out of use altogether should a second disk fail before the first failed disk has been replaced, either manually or by hot-relocation.
NOTE: The VERITAS Enterprise Administrator (VEA) terms a striped-mirror as Striped-Pro, and a concatenated-mirror as Concatenated-Pro.

RAID-5 (Striping with Parity)

NOTE: VxVM supports RAID-5 for private disk groups, but not for shareable disk groups in a cluster environment.

NOTE: You may need an additional license to use this feature.

Although both mirroring (RAID-1) and RAID-5 provide redundancy of data, they use different methods.
all of the disks in the array, reducing the write time for large independent writes because the writes do not have to wait until a single parity disk can accept the data.

Figure 1-21: Parity Locations in a RAID-5 Model (in each of stripes 1 through 4, the parity stripe unit occupies a different position alongside the data stripe units)

RAID-5 and how it is implemented by VxVM is described in "Volume Manager RAID-5 Arrays" on page 31.
support the full width of a parity stripe. The figure, "Traditional RAID-5 Array," shows the row and column arrangement of a traditional RAID-5 array.

Figure 1-22: Traditional RAID-5 Array (stripes 1 and 3 occupy row 0, and stripe 2 occupies row 1, across columns 0 through 3)

This traditional array structure supports growth by adding more rows per column.
units are used for each column. For RAID-5, the default stripe unit size is 16 kilobytes. See "Striping (RAID-0)" on page 21 for further information about stripe units.

Figure 1-23: Volume Manager RAID-5 Array (stripes 1 and 2 are laid out across subdisks in columns 0 through 3)

NOTE: Mirroring of RAID-5 volumes is not currently supported.
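As an illustrative sketch, the following command creates a RAID-5 volume with four columns (raidvol is a hypothetical volume name; by default, vxassist also creates a RAID-5 log for the volume):

# vxassist -g rootdg make raidvol 10g layout=raid5 ncol=4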
Understanding VERITAS Volume Manager Volume Layouts in VxVM Left-symmetric layout stripes both data and parity across columns, placing the parity in a different column for every stripe of data. The first parity stripe unit is located in the rightmost column of the first stripe. Each successive parity stripe unit is located in the next stripe, shifted left one column from the previous parity stripe unit location.
failure, the data for each stripe can be restored by XORing the contents of the remaining columns' data stripe units against their respective parity stripe units. For example, if a disk corresponding to the whole or part of the far left column fails, the volume is placed in a degraded mode.
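To illustrate the recovery arithmetic with a hypothetical three-column stripe, if the parity stripe unit is computed as P = D1 XOR D2 XOR D3, then a lost stripe unit D1 can be reconstructed as D1 = P XOR D2 XOR D3, because applying XOR twice restores the original value.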
complete. However, only the data write to disk A is complete. The parity write to disk C is incomplete, which would cause the data on disk B to be reconstructed incorrectly.

Figure 1-25: Incomplete Write (the data write to disk A completed, the parity write to disk C did not, and the data on disk B is corrupted as a result)

This failure can be avoided by logging all data and parity writes before committing them to the array.
Understanding VERITAS Volume Manager Volume Layouts in VxVM Underlying volumes in the “Managed by VxVM” area are used exclusively by VxVM and are not designed for user manipulation. You cannot detach a layered volume or perform any other operation on the underlying volumes by manipulating the internal structure.
NOTE: The VERITAS Enterprise Administrator (VEA) terms a striped-mirror as Striped-Pro, and a concatenated-mirror as Concatenated-Pro.
Online Relayout

NOTE: You may need an additional license to use this feature.

Online relayout allows you to convert between storage layouts in VxVM, with uninterrupted data access. Typically, you would do this to change the redundancy or performance characteristics of a volume. VxVM adds redundancy to storage either by duplicating the data (mirroring) or by adding parity (RAID-5).
Understanding VERITAS Volume Manager Online Relayout The transformation is done by moving one portion of data at a time in the source layout to the destination layout. Data is copied from the source volume to the temporary area, and data is removed from the source volume storage area in portions. The source volume storage area is then transformed to the new layout, and the data saved in the temporary area is written back to the new layout.
The following are examples of operations that you can perform using online relayout:

• Change a RAID-5 volume to a concatenated, striped, or layered volume (remove parity). See Figure 1-28, "Example of Relayout of a RAID-5 Volume to a Striped Volume," below. Note that removing parity (shown by the shaded area in the figure) decreases the overall storage space that the volume requires.
• Change the column stripe width in a volume. See Figure 1-31, "Example of Increasing the Stripe Width for the Columns in a Volume," below.

For details of how to perform online relayout operations, see "Performing Online Relayout" on page 304.
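As a hedged sketch of such an operation, the following commands start a relayout of a hypothetical volume vol02 to a two-column striped layout with a 128-kilobyte stripe width, and then check its progress:

# vxassist -g rootdg relayout vol02 layout=stripe ncol=2 stripeunit=128k
# vxrelayout -g rootdg status vol02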
Table 1-2: Supported Relayout Transformations for Layered Concatenated-Mirror Volumes

Relayout from concat-mirror:

• to concat: No. Use vxassist convert, and then remove unwanted mirrors from the resulting mirrored-concatenated volume instead.
• to concat-mirror: No.
• to mirror-concat: No. Use vxassist convert instead.
• to mirror-stripe: No. Use vxassist convert after relayout to striped-mirror volume instead.
• to raid5: Yes.
• to stripe: Yes.
Table 1-3: Supported Relayout Transformations for RAID-5 Volumes

Relayout from raid5:

• to stripe-mirror: Yes. The stripe width and number of columns may also be changed.

Table 1-4: Supported Relayout Transformations for Mirrored-Concatenated Volumes

Relayout from mirror-concat:

• to concat: No. Remove unwanted mirrors instead.
• to concat-mirror: No. Use vxassist convert instead.
• to mirror-concat: No.
• to mirror-stripe: No.
Table 1-5: Supported Relayout Transformations for Mirrored-Stripe Volumes (Continued)

Relayout from mirror-stripe:

• to raid5: Yes. The stripe width and number of columns may be changed.
• to stripe: Yes. The stripe width or number of columns must be changed.
• to stripe-mirror: Yes. The stripe width or number of columns must be changed. Otherwise, use vxassist convert.
• Online relayout cannot create a non-layered mirrored volume in a single step. It always creates a layered mirrored volume even if you specify a non-layered mirrored layout, such as mirror-stripe or mirror-concat. Use the vxassist convert command to turn the layered mirrored volume that results from a relayout into a non-layered volume. See "Converting Between Layered and Non-Layered Volumes" on page 308 for more information.
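As an illustrative sketch, the following command converts the layered striped-mirror volume that results from such a relayout into a non-layered mirrored-stripe volume (vol03 is a hypothetical volume name):

# vxassist -g rootdg convert vol03 layout=mirror-stripe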
You can determine the transformation direction by using the vxrelayout status volume command. These transformations are protected against I/O failures if there is sufficient redundancy and space to move the data.

Transformations and Volume Length

Some layout transformations can cause the volume length to increase or decrease.
Volume Resynchronization

When storing data redundantly and using mirrored or RAID-5 volumes, VxVM ensures that all copies of the data match exactly. However, under certain conditions (usually due to complete system failures), some redundant data on a volume can become inconsistent or unsynchronized. The mirrored data is not exactly the same as the original data.
Understanding VERITAS Volume Manager Volume Resynchronization Resynchronization Process The process of resynchronization depends on the type of volume. RAID-5 volumes that contain RAID-5 logs can “replay” those logs. If no logs are available, the volume is placed in reconstruct-recovery mode and all parity is regenerated. For mirrored volumes, resynchronization is done by placing the volume in recovery mode (also called read-writeback recovery mode).
Dirty Region Logging (DRL)

NOTE: You may need an additional license to use this feature.

Dirty region logging (DRL), if enabled, speeds recovery of mirrored volumes after a system crash. DRL keeps track of the regions that have changed due to I/O writes to a mirrored volume. DRL uses this information to recover only those portions of the volume that need to be recovered.
subdisk is associated with one plex of the volume. Only one log subdisk can exist per plex. If the plex contains only a log subdisk and no data subdisks, that plex is referred to as a log plex. The log subdisk can also be associated with a regular plex that contains data subdisks. In that case, the log subdisk risks becoming unavailable if the plex must be detached due to the failure of one of its data subdisks.
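As a minimal sketch, the following command adds a dirty region log to an existing mirrored volume (vol01 is a hypothetical volume name; logtype=drl selects a DRL rather than a RAID-5 log):

# vxassist -g rootdg addlog vol01 logtype=drl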
Volume Snapshots

The volume snapshot model is shown in Figure 1-32, "Snapshot Creation and the Backup Cycle." This figure also shows the transitions that are supported by the snapback and snapclear operations of vxassist.
The command, vxassist snapback, can be used to return snapshot plexes to the original volume from which they were snapped, and to resynchronize the data in the snapshot mirrors from the data in the original volume. This enables you to refresh the data in a snapshot after each time that you use it to make a backup.
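The following hedged sketch shows a typical backup cycle for a hypothetical volume vol01: create and synchronize a snapshot mirror, break it off as the snapshot volume snapvol01, and later merge and resynchronize it with the original volume:

# vxassist -g rootdg snapstart vol01
# vxassist -g rootdg snapshot vol01 snapvol01
# vxassist -g rootdg snapback snapvol01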
FastResync

NOTE: You may need an additional license to use this feature.

The FastResync feature (previously called fast mirror resynchronization or FMR) performs quick and efficient resynchronization of stale mirrors (a mirror that is not synchronized). This increases the efficiency of the VxVM snapshot mechanism, and improves the performance of operations such as backup and decision support applications.
Understanding VERITAS Volume Manager FastResync Once FastResync has been enabled on a volume, it does not alter how you administer mirrors. The only visible effect is that repair operations conclude more quickly. • FastResync allows you to refresh and re-use snapshots rather than discard them. You can quickly re-associate (snapback) snapshot plexes with their original volumes.
Understanding VERITAS Volume Manager FastResync Persistent FastResync can also track the association between volumes and their snapshot volumes after they are moved into different disk groups. When the disk groups are rejoined, this allows the snapshot plexes to be quickly resynchronized. This ability is not supported by Non-Persistent FastResync. See “Reorganizing the Contents of Disk Groups” on page 152 for details.
Understanding VERITAS Volume Manager FastResync Figure 1-33, “Mirrored Volume with Persistent FastResync Enabled,” shows an example of a mirrored volume with two plexes on which Persistent FastResync is enabled. Associated with the volume are a DCO object and a DCO volume with two plexes.
Understanding VERITAS Volume Manager FastResync Multiple snapshot plexes and associated DCO plexes may be created in the volume by re-running the snapstart operation. You can create up to a total of 32 plexes (data and log) in a volume. A snapshot volume can now be created from a snapshot plex by running the snapshot operation on the volume.
Understanding VERITAS Volume Manager FastResync See “Merging a Snapshot Volume (snapback)” on page 300, “Dissociating a Snapshot Volume (snapclear)” on page 301, and the vxassist(1M) manual page for more information.
Understanding VERITAS Volume Manager FastResync volume is the name of the volume being snapshotted. This default can be overridden by using the option -o name=pattern, as described on the vxassist(1M) manual page. To snapshot all the volumes in a single disk group, specify the option -o allvols to vxassist. However, this fails if any of the volumes in the disk group do not have a complete snapshot plex. It is also possible to take several snapshots of the same volume.
Understanding VERITAS Volume Manager FastResync area of the volume is marked as “dirty” so that this area is resynchronized. The snapback operation fails if it attempts to create an incomplete snapshot plex. In such cases, you must grow the replica volume, or the original volume, before invoking snapback. Growing the two volumes separately can lead to a snapshot that shares physical disks with another mirror in the volume. To prevent this, grow the volume after the snapback command is complete.
replica. It is safe to perform these operations after the snapshot is completed. For more information, see the vxvol(1M), vxassist(1M), and vxplex(1M) manual pages.
SmartSync Recovery Accelerator

The SmartSync feature of Volume Manager increases the availability of mirrored volumes by resynchronizing only changed data. (The process of resynchronizing mirrored databases is also sometimes referred to as resilvering.) SmartSync reduces the time required to restore consistency, freeing more I/O bandwidth for business-critical applications.
Understanding VERITAS Volume Manager SmartSync Recovery Accelerator Because the database keeps its own logs, it is not necessary for VxVM to do logging. Data volumes should be configured as mirrored volumes without dirty region logs. In addition to improving recovery time, this avoids any run-time I/O overhead due to DRL which improves normal database write access. Redo Log Volume Configuration A redo log is a log of changes to the database data.
Hot-Relocation

NOTE: You may need an additional license to use this feature.

Hot-relocation is a feature that allows a system to react automatically to I/O failures on redundant objects (mirrored or RAID-5 volumes) in VxVM and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks. The subdisks are relocated to disks designated as spare disks and/or free space within the disk group.
2 Administering Disks

Introduction

This chapter describes the operations for managing disks used by the Volume Manager (VxVM). This includes placing disks under VxVM control, initializing disks, mirroring the root disk, and removing and replacing disks.

NOTE: Most VxVM commands require superuser or equivalent privileges.

NOTE: Rootability, which puts the root disk under VxVM control and allows it to be mirrored, is supported for this release of VxVM for HP-UX.
Disk Devices When performing disk administration, it is important to understand the difference between a disk name and a device name. When a disk is placed under VxVM control, a VM disk is assigned to it. You can define a symbolic disk name (also known as a disk media name) to refer to a VM disk for the purposes of administration. A disk name can be up to 31 characters long. If you do not assign a disk name, it defaults to disk## if the disk is being added to rootdg (where ## is a sequence number).
NOTE: The s2 component of the device name is required to specify the HP-UX partition of an EFI-formatted physical disk that is used to boot an HP Itanium 2 based system. The root disk on an HP IPF system is divided into partitions where the c#t#d# device contains the EFI header information, c#t#d#s1 is an EFI file system that contains the Itanium boot loader, and c#t#d#s2 is an HP-UX partition.
— Non-fabric disks are named using the c#t#d# format. — Fabric disks are named using the fabric_# format. See “Changing the Disk-Naming Scheme” on page 76 for details of how to switch between the two naming schemes. To display the native OS device names of a VM disk (such as disk01), use the following command: # vxdisk list diskname For information on how to rename an enclosure, see “Renaming an Enclosure” on page 126.
• simple—the public and private regions are on the same disk area (with the public area following the private area). Typically, most or all disks on your system are configured as this disk type. • nopriv—there is no private region (only a public region for allocating subdisks). This is the simplest disk type consisting only of space for allocating subdisks.
Configuring Newly Added Disk Devices When you physically connect new disks to a host or when you zone new fibre channel devices to a host, you can use the vxdctl command to rebuild the volume device node directories and to update the DMP internal database to reflect the new state of the system. To reconfigure the DMP database, first run ioscan followed by insf to make the operating system recognize the new disks, and then invoke the vxdctl enable command. See the vxdctl(1M) manual page for more information.
This command scans all of the disk devices and their attributes, updates the VxVM device list, and reconfigures DMP with the new device database. There is no need to reboot the host. Removing Support for a Disk Array To remove support for the vrtsda disk array, use the following command: # swremove vrtsda If the arrays remain physically connected to the host after support has been removed, they are listed in the OTHER_DISKS category, and the volumes remain available.
NOTE: Use this command to obtain values for the vid and pid attributes that are used with other forms of the vxddladm command.

Excluding Support for a Disk Array

To exclude a particular array library from participating in device discovery, use the following command:

# vxddladm excludearray libname=libvxenc.sl

You can also exclude support for a disk array from a particular vendor, as shown in this example:

# vxddladm excludearray vid=ACME pid=X1

This array is also excluded from device discovery.
Adding Support for Disks in the JBOD Category To add support for disks that are in the JBOD category, use the vxddladm command with the addjbod keyword.
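As a hedged sketch, the following command adds JBOD support for disks matching the hypothetical vendor and product IDs used in the example above (obtain real vid and pid values as described earlier):

# vxddladm addjbod vid=ACME pid=X1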
Placing Disks Under VxVM Control

When you add a disk to a system that is running VxVM, you need to put the disk under VxVM control so that VxVM can control the space allocation on the disk. Unless another disk group is specified, VxVM places new disks in the default disk group, rootdg. The method by which you place a disk under VxVM control depends on the circumstances:

• If the disk is new, it must be initialized and placed under VxVM control.
To exclude disks, list the names of the disks to be excluded in the file /etc/vx/disks.exclude before the initialization. The following is an example of the contents of a disks.exclude file: c0t1d0 You can exclude all disks on specific controllers from initialization by listing those controllers in the file /etc/vx/cntrls.exclude. The following is an example of an entry in a cntrls.
Changing the Disk-Naming Scheme You can either use enclosure-based naming for disks or the traditional naming scheme (such as c#t#d#). Select menu item 20 from the vxdiskadm main menu to change the disk-naming scheme that you want VxVM to use. Selecting this option displays the following screen: Change the disk naming scheme Menu: VolumeManager/Disk/NamingScheme Use this screen to change the disk naming scheme (from the c#t#d# format to the enclosure based format and vice versa).
Issues Regarding Persistent Simple/Nopriv Disks with Enclosure-Based Naming If you change from the c#t#d# based naming scheme to the enclosure-based naming scheme, persistent simple or nopriv disks may be put in the “error” state and cause VxVM objects on those disks to fail.
Persistent Simple/Nopriv Disks in the Root Disk Group If all persistent simple and nopriv disks in rootdg go into the error state and the vxconfigd daemon is disabled after the naming scheme change, perform the following steps: Step 1. Use vxdiskadm to change back to the c#t#d# naming scheme. Step 2. Either shut down and reboot the system, or enter the following command to restart the VxVM configuration daemon: # vxconfigd -kr reset Step 3.
Installing and Formatting Disks Depending on the hardware capabilities of your disks and of your system, you may either need to shut down and power off your system before installing the disks, or you may be able to hot-insert the disks into the live system. Many operating systems can detect the presence of the new disks on being rebooted. If the disks are inserted while the system is live, you may need to enter an operating system-specific command to notify the system.
Adding a Disk to VxVM

Formatted disks being placed under VxVM control may be new or previously used outside VxVM. The set of disks can consist of all disks on the system, all disks on a controller, selected disks, or a combination of these. Depending on the circumstances, all of the disks may not be processed in the same way.

CAUTION: Initialization does not preserve data on disks.
Select disk devices to add: [<pattern-list>,all,list,q,?]

<pattern-list> can be a single disk, or a series of disks and/or controllers (with optional targets). If <pattern-list> consists of multiple items, separate them using white space, for example:

c3t0d0 c3t1d0 c3t2d0 c3t3d0

specifies four disks at separate target IDs on controller 3.
Step 4. At the following prompt, specify the disk group to which the disk should be added, none to reserve the disks for future use, or press Return to accept rootdg:

You can choose to add these disks to an existing disk group, a new disk group, or you can leave these disks available for use by future add or replacement operations. To create a new disk group, select a disk group name that does not yet exist. To leave the disks available for future use, specify a disk group name of "none".
The following disk device appears to contain a currently unmounted file system.

list of device names

Are you sure you want to destroy these file systems [y,n,q,?] (default: n) y

vxdiskadm asks you to confirm that the devices are to be reinitialized before proceeding:

Reinitialize these devices? [y,n,q,?] (default: n) y

Initializing device device name.
Adding disk device device name to disk group disk group name with disk name disk name.
...
Using vxdiskadd to Place a Disk Under Control of VxVM As an alternative to vxdiskadm, you can use the vxdiskadd command to put a disk under VxVM control. For example, to initialize the second disk on the first controller, use the following command: # vxdiskadd c0t1d0 The vxdiskadd command examines your disk to determine whether it has been initialized and also checks for disks that have been added to VxVM, and for other conditions.
Rootability

Rootability indicates that the volumes containing the root file system and the system swap area are under VxVM control. Without rootability, VxVM is usually started after the operating system kernel has passed control to the initial user mode process at boot time. However, if the volume containing the root file system is under VxVM control, the kernel starts portions of VxVM before starting the first user mode process.
• Any volume with an entry in the LIF LABEL record must be contiguous. It can have only one subdisk, and it cannot span to another disk. • The rootvol and swapvol volumes must have the special volume usage types root and swap respectively. Root Disk Mirrors All the volumes on a VxVM root disk may be mirrored. The simplest way to achieve this is to mirror the VxVM root disk onto an identically sized or larger physical disk.
When the kernel has passed control to the initial user procedure, the VxVM configuration daemon (vxconfigd) is started. vxconfigd reads the configuration of the volumes in the rootdg disk group and loads them into the kernel. The temporary root and swap volumes are then discarded. Further I/O for these volumes is performed using the VxVM configuration objects that were loaded into the kernel. Setting up a VxVM Root Disk and Mirror NOTE These procedures should be carried out at init level 1.
CAUTION Only create a VxVM root disk if you also intend to mirror it. There is no benefit in having a non-mirrored VxVM root disk for its own sake. The next example uses the same command and additionally specifies the -m option to set up a root mirror on disk c1t1d0: # /etc/vx/bin/vxcp_lvmroot -m c1t1d0 -R 30 -v -b c0t4d0 In this example, the -b option to vxcp_lvmroot sets c0t4d0 as the primary boot device and c1t1d0 as the alternate boot device.
with changes that you make to the VxVM root disk. See “Creating an LVM Root Disk from a VxVM Root Disk” on page 89 for a description of how to create a bootable LVM root disk from the VxVM root disk. For more information, see the vxcp_lvmroot(1M), vxrootmir(1M), vxdestroy_lvmroot(1M) and vxres_lvmroot (1M) manual pages. Creating an LVM Root Disk from a VxVM Root Disk NOTE These procedures should be carried out at init level 1.
Adding Swap Disks to a VxVM Rootable System On occasion, you may need to increase the amount of swap space for an HP-UX system. If your system has a VxVM root disk, use the procedure described below. Step 1. Create a new volume that is to be used for the swap area. The following example shows how to set up a non-mirrored 1GB simple volume: # vxassist -g rootdg make swapvol2 1g Step 2.
Removing Disks You can remove a disk from a system and move it to another system if the disk is failing or has failed. Before removing the disk from the current system, you must: Step 1. Stop all activity by applications to volumes that are configured on the disk that is to be removed. Unmount file systems and shut down databases that are configured on the volumes. Step 2. Use the following command to stop the volumes: # vxvol stop volume1 volume2 ... Step 3.
Requested operation is to remove disk disk01 from group rootdg.
Continue with operation? [y,n,q,?] (default: y)

The vxdiskadm utility removes the disk from the disk group and displays the following success message:

Removal of disk disk01 is complete.
You can now remove the disk or leave it on your system as a replacement.
Removing a Disk with No Subdisks

To remove a disk that contains no subdisks from its disk group, run the vxdiskadm program and select item 2 (Remove a disk) from the main menu, and respond to the prompts as shown in this example to remove disk02:

Enter disk name [<disk>,list,q,?] disk02
Requested operation is to remove disk disk02 from group rootdg.
Continue with operation? [y,n,q,?] (default: y) y
Removal of disk disk02 is complete.
Removing and Replacing Disks

If failures are starting to occur on a disk, but the disk has not yet failed completely, you can replace the disk. This involves detaching the failed or failing disk from its disk group, followed by replacing the failed or failing disk with a new one. Replacing the disk can be postponed until a later date if necessary. To replace a disk, use the following procedure:
To remove the disk, causing the named volumes to be disabled and data to be lost when the disk is replaced, enter y or press Return. To abandon removal of the disk, and back up or move the data associated with the volumes that would otherwise be disabled, enter n or q and press Return.
At the following prompt, indicate whether you want to remove another disk (y) or return to the vxdiskadm main menu (n):

Remove another disk? [y,n,q,?] (default: n)

NOTE: If removing a disk causes one or more volumes to be disabled, see the section, "Restarting a Disabled Volume" in the chapter "Recovery from Hardware Failure" in the VERITAS Volume Manager Troubleshooting Guide, for information on how to restart a disabled volume so that you can restore its data.
The following devices are available as replacements:

c0t1d0

You can choose one of these disks to replace disk02. Choose "none" to initialize another disk to replace disk02.

Choose a device, or select "none" [<device>,none,q,?] (default: c0t1d0)
Enabling a Physical Disk

If you move a disk from one system to another during normal system operation, VxVM does not recognize the disk automatically. The enable disk task enables VxVM to identify the disk and to determine if this disk is part of a disk group. Also, this task re-enables access to a disk that was disabled by either the disk group deport task or the disk device disable (offline) task. To enable a disk, use the following procedure:
Taking a Disk Offline

There are instances when you must take a disk offline. If a disk is corrupted, you must disable the disk before removing it. You must also disable a disk before moving the physical disk device to another location to be connected to another system.

NOTE: Taking a disk offline is only useful on systems that support hot-swap removal and insertion of disks without needing to shut down and reboot the system.
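Although the vxdiskadm menus are the usual route for this task, the vxdisk utility can also take a device offline directly; the following is a minimal sketch for a hypothetical device:

# vxdisk offline c1t1d0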
Renaming a Disk

If you do not specify a VM disk name, VxVM gives the disk a default name when you add the disk to VxVM control. The VM disk name is used by VxVM to identify the location of the disk or the disk type.
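As an illustrative sketch, the following command renames a VM disk (the old and new disk names are hypothetical):

# vxedit rename disk01 disk03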
Reserving Disks

By default, the vxassist command allocates space from any disk that has free space. You can reserve a set of disks for special purposes, such as to avoid general use of a particularly slow or a particularly fast disk.
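As a hedged sketch, the following commands reserve a disk so that vxassist does not allocate from it by default, and later release the reservation (disk03 is a hypothetical disk name):

# vxedit set reserve=on disk03
# vxedit set reserve=off disk03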
Displaying Disk Information

Before you use a disk, you need to know if it has been initialized and placed under VxVM control. You also need to know if the disk is part of a disk group because you cannot create volumes on a disk that is not part of a disk group. The vxdisk list command displays device names for all recognized disks, the disk names, the disk group names associated with each disk, and the status of each disk.
To display disk information, use the following procedure:

Step 1. Start the vxdiskadm program, and select list (List disk information) from the main menu.

Step 2. At the following display, enter the address of the disk you want to see, or enter all for a list of all disks:

List disk information
Menu: VolumeManager/Disk/ListDisk

Use this menu operation to display a list of disks.
3 Administering Dynamic Multipathing (DMP)
Introduction

NOTE: You may need an additional license to use this feature.

The Dynamic Multipathing (DMP) feature of VERITAS Volume Manager (VxVM) provides greater reliability and performance by using path failover and load balancing. This feature is available for multiported disk arrays from various vendors. See the VERITAS Volume Manager Hardware Notes for information about supported disk arrays.
Administering Dynamic Multipathing (DMP) Introduction separate group of LUNs. If a single LUN in the primary controller’s LUN group fails, all LUNs in that group fail over to the secondary controller’s passive LUN group. VxVM uses DMP metanodes to access disk devices connected to the system. For each disk in a supported array, DMP maps one metanode to the set of paths that are connected to the disk. Additionally, DMP associates the appropriate multipathing policy for the disk array with the metanode.
Enclosure in a SAN Environment," shows that two paths, c1t99d0 and c2t99d0, exist to a single disk in the enclosure, but VxVM uses the single DMP metanode, enc0_0, to access it.
DMP is also informed when you repair or restore a connection, and when you add or remove devices after the system has been fully booted (provided that the operating system recognizes the devices correctly).

Load Balancing

DMP uses the balanced path mechanism to provide load balancing across paths for active/active disk arrays. Load balancing maximizes I/O throughput by using the total bandwidth of all available paths.
Disabling and Enabling Multipathing for Specific Devices

You can use vxdiskadm menu options 17 and 18 to disable or enable multipathing. These menu options also allow you to exclude devices from or include devices in the view of VxVM. For more information, see "Disabling Multipathing and Making Devices Invisible to VxVM" on page 110 and "Enabling Multipathing and Making Devices Visible to VxVM" on page 115.
4 Suppress all but one paths to a disk
5 Prevent multipathing of all disks on a controller by VxVM
6 Prevent multipathing of a disk by VxVM
7 Prevent multipathing of disks by specifying a VID:PID combination
8 List currently suppressed/non-multipathed devices
? Display help about menu
?? Display help about the menuing system
q Exit from menus

Select an operation to perform:

• Select option 1 to exclude all paths through the specified controller from the view of VxVM.
Administering Dynamic Multipathing (DMP) Disabling and Enabling Multipathing for Specific Devices • NOTE Select option 3 to exclude disks from the view of VxVM that match a specified Vendor ID and Product ID. This option requires a reboot of the system. Exclude VID:PID from VxVM Menu: VolumeManager/Disk/ExcludeDevices/VIDPID-VXVM Use this operation to exclude disks returning a specified combination from VxVM.
Administering Dynamic Multipathing (DMP) Disabling and Enabling Multipathing for Specific Devices Exclude all but one paths to a disk Menu: VolumeManager/Disk/ExcludeDevices/PATHGROUP-VXVM Use this operation to exclude all but one paths to a disk. In case of disks which are not multipathed by vxdmp, VxVM will see each path as a disk. In such cases, creating a pathgroup of all paths to the disk will ensure that only one of the paths from the group is made visible to VxVM.
You can specify a pathname or a pattern at the prompt. Here are some path selection examples:

all: all paths
c3t4d2: a single path
list: list all paths on the system

Enter a pathname or pattern: [<pattern>,all,list,list-exclude,q,?]

If a path is specified, the corresponding disks are claimed in the OTHER_DISKS category and hence not multipathed.
Some examples of VID:PID specification are:

all - Exclude all disks
aaa:123 - Exclude all disks having VID ‘aaa’ and PID ‘123’
aaa*:123 - Exclude all disks having VID starting with ‘aaa’ and PID ‘123’
aaa:123* - Exclude all disks having VID ‘aaa’ and PID starting with ‘123’
aaa:* - Exclude all disks having VID ‘aaa’ and any PID

Enter a VID:PID combination: [<pattern>,all,list,list-exclude,q,?]

All disks returning the specified VID:PID combination will be excluded from the view of VxVM.
?? Display help about the menuing system
q Exit from menus

Select an operation to perform:

• Select option 1 to make all paths through a specified controller visible to VxVM.

Re-include controllers in VxVM
Menu: VolumeManager/Disk/IncludeDevices/CTLR-VXVM

Use this operation to make all paths through a controller visible to VxVM again.
Administering Dynamic Multipathing (DMP) Disabling and Enabling Multipathing for Specific Devices combination visible to VxVM again. As a result of this operation, disks that return VendorID:ProductID matching the specified combination will be made visible to VxVM again. You can specify a VID:PID combination at the prompt. The specification can be as follows: VID:PID where VID stands for Vendor ID PID stands for Product ID (The command vxdmpinq in /etc/vx/diag.
Administering Dynamic Multipathing (DMP) Disabling and Enabling Multipathing for Specific Devices • NOTE Select option 5 to enable multipathing for all disks that have paths through the specified controller. This option requires a reboot of the system. Re-include controllers in DMP Menu: VolumeManager/Disk/IncludeDevices/CTLR-DMP Use this operation to make vxdmp multipath all disks on a controller again.
Administering Dynamic Multipathing (DMP) Disabling and Enabling Multipathing for Specific Devices • NOTE Select option 7 to enable multipathing for disks that match a specified Vendor ID and Product ID. This option requires a reboot of the system. Make VID:PID visible to DMP Menu: VolumeManager/Disk/IncludeDevices/VIDPID-DMP Use this operation to make vxdmp multipath disks returning a specified VendorID:ProductID combination again.
Enabling and Disabling Input/Output (I/O) Controllers

DMP allows you to turn off I/O to a host I/O controller so that you can perform administrative operations. This feature can be used for maintenance of controllers attached to the host or of disk arrays supported by VxVM. I/O operations to the host I/O controller can be turned back on after the maintenance task is completed.
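As a minimal sketch, the following commands disable and later re-enable I/O through a hypothetical host controller c1:

# vxdmpadm disable ctlr=c1
# vxdmpadm enable ctlr=c1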
Administering Dynamic Multipathing (DMP) Displaying DMP Database Information Displaying DMP Database Information You can use the vxdmpadm command to list DMP database information and perform other administrative tasks. This command allows you to list all controllers that are connected to disks, and other related information that is stored in the DMP database. You can use this information to locate system hardware, and to help you decide which controllers need to be enabled or disabled.
Administering Dynamic Multipathing (DMP) Displaying Multipaths to a VM Disk Displaying Multipaths to a VM Disk The vxdisk command is used to display the multipathing information for a particular metadevice. The metadevice is a device representation of a particular physical disk having multiple physical paths from the I/O controller of the system. In VxVM, all the physical disks in the system are represented as metadevices with one or more physical paths.
numpaths: 2
c1t0d3 state=enabled type=secondary
c4t1d3 state=disabled type=primary
In the Multipathing information section of this output, the numpaths line shows that there are 2 paths to the device, and the following two lines show that the path to c4t1d3 has failed (state=disabled). The type field is shown for disks on active/passive type disk arrays such as Nike, DG Clariion, and Hitachi DF350.
Administering DMP Using vxdmpadm
The vxdmpadm utility is a command line administrative interface to the DMP feature of VxVM. You can use the vxdmpadm utility to perform the tasks described in the following sections.
The specified DMP node must be a valid node in the /dev/vx/rdmp directory.
Listing Information About Enclosures
To display the attributes of a specified enclosure, use the following command:
# vxdmpadm listenclosure enc0
The following command lists attributes for all enclosures in a system:
# vxdmpadm listenclosure all
Renaming an Enclosure
The vxdmpadm setattr command can be used to assign a meaningful name to an existing enclosure, for example:
# vxdmpadm setattr enclosure enc0 name=GRP1
This example changes the name of the enclosure from enc0 to GRP1.
# vxdmpadm start restore policy=check_all [interval=seconds]
• check_alternate
The restore daemon checks that at least one alternate path is healthy. It generates a notification if this condition is not met. This policy avoids inquiry commands on all healthy paths, and is less costly than check_all in cases where a large number of paths are available. This policy is the same as check_all if there are only two paths per DMP node.
NOTE To change the interval or policy, you must first stop the restore daemon, and then restart it with the new attributes. See the vxdmpadm(1M) manual page for more information about DMP restore policies.
Stopping the DMP Restore Daemon
Use the following command to stop the DMP restore daemon:
# vxdmpadm stop restore
NOTE Automatic path failback stops if the restore daemon is stopped.
DMP in a Clustered Environment
NOTE You may need an additional license to use this feature.
In a clustered environment where active/passive type disk arrays are shared by multiple hosts, all hosts in the cluster should access the disk via the same physical path. If a disk from an active/passive type shared disk array is accessed via multiple paths simultaneously, it could lead to severe degradation of I/O performance.
The following error message is displayed:
vxvm: vxdmpadm: ERROR: operation not supported for shared disk arrays.
Operation of the DMP Restore Daemon with Shared Disk Groups
The DMP restore daemon does not automatically fail back I/O requests for a disk in an active/passive disk array if that disk is a part of a shared disk group.
4 Creating and Administering Disk Groups Introduction This chapter describes how to create and manage disk groups. Disk groups are named collections of disks that share a common configuration. Volumes are created within a disk group and are restricted to using disks within that disk group. A system with Volume Manager (VxVM) installed has a default disk group configured, rootdg. By default, operations are directed to the rootdg disk group.
The copy size in blocks can be obtained from the output of the command vxdg list diskgroup as the value of the permlen parameter on the line starting with the string “config:”. This value is the smallest of the len values for all copies of the configuration database in the disk group. The amount of remaining free space in the configuration database is shown as the value of the free parameter. An example is shown in “Displaying Disk Group Information” on page 134.
Specifying a Disk Group to Commands
Many VxVM commands allow you to specify a disk group using the -g option. For example, to create a volume in disk group mktdg, use the following command:
# vxassist -g mktdg make mktvol 50m
The block special device for this volume is:
/dev/vx/dsk/mktdg/mktvol
The disk group does not have to be specified if the object names are unique. Most VxVM commands use object names specified on the command line to determine the disk group for the operation.
Displaying Disk Group Information
To display information on existing disk groups, enter the following command:
# vxdg list
VxVM returns the following listing of current disk groups:
NAME      STATE      ID
rootdg    enabled    730344554.1025.tweety
newdg     enabled    731118794.1213.tweety
To display more detailed information on a specific disk group (such as rootdg), use the following command:
# vxdg list rootdg
The output from this command is similar to the following:
Group: rootdg
dgid: 962910960.1025.bass
import-id: 0.
Displaying Free Space in a Disk Group Before you add volumes and file systems to your system, make sure you have enough free disk space to meet your needs.
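The command examples were lost from this page; as a sketch, free space is typically displayed with the free operation of the vxdg command described in the vxdg(1M) manual page (the disk group name is illustrative):
# vxdg free
# vxdg -g mktdg free
The output lists, for each disk, the device name and the offset and length (in sectors) of each free region.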
Creating a Disk Group Data related to a particular set of applications or a particular group of users may need to be made accessible on another system. Examples of this are: • A system has failed and its data needs to be moved to other systems. • The work load must be balanced across a number of systems. It is important that you locate data related to particular applications or users on an identifiable set of disks.
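A disk group is created by placing at least one initialized disk into it with the vxdg init command. The following is a minimal sketch (the disk group and disk media names are illustrative):
# vxdg init mktdg mktdg01=c1t0d0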
The disk specified by the device name, c1t0d0, must have been previously initialized with vxdiskadd or vxdiskadm, and must not currently belong to a disk group.
Adding a Disk to a Disk Group To add a disk to an existing disk group, use menu item 1 (Add or initialize one or more disks) of the vxdiskadm command, as described in “Adding a Disk to VxVM” on page 80. You can also use the vxdiskadd command to add a disk to a disk group, for example: # vxdiskadd c1t2d0 where c1t2d0 is the device name of a disk that is not currently assigned to a disk group.
Removing a Disk from a Disk Group A disk that contains no subdisks can be removed from its disk group with this command: # vxdg [-g groupname] rmdisk diskname where the disk group name is only specified for a disk group other than the default, rootdg.
If you choose y, then all subdisks are moved off the disk, if possible. Some subdisks may not be movable. The most common reasons why a subdisk may not be movable are as follows: • There is not enough space on the remaining disks. • Plexes or striped subdisks cannot be allocated on different disks from existing plexes or striped subdisks in the volume.
Deporting a Disk Group Deporting a disk group disables access to a disk group that is currently enabled (imported) by the system. Deport a disk group if you intend to move the disks in a disk group to another system. Also, deport a disk group if you want to use all of the disks remaining in a disk group for a new purpose. To deport a disk group, use the following procedure: Step 1. Stop all activity by applications to volumes that are configured in the disk group that is to be deported.
the system. Disable (offline) the indicated disks? [y,n,q,?] (default: n) y Step 6. At the following prompt, press Return to continue with the operation: Continue with operation? [y,n,q,?] (default: y) Once the disk group is deported, the vxdiskadm utility displays the following message: Removal of disk group newdg was successful. Step 7.
Importing a Disk Group Importing a disk group enables access by the system to a disk group. To move a disk group from one system to another, first disable (deport) the disk group on the original system, and then move the disk between systems and enable (import) the disk group. To import a disk group, use the following procedure: Step 1. Use the following command to ensure that the disks in the deported disk group are online: # vxdisk -s list Step 2.
Select another disk group? [y,n,q,?] (default: n)
Alternatively, you can use the vxdg command to import a disk group:
# vxdg import diskgroup
Renaming a Disk Group Only one disk group of a given name can exist per system. It is not possible to import or deport a disk group when the target system already has a disk group of the same name. To avoid this problem, VxVM allows you to rename a disk group during import or deport. For example, because every system running VxVM must have a single rootdg default disk group, importing or deporting rootdg across systems is a problem. There cannot be two rootdg disk groups on the same system.
# vxdg -tC -n newdg import diskgroup The -t option indicates a temporary import name, and the -C option clears import locks. The -n option specifies an alternate name for the rootdg being imported so that it does not conflict with the existing rootdg. diskgroup is the disk group ID of the disk group being imported (for example, 774226267.1025.tweety). If a reboot or crash occurs at this point, the temporarily imported disk group becomes unimported and requires a reimport. Step 3.
Moving Disks between Disk Groups
To move a disk between disk groups, remove the disk from one disk group and add it to the other. For example, to move the physical disk c0t3d0 (attached with the disk name disk04) from disk group rootdg and add it to disk group mktdg, use the following commands:
# vxdg rmdisk disk04
# vxdg -g mktdg adddisk mktdg02=c0t3d0
CAUTION This procedure saves neither the configurations nor the data on the disks.
You can also move a disk by using the vxdiskadm command.
Moving Disk Groups Between Systems An important feature of disk groups is that they can be moved between systems. If all disks in a disk group are moved from one system to another, then the disk group can be used by the second system. You do not have to re-specify the configuration. To move a disk group between systems, use the following procedure: Step 1.
CAUTION The purpose of the lock is to ensure that dual-ported disks (disks that can be accessed simultaneously by two systems) are not used by both systems at the same time. If two systems try to manage the same disks at the same time, the configuration information stored on the disk is corrupted. The disk and its data become unusable.
When you move disks from a system that has crashed, or that failed to detect the group before the disks were moved, the locks stored on the disks remain and must be cleared.
# vxdg -f import diskgroup
CAUTION Be careful when using the -f option. It can cause the same disk group to be imported twice from different sets of disks, causing the disk group to become inconsistent.
These operations can also be performed using the vxdiskadm utility. To import a disk group using vxdiskadm, select menu item 8 (Enable access to (import) a disk group). The vxdiskadm import operation checks for host import locks and prompts to see if you want to clear any that are found.
If you do not specify the base of the minor number range for a disk group, VxVM chooses one at random. The number chosen is at least 1000, is a multiple of 1000, and yields a usable range of 1000 device numbers. The chosen number also does not overlap within a range of 1000 of any currently imported disk groups, and it does not overlap any currently allocated volume device numbers. NOTE The default policy ensures that a small number of disk groups can be merged successfully between a set of machines.
Reorganizing the Contents of Disk Groups NOTE You may need an additional license to use this feature. There are several circumstances under which you might want to reorganize the contents of your existing disk groups: • To group volumes or disks differently as the needs of your organization change. For example, you might want to split disk groups to match the boundaries of separate departments, or to join disk groups when departments are merged.
• move—moves a self-contained set of VxVM objects between imported disk groups. This operation fails if it would remove all the disks from the source disk group. Volume states are preserved across the move. The move operation is illustrated in Figure 4-1, “Disk Group Move Operation,” below.
An existing deported disk group is destroyed if it has the same name as the target disk group (as is the case for the vxdg init command). The split operation is illustrated in Figure 4-2, “Disk Group Split Operation,” below.
• join—removes all VxVM objects from an imported disk group and moves them to an imported target disk group. The source disk group is removed when the join is complete. The join operation is illustrated in Figure 4-3, “Disk Group Join Operation,” below.
If the system crashes or a hardware subsystem fails, VxVM attempts to complete or reverse an incomplete disk group reconfiguration when the system is restarted or the hardware subsystem is repaired, depending on how far the reconfiguration had progressed.
• In a cluster environment, disk groups involved in a move or join must both be private or must both be shared.
The following sections describe how to use the vxdg command to reorganize disk groups. For more information about the vxdg command, see the vxdg(1M) manual page.
Make sure that the disks that contain the plexes of the DCO volume accompany their parent volume during the move. Use the vxprint command on a volume to examine the configuration of its associated DCO volume. Figure 4-4, “Examples of Disk Groups That Can and Cannot be Split,” illustrates some instances in which it is not possible to split a disk group because of the location of the DCO plexes.
For more information about relocating DCO plexes, see “Specifying Storage for DCO Plexes” on page 265.
Figure 4-4 Examples of Disk Groups That Can and Cannot be Split
[The figure shows two configurations of volume data plexes, a snapshot plex, and their DCO plexes.] In the first configuration, the disk group can be split because the DCO plexes are on the same disks as the data plexes and can therefore accompany their volumes. In the second configuration, the disk group cannot be split because the DCO plexes have been separated from their data plexes and so cannot accompany their volumes.
Moving Objects Between Disk Groups
To move a self-contained set of VxVM objects from an imported source disk group to an imported target disk group, use the following command:
# vxdg [-o expand] [-o override|verify] move sourcedg targetdg object ...
dg dg1        dg1      -        -        -  -      -  -
dm disk01     c0t1d0   -        17678493 -  -      -  -
dm disk05     c1t96d0  -        17678493 -  -      -  -
dm disk07     c1t99d0  -        17678493 -  -      -  -
dm disk08     c1t100d0 -        17678493 -  -      -  -
v  vol1       fsgen    ENABLED  2048     -  ACTIVE -  -
pl vol1-01    vol1     ENABLED  3591     -  ACTIVE -  -
sd disk01-01  vol1-01  ENABLED  3591     0  -      -  -
pl vol1-02    vol1     ENABLED  3591     -  ACTIVE -  -
sd disk05-01  vol1-02  ENABLED  3591     0  -      -  -
The following command moves the self-contained set of objects implied by specifying disk disk01 from disk group dg1 to rootdg:
# vxdg -o expand move dg1 rootdg disk01
dg dg1    dg1      -  -        -  -  -  -
dm disk07 c1t99d0  -  17678493 -  -  -  -
dm disk08 c1t100d0 -  17678493 -  -  -  -
The following commands would also achieve the same result:
# vxdg move dg1 rootdg disk01 disk05
# vxdg move dg1 rootdg vol1
Splitting Disk Groups
To remove a self-contained set of VxVM objects from an imported source disk group to a new target disk group, use the following command:
# vxdg [-o expand] [-o override|verify] split sourcedg targetdg object ...
dm disk08     c1t100d0 -        17678493 -  -      -  -
v  vol1       fsgen    ENABLED  2048     -  ACTIVE -  -
pl vol1-01    vol1     ENABLED  3591     -  ACTIVE -  -
sd disk01-01  vol1-01  ENABLED  3591     0  -      -  -
pl vol1-02    vol1     ENABLED  3591     -  ACTIVE -  -
sd disk05-01  vol1-02  ENABLED  3591     0  -      -  -
The following command removes disks disk07 and disk08 from rootdg to form a new disk group, dg1:
# vxdg -o expand split rootdg dg1 disk07 disk08
The moved volumes are initially disabled following the split. The output from vxprint after the split shows the new disk group, dg1:
v  vol1       fsgen    ENABLED  2048     -  ACTIVE -  -
pl vol1-01    vol1     ENABLED  3591     -  ACTIVE -  -
sd disk01-01  vol1-01  ENABLED  3591     0  -      -  -
pl vol1-02    vol1     ENABLED  3591     -  ACTIVE -  -
sd disk05-01  vol1-02  ENABLED  3591     0  -      -  -
Disk group: dg1
TY NAME   ASSOC    KSTATE LENGTH   PLOFFS STATE TUTIL0 PUTIL0
dg dg1    dg1      -      -        -      -     -      -
dm disk07 c1t99d0  -      17678493 -      -     -      -
dm disk08 c1t100d0 -      17678493 -      -     -      -
Joining Disk Groups
To remove all VxVM objects from an imported source disk group and move them to an imported target disk group, use the following command:
# vxdg [-o override|verify] join sourcedg targetdg
dg rootdg  rootdg   -       -        -      -      -  -
dm disk01  c0t1d0   -       17678493 -      -      -  -
dm disk02  c1t97d0  -       17678493 -      -      -  -
dm disk03  c1t112d0 -       17678493 -      -      -  -
dm disk04  c1t114d0 -       17678493 -      -      -  -
dm disk07  c1t99d0  -       17678493 -      -      -  -
dm disk08  c1t100d0 -       17678493 -      -      -  -
Disk group: dg1
TY NAME    ASSOC   KSTATE  LENGTH   PLOFFS STATE  TUTIL0 PUTIL0
dg dg1     dg1     -       -        -      -      -      -
dm disk05  c1t96d0 -       17678493 -      -      -      -
dm disk06  c1t98d0 -       17678493 -      -      -      -
v  vol1    fsgen   ENABLED 2048     -      ACTIVE -      -
pl vol1-01 vol1    ENABLED 3591     -      ACTIVE -      -
The following command joins disk group dg1 to rootdg:
# vxdg join dg1 rootdg
The output from vxprint after the join shows that disk group dg1 has been removed:
# vxprint
Disk group: rootdg
TY NAME      ASSOC    KSTATE  LENGTH   PLOFFS STATE  TUTIL0 PUTIL0
dg rootdg    rootdg   -       -        -      -      -      -
dm disk01    c0t1d0   -       17678493 -      -      -      -
dm disk02    c1t97d0  -       17678493 -      -      -      -
dm disk03    c1t112d0 -       17678493 -      -      -      -
dm disk04    c1t114d0 -       17678493 -      -      -      -
dm disk05    c1t96d0  -       17678493 -      -      -      -
dm disk06    c1t98d0  -       17678493 -      -      -      -
dm disk07    c1t99d0  -       17678493 -      -      -      -
dm disk08    c1t100d0 -       17678493 -      -      -      -
v  vol1      fsgen    ENABLED 2048     -      ACTIVE -      -
pl vol1-01   vol1     ENABLED 3591     -      ACTIVE -      -
sd disk01-01 vol1-01  ENABLED 3591     0      -      -      -
pl vol1-02   vol1     ENABLED 3591     -      ACTIVE -      -
sd disk05-01 vol1-02  ENABLED 3591     0      -      -      -
Disabling a Disk Group
To disable a disk group, unmount and stop any volumes in the disk group, and then use the following command to deport it:
# vxdg deport diskgroup
Deporting a disk group does not actually remove the disk group. It disables use of the disk group by the system. Disks in a deported disk group can be reused, reinitialized, added to other disk groups, or imported for use on other systems.
Destroying a Disk Group
The vxdg command provides a destroy option that removes a disk group from the system and frees the disks in that disk group for reinitialization:
# vxdg destroy diskgroup
CAUTION This command destroys all data on the disks.
When a disk group is destroyed, the disks that are released can be re-used in other disk groups.
Upgrading a Disk Group
NOTE This information is not applicable for platforms whose first release was Volume Manager 3.0. However, it is applicable for subsequent releases.
Prior to the release of Volume Manager 3.0, the disk group version was automatically upgraded (if needed) when the disk group was imported. From release 3.0 of Volume Manager, the two operations of importing a disk group and upgrading its version are separate.
Table 4-1 summarizes the Volume Manager releases that introduce and support specific disk group versions:
Table 4-1 Disk Group Version Assignments
VxVM Release    Introduces Disk Group Version    Supports Disk Group Versions
1.2             10                               10
1.3             15                               15
2.0             20                               20
2.2             30                               30
2.3             40                               40
2.5             50                               50
3.0             60                               20-40, 60
3.1             70                               20-70
3.1.1           80                               20-80
3.2, 3.5        90                               20-90
Importing the disk group of a previous version on a Volume Manager 3.5 system prevents the use of features introduced since that version was released.
Table 4-2 Features Supported by Disk Group Versions (Continued)
Disk Group Version    New Features Supported                                 Previous Version Features Supported
50                    SRVM (now known as VERITAS Volume Replicator or VVR)   20, 30, 40
40                    Hot-Relocation                                         20, 30
30                    VxSmartSync Recovery Accelerator                       20
20                    Dirty Region Logging; Disk Group Configuration Copy Limiting; Mirrored Volumes Logging; New-Style Stripes; RAID-5 Volumes; Recovery Checkpointing
It may sometimes be necessary to create a disk group for an older version. The default disk group version for a disk group created on a system running Volume Manager 3.5 is 90. Such a disk group would not be importable on a system running Volume Manager 2.3, which only supports up to version 40. Therefore, to create a disk group on a system running Volume Manager 3.5 that can be imported by a system running Volume Manager 2.3, the disk group must be created with a version of 40 or less.
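As a sketch of the commands involved (the disk group name, disk name, and version number are illustrative), a disk group can be created with a specific older version, and later brought up to the current version, using the -T and upgrade options of vxdg described in the vxdg(1M) manual page:
# vxdg -T 40 init newdg newdg01=c0t3d0
# vxdg upgrade newdg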
Managing the Configuration Daemon in VxVM
The VxVM configuration daemon (vxconfigd) provides the interface between VxVM commands and the kernel device drivers. vxconfigd handles configuration change requests from VxVM utilities, communicates the change requests to the VxVM kernel, and modifies configuration information stored on disk. vxconfigd also initializes VxVM when the system is booted.
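The vxdctl command is the interface to vxconfigd. For example, the following sketch (based on the operations described in the vxdctl(1M) manual page) checks the state of the daemon and re-enables it so that it rereads the disk configuration:
# vxdctl mode
# vxdctl enable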
5 Creating and Administering Subdisks
Introduction
This chapter describes how to create and maintain subdisks. Subdisks are the low-level building blocks in a Volume Manager (VxVM) configuration that are required to create plexes and volumes.
NOTE Most VxVM commands require superuser or equivalent privileges.
Creating Subdisks
NOTE Subdisks are created automatically if you use the vxassist command or the VERITAS Enterprise Administrator (VEA) to create volumes. For more information, see “Creating a Volume” on page 214.
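To create a subdisk manually, use the vxmake command. The following is a minimal sketch (the subdisk name, disk, offset, and length are illustrative) of the vxmake sd form described in the vxmake(1M) manual page:
# vxmake sd disk02-01 disk02,0,8000
This creates a subdisk named disk02-01 that starts at offset 0 on disk disk02 and is 8000 sectors long.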
Displaying Subdisk Information
The vxprint command displays information about VxVM objects. To display general information for all subdisks, use this command:
# vxprint -st
The -s option specifies information about subdisks. The -t option prints a single-line output record that depends on the type of object being listed.
Moving Subdisks
Moving a subdisk copies the disk space contents of a subdisk onto one or more other subdisks. If the subdisk being moved is associated with a plex, then the data stored on the original subdisk is copied to the new subdisks. The old subdisk is dissociated from the plex, and the new subdisks are associated with the plex. The association is at the same offset within the plex as the source subdisk.
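The move itself is performed with the vxsd command. A minimal sketch (the subdisk names are illustrative) of the vxsd mv form described in the vxsd(1M) manual page:
# vxsd mv disk02-01 disk03-01
For the operation to succeed, the new subdisk must be the same size as the subdisk being moved and must not already be associated with a plex.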
Splitting Subdisks
Splitting a subdisk divides an existing subdisk into two separate subdisks. To split a subdisk, use the following command:
# vxsd -s size split subdisk newsd1 newsd2
where subdisk is the name of the original subdisk, newsd1 is the name of the first of the two subdisks to be created and newsd2 is the name of the second subdisk to be created. The -s option is required to specify the size of the first of the two subdisks to be created.
Joining Subdisks
Joining subdisks combines two or more existing subdisks into one subdisk. To join subdisks, the subdisks must be contiguous on the same disk. If the selected subdisks are associated, they must be associated with the same plex, and be contiguous in that plex. To join several subdisks, use the following command:
# vxsd join subdisk1 subdisk2 ... new_subdisk
Associating Subdisks with Plexes
Associating a subdisk with a plex places the amount of disk space defined by the subdisk at a specific offset within the plex. The entire area that the subdisk fills must not be occupied by any portion of another subdisk. There are several ways that subdisks can be associated with plexes, depending on the overall state of the configuration.
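In the simplest case, an existing subdisk is associated with an existing plex using the vxsd assoc operation. A minimal sketch (the plex and subdisk names are illustrative), per the vxsd(1M) manual page:
# vxsd assoc vol01-01 disk02-01
This appends subdisk disk02-01 to plex vol01-01; the variations that follow handle sparse plexes and specific offsets.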
To fill a hole in a sparse plex, create a subdisk of a size that fits the hole in the sparse plex exactly. Then, associate the subdisk with the plex by specifying the offset of the beginning of the hole in the plex, using the following command:
# vxsd -l offset assoc sparse_plex exact_size_subdisk
NOTE The subdisk must be exactly the right size. VxVM does not allow the space defined for two subdisks to overlap within a plex.
Associating Log Subdisks
Log subdisks are defined and added to a plex that is to become part of a volume on which dirty region logging (DRL) is enabled. DRL is enabled for a volume when the volume is mirrored and has at least one log subdisk. For a description of DRL, see “Dirty Region Logging (DRL)” on page 49. Log subdisks are ignored as far as the usual plex policies are concerned, and are only used to hold the dirty region log.
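A log subdisk is attached to a plex with the vxsd aslog operation. A minimal sketch (the plex and subdisk names are illustrative), per the vxsd(1M) manual page:
# vxsd aslog vol01-02 disk03-01
This adds subdisk disk03-01 to plex vol01-02 as a log subdisk rather than as a data subdisk.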
Dissociating Subdisks from Plexes
To break an established connection between a subdisk and the plex to which it belongs, the subdisk is dissociated from the plex. A subdisk is dissociated when the subdisk is removed or used in another plex.
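A sketch of the dissociation command (the subdisk name is illustrative), per the vxsd(1M) manual page:
# vxsd dis disk02-01
Adding the -o rm option removes the subdisk record at the same time as dissociating it.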
Removing Subdisks
To remove a subdisk, use the following command:
# vxedit rm subdisk
For example, to remove a subdisk named disk02-01, use the following command:
# vxedit rm disk02-01
Changing Subdisk Attributes
CAUTION Change subdisk attributes with extreme care.
The vxedit command changes attributes of subdisks and other VxVM objects. To change subdisk attributes, use the following command:
# vxedit set attribute=value ... subdisk ...
6 Creating and Administering Plexes
Introduction
This chapter describes how to create and maintain plexes. Plexes are logical groupings of subdisks that create an area of disk space independent of physical disk size or other restrictions. Replication (mirroring) of disk data is set up by creating multiple data plexes for a single volume. Each data plex in a mirrored volume contains an identical copy of the volume data.
Creating Plexes
NOTE Plexes are created automatically if you use the vxassist command or the VERITAS Enterprise Administrator (VEA) to create volumes. For more information, see “Creating a Volume” on page 214.
Use the vxmake command to create VxVM objects, such as plexes.
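A concatenated plex is built from one or more existing subdisks. A minimal sketch (the plex and subdisk names are illustrative) of the vxmake plex form described in the vxmake(1M) manual page:
# vxmake plex vol01-02 sd=disk02-01,disk02-02
The sd attribute lists the subdisks, in order, that make up the plex.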
Creating a Striped Plex
To create a striped plex, you must specify additional attributes. For example, to create a striped plex named pl-01 with a stripe width of 32 sectors and 2 columns, use the following command:
# vxmake plex pl-01 layout=stripe stwidth=32 ncolumn=2 \
sd=disk01-01,disk02-01
To use a plex to build a volume, you must associate the plex with the volume. For more information, see the section, “Attaching and Associating Plexes” on page 201.
Displaying Plex Information
Listing plexes helps identify free plexes for building volumes. Use the plex (-p) option to the vxprint command to list information about all plexes. To display detailed information about all plexes in the system, use the following command:
# vxprint -lp
To display detailed information about a specific plex, use the following command:
# vxprint -l plex
The -t option prints a single line of information about the plex.
• determine if a plex contains a valid copy (mirror) of the volume contents
• track whether a plex was in active use at the time of a system failure
• monitor operations on plexes
This section explains the individual plex states in detail.
DCOSNP Plex State
This state indicates that a data change object (DCO) plex attached to a volume can be used by a snapshot plex to create a DCO volume during a snapshot operation.
EMPTY Plex State
Volume creation sets all plexes associated with the volume to the EMPTY state to indicate that the plex is not yet initialized.
IOFAIL Plex State
The IOFAIL plex state is associated with persistent state logging.
SNAPDIS Plex State
This state indicates a snapshot plex that is fully attached. A plex in this state can be turned into a snapshot volume with the vxplex snapshot command. If the system fails before the attach completes, the plex is dissociated from the volume. See the vxplex(1M) manual page for more information.
SNAPDONE Plex State
The SNAPDONE plex state indicates that a snapshot plex is ready for a snapshot to be taken using vxassist snapshot.
TEMPRM Plex State
A TEMPRM plex state is similar to a TEMP state except that at the completion of the operation, the TEMPRM plex is removed. Some subdisk operations require a temporary plex. Associating a subdisk with a plex, for example, requires updating the subdisk with the volume contents before actually associating the subdisk.
RECOVER Plex Condition
A disk corresponding to one of the disk media records was replaced, or was reattached too late to prevent the plex from becoming out-of-date with respect to the volume. The plex required complete recovery from another plex in the volume to synchronize its contents.
REMOVED Plex Condition
Set in the disk media record when one of the subdisks associated with the plex is removed.
Attaching and Associating Plexes
A plex becomes a participating plex for a volume by attaching it to a volume. (Attaching a plex associates it with the volume and enables the plex for use.)
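A detached or newly created plex is attached with the vxplex att operation. A minimal sketch (the volume and plex names are illustrative), per the vxplex(1M) manual page:
# vxplex att vol01 vol01-02
If the volume is enabled, the attach starts a revive that synchronizes the contents of the plex from the volume.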
Taking Plexes Offline
Once a volume has been created and placed online (ENABLED), VxVM can temporarily disconnect plexes from the volume. This is useful, for example, when the hardware on which the plex resides needs repair or when a volume has been left unstartable and a source plex for the volume revive must be chosen manually.
Resolving a disk or system failure includes taking a volume offline and attaching and detaching its plexes.
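A plex is taken offline with the vxmend off operation. A minimal sketch (the plex name is illustrative), per the vxmend(1M) manual page:
# vxmend off vol01-02
An offline plex is not used for I/O until it is brought back online with vxmend on.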
Detaching Plexes
To temporarily detach one data plex in a mirrored volume, use the following command:
# vxplex det plex
For example, to temporarily detach a plex named vol01-02 and place it in maintenance mode, use the following command:
# vxplex det vol01-02
This command temporarily detaches the plex, but maintains the association between the plex and its volume. However, the plex is not used for I/O.
Reattaching Plexes
When a disk has been repaired or replaced and is again ready for use, the plexes must be put back online (plex state set to ACTIVE). To set the plexes to ACTIVE, use one of the following procedures depending on the state of the volume.
• If the volume is currently ENABLED, use the following command to reattach the plex:
# vxplex att volume plex ...
Moving Plexes
Moving a plex copies the data content from the original plex onto a new plex. To move a plex, use the following command:
# vxplex mv original_plex new_plex
For a move task to be successful, the following criteria must be met:
• The old plex must be an active part of an active (ENABLED) volume.
• The new plex must be at least the same size or larger than the old plex.
• The new plex must not be associated with another volume.
Copying Plexes
This task copies the contents of a volume onto a specified plex. The volume to be copied must not be enabled. The plex cannot be associated with any other volume. To copy a plex, use the following command:
# vxplex cp volume new_plex
After the copy task is complete, new_plex is not associated with the specified volume volume. The plex contains a complete copy of the volume data.
Dissociating and Removing Plexes
When a plex is no longer needed, you can dissociate it from its volume and remove it as an object from VxVM. You might want to remove a plex for the following reasons:
• to provide free disk space
• to reduce the number of mirrors in a volume so you can increase the length of another mirror and its associated volume.
CAUTION To save the data on a plex to be removed, the configuration of that plex must be known.
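The plex can be dissociated and removed in one operation, or in two separate steps. A sketch of both forms (the plex name is illustrative), per the vxplex(1M) and vxedit(1M) manual pages:
# vxplex -o rm dis vol01-02
or, separately:
# vxplex dis vol01-02
# vxedit -r rm vol01-02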
When used together, these commands produce the same result as the vxplex -o rm dis command. The -r option to vxedit rm recursively removes all objects from the specified object downward. In this way, a plex and its associated subdisks can be removed by a single vxedit command.
Changing Plex Attributes
CAUTION Change plex attributes with extreme care.
The vxedit command changes the attributes of plexes and other Volume Manager objects. To change plex attributes, use the following command:
# vxedit set attribute=value ...
7 Creating Volumes Introduction This chapter describes how to create volumes in Volume Manager (VxVM). Volumes are logical devices that appear as physical disk partition devices to data management systems. Volumes enhance recovery from hardware failure, data availability, performance, and storage configuration. Volumes are created to take advantage of the VxVM concept of virtual disks. A file system can be placed on the volume to organize the disk space with files and directories.
Types of Volume Layouts
VxVM allows you to create volumes with the following layout types:
• Concatenated—A volume whose subdisks are arranged both sequentially and contiguously within a plex. Concatenation allows a volume to be created from multiple regions of one or more disks if there is not enough space for an entire volume on a single region of a disk. For more information, see “Concatenation and Spanning” on page 18.
The advantages of this layout are increased performance by spreading data across multiple disks and redundancy of data. For more information, see “Striping Plus Mirroring (Mirrored-Stripe or RAID-0+1)” on page 25.
• Layered Volume—A volume constructed from other volumes. Non-layered volumes are constructed by mapping their subdisks to VM disks.
Creating a Volume
You can create volumes using either an advanced approach or an assisted approach. Each method uses different tools although you may switch from one set to another at will.
NOTE Most VxVM commands require superuser or equivalent privileges.
Advanced Approach
The advanced approach consists of a number of commands that typically require you to specify detailed input.
Assisted Approach
The assisted approach takes information about what you want to accomplish and then performs the necessary underlying tasks. This approach requires only minimal input from you, but also permits more detailed specifications. Assisted operations are performed primarily through the vxassist command or the VERITAS Enterprise Administrator (VEA).
Using vxassist
You can use the vxassist command to create and modify volumes. Specify the basic requirements for volume creation or modification, and vxassist performs the necessary tasks.
The advantages of using vxassist rather than the advanced approach include:
• Most actions require that you enter only one command rather than several.
• You are required to specify only minimal information to vxassist.
The basic format of the vxassist command is:
# vxassist [options] keyword volume [attributes]
where keyword selects the task to perform. The first argument after a vxassist keyword, volume, is a volume name, which is followed by a set of desired volume attributes. For example, the keyword make allows you to create a new volume:
# vxassist [options] make volume length [attributes]
The length of the volume can be specified in sectors, kilobytes, megabytes, or gigabytes using a suffix character of s, k, m, or g.
NOTE You must create the /etc/default directory and the vxassist default file if these do not already exist on your system.
The format of entries in a defaults file is a list of attribute-value pairs separated by new lines. These attribute-value pairs are the same as those specified as options on the vxassist command line. Refer to the vxassist(1M) manual page for details.
nraid5log=1
# by default, limit mirroring log lengths to 32Kbytes
max_regionloglen=32k
# use 64K as the default stripe unit size for regular volumes
stripe_stwid=64k
# use 16K as the default stripe unit size for RAID-5 volumes
raid5_stwid=16k
Discovering the Maximum Size of a Volume
To find out how large a volume you can create within a disk group, use the following form of the vxassist command:
# vxassist [-g diskgroup] maxsize layout=layout [attributes]
For example, to discover the maximum size RAID-5 volume with 5 columns and 2 logs that you can create within the disk group dgrp, enter the following command:
# vxassist -g dgrp maxsize layout=raid5 ncol=5 nlog=2
You can use storage attributes if you want to restrict the disks that vxassist uses when creating volumes.
Creating a Volume on Any Disk
By default, the vxassist make command creates a concatenated volume that uses one or more sections of disk space. On a fragmented disk, this allows you to put together a volume larger than any individual section of free disk space available.
NOTE To change the default layout, edit the definition of the layout attribute defined in the /etc/default/vxassist file.
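For example, the following sketch creates a 10-gigabyte concatenated volume on whatever disks are available (the volume name is illustrative):
# vxassist make voldefault 10g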
Creating a Volume on Specific Disks
VxVM automatically selects the disks on which each volume resides, unless you specify otherwise. If you want a volume to be created on specific disks, you must designate those disks to VxVM. More than one disk can be specified.
To create a volume on a specific disk or disks, use the following command:
# vxassist [-b] [-g diskgroup] make volume length [layout=layout] \
diskname ...
# vxassist -b make volmega 20g diskgroup=bigone disk10 disk11
NOTE Any storage attributes that you specify for use must belong to the disk group. Otherwise, vxassist will not use them to create a volume.
You can also use storage attributes to control how vxassist uses available storage, for example, when calculating the maximum size of a volume, when growing a volume or when removing mirrors or logs from a volume.
This command places columns 1, 2 and 3 of the first mirror on disk01, disk02 and disk03 respectively, and columns 1, 2 and 3 of the second mirror on disk04, disk05 and disk06 respectively.
This command mirrors column 1 across disk01 and disk03, and column 2 across disk02 and disk04, as illustrated in Figure 7-2, “Example of using Ordered Allocation to Create a Striped-Mirror Volume.”
formed from disks disk05 through disk08.
c2, and so on, as illustrated in Figure 7-4, “Example of Storage Allocation Used to Create a Mirrored-Stripe Volume Across Controllers.”
Figure 7-4 Example of Storage Allocation Used to Create a Mirrored-Stripe Volume Across Controllers
[The figure shows the three columns of one striped plex allocated on controllers c1, c2 and c3, mirrored by a second striped plex whose columns are allocated on controllers c4, c5 and c6, forming a mirrored-stripe volume.]
For other ways in which you can control how vxassist lays out mirrored volumes across controllers, see “Mirroring across Targets, Controllers or Enclosures” on page 236.
Creating a Mirrored Volume
A mirrored volume provides data redundancy by containing more than one copy of its data. Each copy (or mirror) is stored on different disks from the original copy of the volume and from other mirrors. Mirroring a volume ensures that its data is not lost if a disk in one of its component mirrors fails.
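A new mirrored volume is typically created with a command along the following lines (the volume name and size are illustrative):
# vxassist -b make volmir 5g layout=mirror nmirror=2
The nmirror attribute sets the number of data plexes; two is the default for a mirrored layout.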
NOTE Specify the -b option if you want to make the volume immediately available for use. See “Initializing and Starting a Volume” on page 244 for details.
Alternatively, first create a concatenated volume, and then mirror it as described in “Adding a Mirror to a Volume” on page 259.
Creating a Concatenated-Mirror Volume
NOTE You may need an additional license to use this feature.
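A concatenated-mirror volume is a layered volume that mirrors at the granularity of its concatenated subdisks. A sketch of the creation command (the volume name and size are illustrative):
# vxassist -b make volcm 5g layout=concat-mirror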
NOTE You may need an additional license to use the Persistent FastResync feature.
Even if you do not have a license, you can configure a DCO object and DCO volume so that snap objects are associated with the original and snapshot volumes. For more information about snap objects, see “How Persistent FastResync Works with Snapshots” on page 55.
The ndcomirror attribute specifies the number of DCO plexes. It is recommended that you configure as many DCO plexes as there are data plexes in the volume. For example, specify ndcomirror=3 when creating a 3-way mirrored volume. The default size of each plex is 132 blocks unless you use the dcolen attribute to specify a different size. If specified, the size of the plex must be a multiple of 33 blocks from 33 up to a maximum of 2112 blocks. By default, FastResync is not enabled on newly created volumes.
If you use ordered allocation when creating a mirrored volume on specified storage, you can use the optional logdisk attribute to specify on which disks the log plexes should be created. Use the following form of the vxassist command to specify the disks from which space for the logs is to be allocated:
# vxassist [-g diskgroup] -o ordered make volume length \
layout=mirror logtype=log_type logdisk=disk[,disk,...]
Creating a Striped Volume
NOTE You may need an additional license to use this feature.
A striped volume contains at least one plex that consists of two or more subdisks located on two or more physical disks. For more information on striping, see “Striping (RAID-0)” on page 21.
NOTE A striped volume requires space to be available on at least as many disks in the disk group as the number of columns in the volume.
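A sketch of the basic creation command (the volume name and size are illustrative):
# vxassist -b make volzebra 10g layout=stripe
This creates a striped volume with the default stripe unit size and the default number of columns.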
To change the default number of columns from 2, or the stripe width from 64 kilobytes, use the ncolumn and stripeunit modifiers with vxassist. For example, the following command creates a striped volume with 5 columns and a 32-kilobyte stripe size:
# vxassist -b make stripevol 30g layout=stripe stripeunit=32k \
ncol=5
Creating a Mirrored-Stripe Volume
A mirrored-stripe volume mirrors several striped data plexes.
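A mirrored-stripe volume is created with the mirror-stripe layout, for example (the name and size are illustrative):
# vxassist -b make volms 10g layout=mirror-stripe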
Creating a Striped-Mirror Volume
NOTE A striped-mirror volume requires space to be available on at least as many disks in the disk group as the number of columns multiplied by the number of mirrors in the volume.
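A striped-mirror volume is created with the stripe-mirror layout, for example (the name and size are illustrative):
# vxassist -b make volsm 10g layout=stripe-mirror
With this layered layout, each column is mirrored individually, so a single disk failure affects only one column's mirror rather than an entire striped plex.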
Mirroring across Targets, Controllers or Enclosures
To create a volume whose mirrored data plexes lie on different controllers, you can use either of the commands described in this section.
# vxassist [-b] [-g diskgroup] make volume length layout=layout \
mirror=target [attributes]
NOTE Specify the -b option if you want to make the volume immediately available for use. See “Initializing and Starting a Volume” on page 244 for details.
# vxassist -b make volspec 10g layout=mirror nmirror=2 \
mirror=enclr enclr:enc1 enclr:enc2
The disks in one data plex are all taken from enclosure enc1, and the disks in the other data plex are all taken from enclosure enc2. This arrangement ensures continued availability of the volume should either enclosure become unavailable.
Creating a RAID-5 Volume
NOTE VxVM supports this feature for private disk groups, but not for shareable disk groups in a cluster environment.
NOTE You may need an additional license to use this feature.
You can create RAID-5 volumes by using either the vxassist command (recommended) or the vxmake command. Both approaches are described below.
NOTE Specify the -b option if you want to make the volume immediately available for use. See “Initializing and Starting a Volume” on page 244 for details.
For example, to create the RAID-5 volume volraid together with 2 RAID-5 logs, use the following command:
# vxassist -b make volraid 10g layout=raid5 nlog=2
This creates a RAID-5 volume with the default stripe unit size on the default number of disks.
# vxassist -b make volraid 10g layout=raid5 ncol=3 nlog=2 \
logdisk=disk07,disk08 disk04 disk05 disk06
NOTE The number of logs must equal the number of disks specified to logdisk.
For more information about ordered allocation, see “Specifying Ordered Allocation of Storage to Volumes” on page 223 and the vxassist(1M) manual page. If you need to add more logs to a RAID-5 volume at a later date, follow the procedure described in “Adding a RAID-5 Log” on page 271.
Creating a Volume Using vxmake
As an alternative to using vxassist, you can create a volume using the vxmake command to arrange existing subdisks into plexes, and then to form these plexes into a volume. Subdisks can be created using the method described in “Creating Subdisks” on page 179. The example given in this section is to create a RAID-5 volume using vxmake.
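The plex in this example can be created along the following lines (a sketch; the subdisk names match those discussed below, and the stripe width shown is illustrative):
# vxmake plex raidplex layout=raid5 ncolumn=3 stwidth=32 \
sd=disk00-00,disk01-00,disk02-00,disk03-00,disk04-00,disk05-00
Because no column positions or offsets are given, vxmake distributes the listed subdisks across the three columns in rotation.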
This command stacks subdisks disk00-00 and disk03-00 consecutively in column 0, subdisks disk01-00 and disk04-00 consecutively in column 1, and subdisks disk02-00 and disk05-00 in column 2. Offsets can also be specified to create sparse RAID-5 plexes, as for striped plexes.
# vxmake -d description_file
The following sample description file defines a volume, db, with two plexes:
#rectyp #name     #options
sd      disk3-01  disk=disk3 offset=0 len=10000
sd      disk3-02  disk=disk3 offset=25000 len=10480
sd      disk4-01  disk=disk4 offset=0 len=8000
sd      disk4-02  disk=disk4 offset=15000 len=8000
sd      disk4-03  disk=disk4 offset=30000 len=4480
plex    db-01     layout=STRIPE ncolumn=2 stwidth=16k \
                  sd=disk3-01:0/0,disk3-02:0/10000,disk4-01:1/0,\
                  disk4-02:1/8000,disk4-03:1/16000
sd      ramd1-01  disk=ramd1 len=640
plex    db-02     sd=ramd1-01
vol     db        plex=db-01,db-02 readpol=prefer prefname=db-02
Initializing and Starting a Volume
A volume must be initialized if it was created by the vxmake command and has not yet been initialized, or if the volume has been set to an uninitialized state.
NOTE If you create a volume using the vxassist command, vxassist initializes and starts the volume automatically unless you specify the attribute init=none.
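To initialize and start such a volume manually, the vxvol start operation is typically used (the volume name is illustrative):
# vxvol start volume
The commands that follow delay or control the initialization of the volume data instead of starting it directly.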
# vxvol init enable volume
This allows you to restore data on the volume from a backup before using the following command to make the volume fully active:
# vxvol init active volume
If you want to zero out the contents of an entire volume, use this command to initialize it:
# vxvol init zero volume
This command writes zeroes to the entire length of the volume and to any log plexes. It then makes the volume active.
Accessing a Volume
As soon as a volume has been created and initialized, it is available for use as a virtual disk partition by the operating system for the creation of a file system, or by application programs such as relational databases and other data management software.
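Volumes are accessed through block and character (raw) device nodes named after the disk group and volume, as in /dev/vx/dsk/mktdg/mktvol and /dev/vx/rdsk/mktdg/mktvol. For example, a VxFS file system could be created and mounted on a volume along these lines (the paths and mount point are illustrative):
# mkfs -F vxfs /dev/vx/rdsk/mktdg/mktvol
# mount -F vxfs /dev/vx/dsk/mktdg/mktvol /mnt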
8 Administering Volumes
Introduction
This chapter describes how to perform common maintenance tasks on volumes in Volume Manager (VxVM). This includes displaying volume information, monitoring tasks, adding and removing logs, resizing volumes, removing mirrors, removing volumes, backing up volumes using mirrors and snapshots, and changing the layout of volumes without taking them offline.
NOTE Most VxVM commands require superuser or equivalent privileges.
Displaying Volume Information
You can use the vxprint command to display information about how a volume is configured.
For example, to display information about the voldef volume, use the following command:
# vxprint -t voldef
This is example output from this command:
Disk group: rootdg
V NAME    USETYPE  KSTATE   STATE   LENGTH  READPOL  PREFPLEX
v voldef  fsgen    ENABLED  ACTIVE  20480   SELECT   -
NOTE If you enable enclosure-based naming, and use the vxprint command to display the structure of a volume, it shows enclosure-based disk device names (disk access names) rather than c#t#d# names.
EMPTY Volume State
The volume contents are not initialized. The kernel state is always DISABLED when the volume is EMPTY.
NEEDSYNC Volume State
The volume requires a resynchronization operation the next time it is started. For a RAID-5 volume, a parity resynchronization operation is required.
REPLAY Volume State
The volume is in a transient state as part of a log replay. A log replay occurs when it becomes necessary to use logged parity and data.
Volume Kernel States
The volume kernel state indicates the accessibility of the volume. The volume kernel state allows a volume to have an offline (DISABLED), maintenance (DETACHED), or online (ENABLED) mode of operation.
NOTE No user intervention is required to set these states; they are maintained internally. On a system that is operating properly, all volumes are enabled.
Monitoring and Controlling Tasks
NOTE VxVM supports this feature for private disk groups, but not for shareable disk groups in a cluster environment.
The VxVM task monitor tracks the progress of system recovery by monitoring task creation, maintenance, and completion. The task monitor allows you to monitor task progress and to modify characteristics of tasks, such as pausing and recovery rate (for example, to reduce the impact on system performance).
Managing Tasks with vxtask
NOTE New tasks take time to be set up, and so may not be immediately available for use after a command is invoked. Any script that operates on tasks may need to poll for the existence of a new task.
You can use the vxtask command to administer operations on VxVM tasks that are running on the system.
• pause—Puts a running task in the paused state, causing it to suspend operation.
• resume—Causes a paused task to continue operation.
• set—Changes modifiable parameters of a task. Currently, there is only one modifiable parameter, slow[=iodelay], which can be used to reduce the impact that copy operations have on system performance. If slow is specified, this introduces a delay between such operations with a default value for iodelay of 250 milliseconds.
The vxtask abort command causes VxVM to attempt to reverse the progress of the operation so far. For an example of how to use vxtask to monitor and modify the progress of the Online Relayout feature, see “Controlling the Progress of a Relayout” on page 306.
Stopping a Volume
Stopping a volume renders it unavailable to the user, and changes the volume state from ENABLED or DETACHED to DISABLED. If the volume cannot be disabled, it remains in its current state. To stop a volume, use the following command:
# vxvol stop volume ...
Starting a Volume
Starting a volume makes it available for use, and changes the volume state from DISABLED or DETACHED to ENABLED. To start a DISABLED or DETACHED volume, use the following command:
# vxvol -g diskgroup start volume ...
If a volume cannot be enabled, it remains in its current state.
Adding a Mirror to a Volume
A mirror can be added to an existing volume with the vxassist command, as follows:
# vxassist [-b] [-g diskgroup] mirror volume
NOTE If specified, the -b option makes synchronizing the new mirror a background task.
Mirroring Volumes on a VM Disk
Mirroring volumes on a VM disk gives you one or more copies of your volumes in another disk location. By creating mirror copies of your volumes, you protect your system against loss of data in case of a disk failure.
NOTE This task only mirrors concatenated volumes. Volumes that are already mirrored or that contain subdisks that reside on multiple disks are ignored.
The requested operation is to mirror all volumes on disk disk02 in disk group rootdg onto available disk space on disk disk01.
NOTE: This operation can take a long time to complete.
Continue with operation? [y,n,q,?] (default: y)
The vxdiskadm program displays the status of the mirroring operation, as follows:
Mirror volume voltest-bk00 ...
Mirroring of disk disk01 is complete.
Step 5.
Removing a Mirror
When a mirror is no longer needed, you can remove it to free up disk space.
NOTE The last valid plex associated with a volume cannot be removed.
To remove a mirror from a volume, use the following command:
# vxassist remove mirror volume
Additionally, you can use storage attributes to specify the storage to be removed.
Adding a DCO and DCO Volume
CAUTION If the existing volume was created before release 3.2 of VxVM, and it has any attached snapshot plexes or it is associated with any snapshot volumes, follow the procedure given in “Enabling Persistent FastResync on Existing Volumes with Associated Snapshots” on page 288. The procedure given in this section is for existing volumes without existing snapshot plexes or associated snapshot volumes.
Step 2. Use the following command to turn off Non-Persistent FastResync on the original volume if it is currently enabled:
# vxvol [-g diskgroup] set fastresync=off volume
If you are uncertain about which volumes have Non-Persistent FastResync enabled, use the following command to obtain a listing of such volumes:
# vxprint [-g diskgroup] -F "%name" \
-e "v_fastresync=on && !v_hasdcolog"
Step 3.
pl zoo_dcl-02  zoo_dcl     ENABLED  132  -  ACTIVE  -  -
sd c1t67d0-01  zoo_dcl-02  ENABLED  132  0  -       -  -
In this output, the DCO object is shown as zoo_dco, and the DCO volume as zoo_dcl with 2 plexes, zoo_dcl-01 and zoo_dcl-02. For more information, see the vxassist(1M) manual page.
Attaching a DCO and DCO volume to a RAID-5 Volume
The procedure in the previous section can be used to add a DCO and DCO volume to a RAID-5 volume.
Administering Volumes Adding a DCO and DCO Volume placed on disks which are used to hold the plexes of other volumes, this may cause problems when you subsequently attempt to move volumes into other disk groups. You can use storage attributes to specify explicitly which disks to use for the DCO plexes. If possible, specify the same disks as those on which the volume is configured.
Removing a DCO and DCO Volume
To dissociate a DCO object, DCO volume and any snap objects from a volume, use the following command:
# vxassist [-g diskgroup] remove log volume logtype=dco
This completely removes the DCO object, DCO volume and any snap objects. It also has the effect of disabling FastResync for the volume.
Reattaching a DCO and DCO Volume
If the DCO object and DCO volume are not removed by specifying the -o rm option to vxdco, they can be reattached to the parent volume using the following command:
# vxdco [-g diskgroup] att volume dco_obj
For example, to reattach the DCO object, myvol_dco, to the volume, myvol, use the following command:
# vxdco -g mydg att myvol myvol_dco
For more information, see the vxdco(1M) manual page.
Adding DRL Logging to a Mirrored Volume
To put dirty region logging (DRL) into effect for a mirrored volume, a log subdisk must be added to that volume. Only one log subdisk can exist per plex. To add DRL logs to an existing volume, use the following command:
# vxassist [-b] addlog volume logtype=drl [nlog=n]
NOTE If specified, the -b option makes adding the new logs a background task.
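For example, a sketch using the hypothetical disk group mydg and mirrored volume vol01; two DRL logs are added as a background task:
# vxassist -b -g mydg addlog vol01 logtype=drl nlog=2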
Removing a DRL Log
To remove a DRL log, use the vxassist command as follows:
# vxassist remove log volume [nlog=n]
Use the optional attribute nlog=n to specify the number, n, of logs to be removed. By default, the vxassist command removes one log.
Adding a RAID-5 Log
NOTE You may need an additional license to use this feature.
Only one RAID-5 plex can exist per RAID-5 volume. Any additional plexes become RAID-5 log plexes, which are used to log information about data and parity being written to the volume. When a RAID-5 volume is created using the vxassist command, a log plex is created for that volume by default.
Administering Volumes Adding a RAID-5 Log The attach operation can only proceed if the size of the new log is large enough to hold all of the data on the stripe. If the RAID-5 volume already contains logs, the new log length is the minimum of each individual log length. This is because the new log is a mirror of the old logs. If the RAID-5 volume is not enabled, the new log is marked as BADLOG and is enabled when the volume is started. However, the contents of the log are ignored.
Removing a RAID-5 Log
To identify the plex of the RAID-5 log, use the following command:
# vxprint -ht volume
where volume is the name of the RAID-5 volume. For a RAID-5 log, the output lists a plex with a STATE field entry of LOG.
Resizing a Volume
Resizing a volume changes the volume size. For example, you might need to increase the length of a volume if it is no longer large enough for the amount of data to be stored on it. To resize a volume, use one of the commands: vxresize (preferred), vxassist, or vxvol. Alternatively, you can use the graphical VERITAS Enterprise Administrator (VEA) to resize volumes.
Resizing Volumes using vxresize
Use the vxresize command to resize a volume containing a file system. Although other commands can be used to resize volumes containing file systems, the vxresize command offers the advantage of automatically resizing certain types of file system as well as the volume.
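For example, a sketch assuming a VxFS file system on the hypothetical volume vol01 in disk group mydg; the volume and its file system are grown by 1 gigabyte in the background:
# vxresize -b -F vxfs -g mydg vol01 +1g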
If a volume has plexes with different layouts, resizing it with vxresize fails with the following error:
vxvm:vxresize: ERROR: Volume volume has different organization in each mirror
For more information about the vxresize command, see the vxresize(1M) manual page.
Administering Volumes Resizing a Volume NOTE If specified, the -b option makes growing the volume a background task. For example, to extend volcat by 100 sectors, use the following command: # vxassist growby volcat 100 NOTE If you previously performed a relayout on the volume, additionally specify the attribute layout=nodiskalign to the growby command if you want the subdisks to be grown using contiguous disk space.
CAUTION Do not shrink the volume below the current size of the file system or database using the volume. The vxassist shrinkby command can be safely used on empty volumes.
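For example, a sketch using a hypothetical empty volume vol02 in disk group mydg, shrunk by 100 sectors:
# vxassist -g mydg shrinkby vol02 100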
Changing the Read Policy for Mirrored Volumes
VxVM offers the choice of the following read policies on the data plexes in a mirrored volume:
• round—reads each plex in turn in “round-robin” fashion for each nonsequential I/O detected. Sequential access causes only one plex to be accessed. This takes advantage of the drive or controller read-ahead caching policies.
# vxvol rdpol select volume
For more information about how read policies affect performance, see “Volume Read Policies” on page 389.
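As a sketch with hypothetical names, assuming a mirrored volume vol01 whose plex vol01-02 resides on faster storage, the preferred-plex policy could be set with:
# vxvol -g mydg rdpol prefer vol01 vol01-02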
Removing a Volume
Once a volume is no longer necessary (it is inactive and its contents have been archived, for example), it is possible to remove the volume and free up the disk space for other uses. Before removing a volume, use the following procedure to stop all activity on the volume:
Step 1. Remove all references to the volume by application programs, including shells, that are running on the system.
Step 2.
Moving Volumes from a VM Disk
Before you disable or remove a disk, you can move the data from that disk to other disks on the system. To do this, ensure that the target disks have sufficient space, and then use the following procedure:
Step 1. Select menu item 6 (Move volumes from a disk) from the vxdiskadm main menu.
Step 2.
Administering Volumes Moving Volumes from a VM Disk Move volume voltest ... Move volume voltest-bk00 ... When the volumes have all been moved, the vxdiskadm program displays the following success message: Evacuation of disk disk01 is complete. Step 3.
Enabling FastResync on a Volume
NOTE You may need an additional license to use this feature.
FastResync performs quick and efficient resynchronization of stale mirrors. It also increases the efficiency of the VxVM snapshot mechanism when used with operations such as backup and decision support. See “Backing Up Volumes Online Using Snapshots” on page 294 and “FastResync” on page 53 for more information. From Release 3.
Administering Volumes Enabling FastResync on a Volume NOTE It is not possible to configure both Persistent and Non-Persistent FastResync on a volume. Persistent FastResync is used if a DCO object and a DCO volume are associated with the volume. Otherwise, Non-Persistent FastResync is used.
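As a sketch with hypothetical names, FastResync could be enabled on the volume vol01 in disk group mydg with the vxvol set command (the counterpart of the fastresync=off command shown earlier in this chapter):
# vxvol -g mydg set fastresync=on vol01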
To list the volumes on which Persistent FastResync is enabled, use the following command:
# vxprint [-g diskgroup] -F "%name" -e "v_fastresync=on \
  && v_hasdcolog"
Disabling FastResync
Use the vxvol command to turn off Persistent or Non-Persistent FastResync for an existing volume, as shown here:
# vxvol [-g diskgroup] set fastresync=off volume
Turning FastResync off releases all tracking maps for the specified volume. All subsequent reattaches will not use the FastResync facility, but perform a full resynchronization of the volume. This occurs even if FastResync is later turned on.
Enabling Persistent FastResync on Existing Volumes with Associated Snapshots
This section describes how to enable Persistent FastResync on a volume that was created before release 3.2 of VxVM, and which has attached snapshot plexes or is associated with one or more snapshot volumes.
Administering Volumes Enabling Persistent FastResync on Existing Volumes with Associated Snapshots use the disk group move feature to bring in spare disks from a different disk group. For more information, see “Reorganizing the Contents of Disk Groups” on page 152. Perform the following steps to enable Persistent FastResync on an existing volume that has attached snapshot plexes or associated snapshot volumes: Step 1.
# vxvol [-g diskgroup] set fastresync=off volume
If you are uncertain about which volumes have Non-Persistent FastResync enabled, use the following command to obtain a listing of such volumes:
# vxprint [-g diskgroup] -F "%name" \
  -e "v_fastresync=on && !v_hasdcolog"
Step 4. Use the following command on the original volume and on each of its snapshot volumes (if any) to add a DCO and DCO volume.
Administering Volumes Enabling Persistent FastResync on Existing Volumes with Associated Snapshots # vxassist -g egdg addlog SNAP-vol logtype=dco \ dcolen=264 ndcomirror=1 !disk01 !disk02 disk03 NOTE If the DCO plexes of the snapshot volume are configured on disks that also contain the plexes of other volumes, this prevents the snapshot volume from being moved to a different disk group. See “Considerations for Placing DCO Plexes” on page 157 for more information. Step 5.
Administering Volumes Enabling Persistent FastResync on Existing Volumes with Associated Snapshots Step 6. Perform this step on any snapshot volumes as well as on the original volume.
Backing up Volumes Online
It is important to make backup copies of your volumes. These provide replicas of the data as it existed at the time of the backup. Backup copies are used to restore volumes lost due to disk failure, or data destroyed due to human error. VxVM allows you to back up volumes online with minimal interruption to users.
Step 4. Use fsck (or some utility appropriate for the application running on the volume) to clean the temporary volume’s contents. For example, you can use this command:
# fsck -F vxfs /dev/vx/rdsk/diskgroup/tempvol
Step 5. Perform appropriate backup procedures, using the temporary volume.
Step 6. Stop the temporary volume, using the following command:
# vxvol [-g diskgroup] stop tempvol
Step 7.
NOTE You may need an additional license to use this feature.
VxVM provides snapshot images of volume devices using vxassist and other commands. If the fsgen volume usage type is set on a volume that contains a VERITAS File System (VxFS), the snapshot mechanism ensures the internal consistency of the file system that is backed up. For other file system types, there may be inconsistencies between in-memory data and the data in the snapshot image.
Administering Volumes Backing up Volumes Online The online backup procedure is completed by running the vxassist snapshot command on a volume with a SNAPDONE mirror. This task detaches the finished snapshot (which becomes a normal mirror), creates a new normal volume and attaches the snapshot mirror to the snapshot volume. The snapshot then becomes a normal, functioning mirror and the state of the snapshot is set to ACTIVE.
Administering Volumes Backing up Volumes Online If vxassist snapstart is not run in the background, it does not exit until the mirror has been synchronized with the volume. The mirror is then ready to be used as a plex of a snapshot volume. While attached to the original volume, its contents continue to be updated until you take the snapshot. Use the nmirror attribute to create as many snapshot mirrors as you need for the snapshot volume. For a backup, you should usually only require the default of one.
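A minimal sketch of the sequence, using hypothetical names; the snapshot mirror is created and synchronized in the background, and the snapshot is taken later:
# vxassist -b -g mydg snapstart vol01
# vxassist -g mydg snapshot vol01 SNAP-vol01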
Administering Volumes Backing up Volumes Online Step 5. Use a backup utility or operating system command to copy the temporary volume to tape, or to some other appropriate backup media. When the backup is complete, you have three choices for what to do with the snapshot volume: • Reattach some or all of the plexes of the snapshot volume with the original volume as described in “Merging a Snapshot Volume (snapback)” on page 300.
Administering Volumes Backing up Volumes Online # vxplex [-g diskgroup] dcoplex=dcologplex convert \ state=SNAPDONE plex dcologplex is the name of an existing DCO plex that is to be associated with the new snapshot plex. You can use the vxprint command to find out the name of the DCO volume as described in “Adding a DCO and DCO Volume” on page 263.
Administering Volumes Backing up Volumes Online To snapshot all the volumes in a single disk group, specify the option -o allvols to vxassist: # vxassist -g diskgroup -o allvols snapshot This operation requires that all snapstart operations are complete on the volumes. It fails if any of the volumes in the disk group do not have a complete snapshot plex in the SNAPDONE state.
Administering Volumes Backing up Volumes Online Here the nmirror attribute specifies the number of mirrors in the snapshot volume that are to be re-attached. Once the snapshot plexes have been reattached and their data resynchronized, they are ready to be used in another snapshot operation. By default, the data in the original volume is used to update the snapshot plexes that have been re-attached.
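For example, a sketch with hypothetical names; two plexes of the snapshot volume SNAP-vol01 are reattached to the original volume:
# vxassist -g mydg snapback nmirror=2 SNAP-vol01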
If you have split or moved the snapshot volume and the original volume into different disk groups, you must run snapclear on each volume separately, specifying the snap object in the volume that points to the other volume:
# vxassist snapclear volume snap_object
For example, if myvol1 and SNAP-myvol1 are in separate disk groups mydg1 and mydg2 respectively, the following commands stop the tracking on SNAP-myvol1 with respect to myvol1, and on myvol1 with respect to SNAP-myvol1:
v  SNAP-v2      fsgen    20480
ss --           v2       20480    0
In this example, Persistent FastResync is enabled on volume v1, and Non-Persistent FastResync on volume v2. Lines beginning with v, dp and ss indicate a volume, detached plex and snapshot plex respectively. The %DIRTY field indicates the percentage of a snapshot plex or detached plex that is dirty with respect to the original volume.
Performing Online Relayout
NOTE You may need an additional license to use this feature.
You can use the vxassist relayout command to reconfigure the layout of a volume without taking it offline. The general form of this command is:
# vxassist [-b] [-g diskgroup] relayout volume [layout=layout] \
  [relayout_options]
NOTE If specified, the -b option makes relayout of the volume a background task.
Specifying a Non-Default Layout
You can specify one or more relayout options to change the default layout configuration. Examples of these options are:
ncol=number—specifies the number of columns
ncol=+number—specifies the number of columns to add
ncol=-number—specifies the number of columns to remove
stripeunit=size—specifies the stripe width
See the vxassist(1M) manual page for more information about relayout options.
Tagging a Relayout Operation
If you want to control the progress of a relayout operation, for example to pause or reverse it, use the -t option to vxassist to specify a task tag for the operation.
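For example, the relayout of the volume vol04 referred to below might have been started with the task tag myconv, as in this sketch (the disk group name and target layout are assumptions):
# vxassist -b -g mydg -t myconv relayout vol04 layout=raid5
The tagged operation can then be paused with: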
# vxtask pause myconv
To resume the operation, use the vxtask command:
# vxtask resume myconv
For relayout operations that have not been stopped using the vxtask pause command (for example, the vxtask abort command was used to stop the task, the transformation process died, or there was an I/O failure), resume the relayout by specifying the start keyword to vxrelayout, as shown here:
# vxrelayout -o bg start vol04
NOTE If you use the vxrelayout start command
Converting Between Layered and Non-Layered Volumes
The vxassist convert command transforms volume layouts between layered and non-layered forms:
# vxassist [-b] convert volume [layout=layout] [convert_options]
NOTE If specified, the -b option makes conversion of the volume a background task.
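For example, a sketch converting a hypothetical mirrored-stripe volume vol05 in disk group mydg to the layered stripe-mirror layout:
# vxassist -g mydg convert vol05 layout=stripe-mirror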
NOTE If the system crashes during relayout or conversion, the process continues when the system is rebooted. However, if the crash occurred during the first stage of a two-stage relayout and convert operation, only the first stage will be completed. You must run vxassist convert manually to complete the operation.
9. Administering Hot-Relocation
Introduction
If a volume has a disk I/O failure (for example, because the disk has an uncorrectable error), Volume Manager (VxVM) can detach the plex involved in the failure. I/O stops on that plex but continues on the remaining plexes of the volume. If a disk fails completely, VxVM can detach the disk from its disk group. All plexes on the disk are disabled. If there are any unmirrored volumes on a disk when it is detached, those volumes are also disabled.
How Hot-Relocation Works
Hot-relocation allows a system to react automatically to I/O failures on redundant (mirrored or RAID-5) VxVM objects, and to restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks to disks designated as spare disks or to free space within the disk group.
Administering Hot-Relocation How Hot-Relocation works Step 1. vxrelocd informs the system administrator (and other nominated users, see “Modifying the Behavior of Hot-Relocation” on page 335) by electronic mail of the failure and which VxVM objects are affected. See “Partial Disk Failure Mail Messages” on page 317 and “Complete Disk Failure Mail Messages” on page 318 for more information. Step 2. vxrelocd next determines if any subdisks can be relocated.
• If the only available space is on a disk that already contains the RAID-5 log plex or one of its healthy subdisks, failing subdisks in the RAID-5 plex cannot be relocated.
• If a mirrored volume has a dirty region logging (DRL) log subdisk as part of its data plex, failing subdisks belonging to that plex cannot be relocated.
• If a RAID-5 volume log plex or a mirrored volume DRL log plex fails, a new log plex is created elsewhere.
Figure 9-1, “Example of Hot-Relocation for a Subdisk in a RAID-5 Volume,” illustrates the hot-relocation process in the case of the failure of a single subdisk of a RAID-5 volume.
Figure 9-1 Example of Hot-Relocation for a Subdisk in a RAID-5 Volume
a) Disk group contains five disks. Two RAID-5 volumes are configured across four of the disks. One spare disk is available for hot-relocation.
Partial Disk Failure Mail Messages
If hot-relocation is enabled when a plex or disk is detached by a failure, mail indicating the failed objects is sent to root. If a partial disk failure occurs, the mail identifies the failed plexes.
Administering Hot-Relocation How Hot-Relocation works # vxrecover -b home src This starts recovery of the failed plexes in the background (the command prompt reappears before the operation completes). If an error message appears later, or if the plexes become detached again and there are no obvious cabling failures, replace the disk (see “Removing and Replacing Disks” on page 94).
Administering Hot-Relocation How Hot-Relocation works any available free space in the disk group in which the failure occurs. If there is not enough spare disk space, a combination of spare space and free space is used. The free space used in hot-relocation must not have been excluded from hot-relocation use. Disks can be excluded from hot-relocation use by using vxdiskadm, vxedit or the VERITAS Enterprise Administrator (VEA).
Configuring a System for Hot-Relocation
By designating spare disks and making free space on disks available for use by hot-relocation, you can control how disk space is used for relocating subdisks in the event of a disk failure. If the combined free space and space on spare disks is not sufficient or does not meet the redundancy constraints, the subdisks are not relocated.
Displaying Spare Disk Information
Use the following command to display information about spare disks that are available for relocation:
# vxdg spare
The following is example output:
GROUP    DISK    DEVICE  TAG     OFFSET  LENGTH  FLAGS
rootdg   disk02  c0t2d0  c0t2d0  0       658007  s
Here disk02 is the only disk designated as a spare. The LENGTH field indicates how much spare space is currently available on disk02 for relocation.
Marking a Disk as a Hot-Relocation Spare
Hot-relocation allows the system to react automatically to I/O failure by relocating redundant subdisks to other disks. Hot-relocation then restores the affected VxVM objects and data. If a disk has already been designated as a spare in the disk group, the subdisks from the failed disk are relocated to the spare disk. Otherwise, any suitable free space in the disk group is used.
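As a command-line alternative to the vxdiskadm procedure that follows, a disk can be designated as a spare with vxedit; a sketch using hypothetical names for the disk group and disk:
# vxedit -g mydg set spare=on disk01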
Administering Hot-Relocation Marking a Disk as a Hot-Relocation Spare Step 3. At the following prompt, indicate whether you want to add more disks as spares (y) or return to the vxdiskadm main menu (n): Mark another disk as a spare? [y,n,q,?] (default: n) Any VM disk in this disk group can now use this disk as a spare in the event of a failure. If a disk fails, hot-relocation should automatically occur (if possible). You should be notified of the failure and relocation through electronic mail.
Removing a Disk from Use as a Hot-Relocation Spare
While a disk is designated as a spare, the space on that disk is not used for the creation of VxVM objects within its disk group. If necessary, you can free a spare disk for general use by removing it from the pool of hot-relocation disks.
Excluding a Disk from Hot-Relocation Use
To exclude a disk from hot-relocation use, use the following command:
# vxedit -g disk_group set nohotuse=on diskname
Alternatively, using vxdiskadm:
Step 1. Select menu item 15 (Exclude a disk from hot-relocation use) from the vxdiskadm main menu.
Step 2.
Making a Disk Available for Hot-Relocation Use
Free space is used automatically by hot-relocation if spare space is insufficient to relocate failed subdisks. You can restrict this use of free space by specifying which free disks hot-relocation should not touch. If a disk was previously excluded from hot-relocation use, you can undo the exclusion and add the disk back to the hot-relocation pool.
Make another disk available for hot-relocation use? [y,n,q,?] (default: n)
Configuring Hot-Relocation to Use Only Spare Disks
If you want VxVM to use only spare disks for hot-relocation, add the following line to the file /etc/default/vxassist:
spare=only
If not enough storage can be located on disks marked as spare, the relocation fails. Any free space on non-spare disks is not used.
Moving and Unrelocating Subdisks
When hot-relocation occurs, subdisks are relocated to spare disks and/or available free space within the disk group. The new subdisk locations may not provide the same performance or data layout that existed before hot-relocation took place. You can move the relocated subdisks (after hot-relocation is complete) to improve performance.
CAUTION During subdisk move operations, RAID-5 volumes are not redundant.
Moving and Unrelocating Subdisks using vxdiskadm
To move the hot-relocated subdisks back to the disk where they originally resided after the disk has been replaced following a failure, use the following procedure:
Step 1. Select menu item 14 (Unrelocate subdisks back to a disk) from the vxdiskadm main menu.
Step 2.
Administering Hot-Relocation Moving and Unrelocating Subdisks Requested operation is to move all the subdisks which were hot-relocated from disk10 back to disk10 of disk group rootdg. Continue with operation? [y,n,q,?] (default: y) A status message is displayed at the end of the operation. Unrelocate to disk disk10 is complete.
vxunreloc allows you to restore the system back to the configuration that existed before the disk failure by moving hot-relocated subdisks back onto a disk that was replaced due to a failure. When vxunreloc is invoked, you must specify the disk media name where the hot-relocated subdisks originally resided. When vxunreloc moves the subdisks, it moves them to the original offsets.
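For example, a sketch assuming subdisks were hot-relocated away from disk01 in the hypothetical disk group mydg; they are moved back with:
# vxunreloc -g mydg disk01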
Administering Hot-Relocation Moving and Unrelocating Subdisks The destination disk should have at least as much storage capacity as was in use on the original disk. If there is not enough space, the unrelocate operation will fail and none of the subdisks will be moved. Forcing hot-relocated subdisks to accept different offsets By default, vxunreloc attempts to move hot-relocated subdisks to their original offsets.
Administering Hot-Relocation Moving and Unrelocating Subdisks subdisk is moved back to the original disk or to a new disk using vxunreloc, the information is erased. The original disk-media name and the original offset are saved in the subdisk records. To print all of the subdisks that were hot-relocated from disk01 in the rootdg disk group, use the following command: # vxprint -g rootdg -se 'sd_orig_dmname="disk01"' Restarting vxunreloc After Errors vxunreloc moves subdisks in three phases: Step 1.
Modifying the Behavior of Hot-Relocation
Hot-relocation is turned on as long as vxrelocd is running. You leave hot-relocation turned on so that you can take advantage of this feature if a failure occurs. However, if you choose to disable this feature (perhaps because you do not want the free space on some of your disks to be used for relocation), prevent vxrelocd from starting at system startup time.
Administering Hot-Relocation Modifying the Behavior of Hot-Relocation When executing vxrelocd manually, either include /etc/vx/bin in your PATH or specify vxrelocd’s absolute pathname, for example: # PATH=/etc/vx/bin:$PATH # export PATH # nohup vxrelocd root & or # nohup /etc/vx/bin/vxrelocd root user1 user2 & See the vxrelocd (1M) manual page for more information.
10. Administering Cluster Functionality
Introduction
A cluster consists of a number of hosts or nodes that share a set of disks. The main benefits of cluster configurations are:
• Availability—If one node fails, the other nodes can still access the shared disks. When configured with suitable software, mission-critical applications can continue running by transferring their execution to a standby node in the cluster.
Overview of Cluster Volume Management
In recent years, tightly coupled cluster systems have become increasingly popular in the realm of enterprise-scale mission-critical data processing. The primary advantage of clusters is protection against hardware failure. Should the primary node fail or otherwise become unavailable, applications can continue to run by transferring their execution to standby nodes in the cluster.
Administering Cluster Functionality Overview of Cluster Volume Management joining or leaving the cluster, and which have failed. The private network requires at least two communication channels to provide redundancy against one of the channels failing. If only one channel were used, its failure would be indistinguishable from node failure—a condition known as network partitioning.
Administering Cluster Functionality Overview of Cluster Volume Management NOTE You must run commands that configure or reconfigure VxVM objects on the master node. Tasks that must be initiated from the master node include setting up shared disk groups, creating and reconfiguring volumes, and performing snapshot operations. VxVM determines that the first node to join a cluster performs the function of master node. If the master node leaves a cluster, one of the slave nodes is chosen to be the new master.
Administering Cluster Functionality Overview of Cluster Volume Management Each physical disk is marked with a unique disk ID. When cluster functionality for VxVM starts on the master, it imports all shared disk groups (except for any that have the noautoimport attribute set). When a slave tries to join a cluster, the master sends it a list of the disk IDs that it has imported, and the slave checks to see if it can access them all.
Administering Cluster Functionality Overview of Cluster Volume Management NOTE The default activation mode for shared disk groups is off (inactive). Special uses of clusters, such as high availability (HA) applications and off-host backup, can use disk group activation to explicitly control volume access from different nodes in the cluster. The activation mode of a disk group controls volume I/O from different nodes in the cluster.
Table 10-2 summarizes the allowed and conflicting activation modes for shared disk groups:
Table 10-2 Allowed and Conflicting Activation Modes
(The rows of the table give the mode in which a disk group is activated in the cluster; the columns give the result of an attempt to activate the disk group on another node in each mode.)
NOTE If the default activation mode is anything other than off, an activation following a cluster join, or a disk group creation or import, can fail if another node in the cluster has activated the disk group in a conflicting mode. To display the activation mode for a shared disk group, use the vxdg list diskgroup command as described in “Listing Shared Disk Groups” on page 361.
Administering Cluster Functionality Overview of Cluster Volume Management See “Setting the Connectivity Policy on a Shared Disk Group” on page 366 for information on how to use the vxedit command to set the connectivity policy on a shared disk group. Limitations of Shared Disk Groups The cluster functionality of VxVM does not support RAID-5 volumes, or task monitoring for cluster-shareable disk groups.
Cluster Initialization and Configuration
Before any nodes can join a new cluster for the first time, you must supply certain configuration information during cluster monitor setup. This information is normally stored in some form of cluster monitor configuration database. The precise content and format of this information depends on the characteristics of the cluster monitor.
Administering Cluster Functionality Cluster Initialization and Configuration Cluster Reconfiguration A cluster reconfiguration occurs if a node leaves or joins a cluster. Each node’s cluster monitor continuously watches the other cluster nodes. When the membership of the cluster changes, the cluster monitor calls the vxclustd cluster reconfiguration daemon. The vxclustd daemon coordinates cluster reconfigurations and provides communication between VxVM and the cluster monitor.
Administering Cluster Functionality Cluster Initialization and Configuration Registration also sets up a callback mechanism for the cluster monitor to notify the vxclustd daemon when cluster membership changes. After initializing kernel cluster variables, the vxclustd daemon waits for a callback from the cluster monitor. When the vxclustd daemon obtains membership information from the cluster monitor, it validates the membership change, and provides the new membership to the kernel.
Administering Cluster Functionality Cluster Initialization and Configuration example, vxconfigd rejects an attempt to create a new disk group with the same name as an existing disk group. The vxconfigd daemon on the master node then sends details of the changes to the vxconfigd daemons on the slave nodes. The vxconfigd daemons on the slave nodes then perform their own checking.
Administering Cluster Functionality Cluster Initialization and Configuration running on the same node; it does not attempt to connect with vxconfigd daemons on other nodes. During cluster startup, the kernel prompts vxconfigd to begin cluster operation and indicates whether it is a master node or a slave node.
Different actions are taken depending on the node on which the vxconfigd daemon is stopped:
• If the vxconfigd daemon is stopped on the master node, the vxconfigd daemons on the slave nodes periodically attempt to rejoin the master node. Such attempts do not succeed until the vxconfigd daemon is restarted on the master.
Administering Cluster Functionality Cluster Initialization and Configuration Node Shutdown Although it is possible to shut down the cluster on a node by invoking the shutdown procedure of the node’s cluster monitor, this procedure is intended for terminating cluster components after stopping any applications on the node that have access to shared storage. VxVM supports clean node shutdown, which allows a node to leave the cluster gracefully when all access to shared volumes has ceased.
Administering Cluster Functionality Cluster Initialization and Configuration NOTE Once shutdown succeeds, the node has left the cluster. It is not possible to access the shared volumes until the node joins the cluster again. Since shutdown can be a lengthy process, other reconfiguration can take place while shutdown is in progress. Normally, the shutdown attempt is suspended until the other reconfiguration completes. However, if it is already too far advanced, the shutdown may complete first.
Upgrading Cluster Functionality
The rolling upgrade feature allows you to upgrade the version of VxVM running in a cluster without shutting down the entire cluster. To install the new version of VxVM on a cluster, make one node leave the cluster, upgrade it, and then join it back into the cluster. This operation is repeated for each node in the cluster. Each Volume Manager release starting with Release 3.
Administering Cluster Functionality Upgrading Cluster Functionality Once you have installed the new release on all nodes, run the vxdctl upgrade command on the master node to switch the cluster to the higher cluster protocol version. See “Upgrading the Cluster Protocol Version” on page 369 for more information.
Dirty Region Logging (DRL) in Cluster Environments
Dirty region logging (DRL) is an optional property of a volume that provides speedy recovery of mirrored volumes after a system failure. DRL is supported in cluster-shareable disk groups. This section provides a brief overview of DRL and describes how DRL behaves in a cluster environment. For more information on DRL, see “Dirty Region Logging (DRL)” on page 49.
Administering Cluster Functionality Dirty Region Logging (DRL) in Cluster Environments gigabytes of volume size. For a 2-gigabyte volume in a 2-node cluster, a log size of 2 blocks (one block per map) is sufficient; this is the minimum log size. A 4-gigabyte volume in a 4-node cluster requires a log size of 10 blocks, and so on. It is possible to re-import a non-shared disk group (and its volumes) as a shared disk group in a cluster environment.
Administering Cluster Functionality Dirty Region Logging (DRL) in Cluster Environments VxVM tracks which nodes have crashed. If multiple node recoveries are underway in a cluster at a given time, their respective recoveries and recovery map updates can compete with each other. VxVM tracks changes in the state of DRL recovery and prevents I/O collisions.
Administering VxVM in Cluster Environments
The following sections describe procedures for administering the cluster functionality of VxVM.
NOTE Most VxVM commands require superuser or equivalent privileges.
Requesting the Status of a Cluster Node
The vxdctl utility controls the operation of the vxconfigd volume configuration daemon. The -c option can be used to request cluster information.
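For example, to determine whether a node is the cluster master or a slave, the mode keyword can be used; the output shown here is indicative only:
# vxdctl -c mode
mode: enabled: cluster active - MASTER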
A portion of the output of the vxdisk list command (for the device c4t1d0) is shown here:
Device:     c4t1d0
devicetag:  c4t1d0
type:       sliced
clusterid:  cvm2
disk:       name=disk01 id=963616090.1034.cvm2
timeout:    30
group:      name=rootdg id=963616065.1032.cvm2
flags:      online ready autoconfig shared imported
...
Note that the clusterid field is set to cvm2 (the name of the cluster), and the flags field includes an entry for shared.
Administering Cluster Functionality Administering VxVM in Cluster Environments To display information about one specific disk group, use the following command: # vxdg list diskgroup where diskgroup is the disk group name. For example, the output for the command vxdg list group1 on the master is as follows: Group: group1 dgid: 774222028.1090.teal import-id: 32768.
Administering Cluster Functionality Administering VxVM in Cluster Environments CAUTION The operating system cannot tell if a disk is shared. To protect data integrity when dealing with disks that can be accessed by multiple systems, use the correct designation when adding a disk to a disk group. VxVM allows you to add a disk that is not physically shared to a shared disk group if the node where the disk is accessible is the only node in the cluster.
Administering Cluster Functionality Administering VxVM in Cluster Environments # vxdg -s import diskgroup where diskgroup is the disk group name or ID. On subsequent cluster restarts, the disk group is automatically imported as shared. Note that it can be necessary to deport the disk group (using the vxdg deport diskgroup command) before invoking the vxdg utility. Forcibly Importing a Disk Group You can use the -f option to the vxdg command to import a disk group forcibly.
Administering Cluster Functionality Administering VxVM in Cluster Environments Moving Objects Between Disk Groups As described in “Moving Objects Between Disk Groups” on page 161, you can use the vxdg move command to move a self-contained set of VxVM objects such as disks and top-level volumes between disk groups. In a cluster, you can move such objects between private disk groups on any cluster node where those disk groups are imported.
Changing the Activation Mode on a Shared Disk Group
NOTE The activation mode for access by a cluster node to a shared disk group is set on that node.
The activation mode of a shared disk group can be changed using the following command:
# vxdg -g diskgroup set activation=mode
The activation mode is one of exclusive-write or ew, read-only or ro, shared-read or sr, shared-write or sw, or off.
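For example, a sketch that activates the hypothetical shared disk group sharedg in shared-read mode on the current node:
# vxdg -g sharedg set activation=sr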
Administering Cluster Functionality Administering VxVM in Cluster Environments When using the vxassist command to create a volume, you can use the exclusive=on attribute to specify that the volume may only be opened by one node in the cluster at a time. For example, to create the mirrored volume volmir in the disk group dskgrp, and configure it for exclusive open, use the following command: # vxassist -g dskgrp make volmir 5g layout=mirror exclusive=on Multiple opens by the same node are also supported.
Administering Cluster Functionality Administering VxVM in Cluster Environments # vxdctl list This command produces output similar to the following: version: 3/1 seqno: 0.
Administering Cluster Functionality Administering VxVM in Cluster Environments Upgrading the Cluster Protocol Version NOTE The cluster protocol version can only be updated on the master node. After all the nodes in the cluster have been updated with a new cluster protocol, you can upgrade the entire cluster using the following command on the master node: # vxdctl upgrade Recovering Volumes in Shared Disk Groups NOTE Volumes can only be recovered on the master node.
where node is an integer. If a comma-separated list of nodes is supplied, the vxstat utility displays the sum of the statistics for the nodes in the list. For example, to obtain statistics for node 2, volume vol1, use the following command:
# vxstat -g group1 -n 2 vol1
This command produces output similar to the following:
              OPERATIONS        BLOCKS           AVG TIME(ms)
TYP  NAME     READ    WRITE     READ     WRITE   READ    WRITE
vol  vol1     2421    0         600000   0       99.0    0.0
11. Configuring Off-Host Processing
Introduction
Off-host processing allows you to implement the following activities:
• Data Backup—As the requirement for 24 x 7 availability becomes essential for many businesses, organizations cannot afford the downtime involved in backing up critical data offline. By taking a snapshot of the data, and backing up from this snapshot, business-critical applications can continue to run without extended down time or impacted performance.
Configuring Off-Host Processing Introduction FastResync of Volume Snapshots NOTE You may need an additional license to use this feature. VxVM allows you to take multiple snapshots of your data at the level of a volume. A snapshot volume contains a stable copy of a volume’s data at a given moment in time that you can use for online backup or decision support. If FastResync is enabled on a volume, VxVM uses a FastResync map to keep track of which blocks are updated in the volume and in the snapshot.
Configuring Off-Host Processing Introduction option to resynchronize the snapshot plexes. You cannot use vxassist snapback for this purpose. This restriction does not apply if you split a snapshot volume into a separate disk group from its original volume, and subsequently return the snapshot volume to the original disk group. For more information, see “Volume Snapshots” on page 51 and “FastResync” on page 53. Disk Group Split and Join NOTE You may need an additional license to use this feature.
Implementing Off-Host Processing Solutions
As shown in Figure 11-1, “Example Implementation of Off-Host Processing,” by accessing snapshot volumes from a lightly loaded host (shown here as the OHP host), CPU- and I/O-intensive operations for online backup and decision support do not degrade the performance of the primary host that is performing the main production activity (such as running a database).
Configuring Off-Host Processing Implementing Off-Host Processing Solutions • “Implementing Decision Support” on page 380 These applications use the Persistent FastResync and disk group move, split and join features of VxVM in conjunction with volume snapshots. Implementing Online Backup This section describes a procedure for implementing off-host online backup for a volume in a private disk group.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions NOTE If the volume was created before release 3.2 of VxVM, and it has any attached snapshot plexes or it is associated with any snapshot volumes, follow the procedure given in “Enabling Persistent FastResync on Existing Volumes with Associated Snapshots” on page 288. Step 3.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions Use the nmirror attribute to create as many snapshot mirrors as you need for the snapshot volume. For a backup, you should usually only require the default of one. Step 4. If the volume to be backed up contains database tables in a file system, suspend updates to the volume. The database may have a hot backup mode that allows you to do this by temporarily suspending writes to its tables. Step 5.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions # vxvol -g snapvoldg start snapvol Step 11. On the OHP host, back up the snapshot volume. If you need to remount the file system in the volume to back it up, first run fsck on the volume.
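A sketch of what Step 11 might look like; the mount point /backup and the VxFS file system type are assumptions:
# fsck -F vxfs /dev/vx/rdsk/snapvoldg/snapvol
# mount -F vxfs /dev/vx/dsk/snapvoldg/snapvol /backup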
Implementing Decision Support
This section describes a procedure for implementing off-host decision support for a volume in a private disk group. The intention is to present an outline of how to set up a replica database by combining the Persistent FastResync and disk group split and join features of VxVM. It is beyond the scope of this guide to describe how to configure a database to use this procedure.
NOTE By default, VxVM attempts to avoid placing snapshot mirrors on a disk that already holds any plexes of a data volume. However, this may be impossible if insufficient space is available in the disk group. In this case, VxVM uses any available space on other disks in the disk group.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions If required, use the nmirrors attribute to specify the number of mirrors in the snapshot volume. If a database spans more than one volume, specify all the volumes and their snapshot volumes on the same line, for example: # vxassist -g dbasedg snapshot vol1 snapvol1 vol2 snapvol2 \ vol3 snapvol3 Step 7. On the primary host, release the tables from hot backup mode. Step 8.
Configuring Off-Host Processing Implementing Off-Host Processing Solutions Step 1. On the OHP host, shut down the replica database, and use the following command to unmount the snapshot volume: # umount mount_point Step 2. On the OHP host, use the following command to deport the snapshot volume’s disk group: # vxdg deport snapvoldg Step 3. On the primary host, re-import the snapshot volume’s disk group using the following command: # vxdg import snapvoldg Step 4.
12. Performance Monitoring and Tuning
Introduction
Volume Manager (VxVM) can improve overall system performance by optimizing the layout of data storage on the available hardware. This chapter contains guidelines for establishing performance priorities, for monitoring performance, and for configuring your system appropriately.
Performance Guidelines
VxVM allows you to optimize data storage performance using the following two strategies:
• Balance the I/O load among the available disk drives.
• Use striping and mirroring to increase I/O bandwidth to the most frequently accessed data.
VxVM also provides data redundancy (through mirroring and RAID-5) that allows continuous access to data in the event of disk failure.
Performance Monitoring and Tuning Performance Guidelines The figure “Use of Striping for Optimal Data Access” shows an example of a single volume (HotVol) that has been identified as a data-access bottleneck. This volume is striped across four disks, leaving the remaining space on these disks free for use by less-heavily used volumes.
Performance Monitoring and Tuning Performance Guidelines Mirroring and striping can be used together to achieve a significant improvement in performance when there are multiple I/O streams. Striping provides better throughput because parallel I/O streams can operate concurrently on separate devices. Serial access is optimized when I/O exactly fits across all stripe units in one stripe.
Volume Read Policies
To help optimize performance for different types of volumes, VxVM supports the following read policies on data plexes:
• round—a round-robin read policy, where all plexes in the volume take turns satisfying read requests to the volume.
• prefer—a preferred-plex read policy, where the plex with the highest performance usually satisfies read requests. If that plex fails, another plex is accessed.
otherwise lightly-used disks in PL1, as opposed to the single disk in plex PL2. (HotVol is an example of a mirrored-stripe volume in which one data plex is striped and the other data plex is concatenated.)
Performance Monitoring
As a system administrator, you have two sets of priorities for performance. One set is physical, concerned with hardware such as disks and controllers. The other set is logical, concerned with managing software and its operation.
Performance Monitoring and Tuning Performance Monitoring Tracing Volume Operations Use the vxtrace command to trace operations on specified volumes, kernel I/O object types or devices. The vxtrace command either prints kernel I/O errors or I/O trace records to the standard output or writes the records to a file in binary format. Binary trace records written to a file can also be read back and formatted by vxtrace.
Performance Monitoring and Tuning Performance Monitoring VxVM also maintains other statistical data. For each plex, it records read and write failures. For volumes, it records corrected read and write failures in addition to read and write failures. To reset the statistics information to zero, use the -r option. This can be done for all objects or for only those objects that are specified. Resetting just prior to an operation makes it possible to measure the impact of that particular operation.
Performance Monitoring and Tuning Performance Monitoring multiple purposes, try not to exercise any one application more than usual. When monitoring a time-sharing system with many users, let statistics accumulate for several hours during the normal working day. To display volume statistics, enter the vxstat command with no arguments.
Performance Monitoring and Tuning Performance Monitoring NOTE Your system may use a device name that differs from the examples. For more information on device names, see Chapter 2, “Administering Disks,” on page 65. The subdisks line (beginning sd) indicates that the archive volume is on disk disk03. To move the volume off disk03, use the following command: # vxassist move archive !disk03 dest_disk where dest_disk is the disk to which you want to move the volume.
Performance Monitoring and Tuning Performance Monitoring # vxplex -o rm dis archive-01 After reorganizing any particularly busy volumes, check the disk statistics. If some volumes have been reorganized, clear statistics first and then accumulate statistics for a reasonable period of time. If some disks appear to be excessively busy (or have particularly long read or write times), you may want to reconfigure some volumes.
Performance Monitoring and Tuning Performance Monitoring Using I/O Tracing I/O statistics provide the data for basic performance analysis; I/O traces serve for more detailed analysis. With an I/O trace, focus is narrowed to obtain an event trace for a specific workload. This helps to explicitly identify the location and size of a hot spot, as well as which application is causing it. Using data from I/O traces, real work loads on disks can be simulated and the results traced.
Tuning VxVM
This section describes how to adjust the tunable parameters that control the system resources used by VxVM. Depending on the system resources that are available, adjustments may be required to the values of some tunable parameters to optimize performance.
General Tuning Guidelines
VxVM is optimally tuned for most configurations ranging from small systems to larger servers.
Performance Monitoring and Tuning Tuning VxVM A general recommendation for users of disk array subsystems is to create a single disk group for each array so the disk group can be physically moved as a unit between systems. Number of Configuration Copies for a Disk Group Selection of the number of configuration copies for a disk group is based on a trade-off between redundancy and performance.
Performance Monitoring and Tuning Tuning VxVM The values of system tunables can be examined by selecting Kernel Configuration > Configuration Parameters in the System Administration Manager (SAM). Tunable Parameters The following sections describe specific tunable parameters. dmp_pathswitch_blks_shift The number of contiguous I/O blocks (expressed as an integer power of 2) that are sent along a DMP path to an Active/Active array before switching to the next available path.
Performance Monitoring and Tuning Tuning VxVM The default for this tunable is 50 ticks. Increasing this value results in slower recovery operations and consequently lower system impact while recoveries are being performed. vol_fmr_logsz The maximum size in kilobytes of the bitmap that Non-Persistent FastResync uses to track changed blocks in a volume.
Performance Monitoring and Tuning Tuning VxVM vol_max_vol The maximum number of volumes that can be created on the system. This value can be set to between 1 and the maximum number of minor numbers representable in the system. The default value for this tunable is 16777215. vol_maxio The maximum size of logical I/O operations that can be performed without breaking up the request. I/O requests to VxVM that are larger than this value are broken up and performed synchronously.
vol_maxparallelio
The number of I/O operations that the vxconfigd (1M) daemon is permitted to request from the kernel in a single VOL_VOLDIO_READ or VOL_VOLDIO_WRITE ioctl call. The default value for this tunable is 256. It is not desirable to change this value.
vol_maxspecialio
The maximum size of an I/O request that can be issued by an ioctl call. Although the ioctl request itself can be small, it can request a large I/O request be performed.
Performance Monitoring and Tuning Tuning VxVM volcvm_smartsync If set to 0, volcvm_smartsync disables SmartSync on shared disk groups. If set to 1, this parameter enables the use of SmartSync with shared disk groups. See“SmartSync Recovery Accelerator” on page 62 for more information. voldrl_max_drtregs The maximum number of dirty regions that can exist for non-sequential DRL on a volume. A larger value may result in improved system performance at the expense of recovery time.
Performance Monitoring and Tuning Tuning VxVM voliomem_maxpool_sz The maximum memory requested from the system by VxVM for internal purposes. This tunable has a direct impact on the performance of VxVM as it prevents one I/O operation from using all the memory in the system. VxVM allocates two pools that can grow up to voliomem_maxpool_sz, one for RAID-5 and one for mirrored volumes.
Performance Monitoring and Tuning Tuning VxVM If trace data is often being lost due to this buffer size being too small, then this value can be tuned to a more generous amount. voliot_iobuf_limit The upper limit to the size of memory that can be used for storing tracing buffers in the kernel. Tracing buffers are used by the VxVM kernel to store the tracing event records. As trace buffers are requested to be stored in the kernel, the memory for them is drawn from this pool.
volraid_rsrtransmax
The maximum number of transient reconstruct operations that can be performed in parallel for RAID-5. A transient reconstruct operation is one that occurs on a non-degraded RAID-5 volume and that was not predicted. Limiting the number of these operations that can occur simultaneously removes the possibility of flooding the system with many reconstruct operations, and so reduces the risk of causing memory starvation.
A Commands Summary This appendix summarizes the usage and purpose of important commands in VERITAS Volume Manager (VxVM). References are included to longer descriptions in the remainder of this book.
For detailed information about the usage of these commands, refer to the appropriate manual page in the 1M section.

Table A-1 Obtaining Information About Objects in VxVM

vxdisk list [diskname]
    Lists disks under control of VxVM. See “Displaying Disk Information” on page 102.

vxdg list [diskgroup]
    Lists information about disk groups. See “Displaying Disk Group Information” on page 134.

vxdg -s list
    Lists information about shared disk groups in a cluster. See “Creating Volumes with Exclusive Open Access by a Node” on page 366.
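For example, a quick inventory of the objects on a host might run as follows (the disk name is invented for illustration):

    # vxdisk list
    # vxdisk list c0t0d0
    # vxdg list
    # vxdg -s list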
Table A-2 Administering Disks (Continued)

vxedit rename olddisk newdisk
    Renames a disk under control of VxVM. See “Renaming a Disk” on page 100.

vxedit set reserve=on|off diskname
    Sets aside/does not set aside a disk from use in a disk group. See “Reserving Disks” on page 101.

vxedit set nohotuse=on|off diskname
    Does not/does allow free space on a disk to be used for hot-relocation.
Table A-3 Creating and Administering Disk Groups (Continued)

vxdg -s init diskgroup \
  [diskname=]devicename
    Creates a shared disk group in a cluster using a pre-initialized disk. See “Creating a Shared Disk Group” on page 362.

vxdg [-n newname] deport diskgroup
    Deports a disk group and optionally renames it. See “Deporting a Disk Group” on page 141.

vxdg [-n newname] import diskgroup
    Imports a disk group and optionally renames it.
Table A-3 Creating and Administering Disk Groups (Continued)

vxrecover -g diskgroup -sb
    Starts all volumes in an imported disk group. See “Moving Disk Groups Between Systems” on page 148.

vxdg destroy diskgroup
    Destroys a disk group and releases its disks. See “Destroying a Disk Group” on page 169.

Table A-4 Creating and Administering Subdisks

vxmake sd subdisk diskname,offset,length
    Creates a subdisk. See “Creating Subdisks” on page 179.
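For example, a subdisk named mydg02-01 of length 8000 blocks, starting at offset 0 on disk mydg02, might be created as follows (all names and sizes are invented for illustration):

    # vxmake sd mydg02-01 mydg02,0,8000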
Table A-4 Creating and Administering Subdisks (Continued)

vxunreloc [-g diskgroup] original_disk
    Relocates subdisks to their original disks. See “Moving and Unrelocating Subdisks using vxunreloc” on page 331.

vxsd dis subdisk
    Dissociates a subdisk from a plex. See “Dissociating Subdisks from Plexes” on page 187.

vxedit rm subdisk
    Removes a subdisk. See “Removing Subdisks” on page 188.

vxsd -o rm dis subdisk
    Dissociates and removes a subdisk from a plex.
Table A-5 Creating and Administering Plexes (Continued)

vxplex mv oldplex newplex
    Replaces a plex. See “Moving Plexes” on page 205.

vxplex cp volume newplex
    Copies a volume onto a plex. See “Copying Plexes” on page 206.

vxplex fix clean plex
    Sets the state of a plex in an unstartable volume to CLEAN. See “Reattaching Plexes” on page 204.

vxplex -o rm dis plex
    Dissociates and removes a plex from a volume.
Table A-6 Creating Volumes (Continued)

vxassist make volume length \
  layout=stripe|raid5 \
  [stripeunit=W] [ncol=N] [attributes]
    Creates a striped or RAID-5 volume. See “Creating a Striped Volume” on page 233 and “Creating a RAID-5 Volume” on page 238.

vxassist make volume length \
  layout=layout mirror=ctlr [attributes]
    Creates a volume with mirrored data plexes on separate controllers. See “Mirroring across Targets, Controllers or Enclosures” on page 236.
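Concrete invocations of these forms might look like the following (volume names, sizes, and attribute values are invented for illustration):

    # vxassist make stripevol 10g layout=stripe ncol=4 stripeunit=64k
    # vxassist make raidvol 20g layout=raid5
    # vxassist make mirvol 10g layout=mirror mirror=ctlr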
Table A-7 Administering Volumes (Continued)

vxassist remove log volume [attributes]
    Removes a log from a volume. See “Removing a DCO and DCO Volume” on page 267, “Removing a DRL Log” on page 270 and “Removing a RAID-5 Log” on page 273.

vxvol set fastresync=on|off volume
    Turns FastResync on or off for a volume. See “Adding a RAID-5 Log” on page 271.

vxassist growto volume length
    Grows a volume to a specified size. See “Resizing Volumes using vxassist” on page 276.
Table A-7 Administering Volumes (Continued)

vxassist snapclear snapshot
    Makes the snapshot volume independent. See “Dissociating a Snapshot Volume (snapclear)” on page 301.

vxassist [-g diskgroup] relayout volume \
  [layout=layout] [relayout_options]
    Performs online relayout of a volume. See “Performing Online Relayout” on page 304.
Table A-8 Monitoring and Controlling Tasks

vxcommand -t tasktag [options] [arguments]
    Specifies a task tag to a command. See “Specifying Task Tags” on page 253.

vxtask [-h] list
    Lists tasks running on a system. See “vxtask Usage” on page 255.

vxtask monitor task
    Monitors the progress of a task. See “vxtask Usage” on page 255.

vxtask pause task
    Suspends operation of a task. See “vxtask Usage” on page 255.

vxtask -p list
    Lists all paused tasks.
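A short illustrative session tying these commands together (the task tag and volume name are invented, and vxrecover stands in here for any VxVM command that accepts a task tag):

    # vxrecover -t myrecovery -b vol01
    # vxtask monitor myrecovery
    # vxtask pause myrecovery
    # vxtask -p list
    # vxtask resume myrecovery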