Veritas Volume Manager 5.0 Administrator’s Guide HP-UX 11i v2 First Edition Manufacturing Part Number: 5991-5512 September 2006 Printed in the United States © Copyright 2006 Hewlett-Packard Development Company L.P.
Legal Notices Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Chapter 1
Understanding Veritas Volume Manager
Veritas™ Volume Manager (VxVM) by Symantec is a storage management subsystem that allows you to manage physical disks as logical devices called volumes. A VxVM volume appears to applications and the operating system as a physical disk on which file systems, databases and other managed data objects can be configured. VxVM provides easy-to-use online disk storage management for computing environments and Storage Area Network (SAN) environments.
18 Understanding Veritas Volume Manager ■ Hot-relocation ■ Volume sets Further information on administering Veritas Volume Manager may be found in the following documents: ■ Veritas Storage Foundation Cross-Platform Data Sharing Administrator’s Guide Provides more information on using the Cross-platform Data Sharing (CDS) feature of Veritas Volume Manager, which allows you to move VxVM disks and objects between machines that are running under different operating systems.
VxVM and the operating system
VxVM operates as a subsystem between your operating system and your data management systems, such as file systems and database management systems. VxVM is tightly coupled with the operating system. Before a disk can be brought under VxVM control, the disk must be accessible through the operating system device interface.
How VxVM handles storage management
VxVM uses two types of objects to handle storage management: physical objects and virtual objects.
■ Physical objects—physical disks or other hardware with block and raw operating system device interfaces that are used to store data.
■ Virtual objects—When one or more physical disks are brought under the control of VxVM, it creates virtual objects called volumes on those physical disks.
Understanding Veritas Volume Manager How VxVM handles storage management Disk arrays Performing I/O to disks is a relatively slow process because disks are physical devices that require time to move the heads to the correct position on the disk before reading or writing. If all of the read or write operations are done to individual disks, one at a time, the read-write time can become unmanageable. Performing these operations on multiple disks can help to reduce this problem.
22 Understanding Veritas Volume Manager How VxVM handles storage management array, make up multiple hardware paths to access the disk devices. Such disk arrays are called multipathed disk arrays. This type of disk array can be connected to host systems in many different configurations, (such as multiple ports connected to different controllers on a single host, chaining of the ports through a single controller on a host, or ports connected to different hosts simultaneously).
Figure 1-3 Example configuration for disk enclosures connected via a fibre channel hub or switch (a host with controller c1 connected through a Fibre Channel hub or switch to disk enclosures enc0, enc1 and enc2)
In such a configuration, enclosure-based naming can be used to refer to each disk within an enclosure. For example, the device names for the disks in enclosure enc0 are named enc0_0, enc0_1, and so on.
24 Understanding Veritas Volume Manager How VxVM handles storage management host and one of the hubs. In this example, each disk is known by the same name to VxVM for all of the paths over which it can be accessed. For example, the disk device enc0_0 represents a single disk for which two different paths are known to the operating system, such as c1t99d0 and c2t99d0.
Understanding Veritas Volume Manager How VxVM handles storage management The connection between physical objects and VxVM objects is made when you place a physical disk under VxVM control. After installing VxVM on a host system, you must bring the contents of physical disks under VxVM control by collecting the VM disks into disk groups and allocating the disk group space to create logical volumes. Note: To bring the physical disk under VxVM control, the disk must not be under LVM control.
Figure 1-5 Connection between objects in VxVM (a disk group containing physical disks devname1, devname2 and devname3, initialized as VM disks disk01, disk02 and disk03; subdisks disk01-01, disk02-01 and disk03-01 are created on the VM disks, combined into plexes vol01-01, vol02-01 and vol02-02, which in turn make up the volumes vol01 and vol02)
The various types of virtual objects (disk groups, VM disks, subdisks, plexes and volumes) are described in the sections that follow.
In releases prior to VxVM 4.0, the default disk group was rootdg (the root disk group). For VxVM to function, the rootdg disk group had to exist and it had to contain at least one disk. This requirement no longer exists, and VxVM can work without any disk groups configured (although you must set up at least one disk group before you can create any volumes or other VxVM objects).
28 Understanding Veritas Volume Manager How VxVM handles storage management Subdisks A subdisk is a set of contiguous disk blocks. A block is a unit of space on the disk. VxVM allocates disk space using subdisks. A VM disk can be divided into one or more subdisks. Each subdisk represents a specific portion of a VM disk, which is mapped to a specific region of a physical disk.
Understanding Veritas Volume Manager How VxVM handles storage management Figure 1-8 Example of three subdisks assigned to one VM Disk disk01-01 disk01-02 disk01-03 disk01-01 disk01-02 disk01-03 Subdisks VM disk with three subdisks disk01 Any VM disk space that is not part of a subdisk is free space. You can use free space to create new subdisks. VxVM release 3.0 or higher supports the concept of layered volumes in which subdisks can contain volumes.
30 Understanding Veritas Volume Manager How VxVM handles storage management ■ striping (RAID-0) ■ mirroring (RAID-1) ■ striping with parity (RAID-5) Concatenation, striping (RAID-0), mirroring (RAID-1) and RAID-5 are described in “Volume layouts in VxVM” on page 32. Volumes A volume is a virtual disk device that appears to applications, databases, and file systems like a physical disk device, but does not have the physical limitations of a physical disk device.
■ The subdisk disk01-01 is allocated from VM disk disk01.
Figure 1-10 Example of a volume with one plex (volume vol01 contains a single plex, vol01-01, which is built from the single subdisk disk01-01)
In Figure 1-11 a volume, vol06, with two data plexes is mirrored. Each plex of the mirror contains a complete copy of the volume data.
Volume layouts in VxVM
A VxVM virtual device is defined by a volume. A volume has a layout defined by the association of a volume to one or more plexes, each of which maps to one or more subdisks. The volume presents a virtual device interface that is exposed to other applications for data access. These logical building blocks re-map the volume address space through which I/O is re-directed at run-time.
Understanding Veritas Volume Manager Volume layouts in VxVM Layout methods Data in virtual objects is organized to create volumes by using the following layout methods: ■ Concatenation and spanning ■ Striping (RAID-0) ■ Mirroring (RAID-1) ■ Striping plus mirroring (mirrored-stripe or RAID-0+1) ■ Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10) ■ RAID-5 (striping with parity) The following sections describe each layout method.
Figure 1-12 Example of concatenation (data blocks n, n+1, n+2 and n+3 are stored in subdisks disk01-01 and disk01-03, which are concatenated into a single plex; the subdisks are allocated from VM disk disk01 on physical disk devname)
You can use concatenation with multiple subdisks when there is insufficient contiguous space for the plex on any one disk.
Figure 1-13 Example of spanning (data blocks n, n+1, n+2 and n+3 are stored in subdisks disk01-01 and disk02-01, which are concatenated into a single plex that spans the VM disks disk01 and disk02 on physical disks devname1 and devname2)
Caution: Spanning a plex across multiple disks increases the chance that a disk failure results in failure of the assigned volume.
36 Understanding Veritas Volume Manager Volume layouts in VxVM Striping (RAID-0) is useful if you need large amounts of data written to or read from physical disks, and performance is important. Striping is also helpful in balancing the I/O load from multi-user applications across multiple disks. By using parallel data transfer to and from multiple disks, striping significantly improves data-access performance. Striping maps data so that the data is interleaved among two or more physical disks.
Figure 1-14 Striping across three columns (SU = stripe unit; stripe 1 consists of su1, su2 and su3, and stripe 2 of su4, su5 and su6, laid across subdisk 1 in column 0, subdisk 2 in column 1 and subdisk 3 in column 2 of the plex)
A stripe consists of the set of stripe units at the same positions across all columns. In the figure, stripe units 1, 2, and 3 constitute a single stripe.
Figure 1-15 shows a striped plex with three equal sized, single-subdisk columns. There is one column per physical disk. This example shows three subdisks that occupy all of the space on the VM disks. It is also possible for each subdisk in a striped plex to occupy only a portion of the VM disk, which leaves free space for other disk management tasks.
Figure 1-15 Example of a striped plex with one subdisk per column (stripe units su1 through su6 are distributed in turn across the three single-subdisk columns)
Figure: Example of a striped plex with concatenated subdisks per column (column 0 is built from subdisk disk01-01, column 1 from subdisks disk02-01 and disk02-02, and a third column from subdisks disk03-01, disk03-02 and disk03-03; the subdisks are allocated from VM disks disk01, disk02 and disk03 on physical disks devname1, devname2 and devname3)
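For example, a striped volume of this kind might be created with a vxassist command of the following form (a minimal sketch; the disk group mydg, the volume name, and the size and attribute values are illustrative assumptions rather than values taken from this guide):
# vxassist -g mydg make stripevol 10g layout=stripe ncol=3 stripeunit=64k
Here ncol specifies the number of columns and stripeunit specifies the stripe unit width.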
40 Understanding Veritas Volume Manager Volume layouts in VxVM Note: Although a volume can have a single plex, at least two plexes are required to provide redundancy of data. Each of these plexes must contain disk space from different disks to achieve redundancy. When striping or spanning across a large number of disks, failure of any one of those disks can make the entire plex unusable.
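As a simple illustration, a two-plex mirrored volume might be created as follows (the disk group and volume names and the size are assumptions, not values from this guide):
# vxassist -g mydg make mirrorvol 5g layout=mirror nmirror=2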
Understanding Veritas Volume Manager Volume layouts in VxVM The layout type of the data plexes in a mirror can be concatenated or striped. Even if only one is striped, the volume is still termed a mirrored-stripe volume. If they are all concatenated, the volume is termed a mirrored-concatenated volume. Mirroring plus striping (striped-mirror, RAID-1+0 or RAID-10) Note: You need a full license to use this feature. VxVM supports the combination of striping above mirroring.
42 Understanding Veritas Volume Manager Volume layouts in VxVM replaced, the entire plex must be brought up to date. Recovering the entire plex can take a substantial amount of time. If a disk fails in a striped-mirror layout, only the failing subdisk must be detached, and only that portion of the volume loses redundancy. When the disk is replaced, only a portion of the volume needs to be recovered.
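As a rough sketch, the two layout combinations might be requested as follows (the names and sizes are illustrative; see "Creating a mirrored-stripe volume" on page 246 and "Creating a striped-mirror volume" on page 247 for the documented procedures):
# vxassist -g mydg make msvol 10g layout=mirror-stripe ncol=3
# vxassist -g mydg make smvol 10g layout=stripe-mirror ncol=3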
Understanding Veritas Volume Manager Volume layouts in VxVM RAID-5 (striping with parity) Note: VxVM supports RAID-5 for private disk groups, but not for shareable disk groups in a cluster environment. In addition, VxVM does not support the mirroring of RAID-5 volumes that are configured using Veritas Volume Manager software. Disk devices that support RAID-5 in hardware may be mirrored. You need a full license to use this feature.
44 Understanding Veritas Volume Manager Volume layouts in VxVM Traditional RAID-5 arrays A traditional RAID-5 array is several disks organized in rows and columns. A column is a number of disks located in the same ordinal position in the array. A row is the minimal number of disks necessary to support the full width of a parity stripe. Figure 1-21 shows the row and column arrangement of a traditional RAID-5 array.
Figure 1-22 Veritas Volume Manager RAID-5 array (stripes 1 and 2 are laid across subdisks (SD) in columns 0, 1, 2 and 3)
Note: Mirroring of RAID-5 volumes is not supported.
See "Creating a RAID-5 volume" on page 248 for information on how to create a RAID-5 volume.
Left-symmetric layout
There are several layouts for data and parity that can be used in the setup of a RAID-5 array.
Figure 1-23 Left-symmetric layout (each row is a stripe across columns 0 to 4; P# denotes a parity stripe unit and the numbers denote data stripe units):
 0   1   2   3  P0
 5   6   7  P1   4
10  11  P2   8   9
15  P3  12  13  14
P4  16  17  18  19
For each stripe, data is organized starting to the right of the parity stripe unit. In the figure, data organization for the first stripe begins at P0 and continues to stripe units 0-3.
Understanding Veritas Volume Manager Volume layouts in VxVM RAID-5 logging Logging is used to prevent corruption of data during recovery by immediately recording changes to data and parity to a log area on a persistent device such as a volume on disk or in non-volatile RAM. The new data and parity are then written to the disks. Without logging, it is possible for data not involved in any active writes to be lost or silently corrupted if both a disk in a RAID-5 volume and the system fail.
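As an illustration of the point about logging, a RAID-5 volume is normally created together with at least one RAID-5 log; a minimal vxassist sketch might look like this (the disk group, volume name, size and column count are assumptions):
# vxassist -g mydg make r5vol 10g layout=raid5 ncol=4 nlog=1
See "Creating a RAID-5 volume" on page 248 for the documented procedure.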
mirror (plex) covers a smaller area of storage space, so recovery is quicker than with a standard mirrored volume.
can perform all necessary operations in the “Managed by User” area that includes the top-level volume and striped plex (for example, resizing the volume, changing the column width, or adding a column). System administrators can manipulate the layered volume structure for troubleshooting or other operations (for example, to place data on specific disks).
50 Understanding Veritas Volume Manager Online relayout Online relayout Note: You need a full license to use this feature. Online relayout allows you to convert between storage layouts in VxVM, with uninterrupted data access. Typically, you would do this to change the redundancy or performance characteristics of a volume. VxVM adds redundancy to storage either by duplicating the data (mirroring) or by adding parity (RAID-5).
Understanding Veritas Volume Manager Online relayout The following error message displays the number of blocks required if there is insufficient free space available in the disk group for the temporary area: tmpsize too small to perform this relayout (nblks minimum required) You can override the default size used for the temporary area by using the tmpsize attribute to vxassist. See the vxassist(1M) manual page for more information.
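For example, an online relayout that converts a volume to a RAID-5 layout, and that overrides the size of the temporary area, might be invoked as follows (the disk group, volume name and attribute values are illustrative assumptions):
# vxassist -g mydg relayout vol01 layout=raid5 ncol=4 tmpsize=50m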
Figure 1-27 Example of relayout of a RAID-5 volume to a striped volume
■ Change a volume to a RAID-5 volume (add parity). See Figure 1-28 for an example. Note that adding parity (shown by the shaded area) increases the overall storage space that the volume requires.
Understanding Veritas Volume Manager Online relayout For details of how to perform online relayout operations, see “Performing online relayout” on page 286. For information about the relayout transformations that are possible, see “Permitted relayout transformations” on page 287. Limitations of online relayout Note the following limitations of online relayout: ■ Log plexes cannot be transformed. ■ Volume snapshots cannot be taken when there is an online relayout operation running on the volume.
54 Understanding Veritas Volume Manager Online relayout redundancy by mirroring any temporary space used. Read and write access to data is not interrupted during the transformation. Data is not corrupted if the system fails during a transformation. The transformation continues after the system is restored and both read and write access are maintained. You can reverse the layout transformation process at any time, but the data may not be returned to the exact previous storage location.
Understanding Veritas Volume Manager Volume resynchronization Volume resynchronization When storing data redundantly and using mirrored or RAID-5 volumes, VxVM ensures that all copies of the data match exactly. However, under certain conditions (usually due to complete system failures), some redundant data on a volume can become inconsistent or unsynchronized. The mirrored data is not exactly the same as the original data.
56 Understanding Veritas Volume Manager Dirty region logging For large volumes or for a large number of volumes, the resynchronization process can take time. These effects can be addressed by using dirty region logging (DRL) and FastResync (fast mirror resynchronization) for mirrored volumes, or by ensuring that RAID-5 volumes have valid RAID-5 logs. See the sections “Dirty region logging” on page 56 and “FastResync” on page 62 for more information.
Understanding Veritas Volume Manager Dirty region logging Log subdisks and plexes DRL log subdisks store the dirty region log of a mirrored volume that has DRL enabled. A volume with DRL has at least one log subdisk; multiple log subdisks can be used to mirror the dirty region log. Each log subdisk is associated with one plex of the volume. Only one log subdisk can exist per plex. If the plex contains only a log subdisk and no data subdisks, that plex is referred to as a log plex.
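As a minimal sketch, DRL can be requested when a mirrored volume is created, or a DRL log can be added to an existing mirrored volume (the disk group and volume names and the size are assumptions):
# vxassist -g mydg make drlvol 10g layout=mirror logtype=drl nlog=1
# vxassist -g mydg addlog vol01 logtype=drl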
58 Understanding Veritas Volume Manager Dirty region logging Note: The SmartSync feature of Veritas Volume Manager is only applicable to databases that are configured on raw volumes. You cannot use it with volumes that contain file systems. Use an alternative solution such as the Oracle Resilvering feature of Veritas File System (VxFS). You must configure volumes correctly to use SmartSync.
Understanding Veritas Volume Manager Volume snapshots depends on dirty region logs, redo log volumes should be configured as mirrored volumes with sequential DRL. For additional information, see “Sequential DRL” on page 57. Volume snapshots Veritas Volume Manager provides the capability for taking an image of a volume at a given point in time. Such an image is referred to as a volume snapshot.
Figure 1-31 Volume snapshot as a point-in-time image of a volume (at time T1 only the original volume exists; the snapshot volume is created at time T2; at time T3 the snapshot volume still retains the image taken at time T2; at time T4 the snapshot volume is updated by resynchronizing it from the original volume)
The traditional type of volume snapshot in VxVM is of the third-mirror break-off type.
Understanding Veritas Volume Manager Volume snapshots For more information, see the following sections: ■ “Full-sized instant snapshots” on page 299. ■ “Space-optimized instant snapshots” on page 301. ■ “Emulation of third-mirror break-off snapshots” on page 302. ■ “Linked break-off snapshot volumes” on page 303. “Comparison of snapshot features” on page 61 compares the features that are supported by the different types of snapshot.
Table 1-1 Comparison of snapshot features for supported snapshot types
Snapshot feature                                             Full-sized instant (vxsnap)   Space-optimized instant (vxsnap)   Break-off (vxassist or vxsnap)
Can quickly be refreshed without being reattached            Yes                           Yes                                No
Snapshot hierarchy can be split                              Yes                           No                                 No
Can be moved into separate disk group from original volume   Yes                           No                                 Yes
Can be turned into an independent volume                     Yes                           No                                 Yes
FastResync ability persists across ...
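As an illustrative sketch of the vxsnap-based snapshots compared above, a volume might first be prepared for instant snapshot operations and then given a space-optimized instant snapshot (the disk group, volume and snapshot names and the cache size are assumptions):
# vxsnap -g mydg prepare myvol
# vxsnap -g mydg make source=myvol/newvol=snapmyvol/cachesize=500m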
Understanding Veritas Volume Manager FastResync FastResync enhancements FastResync provides two fundamental enhancements to VxVM: ■ FastResync optimizes mirror resynchronization by keeping track of updates to stored data that have been missed by a mirror. (A mirror may be unavailable because it has been detached from its volume, either automatically by VxVM as the result of an error, or directly by an administrator using a utility such as vxplex or vxassist.
64 Understanding Veritas Volume Manager FastResync Non-Persistent FastResync uses a map in memory to implement change tracking. Each bit in the map represents a contiguous number of blocks in a volume’s address space. The default size of the map is 4 blocks. The kernel tunable vol_fmr_logsz can be used to limit the maximum size in blocks of the map as described on “Tunable parameters” on page 463.
Understanding Veritas Volume Manager FastResync block map, you would specify dcolen=264. The maximum possible map size is 64 blocks, which corresponds to a dcolen value of 2112 blocks. Note: The size of a DCO plex is rounded up to the nearest integer multiple of the disk group alignment value. The alignment value is 8KB for disk groups that support the Cross-platform Data Sharing (CDS) feature. Otherwise, the alignment value is 1 block.
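For example, a mirrored volume with an attached DCO log of the length discussed above might be created with a command of the following form (a sketch only; the disk group, volume name and size are assumptions):
# vxassist -g mydg make dcovol 10g layout=mirror logtype=dco dcolen=264 fastresync=on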
66 Understanding Veritas Volume Manager FastResync Note: Full-sized and space-optimized instant snapshots, which are administered using the vxsnap command, are supported for a version 20 DCO volume layout. The use of the vxassist command to administer traditional (third-mirror break-off) snapshots is not supported for a version 20 DCO volume layout. How persistent FastResync works with snapshots Persistent FastResync uses a map in a DCO volume on disk to implement change tracking.
Figure 1-33 Mirrored volume after completion of a snapstart operation (the mirrored volume contains two data plexes, a snapshot plex and a data change object; the associated DCO volume contains two DCO plexes and a disabled DCO plex)
Multiple snapshot plexes and associated DCO plexes may be created in the volume by rerunning the vxassist snapstart command for traditional snapshots, or the vxsnap make command for space-optimized snapshots.
68 Understanding Veritas Volume Manager FastResync relationship between volumes and their snapshots even if they are moved into different disk groups. The snap objects in the original volume and snapshot volume are automatically deleted in the following circumstances: ■ For traditional snapshots, the vxassist snapback operation is run to return all of the plexes of the snapshot volume to the original volume.
Figure 1-34 Mirrored volume and snapshot volume after completion of a snapshot operation (the mirrored volume contains two data plexes, a data change object, a snap object and a DCO volume with two DCO log plexes; the snapshot volume contains a data plex, a data change object, a snap object and a DCO volume with one DCO log plex)
Effect of growing a volume on the FastResync map
It is possible to grow the replica volume, or the original volume, and still use FastResync.
70 Understanding Veritas Volume Manager FastResync In either case, the part of the map that corresponds to the grown area of the volume is marked as “dirty” so that this area is resynchronized. The snapback operation fails if it attempts to create an incomplete snapshot plex. In such cases, you must grow the replica volume, or the original volume, before invoking any of the commands vxsnap reattach, vxsnap restore, or vxassist snapback.
Understanding Veritas Volume Manager Hot-relocation completed. For more information, see the vxvol (1M), vxassist (1M), and vxplex (1M) manual pages. Hot-relocation Note: You need a full license to use this feature. Hot-relocation is a feature that allows a system to react automatically to I/O failures on redundant objects (mirrored or RAID-5 volumes) in VxVM and restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks.
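For example, a disk might be designated as a hot-relocation spare with a command of the following form (the disk group and disk media name are assumptions; see "Marking a disk as a hot-relocation spare" on page 377 for the documented procedure):
# vxedit -g mydg set spare=on mydg01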
Chapter 2 Administering disks This chapter describes the operations for managing disks used by the Veritas Volume Manager (VxVM). This includes placing disks under VxVM control, initializing disks, mirroring the root disk, and removing and replacing disks. Note: Most VxVM commands require superuser or equivalent privileges.
78 Administering disks Disk devices VxVM recreates disk devices, including those from the /dev/[r]dsk directories, as metadevices in the /dev/vx/[r]dmp directories. The dynamic multipathing (DMP) feature of VxVM uses these metadevices (or DMP nodes) to represent disks that can be accessed by more than one physical path, usually via different controllers.
Administering disks Disk devices ■ All fabric or non-fabric disks in supported disk arrays are named using the enclosure_name_# format. For example, disks in the supported disk array, enggdept are named enggdept_0, enggdept_1, enggdept_2 and so on. (You can use the vxdmpadm command to administer enclosure names. See “Administering DMP using vxdmpadm” on page 133 and the vxdmpadm(1M) manual page for more information.) ■ Disks in the DISKS category (JBOD disks) are named using the Disk_# format.
80 Administering disks Disk devices smallest copy of the configuration database on any of its member disks. public region An area that covers the remainder of the disk, and which is used for the allocation of storage space to subdisks. A disk’s type identifies how VxVM accesses a disk, and how it manages the disk’s private and public regions.
Administering disks Discovering and configuring newly added disk devices By default, auto-configured non-EFI disks are formatted as cdsdisk disks when they are initialized for use with VxVM. You can change the default format by using the vxdiskadm(1M) command to update the /etc/default/vxdisk defaults file as described in “Displaying and changing default disk layout attributes” on page 95.
82 Administering disks Discovering and configuring newly added disk devices partial device discovery.
Administering disks Discovering and configuring newly added disk devices Disks in JBODs that do not fall into any supported category, and which are not capable of being multipathed by DMP are placed in the OTHER_DISKS category.
84 Administering disks Discovering and configuring newly added disk devices See “Changing device naming for TPD-controlled enclosures” on page 91 for information on how to change the form of TPD device names that are displayed by VxVM. See “Displaying information about TPD-controlled devices” on page 137 for details of how to find out the TPD configuration information that is known to DMP. Autodiscovery of EMC Symmetrix arrays In VxVM 4.
Note: If any EMCpower disks are configured as foreign disks, use the vxddladm rmforeign command to remove the foreign definitions, as shown in this example:
# vxddladm rmforeign blockpath=/dev/dsk/emcpower10 \
charpath=/dev/rdsk/emcpower10
To allow DMP to receive correct inquiry data, the Common Serial Number (C-bit) Symmetrix Director parameter must be set to enabled.
This command displays the vendor ID (VID), product IDs (PIDs) for the arrays, array types (for example, A/A or A/P), and array names. The following is sample output:
# vxddladm listsupport libname=libvxfujitsu.so
ATTR_NAME    ATTR_VALUE
=================================================
LIBNAME      libvxfujitsu.so
Administering disks Discovering and configuring newly added disk devices Adding unsupported disk arrays to the DISKS category Caution: The procedure in this section ensures that Dynamic Multipathing (DMP) is set up correctly on an array that is not supported by Veritas Volume Manager. Otherwise, Veritas Volume Manager treats the independent paths to the disks as separate devices, which can result in data corruption.
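As a rough outline of the procedure, such an array can be added to the DISKS (JBOD) category by vendor ID and the result then listed; the SEAGATE value here matches the sample output shown below and is otherwise illustrative:
# vxddladm addjbod vid=SEAGATE
# vxddladm listjbod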
88 Administering disks Discovering and configuring newly added disk devices VID PID Opcode Page Code Page Offset SNO length ============================================================= SEAGATE ALL PIDs 18 -1 36 12 5 To verify that the array is recognized, use the vxdmpadm listenclosure command as shown in the following sample output for the example array: # vxdmpadm listenclosure all ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS ============================================================= OTHER_DISKS OTHER_DI
Administering disks Placing disks under VxVM control Adding foreign devices DDL may not be able to discover some devices that are controlled by third-party drivers, such as those that provide multipathing or RAM disk capabilities. For these devices it may be preferable to use the multipathing capability that is provided by the third-party drivers for some arrays rather than using the Dynamic Multipathing (DMP) feature.
90 Administering disks Changing the disk-naming scheme The method by which you place a disk under VxVM control depends on the circumstances: ■ If the disk is new, it must be initialized and placed under VxVM control. You can use the menu-based vxdiskadm utility to do this. Caution: Initialization destroys existing data on disks. ■ If the disk is not needed immediately, it can be initialized (but not added to a disk group) and reserved for future use.
Administering disks Changing the disk-naming scheme You can either use enclosure-based naming for disks or the operating system’s naming scheme (such as c#t#d#). Select menu item 20 from the vxdiskadm main menu to change the disk-naming scheme that you want VxVM to use. When prompted, enter y to change the naming scheme. This restarts the vxconfigd daemon to bring the new disk naming scheme into effect. Alternatively, you can change the naming scheme from the command line.
92 Administering disks Changing the disk-naming scheme Changing device naming for TPD-controlled enclosures Note: This feature is available only if the default disk-naming scheme is set to use operating system-based naming, and the TPD-controlled enclosure does not contain fabric disks. For disk enclosures that are controlled by third-party drivers (TPD) whose coexistence is supported by an appropriate ASL, the default behavior is to assign device names that are based on the TPD-assigned node names.
Discovering the association between enclosure-based disk names and OS-based disk names
If you enable enclosure-based naming, and use the vxprint command to display the structure of a volume, it shows enclosure-based disk device names (disk access names) rather than c#t#d# names.
Note: You cannot run vxdarestore if c#t#d# naming is in use. Additionally, vxdarestore does not handle failures on persistent simple/nopriv disks that are caused by renaming enclosures, by hardware reconfiguration that changes device names, or by removing support from the JBOD category for disks that belong to a particular vendor when enclosure-based naming is in use.
Administering disks Installing and formatting disks # /etc/vx/bin/vxdarestore 3 Re-import the disk group using the following command: # vxdg import diskgroup Installing and formatting disks Depending on the hardware capabilities of your disks and of your system, you may either need to shut down and power off your system before installing the disks, or you may be able to hot-insert the disks into the live system. Many operating systems can detect the presence of the new disks on being rebooted.
96 Administering disks Adding a disk to VxVM When initializing multiple disks at one time, it is possible to exclude certain disks or certain controllers. To exclude disks, list the names of the disks to be excluded in the file /etc/vx/disks.exclude before the initialization. You can exclude all disks on specific controllers from initialization by listing those controllers in the file /etc/vx/cntrls.exclude.
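For example, the exclude files might contain entries of the following form (the device and controller names are illustrative assumptions):
# cat /etc/vx/disks.exclude
c0t1d0
# cat /etc/vx/cntrls.exclude
c2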
Administering disks Adding a disk to VxVM c3t1d0 c3t2d0 c3t3d0 c3t8d0 c3t9d0 c3t10d0 c4t1d0 c4t2d0 c4t13d0 c4t14d0 . . . mydg03 mydg04 mydg05 mydg06 mydg07 mydg02 mydg08 TCd1-18238 - mydg mydg mydg mydg mydg mydg mydg TCg1-18238 - online online online online online online online online online invalid online Select disk devices to add: [,all,list,q,?] The phrase online invalid in the STATUS line indicates that a disk has yet to be added or initialized for VxVM control.
98 Administering disks Adding a disk to VxVM If the new disk group may be moved between different operating system platforms, enter y. Otherwise, enter n.
Administering disks Adding a disk to VxVM Reinitialize these devices? [y,n,q,?] (default: n) y VxVM INFO V-5-2-205 Initializing device device name. 13 You can now choose whether the disk is to be formatted as a CDS disk that is portable between different operating systems, or as a non-portable hpdisk-format disk: Enter the desired format [cdsdisk,hpdisk,q,?] (default: cdsdisk) Enter the format that is appropriate for your needs. In most cases, this is the default format, cdsdisk.
100 Administering disks Rootability Using vxdiskadd to place a disk under control of VxVM As an alternative to vxdiskadm, you can use the vxdiskadd command to put a disk under VxVM control. For example, to initialize the second disk on the first controller, use the following command: # vxdiskadd c0t1d0 The vxdiskadd command examines your disk to determine whether it has been initialized and also checks for disks that have been added to VxVM, and for other conditions.
Administering disks Rootability VxVM root disk volume restrictions Volumes on a bootable VxVM root disk have the following configuration restrictions: ■ All volumes on the root disk must be in the disk group that you choose to be the bootdg disk group. ■ The names of the volumes with entries in the LIF LABEL record must be standvol, rootvol, swapvol, and dumpvol (if present).
102 Administering disks Rootability Booting root volumes Note: At boot time, the system firmware provides you with a short time period during which you can manually override the automatic boot process and select an alternate boot device. For information on how to boot your system from a device other than the primary or alternate boot devices, and how to change the primary and alternate boot devices, see the HP-UX documentation and the boot(1M), pdc(1M) and isl(1M) manual pages.
Administering disks Rootability size of the file systems on the target disk. (This takes advantage of the fact that most of these file systems are usually nowhere near 100% full.) For example, to specify a size reduction of 30%, the following command would be used: # /etc/vx/bin/vxcp_lvmroot -R 30 -v -b c0t4d0 The verbose option, -v, is specified to give an indication of the progress of the operation. Caution: Only create a VxVM root disk if you also intend to mirror it.
104 Administering disks Rootability Creating an LVM root disk from a VxVM root disk Note: These procedures should be carried out at init level 1. In some circumstances, it may be necessary to boot the system from an LVM root disk. If an LVM root disk is no longer available or an existing LVM root disk is out-of-date, you can use the vxres_lvmroot command to create an LVM root disk on a spare physical disk that is not currently under LVM or VxVM control.
Administering disks Dynamic LUN expansion Dynamic LUN expansion Note: A Storage Foundation license is required to use the dynamic LUN expansion feature. The following form of the vxdisk command can be used to make VxVM aware of the new size of a virtual disk device that has been resized: # vxdisk [-f] [-g diskgroup] resize {accessname|medianame} \ [length=value] The device must have a SCSI interface that is presented by a smart switch, smart array or RAID controller.
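For example, after the LUN underlying a disk has been grown on the array, a command of the following form might be used to update VxVM's view of the device (the disk group and disk name are assumptions, not values from this guide):
# vxdisk -g mydg resize mydg03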
106 Administering disks Extended Copy Service Resizing a device that contains the only valid configuration copy for a disk group can result in data loss if a system crash occurs during the resize. Resizing a virtual disk device is a non-transactional operation outside the control of VxVM. This means that the resize command may have to be re-issued following a system crash. In addition, a system crash may leave the private region on the device in an unusable state.
2 Enable the Ecopy features in the array. This procedure is vendor-specific.
3 Install the vendor ASL that supports the Ecopy feature. Contact VITA@veritas.com for vendor ASL information.
Enabling Extended Copy Service for Hitachi arrays
To implement extended copy for the Hitachi 9900 and 9900V arrays, use the following command to create the two files, /etc/vx/user_pwwn_file and /etc/vx/user_luid_file, that contain identification information for the disks.
108 Administering disks Removing disks To prepare your system for the removal of the disk 1 Stop all activity by applications to volumes that are configured on the disk that is to be removed. Unmount file systems and shut down databases that are configured on the volumes. 2 Use the following command to stop the volumes: # vxvol [-g diskgroup] stop volume1 volume2 ... 3 Move the volumes to other disks or back up the volumes.
Administering disks Removing a disk from VxVM control Removing a disk with subdisks You can remove a disk on which some subdisks are defined. For example, you can consolidate all the volumes onto one disk. If you use the vxdiskadm program to remove a disk, you can choose to move volumes off that disk. To do this, run the vxdiskadm program and select item 2 (Remove a disk) from the main menu.
110 Administering disks Removing and replacing disks # /usr/lib/vxvm/bin/vxdiskunsetup c#t#d# Caution: The vxdiskunsetup command removes a disk from Veritas Volume Manager control by erasing the VxVM metadata on the disk. To prevent data loss, any data on the disk should first be evacuated from the disk. The vxdiskunsetup command should only be used by a system administrator who is trained and knowledgeable about Veritas Volume Manager.
Administering disks Removing and replacing disks No data on these volumes will be lost. The following volumes are in use, and will be disabled as a result of this operation: mkting Any applications using these volumes will fail future accesses. These volumes will require restoration from backup. Are you sure you want do this? [y,n,q,?] (default: n) To remove the disk, causing the named volumes to be disabled and data to be lost when the disk is replaced, enter y or press Return.
112 Administering disks Removing and replacing disks VxVM NOTICE V-5-2-260 Proceeding to replace mydg02 with device c0t1d0. 6 You can now choose whether the disk is to be formatted as a CDS disk that is portable between different operating systems, or as a non-portable hpdisk-format disk: Enter the desired format [cdsdisk,hpdisk,q,?] (default: cdsdisk) Enter the format that is appropriate for your needs. In most cases, this is the default format, cdsdisk.
Administering disks Removing and replacing disks 2 At the following prompt, enter the name of the disk to be replaced (or enter list for a list of disks): Replace a failed or removed disk Menu: VolumeManager/Disk/ReplaceDisk VxVM INFO V-5-2-479 Use this menu operation to specify a replacement disk for a disk that you removed with the “Remove a disk for replacement” menu operation, or that failed during use. You will be prompted for a disk name to replace and a disk device to use as a replacement.
114 Administering disks Enabling a disk Enter the format that is appropriate for your needs. In most cases, this is the default format, cdsdisk. 6 At the following prompt, vxdiskadm asks if you want to use the default private region size of 32768 blocks (32 MB). Press Return to confirm that you want to use the default value, or enter a different value. (The maximum value that you can specify is 524288 blocks.
Select a disk device to enable [<address>,list,q,?] c0t2d0
vxdiskadm enables the specified device.
3 At the following prompt, indicate whether you want to enable another device (y) or return to the vxdiskadm main menu (n):
Enable another device? [y,n,q,?] (default: n)
Taking a disk offline
There are instances when you must take a disk offline. If a disk is corrupted, you must disable the disk before removing it.
Renaming a disk
If you do not specify a VM disk name, VxVM gives the disk a default name when you add the disk to VxVM control. The VM disk name is used by VxVM to identify the location of the disk or the disk type.
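For example, a VM disk might be renamed with a command of the following form (the disk group and the old and new disk names are illustrative assumptions):
# vxedit -g mydg rename mydg01 dbdisk01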
does not use mydg03, even if there is no free space on any other disk. To turn off reservation of a disk, use the following command:
# vxedit [-g diskgroup] set reserve=off diskname
See the vxedit(1M) manual page for more information.
Displaying disk information
Before you use a disk, you need to know if it has been initialized and placed under VxVM control.
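As a quick sketch, the vxdisk list command reports this information, listing each recognized device together with its disk media name, disk group and status:
# vxdisk list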
118 Administering disks Displaying disk information Displaying disk information with vxdiskadm Displaying disk information shows you which disks are initialized, to which disk groups they belong, and the disk status. The list command displays device names for all recognized disks, the disk names, the disk group names associated with each disk, and the status of each disk. To display disk information 1 Start the vxdiskadm program, and select list (List disk information) from the main menu.
Chapter 3 Administering dynamic multipathing (DMP) Note: You need a full license to use this feature. The dynamic multipathing (DMP) feature of Veritas Volume Manager (VxVM) provides greater reliability and performance by using path failover and load balancing. This feature is available for multiported disk arrays from various vendors. How DMP works Multiported disk arrays can be connected to host systems through multiple paths.
122 Administering dynamic multipathing (DMP) How DMP works primary and secondary controller are each connected to a separate group of LUNs. If a single LUN in the primary controller’s LUN group fails, all LUNs in that group fail over to the secondary controller. Active/Passive arrays in explicit failover mode (or non-autotrespass mode) are termed A/PF arrays. DMP issues the appropriate low-level command to make the LUNs fail over to the secondary path.
Figure 3-1 How DMP represents multiple physical paths to a disk as one node (a host accesses a disk over multiple paths through controllers c1 and c2; DMP maps the multiple paths to a single DMP node that is presented to VxVM)
As described in "Enclosure-based naming" on page 23, VxVM implements a disk device naming scheme that allows you to recognize to which array a disk belongs.
124 Administering dynamic multipathing (DMP) How DMP works See “Changing the disk-naming scheme” on page 92 for details of how to change the naming scheme that VxVM uses for disk devices. See “Discovering and configuring newly added disk devices” on page 81 for a description of how to make newly added disk hardware known to a host system.
Administering dynamic multipathing (DMP) How DMP works DMP is also informed when a connection is repaired or restored, and when you add or remove devices after the system has been fully booted (provided that the operating system recognizes the devices correctly). If required, the response of DMP to I/O failure on a path can be tuned for the paths to individual arrays.
126 Administering dynamic multipathing (DMP) How DMP works You can use the vxdmpadm command to change the I/O policy for the paths to an enclosure or disk array as described in “Specifying the I/O policy” on page 141. DMP in a clustered environment Note: You need an additional license to use the cluster feature of VxVM. In a clustered environment where Active/Passive type disk arrays are shared by multiple hosts, all nodes in the cluster must access the disk via the same physical path.
Disabling and enabling multipathing for specific devices
You can use vxdiskadm menu options 17 and 18 to disable or enable multipathing. These menu options also allow you to exclude devices from, or include them in, the view of VxVM.
See "Disabling multipathing and making devices invisible to VxVM" on page 127.
See "Enabling multipathing and making devices visible to VxVM" on page 128.
128 Administering dynamic multipathing (DMP) Disabling and enabling multipathing for specific devices ◆ Select option 6 to disable multipathing for specified paths. The disks that correspond to a specified path are claimed in the OTHER_DISKS category and are not multipathed. ◆ Select option 7 to disable multipathing for disks that match a specified Vendor ID and Product ID.
Administering dynamic multipathing (DMP) Disabling and enabling multipathing for specific devices ◆ Select option 7 to enable multipathing for disks that match a specified Vendor ID and Product ID. ◆ Select option 8 to list the devices that are currently suppressed or not multipathed.
130 Administering dynamic multipathing (DMP) Enabling and disabling I/O for controllers and storage processors Enabling and disabling I/O for controllers and storage processors DMP allows you to turn off I/O for a controller or the array port of a storage processor so that you can perform administrative operations. This feature can be used for maintenance of HBA controllers on the host, or array ports that are attached to disk arrays supported by VxVM.
Administering dynamic multipathing (DMP) Displaying the paths to a disk The vxdmpadm command also provides useful information such as disk array serial numbers, which DMP devices (disks) are connected to the disk array, and which paths are connected to a particular controller, enclosure or array port. For more information, see “Administering DMP using vxdmpadm” on page 132. Displaying the paths to a disk The vxdisk command is used to display the multipathing information for a particular metadevice.
public: slice=0 offset=1152 len=4101723
private: slice=0 offset=128 len=1024
update: time=962923719 seqno=0.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm ■ Display information about devices that are controlled by third-party multipathing drivers. ■ Gather I/O statistics for a DMP node, enclosure, path or controller. ■ Configure the attributes of the paths to an enclosure. ■ Set the I/O policy that is used for the paths to an enclosure. ■ Enable or disable I/O for a path, HBA controller or array port on the system. ■ Upgrade disk controller firmware. ■ Rename an enclosure.
134 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Displaying the members of a LUN group The following command displays the DMP nodes that are in the same LUN group as a specified DMP node: # vxdmpadm getlungroup dmpnodename=c11t0d10 The above command displays output such as the following: NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME =============================================================== c11t0d8 ENABLED ACME 2 2 0 enc1 c11t0d9 ENABLED ACME 2 2 0 enc1 c11t0d10 ENABLE
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm You can also use getsubpaths to obtain information about all the paths that are connected to a port on an array.
The following command lists attributes for all enclosures in a system:
# vxdmpadm listenclosure all
The following is example output from this command:
ENCLR_NAME  ENCLR_TYPE  ENCLR_SNO             STATUS     ARRAY_TYPE
===============================================================
Disk        Disk        DISKS                 CONNECTED  Disk
ANA0        ACME        508002000001d660      CONNECTED  A/A
enc0        A3          60020f20000001a90000  CONNECTED  A/P
Displaying information about array ports
To display
emcpower10  auto:sliced  disk1   ppdg  online
emcpower11  auto:sliced  disk2   ppdg  online
emcpower12  auto:sliced  disk3   ppdg  online
emcpower13  auto:sliced  disk4   ppdg  online
emcpower14  auto:sliced  disk5   ppdg  online
emcpower15  auto:sliced  disk6   ppdg  online
emcpower16  auto:sliced  disk7   ppdg  online
emcpower17  auto:sliced  disk8   ppdg  online
emcpower18  auto:sliced  disk9   ppdg  online
emcpower19  auto:sliced  disk10  ppdg  online
The following command
138 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm The interval and count attributes may be used to specify the interval in seconds between displaying the I/O statistics, and the number of lines to be displayed. The actual interval may be smaller than the value specified if insufficient memory is available to record the statistics.
cpu usage = 8132us    per cpu memory = 4096b
            OPERATIONS       BYTES            AVG TIME(ms)
PATHNAME    READS  WRITES    READS   WRITES   READS     WRITES
c3t115d0    0      0         0       0        0.000000  0.000000
# vxdmpadm iostat show dmpnodename=c0t0d0
cpu usage = 8501us    per cpu memory = 4096b
            OPERATIONS       BYTES            AVG TIME(ms)
PATHNAME    READS  WRITES    READS   WRITES   READS     WRITES
c0t0d0      1088   0         557056  0        0.009542  0.
140 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm # vxdmpadm setattr path c1t20d0 pathtype=nopreferred ■ preferred [priority=N] Specifies a path as preferred, and optionally assigns a priority number to it. If specified, the priority number must be an integer that is greater than or equal to one. Higher priority numbers indicate that a path is able to carry a greater I/O load. Note: Setting a priority for path does not change the I/O policy.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm The next example displays the setting of partitionsize for the enclosure enc0, on which the balanced I/O policy with a partition size of 2MB has been set: # vxdmpadm getattr enclosure enc0 partitionsize ENCLR_NAME DEFAULT CURRENT --------------------------------------enc0 1024 2048 Specifying the I/O policy You can use the vxdmpadm setattr command to change the I/O policy for distributing I/O load across multiple paths to a disk arr
142 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm You can use the size argument to the partitionsize attribute to specify the partition size.
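For example, a command such as the following (a sketch matching the 2MB setting shown in the previous getattr example; the value is in the same units that vxdmpadm getattr displays) sets the balanced policy with that partition size on the enclosure enc0:
# vxdmpadm setattr enclosure enc0 iopolicy=balanced partitionsize=2048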
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm significant track cache does not exist. No further configuration is possible as DMP automatically determines the path with the shortest queue. The following example sets the I/O policy to minimumq for a JBOD: # vxdmpadm setattr enclosure Disk iopolicy=minimumq This is the default I/O policy for A/A arrays.
144 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Scheduling I/O on the paths of an Asymmetric Active/Active array You can specify the use_all_paths attribute in conjunction with the adaptive, balanced, minimumq, priority and round-robin I/O policies to specify whether I/O requests are to be scheduled on the secondary paths in addition to the primary paths of an Asymmetric Active/Active (A/A-A) array.
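For example, a command of the following form (the enclosure name is illustrative) might be used to allow I/O to be scheduled on the secondary paths as well as the primary paths of an A/A-A array:
# vxdmpadm setattr enclosure enc0 iopolicy=adaptive use_all_paths=yes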
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm
...
cpu usage = 11294us    per cpu memory = 32768b
                 OPERATIONS         KBYTES         AVG TIME(ms)
PATHNAME     READS   WRITES     READS   WRITES     READS      WRITES
c2t0d15          0        0         0        0     0.000000   0.000000
c2t1d15          0        0         0        0     0.000000   0.000000
c3t1d15          0        0         0        0     0.000000   0.000000
c3t2d15          0        0         0        0     0.000000   0.000000
c4t2d15          0        0         0        0     0.000000   0.000000
c4t3d15          0        0         0        0     0.000000   0.000000
c5t3d15          0        0         0        0     0.000000   0.000000
c5t4d15       5493        0      5493        0     0.411069   0.
146 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm The enclosure can be returned to the single active I/O policy by entering the following command: # vxdmpadm setattr enclosure ENC0 iopolicy=singleactive Disabling I/O for paths, controllers or array ports Note: From release 5.0 of VxVM, this operation is supported for controllers that are used to access disk arrays on which cluster-shareable disk groups are configured.
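For example, a command such as the following disables I/O through the HBA controller c2 (the controller name is illustrative):
# vxdmpadm disable ctlr=c2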
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Enabling a controller allows a previously disabled path, HBA controller or array port to accept I/O again. This operation succeeds only if the path, controller or array port is accessible to the host, and I/O can be performed on it. When connecting Active/Passive disk arrays, the enable operation results in failback of I/O to the primary path.
148 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm # /opt/VRTS/bin/vxdmpadm enable ctlr=second_cntlr 5 Re-enable the plex associated with the device: # /opt/VRTS/bin/vxplex -g diskgroup att volume plex This command takes some time depending upon the size of the mirror set.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm The value of the argument to retrycount specifies the number of retries to be attempted before DMP reschedules the I/O request on another available path, or fails the request altogether.
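For example, assuming the fixedretry keyword for the recoveryoption attribute, a command of the following form sets a fixed-retry limit of 5 for the paths to the enclosure enc0:
# vxdmpadm setattr enclosure enc0 recoveryoption=fixedretry retrycount=5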
150 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm # vxdmpadm getattr \ {enclosure enc-name|arrayname name|arraytype type}\ recoveryoption See “Displaying recoveryoption values” on page 151 for more information.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Specifying recoveryoption=default resets I/O throttling to the default settings corresponding to recoveryoption=throttle queuedepth=20, for example: # vxdmpadm setattr arraytype A/A recoveryoption=default This command also has the effect of configuring a fixed-retry limit of 5 on the paths. See “Configuring the response to I/O failures” on page 148 for details.
152 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Configuring DMP path restoration policies DMP maintains a kernel thread that re-examines the condition of paths at a specified interval. The type of analysis that is performed on the paths depends on the checking policy that is configured. Note: The DMP path restoration thread does not change the disabled state of the path through a controller that you have disabled using vxdmpadm disable.
Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm The interval attribute specifies how often the path restoration thread examines the paths. For example, after stopping the path restoration thread, the polling interval can be set to 400 seconds using the following command: # vxdmpadm start restore interval=400 Note: The default interval is 300 seconds. Decreasing this interval can adversely affect system performance.
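The checking policy can also be specified when the thread is started. For example, assuming that the policy attribute accepts the check_disabled keyword, the following command restricts checking to paths that were previously disabled due to hardware failures:
# vxdmpadm start restore policy=check_disabled interval=400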
154 Administering dynamic multipathing (DMP) Administering DMP using vxdmpadm Configuring array policy modules An array policy module (APM) is a dynamically loadable kernel module that may be provided by some vendors for use in conjunction with an array. An APM defines procedures to: ■ Select an I/O path when multiple paths to a disk within the array are available. ■ Select the path failover mechanism. ■ Select the alternate path in the case of a path failure. ■ Put a path change into effect.
Chapter 4 Creating and administering disk groups This chapter describes how to create and manage disk groups. Disk groups are named collections of disks that share a common configuration. Volumes are created within a disk group and are restricted to using disks within that disk group. Note: In releases of Veritas Volume Manager (VxVM) prior to 4.0, a system installed with VxVM was configured with a default disk group, rootdg, that had to contain at least one disk.
158 Creating and administering disk groups Specifying a disk group to commands groups at any time. Disks need not be added to disk groups until the disks are needed to create VxVM objects. When a disk is added to a disk group, it is given a name (for example, mydg02). This name identifies a disk for operations such as volume creation or mirroring. The name also relates directly to the underlying physical disk.
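For example, a command of the following form adds the disk device c1t1d0 to the disk group mydg under the disk name mydg02:
# vxdg -g mydg adddisk mydg02=c1t1d0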
Creating and administering disk groups Specifying a disk group to commands System-wide reserved disk groups The following disk group names are reserved, and cannot be used to name any disk groups that you create: bootdg Specifies the boot disk group. This is an alias for the disk group that contains the volumes that are used to boot the system. VxVM sets bootdg to the appropriate disk group if it takes control of the root disk. Otherwise, bootdg is set to nodg (no disk group; see below).
160 Creating and administering disk groups Displaying disk group information Caution: In releases of VxVM prior to 4.0, a subset of commands attempted to deduce the disk group by searching for the object name that was being operated upon by a command. This functionality is no longer supported. Scripts that rely on deducing the disk group from an object name may fail.
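For example, assuming the defaultdg keywords shown here, the following commands set the system-wide default disk group to mydg and then display the current setting:
# vxdctl defaultdg mydg
# vxdg defaultdg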
Creating and administering disk groups Displaying disk group information To display more detailed information on a specific disk group, use the following command: # vxdg list diskgroup The output from this command is similar to the following: Group: mydg dgid: 962910960.1025.bass import-id: 0.1 flags: version: 140 local-activation: read-write alignment : 512 (bytes) ssb: on detach-policy: local copies: nconfig=default nlog=default config: seqno=0.
162 Creating and administering disk groups Creating a disk group
mydg    mydg02    c0t11d0   c0t11d0   0   4443310   -
newdg   newdg01   c0t12d0   c0t12d0   0   4443310   -
newdg   newdg02   c0t13d0   c0t13d0   0   4443310   -
oradg   oradg01   c0t14d0   c0t14d0   0   4443310   -
To display free space for a disk group, use the following command:
# vxdg -g diskgroup free
where -g diskgroup optionally specifies a disk group.
Creating and administering disk groups Adding a disk to a disk group You can use the cds attribute with the vxdg init command to specify whether a new disk group is compatible with the Cross-platform Data Sharing (CDS) feature. In Veritas Volume Manager 4.0 and later releases, newly created disk groups are compatible with CDS by default (equivalent to specifying cds=on).
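For example, a command of the following form creates a CDS-compatible disk group named newdg that contains the disk device c1t2d0 under the disk name newdg01:
# vxdg init newdg newdg01=c1t2d0 cds=on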
164 Creating and administering disk groups Deporting a disk group Once the disk has been removed from its disk group, you can (optionally) remove it from VxVM control completely, as follows: # vxdiskunsetup devicename For example, to remove c1t0d0 from VxVM control, use these commands: # vxdiskunsetup c1t0d0 You can remove a disk on which some subdisks of volumes are defined. For example, you can consolidate all the volumes onto one disk.
Creating and administering disk groups Deporting a disk group # vxvol -g diskgroup stopall 3 Select menu item 8 (Remove access to (deport) a disk group) from the vxdiskadm main menu. 4 At the following prompt, enter the name of the disk group to be deported (in this example, newdg): Remove access to (deport) a disk group Menu: VolumeManager/Disk/DeportDiskGroup Use this menu operation to remove access to a disk group that is currently enabled (imported) by this system.
166 Creating and administering disk groups Importing a disk group Importing a disk group Importing a disk group enables access by the system to a disk group. To move a disk group from one system to another, first disable (deport) the disk group on the original system, and then move the disk between systems and enable (import) the disk group.
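For example, a minimal sequence for moving the disk group mydg would be to run the following command on the original system, move the disks, and then run the second command on the new system:
# vxdg deport mydg
# vxdg import mydg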
Creating and administering disk groups Handling disks with duplicated identifiers Handling disks with duplicated identifiers Advanced disk arrays provide hardware tools that you can use to create clones of existing disks outside the control of VxVM. For example, these disks may have been created as hardware snapshots or mirrors of existing disks in a disk group. As a result, the VxVM private region is also duplicated on the cloned disk.
168 Creating and administering disk groups Handling disks with duplicated identifiers # vxdisk updateudid c2t66d0 c2t67d0 Importing a disk group containing cloned disks By default, disks on which the udid_mismatch flag or the clone_disk flag has been set are not imported by the vxdg import command unless all disks in the disk group have at least one of these flags set, and no two of the disks have the same UDID.
Creating and administering disk groups Handling disks with duplicated identifiers when the disk group is imported. At least one of the cloned disks that are being imported must contain a copy of the current configuration database in its private region.
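For example, assuming the useclonedev and updateid import options described for this feature, a command of the following form imports only the cloned disks of the disk group mydg and assigns them updated UDIDs:
# vxdg -o useclonedev=on -o updateid import mydg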
170 Creating and administering disk groups Handling disks with duplicated identifiers
Enabling configuration database copies on tagged disks
In this example, the following commands have been used to tag some of the disks in an Hitachi TagmaStore array:
# vxdisk settag TagmaStore-USP0_28 t1=v1
# vxdisk settag TagmaStore-USP0_28 t2=v2
# vxdisk settag TagmaStore-USP0_24 t2=v2
# vxdisk settag TagmaStore-USP0_25 t1=v1
These tags can be viewed by using the vxdisk listtag command:
# vxdisk listtag
DEVICE Tag
Creating and administering disk groups Handling disks with duplicated identifiers Importing cloned disks without tags In this example, cloned disks (shadow image devices) from an Hitachi TagmaStore array are to be imported. The primary (non-cloned) disks, mydg01, mydg02 and mydg03, are already imported, and the cloned disks are not tagged.
172 Creating and administering disk groups Handling disks with duplicated identifiers Importing cloned disks with tags In this example, cloned disks (BCV devices) from an EMC Symmetrix DMX array are to be imported. The primary (non-cloned) disks, mydg01, mydg02 and mydg03, are already imported, and the cloned disks with the tag t1 are to be imported.
Creating and administering disk groups Renaming a disk group
In the next example, none of the disks (neither cloned nor non-cloned) have been imported:
# vxdisk -o alldgs list
DEVICE    TYPE           DISK   GROUP    STATUS
EMC0_4    auto:cdsdisk   -      (mydg)   online
EMC0_6    auto:cdsdisk   -      (mydg)   online
EMC0_8    auto:cdsdisk   -      (mydg)   online udid_mismatch
EMC0_15   auto:cdsdisk   -      (mydg)   online udid_mismatch
EMC0_18   auto:cdsdisk   -      (mydg)   online
EMC0_24   auto:cdsdisk   -      (mydg)   online udid_mismatch
To import only the cloned disks that have been tag
174 Creating and administering disk groups Renaming a disk group For example, this command renames the disk group, mydg, as myexdg, and deports it to the host, jingo: # vxdg -h jingo -n myexdg deport mydg Note: You cannot use this method to rename the active boot disk group because it contains volumes that are in use by mounted file systems (such as /). To rename the boot disk group, boot the system from an LVM root disk instead of from the VxVM root disk.
Creating and administering disk groups Moving disks between disk groups Here hostname is the name of the system whose rootdg is being returned (the system name can be confirmed with the command uname -n). This command removes the imported disk group from the importing host and returns locks to its original host. The original host can then automatically import its boot disk group at the next reboot.
176 Creating and administering disk groups Moving disk groups between systems 3 Import (enable local access to) the disk group on the second system with this command: # vxdg import diskgroup Caution: All disks in the disk group must be moved to the other system. If they are not moved, the import fails. 4 After the disk group is imported, start all volumes in the disk group with this command: # vxrecover -g diskgroup -sb You can also move disks from a system that has crashed.
Creating and administering disk groups Moving disk groups between systems Caution: Be careful when using the vxdisk clearimport or vxdg -C import command on systems that have dual-ported disks. Clearing the locks allows those disks to be accessed at the same time from multiple hosts and can result in corrupted data. A disk group can be imported successfully if all the disks that were visible when the disk group was last imported successfully are accessible.
178 Creating and administering disk groups Moving disk groups between systems When you move a disk group between systems, it is possible for the minor numbers that it used on its previous system to coincide (or collide) with those of objects known to VxVM on the new system. To get around this potential problem, you can allocate separate ranges of minor numbers for each disk group. VxVM uses the specified range of minor numbers when it creates volume objects from the disks in the disk group.
Creating and administering disk groups Moving disk groups between systems If a volume is open, its old device number remains in effect until the system is rebooted or until the disk group is deported and re-imported. If you close the open volume, you can run vxdg reminor again to allow the renumbering to take effect without rebooting or reimporting. An example of where it is necessary to change the base minor number is for a clustershareable disk group.
180 Creating and administering disk groups Handling conflicting configuration copies Note: Such a disk group may still not be importable by VxVM 4.0 on Linux with a pre-2.6 kernel if it would increase the number of minor numbers on the system that are assigned to volumes to more than 4079, or if the number of available extended major numbers is smaller than 15.
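The base minor number for a disk group is changed with the vxdg reminor operation. For example, a command of the following form (a sketch; see the vxdg(1M) manual page for the exact syntax) sets the base minor number of the disk group mydg to 4000:
# vxdg reminor mydg 4000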
Creating and administering disk groups Handling conflicting configuration copies Figure 4-1 Typical arrangement of a 2-node campus cluster Node 0 Redundant private network Node 1 Fibre Channel switches Disk enclosures enc0 Building A enc1 Building B A serial split brain condition typically arises in a cluster when a private (non-shared) disk group is imported on Node 0 with Node 1 configured as the failover node.
182 Creating and administering disk groups Handling conflicting configuration copies shared disk group, the actual serial IDs on the disks do not agree with the expected values from the configuration copies on other disks in the disk group.
Creating and administering disk groups Handling conflicting configuration copies Figure 4-3 Example of a true serial split brain condition that cannot be resolved automatically Partial disk group imported on host X Partial disk group imported on host Y Disk A Disk B Actual A = 1 Actual B = 1 Configuration database Configuration database Expected A = 1 Expected B = 0 Expected A = 0 Expected B = 1 Shared disk group fails to import Disk A Disk B Actual A = 1 Actual B = 1 Configuration database
184 Creating and administering disk groups Handling conflicting configuration copies Correcting conflicting configuration information Note: This procedure requires that the disk group has a version number of at least 110. See “Upgrading a disk group” on page 198 for more information about disk group version numbers. To resolve conflicting configuration information, you must decide which disk contains the correct version of the disk group configuration database.
Creating and administering disk groups Reorganizing the contents of disk groups c2t8d0( c2t8d0 ) || 0.1 || 0.0 ssb ids don’t match Please note that even though some disks ssb ids might match that does not necessarily mean that those disks’ config copies have all the changes. From some other configuration copies, those disks’ ssb ids might not match. To see the configuration from this disk, run /etc/vx/diag.
186 Creating and administering disk groups Reorganizing the contents of disk groups ■ move—moves a self-contained set of VxVM objects between imported disk groups. This operation fails if it would remove all the disks from the source disk group. Volume states are preserved across the move. The move operation is illustrated in Figure 4-4.
Creating and administering disk groups Reorganizing the contents of disk groups Figure 4-5 Disk group split operation Source disk group Disks to be split into new disk group Source disk group ■ After split New target disk group join—removes all VxVM objects from an imported disk group and moves them to an imported target disk group. The source disk group is removed when the join is complete. The join operation is illustrated in Figure 4-6.
188 Creating and administering disk groups Reorganizing the contents of disk groups Figure 4-6 Disk group join operation Source disk group Target disk group join After join Target disk group These operations are performed on VxVM objects such as disks or top-level volumes, and include all component objects such as sub-volumes, plexes and subdisks.
Creating and administering disk groups Reorganizing the contents of disk groups Limitations of disk group split and join The disk group split and join feature has the following limitations: ■ Disk groups involved in a move, split or join must be version 90 or greater. See “Upgrading a disk group” on page 198 for more information on disk group versions. ■ The reconfiguration must involve an integral number of physical disks. ■ Objects to be moved must not contain open volumes.
190 Creating and administering disk groups Reorganizing the contents of disk groups Listing objects potentially affected by a move To display the VxVM objects that would be moved for a specified list of objects, use the following command: # vxdg [-o expand] listmove sourcedg targetdg object ...
Creating and administering disk groups Reorganizing the contents of disk groups For more information about the layout of DCO volumes and their use with volume snapshots, see “FastResync” on page 66. For more information about the administration of volume snapshots, see “Volume snapshots” on page 63 and “Administering volume snapshots” on page 295.
192 Creating and administering disk groups Reorganizing the contents of disk groups Figure 4-7 Examples of disk groups that can and cannot be split Volume data plexes The disk group can be split as the DCO plexes are on dedicated disks, and can therefore accompany the disks that contain the volume data. Snapshot plex Split Volume DCO plexes Snapshot DCO plex Volume data plexes Snapshot plex The disk group cannot be split as the DCO plexes cannot accompany their volumes.
Creating and administering disk groups Reorganizing the contents of disk groups Moving objects between disk groups To move a self-contained set of VxVM objects from an imported source disk group to an imported target disk group, use the following command: # vxdg [-o expand] [-o override|verify] move sourcedg targetdg \ object ...
194 Creating and administering disk groups Reorganizing the contents of disk groups The following command moves the self-contained set of objects implied by specifying disk mydg01 from disk group mydg to rootdg: # vxdg -o expand move mydg rootdg mydg01 The moved volumes are initially disabled following the move. Use the following commands to recover and restart the volumes in the target disk group: # vxrecover -g targetdg -m [volume ...
Creating and administering disk groups Reorganizing the contents of disk groups # vxprint Disk group: rootdg TY NAME ASSOC dg rootdg rootdg dm rootdg01 c0t1d0 dm rootdg02 c1t97d0 dm rootdg03 c1t112d0 dm rootdg04 c1t114d0 dm rootdg05 c1t96d0 dm rootdg06 c1t98d0 dm rootdg07 c1t99d0 dm rootdg08 c1t100d0 v vol1 fsgen pl vol1-01 vol1 sd rootdg01-01 vol1-01 pl vol1-02 vol1 sd rootdg05-01 vol1-02 KSTATE ENABLED ENABLED ENABLED ENABLED ENABLED LENGTH 17678493 17678493 1767849 3 17678493 17678493 17678493 1767849
196 Creating and administering disk groups Reorganizing the contents of disk groups Joining disk groups To remove all VxVM objects from an imported source disk group to an imported target disk group, use the following command: # vxdg [-o override|verify] join sourcedg targetdg Note: You cannot specify rootdg as the source disk group for a join operation. For a description of the -o override and -o verify options, see “Moving objects between disk groups” on page 193.
Creating and administering disk groups Disabling a disk group The output from vxprint after the join shows that disk group mydg has been removed: # vxprint Disk group: rootdg TY NAME ASSOC dg rootdg rootdg dm mydg01 c0t1d0 dm rootdg02 c1t97d0 dm rootdg03 c1t112d0 dm rootdg04 c1t114d0 dm mydg05 c1t96d0 dm rootdg06 c1t98d0 dm rootdg07 c1t99d0 dm rootdg08 c1t100d0 v vol1 fsgen pl vol1-01 vol1 sd mydg01-01 vol1-01 pl vol1-02 vol1 sd mydg05-01 vol1-02 KSTATE ENABLED ENABLED ENABLED ENABLED ENABLED LENGTH 1767
198 Creating and administering disk groups Upgrading a disk group To recover a destroyed disk group 1 Enter the following command to find out the disk group ID (dgid) of one of the disks that was in the disk group: # vxdisk -s list disk_access_name The disk must be specified by its disk access name, such as c0t12d0. Examine the output from the command for a line similar to the following that specifies the disk group ID. dgid: 2 963504895.1075.
Creating and administering disk groups Upgrading a disk group
The table, “Disk group version assignments,” summarizes the Veritas Volume Manager releases that introduce and support specific disk group versions:
Table 4-1 Disk group version assignments
VxVM release   Introduces disk group version   Supports disk group versions
1.2            10                              10
1.3            15                              15
2.0            20                              20
2.2            30                              30
2.3            40                              40
2.5            50                              50
3.0            60                              20-40, 60
3.1            70                              20-70
3.1.1          80                              20-80
3.2, 3.5       90                              20-90
4.0            110                             20-110
4.
200 Creating and administering disk groups Upgrading a disk group Table 4-2 Features supported by disk group versions Disk group New features supported version Previous version features supported ■ Automatic Cluster-wide Failback for A/P arrays ■ DMP Co-existence with Third-Party Drivers ■ Migration of Volumes to ISP ■ Persistent DMP Policies ■ Shared Disk Group Failure Policy ■ Cross-platform Data Sharing (CDS) ■ Device Discovery Layer (DDL) 2.
Creating and administering disk groups Managing the configuration daemon in VxVM Table 4-2 Features supported by disk group versions Disk group New features supported version 20 ■ Dirty Region Logging (DRL) ■ Disk Group Configuration Copy Limiting ■ Mirrored Volumes Logging ■ New-Style Stripes ■ RAID-5 Volumes ■ Recovery Checkpointing Previous version features supported To list the version of a disk group, use this command: # vxdg list dgname You can also determine the disk group version b
202 Creating and administering disk groups Backing up and restoring disk group configuration data and modifies configuration information stored on disk. vxconfigd also initializes VxVM when the system is booted. The vxdctl command is the command-line interface to the vxconfigd daemon. You can use vxdctl to: ■ Control the operation of the vxconfigd daemon. ■ Change the system-wide definition of the default disk group. Note: In VxVM 4.
Creating and administering disk groups Using vxnotify to monitor configuration changes Using vxnotify to monitor configuration changes You can use the vxnotify utility to display events relating to disk and configuration changes that are managed by the vxconfigd configuration daemon. If vxnotify is running on a system where the VxVM clustering feature is active, it displays events that are related to changes in the cluster state of the system on which it is running.
204 Creating and administering disk groups Using vxnotify to monitor configuration changes
Chapter 5 Creating and administering subdisks This chapter describes how to create and maintain subdisks. Subdisks are the low-level building blocks in a Veritas Volume Manager (VxVM) configuration that are required to create plexes and volumes. Note: Most VxVM commands require superuser or equivalent privileges. Creating subdisks Note: Subdisks are created automatically if you use the vxassist command or the Veritas Enterprise Administrator (VEA) to create volumes.
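To create a subdisk manually, use the vxmake command. For example, a command of the following form (a sketch using the disk,offset,length syntax; see the vxmake(1M) manual page) creates an 8000-sector subdisk named mydg02-01 that starts at the beginning of disk mydg02 in the disk group mydg:
# vxmake -g mydg sd mydg02-01 mydg02,0,8000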
208 Creating and administering subdisks Displaying subdisk information Note: As for all VxVM commands, the default size unit is s, representing a sector. Add a suffix, such as k for kilobyte, m for megabyte or g for gigabyte, to change the unit of size. For example, 500m would represent 500 megabytes. If you intend to use the new subdisk to build a volume, you must associate the subdisk with a plex (see “Associating subdisks with plexes” on page 210).
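To display general information about the subdisks in a disk group, the vxprint command can be used with the -st options; for example:
# vxprint -g mydg -st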
Creating and administering subdisks Splitting subdisks # vxsd [-g diskgroup] mv old_subdisk new_subdisk [new_subdisk ...] For example, if mydg03 in the disk group, mydg, is to be evacuated, and mydg12 has enough room on two of its subdisks, use the following command: # vxsd -g mydg mv mydg03-01 mydg12-01 mydg12-02 For the subdisk move to work correctly, the following conditions must be met: ■ The subdisks involved must be the same size.
210 Creating and administering subdisks Associating subdisks with plexes For example, to join the contiguous subdisks mydg03-02, mydg03-03, mydg03-04 and mydg03-05 as subdisk mydg03-02 in the disk group, mydg, use the following command: # vxsd -g mydg join mydg03-02 mydg03-03 mydg03-04 mydg03-05 \ mydg03-02 Associating subdisks with plexes Associating a subdisk with a plex places the amount of disk space defined by the subdisk at a specific offset within the plex.
Creating and administering subdisks Associating log subdisks Note: The subdisk must be exactly the right size. VxVM does not allow the space defined for two subdisks to overlap within a plex. For striped or RAID-5 plexes, use the following command to specify a column number and column offset for the subdisk to be added: # vxsd [-g diskgroup] -l column_#/offset assoc plex subdisk ...
212 Creating and administering subdisks Dissociating subdisks from plexes # vxsd [-g diskgroup] aslog plex subdisk where subdisk is the name to be used for the log subdisk. The plex must be associated with a mirrored volume before dirty region logging takes effect.
Creating and administering subdisks Changing subdisk attributes Changing subdisk attributes Caution: Change subdisk attributes with extreme care. The vxedit command changes attributes of subdisks and other VxVM objects. To change subdisk attributes, use the following command: # vxedit [-g diskgroup] set attribute=value ... subdisk ...
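For example, a command such as the following (the comment text is illustrative) records a comment against the subdisk mydg02-01:
# vxedit -g mydg set comment="database subdisk" mydg02-01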
214 Creating and administering subdisks Changing subdisk attributes
Chapter 6 Creating and administering plexes This chapter describes how to create and maintain plexes. Plexes are logical groupings of subdisks that create an area of disk space independent of physical disk size or other restrictions. Replication (mirroring) of disk data is set up by creating multiple data plexes for a single volume. Each data plex in a mirrored volume contains an identical copy of the volume data.
216 Creating and administering plexes Creating a striped plex Creating a striped plex To create a striped plex, you must specify additional attributes. For example, to create a striped plex named pl-01 in the disk group, mydg, with a stripe width of 32 sectors and 2 columns, use the following command: # vxmake -g mydg plex pl-01 layout=stripe stwidth=32 ncolumn=2 \ sd=mydg01-01,mydg02-01 To use a plex to build a volume, you must associate the plex with the volume.
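For example, assuming that a volume named vol01 already exists in the disk group, a command such as the following attaches the new plex to it:
# vxplex -g mydg att vol01 pl-01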
Creating and administering plexes Displaying plex information VxVM utilities use plex states to: ■ indicate whether volume contents have been initialized to a known state ■ determine if a plex contains a valid copy (mirror) of the volume contents ■ track whether a plex was in active use at the time of a system failure ■ monitor operations on plexes This section explains the individual plex states in detail.
218 Creating and administering plexes Displaying plex information EMPTY plex state Volume creation sets all plexes associated with the volume to the EMPTY state to indicate that the plex is not yet initialized. IOFAIL plex state The IOFAIL plex state is associated with persistent state logging. When the vxconfigd daemon detects an uncorrectable I/O failure on an ACTIVE plex, it places the plex in the IOFAIL state to exclude it from the recovery selection process at volume start time.
Creating and administering plexes Displaying plex information SNAPTMP plex state The SNAPTMP plex state is used during a vxassist snapstart operation when a snapshot is being prepared on a volume. STALE plex state If there is a possibility that a plex does not have the complete and current volume contents, that plex is placed in the STALE state. Also, if an I/O error occurs on a plex, the kernel stops using and updating the contents of that plex, and the plex state is set to STALE.
220 Creating and administering plexes Displaying plex information IOFAIL plex condition The plex was detached as a result of an I/O failure detected during normal volume I/O. The plex is out-of-date with respect to the volume, and in need of complete recovery. However, this condition also indicates a likelihood that one of the disks in the system should be replaced. NODAREC plex condition No physical disk was found for one of the subdisks in the plex.
Creating and administering plexes Attaching and associating plexes DETACHED plex kernel state Maintenance is being performed on the plex. Any write request to the volume is not reflected in the plex. A read request from the volume is not satisfied from the plex. Plex operations and ioctl function calls are accepted. DISABLED plex kernel state The plex is offline and cannot be accessed. ENABLED plex kernel state The plex is online. A write request to the volume is reflected in the plex.
222 Creating and administering plexes Taking plexes offline Resolving a disk or system failure includes taking a volume offline and attaching and detaching its plexes. The two commands used to accomplish disk failure resolution are vxmend and vxplex.
Creating and administering plexes Detaching plexes Detaching plexes To temporarily detach one data plex in a mirrored volume, use the following command: # vxplex [-g diskgroup] det plex For example, to temporarily detach a plex named vol01-02 in the disk group, mydg, and place it in maintenance mode, use the following command: # vxplex -g mydg det vol01-02 This command temporarily detaches the plex, but maintains the association between the plex and its volume. However, the plex is not used for I/O.
224 Creating and administering plexes Moving plexes # vxmend [-g diskgroup] fix clean plex Start the volume using the following command: # vxvol [-g diskgroup] start volume Moving plexes Moving a plex copies the data content from the original plex onto a new plex. To move a plex, use the following command: # vxplex [-g diskgroup] mv original_plex new_plex For a move task to be successful, the following criteria must be met: ■ The old plex must be an active part of an active (ENABLED) volume.
Creating and administering plexes Copying plexes Copying plexes This task copies the contents of a volume onto a specified plex. The volume to be copied must not be enabled. The plex cannot be associated with any other volume. To copy a plex, use the following command: # vxplex [-g diskgroup] cp volume new_plex After the copy task is complete, new_plex is not associated with the specified volume volume. The plex contains a complete copy of the volume data.
226 Creating and administering plexes Changing plex attributes Alternatively, you can first dissociate the plex and subdisks, and then remove them with the following commands: # vxplex [-g diskgroup] dis plex # vxedit [-g diskgroup] -r rm plex When used together, these commands produce the same result as the vxplex -o rm dis command. The -r option to vxedit rm recursively removes all objects from the specified object downward.
Chapter 7 Creating volumes This chapter describes how to create volumes in Veritas Volume Manager (VxVM). Volumes are logical devices that appear as physical disk partition devices to data management systems. Volumes enhance recovery from hardware failure, data availability, performance, and storage configuration. Note: You can also use the Veritas Intelligent Storage Provisioning (ISP) feature to create and administer application volumes.
228 Creating volumes Types of volume layouts Types of volume layouts VxVM allows you to create volumes with the following layout types: Concatenated A volume whose subdisks are arranged both sequentially and contiguously within a plex. Concatenation allows a volume to be created from multiple regions of one or more disks if there is not enough space for an entire volume on a single region of a disk. For more information, see “Concatenation and spanning” on page 35.
Creating volumes Types of volume layouts Layered Volume A volume constructed from other volumes. Non-layered volumes are constructed by mapping their subdisks to VM disks. Layered volumes are constructed by mapping their subdisks to underlying volumes (known as storage volumes), and allow the creation of more complex forms of logical layout. Examples of layered volumes are striped-mirror and concatenatedmirror volumes. See “Layered volumes” on page 51.
230 Creating volumes Creating a volume ■ ■ “Creating a volume with a version 20 DCO volume” on page 242 for creating a volume with DRL configured within a version 20 DCO volume. RAID-5 logs are used to prevent corruption of data during recovery of RAID-5 volumes (see “RAID-5 logging” on page 50 for details). These logs are configured as plexes on disks other than those that are used for the columns of the RAID-5 volume.
Creating volumes Using vxassist Assisted approach The assisted approach takes information about what you want to accomplish and then performs the necessary underlying tasks. This approach requires only minimal input from you, but also permits more detailed specifications. Assisted operations are performed primarily through the vxassist command or the Veritas Enterprise Administrator (VEA).
232 Creating volumes Using vxassist vxassist obtains most of the information it needs from sources other than your input. vxassist obtains information about the existing objects and their layouts from the objects themselves. For tasks requiring new disk space, vxassist seeks out available disk space and allocates it in the configuration that conforms to the layout specifications and that offers the best use of free space.
Creating volumes Using vxassist The format of entries in a defaults file is a list of attribute-value pairs separated by new lines. These attribute-value pairs are the same as those specified as options on the vxassist command line. Refer to the vxassist(1M) manual page for details.
234 Creating volumes Discovering the maximum size of a volume Discovering the maximum size of a volume To find out how large a volume you can create within a disk group, use the following form of the vxassist command: # vxassist [-g diskgroup] maxsize layout=layout [attributes] For example, to discover the maximum size RAID-5 volume with 5 columns and 2 logs that you can create within the disk group, dgrp, enter the following command: # vxassist -g dgrp maxsize layout=raid5 nlog=2 You can use storage att
Creating volumes Creating a volume on specific disks Note: To change the default layout, edit the definition of the layout attribute defined in the /etc/default/vxassist file. If there is not enough space on a single disk, vxassist creates a spanned volume. A spanned volume is a concatenated volume with sections of disk space spread across more than one disk. A spanned volume can be larger than any disk on a system, since it takes space from more than one disk.
236 Creating volumes Creating a volume on specific disks # vxassist -b make volmega 20g diskgroup=bigone bigone10 \ bigone11 Note: Any storage attributes that you specify for use must belong to the disk group. Otherwise, vxassist will not use them to create a volume. You can also use storage attributes to control how vxassist uses available storage, for example, when calculating the maximum size of a volume, when growing a volume or when removing mirrors or logs from a volume.
Creating volumes Creating a volume on specific disks Figure 7-1 Example of using ordered allocation to create a mirrored-stripe volume Column 1 Column 2 Column 3 mydg01-01 mydg02-01 mydg03-01 Striped plex Mirror Column 1 Column 2 Column 3 mydg04-01 mydg05-01 mydg06-01 Striped plex Mirrored-stripe volume For layered volumes, vxassist applies the same rules to allocate storage as for nonlayered volumes.
238 Creating volumes Creating a volume on specific disks mirrors of these columns are then similarly formed from disks mydg05 through mydg08. This arrangement is illustrated in Figure 7-3.
Creating volumes Creating a volume on specific disks Figure 7-4 Example of storage allocation used to create a mirrored-stripe volume across controllers c1 c2 c3 Column 1 Column 2 Column 3 Controllers Striped plex Mirror Column 1 Column 2 Column 3 Striped plex Mirrored-stripe volume c4 c5 c6 Controllers For other ways in which you can control how vxassist lays out mirrored volumes across controllers, see “Mirroring across targets, controllers or enclosures” on page 245.
240 Creating volumes Creating a mirrored volume Creating a mirrored volume A mirrored volume provides data redundancy by containing more than one copy of its data. Each copy (or mirror) is stored on different disks from the original copy of the volume and from other mirrors. Mirroring a volume ensures that its data is not lost if a disk in one of its component mirrors fails.
Creating volumes Creating a volume with a version 0 DCO volume Creating a volume with a version 0 DCO volume If a data change object (DCO) and DCO volume are associated with a volume, this allows Persistent FastResync to be used with the volume. (See “How persistent FastResync works with snapshots” on page 70 for details of how Persistent FastResync performs fast resynchronization of snapshot mirrors when they are returned to their original volume.
242 Creating volumes Creating a volume with a version 20 DCO volume 2 Use the following command to create the volume (you may need to specify additional attributes to create a volume with the desired characteristics): # vxassist [-g diskgroup] make volume length layout=layout \ logtype=dco [ndcomirror=number] [dcolen=size] \ [fastresync=on] [other attributes] For non-layered volumes, the default number of plexes in the mirrored DCO volume is equal to the lesser of the number of plexes in the data volume
Creating volumes Creating a volume with dirty region logging enabled # vxdg upgrade diskgroup For more information, see “Upgrading a disk group” on page 200.
244 Creating volumes Creating a striped volume For example, to create a mirrored 10GB volume, vol02, with two log plexes in the disk group, mydg, use the following command: # vxassist -g mydg make vol02 10g layout=mirror logtype=drl \ nlog=2 nmirror=2 Sequential DRL limits the number of dirty regions for volumes that are written to sequentially, such as database replay logs.
Creating volumes Creating a striped volume # vxassist -b -g mydg make stripevol 30g layout=stripe \ stripeunit=32k ncol=5 Creating a mirrored-stripe volume A mirrored-stripe volume mirrors several striped data plexes. Note: A mirrored-stripe volume requires space to be available on at least as many disks in the disk group as the number of mirrors multiplied by the number of columns in the volume.
246 Creating volumes Mirroring across targets, controllers or enclosures Mirroring across targets, controllers or enclosures To create a volume whose mirrored data plexes lie on different controllers (also known as disk duplexing) or in different enclosures, use the vxassist command as described in this section. In the following command, the attribute mirror=target specifies that volumes should be mirrored between identical target IDs on different controllers.
Creating volumes Creating a RAID-5 volume Creating a RAID-5 volume Note: VxVM supports this feature for private disk groups, but not for shareable disk groups in a cluster environment. A RAID-5 volume requires space to be available on at least as many disks in the disk group as the number of columns in the volume. Additional disks may be required for any RAID-5 logs that are created. You need a full license to use this feature.
248 Creating volumes Creating tagged volumes plexes for each RAID-5 volume protects against the loss of logging information due to the failure of a single disk. If you use ordered allocation when creating a RAID-5 volume on specified storage, you must use the logdisk attribute to specify on which disks the RAID-5 log plexes should be created.
Creating volumes Creating a volume using vxmake The tag names site, udid and vdid are reserved and should not be used. To avoid possible clashes with future product features, it is recommended that tag names do not start with any of the following strings: asl, be, isp, nbu, sf, symc or vx. See “Setting tags on volumes” on page 280.
250 Creating volumes Creating a volume using vxmake Log plexes may be created as default concatenated plexes by not specifying a layout, for example: # vxmake -g mydg plex raidlog1 sd=mydg06-00 # vxmake -g mydg plex raidlog2 sd=mydg07-00 The following command creates a RAID-5 volume, and associates the prepared RAID-5 plex and RAID-5 log plexes with it: # vxmake -g mydg -Uraid5 vol raidvol \ plex=raidplex,raidlog1,raidlog2 Note: Each RAID-5 volume has one RAID-5 plex where the data and parity are stored.
Creating volumes Initializing and starting a volume Note: The subdisk definition for plex, db-01, must be specified on a single line. It is shown here split across two lines because of space constraints. The first plex, db-01, is striped and has five subdisks on two physical disks, mydg03 and mydg04. The second plex, db-02, is the preferred plex in the mirror, and has one subdisk, ramd1-01, on a volatile memory disk. For detailed information about how to use vxmake, refer to the vxmake(1M) manual page.
252 Creating volumes Accessing a volume Initializing and starting a volume created using vxmake A volume may be initialized by running the vxvol command if the volume was created by the vxmake command and has not yet been initialized, or if the volume has been set to an uninitialized state.
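For example, a command of the following form (the init zero keyword; other initialization methods are described in the vxvol(1M) manual page) initializes the volume vol01 by zeroing out its plexes:
# vxvol -g mydg init zero vol01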
Chapter 8 Administering volumes This chapter describes how to perform common maintenance tasks on volumes in Veritas Volume Manager (VxVM). This includes displaying volume information, monitoring tasks, adding and removing logs, resizing volumes, removing mirrors, removing volumes, and changing the layout of volumes without taking them offline. Note: You can also use the Veritas Intelligent Storage Provisioning (ISP) feature to create and administer application volumes.
256 Administering volumes Displaying volume information Displaying volume information You can use the vxprint command to display information about how a volume is configured.
Administering volumes Displaying volume information Note: If you enable enclosure-based naming, and use the vxprint command to display the structure of a volume, it shows enclosure-based disk device names (disk access names) rather than c#t#d# names. See “Discovering the association between enclosure-based disk names and OS-based disk names” on page 94 for information on how to obtain the true device names. The following section describes the meaning of the various volume states that may be displayed.
258 Administering volumes Displaying volume information REPLAY volume state The volume is in a transient state as part of a log replay. A log replay occurs when it becomes necessary to use logged parity and data. This state is only applied to RAID-5 volumes. SYNC volume state The volume is either in read-writeback recovery mode (kernel state is currently ENABLED) or was in read-writeback mode when the machine was rebooted (kernel state is DISABLED).
Administering volumes Monitoring and controlling tasks DISABLED volume kernel state The volume is offline and cannot be accessed. ENABLED volume kernel state The volume is online and can be read from or written to. Monitoring and controlling tasks Note: VxVM supports this feature for private disk groups, but not for shareable disk groups in a cluster environment. The VxVM task monitor tracks the progress of system recovery by monitoring task creation, maintenance, and completion.
260 Administering volumes Monitoring and controlling tasks Managing tasks with vxtask Note: New tasks take time to be set up, and so may not be immediately available for use after a command is invoked. Any script that operates on tasks may need to poll for the existence of a new task. You can use the vxtask command to administer operations on VxVM tasks that are running on the system.
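For example, the following commands (the task ID shown is illustrative) list the tasks running on the system, monitor the progress of task 167, and pause and then resume it:
# vxtask list
# vxtask monitor 167
# vxtask pause 167
# vxtask resume 167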
Administering volumes Stopping a volume is specified, the slower is the progress of the task and the fewer system resources that it consumes in a given time. (The slow attribute is also accepted by the vxplex, vxvol and vxrecover commands.
262 Administering volumes Starting a volume If you need to prevent a closed volume from being opened, it is recommended that you use the vxvol maint command, as described in the following section. Putting a volume in maintenance mode If all mirrors of a volume become STALE, you can place the volume in maintenance mode. Then you can view the plexes while the volume is DETACHED and determine which plex to use for reviving the others.
Administering volumes Adding a mirror to a volume Adding a mirror to a volume A mirror can be added to an existing volume with the vxassist command, as follows: # vxassist [-b] [-g diskgroup] mirror volume Note: If specified, the -b option makes synchronizing the new mirror a background task.
264 Administering volumes Adding a mirror to a volume 2 Select menu item 5 (Mirror volumes on a disk) from the vxdiskadm main menu. 3 At the following prompt, enter the disk name of the disk that you wish to mirror: Mirror volumes on a disk Menu: VolumeManager/Disk/Mirror This operation can be used to mirror volumes on a disk. These volumes can be mirrored onto another disk or onto any available disk space. Volumes will not be mirrored if they are already mirrored.
Administering volumes Removing a mirror Removing a mirror When a mirror is no longer needed, you can remove it to free up disk space. Note: The last valid plex associated with a volume cannot be removed. To remove a mirror from a volume, use the following command: # vxassist [-g diskgroup] remove mirror volume Additionally, you can use storage attributes to specify the storage to be removed.
266 Administering volumes Preparing a volume for DRL and instant snapshots See “Enabling FastResync on a volume” on page 282 for information on how to enable Persistent or Non-Persistent FastResync on a volume. ■ ■ Dirty Region Logs allow the fast recovery of mirrored volumes after a system crash (see “Dirty region logging” on page 60 for details). These logs are supported either as DRL log plexes, or as part of a version 20 DCO volume.
Administering volumes Preparing a volume for DRL and instant snapshots snapshot volume that you subsequently create from the snapshot plexes. For example, specify ndcomirs=5 for a volume with 3 data plexes and 2 snapshot plexes. The value of the regionsize attribute specifies the size of the tracked regions in the volume. A write to a region is tracked by setting a bit in the change map. The default value is 64k (64KB).
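For example, a command of the following form prepares the volume vol1 with two DCO plexes and a region size of 128KB:
# vxsnap -g mydg prepare vol1 ndcomirs=2 regionsize=128k drl=on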
268 Administering volumes Preparing a volume for DRL and instant snapshots sd pl sd dc v pl sd pl sd disk01-01 foo-02 disk02-01 vol1_dco vol1_dcl vol1_dcl-01 disk03-01 vol1_dcl-02 disk04-01 vol1-01 ENABLED vol1 ENABLED vol1-02 ENABLED vol1 gen ENABLED vol1_dcl ENABLED vol1_dcl-01 ENABLED vol1_dcl ENABLED vol1_dcl-02 ENABLED 1024 1024 1024 132 132 132 132 132 0 0 0 0 ACTIVE ACTIVE ACTIVE ACTIVE - In this output, the DCO object is shown as vol1_dco, and the DCO volume as vol1_dcl with 2 plexes, vol1_dc
Administering volumes Preparing a volume for DRL and instant snapshots Determining the DCO version number The instant snapshot and DRL-enabled DCO features require that a version 20 DCO be associated with a volume, rather than an earlier version 0 DCO.
270 Administering volumes Upgrading existing volumes to use version 20 DCOs # DCONAME=‘vxprint [-g diskgroup] -F%dco_name volume‘ # DCOVOL=‘vxprint [-g diskgroup] -F%parent_vol $DCONAME‘ 2 Use the vxprint command on the DCO volume to find out if DRL logging is active: # vxprint [-g diskgroup] -F%drllogging $DCOVOL This command returns on if DRL logging is enabled.
Administering volumes Upgrading existing volumes to use version 20 DCOs Note: The plexes of the DCO volume require persistent storage space on disk to be available. To make room for the DCO plexes, you may need to add extra disks to the disk group, or reconfigure existing volumes to free up space in the disk group. Another way to add disk space is to use the disk group move feature to bring in spare disks from a different disk group.
272 Administering volumes Adding traditional DRL logging to a mirrored volume 6 Use the following command to dissociate a version 0 DCO object, DCO volume and snap objects from the volume: # vxassist [-g diskgroup] remove log volume logtype=dco 7 Use the following command on the volume to upgrade it: # vxsnap [-g diskgroup] prepare volume [ndcomirs=number] \ [regionsize=size] [drl=on|sequential|off] \ [storage_attribute ...
Administering volumes Adding traditional DRL logging to a mirrored volume To add DRL logs to an existing volume, use the following command: # vxassist [-b] [-g diskgroup] addlog volume logtype=drl \ [nlog=n] [loglen=size] Note: If specified, the -b option makes adding the new logs a background task. The nlog attribute can be used to specify the number of log plexes to add. By default, one log plex is added.
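For example, the following command adds two DRL log plexes to the volume vol03 in the disk group mydg:
# vxassist -g mydg addlog vol03 logtype=drl nlog=2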
274 Administering volumes Adding a RAID-5 log Adding a RAID-5 log Note: You need a full license to use this feature. Only one RAID-5 plex can exist per RAID-5 volume. Any additional plexes become RAID-5 log plexes, which are used to log information about data and parity being written to the volume. When a RAID-5 volume is created using the vxassist command, a log plex is created for that volume by default.
Administering volumes Resizing a volume # vxprint [-g diskgroup] -ht volume where volume is the name of the RAID-5 volume. For a RAID-5 log, the output lists a plex with a STATE field entry of LOG.
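If no log exists, one can be added. For example, assuming a RAID-5 volume named volraid in the disk group mydg, a command such as the following adds a RAID-5 log plex to it:
# vxassist -g mydg addlog volraid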
276 Administering volumes Resizing a volume Caution: If you use vxassist or vxvol to resize a volume, do not shrink it below the size of the file system which is located on it. If you do not shrink the file system first, you risk unrecoverable data loss. If you have a VxFS file system, shrink the file system first, and then shrink the volume. Other file systems may require you to back up your data so that you can later recreate the file system and restore its data.
Administering volumes Resizing a volume VxVM vxresize ERROR V-5-1-2536 Volume volume has different organization in each mirror To resize such a volume successfully, you must first reconfigure it so that each data plex has the same layout. For more information about the vxresize command, see the vxresize(1M) manual page. Resizing volumes using vxassist The following modifiers are used with the vxassist command to resize a volume: growto Increase volume to a specified length.
278 Administering volumes Resizing a volume Note: If you previously performed a relayout on the volume, additionally specify the attribute layout=nodiskalign to the growby command if you want the subdisks to be grown using contiguous disk space.
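For example, a command such as the following grows the volume vol04 in the disk group mydg by 100MB:
# vxassist -g mydg growby vol04 100m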
Administering volumes Setting tags on volumes If a volume is active and its length is being reduced, the operation must be forced using the -o force option to vxvol. This prevents accidental removal of space from applications using the volume. The length of logs can also be changed using the following command: # vxvol [-g diskgroup] set loglen=length log_volume Note: Sparse log plexes are not valid. They must map the entire length of the log.
280 Administering volumes Changing the read policy for mirrored volumes Changing the read policy for mirrored volumes VxVM offers the choice of the following read policies on the data plexes in a mirrored volume: round Reads each plex in turn in “round-robin” fashion for each nonsequential I/O detected. Sequential access causes only one plex to be accessed. This takes advantage of the drive or controller readahead caching policies.
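For example, the following commands (the plex name is illustrative) set the read policy for the volume vol01 to prefer the plex vol01-02, and then return it to the default round-robin policy:
# vxvol -g mydg rdpol prefer vol01 vol01-02
# vxvol -g mydg rdpol round vol01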
Administering volumes Removing a volume Removing a volume Once a volume is no longer necessary (it is inactive and its contents have been archived, for example), it is possible to remove the volume and free up the disk space for other uses. To stop all activity on a volume before removing it 1 Remove all references to the volume by application programs, including shells, that are running on the system.
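Once all activity has stopped, either of the following commands (a sketch; the volume name is illustrative) removes the volume myvol from the disk group mydg:
# vxassist -g mydg remove volume myvol
# vxedit -g mydg -rf rm myvol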
282 Administering volumes Enabling FastResync on a volume NOTE: Simply moving volumes off of a disk, without also removing the disk, does not prevent volumes from being moved onto the disk by future operations. For example, using two consecutive move operations may move volumes from the second disk to the first.
Administering volumes Enabling FastResync on a volume such as backup and decision support. See “Administering volume snapshots” on page 295 and “FastResync” on page 66 for more information. There are two possible versions of FastResync that can be enabled on a volume: ■ Persistent FastResync holds copies of the FastResync maps on disk. These can be used for the speedy recovery of mirrored volumes if a system is rebooted.
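FastResync is enabled on an existing volume by setting its fastresync attribute; for example:
# vxvol -g mydg set fastresync=on vol1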
284 Administering volumes Performing online relayout
To list all volumes on which Non-Persistent FastResync is enabled, use the following command:
# vxprint [-g diskgroup] -F "%name" \
  -e "v_fastresync=on && !v_hasdcolog"
To list all volumes on which Persistent FastResync is enabled, use the following command:
# vxprint [-g diskgroup] -F "%name" -e "v_fastresync=on \
  && v_hasdcolog"
Disabling FastResync
Use the vxvol command to turn off Persistent or Non-Persistent FastResync for an existing volume, as s
Administering volumes Performing online relayout On occasions, it may be necessary to perform a relayout on a plex rather than on a volume. See “Specifying a plex for relayout” on page 289 for more information.
286 Administering volumes Performing online relayout Permitted relayout transformations The tables below give details of the relayout operations that are possible for each type of source storage layout. Table 8-2 Supported relayout transformations for concatenated volumes Relayout to From concat concat No. concat-mirror No. Add a mirror, and then use vxassist convert instead. mirror-concat No. Add a mirror instead. mirror-stripe No.
Administering volumes Performing online relayout Table 8-4 Supported relayout transformations for RAID-5 volumes Relayout to From raid5 concat Yes. concat-mirror Yes. mirror-concat No. Use vxassist convert after relayout to concatenated-mirror volume instead. mirror-stripe No. Use vxassist convert after relayout to striped-mirror volume instead. raid5 Yes. The stripe width and number of columns may be changed. stripe Yes. The stripe width and number of columns may be changed.
288 Administering volumes Performing online relayout
Table 8-6 Supported relayout transformations for mirrored-stripe volumes (relayout from mirror-stripe)
Relayout to concat: Yes.
Relayout to concat-mirror: Yes.
Relayout to mirror-concat: No. Use vxassist convert after relayout to concatenated-mirror volume instead.
Relayout to mirror-stripe: No. Use vxassist convert after relayout to striped-mirror volume instead.
Relayout to raid5: Yes. The stripe width and number of columns may be changed.
Relayout to stripe: Yes.
Administering volumes Performing online relayout Specifying a non-default layout You can specify one or more relayout options to change the default layout configuration. Examples of these options are: ncol=number Specifies the number of columns. ncol=+number Specifies the number of columns to add. ncol=-number Specifies the number of columns to remove. stripeunit=size Specifies the stripe width. See the vxassist(1M) manual page for more information about relayout options.
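As a hedged illustration of these options (the volume name, disk group, and sizes are placeholders), a relayout to a two-column striped layout with a 64k stripe width, or the addition of a column to an existing striped volume, might look like this:
# vxassist -g mydg relayout vol03 layout=stripe ncol=2 stripeunit=64k
# vxassist -g mydg relayout vol03 ncol=+1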
290 Administering volumes Performing online relayout Viewing the status of a relayout Online relayout operations take some time to perform. You can use the vxrelayout command to obtain information about the status of a relayout operation. For example, the command: # vxrelayout -g mydg status vol04 might display output similar to this: STRIPED, columns=5, stwidth=128 --> STRIPED, columns=6, stwidth=128 Relayout running, 68.58% completed.
Administering volumes Converting between layered and non-layered volumes The default delay and region size values are 250 milliseconds and 1 megabyte respectively. To reverse the direction of a relayout operation that is currently stopped, specify the reverse keyword to vxrelayout as shown in this example: # vxrelayout -g mydg -o bg reverse vol04 This undoes changes made to the volume so far, and returns it to its original layout.
292 Administering volumes Converting between layered and non-layered volumes Note: If the system crashes during relayout or conversion, the process continues when the system is rebooted. However, if the crash occurred during the first stage of a two-stage relayout and convert operation, only the first stage will be completed. You must run vxassist convert manually to complete the operation.
Chapter 9 Administering volume snapshots Veritas Volume Manager (VxVM) provides the capability for taking an image of a volume at a given point in time. Such an image is referred to as a volume snapshot. You can also take a snapshot of a volume set as described in “Creating instant snapshots of volume sets” on page 324. Volume snapshots allow you to make backup copies of your volumes online with minimal interruption to users.
296 Administering volume snapshots Note: A volume snapshot represents the data that exists in a volume at a given point in time. As such, VxVM does not have any knowledge of data that is cached by the overlying file system, or by applications such as databases that have files open in the file system. If the fsgen volume usage type is set on a volume that contains a Veritas File System (VxFS), intent logging of the file system metadata ensures the internal consistency of the file system that is backed up.
Administering volume snapshots Traditional third-mirror break-off snapshots Traditional third-mirror break-off snapshots The traditional third-mirror break-off volume snapshot model that is supported by the vxassist command is shown in Figure 9-1. This also shows the transitions that are supported by the snapback and snapclear commands to vxassist.
298 Administering volume snapshots Traditional third-mirror break-off snapshots the snapshot. If more than one snapshot mirror is used, the snapshot volume is itself mirrored. The command, vxassist snapback, can be used to return snapshot plexes to the original volume from which they were snapped, and to resynchronize the data in the snapshot mirrors from the data in the original volume. This enables you to refresh the data in a snapshot after each time that you use it to make a backup.
Administering volume snapshots Full-sized instant snapshots Full-sized instant snapshots Full-sized instant snapshots are a variation on the third-mirror volume snapshot model that make a snapshot volume available for access as soon as the snapshot plexes have been created. The full-sized instant volume snapshot model is illustrated in Figure 9-2.
300 Administering volume snapshots Full-sized instant snapshots to move the snapshot volume into a separate disk group for off-host processing, or you want to use the vxsnap dis or vxsnap split commands to turn the snapshot volume into an independent volume. The vxsnap refresh command allows you to update the data in a snapshot each time that you make a backup.
Administering volume snapshots Space-optimized instant snapshots Space-optimized instant snapshots Volume snapshots, such as those described in “Traditional third-mirror break-off snapshots” on page 297 and “Full-sized instant snapshots” on page 299, require the creation of a complete copy of the original volume, and use as much storage space as the original volume. Instead of requiring a complete copy of the original volume’s storage space, space-optimized instant snapshots use a storage cache.
302 Administering volume snapshots Emulation of third-mirror break-off snapshots space-optimized snapshots, reattach them to their original volume, or turn them into independent volumes. See “Creating and managing space-optimized instant snapshots” on page 315 for details of the procedures for creating and using this type of snapshot. For information about how to set up a cache for use by space-optimized instant snapshots, see “Creating a shared cache object” on page 312.
Administering volume snapshots Linked break-off snapshot volumes For information about how to add snapshot mirrors to a volume, see “Adding snapshot mirrors to a volume” on page 325. Linked break-off snapshot volumes A variant of the third-mirror break-off snapshot type is the linked break-off snapshot volume, which uses the vxsnap addmir command to link a specially prepared volume with the data volume. The volume that is used for the snapshot is prepared in the same way as for full-sized instant snapshots.
304 Administering volume snapshots Cascaded snapshots is complete. The vxsnap snapwait command can be used to wait for the state to become ACTIVE. When you use the vxsnap make command to create the snapshot volume, this removes the link, and establishes a snapshot relationship between the snapshot volume and the original volume. The vxsnap reattach operation re-establishes the link relationship between the two volumes, and starts a resynchronization of the mirror volume.
Administering volume snapshots Cascaded snapshots The following points determine whether it is appropriate for an application to use a snapshot cascade: ■ Deletion of a snapshot in the cascade takes time to copy the snapshot’s data to the next snapshot in the cascade. ■ The reliability of a snapshot in the cascade depends on all the newer snapshots in the chain. Thus the oldest snapshot in the cascade is the most vulnerable.
306 Administering volume snapshots Cascaded snapshots snapshot volume, S2, can be used to restore the original volume if that volume becomes corrupted. For a database, you might need to replay a redo log on S2 before you could use it to restore V. These steps are illustrated in Figure 9-6 (Using a snapshot of a snapshot to restore a database; step 1 creates instant snapshot S1 of volume V with vxsnap make source=V).
Administering volume snapshots Cascaded snapshots Figure 9-7 Dissociating a snapshot volume. The figure shows vxsnap dis applied to snapshot S2, which has no snapshots of its own: S2 becomes an independent volume, while S1 remains a snapshot owned by V. It also shows vxsnap dis applied to snapshot S1, which has one snapshot, S2.
308 Administering volume snapshots Creating multiple snapshots Figure 9-8 Splitting snapshots. The figure shows vxsnap split applied to snapshot S1: S1 becomes an independent volume, while S2 continues to be a snapshot of S1. Creating multiple snapshots To make it easier to create snapshots of several volumes at the same time, both the vxsnap make and vxassist snapshot commands accept more than one volume name as their argument.
Administering volume snapshots Restoring the original volume from a snapshot Figure 9-9 Refresh on snapback: resynchronizing an original volume from a snapshot. The snapshot operation creates the snapshot volume from a snapshot mirror of the original volume; a snapback that specifies -o resyncfromreplica resynchronizes the original volume from the snapshot volume. Note: The original volume must not be in use during a snapback operation that specifies the option -o resyncfromreplica to resynchronize the volume from a snapshot.
310 Administering volume snapshots Creating instant snapshots Creating instant snapshots Note: You need a full license and a Veritas FlashSnap license to use this feature. VxVM allows you to make instant snapshots of volumes by using the vxsnap command. Note: The information in this section also applies to RAID-5 volumes that have been converted to a special layered volume layout by the addition of a DCO and DCO volume. See “Using a DCO and DCO volume with a RAID-5 volume” on page 269 for details.
Administering volume snapshots Creating instant snapshots Note: When using the vxsnap prepare or vxassist make commands to make a volume ready for instant snapshot operations, if the specified region size exceeds half the value of the tunable voliomem_maxpool_sz (see “voliomem_maxpool_sz” on page 469), the operation succeeds but gives a warning such as the following (for a system where voliomem_maxpool_sz is set to 12MB): VxVM vxassist WARNING V-5-1-0 Specified regionsize is larger than the limit on the sy
312 Administering volume snapshots Creating instant snapshots 2 To prepare a volume for instant snapshots, use the following command: # vxsnap [-g diskgroup] prepare volume [regionsize=size] \ [ndcomirs=number] [alloc=storage_attributes] Note: It is only necessary to run the vxsnap prepare command on a volume if it does not already have a version 20 DCO volume (for example, if you have run the vxsnap unprepare command on the volume).
Administering volume snapshots Creating instant snapshots ■ If the cache volume is mirrored, space is required on at least as many disks as it has mirrors. These disks should not be shared with the disks used for the parent volumes. The disks should also be chosen to avoid impacting I/O performance for critical volumes, or hindering disk group split and join operations. 2 Having decided on its characteristics, use the vxassist command to create the volume that is to be used for the cache volume.
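As a sketch of this and the following steps (the names, size, and attribute values below are illustrative, not prescriptive), the cache volume is created with vxassist, a cache object is configured on top of it with vxmake, and the cache is then started with vxcache:
# vxassist -g mydg make cachevol 1g layout=mirror init=active mydg16 mydg17
# vxmake -g mydg cache cobjmydg cachevolname=cachevol regionsize=32k autogrow=on
# vxcache -g mydg start cobjmydg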
314 Administering volume snapshots Creating instant snapshots Creating a volume for use as a full-sized instant or linked break-off snapshot To create an empty volume for use by a full-sized instant snapshot or a linked break-off snapshot 1 Use the vxprint command on the original volume to find the required size for the snapshot volume. # LEN=`vxprint [-g diskgroup] -F%len volume` Note: The command shown in this and subsequent steps assumes that you are using a Bourne-type shell such as sh, ksh or bash.
Administering volume snapshots Creating instant snapshots Creating and managing space-optimized instant snapshots Note: Space-optimized instant snapshots are not suitable for write-intensive volumes (such as for database redo logs) because the copy-on-write mechanism may degrade the performance of the volume.
316 Administering volume snapshots Creating instant snapshots ◆ To create a space-optimized instant snapshot, snapvol, and also create a cache object for it to use: # vxsnap [-g diskgroup] make source=vol/newvol=snapvol\ [/cachesize=size][/autogrow=yes][/ncachemirror=number]\ [alloc=storage_attributes] The cachesize attribute determines the size of the cache relative to the size of the volume.
Administering volume snapshots Creating instant snapshots snapshot having to be resynchronized. See “Refreshing an instant snapshot” on page 327 for details. ■ Restore the contents of the original volume from the snapshot volume. The space-optimized instant snapshot remains intact at the end of the operation. See “Restoring a volume from an instant snapshot” on page 329 for details.
318 Administering volume snapshots Creating instant snapshots # vxsnap -g mydg syncwait snap2myvol This command exits (with a return code of zero) when synchronization of the snapshot volume is complete. The snapshot volume may then be moved to another disk group or turned into an independent volume.
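For example, the synchronized snapshot might then be split off into its own disk group, or dissociated so that it becomes an independent volume; a hedged sketch, where the target disk group name snapdg is illustrative:
# vxdg split mydg snapdg snap2myvol
# vxsnap -g mydg dis snap2myvol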
Administering volume snapshots Creating instant snapshots ■ Dissociate the snapshot volume entirely from the original volume. This may be useful if you want to use the copy for other purposes such as testing or report generation. If desired, you can delete the dissociated volume. See “Dissociating an instant snapshot” on page 330 for details. ■ If the snapshot is part of a snapshot hierarchy, you can also choose to split this hierarchy from its parent volumes.
320 Administering volume snapshots Creating instant snapshots If you specify the -b option to the vxsnap addmir command, you can use the vxsnap snapwait command to wait for synchronization of the snapshot plexes to complete, as shown in this example: # vxsnap -g mydg snapwait vol1 nmirror=2 2 To create a third-mirror break-off snapshot, use the following form of the vxsnap make command. # vxsnap [-g diskgroup] make source=volume[/newvol=snapvol]\ {/plex=plex1[,plex2,...
Administering volume snapshots Creating instant snapshots ■ Reattach some or all of the plexes of the snapshot volume with the original volume. See “Reattaching an instant snapshot” on page 327 for details. ■ Restore the contents of the original volume from the snapshot volume. You can choose whether none, a subset, or all of the plexes of the snapshot volume are returned to the original volume as a result of the operation. See “Restoring a volume from an instant snapshot” on page 329 for details.
322 Administering volume snapshots Creating instant snapshots # vxsnap -g mydg -b addmir vol1 mirvol=prepsnap \ mirdg=mysnapdg If the -b option is specified, you can use the vxsnap snapwait command to wait for the synchronization of the linked snapshot volume to complete, as shown in this example: # vxsnap -g mydg snapwait vol1 mirvol=prepsnap \ mirdg=mysnapvoldg 2 To create a linked break-off snapshot, use the following form of the vxsnap make command.
Administering volume snapshots Creating instant snapshots ■ If the snapshot is part of a snapshot hierarchy, you can also choose to split this hierarchy from its parent volumes. See “Splitting an instant snapshot hierarchy” on page 331 for details. Creating multiple instant snapshots To make it easier to create snapshots of several volumes at the same time, the vxsnap make command accepts multiple tuples that define the source and snapshot volumes names as their arguments.
324 Administering volume snapshots Creating instant snapshots disk group. Also note that break-off snapshots are used for the redo logs as such volumes are write intensive. Creating instant snapshots of volume sets Volume set names can be used in place of volume names with the following vxsnap operations on instant snapshots: addmir, dis, make, prepare, reattach, refresh, restore, rmmir, split, syncpause, syncresume, syncstart, syncstop, syncwait, and unprepare.
Administering volume snapshots Creating instant snapshots achieve this by using the vxsnap command to add the required number of plexes before breaking off the snapshot: # vxsnap -g mydg prepare vset2 # vxsnap -g mydg addmir vset2 nmirror=1 # vxsnap -g mydg make source=vset2/newvol=snapvset2/nmirror=1 See “Adding snapshot mirrors to a volume” on page 325 for more information about adding plexes to volumes or to volume sets.
326 Administering volume snapshots Creating instant snapshots Note: This command is similar in usage to the vxassist snapstart command, and supports the traditional third-mirror break-off snapshot model. As such, it does not provide an instant snapshot capability. Once you have added one or more snapshot mirrors to a volume, you can use the vxsnap make command with either the nmirror attribute or the plex attribute to create the snapshot volumes.
Administering volume snapshots Creating instant snapshots For more information on the application of cascaded snapshots, see “Cascaded snapshots” on page 304. Refreshing an instant snapshot Refreshing an instant snapshot replaces it with another point-in-time copy of a parent volume. To refresh one or more snapshots and make them immediately available for use, use the following command: # vxsnap [-g diskgroup] refresh snapvolume|snapvolume_set \ source=volume|volume_set [[snapvol2 source=vol2]...
328 Administering volume snapshots Creating instant snapshots Note: The snapshot being reattached must not be open to any application. For example, any file system configured on the snapshot volume must first be unmounted. It is possible to reattach a volume to an unrelated volume provided that their volume sizes and region sizes are compatible.
Administering volume snapshots Creating instant snapshots Note: The snapshot being reattached must not be open to any application. For example, any file system configured on the snapshot volume must first be unmounted. It is possible to reattach a volume to an unrelated volume provided that their sizes and region sizes are compatible.
330 Administering volume snapshots Creating instant snapshots The following example demonstrates how to restore the volume, myvol, from the space-optimized snapshot, snap3myvol.
Administering volume snapshots Creating instant snapshots Splitting an instant snapshot hierarchy Note: This operation is not supported for space-optimized instant snapshots.
332 Administering volume snapshots Creating instant snapshots snapshot were fully synchronized, the %VALID value would be 100%. The snapshot could then be made independent or moved into another disk group.
Administering volume snapshots Creating instant snapshots The -x option expands the output to include the component volumes of volume sets. See the vxsnap(1M) manual page for more information about using the vxsnap print and vxsnap list commands. Controlling instant snapshot synchronization Note: Synchronization of the contents of a snapshot with its original volume is not possible for space-optimized instant snapshots.
334 Administering volume snapshots Creating instant snapshots break-off snapshot volumes” on page 321, “Reattaching an instant snapshot” on page 327 and “Reattaching a linked break-off snapshot volume” on page 328 for details.
Administering volume snapshots Creating instant snapshots ■ When cache usage reaches the high watermark value, highwatermark (default value is 90 percent), vxcached grows the size of the cache volume by the value of autogrowby (default value is 20% of the size of the cache volume in blocks). The new required cache size cannot exceed the value of maxautogrow (default value is twice the size of the cache volume in blocks).
336 Administering volume snapshots Creating instant snapshots Growing and shrinking a cache You can use the vxcache command to increase the size of the cache volume that is associated with a cache object: # vxcache [-g diskgroup] growcacheto cache_object size For example, to increase the size of the cache volume associated with the cache object, mycache, to 2GB, you would use the following command: # vxcache -g mydg growcacheto mycache 2g To grow a cache by a specified amount, use the following form of the command:
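A sketch of this form, assuming the growcacheby keyword and using an illustrative cache object name and size:
# vxcache [-g diskgroup] growcacheby cache_object size
# vxcache -g mydg growcacheby mycache 1g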
Administering volume snapshots Creating traditional third-mirror break-off snapshots Creating traditional third-mirror break-off snapshots VxVM provides third-mirror break-off snapshot images of volume devices using vxassist and other commands. Note: To enhance the efficiency and usability of volume snapshots, turn on FastResync as described in “Enabling FastResync on a volume” on page 284.
338 Administering volume snapshots Creating traditional third-mirror break-off snapshots snapshot (which becomes a normal mirror), creates a new normal volume and attaches the snapshot mirror to the snapshot volume. The snapshot then becomes a normal, functioning volume and the state of the snapshot is set to ACTIVE.
Administering volume snapshots Creating traditional third-mirror break-off snapshots For example, to create a snapshot of voldef, use the following command: # vxassist [-g diskgroup] snapshot voldef snapvol The vxassist snapshot task detaches the finished snapshot mirror, creates a new volume, and attaches the snapshot mirror to it. This step should only take a few minutes.
340 Administering volume snapshots Creating traditional third-mirror break-off snapshots Converting a plex into a snapshot plex Note: The procedure described in this section cannot be used with layered volumes or any volume that has an associated version 20 DCO volume. It is recommended that the instant snapshot feature is used in preference to the procedure described in this section.
Administering volume snapshots Creating traditional third-mirror break-off snapshots Creating multiple snapshots To make it easier to create snapshots of several volumes at the same time, the snapshot option accepts more than one volume name as its argument, for example: # vxassist [-g diskgroup] snapshot volume1 volume2 ...
342 Administering volume snapshots Creating traditional third-mirror break-off snapshots Here the nmirror attribute specifies the number of mirrors in the snapshot volume that are to be re-attached. Once the snapshot plexes have been reattached and their data resynchronized, they are ready to be used in another snapshot operation. By default, the data in the original volume is used to update the snapshot plexes that have been re-attached.
Administering volume snapshots Creating traditional third-mirror break-off snapshots Dissociating a snapshot volume The link between a snapshot and its original volume can be permanently broken so that the snapshot volume becomes an independent volume.
344 Administering volume snapshots Adding a version 0 DCO and DCO volume Adding a version 0 DCO and DCO volume Note: The procedure described in this section adds a DCO log volume that has a version 0 layout as introduced in VxVM 3.2. The version 0 layout supports traditional (third-mirror break-off) snapshots, but not full-sized or space-optimized instant snapshots.
Administering volume snapshots Adding a version 0 DCO and DCO volume For non-layered volumes, the default number of plexes in the mirrored DCO volume is equal to the lesser of the number of plexes in the data volume or 2. For layered volumes, the default number of DCO plexes is always 2. If required, use the ndcomirror attribute to specify a different number. It is recommended that you configure as many DCO plexes as there are existing data and snapshot plexes in the volume.
346 Administering volume snapshots Adding a version 0 DCO and DCO volume
sd disk02-01    vol1-02      ENABLED  1024  0  -       -  -
dc vol1_dco     vol1         -        -     -  -       -  -
v  vol1_dcl     gen          ENABLED  132   -  ACTIVE  -  -
pl vol1_dcl-01  vol1_dcl     ENABLED  132   -  ACTIVE  -  -
sd disk03-01    vol1_dcl-01  ENABLED  132   0  -       -  -
pl vol1_dcl-02  vol1_dcl     ENABLED  132   -  ACTIVE  -  -
sd disk04-01    vol1_dcl-02  ENABLED  132   0  -       -  -
In this output, the DCO object is shown as vol1_dco, and the DCO volume as vol1_dcl with 2 plexes, vol1_dcl-01 and vol1_dcl-02.
Administering volume snapshots Adding a version 0 DCO and DCO volume For more information, see the vxassist(1M) and vxdco(1M) manual pages. Reattaching a version 0 DCO and DCO volume Note: The operations in this section relate to version 0 DCO volumes. They are not supported for version 20 DCO volume layout that was introduced in VxVM 4.0.
348 Administering volume snapshots Adding a version 0 DCO and DCO volume
Chapter 10 Creating and administering volume sets This chapter describes how to use the vxvset command to create and administer volume sets in Veritas Volume Manager (VxVM). Volume sets enable the use of the MultiVolume Support feature with Veritas File System (VxFS). It is also possible to use the Veritas Enterprise Administrator (VEA) to create and administer volume sets. For more information, see the VEA online help. For full details of the usage of the vxvset command, see the vxvset(1M) manual page.
354 Creating and administering volume sets Creating a volume set ■ Volume sets can be used in place of volumes with the following vxsnap operations on instant snapshots: addmir, dis, make, prepare, reattach, refresh, restore, rmmir, split, syncpause, syncresume, syncstart, syncstop, syncwait, and unprepare. The third-mirror break-off usage model for full-sized instant snapshots is supported for volume sets provided that sufficient plexes exist for each volume in the volume set.
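As a hedged sketch of creating a volume set from an existing volume (the -t vxfs option names the intended content handler; the volume set and volume names are illustrative):
# vxvset -g mydg -t vxfs make myvset1 vol1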
Creating and administering volume sets Listing details of volume sets Listing details of volume sets To list the details of the component volumes of a volume set, use the following command: # vxvset [-g diskgroup] list [volset] If the name of a volume set is not specified, the command lists the details of all volume sets in a disk group, as shown in the following example: # vxvset -g mydg list
NAME   GROUP  NVOLS  CONTEXT
set1   mydg   3      -
set2   mydg   2      -
To list the details of each volume in a volume set, specify its name.
356 Creating and administering volume sets Removing a volume from a volume set
vol2   1  12582912  ENABLED  -
vol3   2  12582912  ENABLED  -
Removing a volume from a volume set To remove a component volume from a volume set, use the following command: # vxvset [-g diskgroup] [-f] rmvol volset volume For example, the following commands remove the volumes, vol1 and vol2, from the volume set myvset: # vxvset -g mydg rmvol myvset vol1 # vxvset -g mydg rmvol myvset vol2 Note: When the final volume is removed, this deletes the volume set.
Creating and administering volume sets Raw device node access to component volumes Access to the raw device nodes for the component volumes can be configured to be read-only or read-write. This mode is shared by all the raw device nodes for the component volumes of a volume set. The read-only access mode implies that any writes to the raw device will fail; however, writes using the ioctl interface or by VxFS to update metadata are not prevented.
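Raw device node creation for the component volumes is controlled by the makedev attribute of the volume set; a sketch of enabling it (the volume set name is illustrative):
# vxvset -g mydg set makedev=on myvset1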
358 Creating and administering volume sets Raw device node access to component volumes vset_devinfo=on:read-only Raw device nodes in read-only mode. vset_devinfo=on:read-write Raw device nodes in read-write mode. This field is not displayed if makedev is set to off. Note: If the output from the vxprint -m command is fed to the vxmake command to recreate a volume set, the vset_devinfo attribute must be set to off.
Chapter 11 Configuring off-host processing Off-host processing allows you to implement the following activities: Data backup As the requirement for 24 x 7 availability becomes essential for many businesses, organizations cannot afford the downtime involved in backing up critical data offline. By taking a snapshot of the data, and backing up from this snapshot, business-critical applications can continue to run without extended down time or impacted performance.
362 Configuring off-host processing Implementing off-host processing solutions Implementing off-host processing solutions As shown in Figure 11-1, by accessing snapshot volumes from a lightly loaded host (shown here as the OHP host), CPU- and I/O-intensive operations for online backup and decision support do not degrade the performance of the primary host that is performing the main production activity (such as running a database).
Configuring off-host processing Implementing off-host processing solutions Note: A volume snapshot represents the data that exists in a volume at a given point in time. As such, VxVM does not have any knowledge of data that is cached by the overlying file system, or by applications such as databases that have files open in the file system.
364 Configuring off-host processing Implementing off-host processing solutions linked break-off snapshot” on page 315. It is recommended that a snapshot disk group is dedicated to maintaining only those disks that are used for off-host processing.
Configuring off-host processing Implementing off-host processing solutions 11 On the OHP host, back up the snapshot volume. If you need to remount the file system in the volume to back it up, first run fsck on the volume. The following are sample commands for checking and mounting a file system: # fsck -F vxfs /dev/vx/rdsk/snapvoldg/snapvol # mount -F vxfs /dev/vx/dsk/snapvoldg/snapvol mount_point Back up the file system at this point, and then use the following command to unmount it.
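The unmount step referenced above uses the same mount point as the preceding mount command, for example:
# umount mount_point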
366 Configuring off-host processing Implementing off-host processing solutions To set up a replica database using the table files that are configured within a volume in a private disk group 1 Use the following command on the primary host to see if the volume is associated with a version 20 data change object (DCO) and DCO volume that allow instant snapshots and Persistent FastResync to be used with the volume: # vxprint -g volumedg -F%instant volume This command returns on if the volume can be used for instant snapshot operations.
Configuring off-host processing Implementing off-host processing solutions Note: This step sets up the snapshot volumes, and starts tracking changes to the original volumes. When you are ready to create a replica database, proceed to step 6. 6 On the primary host, suspend updates to the volume that contains the database tables. A database may have a hot backup mode that allows you to do this by temporarily suspending writes to its tables.
368 Configuring off-host processing Implementing off-host processing solutions # umount mount_point 2 On the OHP host, use the following command to deport the snapshot volume’s disk group: # vxdg deport snapvoldg 3 On the primary host, re-import the snapshot volume’s disk group using the following command: # vxdg import snapvoldg 4 The snapshot volume is initially disabled following the join.
Chapter 12 Administering hot-relocation If a volume has a disk I/O failure (for example, the disk has an uncorrectable error), Veritas Volume Manager (VxVM) can detach the plex involved in the failure. I/O stops on that plex but continues on the remaining plexes of the volume. If a disk fails completely, VxVM can detach the disk from its disk group. All plexes on the disk are disabled. If there are any unmirrored volumes on a disk when it is detached, those volumes are also disabled.
370 Administering hot-relocation How hot-relocation works How hot-relocation works Hot-relocation allows a system to react automatically to I/O failures on redundant (mirrored or RAID-5) VxVM objects, and to restore redundancy and access to those objects. VxVM detects I/O failures on objects and relocates the affected subdisks to disks designated as spare disks or to free space within the disk group.
Administering hot-relocation How hot-relocation works ■ If no spare disks are available or additional space is needed, vxrelocd uses free space on disks in the same disk group, except those disks that have been excluded for hot-relocation use (marked nohotuse). When vxrelocd has relocated the subdisks, it reattaches each relocated subdisk to its plex. ■ Finally, vxrelocd initiates appropriate recovery procedures.
372 Administering hot-relocation How hot-relocation works Figure 12-1 Example of hot-relocation for a subdisk in a RAID-5 volume. a) The disk group contains five disks (mydg01 through mydg05). Two RAID-5 volumes are configured across four of the disks, and one spare disk (mydg05) is available for hot-relocation. b) Subdisk mydg02-01 in one RAID-5 volume fails.
Administering hot-relocation How hot-relocation works Failures have been detected by the Veritas Volume Manager: failed plexes: home-02 src-02 See “Modifying the behavior of hot-relocation” on page 384 for information on how to send the mail to users other than root.
374 Administering hot-relocation How hot-relocation works home-02 src-02 mkting-01 failing disks: mydg02 This message shows that mydg02 was detached by a failure. When a disk is detached, I/O cannot get to that disk. The plexes home-02, src-02, and mkting-01 were also detached (probably because of the failure of the disk). As described in “Partial disk failure mail messages” on page 372, the problem can be a cabling error.
Administering hot-relocation Configuring a system for hot-relocation When hot-relocation takes place, the failed subdisk is removed from the configuration database, and VxVM ensures that the disk space used by the failed subdisk is not recycled as free space. Configuring a system for hot-relocation By designating spare disks and making free space on disks available for use by hot relocation, you can control how disk space is used for relocating subdisks in the event of a disk failure.
376 Administering hot-relocation Displaying spare disk information Here mydg02 is the only disk designated as a spare in the mydg disk group. The LENGTH field indicates how much spare space is currently available on mydg02 for relocation. The following commands can also be used to display information about disks that are currently designated as spares: ■ vxdisk list lists disk information and displays spare disks with a spare flag.
Administering hot-relocation Marking a disk as a hot-relocation spare Marking a disk as a hot-relocation spare Hot-relocation allows the system to react automatically to I/O failure by relocating redundant subdisks to other disks. Hot-relocation then restores the affected VxVM objects and data. If a disk has already been designated as a spare in the disk group, the subdisks from the failed disk are relocated to the spare disk. Otherwise, any suitable free space in the disk group is used.
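A disk can also be designated as a spare with the vxedit command; a sketch (the disk group and disk media names are illustrative):
# vxedit -g mydg set spare=on mydg01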
378 Administering hot-relocation Removing a disk from use as a hot-relocation spare Removing a disk from use as a hot-relocation spare While a disk is designated as a spare, the space on that disk is not used for the creation of VxVM objects within its disk group. If necessary, you can free a spare disk for general use by removing it from the pool of hot-relocation disks.
Administering hot-relocation Making a disk available for hot-relocation use 2 At the following prompt, enter the disk media name (such as mydg01): Exclude a disk from hot-relocation use Menu: VolumeManager/Disk/UnmarkSpareDisk Use this operation to exclude a disk from hot-relocation use. This operation takes, as input, a disk name. This is the same name that you gave to the disk when you added the disk to the disk group.
380 Administering hot-relocation Configuring hot-relocation to use only spare disks 3 At the following prompt, indicate whether you want to add more disks to be excluded from hot-relocation (y) or return to the vxdiskadm main menu (n): Make another disk available for hot-relocation use? [y,n,q,?] (default: n) Configuring hot-relocation to use only spare disks If you want VxVM to use only spare disks for hot-relocation, add the following line to the file /etc/default/vxassist: spare=only If not enough st
Administering hot-relocation Moving and unrelocating subdisks Caution: During subdisk move operations, RAID-5 volumes are not redundant. Moving and unrelocating subdisks using vxdiskadm To move the hot-relocated subdisks back to the disk where they originally resided after the disk has been replaced following a failure 1 Select menu item 14 (Unrelocate subdisks back to a disk) from the vxdiskadm main menu. 2 This option prompts for the original disk media name first.
382 Administering hot-relocation Moving and unrelocating subdisks # vxassist -g mydg move home !mydg05 mydg02 Here, !mydg05 specifies the current location of the subdisks, and mydg02 specifies where the subdisks should be relocated. If the volume is enabled, subdisks within detached or disabled plexes, and detached log or RAID-5 subdisks, are moved without recovery of data. If the volume is not enabled, subdisks within STALE or OFFLINE plexes, and stale log or RAID-5 subdisks, are moved without recovery.
Administering hot-relocation Moving and unrelocating subdisks Moving hot-relocated subdisks back to a different disk The vxunreloc utility provides the -n option to move the subdisks to a different disk from where they were originally relocated. Assume that mydg01 failed, and that all of the subdisks that resided on it were hot-relocated to other disks. vxunreloc provides an option to move the subdisks to a different disk from where they were originally relocated.
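For instance, assuming the subdisks that were relocated from the failed disk mydg01 should now be placed on a different disk such as mydg05 (both names illustrative), the -n option names the new destination:
# vxunreloc -g mydg -n mydg05 mydg01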
384 Administering hot-relocation Modifying the behavior of hot-relocation After the disk that experienced the failure is fixed or replaced, vxunreloc can be used to move all the hot-relocated subdisks back to the disk. When a subdisk is hot-relocated, its original disk-media name and the offset into the disk are saved in the configuration database. When a subdisk is moved back to the original disk or to a new disk using vxunreloc, the information is erased.
Administering hot-relocation Modifying the behavior of hot-relocation vxrelocd from starting at system startup time by editing the startup file that invokes vxrelocd: /sbin/init.d/vxvm-recover. You can alter the behavior of vxrelocd as follows: ◆ To prevent vxrelocd starting, comment out the entry that invokes it in the startup file: # nohup vxrelocd root & ◆ By default, vxrelocd sends electronic mail to root when failures are detected and relocation actions are performed.
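For example, to have mail sent to additional users as well as root, the vxrelocd invocation in the startup file can be edited along these lines (the user names are placeholders):
# nohup vxrelocd root user1 user2 &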
386 Administering hot-relocation Modifying the behavior of hot-relocation
Chapter 13 Administering cluster functionality A cluster consists of a number of hosts or nodes that share a set of disks. The main benefits of cluster configurations are: Availability If one node fails, the other nodes can still access the shared disks. When configured with Serviceguard, HP’s cluster management software, mission-critical applications can continue running by transferring their execution to a standby node in the cluster.
388 Administering cluster functionality Overview of cluster volume management This chapter does not discuss Veritas Storage Foundation Cluster File System (SFCFS) nor cluster management software such as Serviceguard. Such products are separately licensed, and are not included with Veritas Volume Manager. See the documentation provided with those products for more information about them. (For Serviceguard documentation, go to www.docs.hp.com and click on High Availability, then on Serviceguard.
Administering cluster functionality Overview of cluster volume management Note: In this example, each node has two independent paths to the disks, which are configured in one or more cluster-shareable disk groups. Multiple paths provide resilience against failure of one of the paths, but this is not a requirement for cluster configuration. Disks may also be connected by single paths. The private network allows the nodes to share information about system resources and about each other’s state.
390 Administering cluster functionality Overview of cluster volume management capable of being the master node, and it is responsible for coordinating certain VxVM activities. Note: You must run commands that configure or reconfigure VxVM objects on the master node. Tasks that must be initiated from the master node include setting up shared disk groups, creating and reconfiguring volumes, and performing snapshot operations.
Administering cluster functionality Overview of cluster volume management and joins the cluster. When a node leaves the cluster, it deports all its imported shared disk groups, but they remain imported on the surviving nodes. Reconfiguring a shared disk group is performed with the cooperation of all nodes. Configuration changes to the disk group happen simultaneously on all nodes and the changes are identical.
392 Administering cluster functionality Overview of cluster volume management Table 13-1 Activation modes for shared disk groups Activation mode Description readonly (ro) The node has read access to the disk group and denies write access for all other nodes in the cluster. The node has no write access to the disk group. Attempts to activate a disk group for either of the write modes on other nodes fail. sharedread (sr) The node has read access to the disk group.
Administering cluster functionality Overview of cluster volume management Note: The activation mode of a disk group controls volume I/O from different nodes in the cluster. It is not possible to activate a disk group on a given node if it is activated in a conflicting mode on another node in the cluster. When enabling activation using the defaults file, it is recommended that this file be made identical on all nodes in the cluster. Otherwise, the results of activation are unpredictable.
394 Administering cluster functionality Overview of cluster volume management The local detach policy is intended for use with shared mirrored volumes in a cluster. This policy prevents I/O failure on a single slave node from causing a plex to be detached. This would require the plex to be resynchronized when it is subsequently reattached. The local detach policy is available for disk groups that have a version number of 70 or greater.
Administering cluster functionality Overview of cluster volume management continues to return write errors, as long as one mirror of the volume has an error. The volume continues to satisfy read requests as long as one good plex is available. If the reason for the I/O error is corrected and the node is still a member of the cluster, it can resume performing I/O from/to the volume without affecting the redundancy of the data.
396 Administering cluster functionality Overview of cluster volume management determine the behavior of the master node in such cases. This policy has two possible settings as shown in the following table: Table 13-4 Behavior of master node for different failure policies Type of I/O failure Leave (dgfailpolicy=leave) Disable (dgfailpolicy=dgdisable) Master node loses access to all copies of the logs.
Administering cluster functionality Overview of cluster volume management applications to a different volume from the one that experienced the I/O problem. This preserves data redundancy, and other nodes may still be able to perform I/O from/to the volumes on the disk. If you have a critical disk group that you do not want to become disabled in the case that the master node loses access to the copies of the logs, set the disk group failure policy to leave.
398 Administering cluster functionality Cluster initialization and configuration If you have RAID-5 volumes in a private disk group that you wish to make shareable, you must first relayout the volumes as a supported volume type such as stripe-mirror or mirror-stripe. Online relayout of shared volumes is supported provided that it does not involve RAID-5 volumes. If a shared disk group contains RAID-5 volumes, deport it and then reimport the disk group as private on one of the cluster nodes.
Administering cluster functionality Cluster initialization and configuration be held up and restarted later. In most cases, cluster reconfiguration takes precedence. However, if the volume reconfiguration is in the commit stage, it completes first. For more information on cluster reconfiguration, see “Volume reconfiguration” on page 399. Volume reconfiguration Volume reconfiguration is the process of creating, changing, and removing VxVM objects such as disk groups, volumes and plexes.
400 Administering cluster functionality Cluster initialization and configuration vxconfigd daemon The VxVM configuration daemon, vxconfigd, maintains the configuration of VxVM objects. It receives cluster-related instructions from the kernel. A separate copy of vxconfigd runs on each node, and these copies communicate with each other over a network.
Administering cluster functionality Cluster initialization and configuration ■ If the vxconfigd daemon is stopped on the master node, the vxconfigd daemons on the slave nodes periodically attempt to rejoin to the master node. Such attempts do not succeed until the vxconfigd daemon is restarted on the master. In this case, the vxconfigd daemons on the slave nodes have not lost information about the shared configuration, so that any displayed configuration information is correct.
402 Administering cluster functionality Cluster initialization and configuration progress before exiting. This process can take a long time if, for example, a long-running transaction is active. When the VxVM shutdown procedure is invoked, it checks all volumes in all shared disk groups on the node that is being shut down.
Administering cluster functionality Upgrading cluster functionality Upgrading cluster functionality The rolling upgrade feature allows you to upgrade the version of VxVM running in a cluster without shutting down the entire cluster. To install the new version of VxVM running on a cluster, make one node leave the cluster, upgrade it, and then join it back into the cluster. This operation is repeated for each node in the cluster.
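Once every node is running the new version of VxVM, the cluster protocol version is typically upgraded from one of the nodes; a hedged sketch of the command:
# vxdctl upgrade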
404 Administering cluster functionality Dirty region logging in cluster environments Dirty region logging in cluster environments Dirty region logging (DRL) is an optional property of a volume that provides speedy recovery of mirrored volumes after a system failure. DRL is supported in cluster-shareable disk groups. This section provides a brief overview of how DRL behaves in a cluster environment. In a cluster environment, the VxVM implementation of DRL differs slightly from the normal implementation.
Administering cluster functionality Multiple host failover configurations O to the volume (which overwrites the active map). During this time, other nodes can continue to perform I/O. VxVM tracks which nodes have crashed. If multiple node recoveries are underway in a cluster at a given time, their respective recoveries and recovery map updates can compete with each other. VxVM tracks changes in the state of DRL recovery and prevents I/O collisions.
406 Administering cluster functionality Multiple host failover configurations Note: Since Veritas Volume Manager uses the host name as the host ID (by default), it is advisable to change the host name of one machine if another machine shares its host name. To change the host name, use the vxdctl hostid new_hostname command. Failover The import locking scheme works well in an environment where disk groups are not normally shifted from one system to another.
Administering cluster functionality Multiple host failover configurations See the Veritas Volume Manager Troubleshooting Guide for more information on Veritas Volume Manager error messages. If you use the Veritas Cluster Server product, all disk group failover issues can be managed correctly. VCS includes a high availability monitor and includes failover scripts for VxVM, VxFS, and for several popular databases.
408 Administering cluster functionality Administering VxVM in cluster environments Administering VxVM in cluster environments The following sections describe the administration of VxVM’s cluster functionality. Note: Most VxVM commands require superuser or equivalent privileges. Requesting node status and discovering the master node The vxdctl utility controls the operation of the vxconfigd volume configuration daemon.
Administering cluster functionality Administering VxVM in cluster environments # vxdisk list accessname where accessname is the disk access name (or device name). A portion of the output from this command (for the device c4t1d0) is shown here:
Device:    c4t1d0
devicetag: c4t1d0
type:      auto
clusterid: cvm2
disk:      name=shdg01 id=963616090.1034.cvm2
timeout:   30
group:     name=shdg id=963616065.1032.
flags:     ...
410 Administering cluster functionality Administering VxVM in cluster environments The following is example output for the command vxdg list group1 on the master: Group: group1 dgid: 774222028.1090.teal import-id: 32768.1749 flags: shared version: 140 alignment: 8192 (bytes) ssb: on local-activation: exclusive-write cluster-actv-modes: node0=ew node1=off detach-policy: local private_region_failure: leave copies: nconfig=2 nlog=2 config: seqno=0.
Administering cluster functionality Administering VxVM in cluster environments Forcibly adding a disk to a disk group Note: Disks can only be forcibly added to a shared disk group on the master node.
412 Administering cluster functionality Administering VxVM in cluster environments Converting a disk group from shared to private Note: Shared disk groups can only be deported on the master node.
Administering cluster functionality Administering VxVM in cluster environments Changing the activation mode on a shared disk group Note: The activation mode for access by a cluster node to a shared disk group is set on that node. The activation mode of a shared disk group can be changed using the following command: # vxdg -g diskgroup set activation=mode The activation mode is one of exclusivewrite or ew, readonly or ro, sharedread or sr, sharedwrite or sw, or off.
414 Administering cluster functionality Administering VxVM in cluster environments Creating volumes with exclusive open access by a node Note: All shared volumes, including those with exclusive open access, can only be created on the master node. When using the vxassist command to create a volume, you can use the exclusive=on attribute to specify that the volume may only be opened by one node in the cluster at a time.
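For example (the disk group name, volume name, and size are illustrative):
# vxassist -g shareddg make vol1 5g exclusive=on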
Administering cluster functionality Administering VxVM in cluster environments You can also check the existing cluster protocol version using the following command: # vxdctl protocolversion This produces output similar to the following: Cluster running at protocol 70 Displaying the supported cluster protocol version range The following command displays the maximum and minimum protocol version supported by the node and the current protocol version: # vxdctl support This command produces output similar to the following:
416 Administering cluster functionality Administering VxVM in cluster environments Note: While the vxrecover utility is active, there can be some degradation in system performance. Obtaining cluster performance statistics The vxstat utility returns statistics for specified objects. In a cluster environment, vxstat gathers statistics from all of the nodes in the cluster. The statistics give the total usage, by all nodes, for the requested objects.
Chapter 14 Administering sites and remote mirrors In a Remote Mirror configuration (also known as a campus cluster or stretch cluster), the hosts and storage of a cluster that would usually be located in one place are instead divided between two or more sites. These sites are typically connected via a redundant high-capacity network that provides access to storage and private link communication between the cluster nodes. A typical two-site remote mirror configuration is illustrated in Figure 14-1.
424 Administering sites and remote mirrors For more information about administering Extended Distance Clusters, see Designing Disaster Tolerant High Availability Clusters (at http://docs.hp.com --> High Availability --> Metrocluster).
Administering sites and remote mirrors Although not shown in this figure, DCO log volumes are also mirrored across the sites, and disk group configuration copies are distributed across the sites. To enhance read performance, VxVM will service reads from the plexes at the local site where an application is running if the siteread read policy is set on a volume. Writes are written to plexes at all sites.
426 Administering sites and remote mirrors Configuring sites for hosts and disks Configuring sites for hosts and disks Note: The Remote Mirror feature requires that the Site Awareness license has been installed on all hosts at all sites that are participating in the configuration.
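As a sketch of the commands involved (the site name and device name are illustrative), each host is assigned a site name and the disks at each site are tagged with their site name:
# vxdctl set site=site1
# vxdisk settag c2t1d0 site=site1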
Administering sites and remote mirrors Configuring site consistency on a disk group The site name is not removed from the disks. If required, use the vxdisk rmtag command to remove the site tag as described in “Configuring sites for hosts and disks” on page 426.
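Site consistency is then enabled at the disk group level; a minimal sketch (the disk group name is illustrative), assuming the disk group meets the version and licensing requirements:
# vxdg -g mydg set siteconsistent=on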
428 Administering sites and remote mirrors Setting the siteread policy on a volume Note: The siteconsistent and allsites attributes must be set to off for RAID-5 volumes in a site-consistent disk group. Setting the siteread policy on a volume If the Site Awareness license is installed on all the hosts in the Remote Mirror configuration, the disk group is configured for site consistency with several sites enabled, and the allsites=on attribute is specified for a volume, the default read policy is siteread.
Administering sites and remote mirrors Site-based allocation of storage to volumes
# vxassist -g diskgroup make volume size mirror=site \
  site:site1 site:site2 ... allsites=off siteconsistent=off
To create a non-site-consistent mirrored volume with plexes at all of the sites:
# vxassist -g diskgroup make volume size mirror=site \
  allsites=on siteconsistent=off
To create a site-consistent mirrored volume with plexes only at some of the sites:
# vxassist -g diskgroup make volume size mirror=site \
  site:site1 site:site2 ...
430 Administering sites and remote mirrors Site-based allocation of storage to volumes Examples of storage allocation using sites The examples in the following table demonstrate how to use site names with the vxassist command to allocate storage. The disk group, ccdg, has been enabled for site consistency with disks configured at two sites, site1 and site2. Command Description # vxassist -g ccdg make vol 2g \ nmirror=2 Create a volume with one mirror at each site.
Administering sites and remote mirrors Making an existing disk group site consistent Command Description # vxassist -g ccdg remove \ mirror vol site:site1 Remove a mirror from a volume at a specified site. If the volume is site consistent, the command fails if this would remove the last remaining plex at a site. # vxassist -g ccdg growto vol 4g Grow a volume. If the volume is site consistent, the command fails if there is insufficient storage available at each site.
432 Administering sites and remote mirrors Fire drill — testing the configuration Fire drill — testing the configuration Caution: To avoid potential loss of service or data, it is recommended that you do not use these procedures on a live system. After validating that the consistency of the volumes and disk groups at your sites, you should validate the procedures that you will use in the event of the various possible types of failure.
Administering sites and remote mirrors Failure scenarios and recovery procedures Recovery from a loss of site connectivity If the network links between the sites are disrupted, the application environments may continue to run in parallel, and this may lead to inconsistencies between the disk group configuration copies at the sites. When connectivity between the sites is restored, a serial split-brain condition may then exist between the sites.
434 Administering sites and remote mirrors Failure scenarios and recovery procedures Recovery from site failure If all the hosts and storage fail at a site, use the following commands to reattach the site after it comes back online, and to recover the disk group: # vxdg -g diskgroup [-o overridessb] reattachsite sitename # vxrecover -g diskgroup The -o overridessb option is only required if a serial split-brain condition is indicated, which may happen if the site was brought back up while the private netw
Chapter 15 Using Storage Expert System administrators often find that gathering and interpreting data about large and complex configurations can be a difficult task. Veritas Storage Expert (vxse) is designed to help in diagnosing configuration problems with VxVM. Storage Expert consists of a set of simple commands that collect VxVM configuration data and compare it with “best practice.
436 Using Storage Expert How Storage Expert works How Storage Expert works Storage Expert components include a set of rule scripts and a rules engine. The rules engine runs the scripts and produces ASCII output, which is organized and archived by Storage Expert’s report generator. This output contains information about areas of VxVM configuration that do not meet the set criteria. By default, output is sent to the screen, but you can send it to a file using standard output redirection.
Using Storage Expert Running Storage Expert run Run the rule. A full list of the Storage Expert rules and their default values is listed in “Rule definitions and attributes” on page 445.
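For example, an individual rule can be run against a disk group as shown in this sketch (the rule and disk group names are illustrative); output of the kind shown below is then produced:
# vxse_dg1 -g mydg run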
438 Using Storage Expert Running Storage Expert vxse_dg1 PASS: Disk group (mydg) okay amount of disks in this disk group (4) This indicates that the specified disk group (mydg) met the conditions specified in the rule. See “Rule result types” on page 438 for a list of the possible result types. Note: You can set Storage Expert to run as a cron job to notify administrators and automatically archive reports.
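As a sketch, a crontab entry along the following lines runs the disk group configuration check each night and appends the report to a log file for later review. The schedule, rule path, disk group and log file shown are illustrative assumptions; adjust them for your environment and verify where the rule scripts are installed on your system:
30 2 * * * /opt/VRTS/vxse/vxvm/vxse_dg1 -g mydg run >> /var/adm/vxse_dg1.log 2>&1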
Using Storage Expert Identifying configuration problems using Storage Expert Identifying configuration problems using Storage Expert Storage Expert provides a large number of rules that help you to diagnose configuration issues that might cause problems for your storage environment. Each rule describes the issues involved, and suggests remedial actions.
440 Using Storage Expert Identifying configuration problems using Storage Expert For information on adding a DRL log to a mirrored volume, see “Preparing a volume for DRL and instant snapshots” on page 267. Checking for large mirrored volumes without a mirrored dirty region log (vxse_drl2) To check whether a large mirrored volume has a mirrored DRL log, run rule vxse_drl2. Mirroring the DRL log provides added protection in the event of a disk failure.
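For example, to run this rule against the disk group mydg (the disk group name is illustrative):
# vxse_drl2 -g mydg run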
Using Storage Expert Identifying configuration problems using Storage Expert Disk groups Disk groups are the basis of VxVM storage configuration, so it is critical that the integrity and resilience of your disk groups are maintained. Storage Expert provides a number of rules that enable you to check the status of disk groups and associated objects. Checking whether a configuration database is too full (vxse_dg1) To check whether the disk group configuration database has become too full, run rule vxse_dg1.
442 Using Storage Expert Identifying configuration problems using Storage Expert Checking for non-imported disk groups (vxse_dg6) To check for disk groups that are visible to VxVM but not imported, run rule vxse_dg6. Importing a disk group is described in “Importing a disk group” on page 167. Checking for initialized VM disks that are not in a disk group (vxse_disk) To find out whether there are any initialized disks that are not a part of any disk group, run rule vxse_disk.
Using Storage Expert Identifying configuration problems using Storage Expert ■ To recover a volume, see the chapter “Recovery from Hardware Failure” in the Veritas Volume Manager Troubleshooting Guide. Disk striping Striping enables you to enhance your system’s performance. Several rules enable you to monitor important parameters such as the number of columns in a striped or RAID-5 plex, and the stripe unit size of the columns.
444 Using Storage Expert Identifying configuration problems using Storage Expert Disk sparing and relocation management The hot-relocation feature of VxVM uses spare disks in a disk group to recreate volume redundancy after disk failure. Checking the number of spare disks in a disk group (vxse_spares) This “best practice” rule assumes that between 10% and 20% of disks in a disk group should be allocated as spare disks. By default, vxse_spares checks that a disk group falls within these limits.
Using Storage Expert Rule definitions and attributes Rule definitions and attributes The tables in this section list rule definitions, and rule attributes and their default values. Note: You can use the info keyword to show a description of a rule. See “Discovering what a rule does” on page 437 for details. Table 15-1 Rule definitions in Storage Expert Rule Description vxse_dc_failures Checks and points out failed disks and disabled controllers.
446 Using Storage Expert Rule definitions and attributes Table 15-1 Rule definitions in Storage Expert Rule Description vxse_rootmir Checks that all root mirrors are set up correctly. vxse_spares Checks that the number of spare disks in a disk group is within the VxVM “Best Practices” thresholds. vxse_stripes1 Checks for stripe volumes whose stripe unit is not a multiple of the default stripe unit size. vxse_stripes2 Checks for stripe volumes that have too many or too few columns.
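As noted above, the info keyword displays a description of any of these rules. For example (any rule name from Table 15-1 may be substituted):
# vxse_stripes2 info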
Using Storage Expert Rule definitions and attributes Table 15-2 Rule attributes and default attribute values Rule Attribute Default value Description vxse_drl1 mirror_threshold 1g (1GB) Large mirror threshold size. Warn if a mirror is larger than this and does not have an attached DRL log. vxse_drl2 large_mirror_size 20g (20GB) Large mirror-stripe threshold size. Warn if a mirror-stripe volume is larger than this. vxse_host - - No user-configurable variables.
448 Using Storage Expert Rule definitions and attributes Table 15-2 Rule attributes and default attribute values Rule Attribute Default value Description vxse_redundancy volume_redundancy 0 Volume redundancy check. The value of 2 performs a mirror redundancy check. A value of 1 performs a RAID-5 redundancy check. The default value of 0 performs no redundancy check. vxse_rootmir - - No user-configurable variables.
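The default values listed above can normally be overridden when a rule is run by supplying attribute=value pairs after the run keyword. As a sketch, using the mirror_threshold attribute from the table with an illustrative disk group and threshold:
# vxse_drl1 -g mydg run mirror_threshold=2g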
Chapter 16 Performance monitoring and tuning Veritas Volume Manager (VxVM) can improve overall system performance by optimizing the layout of data storage on the available hardware. This chapter contains guidelines for establishing performance priorities, for monitoring performance, and for configuring your system appropriately. Performance guidelines VxVM allows you to optimize data storage performance using the following two strategies: ■ Balance the I/O load among the available disk drives.
452 Performance monitoring and tuning Performance guidelines Striping Striping improves access performance by cutting data into slices and storing it on multiple devices that can be accessed in parallel. Striped plexes improve access performance for both read and write operations. Having identified the most heavily accessed volumes (containing file systems or databases), you can increase access bandwidth to this data by striping it across portions of multiple disks.
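As a sketch, a heavily accessed volume could be created as a four-column striped volume with vxassist. The volume name, size, stripe parameters and disk names below are illustrative only:
# vxassist -g mydg make hotvol 10g layout=stripe ncol=4 \
stripeunit=64k mydg01 mydg02 mydg03 mydg04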
Performance monitoring and tuning Performance guidelines Combining mirroring and striping Note: You need a full license to use this feature. Mirroring and striping can be used together to achieve a significant improvement in performance when there are multiple I/O streams. Striping provides better throughput because parallel I/O streams can operate concurrently on separate devices. Serial access is optimized when I/O exactly fits across all stripe units in one stripe.
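For example, a striped and mirrored volume can be created in a single step by specifying the stripe-mirror layout. The volume name, size, number of columns and number of mirrors below are illustrative:
# vxassist -g mydg make strvol 10g layout=stripe-mirror ncol=3 nmirror=2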
454 Performance monitoring and tuning Performance guidelines ■ round—a round-robin read policy, where all plexes in the volume take turns satisfying read requests to the volume. ■ prefer—a preferred-plex read policy, where the plex with the highest performance usually satisfies read requests. If that plex fails, another plex is accessed. ■ select—default read policy, where the appropriate read policy for the configuration is selected automatically.
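For example, the read policy of an existing mirrored volume can be changed with the vxvol rdpol operation; the volume and plex names below are illustrative (see the vxvol(1M) manual page):
# vxvol -g mydg rdpol prefer vol01 vol01-02
# vxvol -g mydg rdpol round vol01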
Performance monitoring and tuning Performance monitoring Performance monitoring As a system administrator, you have two sets of performance priorities. One set is physical, concerned with hardware such as disks and controllers. The other set is logical, concerned with managing software and its operation.
456 Performance monitoring and tuning Performance monitoring For detailed information about how to use vxtrace, refer to the vxtrace(1M) manual page. Printing volume statistics Use the vxstat command to access information about activity on volumes, plexes, subdisks, and disks under VxVM control, and to print summary statistics to the standard output. These statistics represent VxVM activity from the time the system initially booted or from the last time the counters were reset to zero.
Performance monitoring and tuning Performance monitoring Using I/O statistics Examination of the I/O statistics can suggest how to reconfigure your system. You should examine two primary statistics: volume I/O activity and disk I/O activity. Before obtaining statistics, reset the counters for all existing statistics using the vxstat -r command.
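A typical sequence is therefore to clear the counters, allow the application to run for a representative period, and then display per-volume and per-disk statistics. The disk group name is illustrative:
# vxstat -g mydg -r
# vxstat -g mydg
# vxstat -g mydg -d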
458 Performance monitoring and tuning Performance monitoring Note: Your system may use device names that differ from these examples. For more information on device names, see “Administering disks” on page 77. The subdisks line (beginning sd) indicates that the volume archive is on disk mydg03. To move the volume off mydg03, use the following command: # vxassist -g mydg move archive !mydg03 dest_disk Here dest_disk is the destination disk to which you want to move the volume.
Performance monitoring and tuning Performance monitoring Use I/O tracing (or subdisk statistics) to determine whether volumes have excessive activity in particular regions of the volume. If the active regions can be identified, split the subdisks in the volume and move those regions to a less busy disk. Caution: Striping a volume, or splitting a volume across multiple disks, increases the chance that a disk failure results in failure of that volume.
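As described above, a busy region that has been isolated in its own subdisk can be split off and moved to a less busy disk with the vxsd command. This is only a sketch: the subdisk names, split size and destination are illustrative, and the destination subdisk must already exist, be unassociated, and match the size of the subdisk being moved (see “Moving subdisks” on page 209):
# vxsd -g mydg -s 1000m split mydg03-01 mydg03-01 mydg03-02
# vxsd -g mydg mv mydg03-02 mydg08-01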
460 Performance monitoring and tuning Tuning VxVM Tuning VxVM This section describes how to adjust the tunable parameters that control the system resources used by VxVM. Depending on the system resources that are available, adjustments may be required to the values of some tunable parameters to optimize performance. General tuning guidelines VxVM is optimally tuned for most configurations ranging from small systems to larger servers.
Performance monitoring and tuning Tuning VxVM Number of configuration copies for a disk group Selection of the number of configuration copies for a disk group is based on a trade-off between redundancy and performance. As a general rule, reducing the number of configuration copies in a disk group speeds up initial access of the disk group, initial startup of the vxconfigd daemon, and transactions performed within the disk group.
462 Performance monitoring and tuning Tuning VxVM Tunable parameters The following sections describe specific tunable parameters. dmp_cache_open If set to on, the first open of a device that is performed by an array support library (ASL) is cached. This enhances the performance of device discovery by minimizing the overhead caused by subsequent opens by ASLs. If set to off, caching is not performed. The default value is off.
Performance monitoring and tuning Tuning VxVM 3 Display level 1 and 2 messages plus messages that relate to I/O errors, I/O error analysis and path media errors. 4 Display level 1, 2 and 3 messages plus messages that relate to setting or changing attributes on a path. dmp_path_age The time for which an intermittently failing path needs to be monitored as healthy before DMP once again attempts to schedule I/O requests on it. The default value is 300 seconds. The minimum value is 1 second.
464 Performance monitoring and tuning Tuning VxVM command. A value can also be set for paths to individual arrays by using the vxdmpadm command as described in “Configuring the I/O throttling mechanism” on page 151. dmp_restore_daemon_cycles If the DMP restore policy is CHECK_PERIODIC, the number of cycles after which the CHECK_ALL policy is called. The value of this tunable can also be changed by using the vxdmpadm command as described in “Configuring DMP path restoration policies” on page 153.
Performance monitoring and tuning Tuning VxVM tunable is used by utilities performing operations such as resynchronizing mirrors or rebuilding RAID-5 columns. The default for this tunable is 50 ticks. Increasing this value results in slower recovery operations and consequently lower system impact while recoveries are being performed. vol_fmr_logsz The maximum size in kilobytes of the bitmap that Non-Persistent FastResync uses to track changed blocks in a volume.
466 Performance monitoring and tuning Tuning VxVM vol_maxio The maximum size of logical I/O operations that can be performed without breaking up the request. I/O requests to VxVM that are larger than this value are broken up and performed synchronously. Physical I/O requests are broken up based on the capabilities of the disk device and are unaffected by changes to this maximum logical request limit. The default value for this tunable is 256 sectors (256KB).
Performance monitoring and tuning Tuning VxVM If stripes are larger than vol_maxspecialio, full stripe I/O requests are broken up, which prevents full-stripe read/writes. This throttles the volume I/O throughput for sequential I/O or larger I/O requests. This tunable limits the size of an I/O request at a higher level in VxVM than the level of an individual disk.
468 Performance monitoring and tuning Tuning VxVM The VxVM kernel currently sets the default value for this tunable to 512 sectors. Note: If DRL sequential logging is configured, the value of voldrl_min_regionsz must be set to at least half the value of vol_maxio. voliomem_chunk_size The granularity of memory chunks used by VxVM when allocating or releasing system memory. A larger granularity reduces CPU overhead due to memory allocation by allowing VxVM to retain hold of a larger amount of memory.
Performance monitoring and tuning Tuning VxVM voliot_iobuf_default The default size for the creation of a tracing buffer in the absence of any other specification of desired kernel buffer size as part of the trace ioctl. The default size of this tunable is 8192 bytes (8KB). If trace data is often being lost due to this buffer size being too small, then this value can be tuned to a more generous amount.
470 Performance monitoring and tuning Tuning VxVM Note: The memory allocated for this cache is exclusively dedicated to it. It is not available for other processes or applications. Setting the value of volpagemod_max_memsz below 512KB fails if cache objects or volumes that have been prepared for instant snapshot operations are present on the system. If you do not use the FastResync or DRL features that are implemented using a version 20 DCO volume, the value of volpagemod_max_memsz can be set to 0.
Appendix A Commands summary This appendix summarizes the usage and purpose of important commonly-used commands in Veritas Volume Manager (VxVM). References are included to longer descriptions in the remainder of this book. Most commands (excepting daemons, library commands and supporting scripts) are linked to the /usr/sbin directory from the /opt/VRTS/bin directory.
474 Commands summary The following tables summarize the commonly-used commands: ■ “Obtaining information about objects in VxVM” on page 474 ■ “Administering disks” on page 475 ■ “Creating and administering disk groups” on page 478 ■ “Creating and administering subdisks” on page 480 ■ “Creating and administering plexes” on page 482 ■ “Creating volumes” on page 484 ■ “Administering volumes” on page 487 ■ “Monitoring and controlling tasks” on page 491 Table A-1 Obtaining information about obj
Commands summary Table A-1 Obtaining information about objects in VxVM Command Description vxinfo [-g diskgroup] [volume ...] Displays information about the accessibility and usability of volumes. See “Listing Unstartable Volumes” in the Veritas Volume Manager Troubleshooting Guide. Example: # vxinfo -g mydg myvol1 \ myvol2 vxprint -hrt [-g diskgroup] [object] Prints single-line information about objects in VxVM. See “Displaying volume information” on page 256.
476 Commands summary Table A-2 Administering disks Command Description vxdiskadd [devicename ...] Adds a disk specified by device name. See “Using vxdiskadd to place a disk under control of VxVM” on page 101. Example: # vxdiskadd c0t1d0 vxedit [-g diskgroup] rename olddisk \ newdisk Renames a disk under control of VxVM. See “Renaming a disk” on page 118.
Commands summary Table A-2 Administering disks Command Description vxedit [-g diskgroup] set \ spare=on|off diskname Adds/removes a disk from the pool of hotrelocation spares. See “Marking a disk as a hot-relocation spare” on page 377. See “Removing a disk from use as a hotrelocation spare” on page 378. Examples: # vxedit -g mydg set \ spare=on mydg04 # vxedit -g mydg set \ spare=off mydg04 vxdisk offline devicename Takes a disk offline. See “Taking a disk offline” on page 117.
478 Commands summary Table A-3 Creating and administering disk groups Command Description vxdg [-s] init diskgroup \ [diskname=]devicename Creates a disk group using a pre-initialized disk. See “Creating a disk group” on page 163. See “Creating a shared disk group” on page 415. Example: # vxdg init mydg \ mydg01=c0t1d0 vxsplitlines -g diskgroup Reports conflicting configuration information. See “Handling conflicting configuration copies” on page 182.
Commands summary Table A-3 Creating and administering disk groups Command Description vxdg [-o expand] listmove sourcedg \ targetdg object ... Lists the objects potentially affected by moving a disk group. See “Listing objects potentially affected by a move” on page 192. Example: # vxdg -o expand listmove \ mydg newdg myvol1 vxdg [-o expand] move sourcedg \ targetdg object ... Moves objects between disk groups. See “Moving objects between disk groups” on page 195.
480 Commands summary Table A-3 Creating and administering disk groups Command Description vxrecover -g diskgroup -sb Starts all volumes in an imported disk group. See “Moving disk groups between systems” on page 177. Example: # vxrecover -g mydg -sb vxdg destroy diskgroup Destroys a disk group and releases its disks. See “Destroying a disk group” on page 200.
Commands summary Table A-4 Creating and administering subdisks Command Description vxsd [-g diskgroup] assoc plex \ subdisk1:0 ... subdiskM:N-1 Adds subdisks to the ends of the columns in a striped or RAID-5 volume. See “Associating subdisks with plexes” on page 210. Example: # vxsd -g mydg assoc \ vol01-01 mydg10-01:0 \ mydg11-01:1 mydg12-01:2 vxsd [-g diskgroup] mv oldsubdisk \ newsubdisk ... Replaces a subdisk. See “Moving subdisks” on page 209.
482 Commands summary Table A-4 Creating and administering subdisks Command Description vxunreloc [-g diskgroup] original_disk Relocates subdisks to their original disks. See “Moving and unrelocating subdisks using vxunreloc” on page 382. Example: # vxunreloc -g mydg mydg01 vxsd [-g diskgroup] dis subdisk Dissociates a subdisk from a plex. See “Dissociating subdisks from plexes” on page 212. Example: # vxsd -g mydg dis mydg02-01 vxedit [-g diskgroup] rm subdisk Removes a subdisk.
Commands summary Table A-5 Creating and administering plexes Command Description vxmake [-g diskgroup] plex plex \ layout=stripe|raid5 stwidth=W \ ncolumn=N sd=subdisk1[,subdisk2,...] Creates a striped or RAID-5 plex. See “Creating a striped plex” on page 216. Example: # vxmake -g mydg plex pl-01 \ layout=stripe stwidth=32 \ ncolumn=2 \ sd=mydg01-01,mydg02-01 vxplex [-g diskgroup] att volume plex Attaches a plex to an existing volume. See “Attaching and associating plexes” on page 221.
484 Commands summary Table A-5 Creating and administering plexes Command Description vxplex [-g diskgroup] cp volume newplex Copies a volume onto a plex. See “Copying plexes” on page 225. Example: # vxplex -g mydg cp vol02 \ vol03-01 vxmend [-g diskgroup] fix clean plex Sets the state of a plex in an unstartable volume to CLEAN. See “Reattaching plexes” on page 223. Example: # vxmend -g mydg fix clean \ vol02-02 vxplex [-g diskgroup] -o rm dis plex Dissociates and removes a plex from a volume.
Commands summary Table A-6 Creating volumes Command Description vxassist -b [-g diskgroup] make \ volume length [layout=layout ] [attributes] Creates a volume. See “Creating a volume on any disk” on page 235. See “Creating a volume on specific disks” on page 236. Example: # vxassist -b -g mydg make \ myvol 20g layout=concat \ mydg01 mydg02 vxassist -b [-g diskgroup] make \ volume length layout=mirror \ [nmirror=N] [attributes] Creates a mirrored volume. See “Creating a mirrored volume” on page 241.
486 Commands summary Table A-6 Creating volumes Command Description vxassist -b [-g diskgroup] make \ volume length layout=mirror \ mirror=ctlr [attributes] Creates a volume with mirrored data plexes on separate controllers. See “Mirroring across targets, controllers or enclosures” on page 247. Example: # vxassist -b -g mydg make \ mymcvol 20g layout=mirror \ mirror=ctlr vxmake -b [-g diskgroup] -Uusage_type \ vol volume [len=length] plex=plex,... Creates a volume from existing plexes.
Commands summary Table A-7 Administering volumes Command Description vxassist [-g diskgroup] mirror volume \ [attributes] Adds a mirror to a volume. See “Adding a mirror to a volume” on page 263. Example: # vxassist -g mydg mirror \ myvol mydg10 vxassist [-g diskgroup] remove \ mirror volume [attributes] Removes a mirror from a volume. See “Removing a mirror” on page 265.
488 Commands summary Table A-7 Administering volumes Command Description vxsnap [-g diskgroup] prepare volume \ [drl=on|sequential|off] Prepares a volume for instant snapshots and for DRL logging. See “Preparing a volume for DRL and instant snapshots” on page 267. Example: # vxsnap -g mydg prepare \ myvol drl=on vxsnap [-g diskgroup] make \ source=volume/newvol=snapvol\ [/nmirror=number] Takes a full-sized instant snapshot of a volume by breaking off plexes of the original volume.
Commands summary Table A-7 Administering volumes Command Description vxsnap [-g diskgroup] make \ source=volume/newvol=snapvol\ /cache=cache_object Takes a space-optimized instant snapshot of a volume. See “Creating instant snapshots” on page 311. Example: # vxsnap -g mydg make \ source=myvol/\ newvol=mysosvol/\ cache=cobj vxsnap [-g diskgroup] refresh snapshot Refreshes a snapshot from its original volume. See “Refreshing an instant snapshot” on page 329.
490 Commands summary Table A-7 Administering volumes Command Description vxassist [-g diskgroup] relayout \ volume [layout=layout] [relayout_options] Performs online relayout of a volume. See “Performing online relayout” on page 286. Example: # vxassist -g mydg relayout \ vol2 layout=stripe vxassist [-g diskgroup] relayout \ volume layout=raid5 stripeunit=W \ ncol=N Relays out a volume as a RAID-5 volume with stripe width W and N columns. See “Performing online relayout” on page 286.
Commands summary Table A-8 Monitoring and controlling tasks Command Description command [-g diskgroup] -t tasktag \ [options] [arguments] Specifies a task tag to a VxVM command. See “Specifying task tags” on page 259. Example: # vxrecover -g mydg \ -t mytask -b mydg05 vxtask [-h] [-g diskgroup] list Lists tasks running on a system. See “Using the vxtask command” on page 261. Example: # vxtask -h -g mydg list vxtask monitor task Monitors the progress of a task.
492 Commands summary Table A-8 Monitoring and controlling tasks Command Description vxtask abort task Cancels a task and attempts to reverse its effects. See “Using the vxtask command” on page 261.
Commands summary Online manual pages Online manual pages Manual pages are organized into three sections: ■ Section 1M — administrative commands ■ Section 4 — file formats ■ Section 7 — device driver interfaces Section 1M — administrative commands Manual pages in section 1M describe commands that are used to administer Veritas Volume Manager. Table A-9 Section 1M manual pages Name Description dgcfgbackup Create or update VxVM volume group configuration backup file.
494 Commands summary Online manual pages Table A-9 Section 1M manual pages Name Description vxconfigd Veritas Volume Manager configuration daemon vxconfigrestore Restore disk group configuration. vxcp_lvmroot Copy LVM root disk onto new Veritas Volume Manager root disk. vxdarestore Restore simple or nopriv disk access records. vxdco Perform operations on version 0 DCO objects and DCO volumes. vxdctl Control the volume configuration daemon.
Commands summary Online manual pages Table A-9 Section 1M manual pages Name Description vxnotify Display Veritas Volume Manager configuration events. vxpfto Set Powerfail Timeout (pfto). vxplex Perform Veritas Volume Manager operations on plexes. vxpool Create and administer ISP storage pools. vxprint Display records from the Veritas Volume Manager configuration. vxr5check Verify RAID-5 volume parity. vxreattach Reattach disk drives that have become accessible again.
496 Commands summary Online manual pages Table A-9 Section 1M manual pages Name Description vxvmboot Prepare Veritas Volume Manager volume as a root, boot, primary swap or dump volume. vxvmconvert Convert LVM volume groups to VxVM disk groups. vxvol Perform Veritas Volume Manager operations on volumes. vxvoladm Create and administer ISP application volumes on allocated storage. vxvoladmtask Administer ISP tasks. vxvset Create and administer volume sets.
Appendix B Configuring Veritas Volume Manager This appendix provides guidelines for setting up efficient storage management after installing the Veritas Volume Manager software.
498 Configuring Veritas Volume Manager Adding unsupported disk arrays as JBODs Optional Setup Tasks ■ Place the root disk under VxVM control and mirror it to create an alternate boot disk. ■ Designate hot-relocation spare disks in each disk group. ■ Add mirrors to volumes. ■ Configure DRL and FastResync on volumes. Maintenance Tasks ■ Resize volumes and file systems. ■ Add more disks, create new disk groups, and create new volumes. ■ Create and maintain snapshots.
Configuring Veritas Volume Manager Guidelines for configuring storage Guidelines for configuring storage A disk failure can cause loss of data on the failed disk and loss of access to your system. Loss of access is due to the failure of a key disk used for system operations. Veritas Volume Manager can protect your system from these problems. To maintain system availability, data important to running and booting your system must be mirrored. The data must be preserved so it can be used in case of failure.
500 Configuring Veritas Volume Manager Guidelines for configuring storage ■ Do not place subdisks from different plexes of a mirrored volume on the same physical disk. This action compromises the availability benefits of mirroring and degrades performance. Using the vxassist or vxdiskadm commands precludes this from happening. ■ To provide optimum performance improvements through the use of mirroring, at least 70 percent of physical I/O operations should be read operations.
Configuring Veritas Volume Manager Guidelines for configuring storage Striping guidelines Refer to the following guidelines when using striping. ■ Do not place more than one column of a striped plex on the same physical disk. ■ Calculate stripe-unit sizes carefully. In general, a moderate stripe-unit size (for example, 64 kilobytes, which is also the default used by vxassist) is recommended.
502 Configuring Veritas Volume Manager Guidelines for configuring storage RAID-5 guidelines Refer to the following guidelines when using RAID-5. In general, the guidelines for mirroring and striping together also apply to RAID-5. The following guidelines should also be observed with RAID-5: ■ Only one RAID-5 plex can exist per RAID-5 volume (but there can be multiple log plexes). ■ The RAID-5 plex must be derived from at least three subdisks on three or more physical disks.
Configuring Veritas Volume Manager Guidelines for configuring storage ■ After hot-relocation occurs, designate one or more additional disks as spares to augment the spare space. Some of the original spare space may be occupied by relocated subdisks. ■ If a given disk group spans multiple controllers and has more than one spare disk, set up the spare disks on different controllers (in case one of the controllers fails).
504 Configuring Veritas Volume Manager Controlling VxVM’s view of multipathed devices The pathnames include a directory named for the disk group. Use the appropriate device node to create, mount and repair file systems, and to lay out databases that require raw partitions.
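For example, assuming a VxFS file system on the volume vol01 in the disk group mydg (the names and mount point are illustrative), the character device is used to create the file system and the block device to mount it:
# newfs -F vxfs /dev/vx/rdsk/mydg/vol01
# mount -F vxfs /dev/vx/dsk/mydg/vol01 /mnt/vol01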
Configuring Veritas Volume Manager Configuring cluster support 5 From the master node only, use vxassist or VEA to create volumes in the disk groups. Note: RAID-5 volumes are not supported for sharing in a cluster. 6 If the cluster is only running with one node, bring up the other cluster nodes. Enter the vxdg list command on each node to display the shared disk groups.
506 Configuring Veritas Volume Manager Reconfiguration tasks Reconfiguration tasks The following sections describe tasks that allow you to make changes to the configuration that you specified during installation. Changing the name of the default disk group If you use the Veritas installer to install the Veritas Volume Manager software, you can enter the name of the default disk group.
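If you need to change this setting later, the system-wide default disk group can normally be set with vxdctl and displayed with vxdg. This is a sketch with an illustrative disk group name; see the vxdctl(1M) and vxdg(1M) manual pages:
# vxdctl defaultdg newdefaultdg
# vxdg defaultdg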
Glossary Active/Active disk arrays This type of multipathed disk array allows you to access a disk in the disk array through all the paths to the disk simultaneously, without any performance degradation. Active/Passive disk arrays This type of multipathed disk array allows one path to a disk to be designated as primary and used to access the disk at any time. Using a path other than the designated active path results in severe performance degradation in some disk arrays.
508 Glossary cluster A set of hosts (each termed a node) that share a set of disks. cluster manager An externally-provided daemon that runs on each node in a cluster. The cluster managers on each node communicate with each other and inform VxVM of changes in cluster membership. cluster-shareable disk group A disk group in which access to the disks is shared by multiple hosts (also referred to as a shared disk group). Also see private disk group. column A set of one or more subdisks within a striped plex.
Glossary disk A collection of read/write data blocks that are indexed and can be accessed fairly quickly. Each disk has a universally unique identifier. disk access name An alternative term for a device name. disk access records Configuration records used to specify the access path to particular disks.
510 Glossary dissociate The process by which any link that exists between two VxVM objects is removed. For example, dissociating a subdisk from a plex removes the subdisk from the plex and adds the subdisk to the free space pool. dissociated plex A plex dissociated from a volume. dissociated subdisk A subdisk dissociated from a plex. distributed lock manager A lock manager that runs on different systems in a cluster, and ensures consistent access to distributed resources.
Glossary hot-relocation A technique of automatically restoring redundancy and access to mirrored and RAID-5 volumes when a disk fails. This is done by relocating the affected subdisks to disks designated as spares and/or free space in the same disk group. hot-swap Refers to devices that can be removed from, or inserted into, a system without first turning off the power supply to the system.
512 Glossary Non-Persistent FastResync A form of FastResync that cannot preserve its maps across reboots of the system because it stores its change map in memory. object An entity that is defined to and recognized internally by VxVM. The VxVM objects are: volume, plex, subdisk, disk, and disk group. There are actually two types of disk objects—one for the physical aspect of the disk and the other for the logical aspect. parity A calculated value that can be used to reconstruct data after a failure.
Glossary primary path In Active/Passive disk arrays, a disk can be bound to one particular controller on the disk array or owned by a controller. The disk can then be accessed using the path through this particular controller. Also see path and secondary path. private disk group A disk group in which the disks are accessed by only one specific host in a cluster. Also see shared disk group. private region A region of a physical disk used to store private, structured VxVM information.
514 Glossary sector A unit of size, which can vary between systems. Sector size is set per device (hard drive, CD-ROM, and so on). Although all devices within a system are usually configured to the same sector size for interoperability, this is not always the case. A sector is commonly 1024 bytes. shared disk group A disk group in which access to the disks is shared by multiple hosts (also referred to as a cluster-shareable disk group). Also see private disk group.
Glossary striping A layout technique that spreads data across several physical disks using stripes. The data is allocated alternately to the stripes within the subdisks of each plex. subdisk A consecutive set of contiguous disk blocks that form a logical disk segment. Subdisks can be associated with plexes to form volumes. swap area A disk region used to hold copies of memory pages swapped out by the system pager process. swap volume A VxVM volume that is configured for use as a swap area.
Index Symbols /dev/vx/dmp directory 122 /dev/vx/rdmp directory 122 /etc/default/vxassist file 233, 380 /etc/default/vxdg defaults file 393 /etc/default/vxdg file 163 /etc/default/vxdisk file 81, 97 /etc/default/vxse file 439 /etc/fstab file 282 /etc/volboot file 204 /etc/vx/darecs file 204 /etc/vx/disk.info file 93 /etc/vx/dmppolicy.info file 142 /etc/vx/volboot file 178 /sbin/init.
518 Index newvol 322 nmirror 322 nomanual 140 nopreferred 140 plex 226 preferred priority 140 primary 141 putil 213, 226 secondary 141 sequential DRL 244 setting for paths 140 setting for rules 438 snapvol 319, 324 source 319, 324 standby 141 subdisk 213 syncing 311, 336 tutil 213, 226 auto disk type 80 autogrow tuning 338 autogrow attribute 314, 317 autogrowby attribute 314 autotrespass mode 121 B backups created using snapshots 311 creating for volumes 295 creating using instant snapshots 311 creating u
Index activating shared disk groups 418 activation modes for shared disk groups 392 benefits 387 checking cluster protocol version 420 cluster protocol version number 408 cluster-shareable disk groups 391 configuration 400 configuring exclusive open of volume by node 419, 420 connectivity policies 394 converting shared disk groups to private 417 creating shared disk groups 415 designating shareable disk groups 391 detach policies 394 determining if disks are shared 414 forcibly adding disks to disk groups
520 Index Cross-platform Data Sharing (CDS) alignment constraints 234 disk format 80 CVM cluster functionality of VxVM 387 D d# 20, 78 data change object DCO 69 data redundancy 42, 43, 46 data volume configuration 62 database replay logs and sequential DRL 61 databases resilvering 62 resynchronizing 62 DCO adding to RAID-5 volumes 269 adding version 0 DCOs to volumes 348 adding version 20 DCOs to volumes 267 calculating plex size for version 20 70 considerations for disk layout 192 creating volumes with v
Index Active/Active 122 Active/Passive 121 adding disks to DISKS category 87 adding vendor-supplied support package 83 Asymmetric Active/Active 122 defined 21 excluding support for 86 listing excluded 87 listing supported 86 listing supported disks in DISKS category 87 multipathed 22 re-including support for 87 removing disks from DISKS category 89 removing vendor-supplied support package 84 disk drives variable geometry 501 disk duplexing 42, 247 disk groups activating shared 418 activation in clusters 39
522 Index splitting 188, 197 splitting in clusters 418 Storage Expert rules 441 upgrading version of 200, 203 version 200, 202 disk media names 28, 77 disk names 77 configuring persistent 92 disk sparing Storage Expert rules 445 disk## 29 disk##-## 29 diskdetpolicy attribute 399 diskgroup## 77 disks 83 adding 101 adding to disk groups 164 adding to disk groups forcibly 417 adding to DISKS category 87 array support library 83 auto-configured 80 categories 83 CDS format 80 changing default layout attributes
Index resolving status in clusters 394 scanning for 81 secondary path 132 setting connectivity policies in clusters 419 setting failure policies in clusters 419 setting tags on 169 simple 80 spare 374 specifying to vxassist 236 stripe unit size 501 tagging with site name 426 taking offline 117 UDID flag 168 unique identifier 168 unreserving 119 upgrading contoller firmware 148 VM 28 writing a new identifier to 168 DISKS category 83 adding disks 87 listing supported disks 87 removing disks 89 DMP check_all
524 Index maximum number of dirty regions 468 minimum number of sectors 469 operation in clusters 408 recovery map in version 20 DCO 69 re-enabling 270 removing logs from mirrored volumes 274 removing support for 271 sequential 61 use of DCO with 60 drl attribute 244, 273 DRL guidelines 501 duplexing 42, 247 dynamic LUN expansion 107 E ecopy 108 EFI disks 80 EMC arrays moving disks between disk groups 195 EMC PowerPath coexistence with DMP 84 EMC Symmetrix autodiscovery 84 EMPTY plex state 218 volume stat
Index upgrading 148 FMR.
526 Index reattaching 330 refreshing 329 removing 333 removing support for 271 restoring volumes using 332 space-optimized 301 splitting hierarchies 333 synchronizing 336 Intelligent Storage Provisioning (ISP) 32 intent logging 296 INVALID volume state 258 ioctl calls 467, 468 IOFAIL plex condition 220 IOFAIL plex state 218 ISP volumes 32 J JBODs adding disks to DISKS category 87 listing supported disks 87 removing disks from DISKS category 89 K kernel states for plexes 221 volumes 258 L layered volumes
Index memory granularity of allocation by VxVM 469 maximum size of pool for VxVM 469 minimum size of pool for VxVM 471 persistence of FastResync in 67 messages complete disk failure 374 hot-relocation of subdisks 380 partial disk failure 373 metadata 170 metadevices 78 metanodes DMP 122 minimum queue load balancing policy 144 minor numbers 179 mirbrk snapshot type 335 mirdg attribute 324 mirrored volumes adding DRL logs 273 adding sequential DRL logs 273 changing read policies for 281 checking existence of
528 Index maximum number in a cluster 387 node abort in clusters 407 requesting status of 413 shutdown in clusters 406 use of vxclustadm to control cluster functionality 401 NODEVICE plex condition 220 nodg 159 nomanual path attribute 140 non-autotrespass mode 122 non-layered volume conversion 292 Non-Persistent FastResync 67 nopreferred path attribute 140 nopriv disk type 80 nopriv disks issues with enclosures 94 O objects physical 20 virtual 25 off-host processing 361, 387 OFFLINE plex state 218 online
Index physical disks adding to disk groups 164 clearing locks on 178 complete failure messages 374 determining failed 373 displaying information 119 displaying information about 119, 161 displaying spare 376 enabling 116 excluding free space from hot-relocation use 378 failure handled by hot-relocation 370 initializing 91 installing 96 making available for hot-relocation 377 making free space available for hot-relocation use 379 marking as spare 377 moving between disk groups 176, 195 moving disk groups be
530 Index recovering after correctable hardware failure 373 removing 225 removing from volumes 265 sparse 57, 211, 221, 224 specifying for online relayout 290 states 216 striped 38 taking offline 222, 262 tutil attribute 226 types 31 polling interval for DMP restore 153 PowerPath coexistence with DMP 84 prefer read policy 281 preferred plex performance of read policy 454 read policy 281 preferred priority path attribute 140 primary path 121, 132 primary path attribute 141 priority load balancing 144 privat
Index reinitialization of disks 101 relayout changing number of columns 290 changing region size 291 changing speed of 291 changing stripe unit size 290 combining with conversion 292 controlling progress of 291 limitations 57 monitoring tasks for 291 online 54 pausing 291 performing online 286 resuming 291 reversing direction of 292 specifying non-default 290 specifying plexes 290 specifying task tags for 290 storage 54 transformation characteristics 58 types of transformation 287 viewing status of 291 rel
532 Index checking plex and volume states 443 checking RAID-5 log size 441 checking rootability 445 checking stripe unit size 444 checking system name 445 checking volume redundancy 443 definitions 446 finding information about 437 for checking hardware 445 for checking rootability 445 for checking system name 445 for disk groups 441 for disk sparing 445 for recovery time 440 for striped mirror volumes 444 listing attributes 437 result types 438 running 438 setting values of attributes 438 S scandisks vxd
Index splitting 333 snapshot mirrors adding to volumes 328 removing from volumes 328 snapshots adding mirrors to volumes 328 adding plexes to 345 and FastResync 66 backing up multiple volumes 325, 344 backing up volumes online using 311 cascaded 304 comparison of features 65 converting plexes to 343 creating a hierarchy of 329 creating backups using third-mirror 340 creating for volume sets 326 creating full-sized instant 319 creating independent volumes 346 creating instant 311 creating linked break-off 3
534 Index setting values of rule attributes 438 vxse 435 storage failures 433 storage processor 121 storage relayout 54 stripe columns 38 stripe unit size recommendations 501 stripe units changing size 290 checking size 444 defined 38 striped plexes adding subdisks 211 defined 38 striped volumes changing number of columns 290 changing stripe unit size 290 checking number of columns 444 checking stripe unit size 444 creating 245 defined 228 failure of 38 performance 452 specifying non-default number of colu
Index T t# 20, 78 tags for tasks 259 listing for disks 169 removing from disks 170 removing from volumes 280 renaming 280 setting on disks 169 setting on volumes 250, 280 specifying for online relayout tasks 290 specifying for tasks 259 target IDs number 20 specifying to vxassist 236 target mirroring 237, 247 task monitor in VxVM 259 tasks aborting 260 changing state of 260, 261 identifiers 259 listing 260 managing 260 modifying parameters of 261 monitoring 260 monitoring online relayout 291 pausing 261 re
536 Index V-5-1-552 164 V-5-1-569 412 V-5-1-587 178 V-5-2-3091 192 V-5-2-369 165 V-5-2-4292 192 version 0 of DCOs 69 version 20 of DCOs 69 versioning of DCOs 68 versions checking for disk group 442 disk group 200 displaying for disk group 203 upgrading 200 virtual objects 25 VM disks defined 28 determining if shared 414 displaying spare 376 excluding free space from hot-relocation use 378 initializing 91 making free space available for hot-relocation use 379 marking as spare 377 mirroring volumes on 264 mo
Index advanced approach to creating 230 assisted approach to creating 231 associating plexes with 221 attaching plexes to 221 backing up 295 backing up online using snapshots 311 block device files 254, 504 booting VxVM-rootable 103 changing layout online 286 changing number of columns 290 changing read policies for mirrored 281 changing stripe unit size 290 character device files 254, 504 checking for disabled 443 checking for stopped 443 checking if FastResync is enabled 285 checking redundancy of 443 co
538 Index RAID-1+0 43 RAID-10 43 RAID-5 46, 228 raw device files 254, 504 reattaching plexes 223 reattaching version 0 DCOs to 351 reconfiguration in clusters 403 recovering after correctable hardware failure 373 removing 282 removing DRL logs 274 removing from /etc/fstab 282 removing linked snapshots from 329 removing mirrors from 265 removing plexes from 265 removing RAID-5 logs 276 removing sequential DRL logs 274 removing snapshot mirrors from 328 removing support for DRL and instant snapshots 271 remo
Index moving volumes 458 relaying out volumes online 286 removing DCOs from volumes 272 removing DRL logs 274 removing mirrors 265 removing plexes 265 removing RAID-5 logs 276 removing tags from volumes 280 removing version 0 DCOs from volumes 350 removing volumes 282 replacing tags set on volumes 280 reserving disks 119 resizing volumes 278 resynchronizing volumes from snapshots 345 setting default values 233 setting tags on volumes 250, 280, 281 snapabort 297 snapback 298 snapshot 298 snapstart 297 speci
540 Index importing cloned disks 170 importing disk groups 167 importing shared disk groups 416 joining disk groups 198 listing disks with configuration database copies 170 listing objects affected by move 192 listing shared disk groups 414 listing spare disks 376 moving disk groups between systems 177 moving disks between disk groups 176 moving objects between disk groups 195 obtaining copy size of configuration database 158 placing a configuration database on cloned disks 170 reattaching a site 432 recov
Index disabling controllers in DMP 130 disabling I/O in DMP 147 discovering disk access names 94 displaying APM information 156 displaying DMP database information 131 displaying DMP node for a path 134 displaying DMP node for an enclosure 134 displaying I/O error recovery settings 152 displaying I/O policy 141 displaying I/O throttling settings 152 displaying information about controllers 135 displaying information about enclosures 136 displaying partition size 141 displaying paths controlled by DMP node
542 Index restarting moved volumes 196, 197, 199 restarting volumes 263 vxrelayout resuming online relayout 291 reversing direction of online relayout 292 viewing status of online relayout 291 vxrelocd hot-relocation daemon 370 modifying behavior of 385 notifying users other than root 385 operation of 371 preventing from running 385 reducing performance impact of recovery 385 vxres_lvmroot used to create LVM root disks 105 vxresize growing volumes and file systems 277 limitations 277 shrinking volumes and
Index reattaching linked third-mirror snapshots 331 refreshing instant snapshots 329 removing a snapshot mirror from a volume 328, 329 removing support for DRL and instant snapshots 271 restore 300 restoring volumes 332 splitting snapshot hierarchies 333 vxsplitlines diagnosing serial split brain condition 186 vxstat determining which disks have failed 373 obtaining disk performance statistics 458 obtaining volume performance statistics 456 usage with clusters 422 zeroing counters 457 vxtask aborting tasks
544 Index Z zero setting volume contents to 253