Veritas Storage Foundation 5.1 SP1 Advanced Features Administrator's Guide HP-UX 11i v3 HP Part Number: 5900-1503 Published: April 2011 Edition: 1.
© Copyright 2011 Hewlett-Packard Development Company, L.P. Legal Notices Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents

Technical Support

Section 1   Storage Foundation advanced features
    Chapter 1    Introducing Veritas Storage Foundation™ Advanced Features

Section 2   Improving performance with database accelerators
    Chapter 2    Overview of database accelerators
    Chapter 3    Improving DB2 performance with Veritas Quick I/O
    Chapter 4    Improving Sybase performance with Veritas Quick I/O
    Chapter 5    Improving DB2 database performance with Veritas Cached Quick I/O
    Chapter 6    Improving Sybase database performance with Veritas Cached Quick I/O
    Chapter 7    Improving database performance with Veritas Concurrent I/O

Section 3   Storage Foundation Thin Storage optimization
    Chapter 8    About SF Thin Storage optimization solutions
    Chapter 9    Migrating data from thick storage to thin storage
    Chapter 10   Using SF Thin Reclamation

Section 4   Making point-in-time copies
    Chapter 11   Understanding point-in-time copy methods
    Chapter 12   Setting up volumes for instant snapshots
    Chapter 13   Online database backup
    Chapter 14   Off-host cluster file system backup
    Chapter 15   Decision support
    Chapter 16   Database recovery
    Chapter 17   Administering volume snapshots
    Chapter 18   Administering snapshot file systems
    Chapter 19   Administering Storage Checkpoints
    Chapter 21   Backing up and restoring with NetBackup in an SFHA environment
    Chapter 24   Multi-volume file systems
    Remaining chapters: file placement policies; LVM to VxVM conversion and command differences; cross-platform file system migration

Glossary
Index
Section 1 Storage Foundation advanced features ■ Chapter 1. Introducing Veritas Storage Foundation™ Advanced Features
Chapter 1 Introducing Veritas Storage Foundation™ Advanced Features This chapter includes the following topics: ■ About Storage Foundation management features ■ Storage management features for Storage Foundation products About Storage Foundation management features This guide documents the advanced features of Veritas Storage Foundation and High Availability products. It is a supplemental guide to be used in conjunction with Veritas Storage Foundation product guides.
Table 1-1   Veritas Storage Foundation management features

Feature: Enhanced I/O methods enable you to improve database performance:
■ Veritas Extension for Oracle Disk Manager (ODM)

Uses: To improve Oracle performance and manage system bandwidth through an improved Application Programming Interface (API) that contains advanced kernel support for file I/O, use Veritas Oracle Disk Manager (ODM).
Table 1-1   Veritas Storage Foundation management features (continued)

Feature: Point-in-time copy features enable you to capture an instantaneous image of actively changing data:
■ FlashSnap
■ Database FlashSnap (see the Veritas Storage Foundation: Storage and Availability Management for Oracle Databases)
■ Storage Checkpoints
■ Database Storage Checkpoints (see the Veritas Storage Foundation: Storage and Availability Management for Oracle Databases)
Table 1-1   Veritas Storage Foundation management features (continued)

Feature: Data sharing options to enable you to migrate data:

Uses: ■ To migrate from HP-UX Logical Volume Manager to Veritas Volume Manager, use the Veritas Volume Manager utilities for offline migration.
Table 1-2   Advanced features in Storage Foundation (continued)

Storage Foundation feature: Veritas Extension for Cached Oracle Disk Manager
Product licenses which enable this feature: Storage Foundation Standard, Storage Foundation Standard HA, Storage Foundation Enterprise, Storage Foundation Enterprise HA, Storage Foundation Cluster File System, Storage Foundation Cluster File System HA

Storage Foundation feature: Quick ...

Storage Foundation feature: Thin Reclamation
Product licenses which enable this feature: Storage Foundation Basic, Storage Foundation Standard, Storage Foundation Standard HA, Storage Foundation Enterprise, Storage Foundation Enterprise HA, Storage Foundation Cluster File System, Storage Foundation Cluster File System HA, Storage F...

Storage Foundation feature: SmartTier
Product licenses which enable this feature: Storage Foundation Enterprise, Storage Foundation Enterprise HA, Storage Foundation Cluster File System, Storage Foundation Cluster File System HA, Storage Foundation for Oracle RAC

Storage Foundation feature: Portable Data Containers
Product licenses which enable this feature: Storage Foundation Basic, Storage Foundation ...
Section 2 Improving performance with database accelerators ■ Chapter 2. Overview of database accelerators ■ Chapter 3. Improving DB2 performance with Veritas Quick I/O ■ Chapter 4. Improving Sybase performance with Veritas Quick I/O ■ Chapter 5. Improving DB2 database performance with Veritas Cached Quick I/O ■ Chapter 6. Improving Sybase database performance with Veritas Cached Quick I/O ■ Chapter 7. Improving database performance with Veritas Concurrent I/O
Chapter 2 Overview of database accelerators This chapter includes the following topics: ■ About Storage Foundation database accelerators ■ About Quick I/O ■ About Oracle Disk Manager ■ About Cached ODM About Storage Foundation database accelerators The major concern in any environment is maintaining respectable performance or meeting performance SLAs. Veritas Storage Foundation improves the overall performance of database environments in a variety of ways.
34 Overview of database accelerators About Quick I/O Storage Foundation database accelerators enable you to manage performance for your database with more precision. ■ To achieve raw device performance for databases run on Veritas File System file systems, use Veritas Quick I/O. ■ To further enhance database performance by leveraging large system memory to selectively buffer the frequently accessed data, use Veritas Cached Quick I/O.
Overview of database accelerators About Oracle Disk Manager ■ Improved performance and processing throughput by having Quick I/O files act as raw devices. ■ Ability to manage Quick I/O files as regular files, which simplifies administrative tasks such as allocating, moving, copying, resizing, and backing up Sybase dataservers. ■ Ability to manage Quick I/O files as regular files, which simplifies administrative tasks such as allocating, moving, copying, resizing, and backing up DB2 containers.
36 Overview of database accelerators About Cached ODM Database administrators can choose the datafile type used with the Oracle product. Historically, choosing between file system files and raw devices was based on manageability and performance. The exception to this is a database intended for use with Oracle Parallel Server, which requires raw devices on most platforms. If performance is not as important as administrative ease, file system files are typically the preferred file type.
Overview of database accelerators About Cached ODM See Veritas Storage Foundation: Storage and Availability Management for Oracle Databases.
Chapter 3 Improving DB2 performance with Veritas Quick I/O This chapter includes the following topics: ■ How to set up Quick I/O ■ Creating database containers as Quick I/O files using qiomkfile for DB2 ■ Preallocating space for Quick I/O files using the setext command ■ Accessing regular VxFS files as Quick I/O files ■ Converting DB2 containers to Quick I/O files ■ About sparse files ■ Displaying Quick I/O status and file attributes ■ Extending a Quick I/O file in a DB2 environment ■ Monitoring tablespace free space with DB2 and extending tablespace containers ■ Recreating Quick I/O files after restoring a database ■ Disabling Quick I/O
40 Improving DB2 performance with Veritas Quick I/O Creating database containers as Quick I/O files using qiomkfile for DB2 If Quick I/O is not available in the kernel, or a Veritas Storage Foundation Standard or Enterprise product license is not installed, a file system mounts without Quick I/O by default, the Quick I/O file name is treated as a regular file, and no error message is displayed.
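As a brief illustration (a sketch only; the device path and mount point are hypothetical), a VxFS file system that holds Quick I/O files is mounted in the usual way, and the qio option can be given explicitly so that the mount fails with an error instead of silently falling back when Quick I/O is unavailable:
# /usr/sbin/mount -F vxfs -o qio /dev/vx/dsk/db01dg/db01vol /db01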
Improving DB2 performance with Veritas Quick I/O Creating database containers as Quick I/O files using qiomkfile for DB2 Usage notes ■ The qiomkfile command creates two files: a regular file with preallocated, contiguous space, and a file that is a symbolic link pointing to the Quick I/O name extension. ■ See the qiomkfile(1M) manual page for more information. -a Creates a symbolic link with an absolute path name for a specified file. Use the -a option when absolute path names are required.
The following example shows how to create a 100MB Quick I/O-capable file named dbfile on the VxFS file system /db01 using a relative path name:
$ /opt/VRTS/bin/qiomkfile -s 100m /db01/dbfile
$ ls -al
-rw-r--r--   1 db2inst1   db2iadm1   104857600   Oct  2 13:42   .dbfile
lrwxrwxrwx   1 db2inst1   db2iadm1          19   Oct  2 13:42   dbfile -> .dbfile::cdev:vxfs:
To create a Quick I/O database file using setext
1 Access the VxFS mount point and create a file:
# cd /mount_point
# touch .filename
2 Use the setext command to preallocate space for the file:
# /opt/VRTS/bin/setext -r size -f noreserve -f chgsize \
.filename
3 Create a symbolic link to allow databases or applications access to the file using its Quick I/O interface:
# ln -s .filename::cdev:vxfs: filename
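A worked version of this procedure, using the same conventions as the qiomkfile example earlier in this chapter (the /db01 file system, the .dbfile name, and the 100M size are illustrative only; check the setext(1M) manual page for the size units your release expects):
# cd /db01
# touch .dbfile
# /opt/VRTS/bin/setext -r 100M -f noreserve -f chgsize .dbfile
# ln -s .dbfile::cdev:vxfs: /db01/dbfile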
44 Improving DB2 performance with Veritas Quick I/O Converting DB2 containers to Quick I/O files Usage notes ■ If possible, use relative path names instead of absolute path names when creating symbolic links to access regular files as Quick I/O files. Using relative path names prevents copies of the symbolic link from referring to the original file when the directory is copied. This is important if you are backing up or moving database files with a command that preserves the symbolic link.
Note: It is recommended that you create a Storage Checkpoint before converting to or from Quick I/O. See “Creating a Storage Checkpoint” on page 275. Before converting database files to Quick I/O files, the following conditions must be met: Prerequisites Log in as the DB2 instance owner (typically, the user ID db2inst1) to run the qio_getdbfiles and qio_convertdbfiles commands.
46 Improving DB2 performance with Veritas Quick I/O Converting DB2 containers to Quick I/O files -T Lets you specify the type of database as db2. Specify this option only in environments where the type of database is ambiguous (for example, when multiple types of database environment variables, such as $ORACLE_SID, SYBASE, DSQUERY, and $DB2INSTANCE, are present on a server).
To extract a list of DB2 containers to convert
◆ With the database instance up and running, run the qio_getdbfiles command from a directory for which you have write permission:
$ cd /extract_directory
$ export DB2DATABASE=database_name
$ /opt/VRTSdb2ed/bin/qio_getdbfiles
The qio_getdbfiles command extracts the list of file names from the database system tables and stores the file names and their sizes in bytes in a file called mkqio.dat in that directory.
To convert the DB2 database files to Quick I/O files
1 Make the database inactive by either shutting down the instance or disabling user connections.
Warning: Running the qio_convertdbfiles command while the database is up and running can cause severe problems with your database, including loss of data and corruption.
2 Run the qio_convertdbfiles command from the directory containing the mkqio.dat file.
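A minimal sketch of step 2, following the same pattern as the undo procedure shown below (the extract directory and database name are placeholders):
$ cd /extract_directory
$ export DB2DATABASE=database_name
$ /opt/VRTSdb2ed/bin/qio_convertdbfiles
Each regular VxFS file listed in mkqio.dat is converted to a Quick I/O file.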
To undo the previous run of qio_convertdbfiles and change Quick I/O files back to regular VxFS files
1 If the database is active, make it inactive by either shutting down the instance or disabling user connections.
2 Run the following command from the directory containing the mkqio.dat file:
$ cd /extract_directory
$ export DB2DATABASE=database_name
$ /opt/VRTSdb2ed/bin/qio_convertdbfiles -u
The Quick I/O files listed in the mkqio.dat file are changed back to regular VxFS files.
50 Improving DB2 performance with Veritas Quick I/O Displaying Quick I/O status and file attributes ■ 5-9KB - hole ■ 9-10KB - data block So a 1TB file system can potentially store up to 2TB worth of files if there are sufficient blocks containing zeroes. Quick I/O files cannot be sparse and will always have all blocks specified allocated to them.
To show a Quick I/O file resolved to a raw device
◆ Use the ls command with the file names as follows:
$ ls -alL filename .filename
The following example shows how the Quick I/O file name dbfile is resolved to that of a raw device:
$ ls -alL d* .d*
crw-r--r--   1 db2inst1   db2iadm1       45, 1   Oct  2 13:42   dbfile
-rw-r--r--   1 db2inst1   db2iadm1   104890368   Oct  2 13:42   .dbfile
To extend a Quick I/O file
1 If required, ensure the underlying storage device is large enough to contain a larger VxFS file system (see the vxassist(1M) manual page for more information), and resize the VxFS file system using the fsadm command.
2 Extend the Quick I/O file using the qiomkfile command:
$ /opt/VRTS/bin/qiomkfile -e extend_amount /mount_point/filename
or
$ /opt/VRTS/bin/qiomkfile -r newsize /mount_point/filename
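As a quick illustration (a sketch only; the file name and sizes are hypothetical), the following either grows the 100MB Quick I/O file created earlier by a further 100MB, or sets it to an absolute size of 300MB:
$ /opt/VRTS/bin/qiomkfile -e 100m /db01/dbfile
$ /opt/VRTS/bin/qiomkfile -r 300m /db01/dbfile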
Improving DB2 performance with Veritas Quick I/O Monitoring tablespace free space with DB2 and extending tablespace containers Usage notes ■ Monitor the free space available in the Quick I/O file, and grow the file as necessary with the qiomkfile command. A Database Administrator can grow underlying VxFS file systems online (provided the underlying disk or volume can be extended) using the fsadm command. See the fsadm (1M) manual page for more information.
54 Improving DB2 performance with Veritas Quick I/O Monitoring tablespace free space with DB2 and extending tablespace containers To extend a DB2 tablespace by a fixed amount ◆ Use the following DB2 commands: $ db2 connect to database $ db2 alter tablespace tablespace-name extend (ALL amount) $ db2 terminate This example shows how to monitor the free space on the tablespaces in database PROD: $ db2 connect to PROD $ db2 list tablespaces show detail $ db2 terminate This example shows how to extend the t
Improving DB2 performance with Veritas Quick I/O Recreating Quick I/O files after restoring a database Recreating Quick I/O files after restoring a database If you need to restore your database and were using Quick I/O files, you can use the qio_recreate command to automatically recreate the Quick I/O files after you have performed a full database recovery. The qio_recreate command uses the mkqio.dat file, which contains a list of the Quick I/O files used by the database and the file sizes.
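A minimal sketch of running the recovery helper after a full database restore (the command path mirrors the other DB2 helper commands in this chapter and should be verified on your installation):
$ export DB2DATABASE=database_name
$ /opt/VRTSdb2ed/bin/qio_recreate
Any Quick I/O files listed in mkqio.dat that are missing are recreated at their recorded sizes.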
If a Quick I/O file is missing, and the regular VxFS file that it is symbolically linked to is not the original VxFS file, then the Quick I/O file is not recreated and a warning message is displayed.
If a Quick I/O file is smaller than the size listed in the mkqio.dat file, then the Quick I/O file is not recreated and a warning message is displayed.
Chapter 4 Improving Sybase performance with Veritas Quick I/O This chapter includes the following topics: ■ How to set up Quick I/O ■ Preallocating space for Quick I/O files using the setext command ■ Creating database files as Quick I/O files with qiomkfile ■ Accessing regular VxFS files as Quick I/O files ■ Converting Sybase files to Quick I/O files ■ Displaying Quick I/O status and file attributes ■ Extending a Quick I/O file in a Sybase environment ■ Recreating Quick I/O files after res
58 Improving Sybase performance with Veritas Quick I/O Preallocating space for Quick I/O files using the setext command message displays. If, however, you specify the -o qio option, the mount command prints an error and terminates without mounting the file system.
To create a Quick I/O database file using setext
1 Access the VxFS mount point and create a file:
# cd /mount_point
# touch .filename
2 Use the setext command to preallocate space for the file:
# /opt/VRTS/bin/setext -r size -f noreserve -f chgsize \
.filename
3 Create a symbolic link to allow databases or applications access to the file using its Quick I/O interface:
# ln -s .filename::cdev:vxfs: filename
Prerequisites ■ You can create Quick I/O files only on VxFS file systems. If you are creating device files on an existing file system, run fsadm (or a similar utility) to report and eliminate fragmentation. ■ You must have read/write permissions on the directory in which you intend to create Sybase Quick I/O files.
To create a database file as a Quick I/O file using qiomkfile
1 Create a database file using the qiomkfile command:
$ /opt/VRTS/bin/qiomkfile -s file_size /mount_point/filename
2 Add a device to the Sybase dataserver device pool for the Quick I/O file using the disk init command:
$ isql -Usa -Psa_password -Sdataserver_name
> disk init
> name="device_name",
> physname="/mount_point/filename",
> ...
62 Improving Sybase performance with Veritas Quick I/O Accessing regular VxFS files as Quick I/O files In the example, qiomkfile creates a regular file named /db01/.dbfile, which has the real space allocated. Then, qiomkfile creates a symbolic link named /db01/dbfile. The symbolic link is a relative link to the Quick I/O interface for /db01/.dbfile, that is, to the .dbfile::cdev:vxfs: file. The symbolic link allows .dbfile to be accessible to any database or application using its Quick I/O interface.
Improving Sybase performance with Veritas Quick I/O Accessing regular VxFS files as Quick I/O files of using symbolic links is that you must manage two sets of files (for instance, during database backup and restore). Note: Sybase requires special prerequisites. See “Converting Sybase files to Quick I/O files” on page 64. Usage notes ■ If possible, use relative path names instead of absolute path names when creating symbolic links to access regular files as Quick I/O files.
64 Improving Sybase performance with Veritas Quick I/O Converting Sybase files to Quick I/O files Converting Sybase files to Quick I/O files Special commands are provided to assist you in identifying and converting an existing database to use Quick I/O. Use the qio_getdbfiles and qio_convertdbfiles commands to first extract and then convert Sybase dataserver files to Quick I/O files.
Improving Sybase performance with Veritas Quick I/O Converting Sybase files to Quick I/O files -m Enables you to specify a master device path. A master device does not have a corresponding physical path name in Sybase's database catalog, but rather has a d_master string. When you start an ASE server, you must pass in the full path name of master device.
66 Improving Sybase performance with Veritas Quick I/O Converting Sybase files to Quick I/O files ■ Create a file named sa_password_dataserver_name (where dataserver_name is server defined in the $DSQUERY environment variable) in the /opt/VRTSsybed/.private directory which contains the sa password. The .private directory must be owned by the Sybase database administrator user (typically sybase) and carry a file permission mode of 700.
To determine if the Sybase server is up and running
1 Access the install directory:
$ cd $SYBASE/ASE-12_5/install
2 Use the showserver and grep commands to determine if the Sybase server is running:
$ ./showserver | grep servername
If the output of these commands displays the server name, the server is running. If no output is displayed, the server is not running.
To convert the Sybase files to Quick I/O files
1 Shut down the Sybase dataserver.
Caution: Running the qio_convertdbfiles command while the database is up and running can cause severe problems with your database, including loss of data and corruption.
2 Supply the sa password when prompted, or create a file named sa_password_dataserver_name in the /opt/VRTSsybed/.private directory.
Improving Sybase performance with Veritas Quick I/O Converting Sybase files to Quick I/O files 69 Examples ■ To prepare for and convert Sybase ASE 12.5 dataserver files to Quick I/O files: $ SYBASE=/sybase; export SYBASE $ DSQUERY=L001; export DSQUERY $ PATH=$SYBASE/ASE-12_5/bin:$SYBASE/OCS-12_5/bin:$PATH; \ export PATH $ LD_LIBRARY_PATH=$SYBASE/OCS-12_5/lib; export LD_LIBRARY_PATH $ NLSPATH=/usr/lib/locale/%L/%N:$NLSPATH; export NLSPATH $ cd /sybase/ASE-12_5/install $ .
70 Improving Sybase performance with Veritas Quick I/O Converting Sybase files to Quick I/O files ■ To convert the database files listed in the mkqio.dat file to Quick I/O files, shut down the database and enter: $ /opt/VRTSsybed/bin/qio_convertdbfiles Check whether Sybase server L001 is up running... Attempt to Connect to Server L001...
Improving Sybase performance with Veritas Quick I/O Displaying Quick I/O status and file attributes Note: If the server is up and running, you receive an error message stating that you need to shut down before you can run the qio_convertdbfiles command. Displaying Quick I/O status and file attributes You can obtain and display information about Quick I/O status and file attributes using various options of the ls command: -al Lists all files on a file system, including Quick I/O files and their links.
To show a Quick I/O file resolved to a raw device
◆ Use the ls command with the file names as follows:
$ ls -alL filename .filename
The following example shows how the Quick I/O file name dbfile is resolved to that of a raw device:
$ ls -alL d* .d*
crw-r--r--   1 sybase   sybase       45, 1   Oct  2 13:42   dbfile
-rw-r--r--   1 sybase   sybase   104890368   Oct  2 13:42   .dbfile
-r Increases the file to a specified size to allow Sybase resizing.
To extend a Quick I/O file
1 If required, verify the underlying storage device is large enough to contain a larger VxFS file system (see the vxassist(1M) manual page for more information), and resize the VxFS file system using the fsadm command.
74 Improving Sybase performance with Veritas Quick I/O Recreating Quick I/O files after restoring a database Recreating Quick I/O files after restoring a database If you need to restore your database and you were using Quick I/O files, you can use the qio_recreate command to automatically recreate the Quick I/O files after you have performed a full database recovery. The qio_recreate command uses the mkqio.dat file, which contains a list of the Quick I/O files used by the database and the file sizes.
If a Quick I/O file is missing, and the regular VxFS file that it is symbolically linked to is not the original VxFS file, then the Quick I/O file is not recreated and a warning message is displayed.
If a Quick I/O file is smaller than the size listed in the mkqio.dat file, then the Quick I/O file is not recreated and a warning message is displayed.
Chapter 5 Improving DB2 database performance with Veritas Cached Quick I/O This chapter includes the following topics: ■ Tasks for setting up Cached Quick I/O ■ Enabling Cached Quick I/O on a file system ■ Determining candidates for Cached Quick I/O ■ Enabling and disabling Cached Quick I/O for individual files Tasks for setting up Cached Quick I/O To set up and use Cached Quick I/O, you should do the following in the order in which they are listed: ■ Enable Cached Quick I/O on the underlying fil
78 Improving DB2 database performance with Veritas Cached Quick I/O Enabling Cached Quick I/O on a file system Enabling Cached Quick I/O on a file system Cached Quick I/O depends on Veritas Quick I/O running as an underlying system enhancement in order to function correctly. Follow the procedures listed here to ensure that you have the correct setup to use Cached Quick I/O successfully.
To enable the qio_cache_enable flag for a file system
◆ Use the vxtunefs command as follows:
# /sbin/fs/vxfs5.0/vxtunefs -s -o qio_cache_enable=1 /mount_point
For example:
# /sbin/fs/vxfs5.0/vxtunefs -s -o qio_cache_enable=1 /db02
where /db02 is a VxFS file system containing the Quick I/O files, and setting the qio_cache_enable flag to “1” enables Cached Quick I/O.
80 Improving DB2 database performance with Veritas Cached Quick I/O Enabling Cached Quick I/O on a file system ■ volname is the name of the volume For example: /dev/vx/dsk/PRODdg/db01 qio_cache_enable=1 /dev/vx/dsk/PRODdg/db02 qio_cache_enable=1 where /dev/vx/dsk/PRODdg/db01 is the block device on which the file system resides. The tunefstab (4) manual pages contain information on how to add tuning parameters. See the tunefstab (4) manual page.
Improving DB2 database performance with Veritas Cached Quick I/O Determining candidates for Cached Quick I/O For example: # /opt/VRTS/bin/vxtunefs /db01 The vxtunefs command displays output similar to the following: Filesystem i/o parameters for /db01 read_pref_io = 2097152 read_nstream = 1 read_unit_io = 2097152 write_pref_io = 2097152 write_nstream = 1 write_unit_io = 2097152 pref_strength = 10 buf_breakup_size = 2097152 discovered_direct_iosz = 262144 max_direct_iosz = 1048576 default_indir_size = 8192
82 Improving DB2 database performance with Veritas Cached Quick I/O Determining candidates for Cached Quick I/O following steps more than once to determine the best possible candidates for Cached Quick I/O. Before determining candidate files for Quick I/O, make sure the following conditions have been met: Prerequisites ■ You must enable Cached Quick I/O for the file systems. See “Enabling Cached Quick I/O on a file system” on page 78.
Improving DB2 database performance with Veritas Cached Quick I/O Determining candidates for Cached Quick I/O About I/O statistics The output of the qiostat command is the primary source of information to use in deciding whether to enable or disable Cached Quick I/O on specific files. Statistics are printed in two lines per object.
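A sketch of collecting these statistics (the -l option and the container names are assumptions to be checked against the qiostat manual page on your system; the containers shown are the ones discussed on the next page):
$ /opt/VRTS/bin/qiostat -l /db01/tbs2_cont001 /db01/tbs2_cont002
Let the statistics accumulate while the database runs a representative workload, then compare the read and write activity per file.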
84 Improving DB2 database performance with Veritas Cached Quick I/O Determining candidates for Cached Quick I/O gains with Cached Quick I/O when using it for files that have higher read than write activity. Based on these two factors, /db01/tbs2_cont001 and /db01/tbs2_cont002 are prime candidates for Cached Quick I/O. See “Enabling and disabling Cached Quick I/O for individual files” on page 85.
Improving DB2 database performance with Veritas Cached Quick I/O Enabling and disabling Cached Quick I/O for individual files Enabling and disabling Cached Quick I/O for individual files After using qiostat or other analysis tools to determine the appropriate files for Cached Quick I/O, you need to disable Cached Quick I/O for those individual files that do not benefit from caching using the qioadmin command.
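For instance, to turn caching off for one container that shows mostly write activity (a sketch; the file name is illustrative, and the -S syntax is the same one shown for enabling files later in this guide):
$ /opt/VRTS/bin/qioadmin -S tbs1_cont001=OFF /db01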
Making individual file settings for Cached Quick I/O persistent
You can make the enable or disable settings for individual files persistent across reboots and mounts by adding cache advisory entries in the /etc/vx/qioadmin file. Cache advisories set using the qioadmin command are stored as extended attributes of the file in the inode.
Improving DB2 database performance with Veritas Cached Quick I/O Enabling and disabling Cached Quick I/O for individual files Note: To verify caching, always check the setting of the flag qio_cache_enable using vxtunefs, along with the individual cache advisories for each file.
Chapter 6 Improving Sybase database performance with Veritas Cached Quick I/O This chapter includes the following topics: ■ Tasks for setting up Cached Quick I/O ■ Enabling Cached Quick I/O on a file system ■ Determining candidates for Cached Quick I/O ■ Enabling and disabling Cached Quick I/O for individual files Tasks for setting up Cached Quick I/O To set up and use Cached Quick I/O, you should do the following in the order in which they are listed: ■ Enable Cached Quick I/O on the underlying
90 Improving Sybase database performance with Veritas Cached Quick I/O Enabling Cached Quick I/O on a file system Enabling Cached Quick I/O on a file system Cached Quick I/O depends on Veritas Quick I/O running as an underlying system enhancement in order to function correctly. Follow the procedures listed here to ensure that you have the correct setup to use Cached Quick I/O successfully.
To enable the qio_cache_enable flag for a file system
◆ Use the vxtunefs command as follows:
# /sbin/fs/vxfs5.0/vxtunefs -s -o qio_cache_enable=1 /mount_point
For example:
# /sbin/fs/vxfs5.0/vxtunefs -s -o qio_cache_enable=1 /db02
where /db02 is a VxFS file system containing the Quick I/O files, and setting the qio_cache_enable flag to “1” enables Cached Quick I/O.
92 Improving Sybase database performance with Veritas Cached Quick I/O Enabling Cached Quick I/O on a file system ■ volname is the name of the volume For example: /dev/vx/dsk/PRODdg/db01 qio_cache_enable=1 /dev/vx/dsk/PRODdg/db02 qio_cache_enable=1 where /dev/vx/dsk/PRODdg/db01 is the block device on which the file system resides. The tunefstab (4) manual pages contain information on how to add tuning parameters. See the tunefstab (4) manual page.
To obtain information on all vxtunefs system parameters
◆ Use the vxtunefs command without grep:
# /opt/VRTS/bin/vxtunefs /mount_point
For example:
# /opt/VRTS/bin/vxtunefs /db01
The vxtunefs command displays output similar to the following:
Filesystem i/o parameters for /db01
read_pref_io = 2097152
read_nstream = 1
read_unit_io = 2097152
write_pref_io = 2097152
94 Improving Sybase database performance with Veritas Cached Quick I/O Determining candidates for Cached Quick I/O Determining candidates for Cached Quick I/O Determining which files can benefit from Cached Quick I/O is an iterative process that varies with each application. For this reason, you may need to complete the following steps more than once to determine the best possible candidates for Cached Quick I/O.
Improving Sybase database performance with Veritas Cached Quick I/O Determining candidates for Cached Quick I/O About I/O statistics The output of the qiostat command is the primary source of information to use in deciding whether to enable or disable Cached Quick I/O on specific files. Statistics are printed in two lines per object.
96 Improving Sybase database performance with Veritas Cached Quick I/O Determining candidates for Cached Quick I/O gains with Cached Quick I/O when using it for files that have higher read than write activity. Based on these two factors, /db01/user.dbf is a prime candidate for Cached Quick I/O. See “Enabling and disabling Cached Quick I/O for individual files” on page 97.
Improving Sybase database performance with Veritas Cached Quick I/O Enabling and disabling Cached Quick I/O for individual files Enabling and disabling Cached Quick I/O for individual files After using qiostat or other analysis tools to determine the appropriate files for Cached Quick I/O, you need to disable Cached Quick I/O for those individual files that do not benefit from caching using the qioadmin command.
To enable Cached Quick I/O for an individual file
◆ Use the qioadmin command to set the cache advisory to ON as follows:
$ /opt/VRTS/bin/qioadmin -S filename=ON /mount_point
For example, running qiostat shows that the cache hit ratio for the file /db01/master.dbf reaches a level that would benefit from caching. To enable Cached Quick I/O for the file /db01/master.dbf:
$ /opt/VRTS/bin/qioadmin -S master.dbf=ON /db01
To enable or disable individual file settings for Cached Quick I/O automatically after a reboot or mount
◆ Add cache advisory entries in the /etc/vx/qioadmin file as follows:
device=/dev/vx/dsk/diskgroup/volname
filename,OFF
filename,OFF
filename,OFF
filename,ON
For example, to make the Cached Quick I/O settings for individual files in the /db01 file system persistent, edit the /etc/vx/qioadmin file accordingly.
To display the current cache advisory settings for a file
◆ Use the qioadmin command with the -P option as follows:
$ /opt/VRTS/bin/qioadmin -P filename /mount_point
For example, to display the current cache advisory setting for the file sysprocs.dbf in the /db01 file system:
$ /opt/VRTS/bin/qioadmin -P sysprocs.dbf /db01
sysprocs.dbf,OFF
Chapter 7 Improving database performance with Veritas Concurrent I/O This chapter includes the following topics: ■ About Concurrent I/O ■ Enabling and disabling Concurrent I/O About Concurrent I/O Veritas Concurrent I/O improves the performance of regular files on a VxFS file system without the need for extending namespaces and presenting the files as devices. This simplifies administrative tasks and allows databases, which do not have a sequential read/write requirement, to access files concurrently.
102 Improving database performance with Veritas Concurrent I/O Enabling and disabling Concurrent I/O The Veritas Concurrent I/O feature removes these semantics from the read and write operations for databases and other applications that do not require serialization.
For example, for DB2, to mount a file system named /datavol on a mount point named /db2data:
To enable Concurrent I/O on a new SMS container using the namefs -o cio option
◆ Using the mount command, mount the directory in which you want to put data containers of the SMS tablespaces using the Concurrent I/O feature.
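The mount command for the /datavol example above is not shown here; a minimal sketch, assuming the file system lives on a VxVM volume named datavol in a disk group named db2dg (both hypothetical), is:
# /usr/sbin/mount -F vxfs -o cio /dev/vx/dsk/db2dg/datavol /db2data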
104 Improving database performance with Veritas Concurrent I/O Enabling and disabling Concurrent I/O To enable Concurrent I/O on a DB2 tablespace when creating the tablespace 1 Use the db2 -v "create regular tablespace..." command with the no file system caching option. 2 Set all other parameters according to your system requirements.
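A hedged sketch of such a statement (the tablespace name, container path, and size are hypothetical; confirm the clause syntax in the DB2 documentation for your release):
$ db2 -v "create regular tablespace tbs1 managed by database using (file '/db2data/tbs1_cont001' 500 M) no file system caching"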
Improving database performance with Veritas Concurrent I/O Enabling and disabling Concurrent I/O Enabling Concurrent I/O for Sybase Because you do not need to extend name spaces and present the files as devices, you can enable Concurrent I/O on regular files. Before enabling Concurrent I/O, review the following: Prerequisites To use the Concurrent I/O feature, the file system must be a VxFS file system. ■ Make sure the mount point on which you plan to mount the file system exists.
To disable Concurrent I/O on a file system using the mount command
1 Shut down the DB2 instance.
2 Unmount the file system using the umount command.
3 Mount the file system again using the mount command without the -o cio option.
Section 3 Storage Foundation Thin Storage optimization ■ Chapter 8. About SF Thin Storage optimization solutions ■ Chapter 9. Migrating data from thick storage to thin storage ■ Chapter 10. Using SF Thin Reclamation
Chapter 8 About SF Thin Storage optimization solutions This chapter includes the following topics: ■ About SF solutions for thin optimization ■ About Thin Storage ■ About Thin Provisioning ■ About SF Thin Reclamation feature ■ About SmartMove About SF solutions for thin optimization Array-based options like Thin Storage and Thin Provisioning help storage administrators to meet the challenges in managing storage, such as provisioning storage, migrating data for storage utilization, and maintainin
110 About SF Thin Storage optimization solutions About Thin Storage About Thin Storage Thin Storage is an array vendor solution for allocating storage to applications only when the storage is truly needed, from a pool of free storage. Thin Storage attempts to solve the problem of under utilization of available array capacity. Thin Storage Reclamation-capable arrays and LUNs allow the administrators to release once-used storage to the pool of free storage.
Using SmartMove with Thin Provisioning
This section describes how to use SmartMove with Thin Provisioning to improve synchronization performance and use thin storage efficiently.
To use SmartMove with Thin Provisioning
1 Mount the volume as the VxFS file system type. For example:
# mount -F vxfs /dev/vx/dsk/oradg/oravol1 /oravol1
2 Run the following command:
# sync
3 Mirror the volume.
Chapter 9 Migrating data from thick storage to thin storage This chapter includes the following topics: ■ About using SmartMove to migrate to Thin Storage ■ Setting up SmartMove ■ Migrating to thin provisioning About using SmartMove to migrate to Thin Storage If you have existing data on a thick LUN, the SmartMove feature enables you to migrate that data to a thin LUN, and reclaim the unused space.
Displaying the SmartMove configuration
This section describes how to display the SmartMove configuration.
To display the SmartMove value
◆ To display the current and default SmartMove values, type the following command:
# vxdefault list
KEYWORD            CURRENT-VALUE   DEFAULT-VALUE
usefssmartmove     all             all
...
Changing the SmartMove configuration
The SmartMove setting has three possible values that control where SmartMove is applied.
Migrating data from thick storage to thin storage Migrating to thin provisioning To migrate to thin provisioning 1 Check if the SmartMove Feature is enabled. See “Displaying the SmartMove configuration” on page 114. See “Changing the SmartMove configuration” on page 114. 2 Add the new, thin LUNs to the existing disk group. Enter the following commands: # vxdisksetup -i da_name # vxdg -g datadg adddisk da_name where da_name is the disk access name in VxVM.
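The next step attaches the thin LUN as a new mirror of the volume; a sketch consistent with the vxprint output that follows (the disk and volume names are taken from that output):
# vxassist -g datadg mirror datavol THINARRAY0_02
SmartMove copies only the blocks that the VxFS file system is actually using, so the new plex on the thin LUN consumes physical storage only for in-use data.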
# vxprint -g datadg
TY NAME              ASSOC            KSTATE    LENGTH    PLOFFS  STATE     TUTIL0  PUTIL0
dg datadg            datadg           -         -         -       -         -       -
dm THINARRAY0_02     THINARRAY0_02    -         83886080  -       -         -       -
dm STDARRAY1_01      STDARRAY1_01     -         41943040  -       -OHOTUSE  -       -
v  datavol           fsgen            ENABLED   41943040  -       ACTIVE    -       -
pl datavol-01        datavol          ENABLED   41943040  -       ACTIVE    -       -
sd STDARRAY1_01-01   datavol-01       ENABLED   41943040  0       -         -       -
pl datavol-02        datavol          ENABLED   41943040  -       ACTIVE    -       -
sd THINARRAY0_02-01  datavol-02       ENABLED   41943040  0       -         -       -
Chapter 10 Using SF Thin Reclamation This chapter includes the following topics: ■ Reclamation of storage on thin reclamation arrays ■ Monitoring Thin Reclamation using the vxtask command Reclamation of storage on thin reclamation arrays Storage Foundation enables reclamation of storage on thin reclamation arrays. See “How reclamation on a deleted volume works” on page 118. The thin reclamation feature is supported only for LUNs that have the thinrclm attribute.
To identify LUNs
◆ To identify LUNs that are thin or thinrclm, type the following command:
# vxdisk -o thin list
DEVICE             SIZE(mb)  PHYS_ALLOC(mb)  GROUP    TYPE
hitachi_usp0_065a  10000     84              -        thinrclm
hitachi_usp0_065b  10000     110             -        thinrclm
hitachi_usp0_065c  10000     74              -        thinrclm
hitachi_usp0_065d  10000     50              -        thinrclm
...
hitachi_usp0_0660  10000     672             thindg   thinrclm
In the above output, the SIZE column shows the size of the disk.
reclaim_on_delete_wait_period
The storage space that is used by the deleted volume is reclaimed after reclaim_on_delete_wait_period days. The value of the tunable can be anything between -1 and 367. The default is set to 1, which means the volume is deleted the next day. The storage is reclaimed immediately if the value is -1. The storage space is not reclaimed automatically if the value is greater than 366.
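A sketch of changing this tunable (the exact administration command may vary by release; verify against the vxdefault manual page before relying on it):
# vxdefault set reclaim_on_delete_wait_period 30
# vxdefault list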
120 Using SF Thin Reclamation Reclamation of storage on thin reclamation arrays To reclaim space on disk1, use the following command: # vxdisk -o full reclaim disk1 The above command reclaims unused space on disk1 that is outside of the vol1. The reclamation skips the vol1 volume, since the VxFS file system is not mounted, but it scans the rest of the disk for unused space. Example of reclamation for disk groups.
Using SF Thin Reclamation Monitoring Thin Reclamation using the vxtask command Note: Thin Reclamation is a slow process and may take several hours to complete, depending on the file system size. Thin Reclamation is not guaranteed to reclaim 100% of the free space. You can track the progress of the Thin Reclamation process by using the vxtask list command when using the Veritas Volume Manager (VxVM) command vxdisk reclaim. See the vxtask(1M) and vxdisk(1M) manual pages.
122 Using SF Thin Reclamation Monitoring Thin Reclamation using the vxtask command To monitor thin reclamation 1 To initiate thin reclamation, use the following command: # vxdisk reclaim diskgroup For example: # vxdisk reclaim dg100 2 To monitor the reclamation status, run the following command in another session: # vxtask list TASKID PTID TYPE/STATE PCT PROGRESS 171 RECLAIM/R 00.
Section 4 Making point-in-time copies ■ Chapter 11. Understanding point-in-time copy methods ■ Chapter 12. Setting up volumes for instant snapshots ■ Chapter 13. Online database backup ■ Chapter 14. Off-host cluster file system backup ■ Chapter 15. Decision support ■ Chapter 16. Database recovery ■ Chapter 17. Administering volume snapshots ■ Chapter 18. Administering snapshot file systems ■ Chapter 19. Administering Storage Checkpoints ■ Chapter 20.
Chapter 11 Understanding point-in-time copy methods This chapter includes the following topics: ■ About point-in-time copies ■ About point-in-time copy technology ■ Point-in-time copy use cases About point-in-time copies Two trends dominate the evolution of digital data used to conduct and manage business. First, more and more data must be continuously available for 24x7 transaction processing, decision making, intellectual property creation, and so forth.
126 Understanding point-in-time copy methods About point-in-time copies The following types of point-in-time copy solution are considered in this document: ■ Primary host solutions where the copy is processed on the same system as the active data. See “Implementing point-in time copy solutions on a primary host” on page 126. ■ Off-host solutions where the copy is processed on a different system from the active data.
Understanding point-in-time copy methods About point-in-time copies Note: The Disk Group Split/Join functionality is not used. As all processing takes place in the same disk group, synchronization of the contents of the snapshots from the original volumes is not usually required unless you want to prevent disk contention. Snapshot creation and updating are practically instantaneous.
128 Understanding point-in-time copy methods About point-in-time copies backup and decision support are prevented from degrading the performance of the primary host that is performing the main production activity (such as running a database).
Figure 11-4 Example connectivity for off-host solution using redundant-loop access (diagram showing the primary host and the OHP host connected through redundant Fibre Channel hubs or switches to the disk arrays)
This layout uses redundant-loop access to deal with the potential failure of any single component in the path between a system and a disk array.
Note: On some operating systems, controller names may differ from what is shown here.
Figure 11-5 Example implementation of an off-host point-in-time copy solution using a cluster node (diagram showing a cluster node configured as the OHP host, with SCSI or Fibre Channel connectivity to the disks containing the primary volumes used to hold production databases or file systems and to the disks containing the snapshot volumes used to implement off-host processing solutions)
Figure 11-6 shows an alternative arrangement, where the OHP node could be a separate
Understanding point-in-time copy methods About point-in-time copies Note: For off-host processing, the example scenarios in this document assume that a separate OHP host is dedicated to the backup or decision support role. For clusters, it may be simpler, and more efficient, to configure an OHP host that is not a member of the cluster. Figure 11-7 illustrates the steps that are needed to set up the processing solution on the primary host.
Figure 11-7 Implementing off-host processing solutions (diagram showing the primary host or cluster and the OHP host)
1. Prepare the volumes: if required, create an empty volume in the disk group, and use vxsnap prepare to prepare volumes for snapshot creation.
2. Create snapshot volumes: use vxsnap make to create synchronized snapshot volumes. (Use vxsnap print to check the status of synchronization.)
3. ...
Understanding point-in-time copy methods About point-in-time copy technology Disk Group Split/Join is used to split off snapshot volumes into a separate disk group that is imported on the OHP host. Note: As the snapshot volumes are to be moved into another disk group and then imported on another host, their contents must first be synchronized with the parent volumes. On reimporting the snapshot volumes, refreshing their contents from the original volume is speeded by using FastResync.
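As an illustration of the flow (a sketch only; the disk group, snapshot volume, and host names are hypothetical), the snapshot volume is split into its own disk group, deported, and then imported on the OHP host:
# vxdg split proddg snapdg snapvol
# vxdg deport snapdg
On the OHP host:
# vxdg import snapdg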
134 Understanding point-in-time copy methods About point-in-time copy technology See “Veritas FlashSnap Agent for Symmetrix” on page 138. Volume snapshots A snapshot is a virtual image of the content of a set of data at the instant of creation. Physically, a snapshot may be a full (complete bit-for-bit) copy of the data set, or it may contain only those elements of the data set that have been updated since snapshot creation.
Understanding point-in-time copy methods About point-in-time copy technology When snapshot volumes are reattached to their original volumes, persistent FastResync allows the snapshot data to be quickly refreshed and re-used. Persistent FastResync uses disk storage to ensure that FastResync maps survive both system and cluster crashes. If persistent FastResync is enabled on a volume in a private disk group, incremental resynchronization can take place even if the host is rebooted.
136 Understanding point-in-time copy methods About point-in-time copy technology types, depending on the intent logging capabilities of the file system, there may potentially be inconsistencies between in-memory data and the data in the snapshot image. For databases, a suitable mechanism must additionally be used to ensure the integrity of tablespace data when the volume snapshot is taken. The facility to temporarily suspend file system I/O is provided by most modern database software.
Understanding point-in-time copy methods About point-in-time copy technology Note: As space-optimized instant snapshots only record information about changed regions in the original volume, they cannot be moved to a different disk group. They are therefore unsuitable for the off-host processing applications that are described in this document.
For more information about the implementation of Storage Checkpoints, see the Veritas File System Administrator’s Guide.
Veritas FlashSnap Agent for Symmetrix
The EMC TimeFinder product is a business continuance solution that allows you to create and use copies of EMC Symmetrix devices while the standard devices remain online and accessible.
Understanding point-in-time copy methods Point-in-time copy use cases ■ Decision support analysis and reporting—Operations such as decision support analysis and business reporting may not require access to real-time information. You can direct such operations to use a replica database that you have created from snapshots, rather than allow them to compete for access to the primary database. When required, you can quickly resynchronize the database copy with the data in the primary database.
Chapter 12 Setting up volumes for instant snapshots This chapter includes the following topics: ■ About setting up volumes for instant snapshots ■ Additional preparation activities ■ Preparing a volume for instant snapshot operations ■ Creating a volume for use as a full-sized instant snapshot ■ Creating a shared cache object About setting up volumes for instant snapshots This chapter describes how to make volumes ready for instant snapshot creation.
142 Setting up volumes for instant snapshots Additional preparation activities Table 12-1 Creation of snapshot mirrors Point-in-time copy application Create snapshot mirrors for volumes containing... Online database backup VxFS file systems for database datafiles to be backed up. See “About online database backup” on page 155. Off-host cluster file system backup VxFS cluster file systems to be backed up. See “About off-host cluster file system backup” on page 167.
Setting up volumes for instant snapshots Preparing a volume for instant snapshot operations See “About volume snapshots” on page 199. ■ Use a separate empty volume that you have prepared in advance. See “Creating a volume for use as a full-sized instant snapshot” on page 150. When creating space-optimized instant snapshots that share a cache, you must set up the cache before creating the snapshots. See “Creating a shared cache object” on page 151.
144 Setting up volumes for instant snapshots Preparing a volume for instant snapshot operations To add a version 20 DCO object and DCO volume to an existing volume 1 Ensure that the disk group containing the existing volume has been upgraded to at least version 110.
Setting up volumes for instant snapshots Preparing a volume for instant snapshot operations 2 Use the following command to add a version 20 DCO and DCO volume to an existing volume: # vxsnap [-g diskgroup] prepare volume [ndcomirs=number] \ [regionsize=size] [alloc=storage_attribute[,...]] The ndcomirs attribute specifies the number of DCO plexes that are created in the DCO volume. It is recommended that you configure as many DCO plexes as there are data and snapshot plexes in the volume.
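For example, a command of the following form might be used to prepare a volume; the disk group mydg, the volume datavol, and the disks mydg10 and mydg11 are hypothetical names used only for illustration:
# vxsnap -g mydg prepare datavol ndcomirs=2 \
  regionsize=128k alloc=mydg10,mydg11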
146 Setting up volumes for instant snapshots Preparing a volume for instant snapshot operations 3 If you are going to create a snapshot volume by breaking off existing plexes, use the following command to add one or more snapshot mirrors to the volume: # vxsnap [-b] [-g diskgroup] addmir volume [nmirror=N] \ [alloc=storage_attribute[,...]] By default, one snapshot plex is added unless you specify a number using the nmirror attribute. For a backup, you should usually only require one plex.
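For example, assuming the same hypothetical names, a single snapshot mirror could be added on a specific disk as follows:
# vxsnap -g mydg addmir datavol nmirror=1 alloc=mydg12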
Setting up volumes for instant snapshots Preparing a volume for instant snapshot operations To view the details of the DCO object and DCO volume that are associated with a volume, use the vxprint command. The following is example vxprint -vh output for the volume named zoo (the TUTIL0 and PUTIL0 columns are omitted for clarity): TTY ...
Figure 12-1 illustrates some instances in which it is not possible to split a disk group because of the location of the DCO plexes. Relocate the DCO plexes as needed. See “Preparing a volume for instant snapshot operations” on page 143.
Figure 12-1 Examples of disk groups that can and cannot be split
■ The disk group can be split when the DCO plexes are on dedicated disks, and can therefore accompany the disks that contain the volume data.
■ The disk group cannot be split when the DCO plexes cannot accompany their volumes.
150 Setting up volumes for instant snapshots Creating a volume for use as a full-sized instant snapshot Creating a volume for use as a full-sized instant snapshot If you want to create a full-sized instant snapshot for an original volume that does not contain any spare plexes, you can use an empty volume with the required degree of redundancy, and with the same size and same region size as the original volume.
3 Use the vxprint command on the DCO to discover its region size (in blocks):
# RSZ=`vxprint [-g diskgroup] -F%regionsz $DCONAME`
4 Use the vxassist command to create a volume, snapvol, of the required size and redundancy, together with a version 20 DCO volume with the correct region size:
# vxassist [-g diskgroup] make snapvol $LEN \
  [layout=mirror nmirror=number] logtype=dco drl=no \
  dcoversion=20 [ndcomirror=number] regionsz=$RSZ
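Putting the steps together, a worked sketch might look as follows; the disk group mydg and the volumes datavol and snapvol are hypothetical names, and the vxprint query for the DCO name is assumed to match the earlier step of this procedure:
# LEN=`vxprint -g mydg -F%len datavol`
# DCONAME=`vxprint -g mydg -F%dco_name datavol`
# RSZ=`vxprint -g mydg -F%regionsz $DCONAME`
# vxassist -g mydg make snapvol $LEN layout=mirror nmirror=2 \
  logtype=dco drl=no dcoversion=20 ndcomirror=2 regionsz=$RSZ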
152 Setting up volumes for instant snapshots Creating a shared cache object 2 ■ If redundancy is a desired characteristic of the cache volume, it should be mirrored. This increases the space that is required for the cache volume in proportion to the number of mirrors that it has. ■ If the cache volume is mirrored, space is required on at least as many disks as it has mirrors. These disks should not be shared with the disks used for the parent volumes.
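For example, a mirrored cache volume might be created with a command of this form; the names mydg, cachevol, mydg16 and mydg17 are hypothetical, and the init=active attribute is assumed to be appropriate here as it is for other prepared volumes:
# vxassist -g mydg make cachevol 1g layout=mirror \
  init=active mydg16 mydg17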
3 Use the vxmake cache command to create a cache object on top of the cache volume that you created in the previous step:
# vxmake [-g diskgroup] cache cache_object \
  cachevolname=volume [regionsize=size] [autogrow=on] \
  [highwatermark=hwmk] [autogrowby=agbvalue] \
  [maxautogrow=maxagbvalue]
If you specify the region size, it must be a power of 2, and be greater than or equal to 16KB (16k).
154 Setting up volumes for instant snapshots Creating a shared cache object blocks), vxcached grows the size of the cache volume by the value of autogrowby (default value is 20% of the size of the cache volume in blocks). ■ When cache usage reaches the high watermark value, and the new required cache size would exceed the value of maxautogrow, vxcached deletes the oldest snapshot in the cache. If there are several snapshots with the same age, the largest of these is deleted.
Chapter 13 Online database backup This chapter includes the following topics: ■ About online database backup ■ Making a backup of an online database on the same host ■ Making an off-host backup of an online database About online database backup Online backup of a database can be implemented by configuring either the primary host or a dedicated separate host to perform the backup operation on snapshot mirrors of the primary host’s database.
156 Online database backup Making a backup of an online database on the same host ■ See “Script to suspend I/O for a DB2 database” on page 516. ■ See “Script to end Oracle database hot backup mode” on page 517. ■ See “Script to release a Sybase ASE database from quiesce mode” on page 517. ■ See “Script to resume I/O for a DB2 database” on page 518. ■ See “Script to perform off-host backup” on page 518.
Figure 13-1 Example system configuration for database backup on the primary host
The figure shows the primary host for the database, with its local disks and controllers attached to disk arrays: the database volumes are created on one set of array disks, the snapshot volumes on another, and the primary host backs up to disk, tape, or other media.
Note: It is assumed that you have already prepared the volumes containing the file systems for the datafiles to be backed up. See “About setting up volumes for instant snapshots” on page 141.
158 Online database backup Making a backup of an online database on the same host # vxsnap -g volumedg addmir volume [nmirror=N] \ [alloc=storage_attributes] # vxsnap -g volumedg make \ source=volume/newvol=snapvol[/nmirror=N]\ [alloc=storage_attributes] By default, one snapshot plex is added unless you specify a number using the nmirror attribute. For a backup, you should usually only require one plex.
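For example, using the dbasedg disk group and dbase_vol volume that appear later in this chapter, and a hypothetical snapshot volume name snap1_dbase_vol, the commands might look like this:
# vxsnap -g dbasedg addmir dbase_vol nmirror=1
# vxsnap -g dbasedg make \
  source=dbase_vol/newvol=snap1_dbase_vol/nmirror=1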
Online database backup Making a backup of an online database on the same host When you are ready to make a backup, proceed to step 2. 2 3 If the volumes to be backed up contain database tables in file systems, suspend updates to the volumes: ■ DB2 provides the write suspend command to temporarily suspend I/O activity for a database. As the DB2 database administrator, use a script such as that shown in the example. See “Script to suspend I/O for a DB2 database” on page 516.
160 Online database backup Making an off-host backup of an online database See “Script to release a Sybase ASE database from quiesce mode” on page 517. 5 Back up the snapshot volume. If you need to remount the file system in the volume to back it up, first run fsck on the volume.
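For example, assuming the hypothetical snapshot volume snap1_dbase_vol in the disk group dbasedg and a hypothetical mount point /backup/db, the check and remount might look like this:
# fsck -F vxfs /dev/vx/rdsk/dbasedg/snap1_dbase_vol
# mount -F vxfs /dev/vx/dsk/dbasedg/snap1_dbase_vol /backup/db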
Online database backup Making an off-host backup of an online database There is no requirement for the OHP host to have access to the disks that contain the primary database volumes.
162 Online database backup Making an off-host backup of an online database If the database is configured on volumes in a cluster-shareable disk group, it is assumed that the primary host for the database is the master node for the cluster. However, if the primary host is not also the master node, most Volume Manager operations on shared disk groups are best performed on the master node. The procedure in this section is designed to minimize copy-on-write operations that can impact system performance.
Online database backup Making an off-host backup of an online database 3 Use the following command to make a full-sized snapshot, snapvol, of the tablespace volume by breaking off the plexes that you added in step 1 from the original volume: # vxsnap -g volumedg make \ source=volume/newvol=snapvol/nmirror=N \ [alloc=storage_attributes] The nmirror attribute specifies the number of mirrors, N, in the snapshot volume.
164 Online database backup Making an off-host backup of an online database 6 On the primary host, deport the snapshot volume’s disk group using the following command: # vxdg deport snapvoldg 7 On the OHP host where the backup is to be performed, use the following command to import the snapshot volume’s disk group: # vxdg import snapvoldg 8 VxVM will recover the volumes automatically after the disk group import unless it is set to not recover automatically.
Online database backup Making an off-host backup of an online database 12 On the primary host, use the following command to rejoin the snapshot volume’s disk group with the original volume’s disk group: # vxdg join snapvoldg volumedg 13 VxVM will recover the volumes automatically after the join unless it is set not to recover automatically. Check if the snapshot volumes are initially disabled and not recovered following the join.
166 Online database backup Making an off-host backup of an online database dbase_vol from its snapshot volume snap2_dbase_vol without removing the snapshot volume: # vxsnap -g dbasedg restore dbase_vol \ source=snap2_dbase_vol destroy=no Note: You must shut down the database and unmount the file system that is configured on the original volume before attempting to resynchronize its contents from a snapshot.
Chapter 14 Off-host cluster file system backup This chapter includes the following topics: ■ About off-host cluster file system backup ■ Mounting a file system for shared access ■ Using off-host processing to back up cluster file systems About off-host cluster file system backup Veritas Cluster File System (CFS) allows cluster nodes to share access to the same file system. CFS is especially useful for sharing read-intensive data between cluster nodes.
Figure 14-1 System configuration for off-host file system backup scenarios
The figure shows the cluster nodes and the OHP host connected over the network, each with local disks and controllers attached to the disk arrays: volumes created on one set of array disks are accessed by the cluster nodes, snapshot volumes created on another set are accessed by all hosts, and the OHP host backs up to disk, tape, or other media.
Mounting a file system for shared access
Off-host cluster file system backup Using off-host processing to back up cluster file systems For example, to mount the volume cfs_vol in the disk group exampledg for shared access on the mount point, /mnt_pnt: # mount -F vxfs -o cluster /dev/vx/dsk/exampledg/cfs_vol /mnt_pnt Using off-host processing to back up cluster file systems Before using this procedure, you must prepare the volumes containing the file systems that are to be backed up.
170 Off-host cluster file system backup Using off-host processing to back up cluster file systems To back up a snapshot of a mounted file system which has shared access 1 On the master node, use the following command to make a full-sized snapshot, snapvol, of the volume containing the file system by breaking off plexes from the original volume: # vxsnap -g volumedg make \ source=volume/newvol=snapvol/nmirror=N The nmirror attribute specifies the number of mirrors, N, in the snapshot volume.
Off-host cluster file system backup Using off-host processing to back up cluster file systems 2 On any node, refresh the contents of the snapshot volumes from the original volume using the following command: # vxsnap -g volumedg refresh snapvol source=vol \ [snapvol2 source=vol2]... syncing=yes The syncing=yes attribute starts a synchronization of the snapshot in the background.
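For example, using the exampledg disk group and cfs_vol volume shown earlier and a hypothetical snapshot volume snapvol, the refresh might be run and then waited on as follows; vxsnap syncwait is assumed here as the operation that returns when synchronization completes:
# vxsnap -g exampledg refresh snapvol source=cfs_vol syncing=yes
# vxsnap -g exampledg syncwait snapvol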
5 On the master node, deport the snapshot volume’s disk group using the following command:
# vxdg deport snapvoldg
For example, to deport the disk group splitdg:
# vxdg deport splitdg
6 On the OHP host where the backup is to be performed, use the following command to import the snapshot volume’s disk group:
# vxdg import snapvoldg
For example, to import the disk group splitdg:
# vxdg import splitdg
7 VxVM will recover the volumes automatically after the disk group import unless it is set not to recover automatically.
Off-host cluster file system backup Using off-host processing to back up cluster file systems 9 Back up the file system at this point using a command such as bpbackup in Symantec NetBackup. After the backup is complete, use the following command to unmount the file system.
174 Off-host cluster file system backup Using off-host processing to back up cluster file systems 13 VxVM will recover the volumes automatically after the join unless it is set not to recover automatically. Check if the snapshot volumes are initially disabled and not recovered following the join.
Off-host cluster file system backup Using off-host processing to back up cluster file systems Reattaching snapshot plexes Some or all plexes of an instant snapshot may be reattached to the specified original volume, or to a source volume in the snapshot hierarchy above the snapshot volume. Note: This operation is not supported for space-optimized instant snapshots. By default, all the plexes are reattached, which results in the removal of the snapshot.
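For example, after the snapshot disk group has been rejoined, one plex of a hypothetical snapshot volume snapvol could be reattached to the cfs_vol volume in exampledg with a command of this form:
# vxsnap -g exampledg reattach snapvol source=cfs_vol nmirror=1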
Chapter 15 Decision support This chapter includes the following topics: ■ About decision support ■ Creating a replica database on the same host ■ Creating an off-host replica database About decision support You can use snapshots of a primary database to create its replica at a given moment in time. You can then implement decision support analysis and report generation operations that take their data from the database copy rather than from the primary database.
178 Decision support Creating a replica database on the same host ■ See “Script to suspend I/O for a DB2 database” on page 516. ■ See “Script to end Oracle database hot backup mode” on page 517. ■ See “Script to release a Sybase ASE database from quiesce mode” on page 517. ■ See “Script to resume I/O for a DB2 database” on page 518. ■ See “Script to create an off-host replica Oracle database” on page 519. ■ See “Script to complete, recover and start a replica Oracle database” on page 521.
Figure 15-1 Example system configuration for decision support on the primary host
The figure shows the primary host for the database, with its local disks and controllers attached to disk arrays: the database volumes are created on one set of array disks and the snapshot volumes on another.
Note: It is assumed that you have already prepared the database volumes to be replicated as described in the example. See “About setting up volumes for instant snapshots” on page 141.
180 Decision support Creating a replica database on the same host To set up a replica database to be used for decision support on the primary host 1 Prepare, if you have not already done so, the host to use the snapshot volume that contains the copy of the database tables. Set up any new database logs and configuration files that are required to initialize the database.
Decision support Creating a replica database on the same host 2 Use the following command to make a full-sized snapshot, snapvol, of the tablespace volume by breaking off plexes from the original volume: # vxsnap -g volumedg make \ source=volume/newvol=snapvol/nmirror=N The nmirror attribute specifies the number of mirrors, N, in the snapshot volume. If the volume does not have any available plexes, or its layout does not support plex break-off, prepare an empty volume for the snapshot.
182 Decision support Creating a replica database on the same host Note: This step sets up the snapshot volumes, and starts tracking changes to the original volumes. When you are ready to create the replica database, proceed to step 3. 3 4 If the volumes to be backed up contain database tables in file systems, suspend updates to the volumes: ■ DB2 provides the write suspend command to temporarily suspend I/O activity for a database.
Decision support Creating a replica database on the same host 5 If you temporarily suspended updates to volumes in step 3, perform the following steps. Release all the tablespaces or databases from suspend, hot backup, or quiesce mode: 6 ■ As the DB2 database administrator, use a script such as the example script. See “Script to resume I/O for a DB2 database” on page 518.
184 Decision support Creating an off-host replica database dump transaction to dump_device with standby_access Then copy the dumped transaction log to the appropriate replica database directory. 8 As the database administrator, start the new database: ■ For an Oracle database, use a script such as the example script. See “Script to complete, recover and start a replica Oracle database” on page 521. ■ For a Sybase ASE database, use a script such as the example script.
Figure 15-2 Example system configuration for off-host decision support
The figure shows the primary host for the database and the OHP host connected over the network, each with local disks and controllers attached to the disk arrays. Volumes created on the local disks of the OHP host are used for the replica database’s logs and configuration files, volumes created on one set of array disks are accessed by the primary host, and snapshot volumes created on another set of array disks are accessed by both hosts.
186 Decision support Creating an off-host replica database To set up a replica database to be used for decision support on an OHP host 1 If you have not already done so, prepare the OHP host to use the snapshot volume that contains the copy of the database tables. Set up any new database logs and configuration files that are required to initialize the database. See “About preparing a replica Oracle database” on page 525.
Decision support Creating an off-host replica database Note that if the replica database must be able to be rolled forward (for example, if it is to be used as a standby database), the primary database must be in LOGRETAIN RECOVERY mode. 4 ■ Oracle supports online backup by temporarily suspending updates to the datafiles of the tablespaces, provided that the database is running in archive mode and the tablespaces are online.
188 Decision support Creating an off-host replica database ■ 6 As the Sybase database administrator, release the database from quiesce mode using a script such as that shown in the example. See “Script to release a Sybase ASE database from quiesce mode” on page 517.
Decision support Creating an off-host replica database 10 VxVM will recover the volumes automatically after the disk group import unless it is set not to recover automatically. Check if the snapshot volume is initially disabled and not recovered following the split. If a volume is in the DISABLED state, use the following command on the OHP host to recover and restart the snapshot volume: # vxrecover -g snapvoldg -m snapvol ... # vxvol -g snapvoldg start snapvol ...
190 Decision support Creating an off-host replica database ■ If the replica DB2 database is not to be rolled forward, use the following commands to start and recover it: db2start db2inidb database as snapshot If the replica DB2 database is to be rolled forward: ■ The primary must have been placed in LOGRETAIN RECOVERY mode before the snapshot was taken.
Decision support Creating an off-host replica database Resynchronizing the data with the primary host This procedure describes how to resynchronize the data in a snapshot with the primary host.
192 Decision support Creating an off-host replica database Updating a warm standby Sybase ASE 12.5 database If you specified the for external dump clause when you quiesced the primary database, and you started the replica database by specifying the -q option to the dataserver command, you can use transaction logs to update the replica database.
Decision support Creating an off-host replica database To reattach a snapshot ◆ Use the following command, to re-attach some or all plexes of an instant snapshot to the specified original volume, or to a source volume in the snapshot hierarchy above the snapshot volume: # vxsnap [-g diskgroup] reattach snapvol source=vol \ [nmirror=number] For example the following command reattaches 1 plex from the snapshot volume, snapmyvol, to the volume, myvol: # vxsnap -g mydg reattach snapmyvol source=myvol nmirror
Chapter 16 Database recovery This chapter includes the following topics: ■ About database recovery using Storage Checkpoints ■ Creating Storage Checkpoints ■ Rolling back a database About database recovery using Storage Checkpoints You can use Storage Checkpoints to implement efficient backup and recovery of databases that have been laid out on VxFS file systems.
196 Database recovery Creating Storage Checkpoints Note: Storage Checkpoints can only be used to restore from logical errors such as human mistakes or software faults. You cannot use them to restore files after a disk failure because all the data blocks are on the same physical device. Disk failure requires restoration of a database from a backup copy of the database files kept on a separate medium.
Database recovery Rolling back a database Rolling back a database The procedure in this section describes how to roll back a database using a Storage Checkpoint, for example, after a logical error has occurred. To roll back a database 1 Ensure that the database is offline. You can use the VxDBA utility to display the status of the database and its tablespaces, and to shut down the database: ■ Select 2 Display Database/VxDBA Information to access the menus that display status information.
198 Database recovery Rolling back a database Note: To find out when an error occurred, check the ../bdump/alert*.log file. See the Oracle documentation for complete and detailed information on database recovery. 5 To open the database after an incomplete media recovery, use the following command: alter database open resetlogs; Note: The resetlogs option is required after an incomplete media recovery to reset the log sequence.
Chapter 17 Administering volume snapshots This chapter includes the following topics: ■ About volume snapshots ■ Traditional third-mirror break-off snapshots ■ Full-sized instant snapshots ■ Space-optimized instant snapshots ■ Emulation of third-mirror break-off snapshots ■ Linked break-off snapshot volumes ■ Cascaded snapshots ■ Creating multiple snapshots ■ Restoring the original volume from a snapshot ■ Creating instant snapshots ■ Creating traditional third-mirror break-off snapsh
200 Administering volume snapshots About volume snapshots You can also take a snapshot of a volume set. See “Creating instant snapshots” on page 212. Volume snapshots allow you to make backup copies of your volumes online with minimal interruption to users. You can then use the backup copies to restore data that has been lost due to disk failure, software errors or human mistakes, or to create replica volumes for the purposes of report generation, application development, or testing.
Administering volume snapshots Traditional third-mirror break-off snapshots Traditional third-mirror break-off snapshots Figure 17-1 shows the traditional third-mirror break-off volume snapshot model that is supported by the vxassist command.
202 Administering volume snapshots Full-sized instant snapshots The FastResync feature minimizes the time and I/O needed to resynchronize the data in the snapshot. If FastResync is not enabled, a full resynchronization of the data is required. For more details: See the Veritas Volume Manager Administrator's Guide. Finally, you can use the vxassist snapclear command to break the association between the original volume and the snapshot volume.
Administering volume snapshots Full-sized instant snapshots plexes from the original volume (which is similar to the way that the vxassist command creates its snapshots). Unlike a third-mirror break-off snapshot created using the vxassist command, you can make a backup of a full-sized instant snapshot, instantly refresh its contents from the original volume, or attach its plexes to the original volume, without completely synchronizing the snapshot plexes from the original volume.
204 Administering volume snapshots Space-optimized instant snapshots Space-optimized instant snapshots Volume snapshots require a complete copy of the original volume, and use as much storage space as the original volume. In contrast, space-optimized instant snapshots do not require a complete copy of the original volume’s storage space. They use a storage cache. You may find it convenient to configure a single storage cache in a disk group that can be shared by all the volumes in that disk group.
Administering volume snapshots Emulation of third-mirror break-off snapshots Emulation of third-mirror break-off snapshots Third-mirror break-off snapshots are suitable for write-intensive volumes (such as for database redo logs) where the copy-on-write mechanism of space-optimized or full-sized instant snapshots might degrade performance.
206 Administering volume snapshots Linked break-off snapshot volumes As with third-mirror break-off snapshots, you must wait for the contents of the snapshot volume to be synchronized with the data volume before you can use the vxsnap make command to take the snapshot. When a link is created between a volume and the mirror that will become the snapshot, separate link objects (similar to snap objects) are associated with the volume and with its mirror.
Administering volume snapshots Cascaded snapshots See “Creating a volume for use as a full-sized instant or linked break-off snapshot” on page 217.
208 Administering volume snapshots Cascaded snapshots In such cases, it may be more appropriate to create a snapshot of a snapshot as described in the following section. See “Adding a snapshot to a cascaded snapshot hierarchy” on page 232. Note: Only unsynchronized full-sized or space-optimized instant snapshots are usually cascaded. It is of little use to create cascaded snapshots if the infrontof snapshot volume is fully synchronized (as, for example, with break-off type snapshots).
Figure 17-6 Using a snapshot of a snapshot to restore a database
1 Create instant snapshot S1 of volume V.
2 Create instant snapshot S2 of S1 (vxsnap make source=S1).
3 After the contents of V have gone bad, apply the database redo logs to S2.
4 Restore the contents of V from S2.
Figure 17-7 Dissociating a snapshot volume
■ vxsnap dis is applied to snapshot S2, which has no snapshots of its own (vxsnap dis S2): S1 remains owned by V, and S2 becomes an independent volume.
■ vxsnap dis is applied to snapshot S1, which has one snapshot, S2 (vxsnap dis S1).
Administering volume snapshots Restoring the original volume from a snapshot For traditional snapshots, you can create snapshots of all the volumes in a single disk group by specifying the option -o allvols to the vxassist snapshot command. By default, each replica volume is named SNAPnumber-volume, where number is a unique serial number, and volume is the name of the volume for which a snapshot is being taken. This default can be overridden by using the option -o name=pattern.
212 Administering volume snapshots Creating instant snapshots from an instant snapshot. The volume that is used to restore the original volume can either be a true backup of the original's contents, or it may have been modified in some way (for example, by applying a database log replay or by running a file system checking utility such as fsck). All synchronization of the contents of this backup must be completed before the original volume can be restored from it.
Administering volume snapshots Creating instant snapshots To create instant snapshots of volume sets, use volume set names in place of volume names in the vxsnap command. See “Creating instant snapshots of volume sets” on page 229.
Preparing to create instant and break-off snapshots
To prepare a volume for the creation of instant and break-off snapshots
1 Use the following commands to see if the volume has a version 20 data change object (DCO) and DCO volume that allow instant snapshots and Persistent FastResync to be used with the volume, and to check that FastResync is enabled on the volume:
# vxprint -g volumedg -F%instant volume
# vxprint -g volumedg -F%fastresync volume
Administering volume snapshots Creating instant snapshots 3 If you need several space-optimized instant snapshots for the volumes in a disk group, you may find it convenient to create a single shared cache object in the disk group rather than a separate cache object for each snapshot. See “Creating a shared cache object” on page 215. For full-sized instant snapshots and linked break-off snapshots, you must prepare a volume that is to be used as the snapshot volume.
3 Use the vxmake cache command to create a cache object on top of the cache volume that you created in the previous step:
# vxmake [-g diskgroup] cache cache_object \
  cachevolname=volume [regionsize=size] [autogrow=on] \
  [highwatermark=hwmk] [autogrowby=agbvalue] \
  [maxautogrow=maxagbvalue]
If the region size, regionsize, is specified, it must be a power of 2, and be greater than or equal to 16KB (16k).
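For example, a cache object might be created on a previously created cache volume and then enabled; the names mydg, cobjmydg and cachevol are hypothetical, and the vxcache start step is assumed to be required before the cache can be used:
# vxmake -g mydg cache cobjmydg cachevolname=cachevol \
  regionsize=32k autogrow=on
# vxcache -g mydg start cobjmydg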
Administering volume snapshots Creating instant snapshots Creating a volume for use as a full-sized instant or linked break-off snapshot To create an empty volume for use by a full-sized instant snapshot or a linked break-off snapshot 1 Use the vxprint command on the original volume to find the required size for the snapshot volume. # LEN=`vxprint [-g diskgroup] -F%len volume` The command as shown assumes a Bourne-type shell such as sh, ksh or bash.
3 Use the vxprint command on the DCO to discover its region size (in blocks):
# RSZ=`vxprint [-g diskgroup] -F%regionsz $DCONAME`
4 Use the vxassist command to create a volume, snapvol, of the required size and redundancy, together with a version 20 DCO volume with the correct region size:
# vxassist [-g diskgroup] make snapvol $LEN \
  [layout=mirror nmirror=number] logtype=dco drl=off \
  dcoversion=20 [ndcomirror=number] regionsz=$RSZ \
  init=active [storage_attributes]
Administering volume snapshots Creating instant snapshots If the region size of a space-optimized snapshot differs from the region size of the cache, this can degrade the system’s performance compared to the case where the region sizes are the same. See “Creating a shared cache object” on page 215. The attributes for a snapshot are specified as a tuple to the vxsnap make command. This command accepts multiple tuples. One tuple is required for each snapshot that is being created.
220 Administering volume snapshots Creating instant snapshots The ncachemirror attribute specifies the number of mirrors to create in the cache volume. For backup purposes, the default value of 1 should be sufficient.
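For example, a space-optimized instant snapshot that uses an existing shared cache object might be created as follows; the names mydg, myvol, snap3myvol and cobjmydg are hypothetical:
# vxsnap -g mydg make \
  source=myvol/newvol=snap3myvol/cache=cobjmydg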
Administering volume snapshots Creating instant snapshots Creating and managing full-sized instant snapshots Full-sized instant snapshots are not suitable for write-intensive volumes (such as for database redo logs) because the copy-on-write mechanism may degrade the performance of the volume. For full-sized instant snapshots, you must prepare a volume that is to be used as the snapshot volume.
222 Administering volume snapshots Creating instant snapshots This command exits (with a return code of zero) when synchronization of the snapshot volume is complete. The snapshot volume may then be moved to another disk group or turned into an independent volume. See “Controlling instant snapshot synchronization” on page 239.
Administering volume snapshots Creating instant snapshots ■ Reattach some or all of the plexes of the snapshot volume with the original volume. See “Reattaching an instant snapshot” on page 233. ■ Restore the contents of the original volume from the snapshot volume. You can choose whether none, a subset, or all of the plexes of the snapshot volume are returned to the original volume as a result of the operation. See “Restoring a volume from an instant snapshot” on page 235.
To create and manage a third-mirror break-off snapshot
1 To create the snapshot, you can either take some of the existing ACTIVE plexes in the volume, or you can use the following command to add new snapshot mirrors to the volume:
# vxsnap [-b] [-g diskgroup] addmir volume [nmirror=N] \
  [alloc=storage_attributes]
By default, the vxsnap addmir command adds one snapshot mirror to a volume unless you use the nmirror attribute to specify a different number.
2 To create a third-mirror break-off snapshot, use the following form of the vxsnap make command.
# vxsnap [-g diskgroup] make source=volume[/newvol=snapvol]\
  {/plex=plex1[,plex2,...]|/nmirror=number}
Either of the following attributes may be specified to create the new snapshot volume, snapvol, by breaking off one or more existing plexes in the original volume:
plex Specifies the plexes in the existing volume that are to be broken off.
226 Administering volume snapshots Creating instant snapshots ■ Refresh the contents of the snapshot. This creates a new point-in-time image of the original volume ready for another backup. If synchronization was already in progress on the snapshot, this operation may result in large portions of the snapshot having to be resynchronized. See “Refreshing an instant snapshot” on page 233. ■ Reattach some or all of the plexes of the snapshot volume with the original volume.
Administering volume snapshots Creating instant snapshots 227 To create and manage a linked break-off snapshot 1 Use the following command to link the prepared snapshot volume, snapvol, to the data volume: # vxsnap [-g diskgroup] [-b] addmir volume mirvol=snapvol \ [mirdg=snapdg] The optional mirdg attribute can be used to specify the snapshot volume’s current disk group, snapdg. The -b option can be used to perform the synchronization in the background.
228 Administering volume snapshots Creating instant snapshots 4 To backup the data in the snapshot, use an appropriate utility or operating system command to copy the contents of the snapshot to tape, or to some other backup medium. 5 You now have the following options: ■ Refresh the contents of the snapshot. This creates a new point-in-time image of the original volume ready for another backup.
Administering volume snapshots Creating instant snapshots # vxsnap [-g diskgroup] make \ source=vol1/newvol=snapvol1/cache=cacheobj \ source=vol2/newvol=snapvol2/cache=cacheobj \ source=vol3/newvol=snapvol3/cache=cacheobj \ [alloc=storage_attributes] The vxsnap make command also allows the snapshots to be of different types, have different redundancy, and be configured from different storage, as shown here: # vxsnap [-g diskgroup] make source=vol1/snapvol=snapvol1 \ source=vol2[/newvol=snapvol2]/cache=cac
230 Administering volume snapshots Creating instant snapshots instant snapshot is to be created from a prepared volume set. A full-sized instant snapshot of a volume set must itself be a volume set with the same number of volumes, and the same volume sizes and index numbers as the parent.
Administering volume snapshots Creating instant snapshots # vxsnap -g mydg prepare vset2 # vxsnap -g mydg addmir vset2 nmirror=1 # vxsnap -g mydg make source=vset2/newvol=snapvset2/nmirror=1 See “Adding snapshot mirrors to a volume” on page 231.
232 Administering volume snapshots Creating instant snapshots Once you have added one or more snapshot mirrors to a volume, you can use the vxsnap make command with either the nmirror attribute or the plex attribute to create the snapshot volumes.
Administering volume snapshots Creating instant snapshots # vxsnap -g dbdg make source=dbvol/newvol=fri_bu/\ infrontof=thurs_bu/cache=dbdgcache See “Controlling instant snapshot synchronization” on page 239. Refreshing an instant snapshot Refreshing an instant snapshot replaces it with another point-in-time copy of a parent volume.
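For example, a hypothetical snapshot snapmyvol of the volume myvol in the disk group mydg might be refreshed, with synchronization started in the background, as follows:
# vxsnap -g mydg refresh snapmyvol source=myvol syncing=yes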
234 Administering volume snapshots Creating instant snapshots By default, all the plexes are reattached, which results in the removal of the snapshot. If required, the number of plexes to be reattached may be specified as the value assigned to the nmirror attribute. Warning: The snapshot that is being reattached must not be open to any application. For example, any file system configured on the snapshot volume must first be unmounted.
Administering volume snapshots Creating instant snapshots The sourcedg attribute must be used to specify the data volume’s disk group if this is different from the snapshot volume’s disk group, snapdiskgroup. Warning: The snapshot that is being reattached must not be open to any application. For example, any file system configured on the snapshot volume must first be unmounted. It is possible to reattach a volume to an unrelated volume provided that their sizes and region sizes are compatible.
236 Administering volume snapshots Creating instant snapshots Warning: For this operation to succeed, the volume that is being restored and the snapshot volume must not be open to any application. For example, any file systems that are configured on either volume must first be unmounted. It is not possible to restore a volume from an unrelated volume. The destroy and nmirror attributes are not supported for space-optimized instant snapshots.
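For example, the volume myvol might be restored from a hypothetical full-sized instant snapshot snapmyvol, keeping the snapshot, with a command of this form:
# vxsnap -g mydg restore myvol source=snapmyvol destroy=no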
Administering volume snapshots Creating instant snapshots # vxedit -g mydg -r rm snap2myvol You can also use this command to remove a space-optimized instant snapshot from its cache. See “Removing a cache” on page 244. Splitting an instant snapshot hierarchy Note: This operation is not supported for space-optimized instant snapshots.
vol1, which has a full-sized snapshot, snapvol1, whose contents have not been synchronized with vol1:
# vxsnap -g mydg print
NAME       SNAPOBJECT      TYPE     PARENT   SNAPSHOT   %DIRTY   %VALID
vol1       --              volume   --       --         --       100
           snapvol1_snp1   volume   --       snapvol1   1.30     --
snapvol1   vol1_snp1       volume   vol1     --         1.30     1.30
The %DIRTY value for snapvol1 shows that its contents have changed by 1.30% when compared with the contents of vol1.
vol-03     dg1   plex   detmir   vol   dg1   -            20M (0.2%)
mvol2      dg2   vol    detvol   vol   dg1   20M (0.2%)   -
This shows that the volume vol has three full-sized snapshots, svol1, svol2 and svol3, which are of types full-sized instant (fullinst), mirror break-off (mirbrk) and linked break-off (volbrk). It also has one snapshot plex (snapmir), vol-02, and one linked mirror volume (mirvol), mvol.
Table 17-1 Commands for controlling instant snapshot synchronization
Command: vxsnap [-g diskgroup] syncpause vol|vol_set
Description: Pause synchronization of a volume.
Command: vxsnap [-g diskgroup] syncresume vol|vol_set
Description: Resume synchronization of a volume.
Command: vxsnap [-b] [-g diskgroup] syncstart vol|vol_set
Description: Start synchronization of a volume. The -b option puts the operation in the background.
Administering volume snapshots Creating instant snapshots iosize=size Specifies the size of each I/O request that is used when synchronizing the regions of a volume. Specifying a larger size causes synchronization to complete sooner, but with greater impact on the performance of other processes that are accessing the volume. The default size of 1m (1MB) is suggested as the minimum value for high-performance array and controller hardware.
242 Administering volume snapshots Creating instant snapshots ■ When cache usage reaches the high watermark value, highwatermark (default value is 90 percent), vxcached grows the size of the cache volume by the value of autogrowby (default value is 20% of the size of the cache volume in blocks). The new required cache size cannot exceed the value of maxautogrow (default value is twice the size of the cache volume in blocks).
Administering volume snapshots Creating instant snapshots For example, to see how much space is used and how much remains available in all cache objects in the diskgroup mydg, enter the following: # vxcache -g mydg stat Growing and shrinking a cache You can use the vxcache command to increase the size of the cache volume that is associated with a cache object: # vxcache [-g diskgroup] growcacheto cache_object size For example, to increase the size of the cache volume associated with the cache object, myc
244 Administering volume snapshots Creating traditional third-mirror break-off snapshots Removing a cache To remove a cache completely, including the cache object, its cache volume and all space-optimized snapshots that use the cache: 1 Run the following command to find out the names of the top-level snapshot volumes that are configured on the cache object: # vxprint -g diskgroup -vne \ "v_plex.pl_subdisk.sd_dm_name ~ /cache_object/" where cache_object is the name of the cache object.
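The remaining steps typically remove those snapshot volumes, stop the cache object, and then remove it; a sketch under that assumption, using the hypothetical names mydg, snapvol1, snapvol2 and cobjmydg, might be:
# vxedit -g mydg -rf rm snapvol1 snapvol2
# vxcache -g mydg stop cobjmydg
# vxedit -g mydg -rf rm cobjmydg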
Administering volume snapshots Creating traditional third-mirror break-off snapshots ■ Run vxassist snapstart to create a snapshot mirror. ■ Run vxassist snapshot to create a snapshot volume. The vxassist snapstart step creates a write-only backup plex which gets attached to and synchronized with the volume. When synchronized with the volume, the backup plex is ready to be used as a snapshot mirror. The end of the update procedure is indicated by the new snapshot mirror changing its state to SNAPDONE.
246 Administering volume snapshots Creating traditional third-mirror break-off snapshots To back up a volume using the vxassist command 1 Create a snapshot mirror for a volume using the following command: # vxassist [-b] [-g diskgroup] snapstart [nmirror=N] volume For example, to create a snapshot mirror of a volume called voldef, use the following command: # vxassist [-g diskgroup] snapstart voldef The vxassist snapstart task creates a write-only mirror, which is attached to and synchronized from the
Administering volume snapshots Creating traditional third-mirror break-off snapshots 3 Create a snapshot volume using the following command: # vxassist [-g diskgroup] snapshot [nmirror=N] volume snapshot If required, use the nmirror attribute to specify the number of mirrors in the snapshot volume.
248 Administering volume snapshots Creating traditional third-mirror break-off snapshots ■ Remove the snapshot volume to save space with this command: # vxedit [-g diskgroup] -rf rm snapshot Dissociating or removing the snapshot volume loses the advantage of fast resynchronization if FastResync was enabled. If there are no further snapshot plexes available, any subsequent snapshots that you take require another complete copy of the original volume to be made.
# vxplex -o dcoplex=trivol_dco-03 convert state=SNAPDONE \
  trivol-03
Here the DCO plex trivol_dco-03 is specified as the DCO plex for the new snapshot plex.
Note: The vxsnap command provides similar functionality for creating multiple snapshots.
Reattaching a snapshot volume
The snapback operation merges a snapshot copy of a volume with its original. One or more snapshot plexes are detached from the snapshot volume and re-attached to the original volume. The snapshot volume is removed if all its snapshot plexes are snapped back.
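For example, a command of the following form might be used to snap back a snapshot volume; the names mydg and SNAP-voldef are hypothetical:
# vxassist -g mydg snapback SNAP-voldef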
Administering volume snapshots Creating traditional third-mirror break-off snapshots Warning: Always unmount the snapshot volume (if this is mounted) before performing a snapback. In addition, you must unmount the file system corresponding to the primary volume before using the resyncfromreplica option. Adding plexes to a snapshot volume If you want to retain the existing plexes in a snapshot volume after a snapback operation, you can create additional snapshot plexes that are to be used for the snapback.
252 Administering volume snapshots Creating traditional third-mirror break-off snapshots Dissociating a snapshot volume The link between a snapshot and its original volume can be permanently broken so that the snapshot volume becomes an independent volume.
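For example, a command of the following form might be used to dissociate a snapshot volume; the names mydg and SNAP-voldef are hypothetical:
# vxassist -g mydg snapclear SNAP-voldef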
Administering volume snapshots Adding a version 0 DCO and DCO volume In this example, Persistent FastResync is enabled on volume v1, and Non-Persistent FastResync on volume v2. Lines beginning with v, dp and ss indicate a volume, detached plex and snapshot plex respectively. The %DIRTY field indicates the percentage of a snapshot plex or detached plex that is dirty with respect to the original volume. Notice that no snap objects are associated with volume v2 or with its snapshot volume SNAP-v2.
254 Administering volume snapshots Adding a version 0 DCO and DCO volume To add a DCO object and DCO volume to an existing volume 1 Ensure that the disk group containing the existing volume has been upgraded to at least version 90.
Administering volume snapshots Adding a version 0 DCO and DCO volume 2 Use the following command to turn off Non-Persistent FastResync on the original volume if it is currently enabled: # vxvol [-g diskgroup] set fastresync=off volume If you are uncertain about which volumes have Non-Persistent FastResync enabled, use the following command to obtain a listing of such volumes. The ! character is a special character in some shells. The following example shows how to escape it in a bash shell.
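A sketch of such a listing, assuming the vxprint selection syntax with the v_fastresync and v_hasdco fields, might look like this (the backslash escapes the ! character for bash):
# vxprint [-g diskgroup] -F "%name" \
  -e "v_fastresync=on && \!v_hasdco"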
256 Administering volume snapshots Adding a version 0 DCO and DCO volume placed on disks which are used to hold the plexes of other volumes, this may cause problems when you subsequently attempt to move volumes into other disk groups. You can use storage attributes to specify explicitly which disks to use for the DCO plexes. If possible, specify the same disks as those on which the volume is configured.
Administering volume snapshots Adding a version 0 DCO and DCO volume Removing a version 0 DCO and DCO volume To dissociate a version 0 DCO object, DCO volume and any snap objects from a volume, use the following command: # vxassist [-g diskgroup] remove log volume logtype=dco This completely removes the DCO object, DCO volume and any snap objects. It also has the effect of disabling FastResync for the volume.
258 Administering volume snapshots Adding a version 0 DCO and DCO volume # vxdco -g mydg att myvol myvol_dco See the vxdco(1M) manual page.
Chapter 18 Administering snapshot file systems This chapter includes the following topics: ■ About snapshot file systems ■ How a snapshot file system works ■ Snapshot file system backups ■ Snapshot file system performance ■ About snapshot file system disk structure ■ Differences between snapshots and Storage Checkpoints ■ Creating a snapshot file system ■ Backup examples About snapshot file systems A snapshot file system is an exact image of a VxFS file system, referred to as the snapped file system.
260 Administering snapshot file systems How a snapshot file system works unmounted until all of its snapshots are unmounted. Although it is possible to have multiple snapshots of a file system made at different times, it is not possible to make a snapshot of a snapshot. Note: A snapshot file system ceases to exist when unmounted. If mounted again, it is actually a fresh snapshot of the snapped file system. A snapshot file system must be unmounted before its dependent snapped file system can be unmounted.
Administering snapshot file systems Snapshot file system backups a consistent view of all file system structures on the snapped file system for the time when the snapshot was created. As data blocks are changed on the snapped file system, the snapshot gradually fills with data copied from the snapped file system. The amount of disk space required for the snapshot depends on the rate of change of the snapped file system and the amount of time the snapshot is maintained.
262 Administering snapshot file systems Snapshot file system performance exact time the snapshot was created. In both cases, however, the snapshot file system provides a consistent image of the snapped file system with all activity complete—it is an instantaneous read of the entire file system. This is much different than the results that would be obtained by a dd or read command on the disk device of an active file system.
Figure 18-1 The Snapshot Disk Structure (super-block, bitmap, blockmap, and data blocks)
The super-block is similar to the super-block of a standard VxFS file system, but the magic number is different and many of the fields are not applicable. The bitmap contains one bit for every block on the snapped file system. Initially, all bitmap entries are zero.
Table 18-1 Differences between snapshots and Storage Checkpoints (continued)
Snapshots: Are transient. Storage Checkpoints: Are persistent.
Snapshots: Cease to exist after being unmounted. Storage Checkpoints: Can exist and be mounted on their own.
Snapshots: Track changed blocks on the file system level. Storage Checkpoints: Track changed blocks on each file in the file system.
Storage Checkpoints also serve as the enabling technology for two other Veritas features: Block-Level Incremental Backups and Storage Rollback, which are used extensively for backing up databases.
Administering snapshot file systems Backup examples Backup examples In the following examples, the vxdump utility is used to ascertain whether /dev/vx/dsk/fsvol/vol1 is a snapshot mounted as /backup/home and does the appropriate work to get the snapshot data through the mount point. These are typical examples of making a backup of a 300,000 block file system named /home using a snapshot file system on /dev/vx/dsk/fsvol/vol1 with a snapshot mount point of /backup/home.
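Each example begins by mounting the snapshot; a sketch of that mount, assuming the snapof and snapsize mount options and the block count given above, might be:
# mount -F vxfs -o snapof=/home,snapsize=300000 \
  /dev/vx/dsk/fsvol/vol1 /backup/home
A backup utility such as vxdump can then read the snapshot through /backup/home.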
Chapter 19 Administering Storage Checkpoints This chapter includes the following topics: ■ About Storage Checkpoints ■ Operation of a Storage Checkpoint ■ Types of Storage Checkpoints ■ Storage Checkpoint administration ■ Storage Checkpoint space management considerations ■ Restoring from a Storage Checkpoint ■ Storage Checkpoint quotas About Storage Checkpoints Veritas File System (VxFS) provides a Storage Checkpoint feature that quickly creates a persistent image of a file system at an exa
268 Administering Storage Checkpoints About Storage Checkpoints Storage Checkpoints are actually data objects that are managed and controlled by the file system. You can create, remove, and rename Storage Checkpoints because they are data objects with associated names. See “Operation of a Storage Checkpoint” on page 269.
Administering Storage Checkpoints Operation of a Storage Checkpoint ■ Can have multiple, read-only Storage Checkpoints that reduce I/O operations and required storage space because the most recent Storage Checkpoint is the only one that accumulates updates from the primary file system. ■ Can restore the file system to its state at the time that the Storage Checkpoint was taken. Various backup and replication solutions can take advantage of Storage Checkpoints.
270 Administering Storage Checkpoints Operation of a Storage Checkpoint A Storage Checkpoint of the primary fileset initially contains only pointers to the existing data blocks in the primary fileset, and does not contain any allocated data blocks of its own. Figure 19-1 shows the file system /database and its Storage Checkpoint. The Storage Checkpoint is logically identical to the primary fileset when the Storage Checkpoint is created, but it does not contain any actual data blocks.
Figure 19-2 Initializing a Storage Checkpoint (the primary fileset contains data blocks A through E; the Storage Checkpoint contains only pointers to them)
The Storage Checkpoint presents the exact image of the file system by finding the data from the primary fileset. VxFS updates a Storage Checkpoint by using the copy-on-write technique. See “Copy-on-write” on page 271.
Copy-on-write
In Figure 19-3, the third data block in the primary fileset originally containing C is updated.
Figure 19-3 Updates to the primary fileset (the primary fileset now contains A, B, C’, D and E, while the Storage Checkpoint holds the original block C)
Types of Storage Checkpoints
You can create the following types of Storage Checkpoints:
■ Data Storage Checkpoints
■ Nodata Storage Checkpoints
■ Removable Storage Checkpoints
■ Non-mountable Storage Checkpoints
Data Storage Checkpoints
A data Storage Checkpoint is a complete image of the file system at the time the Storage Checkpoint is created.
Administering Storage Checkpoints Types of Storage Checkpoints limit the life of data Storage Checkpoints to minimize the impact on system resources. See “Showing the difference between a data and a nodata Storage Checkpoint” on page 279. Nodata Storage Checkpoints A nodata Storage Checkpoint only contains file system metadata—no file data blocks. As the original file system changes, the nodata Storage Checkpoint records the location of every changed block.
274 Administering Storage Checkpoints Storage Checkpoint administration Removable Storage Checkpoints A removable Storage Checkpoint can “self-destruct" under certain conditions when the file system runs out of space. See “Storage Checkpoint space management considerations” on page 286. After encountering certain out-of-space (ENOSPC) conditions, the kernel removes Storage Checkpoints to free up space for the application to continue running on the file system.
Administering Storage Checkpoints Storage Checkpoint administration ctime mtime flags # of inodes # of blocks . . .
mtime = Thu 3 Mar 2005 7:00:17 PM PST
flags = nodata, largefiles
The following example shows the creation of a removable Storage Checkpoint named thu_8pm on /mnt0 and lists all Storage Checkpoints of the /mnt0 file system:
# fsckptadm -r create thu_8pm /mnt0
# fsckptadm list /mnt0
/mnt0
thu_8pm:
ctime = Thu 3 Mar 2005 8:00:19 PM PST
mtime = Thu 3 Mar 2005 8:00:19 PM PST
flags = largefiles, removable
thu_7pm:
ctime = Thu
Administering Storage Checkpoints Storage Checkpoint administration # fsckptadm list /mnt0 /mnt0 Accessing a Storage Checkpoint You can mount Storage Checkpoints using the mount command with the mount option -o ckpt=ckpt_name. See the mount_vxfs(1M) manual page. Observe the following rules when mounting Storage Checkpoints: ■ Storage Checkpoints are mounted as read-only Storage Checkpoints by default. If you must write to a Storage Checkpoint, mount it using the -o rw option.
278 Administering Storage Checkpoints Storage Checkpoint administration # mount -F vxfs -o ckpt=may_23 /dev/vx/dsk/fsvol/vol1:may_23 \ /fsvol_may_23 Note: The vol1 file system must already be mounted before the Storage Checkpoint can be mounted.
Administering Storage Checkpoints Storage Checkpoint administration Converting a data Storage Checkpoint to a nodata Storage Checkpoint A nodata Storage Checkpoint does not contain actual file data. Instead, this type of Storage Checkpoint contains a collection of markers indicating the location of all the changed blocks since the Storage Checkpoint was created. See “Types of Storage Checkpoints” on page 272.
To show the difference between Storage Checkpoints
1 Create a file system and mount it on /mnt0, as in the following example:
# mkfs -F vxfs /dev/vx/rdsk/dg1/test0
version 7 layout
134217728 sectors, 67108864 blocks of size 1024, log \
size 65536 blocks, largefiles supported
# mount -F vxfs /dev/vx/dsk/dg1/test0 /mnt0
2 Create a small file with a known content, as in the following example:
# echo "hello, world" > /mnt0/file
3 Create a Storage Checkpoint named ckpt@5_30pm and mount it on /mnt0@5_30pm.
Administering Storage Checkpoints Storage Checkpoint administration 7 Unmount the Storage Checkpoint, convert the Storage Checkpoint to a nodata Storage Checkpoint, and mount the Storage Checkpoint again: # umount /mnt0@5_30pm # fsckptadm -s set nodata ckpt@5_30pm /mnt0 # mount -F vxfs -o ckpt=ckpt@5_30pm \ /dev/vx/dsk/dg1/test0:ckpt@5_30pm /mnt0@5_30pm 8 Examine the content of both files.
282 Administering Storage Checkpoints
Storage Checkpoint administration

To convert multiple Storage Checkpoints
1 Create a file system and mount it on /mnt0:
# mkfs -F vxfs /dev/vx/rdsk/dg1/test0
version 7 layout
134217728 sectors, 67108864 blocks of size 1024, log size 65536 blocks,
largefiles supported
# mount -F vxfs /dev/vx/dsk/dg1/test0 /mnt0
2 Create four data Storage Checkpoints on this file system, note the order of creation, and list them:
# fsckptadm create oldest /mnt0
# fsckptadm create older /mnt0
# fsckptadm create old /mnt0
# fsckptadm create latest /mnt0
# fsckptadm list /mnt0
Administering Storage Checkpoints Storage Checkpoint administration 4 You can instead convert the latest Storage Checkpoint to a nodata Storage Checkpoint in a delayed or asynchronous manner. # fsckptadm set nodata latest /mnt0 5 List the Storage Checkpoints, as in the following example. You will see that the latest Storage Checkpoint is marked for conversion in the future.
284 Administering Storage Checkpoints Storage Checkpoint administration To create a delayed nodata Storage Checkpoint 1 Remove the latest Storage Checkpoint.
Administering Storage Checkpoints Storage Checkpoint administration 3 Convert the oldest Storage Checkpoint to a nodata Storage Checkpoint because no older Storage Checkpoints exist that contain data in the file system. Note: This step can be done synchronously.
286 Administering Storage Checkpoints Storage Checkpoint space management considerations 4 Remove the older and old Storage Checkpoints.
Administering Storage Checkpoints Restoring from a Storage Checkpoint ■ Remove the oldest Storage Checkpoint first. Restoring from a Storage Checkpoint Mountable data Storage Checkpoints on a consistent and undamaged file system can be used by backup and restore applications to restore either individual files or an entire file system.
288 Administering Storage Checkpoints
Restoring from a Storage Checkpoint

To restore a file from a Storage Checkpoint
1 Create the Storage Checkpoint CKPT1 of /home.
$ fsckptadm create CKPT1 /home
2 Mount Storage Checkpoint CKPT1 on the directory /home/checkpoints/mar_4.
$ mount -F vxfs -o ckpt=CKPT1 /dev/vx/dsk/dg1/vol-01:CKPT1 \
  /home/checkpoints/mar_4
3 Delete the file MyFile.txt from your home directory.
$ cd /home/users/me
$ rm MyFile.txt
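Assuming the Storage Checkpoint image preserves the original directory structure under the mount point used in step 2, the deleted file can then be copied back from the Storage Checkpoint, for example:

$ cp /home/checkpoints/mar_4/users/me/MyFile.txt /home/users/me/MyFile.txt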
Administering Storage Checkpoints
Restoring from a Storage Checkpoint

To restore a file system from a Storage Checkpoint
1 Run the fsckpt_restore command:
# fsckpt_restore -l /dev/vx/dsk/dg1/vol2
/dev/vx/dsk/dg1/vol2:
UNNAMED:
   ctime    = Thu 08 May 2004 06:28:26 PM PST
   mtime    = Thu 08 May 2004 06:28:26 PM PST
   flags    = largefiles, file system root
CKPT6:
   ctime    = Thu 08 May 2004 06:28:35 PM PST
   mtime    = Thu 08 May 2004 06:28:35 PM PST
   flags    = largefiles
CKPT5:
   ctime    = Thu 08 May 2004 06:28:34 PM PST
   mtime    = Thu
290 Administering Storage Checkpoints
Restoring from a Storage Checkpoint

2 In this example, select the Storage Checkpoint CKPT3 as the new root fileset:
Select Storage Checkpoint for restore operation
or <Control/D> (EOF) to exit
or <Return> to list Storage Checkpoints: CKPT3
CKPT3:
   ctime    = Thu 08 May 2004 06:28:31 PM PST
   mtime    = Thu 08 May 2004 06:28:36 PM PST
   flags    = largefiles
UX:vxfs fsckpt_restore: WARNING: V-3-24640: Any file system
changes or Storage Checkpoints made after
Thu 08 May 2004 06:28:3
Administering Storage Checkpoints Restoring from a Storage Checkpoint 3 Type y to restore the file system from CKPT3: Restore the file system from Storage Checkpoint CKPT3 ? (ynq) y (Yes) UX:vxfs fsckpt_restore: INFO: V-3-23760: File system restored from CKPT3 If the filesets are listed at this point, it shows that the former UNNAMED root fileset and CKPT6, CKPT5, and CKPT4 were removed, and that CKPT3 is now the primary fileset. CKPT3 is now the fileset that will be mounted by default.
292 Administering Storage Checkpoints Storage Checkpoint quotas Storage Checkpoint quotas VxFS provides options to the fsckptadm command interface to administer Storage Checkpoint quotas. Storage Checkpoint quotas set the following limits on the amount of space used by all Storage Checkpoints of a primary file set:
Chapter 20 Administering FileSnaps This chapter includes the following topics: ■ About FileSnaps ■ Properties of FileSnaps ■ Creating FileSnaps ■ Concurrent I/O to FileSnaps ■ Copy-on-write and FileSnaps ■ Reading from FileSnaps ■ Block map fragmentation and FileSnaps ■ Backup and FileSnaps ■ Using FileSnaps ■ Best practices with FileSnaps ■ Comparison of the logical size output of the fsadm -S shared, du, and df commands About FileSnaps A FileSnap is an atomic space-optimized copy o
294 Administering FileSnaps Properties of FileSnaps level of the directory. The vxfilesnap command preserves the inode identity of the destination file, if the destination file exists. All regular file operations are supported on the FileSnap, and VxFS does not distinguish the FileSnap in any way. See the vxfilesnap(1) manual page. Properties of FileSnaps FileSnaps provide the ability to snapshot objects that are smaller in granularity than a file system or a volume.
Administering FileSnaps Creating FileSnaps with FileSnaps is closer to that of an allocating write than that of a traditional copy-on-write. In disk layout Version 8, to support block or extent sharing between the files, reference counts are tracked for each shared extent. VxFS processes reference count updates due to sharing and unsharing of extents in a delayed fashion. Also, an extent that is marked shared once will not go back to unshared until all the references are gone.
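As a minimal sketch of creating a FileSnap with the vxfilesnap command (the file names are illustrative; see the vxfilesnap(1) manual page for the complete option list):

# vxfilesnap /mnt1/databases/master.dbf /mnt1/clones/clone1.dbf

The destination file shares the source file's extents until one of the copies is overwritten, at which point copy-on-write or lazy copy-on-write applies as described in this chapter.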
296 Administering FileSnaps Reading from FileSnaps write. However, in the event of a server crash, when the server has not flushed the new data to the newly allocated blocks, the data seen in the overwritten region would be similar to what you would find in the case of an allocating write where the server has crashed before the data is flushed. This is not the default behavior; with the default behavior, the data that you find in the overwritten region is either the new data or the old data.
Administering FileSnaps
Using FileSnaps

Using FileSnaps
Table 20-1 provides a list of Veritas File System (VxFS) commands that enable you to administer FileSnaps.

Table 20-1
Command    Functionality
fiostat    The fiostat command has the -S option to display statistics for each
           interval. Otherwise, the command displays the accumulated statistics
           for the entire time interval.
fsadm      The fsadm command has the -S option to report shared block usage in
           the file system.
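For example, the shared block usage of a file system mounted at /mnt1 (an illustrative mount point) can be reported as follows; the same form of the command is used in the comparison later in this chapter:

# /opt/VRTS/bin/fsadm -S shared /mnt1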
298 Administering FileSnaps Best practices with FileSnaps Best practices with FileSnaps The key to obtaining maximum performance with FileSnaps is to minimize the copy-on-write overhead. You can achieve this by enabling lazy copy-on-write. Lazy copy-on-write is easy to enable and usually results in significantly better performance. If lazy copy-on-write is not a viable option for the use case under consideration, an efficient allocation of the source file can reduce the need for copy-on-write.
Administering FileSnaps Comparison of the logical size output of the fsadm -S shared, du, and df commands 16 GB is the space for applications to write. Any data or binaries that are required by each instance of the virtual machine can still be part of the first 4 GB of the shared extent.
300 Administering FileSnaps
Comparison of the logical size output of the fsadm -S shared, du, and df commands

# mkfs -F vxfs -o version=8 /dev/vx/rdsk/dg/vol3
version 8 layout
104857600 sectors, 52428800 blocks of size 1024, log size 65536 blocks
rcq size 4096 blocks largefiles supported
# mount -F vxfs /dev/vx/dsk/dg/vol3 /mnt
# df -k /mnt
Filesystem            1K-blocks    Used  Available Use% Mounted on
/dev/vx/dsk/dg1/vol3   52428800   83590   49073642   1% /mnt
# /opt/VRTS/bin/fsadm -S shared /mnt
Mountpoint /mnt Size(
Chapter 21 Backing up and restoring with Netbackup in an SFHA environment This chapter includes the following topics: ■ About Veritas NetBackup ■ About using Veritas NetBackup for backup and restore for DB2 ■ About using NetBackup for backup and restore for Sybase ■ About using Veritas NetBackup to backup and restore Quick I/O files for DB2 ■ About using Veritas NetBackup to backup and restore Quick I/O files for Sybase About Veritas NetBackup Veritas NetBackup provides backup, archive, and rest
302 Backing up and restoring with Netbackup in an SFHA environment About using Veritas NetBackup for backup and restore for DB2 Veritas NetBackup can be configured for DB2 in an Extended Edition (EE) or Extended-Enterprise Edition (EEE) environment. For detailed information and instructions on configuring DB2 for EEE, see “Configuring for a DB2 EEE (DPF) Environment” in the Veritas NetBackup for DB2 System Administrator's Guide for UNIX.
Backing up and restoring with Netbackup in an SFHA environment About using NetBackup for backup and restore for Sybase ■ Incremental backups of DB2 databases About performing a backup There are two types of DB2 backups: database backups and archive logs. These two types of backups can be performed by NetBackup automatically, manually, or by using the DB2 BACKUP DATABASE command. More information on performing a backup is available in the system administrator's guide.
304 Backing up and restoring with Netbackup in an SFHA environment About using Veritas NetBackup to backup and restore Quick I/O files for DB2 Veritas NetBackup for Sybase integrates the database backup and recovery capabilities of Sybase Backup Server with the backup and recovery management capabilities of NetBackup. Veritas NetBackup works with Sybase APIs to provide high-performance backup and restore for Sybase dataservers.
Backing up and restoring with Netbackup in an SFHA environment About using Veritas NetBackup to backup and restore Quick I/O files for Sybase If you want to back up all Quick I/O files in a directory, you can simplify the process by just specifying the directory to be backed up. In this case, both components of each Quick I/O file will be properly backed up. In general, you should specify directories to be backed up unless you only want to back up some, but not all files, in those directories.
306 Backing up and restoring with Netbackup in an SFHA environment About using Veritas NetBackup to backup and restore Quick I/O files for Sybase In the example above, you must include both the symbolic link dbfile and the hidden file .dbfile in the file list of the backup class. If you want to back up all Quick I/O files in a directory, you can simplify the process by just specifying the directory to be backed up. In this case, both components of each Quick I/O file will be properly backed up.
Section 5 Maximizing storage utilization ■ Chapter 22. Understanding storage tiering with SmartTier ■ Chapter 23. Creating and administering volume sets ■ Chapter 24. Multi-volume file systems ■ Chapter 25. Administering SmartTier
Chapter 22 Understanding storage tiering with SmartTier This chapter includes the following topics: ■ About SmartTier ■ SmartTier building blocks ■ How SmartTier works ■ SmartTier in a High Availability (HA) environment About SmartTier Note: SmartTier is the expanded and renamed feature previously known as Dynamic Storage Tiering (DST). SmartTier matches data storage with data usage requirements.
310 Understanding storage tiering with SmartTier SmartTier building blocks are used to designate which disks make up a particular tier. There are two common ways of defining storage classes: ■ Performance, or storage, cost class: The most-used class consists of fast, expensive disks. When data is no longer needed on a regular basis, the data can be moved to a different class that is made up of slower, less expensive disks.
Understanding storage tiering with SmartTier SmartTier building blocks of multiple volumes transparent to users and applications. Each volume retains a separate identity for administrative purposes, making it possible to control the locations to which individual files are directed. See “About multi-volume support” on page 323. This feature is available only on file systems meeting the following requirements: ■ The minimum disk group version is 140.
312 Understanding storage tiering with SmartTier How SmartTier works Warning: Multiple tagging should be used carefully. A placement class is a SmartTier attribute of a given volume in a volume set of a multi-volume file system. This attribute is a character string, and is known as a volume tag. See the Veritas Volume Manager Administrator's Guide. How SmartTier works SmartTier is a VxFS feature that enables you to allocate file storage space from different storage tiers according to rules you create.
Understanding storage tiering with SmartTier How SmartTier works In a database environment, the access age rule can be applied to some files. However, some data files, for instance, are updated every time they are accessed, so access age rules cannot be used for those files. SmartTier provides mechanisms to relocate portions of files as well as entire files to a secondary tier.
314 Understanding storage tiering with SmartTier SmartTier in a High Availability (HA) environment SmartTier in a High Availability (HA) environment The DiskGroup agent brings online, takes offline, and monitors a Veritas Volume Manager (VxVM) disk group. This agent uses VxVM commands. When the values of the StartVolumes and StopVolumes attributes are both 1, the DiskGroup agent onlines and offlines the volumes during the import and deport operations of the disk group.
Chapter 23 Creating and administering volume sets This chapter includes the following topics: ■ About volume sets ■ Creating a volume set ■ Adding a volume to a volume set ■ Removing a volume from a volume set ■ Listing details of volume sets ■ Stopping and starting volume sets ■ Raw device node access to component volumes About volume sets Veritas File System (VxFS) uses volume sets to implement its Multi-Volume Support and SmartTier features.
316 Creating and administering volume sets Creating a volume set ■ The first volume (index 0) in a volume set must be larger than the sum of the total volume size divided by 4000, the size of the VxFS intent log, and 1MB. Volumes 258 MB or larger should always suffice. ■ Raw I/O from and to a volume set is not supported. ■ Raw I/O from and to the component volumes of a volume set is supported under certain conditions. See “Raw device node access to component volumes” on page 319.
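As a sketch of the creation step itself (the disk group, volume set, and volume names are illustrative), a volume set for use by VxFS is created from an existing volume with the vxvset make operation:

# vxvset -g mydg make myvset vol1

Further volumes can then be added as described in the next section.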
Creating and administering volume sets Adding a volume to a volume set Adding a volume to a volume set Having created a volume set containing a single volume, you can use the following command to add further volumes to the volume set: # vxvset [-g diskgroup] [-f] addvol volset volume For example, to add the volume vol2, to the volume set myvset, use the following command: # vxvset -g mydg addvol myvset vol2 Warning: The -f (force) option must be specified if the volume being added, or any volume in the v
318 Creating and administering volume sets
Stopping and starting volume sets

# vxvset [-g diskgroup] list [volset]

If the name of a volume set is not specified, the command lists the details of all volume sets in a disk group, as shown in the following example:

# vxvset -g mydg list
NAME            GROUP        NVOLS    CONTEXT
set1            mydg         3        -
set2            mydg         2        -

To list the details of each volume in a volume set, specify the name of the volume set as an argument to the command:

# vxvset -g mydg list set1
VOLUME          INDEX        LENGTH       KSTATE     CONTEXT
vol1            0            12582912     ENABLED    -
vol2            1            12582912     ENABLED    -
vol3            2            12582912     ENABLED    -
Creating and administering volume sets
Raw device node access to component volumes 319

# vxvset -g mydg stop set1
# vxvset -g mydg list set1
VOLUME          INDEX        LENGTH       KSTATE      CONTEXT
vol1            0            12582912     DISABLED    -
vol2            1            12582912     DISABLED    -
vol3            2            12582912     DISABLED    -

# vxvset -g mydg start set1
# vxvset -g mydg list set1
VOLUME          INDEX        LENGTH       KSTATE      CONTEXT
vol1            0            12582912     ENABLED     -
vol2            1            12582912     ENABLED     -
vol3            2            12582912     ENABLED     -

Raw device node access to component volumes
To guard against accidental file system
320 Creating and administering volume sets Raw device node access to component volumes Access to the raw device nodes for the component volumes can be configured to be read-only or read-write. This mode is shared by all the raw device nodes for the component volumes of a volume set. The read-only access mode implies that any writes to the raw device will fail; however, writes that use the ioctl interface, or that VxFS issues to update metadata, are not prevented.
Creating and administering volume sets Raw device node access to component volumes The following example creates a volume set, myvset1, containing the volume, myvol1, in the disk group, mydg, with raw device access enabled in read-write mode: # vxvset -g mydg -o makedev=on -o compvol_access=read-write \ make myvset1 myvol1 Displaying the raw device access settings for a volume set You can use the vxprint -m command to display the current settings for a volume set.
322 Creating and administering volume sets Raw device node access to component volumes # vxvset [-g diskgroup] [-f] set \ compvol_access={read-only|read-write} vset The compvol_access attribute can be specified to the vxvset set command to change the access mode to the component volumes of a volume set. If any of the component volumes are open, the -f (force) option must be specified to set the attribute to read-only.
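For example, to switch the component volumes of the volume set myvset1 in the disk group mydg (the names used earlier in this chapter) to read-write access:

# vxvset -g mydg set compvol_access=read-write myvset1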
Chapter 24 Multi-volume file systems This chapter includes the following topics: ■ About multi-volume support ■ About volume types ■ Features implemented using multi-volume support ■ Creating multi-volume file systems ■ Converting a single volume file system to a multi-volume file system ■ Adding a volume to and removing a volume from a multi-volume file system ■ Volume encapsulation ■ Reporting file extents ■ Load balancing ■ Converting a multi-volume file system to a single volume file system
324 Multi-volume file systems About volume types expensive arrays. Using the MVS administrative interface, you can control which data goes on which volume types. See the Veritas Volume Manager Administrator's Guide. Note: Multi-volume support is available only on file systems using disk layout Version 6 or later. About volume types VxFS utilizes two types of volumes, one of which contains only data, referred to as dataonly, and the other of which can contain metadata or data, referred to as metadataok.
Multi-volume file systems Features implemented using multi-volume support ■ Encapsulating volumes so that a volume appears in the file system as a file. This is particularly useful for databases that are running on raw volumes. ■ Guaranteeing that a dataonly volume being unavailable does not cause a metadataok volume to be unavailable. To use the multi-volume file system features, Veritas Volume Manager must be installed and the volume set feature must be accessible.
326 Multi-volume file systems Creating multi-volume file systems Note: Do not mount a multi-volume system with the ioerror=disable or ioerror=wdisable mount options if the volumes have different availability properties. Symantec recommends the ioerror=mdisable mount option both for cluster mounts and for local mounts. Creating multi-volume file systems When a multi-volume file system is created, all volumes are dataonly, except volume zero, which is used to store the file system's metadata.
Multi-volume file systems Creating multi-volume file systems To create a multi-volume file system 1 After a volume set is created, create a VxFS file system by specifying the volume set name as an argument to mkfs: # mkfs -F vxfs /dev/vx/rdsk/rootdg/myvset version 7 layout 327680 sectors, 163840 blocks of size 1024, log size 1024 blocks largefiles supported After the file system is created, VxFS allocates space from the different volumes within the volume set.
328 Multi-volume file systems
Converting a single volume file system to a multi-volume file system

4 List the volume availability flags using the fsvoladm command:
# fsvoladm queryflags /mnt1
volname    flags
vol1       metadataok
vol2       dataonly
vol3       dataonly
vol4       dataonly
vol5       dataonly
5 Increase the metadata space in the file system using the fsvoladm command:
# fsvoladm clearflags dataonly /mnt1 vol2
# fsvoladm queryflags /mnt1
volname    flags
vol1       metadataok
vol2       metadataok
vol3       dataonly
vol4       dataonly
vol5       dataonly
Multi-volume file systems Adding a volume to and removing a volume from a multi-volume file system 4 If the disk layout version is less than 6, upgrade to Version 7.
330 Multi-volume file systems Adding a volume to and removing a volume from a multi-volume file system To add a volume to a multi-volume file system ◆ Add a new volume to a multi-volume file system: # fsvoladm add /mnt1 vol2 256m Removing a volume from a multi-volume file system Use the fsvoladm remove command to remove a volume from a multi-volume file system. The fsvoladm remove command fails if the volume being removed is the only volume in any allocation policy.
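As a sketch, removing a volume follows the same pattern as adding one; here vol5 is assumed to be a data-only volume that holds no data needed by the file system mounted at /mnt1:

# fsvoladm remove /mnt1 vol5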
Multi-volume file systems Volume encapsulation Volume encapsulation Multi-volume support enables the ability to encapsulate an existing raw volume and make the volume contents appear as a file in the file system. Encapsulating a volume involves the following actions: ■ Adding the volume to an existing volume set. ■ Adding the volume to the file system using fsvoladm. Encapsulating a volume The following example illustrates how to encapsulate a volume.
332 Multi-volume file systems Volume encapsulation 5 Add the new volume to the volume set: # vxvset -g dg1 addvol myvset dbvol 6 Encapsulate dbvol: # fsvoladm encapsulate /mnt1/dbfile dbvol 100m # ls -l /mnt1/dbfile -rw------- 1 root other 104857600 May 22 11:30 /mnt1/dbfile 7 Examine the contents of dbfile to see that it can be accessed as a file: # head -2 /mnt1/dbfile root:x:0:1:Super-User:/:/sbin/sh daemon:x:1:1::/: The passwd file that was written to the raw volume is now visible in the new fil
Multi-volume file systems Reporting file extents Reporting file extents The MVS feature provides the capability for file-to-volume mapping and volume-to-file mapping through the fsmap and fsvmap commands. The fsmap command reports the volume name, logical offset, and size of data extents, or the volume name and size of indirect extents associated with a file on a multi-volume file system. The fsvmap command maps volumes to the files that have extents on those volumes. See the fsmap(1M) and fsvmap(1M) manual pages.
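As a minimal sketch of reporting the extents of a single file (the path is illustrative; see the fsmap(1M) manual page for the available reporting options):

# fsmap /mnt1/testfile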
334 Multi-volume file systems Load balancing Using the fsvmap command 1 Report the extents of files on multiple volumes: # fsvmap /dev/vx/rdsk/fstest/testvset vol1 vol2 vol1 vol1 vol1 vol1 vol2 vol2 2 /.
Multi-volume file systems Load balancing 335 Note: If a file has both a fixed extent size set and an allocation policy for load balancing, certain behavior can be expected. If the chunk size in the allocation policy is greater than the fixed extent size, all extents for the file are limited by the chunk size. For example, if the chunk size is 16 MB and the fixed extent size is 3 MB, then the largest extent that satisfies both the conditions is 15 MB.
336 Multi-volume file systems Converting a multi-volume file system to a single volume file system To rebalance extents 1 Define the policy by specifying the -o balance and -c options: # fsapadm define -o balance -c 2m /mnt loadbal vol1 vol2 vol4 \ vol5 vol6 2 Enforce the policy: # fsapadm enforcefile -f strict /mnt/filedb Converting a multi-volume file system to a single volume file system Because data can be relocated among volumes in a multi-volume file system, you can convert a multi-volume file
Multi-volume file systems
Converting a multi-volume file system to a single volume file system

Converting to a single volume file system
1 Determine if the first volume in the volume set, which is identified as device number 0, has the capacity to receive the data from the other volumes that will be removed:
# df /mnt1
/mnt1    (/dev/vx/dsk/dg1/vol1): 16777216 blocks   3443528 files
2 If the first volume does not have sufficient capacity, grow the volume to a sufficient size:
# fsvoladm resize /mnt1 vol1 1
Chapter 25 Administering SmartTier This chapter includes the following topics: ■ About SmartTier ■ Supported SmartTier document type definitions ■ Placement classes ■ Administering placement policies ■ File placement policy grammar ■ File placement policy rules ■ Calculating I/O temperature and access temperature ■ Multiple criteria in file placement policy rule statements ■ File placement policy rule and statement ordering ■ File placement policies and extending files ■ Using SmartTier with solid state disks
340 Administering SmartTier About SmartTier for administrative purposes, making it possible to control the locations to which individual files are directed. See “About multi-volume support” on page 323. Note: Some of the commands have changed or been removed between the 4.1 release and the 5.1 SP1 release to make placement policy management more user-friendly. The following commands have been removed: fsrpadm, fsmove, and fssweep.
Administering SmartTier
Supported SmartTier document type definitions

Supported SmartTier document type definitions
Table 25-1 describes which releases of Veritas File System (VxFS) support specific SmartTier document type definitions (DTDs).

Table 25-1          Supported SmartTier document type definitions

                    DTD Version
VxFS Version        1.0              1.1
5.0                 Supported        Not supported
5.1                 Supported        Supported
5.
342 Administering SmartTier Placement classes ■ Adding or removing volumes does not require a file placement policy change. If a volume with a tag value of vxfs.placement_class.tier2 is added to a file system’s volume set, all policies that refer to tier2 immediately apply to the newly added volume with no administrative action. Similarly, volumes can be evacuated, that is, have data removed from them, and be removed from a file system without a policy change.
Administering SmartTier Administering placement policies To list placement classes ◆ List the volume tags, including placement classes: # vxassist -g cfsdg listtag vsavola # vxvoladm -g cfsdg listtag vsavola Administering placement policies A VxFS file placement policy document contains rules by which VxFS creates, relocates, and deletes files, but the placement policy does not refer to specific file systems or volumes.
344 Administering SmartTier Administering placement policies To assign a placement policy ◆ Assign a placement policy to a file system: # fsppadm assign /mnt1 /tmp/policy1.xml Unassigning a placement policy The following example uses the fsppadm unassign command to unassign the active file placement policy from the file system at mount point /mnt1.
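The unassign operation takes only the mount point, so a sketch of the corresponding command is:

# fsppadm unassign /mnt1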
Administering SmartTier Administering placement policies Enforcing a placement policy Enforcing a placement policy for a file system requires that the policy be assigned to the file system. You must assign a placement policy before it can be enforced. See “Assigning a placement policy” on page 343. Enforce operations are logged in a hidden file, .__fsppadm_enforce.log, in the lost+found directory of the mount point.
346 Administering SmartTier
File placement policy grammar

To enforce a placement policy
◆ Enforce a placement policy for a file system:
# fsppadm enforce -a -r /tmp/report /mnt1
Current    Current    Relocated
Class      Volume     Class
tier3      vole       tier3
tier3      vole       tier3
tier3      vole       tier3
tier3      volf       tier3
.
.
.
Administering SmartTier File placement policy rules allocating and relocating files are expressed in the file system's file placement policy. A VxFS file placement policy defines the desired placement of sets of files on the volumes of a VxFS multi-volume file system. A file placement policy specifies the placement classes of volumes on which files should be created, and where and under what conditions the files should be relocated to volumes in alternate placement classes or deleted.
348 Administering SmartTier File placement policy rules SELECT statement The VxFS placement policy rule SELECT statement designates the collection of files to which a rule applies.
Administering SmartTier File placement policy rules Either an exact file name or a pattern using a single wildcard character (*). The first "*" character is treated as a wildcard, while any subsequent "*" characters are treated as literal text. The pattern cannot contain "/". The following list contains examples of patterns: ■ abc* – Matches all files whose names begin with “abc". abc.* – Matches all files whose names are exactly "abc" followed by a period and any extension.
350 Administering SmartTier File placement policy rules One or more instances of any or all of the file selection criteria may be specified within a single SELECT statement. If two or more selection criteria of different types are specified in a single statement, a file must satisfy one criterion of each type to be selected.
Administering SmartTier File placement policy rules A placement policy rule's action statements apply to all files designated by any of the rule's SELECT statements. If an existing file is not designated by a SELECT statement in any rule of a file system's active placement policy, then SmartTier does not relocate or delete the file.
352 Administering SmartTier File placement policy rules If space cannot be allocated on any volume in any of the placement classes specified, file creation fails with an ENOSPC error, even if adequate space is available elsewhere in the file system's volume set. This situation can be circumvented by specifying a Flags attribute with a value of "any" in the clause.
Administering SmartTier File placement policy rules tier1 tier2 1 The element with a value of one megabyte is specified for allocations on tier2 volumes. For files allocated on tier2 volumes, the first megabyte would be allocated on the first volume, the second on the second volume, and so forth.
354 Administering SmartTier File placement policy rules placement_class_name chunk_size additional_placement_class_specifications relocation_conditions A RELOCATE statement contains the following clauses: An optional clause that contains a list of placement classes from whose volumes designated files should be relocated if
Administering SmartTier File placement policy rules Indicates the placement classes to which qualifying files should be relocated. Unlike the source placement class list in a FROM clause, placement classes in a clause are specified in priority order. Files are relocated to volumes in the first specified placement class if possible, to the second if not, and so forth.
356 Administering SmartTier File placement policy rules This criterion is met when files are unmodified for a designated period or during a designated period relative to the time at which the fsppadm enforce command was issued. This criterion is met when files exceed or drop below a designated size or fall within a designated size range. This criterion is met when files exceed or drop below a designated I/O temperature, or fall within a designated I/O temperature range.
Administering SmartTier File placement policy rules 357 max_access_age min_modification_age max_modification_age min_size max_size min_I/O_temperature
358 Administering SmartTier File placement policy rules Both the and elements require Flags attributes to direct their operation. For , the following Flags attributes values may be specified: gt The time of last access must be greater than the specified interval. eq The time of last access must be equal to the specified interval. gteq The time of last access must be greater than or equal to the specified interval. For , the following Flags attributes values may be specified.
Administering SmartTier File placement policy rules GB Gigabytes Specifying the I/O temperature relocation criterion The I/O temperature relocation criterion, , causes files to be relocated if their I/O temperatures rise above or drop below specified values over a specified period immediately prior to the time at which the fsppadm enforce command was issued. A file's I/O temperature is a measure of the read, write, or total I/O activity against it normalized to the file's size.
360 Administering SmartTier File placement policy rules I/O temperature is a softer measure of I/O activity than access age. With access age, a single access to a file resets the file's atime to the current time. In contrast, a file's I/O temperature decreases gradually as time passes without the file being accessed, and increases gradually as the file is accessed periodically.
Administering SmartTier File placement policy rules 3.4 6 If there are a number of files whose I/O temperature is greater than the given minimum value, the files with the higher temperature are first subject to the RELOCATE operation before the files with the lower temperature.
362 Administering SmartTier File placement policy rules The following formula computes the read IOTEMP of a given file: IOTEMP = (bytes of the file that are read in the PERIOD) / (PERIOD in hours * size of the file in bytes) The write and read/write IOTEMP are also computed accordingly. The following formula computes the average read IOTEMP: Average IOTEMP = (bytes read of all active files in the last h hours) / (h * size of all the active files in bytes) h is 24 hours by default.
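As a worked example of the read IOTEMP formula above (the figures are hypothetical): if 5 GB are read from a 1 GB file during a 24-hour PERIOD, then

IOTEMP = 5 GB / (24 hours * 1 GB) = approximately 0.21

whereas the same 5 GB read from a 10 GB file over the same period yields an IOTEMP of approximately 0.02, reflecting that I/O temperature is normalized to the size of the file.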
Administering SmartTier File placement policy rules The following example statement causes the average file system activity to be collected and computed over a period of 30 hours instead of the default 24 hours: 364 Administering SmartTier File placement policy rules tier2 1 1000 30 Files designated by the rule's SELECT statement are relocated from tier1 volumes to tier2 volumes if they are between 1 MB and 1000 MB in size and have not been accessed for 30 days.
Administering SmartTier File placement policy rules 5 2 This rule relocates files that reside on tier2 volumes to tier1 volumes if their I/O temperatures are above 5 for the two day period immediately preceding the issuing of the fsppadm enforce command. VxFS relocates qualifying files in the order in which it encounters them during its file system directory tree scan.
366 Administering SmartTier File placement policy rules tier3 volumes instead. VxFS relocates qualifying files as it encounters them in its scan of the file system directory tree. The <FROM> clause in the RELOCATE statement is optional. If the <FROM> clause is not present, VxFS evaluates files designated by the rule's SELECT statement for relocation no matter which volumes they reside on when the fsppadm enforce command is issued.
Administering SmartTier File placement policy rules This rule relocates files smaller than 10 megabytes to tier1 volumes, files between 10 and 100 megabytes to tier2 volumes, and files larger than 100 megabytes to tier3 volumes. VxFS relocates all qualifying files that do not already reside on volumes in their DESTINATION placement classes when the fsppadm enforce command is issued.
368 Administering SmartTier File placement policy rules An optional <WHEN> clause specifying the conditions under which files to which the rule applies should be deleted. The form of the <WHEN> clause in a DELETE statement is identical to that of the <WHEN> clause in a RELOCATE statement. If a DELETE statement does not contain a <WHEN> clause, files designated by the rule's SELECT statement, and the <FROM> clause if it is present, are deleted unconditionally.
Administering SmartTier Calculating I/O temperature and access temperature Calculating I/O temperature and access temperature An important application of VxFS SmartTier is automating the relocation of inactive files to lower cost storage. If a file has not been accessed for the period of time specified in the element, a scan of the file system should schedule the file for relocation to a lower tier of storage.
370 Administering SmartTier Calculating I/O temperature and access temperature file size. A large file to which 20 I/O requests are made over a 2-day period has the same average access temperature as a small file accessed 20 times over a 2-day period.
Administering SmartTier Calculating I/O temperature and access temperature The following XML snippet illustrates the use of IOTEMP in a policy rule to specify relocation of low activity files from tier1 volumes to tier2 volumes: tier1 tier2 3 4 This snippet specifies that files t
372 Administering SmartTier Calculating I/O temperature and access temperature due to inactivity or low temperatures, and relocating them to higher tiers in the storage hierarchy. The following XML snippet illustrates relocating files from tier2 volumes to tier1 when the activity level against them increases.
Administering SmartTier Multiple criteria in file placement policy rule statements Multiple criteria in file placement policy rule statements In certain cases, file placement policy rule statements may contain multiple clauses that affect their behavior. In general, when a rule statement contains multiple clauses of a given type, all clauses must be satisfied in order for the statement to be effective. There are four cases of note in which multiple clauses may be used.
374 Administering SmartTier Multiple criteria in file placement policy rule statements In the following example, a file need only reside in one of db/datafiles, db/indexes, or db/logs or be owned by one of DBA_Manager, MFG_DBA, or HR_DBA to be designated for possible action:
Administering SmartTier Multiple criteria in file placement policy rule statements In this statement, VxFS would allocate space for newly created files designated by the rule's SELECT statement on tier1 volumes if space was available. If no tier1 volume had sufficient free space, VxFS would attempt to allocate space on a tier2 volume. If no tier2 volume had sufficient free space, VxFS would attempt allocation on a tier3 volume.
376 Administering SmartTier File placement policy rule and statement ordering You cannot write rules to relocate or delete a single designated set of files if the files meet one of two or more relocation or deletion criteria. File placement policy rule and statement ordering You can use the SmartTier graphical user interface (GUI) to create any of four types of file placement policy documents. Alternatively, you can use a text editor or XML editor to create XML policy documents directly.
Administering SmartTier File placement policy rule and statement ordering tier1 other_statements The GeneralRule rule specifies that all files created in the file system, designated by *, should be created on tier2 volumes. The DatabaseRule rule specifies that files whose names include an extension of .db should be created on tier1 volumes.
378 Administering SmartTier File placement policies and extending files 90 As written with the RELOCATE statement preceding the DELETE statement, files will never be deleted, because the clause in the RELOCATE statement applies to all selected files that have not been accessed for at least 30 days. This includes those that have not been accessed for 90 days.
Administering SmartTier Using SmartTier with solid state disks ■ Support of the Prefer attribute for the and criteria ■ Provision of a mechanism to relocate based on average I/O activity ■ Reduction of the intensity and duration of scans to minimize the impact on resources, such as memory, CPU, and I/O bandwidth ■ Quick identification of cold files To gain these benefits, you must modify the existing placement policy as per the latest version of the DTD and assign the policy a
380 Administering SmartTier Using SmartTier with solid state disks Veritas File System (VxFS) has relocated the file to an SSD, it may be beneficial to keep the file on the SSD as long as the activity remains high to avoid frequent thrashing. You want to watch the activity for some time longer than the time that you watched the activity when you relocated the file to the SSD before you decide to move the file off of the SSD.
Administering SmartTier Using SmartTier with solid state disks activity levels can change during the day. As a result, SmartTier must scan more frequently, which leads to a higher scan load on the host systems. You must satisfy the following conflicting requirements simultaneously: ■ Bring down the temperature collection windows to hourly levels. ■ Reduce the impact of more frequent scans on resources, such as CPU, I/O, and memory.
382 Administering SmartTier Using SmartTier with solid state disks The -C option is useful to process active files before any other files. For best results, specify the -T option in conjunction with the -C option. Specifying both the -T option and -C option causes the fsppadm command to evacuate any cold files first to create room in the SSD tier to accommodate any active files that will be moved into the SSD tier via the -C option.
Administering SmartTier Using SmartTier with solid state disks ssdtier Move the files out of SSD if their last 3 hour write IOTEMP is more than 1.5 times the last 24 hour average write IOTEMP. The PERIOD is purposely shorter than the other RELOCATEs because we want to move it out as soon as write activity starts peaking. This criteria could be used to reduce SSD wear outs.
384 Administering SmartTier Using SmartTier with solid state disks ssdtier nonssd_tier 0.5 6 OR move the files into SSD if their last 3 hour read IOTEMP is more than or equal to 1.
Administering SmartTier Using SmartTier with solid state disks In this placement policy, new files are created on the SSD tiers if space is available, or elsewhere if space is not available. When enforce is performed, the files that are currently in SSDs whose write activity is increased above a threshold or whose read activity fell below a threshold over a given period are moved out of the SSDs. The first two RELOCATEs capture this intent.
386 Administering SmartTier Using SmartTier with solid state disks slower write times of SSDs. Lesser read activity means that you are not benefitting from the faster read times of SSDs with these files.
Section 6 Migrating data ■ Chapter 26. Understanding data migration ■ Chapter 27. Offline data migration ■ Chapter 28.
Chapter 26 Understanding data migration This chapter includes the following topics: ■ Types of data migration Types of data migration This section describes the following types of data migration: ■ Migrating data from LVM to Storage Foundation using offline migration When you install Storage Foundation, you may already have some volumes that are controlled by the Logical Volume Manager. You can preserve your data and convert these volumes to Veritas Volume Manager volumes.
390 Understanding data migration Types of data migration Note: The procedures are different if you plan to migrate to a thin array from a thick array. See “Migrating to thin provisioning” on page 114.
Chapter 27 Offline data migration This chapter includes the following topics: ■ About migrating Logical Volume Manager to VxVM ■ About VxVM and LVM ■ Converting LVM to VxVM ■ Command differences ■ SMH and the VEA About migrating Logical Volume Manager to VxVM This section provides an overview of migrating Logical Volume Manager to VxVM. There are benefits of migrating from the HP-UX Logical Volume Manager (LVM) to VxVM. See “About LVM to VxVM conversion” on page 400.
392 Offline data migration About VxVM and LVM with the LVM and MirrorDisk/UX products today, including the following capabilities: ■ Veritas Volume Manager can coexist with LVM. Users can decide which volumes they want managed by each volume manager. For users who want to migrate LVM volume groups to VxVM disk groups, a conversion utility is included. The vxvmconvert utility is used to convert LVM to VxVM. See “About LVM to VxVM conversion” on page 400.
Offline data migration About VxVM and LVM to a disk is lost, the system continues to access the data over the other available connections to the disk. DMP can also provide improved I/O performance from disks with multiple pathways that are concurrently available. DMP can balance the I/O load uniformly across the multiple paths to the disk device. DMP can coexist with the native multi-pathing functionality that is provided in HP-UX 11i Version 3.
394 Offline data migration About VxVM and LVM a mirrored layout or to change a stripe unit size. The volume data remains available during the relayout. ■ Improved RAID-5 subdisk moves, using layered volume technology where the RAID-5 subdisk move operation leaves the old subdisk in place while the new one is being synchronized, thus maintaining redundancy and resiliency to failures during the move. Note: Additional information is available on LVM and VxVM commands.
Offline data migration About VxVM and LVM Table 27-1 A conceptual comparison of LVM and VxVM (continued) LVM term VxVM term Description Physical volume VxVM disk An LVM physical volume and a VxVM disk are conceptually the same. A physical disk is the basic storage device (media) where the data is ultimately stored. You can access the data on a physical disk by using a device name (devname) to locate the disk. In LVM, a disk that is initialized by LVM becomes known as a physical volume.
396 Offline data migration About VxVM and LVM Table 27-1 A conceptual comparison of LVM and VxVM (continued) LVM term VxVM term Description Logical volume Volume An LVM logical volume and a VxVM volume are conceptually the same. Both are virtual disk devices that appear to applications, databases, and file systems like physical disk devices, but do not have the physical limitations of physical disk devices.
Offline data migration About VxVM and LVM Table 27-1 A conceptual comparison of LVM and VxVM (continued) LVM term VxVM term Description Volume group Disk group LVM volume groups are conceptually similar to VxVM disk groups. An LVM volume group is the collective identity of a set of physical volumes, which provide disk storage for the logical volumes. A VxVM disk group is a collection of VxVM disks that share a common configuration.
398 Offline data migration About VxVM and LVM Table 27-1 LVM term A conceptual comparison of LVM and VxVM (continued) VxVM term Unused physical extent Free space Description VxVM can place a disk under its control without adding it to a disk group. The VxVM Storage Administrator shows these disks as “free space pool”. LVM contains unused physical extents that are not part of a logical volume, but are part of the volume group.
Offline data migration About VxVM and LVM Table 27-1 A conceptual comparison of LVM and VxVM (continued) LVM term VxVM term Description Import Import In LVM, import adds a volume group to the system and the volume group information to /etc/lvmtab but does not make the volumes accessible. The volume group must be activated by the vgchange -a y command to make volumes accessible. In VxVM, import imports a disk group and makes the disk group accessible by the system.
400 Offline data migration Converting LVM to VxVM See “About SMH and the VEA” on page 449. The vxvmconvert command is provided to enable LVM disks to be converted to a VxVM disk format without losing any data. See “Converting LVM volume groups to VxVM disk groups” on page 402. Converting LVM to VxVM About LVM to VxVM conversion This chapter explains how to convert your LVM configuration to a VxVM configuration.
Offline data migration Converting LVM to VxVM Converting unused LVM physical volumes to VxVM disks LVM disks which are not part of any volume group, and contain no user data, are simply cleaned up, so that there are no LVM disk headers. Then the disks are given over to VxVM through the normal means of initializing disks. Warning: Exercise caution while using this procedure to give disks over to VxVM. You must be absolutely certain that the disks are not in use in any LVM configuration.
402 Offline data migration Converting LVM to VxVM Or use the command: # vxdisk init disk_name VxVM utilities will not tamper with any disks that are recognized as owned by LVM (by virtue of the LVM disk headers). If you attempt to use vxdisk init, or vxdiskadm on an LVM disk without using the pvremove command first, the command fails. Note: The above behavior is displayed on both LVM version 1 and version 2 volume groups.
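Putting the pieces above together as a sketch (the device name disk12 is illustrative, and the disk must be genuinely unused by any LVM configuration), the LVM headers are cleared with pvremove(1M) before the disk is initialized for VxVM with vxdiskadm or the vxdisk init command shown above:

# pvremove /dev/rdisk/disk12
# vxdisk init disk12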
Offline data migration Converting LVM to VxVM undergoing conversion. Access to the LVM configuration itself (the metadata of LVM) must also be limited to the conversion process. Volume group conversion limitations There are certain LVM volume configurations that cannot be converted to VxVM. Some of the reasons a conversion could fail are: ■ A volume group with insufficient space for metadata.
404 Offline data migration Converting LVM to VxVM MWC volume being full, leaving no space for the DRL log. However it is very unlikely that this situation would occur. Note that the MWC and DRL are used only when the system crashes or is improperly shut down, to quickly bring all mirrors in the volume back into a consistent state. ■ A volume group containing the /usr file system.
Offline data migration Converting LVM to VxVM ■ Volume groups with mirrored volumes. A conversion fails if the LVM volume group being converted has mirrored volumes, but the system does not have a valid license installed that enables mirroring for VxVM. The analyze option in vxvmconvert, which is described in later sections, aids you in identifying which volume groups can be converted. Conversion process summary Several steps are used to convert LVM volume groups to VxVM disk groups.
406 Offline data migration Converting LVM to VxVM Identifying LVM disks and volume groups for conversion The obvious first step in the conversion process is to identify what you want to convert. The native LVM administrative utilities like vgdisplay and SMH can help you identify candidate LVM volume groups as well as the disks that comprise them. You can also use the vxvmconvert command and the vxdisk command to examine groups and their member disks.
Offline data migration Converting LVM to VxVM Note: The analysis option is presented as a separate menu item in vxvmconvert, but there is an implicit analysis with any conversion. If you simply select the “Convert LVM Volume Groups to VxVM” menu option, vxvmconvert will go through analysis on any group you specify. When you are using the convert option directly, you are given a chance to abort the conversion after analysis, and before any changes are committed to disk.
408 Offline data migration Converting LVM to VxVM During a conversion, any spurious reboots, power outages, hardware errors or operating system bugs can have unpredictable and undesirable consequences. You are advised to be on guard against disaster with a set of verified backups. Backing up an LVM configuration Use the vgcfgbackup(1M) utility before running vxvmconvert to save a copy of the LVM configuration.
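For example, to save the configuration of the vg08 volume group used in the examples later in this chapter (the volume group name is illustrative):

# vgcfgbackup /dev/vg08

By default, vgcfgbackup(1M) writes the backup under /etc/lvmconf; see the manual page for details.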
Offline data migration Converting LVM to VxVM See “Implementing changes for new VxVM logical volume names” on page 413. File system back up of user data You can use the backup utility that you normally use to back up data on your logical volumes. For example, to back up logical volumes that contain file systems, the fbackup(1M) command can be used to back up the data to tape.
410 Offline data migration Converting LVM to VxVM ■ Scripts run by cron(1M). ■ Other administrative scripts. Workaround vxvmconvert records a mapping between the names of the LVM device nodes and VxVM device nodes. This data can be used to create symbolic links from the old LVM volume to the new VxVM device names. The mapping is recorded in the file: /etc/vx/reconfig.d/vgrecords/vol_grp_name/vol_grp_name.
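As a sketch of the workaround (all names are illustrative), a symbolic link can be created so that an old LVM device path continues to resolve to the new VxVM volume device:

# ln -s /dev/vx/dsk/dg08/dg08lv1 /dev/vg08/lvol1

The old-to-new name pairs should be taken from the mapping file recorded by vxvmconvert rather than guessed.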
Offline data migration Converting LVM to VxVM vxvmconvert tries to unmount mounted file systems during the conversion. Bear in mind though, that vxvmconvert makes no attempt to close down running applications on those file systems, nor does it attempt to deal with applications (e.g., databases) running on raw LVM volumes. See “Conversion and reboot” on page 411. Note: It is strongly recommended that you do not rely on vxvmconvert's mechanisms for unmounting file systems.
412 Offline data migration Converting LVM to VxVM Converting a volume group To do the actual conversion of LVM volume groups to VxVM disk groups, choose option 2 of the vxvmconvert utility. vxvmconvert will prompt for a name for the VxVM disk group that will be created to replace the LVM volume group you are converting. This is the only object naming that is done through vxvmconvert. Additional details are available on modifying VxVM volume names. See “Tailoring your VxVM configuration” on page 413.
Offline data migration Converting LVM to VxVM Implementing changes for new VxVM logical volume names You must be sure that all applications and configuration files refer properly to the new VxVM logical volumes. See “Planning for new VxVM logical volume names” on page 409. Restarting applications on the new VxVM volumes After the conversion to VxVM is complete, file systems can be mounted on the new devices and applications can be restarted.
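As a sketch (the names follow the dg08 examples later in this chapter, and the file system on the volume is assumed to be VxFS), a converted volume can be renamed with vxedit and its file system mounted from the new device node:

# vxedit -g dg08 rename dg08lv1 salesvol
# mount -F vxfs /dev/vx/dsk/dg08/salesvol /sales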
414 Offline data migration Converting LVM to VxVM Note: You must only rename objects in the VxVM configuration after you are fully satisfied with that configuration. In particular, you should never use menu option 3 of vxvmconvert (Roll back) after name changes. If you have chosen to set up symbolic links to the VxVM volumes, avoid renaming VxVM objects. Additional information is available on setting up symbolic links. These symbolic links are made invalid if the underlying VxVM device node name changes.
Offline data migration Converting LVM to VxVM backup that was made before the conversion was done (frecover). Additional information is available on full LVM restoration. See “Full LVM restoration” on page 416. Note: Restoring user data using the vgrestore and frecover method will result in the loss of all user data changes made since the conversion, and the loss of all new volumes created since the conversion.
416 Offline data migration Converting LVM to VxVM Note: In many cases, if you choose the rollback method and the configuration has changed, you receive an error and must use the full restore method. If you used the workaround of creating symbolic links from the old LVM names to the new VxVM names, you must remove the symbolic links you created before beginning the rollback. Additional information is available on creating symbolic links. See “Planning for new VxVM logical volume names” on page 409.
Offline data migration Converting LVM to VxVM To use this method, you must have backed up data located on all the volume groups’ logical volumes before conversion to VxVM. Restoration of LVM volume groups is a two-step process consisting of a restoration of LVM internal data (metadata and configuration files), and restoration of user or application data. The process is limited to restoring the state of the logical volumes as they existed before conversion to VxVM disks.
418 Offline data migration Converting LVM to VxVM list listvg ? ?? q List disk information List LVM Volume Group information Display help about menu Display help about the menuing system Exit from menus Example: listing disk information The list option of vxvmconvert displays information about the disks on a system. Select the list option from the vxvmconvert Main Menu: Menu: Volume Manager/LVM_Conversion/list # list Use this menu option to display a list of disks.
Offline data migration Converting LVM to VxVM LVM VOLUME GROUP INFORMATION NAME VERSION TYPE PHYSICAL VOLUME vg00 1.0 ROOT disk10 vg09 2.0 Non-Root disk11 vg08 1.0 Non-Root disk12 Volume Group to list in detail [
,none,q,?] (default: none) none To display detailed information about a volume group, select any of the volume groups from the above list.420 Offline data migration Converting LVM to VxVM Allocated PE Used PV --- Physical volumes --PV Name PV Status Total PE Free PE 125 1 /dev/disk/disk12 available 250 0 List another LVM Volume Group? [y,n,q,?] (default: n) Select an operation to perform: Note: The volume groups you want to convert must not be a root volume group or have bootable volumes in the group.
Offline data migration Converting LVM to VxVM Here are some LVM volume group selection examples: all: analyze all LVM Volume Groups (all except Root VG) listvg: list all LVM Volume Groups list: list all disk devices vg_name:a single LVM Volume Group, named vg_name : for example vg08 vg09 vg05 Select volume groups to analyze: [,all,list,listvg,q,?] vg08 Name a new disk group [,list,q,?] (default: dg08) Each volume group will be analyzed one at a time.
422 Offline data migration Converting LVM to VxVM ? ?? q Display help about menu Display help about the menuing system Exit from menus Select an operation to perform: 1 Analyze one or more LVM Volume Groups Menu: Volume Manager/LVM_Conversion/Analyze_LVM_VGs Use this operation to analyze one or more LVM volume groups for possible conversion using the VxVM Volume Manager. This operation checks for problems that would prevent the conversion from completing successfully.
Offline data migration Converting LVM to VxVM

RESERVED space sectors = 78
PRIVATE SPACE/FREE sectors = 98
AVAILABLE sector space = 49
AVAILABLE sector bytes = 50176
RECORDS needed to convert = 399
MAXIMUM records allowable = 392

The smallest disk in the Volume Group (vg08) does not have sufficient private space for the conversion to succeed.
424 Offline data migration Converting LVM to VxVM effect. For this release, only Non-root LVM Volume Groups are allowed to be converted. More than one Volume Group or pattern may be entered at the prompt.
Offline data migration Converting LVM to VxVM The conversion process will update the /etc/fstab file so that volume devices are used to mount the file systems on this disk device. You will need to update any other references, such as backup scripts, databases, or manually created swap devices. If you do not like the default names chosen for the corresponding logical volumes, you can rename them using vxedit.
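For example, to rename one of the converted volumes to something more meaningful (the disk group and volume names here are illustrative):

# vxedit -g dg08 rename lvol1 dg08_data
# vxprint -g dg08 -v

The second command lists the volumes in the disk group so that you can confirm the new name.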
426 Offline data migration Converting LVM to VxVM Volume Manager: Adding dg0801 (disk12) as a converted LVM disk. Adding volumes for disk12... Starting new volumes... Updating /etc/fstab... The system will now Convert the LVM Volume Groups over to VxVM disk groups.
Offline data migration Converting LVM to VxVM

NAME    VERSION    TYPE        PHYSICAL VOLUME
vg00    1.0        ROOT        disk10
vg05    1.0        Non-Root    disk11
vg03    2.0        Non-Root    disk14 disk15
vg08    1.0        Non-Root    disk12

Select Volume Groups to convert : [,all,list,listvg,q,?] vg08

vg08

Convert this Volume Group? [y,n,q,?] (default: y)

Name a new disk group [,list,q,?] (default: dg08)

The following disk has been found in the vg08 volume group and will be configured for conversion to a VxVM disk group.
428 Offline data migration Converting LVM to VxVM

AVAILABLE sector bytes = 50176
RECORDS needed to convert = 399
MAXIMUM records allowable = 392

The smallest disk in the Volume Group (vg08) does not have sufficient private space for the conversion to succeed. There is only enough private space for 392 VM Database records, and the conversion of Volume Group (vg08) would require enough space to allow 399 VxVM Database records.
Offline data migration Converting LVM to VxVM

What does vxvmconvert list display?

The device column identifies the physical disk; a disk name is shown only if the disk is under VxVM control; the group column shows the disk group name; and the status column indicates whether the disk is an LVM disk. A status of online means that VxVM recognizes the disk but does not have it under its control.

Example vxprint output before conversion

The list and listvg output is produced from within the vxvmconvert command. vxprint is a standalone command-line utility.
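The vxprint output discussed below can be produced with a command along these lines; the disk group name dg08 matches the conversion example above:

# vxprint -ht -g dg08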
430 Offline data migration Converting LVM to VxVM The vxprint output provides the following information: ■ The disk group dg08 contains the VxVM disk dg0801 and the volume dg08lv1. The VxVM disk dg0801 is associated with disk device c0t8d0 and is 2080768 blocks in length. The volume dg08lv1 is of type fsgen, is enabled in the VxVM kernel driver, is of length 102400, and is in the ACTIVE state. This means that the volume is started, and the plex is enabled.
Offline data migration Converting LVM to VxVM More than one Volume Group or pattern may be entered at the prompt.
432 Offline data migration Command differences Another factor in converting stripes is that striped volumes create more work for the converter. In some cases, the converter must process a 1 GB volume even though only the metadata is being changed. In other cases, where one volume spans more physical disks than another, there is more metadata to deal with. The converter has to read every physical extent map to ensure that there are no holes in the volume; if holes are found, the converter maps around them.
Offline data migration Converting LVM to VxVM To analyze volume groups for conversion ◆ Run the vxautoanalysis command: # /usr/sbin/vxautoanalysis [-f] [vgname ...] The volume groups may be specified by their names or full pathnames. If no volume groups are specified, analysis of all volume groups on the system is attempted. If the value of the system tunable, nproc, is too low, the analysis will report that the conversion analysis of the volume groups cannot be performed in parallel.
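For example, to restrict the analysis to two specific volume groups rather than analyzing every group on the system (the names vg08 and vg09 are illustrative):

# /usr/sbin/vxautoanalysis vg08 vg09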
434 Offline data migration Command differences Converting disk groups back to volume groups The vxautorollback utility converts one or more VxVM disk groups back to the LVM volume groups from which they had previously been converted. Note: The VxVM configuration daemon (vxconfigd) must be running in order for the analysis to succeed. Reverse conversion is performed on each disk group in turn. Parallel conversion is not supported.
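The command-line form of the reverse conversion is not reproduced here. Assuming an invocation parallel to vxautoanalysis, converting a single disk group back would look something like the following; verify the exact syntax against the vxautorollback(1M) manual page before running it (the disk group name is illustrative):

# /usr/sbin/vxautorollback dg08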
Offline data migration Command differences

LVM and VxVM command equivalents

The table below lists the LVM commands and a near-equivalent command to use in VxVM. For more information, refer to the Task Comparison chart. Additional information is available on VxVM commands. Refer to the Veritas Volume Manager documentation package.

Table 27-2 Command comparison
LVM Description/action VxVM Description/action
lvchange Changes the characteristics of logical volumes. vxedit
436 Offline data migration Command differences

Table 27-2 Command comparison (continued)
LVM Description/action VxVM Description/action
lvreduce Decreases disk space allocated to a logical volume. vxassist Decreases a volume in size with the shrinkto or shrinkby parameters. Example: vxassist shrinkto vol_name 200M. Make sure you shrink the file system before shrinking the volume (see the example following Table 27-2).
lvremove Removes one or more logical volumes from a volume group. vxedit
Offline data migration Command differences

Table 27-2 Command comparison (continued)
LVM Description/action VxVM Description/action
lvsync Synchronizes mirrors that are stale in one or more logical volumes. vxrecover, vxvol start The vxrecover command performs resynchronization operations for the volumes, or for volumes residing on the named disks (medianame, the VxVM name for the disk). Example: vxrecover vol_name media_name
pvcreate Makes a disk an LVM disk.
438 Offline data migration Command differences

Table 27-2 Command comparison (continued)
LVM Description/action VxVM Description/action
pvmove Moves allocated physical extents from source to destination within a volume group. vxevac, vxsd mv, vxdiskadm vxevac moves volumes off a disk. vxsd mv performs volume operations on a subdisk: it moves the contents of the old subdisk onto the new subdisks and replaces the old subdisk with the new subdisks for any associations.
Offline data migration Command differences

Table 27-2 Command comparison (continued)
LVM Description/action VxVM Description/action
vgscan Scans all disks and looks for logical volume groups. vxinfo Displays information about volumes. vxprint Displays complete or partial information from records in VxVM disk group configurations. vxdiskadm The list option in the vxdiskadm menu displays disk information.
vgsync Synchronizes mirrors that are stale in one or more logical volumes. vxrecover
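As the lvreduce/vxassist shrinkto entry in Table 27-2 notes, the file system must be shrunk before the volume. A sketch of that order of operations for a VxFS file system, with illustrative mount point, disk group, volume name, and sizes (fsadm -b takes the new size in 512-byte sectors, so 409600 sectors is 200 MB):

# fsadm -F vxfs -b 409600 /data01
# vxassist -g datadg shrinkto datavol 200m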
440 Offline data migration Command differences Note: The following features in VxVM require an additional license: Mirroring, Mirroring and Striping, Dynamic Multi-Pathing of Active/Active Devices, Hot-relocation, Online Migration, and RAID-5. All the VxVM tasks listed in the task comparison chart can be performed by the Veritas Enterprise Administrator. See the Veritas Enterprise Administrator User’s Guide. Additional information is available on LVM and VxVM commands.
Offline data migration Command differences 441

Table 27-3 LVM and VxVM task comparisons (continued)
Task type Description Example
LVM Extend a logical volume or increase space allocated to a logical volume. lvextend -l 50 /dev/vol_grp/lvol_name (-l indicates the number of logical extents in the logical volume)
VxVM Increase the volume by or to a given length.
442 Offline data migration Command differences Table 27-3 LVM and VxVM task comparisons (continued) Task type Description Example LVM Back up volume group configuration information. vgcfgbackup -f /pathname/filename vol_grp VxVM Back up volume group configuration information. dgcfgbackup -f /pathname/filename vol_grp LVM Restore volume group configuration to a particular physical volume.
Offline data migration Command differences Table 27-3 LVM and VxVM task comparisons (continued) Task type Description Example LVM Mirroring a disk involves several steps.
444 Offline data migration Command differences Table 27-3 LVM and VxVM task comparisons (continued) Task type Description Example VxVM Display all volume information. vxprint -vt Display information about a specific volume. vxprint -ht vol_name LVM Display information about volume groups. vgdisplay -v /dev/vol_grp VxVM Display disk group information. vxdisk list Display information about a specific disk group.
Offline data migration Command differences Table 27-3 LVM and VxVM task comparisons (continued) Task type Description Example LVM Set up alternate links to a physical volume. vgcreate /dev/vol_grp\ /dev/dsk/ disk_name /dev/dsk/disk_name_2 If a disk has two controllers, you can make one primary and the other an alternate link.
446 Offline data migration Command differences Table 27-3 LVM and VxVM task comparisons (continued) Task type Description Example VxVM Snapshot a volume and create a new volume. vxassist snapshot vol_name new_vol_name LVM Combine two logical volumes back into a mirrored logical volume lvmerge /dev/vol_grp/split_vol_name\ /dev/vol_grp/lvol_name split_vol_name= active logical volume VxVM Returns the snapshot plex to the original volume from which it was snapped.
Offline data migration Command differences Table 27-3 LVM and VxVM task comparisons (continued) Task type Description Example LVM Make a disk available as a hot spare. pvchange -z y /dev/dsk/disk_name VxVM Make a disk available as a hot spare.
448 Offline data migration Command differences Table 27-4 Additional VxVM tasks with no LVM equivalents (continued) Task descriptions Examples Evacuate a disk. vxevac -g disk_group medianame new_medianame Replace a disk. Select menu option 4 of vxdiskadm. Recover volumes on a disk. vxrecover -g disk_group vol_name medianame Display a DMP node. vxdisk list meta_device Rename a disk group. vxdg -tC -n newdg_name Rename a volume.
Offline data migration SMH and the VEA

Table 27-5 LVM features and VxVM equivalents (continued)
LVM Feature VxVM Equivalent
Powerfail timeout feature: Automatically re-enables a disk or a path to a disk after a temporary error condition (resulting in an EPOWERF error on I/Os) disappears on that disk or path. Powerfail timeout feature: After the EPOWERF error condition disappears, the reconfiguration command must be run manually to re-enable the paths and disks that were disabled due to the EPOWERF error.
450 Offline data migration SMH and the VEA For information about the VEA, see the Veritas Enterprise Administrator User’s Guide and the online help that is available from within the VEA. Displaying disk devices in SMH To display disk devices in SMH, select Tools > Disks and File Systems > Disks. The Disks tab of the HP-UX Disks and File Systems Tool screen lists the system’s disk devices. To switch between legacy device names and new agile device names, click on Toggle Global Device View.
Offline data migration SMH and the VEA Figure 27-1 Displaying disk devices in SMH Displaying volume groups and disk groups in SMH To display volume groups and disk groups in SMH, select Tools > Disks and File Systems > Volume Groups. The Volume Groups screen lists all the LVM volume groups and VxVM disk groups that are on the system. A more detailed description of a volume group’s properties can be obtained by selecting the radio button to the left of a listed volume group or disk group.
452 Offline data migration SMH and the VEA Figure 27-2 Displaying LVM volume groups and VxVM disk groups in SMH Displaying logical volumes in SMH To display logical volumes in SMH, select Tools > Disks and File Systems > Logical Volumes. The Logical Volumes screen lists the LVM logical volumes and VxVM volumes on the system. The “Type” column indicates whether a volume is controlled by LVM or VxVM. The “Use” column shows whether a volume is in use and if so, what it is used for.
Offline data migration SMH and the VEA Figure 27-3 shows an example Logical Volumes screen. The LVM logical volumes in the vg00 volume group are being used for HFS and VxFS file systems and for swap and dump. The myvol1 and myvol2 VxVM volumes in the mydg disk group are being used for VxFS and HFS file systems. The remaining VxVM volume, myvol3, is not currently in use.
Chapter 28 Migrating data between platforms This chapter includes the following topics: ■ Overview of CDS ■ CDS disk format and disk groups ■ Setting up your system ■ Maintaining your system ■ File system considerations ■ Alignment value and block size ■ Moving disk groups between HP-UX and Linux systems ■ Migrating a snapshot volume Overview of CDS This section presents an overview of the Cross-Platform Data Sharing (CDS) feature of Symantec’s Veritas Storage Foundation™ software.
456 Migrating data between platforms Overview of CDS The Cross-Platform Data Sharing feature is also known as Portable Data Containers (PDC). For consistency, this document uses the name Cross-Platform Data Sharing throughout. The following levels in the device hierarchy, from disk through file system, must provide support for CDS to be used: End-user applications Application level. Veritas™ File System (VxFS) File system level. Veritas™ Volume Manager (VxVM) Volume level.
Migrating data between platforms CDS disk format and disk groups Note: You do not need a file system in the stack if the operating system provides access to raw disks and volumes, and the application can utilize them. Databases and other applications can have their data components built on top of raw volumes without having a file system to store their data files.
458 Migrating data between platforms CDS disk format and disk groups CDS disk access and format For a disk to be accessible by multiple platforms, the disk must be consistently recognized by the platforms, and all platforms must be capable of performing I/O on the disk. CDS disks contain specific content at specific locations to identify or control access to the disk on different platforms.
Migrating data between platforms CDS disk format and disk groups configuration database. The default private region size is 32MB, which is large enough to record the details of several thousand VxVM objects in a disk group. The public region covers the remainder of the disk, and is used for the allocation of storage space to subdisks. The private and public regions are aligned and sized in multiples of 8K to permit the operation of CDS.
460 Migrating data between platforms CDS disk format and disk groups supported platform. A CDS disk group is composed only of CDS disks (VM disks with the disk format cdsdisk), and is only available for disk group version 110 and greater. Starting with disk group version 160, CDS disk groups can support disks of greater than 1 TB.
Migrating data between platforms CDS disk format and disk groups You can limit the number of devices that can be created in a given CDS disk group by setting the device quota. See “Setting the maximum number of devices for CDS disk groups” on page 478. When you create a device, an error is returned if the number of devices would exceed the device quota. You then either need to increase the quota, or remove some objects using device numbers, before the device can be created.
462 Migrating data between platforms CDS disk format and disk groups ■ Volume length ■ Log length ■ Stripe width The offset value specifies how an object is positioned on a drive. The disk group alignment is assigned at disk group creation time. See “Disk group tasks” on page 475. Alignment values The disk group block alignment has two values: 1 block or 8k (8 kilobytes). All CDS disk groups must have an alignment value of 8k.
Migrating data between platforms Setting up your system In a version 110 disk group, a version 20 DCO volume has the following region requirements: ■ Minimum region size of 16K ■ Incremental region size of 8K Note: The map layout within a Data Change Object (DCO) volume changed with the release of VxVM 4.0 to version 20. This can accommodate both FastResync and DRL maps within the DCO volume. The original version 0 layout for DCO volumes only accommodates FastResync maps.
464 Migrating data between platforms Setting up your system Table 28-1 Setting up CDS disks and CDS disk groups Task Procedures Create the CDS disks. You can create a CDS disk in one of the following ways: Creating CDS disks from uninitialized disks See “Creating CDS disks from uninitialized disks” on page 464. ■ Creating CDS disks from initialized VxVM disks See “Creating CDS disks from initialized VxVM disks” on page 465.
Migrating data between platforms Setting up your system # vxdisksetup -i disk [format=disk_format] The format defaults to cdsdisk unless this is overridden by the /etc/default/vxdisk file, or by specifying the disk format as an argument to the format attribute. See “Defaults files” on page 471. See the vxdisksetup(1M) manual page.
466 Migrating data between platforms Setting up your system Creating a CDS disk from a disk that is not in a disk group To create a CDS disk from a disk that is not in a disk group 1 Run the following command to remove the VM disk format for the disk: # vxdiskunsetup disk This is necessary as non-auto types cannot be reinitialized by vxdisksetup.
Migrating data between platforms Setting up your system ■ Type the following command: # vxdg init diskgroup disklist [cds={on|off}] The format defaults to a CDS disk group, unless this is overridden by the /etc/default/vxdg file, or by specifying the cds argument. See the vxdg(1M) manual page for more information. Creating a CDS disk group by using vxdiskadm You cannot create a CDS disk group when encapsulating an existing disk, or when converting an LVM volume.
468 Migrating data between platforms Setting up your system alldisks Converts all non-CDS disks in the disk group into CDS disks. disk Specifies a single disk for conversion. You would use this option under the following circumstances: If a disk in the non-CDS disk group has cross-platform exposure, you may want other VxVM nodes to recognize the disk, but not to assume that it is available for initialization.
Migrating data between platforms Setting up your system

To verify whether a non-CDS disk group can be converted to a CDS disk group, type the following command:

# vxcdsconvert -g diskgroup -A group

3 If the disk group does not have a CDS-compatible disk group alignment, the objects in the disk group must be relaid out with a CDS-compatible alignment.
470 Migrating data between platforms Setting up your system ■ Non-CDS disk groups are upgraded by using the vxdg upgrade command. If the disk group was originally created by the conversion of an LVM volume group (VG), rolling back to the original LVM VG is not possible. If you decide to go through with the conversion, the rollback records for the disk group will be removed, so that an accidental rollback to an LVM VG cannot be done.
Migrating data between platforms Setting up your system Defaults files The following system defaults files in the /etc/default directory are used to specify the alignment of VxVM objects, the initialization or encapsulation of VM disks, the conversion of LVM disks, and the conversion of disk groups and their disks to the CDS-compatible format vxassist Specifies default values for the following parameters to the vxcdsconvert command that have an effect on the alignment of VxVM objects: dgalign_checking, di
472 Migrating data between platforms Maintaining your system

vxdisk Specifies default values for the format and privlen parameters to the vxdisk and vxdisksetup commands. These commands are used when disks are initialized by VxVM for the first time. They are also called implicitly by the vxdiskadm command and the Storage Foundation Manager (SFM) GUI. The following is a sample vxdisk defaults file:
format=cdsdisk
privlen=2048
See the vxdisk(1M) manual page.
See the vxdisksetup(1M) manual page.
Migrating data between platforms Maintaining your system See “Disk tasks” on page 473. ■ Disk group tasks See “Disk group tasks” on page 475. ■ Displaying information See “Displaying information” on page 481. ■ Default activation mode of shared disk groups See “Default activation mode of shared disk groups” on page 484. ■ Additional considerations when importing CDS disk groups See “Defaults files” on page 471.
474 Migrating data between platforms Maintaining your system ■ AIX coexistence label ■ HP-UX coexistence or VxVM ID block There are also backup copies of each. If any of the primary labels become corrupted, VxVM will not bring the disk online and user intervention is required. If two labels are intact, the disk is still recognized as a cdsdisk (though in the error state) and vxdisk flush can be used to restore the CDS disk labels from their backup copies.
Migrating data between platforms Maintaining your system # vxdisk -f flush disk_access_name This command rewrites all labels if there exists a valid VxVM ID block that points to a valid private region. The -f option is required to rewrite sectors 7 and 16 when a disk is taken offline due to label corruption (possibly by a Windows system on the same fabric).
476 Migrating data between platforms Maintaining your system Changing the alignment of a non-CDS disk group The alignment value can only be changed for disk groups with version 110 or greater. For a CDS disk group, alignment can only take a value of 8k. Attempts to set the alignment of a CDS disk group to 1 fail unless you first change it to a non-CDS disk group. Increasing the alignment may require vxcdsconvert to be run to change the layout of the objects in the disk group.
Migrating data between platforms Maintaining your system Moving objects between CDS disk groups and non-CDS disk groups The alignment of a source non-CDS disk group must be 8K to allow objects to be moved to a target CDS disk group. If objects are moved from a CDS disk group to a target non-CDS disk group with an alignment of 1, the alignment of the target disk group remains unchanged.
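A sketch of moving a single volume and its associated objects between disk groups; the disk group and volume names are illustrative, and both disk groups must be imported on the host:

# vxdg move srcdg targetdg vol01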
478 Migrating data between platforms Maintaining your system ■ Type the following vxdg command: # vxdg -T version init diskgroup disk_name=disk_access_name Upgrading an older version non-CDS disk group You may want to upgrade a non-CDS disk group with a version lower than 110 in order to use new features other than CDS. After upgrading the disk group, the cds attribute is set to off, and the disk group has an alignment of 1.
Migrating data between platforms Maintaining your system ■ Type the following vxdg set command: # vxdg -g diskgroup set maxdev=max-devices The maxdev attribute can take any positive integer value that is greater than the number of devices that are currently in the disk group. Changing the DRL map and log size If DRL is enabled on a newly-created volume without specifying a log or map size, default values are used.
480 Migrating data between platforms Maintaining your system # vxassist -g diskgroup make volume length mirror=2 \ logtype=drl [loglen=len-blocks] [logmap_len=len-bytes] This command creates log subdisks that are each equal to the size of the DRL log. Note the following restrictions If neither logmap_len nor loglen is specified ■ If only loglen is specified ■ For pre-version 110 disk groups, maplen is set to zero.
Migrating data between platforms Maintaining your system If both logmap_len and loglen are specified ■ if logmap_len is greater than loglen/2, vxvol fails with an error message. Either increase loglen to a sufficiently large value, or decrease logmap_len to a sufficiently small value. ■ The value of logmap_len cannot exceed the number of bytes in the on-disk map. If logmap_len is specified ■ The value is constrained by size of the log, and cannot exceed the size of the on-disk map.
482 Migrating data between platforms Maintaining your system Determining the setting of the CDS attribute on a disk group To determine the setting of the CDS attribute on a disk group ■ Use the vxdg list command or the vxprint command to determine the setting of the CDS attribute, as shown in the following examples: # vxdg list NAME dgTestSol2 STATE enabled,cds ID 1063238039.206.vmesc1 # vxdg list dgTestSol2 Group: dgid: import-id: flags: version: alignment: . . . dgTestSol2 1063238039.206.
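A sketch of the upgrade itself, assuming the disk group is named mydg; run without the -T option to move to the latest disk group version supported by the installed VxVM release:

# vxdg upgrade mydg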
Migrating data between platforms Maintaining your system # vxprint -g diskgroup -vl volume # vxprint -g diskgroup -vF '%name %logmap_len %logmap_align' \ volume Displaying the disk group alignment To display the disk group alignment ■ Type the following command: # vxprint -g diskgroup -G -F %align Utilities such as vxprint and vxdg list that print information about disk group records also output the disk group alignment.
484 Migrating data between platforms Maintaining your system logging: type=REGION loglen=528 serial=0/0 mapalign=16 maplen=512 (enabled) apprecov: seqno=0/0 recovery: mode=default recov_id=0 device: minor=46000 bdev=212/46000 cdev=212/46000 path=/dev/vx/dsk/dgTestSol/drlvol perms: user=root group=root mode=0600 guid: {d968de3e-1dd1-11b2-8fc1-080020d223e5} Displaying offset and length information in units of 512 bytes To display offset and length information in units of 512 bytes ■ Specify the -b option
Migrating data between platforms File system considerations Does the target system know about the disks? For example, the disks may not have been connected to the system either physically (not cabled) or logically (using FC zoning or LUN masking) when the system was booted up, but they have subsequently been connected without rebooting the system. This can happen when bringing new storage on-line, or when adding an additional DMP path to existing storage.
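If the storage was attached after boot, the devices can usually be brought into view without a reboot. A sketch using the standard HP-UX and VxVM scanning commands (no non-default options are assumed):

# ioscan -fnC disk
# insf -e
# vxdctl enable

ioscan probes for the new hardware paths, insf creates any missing device special files, and vxdctl enable makes the VxVM configuration daemon rescan and bring the new disks online. Alternatively, vxdisk scandisks can be used to scan for newly attached devices only.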
486 Migrating data between platforms File system considerations Considerations about data in the file system Data within a file system might not be in the appropriate format to be accessed if moved between different types of systems. For example, files stored in proprietary binary formats often require conversion for use on the target platform.
Migrating data between platforms File system considerations to as one-time file system migration. When ongoing file system migration between multiple systems is desired, this is known as ongoing file system migration. Different actions are required depending on the kind of migration, as described in the following sections.
488 Migrating data between platforms File system considerations Note: The default CDS limits information file, /etc/vx/cdslimitstab, is installed as part of the VxFS package. The contents of this file are used by the VxFS CDS commands and should not be altered. Examples of target specifications The following are examples of target specifications: os_name=HP-UX Specifies the target operating system and uses defaults for the remainder. os_name=HP-UX, os_rel=11.23, arch=pa, vxfs_version=5.
Migrating data between platforms File system considerations Maintaining the list of target operating systems When a file system is migrated on an ongoing basis between multiple systems, the types of operating systems that are involved in these migrations are maintained in a target_list file. Knowing what these targets are allows VxFS to determine file system limits that are appropriate to all of these targets. The file system limits that are enforced are file size, user ID, and group ID.
490 Migrating data between platforms File system considerations To enforce the established CDS limits on a file system ■ Type the following command: # fscdsadm -l enforce mount_point Ignoring the established CDS limits on a file system By default, CDS ignores the limits that are implied by the operating system targets that are listed in the target_list file.
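To switch back from enforcing to ignoring the limits, the corresponding fscdsadm invocation is presumably the following; this is an assumption based on the enforce form shown above, so confirm it against the fscdsadm(1M) manual page:

# fscdsadm -l ignore mount_point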
Migrating data between platforms File system considerations Migrating a file system one time This example describes a one-time migration of data between two operating systems. Some of the following steps require a backup of the file system to be created. To simplify the process, you can create one backup before performing any of the steps instead of creating multiple backups as you go.
492 Migrating data between platforms File system considerations be created. To simplify the process, you can create one backup before performing any of the steps instead of creating multiple backups as you go.
Migrating data between platforms File system considerations 6 Make the physical storage and Volume Manager logical storage accessible on the target system by exporting the disk group from the source system and importing the disk group on the target system after resolving any other physical storage attachment issues. See “Disk tasks” on page 473. 7 Mount the file system on the target system.
494 Migrating data between platforms File system considerations To convert the byte order of a file system 1 Determine the disk layout version of the file system that you will migrate: # fstyp -v /dev/vx/rdsk/diskgroup/volume | grep version magic a501fcf5 version 7 ctime Thu Jun 1 16:16:53 2006 Only file systems with Version 6 or later disk layout can be converted. If the file system has an earlier disk layout version, convert the file system to Version 6 or Version 7 disk layout before proceeding.
Migrating data between platforms File system considerations 5 Use the fscdsconv command to export the file system to the required target: # fscdsconv -f recovery_file -t target -e special_device target specifies the system to which you are migrating the file system. See “Specifying the migration target” on page 487. recovery_file is the name of the recovery file to be created by the fscdsconv command. special_device is the raw device or volume that contains the file system to be converted.
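For example, to export the file system to a Linux target, assuming the target is specified with the os_name keyword shown earlier (the recovery-file path, disk group, and volume names are illustrative):

# fscdsconv -f /tmp/fs_recov/recov.file -t os_name=Linux \
    -e /dev/vx/rdsk/datadg/vol01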
496 Migrating data between platforms File system considerations 8 If the byte order of the file system must be converted to migrate the file system to the specified target, fscdsconv prompts you to confirm the migration. Enter y to convert the byte order of the file system. If the byte order does not need to be converted, a message displays indicating this fact.
Migrating data between platforms Alignment value and block size Importing and mounting a file system from another system The fscdsconv command can be used to import and mount a file system that was previously used on another system. To import and mount a file system from another system ◆ Convert the file system: # fscdsconv -f recovery_file -i special_device If the byte order of the file system needs to be converted Enter y to convert the byte order of the file system when prompted by fscdsconv.
498 Migrating data between platforms Migrating a snapshot volume imported on that platform because import would exhaust available minor devices for the VxVM driver. Although the case of minor number exhaustion is possible in a homogeneous environment, it will be more pronounced between platforms with different values for the maximum number of devices supported, such as Linux with a pre-2.6 kernel.
Migrating data between platforms Migrating a snapshot volume 6 499 Check the integrity of the file system, and then mount it on a suitable mount point: # fsck -F vxfs /dev/vx/rdsk/datadg/snapvol # mount -F vxfs /dev/vx/dsk/datadg/snapvol /mnt 7 Confirm whether the file system can be converted to the target operating system: # fscdstask validate Linux /mnt 8 Unmount the snapshot: # umount /mnt 9 Convert the file system to the opposite endian: # fscdsconv -f /tmp/fs_recov/recov.
Section Reference ■ Appendix A. Recovering from CDS errors ■ Appendix B. Conversion error messages ■ Appendix C. Files and scripts for sample scenarios ■ Appendix D.
Appendix A Recovering from CDS errors This appendix includes the following topics: ■ CDS error codes and recovery actions

CDS error codes and recovery actions

Table A-1 lists the CDS error codes and the action that is required.

Table A-1 Error codes and required actions
Error number Message Action
329 Cannot join a non-CDS disk group and a CDS disk group. Change the non-CDS disk group into a CDS disk group (or vice versa), then retry the join operation.
504 Recovering from CDS errors CDS error codes and recovery actions

Table A-1 Error codes and required actions (continued)
Error number Message Action
333 Non-CDS disk cannot be placed in a CDS disk group. Do one of the following:
■ Add the disk to another disk group that is a non-CDS disk group.
■ Re-initialize the disk as a CDS disk so that it can be added to the CDS disk group.
■ Change the CDS disk group into a non-CDS disk group and then add the disk.
Recovering from CDS errors CDS error codes and recovery actions Table A-1 Error codes and required actions (continued) Error number Message Action 341 Too many device nodes in disk group Increase the number of device nodes allowed in the disk group, if not already at the maximum. Otherwise, you need to remove volumes from the disk group, possibly by splitting the disk group.
Appendix B Conversion error messages This appendix includes the following topics: ■ List of conversion error messages List of conversion error messages This appendix lists the error messages that you may encounter when converting LVM volume groups to VxVM disk groups and volumes. For each error message, a description is provided of the problem, and the action that you can take to troubleshoot it. Table B-1 shows the error messages that you may encounter during conversion.
508 Conversion error messages List of conversion error messages Table B-1 Conversion error messages (continued) Message Description Device device_name has the following bad blocks... Cannot convert LVM Volume Group Unlike LVM, VxVM does not support bad block revectoring at the physical volume level. If there appear to be any valid bad blocks in the bad block directory (BBDIR) of any disk used in an LVM volume group, the group cannot be converted.
Conversion error messages List of conversion error messages Table B-1 Conversion error messages (continued) Message Description This Volume Group contains one or more logical volumes with mirrored data If you attempt to convert a Mirrored LVM Volume Group without a valid VxVM license installed, the conversion is not allowed. Install the required license before attempting the conversion.
510 Conversion error messages List of conversion error messages

Table B-1 Conversion error messages (continued)

Message: VxVM ERROR V-5-2-0 The LVM Volume Group (longVGname) has Logical Volume (1234567890123456789012345678901) having > 28 characters. VxVM doesn't allow Volume names that long. Please reduce the name of the Logical Volume and retry the conversion.

Description: The conversion of a logical volume containing more than 28 characters in the logical volume name is not supported.
Appendix C Files and scripts for sample scenarios This appendix includes the following topics: ■ About files and scripts for sample scenarios ■ Script to initiate online off-host backup of an Oracle database ■ Script to put an Oracle database into hot backup mode ■ Script to quiesce a Sybase ASE database ■ Script to suspend I/O for a DB2 database ■ Script to end Oracle database hot backup mode ■ Script to release a Sybase ASE database from quiesce mode ■ Script to resume I/O for a DB2 datab
512 Files and scripts for sample scenarios About files and scripts for sample scenarios Note: These scripts are not supported by Symantec, and are provided for informational use only. You can purchase customization of the environment through Veritas Vpro Consulting Services. Table C-1 list the files and scripts. Table C-1 Files and scripts for sample scenarios File or script Used for... Sample script to initiate online offhost backup of an Oracle database. ■ Online off-host backup.
Files and scripts for sample scenarios Script to initiate online off-host backup of an Oracle database Table C-1 513 Files and scripts for sample scenarios (continued) File or script Used for... Sample script to create off-host replica Oracle database. ■ Decision support. See “Creating an off-host replica database” on page 184. ■ Decision support. See “Creating an off-host replica database” on page 184. See “Script to create an off-host replica Oracle database” on page 519.
514 Files and scripts for sample scenarios Script to initiate online off-host backup of an Oracle database newvollist=”snap_dbase_vol source=dbase_vol/newvol=snap_dbase_vol” snapvollist=”snap_dbase_vol” volsnaplist=”snap_dbase_vol source=dbase_vol” exit_cnt=0 arch_loc=/archlog # Put the Oracle database in hot-backup mode; # see the backup_start.sh script for information. su oracle -c backup_start.sh # # # # # # Refresh the snapshots of the volumes.
Files and scripts for sample scenarios Script to put an Oracle database into hot backup mode # cluster-shareable, you must also specify the -s option. vxdg import $snapdg # Join the snapshot disk group to the original volume disk group. vxdg join $snapdg $dbasedg # Restart the snapshot volumes. for i in ‘echo $snapvollist‘ do vxrecover -g $dbasedg -m $i vxvol -g $dbasedg start $i done # Reattach the snapshot volumes ready for the next backup cycle.
516 Files and scripts for sample scenarios Script to quiesce a Sybase ASE database alter tablespace tsN begin backup; quit ! Script to quiesce a Sybase ASE database Use this script to quiesce a Sybase ASE database. #!/bin/ksh # # script: backup_start.sh # # Sample script to quiesce example Sybase ASE database. # # Note: The “for external dump” clause was introduced in Sybase # ASE 12.5 to allow a snapshot database to be rolled forward. # See the Sybase ASE 12.5 documentation for more information.
Files and scripts for sample scenarios Script to end Oracle database hot backup mode 517 Script to end Oracle database hot backup mode Use this script to end Oracle database hot backup mode. #!/bin/ksh # # script: backup_end.sh # # Sample script to end hot backup mode for example Oracle database. export ORACLE_SID=dbase export ORACLE_HOME=/oracle/816 export PATH=$ORACLE_HOME/bin:$PATH svrmgrl <
518 Files and scripts for sample scenarios Script to resume I/O for a DB2 database isql -Usa -Ppassword -SFMR <
Files and scripts for sample scenarios Script to create an off-host replica Oracle database 519 vxdg import $snapvoldg # Mount the snapshot volumes (the mount points must already exist). for i in $* do fsck -F vxfs /dev/vx/rdsk/$dbasedg/snap_$i mount -F vxfs /dev/vx/dsk/$dbasedg/snap_$i /bak/$i done # Back up each tablespace. # back up /bak/ts1 & ... # back up /bak/tsN & wait # Unmount snapshot volumes. for i in ′echo $vollist′ do umount /bak/$i done # Deport snapshot volume disk group.
520 Files and scripts for sample scenarios Script to create an off-host replica Oracle database # you understand the procedure and commands for implementing # an off-host point-in-time copy solution.
Files and scripts for sample scenarios Script to complete, recover and start a replica Oracle database 521 vxdg deport $snapdg # # # # Copy the archive logs that were generated while the database was in hot backup mode (as reported by the Oracle Server Manager) to the archive log location for the replica database on the OHP node (in this example, /rep/archlog).
522 Files and scripts for sample scenarios Script to complete, recover and start a replica Oracle database export PATH=$ORACLE_HOME/bin:$PATH snapvoldg=snapdbdg rep_mnt_point=/rep # Import the snapshot volume disk group. vxdg import $snapvoldg # Mount the snapshot volumes (the mount points must already exist). for i in $* do fsck -F vxfs /dev/vx/rdsk/$snapvoldg/snap_$i mount -F vxfs /dev/vx/dsk/$snapvoldg/snap_$i ${rep_mnt_point}/$i done # Fix any symbolic links required by the database.
Files and scripts for sample scenarios Script to start a replica Sybase ASE database Script to start a replica Sybase ASE database Use this script to start a replica Sybase ASE database. #!/bin/ksh # # script: startdb.sh # # Sample script to recover and start replica Sybase ASE database. # Import the snapshot volume disk group. vxdg import $snapvoldg # Mount the snapshot volumes (the mount points must already exist).
524 Files and scripts for sample scenarios Script to start a replica Sybase ASE database quit !
Appendix D Preparing a replica Oracle database This appendix includes the following topics: ■ About preparing a replica Oracle database ■ Text control file for original production database ■ SQL script to create a control file ■ Initialization file for original production database ■ Initialization file for replica Oracle database About preparing a replica Oracle database This appendix describes how to set up a replica off-host Oracle database to be used for decision support.
526 Preparing a replica Oracle database About preparing a replica Oracle database To prepare a replica Oracle database on a host other than the primary host 1 If not already present, install the Oracle software onto the host’s local disks. The location of the Oracle home directory ($ORACLE_HOME) is used for the database instance that is created from the snapshot volumes. Note: In the examples shown here, the home directory is /rep/oracle in the local disk group, localdg.
Preparing a replica Oracle database About preparing a replica Oracle database 5 Mount the redo log and archive log volumes on their respective mount points using the following command: # mount -F vxfs /dev/vx/dsk/diskgroup/volume mount_point In this example, the commands would be: # mount -F vxfs /dev/vx/dsk/localdg/rep_dbase_logs \ /rep/dbase_logs # mount -F vxfs /dev/vx/dsk/localdg/rep_dbase_arch \ /rep/dbase_arch 6 As the Oracle database administrator on the primary host, obtain an ASCII version of
528 Preparing a replica Oracle database About preparing a replica Oracle database CREATE CONTROLFILE REUSE DATABASE "odb" NORESETLOGS \ ARCHIVELOG so that it reads: CREATE CONTROLFILE SET DATABASE "ndb" RESETLOGS \ NOARCHIVELOG where odb is the name of the original database and ndb is the name of the replica database (DBASE and REP1 in the example). Note that to reduce unnecessary overhead, the new database is not run in archive log mode. See “SQL script to create a control file” on page 530.
Preparing a replica Oracle database Text control file for original production database Text control file for original production database The following example shows the text control file for the original production database. /oracle/816/admin/dbase/udump/dbase_ora_20480.trc Oracle8i Enterprise Edition Release 8.1.6.0.0 - Production With the Partitioning option JServer Release 8.1.6.0.0 - Production ORACLE_HOME = /oracle/816 System name: SunOS Node name: node01 Release: 5.
530 Preparing a replica Oracle database SQL script to create a control file # . ’/dbase_vol/tsN’ CHARACTER SET US7ASCII; # Recovery is required if any of the datafiles are restored backups, # or if the last shutdown was not normal or immediate. RECOVER DATABASE # All logs need archiving and a log switch is needed. ALTER SYSTEM ARCHIVE LOG ALL; # Database can now be opened normally. ALTER DATABASE OPEN; # No tempfile entries found to add.
Preparing a replica Oracle database Initialization file for original production database Initialization file for original production database The following example shows the initialization file for the original production database. #==================================================================+ # FILENAME initdbase.ora # DESCRIPTION Oracle parameter file for primary database, dbase.
532 Preparing a replica Oracle database Initialization file for replica Oracle database distributed_transactions = 0 transactions_per_rollback_segment = 1 rollback_segments = (s1,s2,s3,s4,s5,s6,s7,s8,s9,s10,s11,s12,s13,s14,s15,s16,s17,s18,s19,s20,s21,s22,s23,s2 4,s25,s26,s27,s28,s29,s30) shared_pool_size = 7000000 cursor_space_for_time = TRUE audit_trail = FALSE cursor_space_for_time = TRUE background_dump_dest = /oracle/816/admin/dbase/bdump core_dump_dest = /oracle/816/admin/dbase/cdump user_dump_dest =
Preparing a replica Oracle database Initialization file for replica Oracle database log_checkpoints_to_alert = TRUE log_buffer = 1048576 max_rollback_segments = 220 processes = 300 sessions = 400 open_cursors = 200 transactions = 400 distributed_transactions = 0 transactions_per_rollback_segment = 1 rollback_segments = (s1,s2,s3,s4,s5,s6,s7,s8,s9,s10,s11,s12,s13,s14,s15,s16,s17,s18,s19,s20,s21,s22,s23,s2 4,s25,s26,s27,s28,s29,s30) shared_pool_size = 7000000 cursor_space_for_time = TRUE audit_trail = FALSE
Glossary address-length pair Identifies the starting block address and the length of an extent (in file system or logical blocks). asynchronous I/O A format of I/O that performs non-blocking reads and writes. This enables the system to handle multiple I/O requests simultaneously. atomic operation An operation that either succeeds completely or fails and leaves everything as it was before the operation was started.
536 Glossary clicking on the task in the Command Launcher. concatenation A Veritas Volume Manager layout style characterized by subdisks that are arranged sequentially and contiguously. concurrent I/O A form of Direct I/O that does not require file-level write locks when writing to a file. Concurrent I/O allows the relational database management system (RDBMS) to write to a given file concurrently.
Glossary (for example, enc0_2). The term disk access name can also be used to refer to a device name. direct I/O An unbuffered form of I/O that bypasses the kernel’s buffering of data. With direct I/O, data is transferred directly between the disk and the user application. Dirty Region Logging The procedure by which the Veritas Volume Manager monitors and logs modifications to a plex. A bitmap of changed regions is kept in an associated subdisk called a log subdisk.
538 Glossary extent A logical database attribute that defines a group of contiguous file system data blocks that are treated as a unit. An extent is defined by a starting block and a length. extent attributes An extent allocation policy that is associated with a file and/or file system. See also address-length pair. failover The act of moving a service from a failure state back to a running/available state.
Glossary ownership, access mode (permissions), access time, file size, file type, and the block map for the data contents of the file. Each inode is identified by a unique inode number in the file system where it resides. The inode number is used to find the inode in the inode list for the file system. The inode list is a series of inodes. There is one inode in the list for every file in the file system. intent logging A logging scheme that records pending changes to a file system structure.
540 Glossary megabyte A measure of memory or storage. A megabyte is approximately 1,000,000 bytes (technically, 2 to the 20th power, or 1,048,576 bytes). Also MB, Mbyte, mbyte, and K-byte. metadata Data that describes other data. Data dictionaries and repositories are examples of metadata. The term may also refer to any file or database that holds information about another database's structure, attributes, processing, or changes.
Glossary parity A calculated value that can be used to reconstruct data after a failure. While data is being written to a RAID-5 volume, parity is also calculated by performing an exclusive OR (XOR) procedure on data. The resulting parity is then written to the volume. If a portion of a RAID-5 volume fails, the data that was on that portion of the failed volume can be recreated from the remaining data and the parity. partition The logical areas into which a disk is divided.
542 Glossary shared disk group A disk group in which the disks are shared by multiple hosts (also referred to as a cluster-shareable disk group). sector A minimal unit of the disk partitioning. The size of a sector can vary between systems. A sector is commonly 1024 bytes. segment Any partition, reserved area, partial component, or piece of a larger structure. System Global Area See SGA. single threading The processing of one transaction to completion before starting the next.
Glossary RAID 5, the stripe unit size is 32 sectors (16K). A stripe unit size has also historically been referred to as a stripe width. striping A layout technique that spreads data across several physical disks using stripes. The data is allocated alternately to the stripes within the subdisks of each plex. subdisk A consecutive set of contiguous disk blocks that form a logical disk segment. Subdisks can be associated with plexes to form volumes.
544 Glossary (RAID-0), mirroring (RAID-1), mirrored stripe volumes (RAID-0+1), striped mirror volumes (RAID-1+0), and RAID 5. volume manager objects Volumes and their virtual components. See object (VxVM). VVR See “Veritas Volume Replicator (VVR).” vxfs or VxFS The acronym for Veritas File System. vxvm or VxVM The acronym for Veritas Volume Manager.
Index Symbols $DSQUERY 64 $LD_LIBRARY_PATH 64 $PATH 64 $SYBASE 64 /etc/default/vxassist defaults file 471 /etc/default/vxcdsconvert defaults file 471 /etc/default/vxdg defaults file 471 /etc/default/vxdisk defaults file 472 /etc/default/vxencap defaults file 472 /etc/vx/darecs file 466 A absolute path names using with Quick I/O 46, 65 absolute pathnames use with symbolic links 44, 63 access type 459 accessing Quick I/O files with symbolic links 44, 63 activation default 482 ACTIVE state 206 AIX coexistenc
546 Index BCV 138 benefits of Concurrent I/O 102 benefits of Quick I/O 34 block size 457 blockdev --rereadpt 485 blockmap for a snapshot file system 263 break-off snapshots emulation of 205 BROKEN state 206 Business Continuance Volume (BCV) 138 C cache autogrow attributes 153 creating for use by space-optimized snapshots 151 for space-optimized instant snapshots 135 cache advisory checking setting for 87, 100 cache attribute 219 cache hit ratio calculating 83, 95 cache objects creating 215 enabling 216 l
Index Concurrent I/O benefits 102 disabling 104–105 enabling 102, 105 configuration LVM 400 configuration VxVM 400 conversion errors 507 non-interactive 432 speed 431 vxvmconvert 402 converting Quick I/O files back to regular files Quick I/O converting back to regular files 65 Quick I/O files back to regular filesQuick I/O converting back to regular files 46 regular files to Quick I/O files 48, 68 converting a data Storage Checkpoint to a nodata Storage Checkpoint 279 converting non-CDS disks to CDS 467 co
548 Index disk access type 459 change format 473 evacuate 447 labels 473 LVM 473 offline 447 online 447 recover 447 rename 447 replace 447 replacing 478 disk access 457 disk format 458 disk group 440 rename 448 disk group alignment 476 displaying 483 Disk Group Split/Join 136 disk groups 459 alignment 461 creating 477 joining 477 layout of DCO plexes 147 non-CDS 461 upgrading 478 disk headers 402 disk quotas setting 478 disk types 458 disks coexistence 399 effects of formatting or partitioning 484 layout
Index Example analyze LVM groups 418 conversion 418 failed conversion 418 list 418 list disk information 418 list LVM volume group information 418 listvg 418 LVM to VxVM 418 vxprint output 418 example Failed Analysis 418 export volume group 441 extend volume group 442 extending a file 41, 60 extending Quick I/O files 51, 72 extracting file list for Quick I/O conversion 47, 67 F FastResync Persistent 134 snapshot enhancements 201 file space allocation 41, 60 file fragmentation reporting on 65 File System 4
550 Index instant snapshots backing up multiple volumes 228 cascaded 207 creating backups 212 creating for volume sets 229 creating full-sized 221 creating space-optimized 218 creating volumes for use as full-sized 217 displaying information about 237 dissociating 236 full-sized 135, 202 improving performance of synchronization 240 reattaching 175, 193, 233 refreshing 233 removing 236 restoring volumes using 235 space-optimized 135, 204 splitting hierarchies 237 synchronizing 239 intent log multi-volume s
Index mirrors (continued) creating for root disk 443 creating snapshot 246 mirvol attribute 227 mirvol snapshot type 239 mkqio.dat file 47–49, 67–68, 75 mkqio.
552 Index qiomkfile command 51–52, 72–73 options for creating files symbolic links 41, 60 qiostat output of 83, 95 qiostat command 82, 94 Quick I/O accessing regular VxFS files as 43, 62 benefits 34 converting files to 48, 68 determining file fragmentation before converting 65 determining status 71 disabling 56, 75 enabling 39, 57 environment variable requirements 64 extending files 51, 72 extracting file list for conversion 47, 67 improving database performance with 35 list file for conversion 47, 67 pre
Index shared access mounting file systems for 168 showing Quick I/O file resolved to raw device 51, 72 SmartMove feature 114 SmartSync Recovery Accelerator 23 SmartTier 23, 315 multi-volume support 324 SMH 439 snap volume naming 211 snapabort 201 snapback defined 201 merging snapshot volumes 250 resyncfromoriginal 211 resyncfromreplica 211, 250 snapclear creating independent volumes 252 snapmir snapshot type 239 snapof 264 snapped file systems 259 performance 262 unmounting 260 snapread 262 snapshot file s
554 Index space-optimized instant snapshots 135, 204 creating 218 spaceopt snapshot type 239 sparse files 49 specifying master device path 65 specifying database name for Quick I/O 64 split subdisk 448 states of link objects 206 storage cache 135 used by space-optimized instant snapshots 204 Storage Checkpoints 137 accessing 277 administering with VxDBA 196 administration of 274 converting a data Storage Checkpoint to a nodata Storage Checkpoint with multiple Storage Checkpoints 281 creating 196, 275 data
Index V v_logmap displaying 483 verifying caching using vxfstune parameters 80, 92 verifying vxtunefs system parameters 80, 93 Veritas Cached Quick I/O 23 Veritas Extension for Oracle Disk Manager 23 Veritas Quick I/O 23 vgchange 438 vgcreate 438 vgdisplay 438 vgexport 439 vgextend 438 vgimport 439 vgreduce 438 vgremove 439 vgscan 439 vgsync 439 volbrk snapshot type 239 volume concatenated 443 logical 443 RAID-5 443 reduce 441 striped 443 Volume Manager features 392 volume sets adding volumes to 317 admini
556 Index VxDBA administering Storage Checkpoints using 196 vxdco dissociating version 0 DCOs from volumes 257 reattaching version 0 DCOs to volumes 257 removing version 0 DCOs from volumes 257 vxdctl enable 485 vxdg 438 vxdg init 466 vxdg split 476 vxdisk 437 vxdisk scandisks 485 vxdisk set 437 vxdiskadd 437–438 vxdiskadm 438, 465, 467 vxdisksetup 464 vxedit 435–437 removing a cache 244 removing instant snapshots 236 removing snapshots from a cache 244 vxevac 438 VxFS 441 vxinfo 439 vxmake creating cache