Veritas File System 5.0
Legal Notices © Copyright 2008 Hewlett-Packard Development Company, L.P. Publication Date: 2008 Confidential computer software. Valid license from HP required for possession, use, or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. The information contained herein is subject to change without notice.
Contents
Chapter 1   Introducing Veritas File System
Chapter 2   VxFS performance: creating, mounting, and tuning file systems
Chapter 3   Extent attributes
Chapter 4   VxFS I/O Overview
Chapter 5   Online backup using file system snapshots
Chapter 6   Storage Checkpoints
Chapter 7   Quotas
Chapter 8   File Change Log
Chapter 9   Multi-volume file systems
Chapter 10  Dynamic Storage Tiering
Appendix    Quick Reference: converting a file system to VxFS, mounting a file system, editing the fstab file
Appendix    VxFS Version 7 disk layout
Chapter 1 Introducing Veritas File System This chapter includes the following topics: ■ About Veritas File System ■ Veritas File System features ■ Veritas File System performance enhancements ■ Using Veritas File System About Veritas File System A file system is simply a method for storing and organizing computer files and the data they contain to make it easy to find and access them.
■ File system disk layouts
Logging
A key aspect of any file system is how to recover if a system crash occurs. Earlier methods required a time-consuming scan of the entire file system. A better solution is to log (or journal) the metadata of files. Whenever file system changes occur, VxFS logs new attribute information into a reserved area of the file system.
Introducing Veritas File System Veritas File System features ■ Extent-based allocation Extents allow disk I/O to take place in units of multiple blocks if storage is allocated in consecutive blocks. ■ Extent attributes Extent attributes are the extent allocation policies associated with a file. ■ Fast file system recovery VxFS provides fast recovery of a file system from system failure.
Introducing Veritas File System Veritas File System features ■ Multi-volume support The multi-volume support feature allows several volumes to be represented by a single logical object. ■ Dynamic Storage Tiering The Dynamic Storage Tiering (DST) option allows you to configure policies that automatically relocate files from one volume to another, or relocate files by running file relocation commands, which can improve performance for applications that access specific types of files.
Introducing Veritas File System Veritas File System features Each indirect address extent is 8K long and contains 2048 entries. All indirect data extents for a file must be the same size; this size is set when the first indirect data extent is allocated and stored in the inode. Directory inodes always use an 8K indirect data extent size.
The current typed format is used on regular files and directories only when indirection is needed. Typed records are longer than the previous format and leave room for fewer direct entries in the inode. Newly created files start out using the old format, which allows for ten direct extents in the inode. The inode's block map is converted to the typed format when indirection is needed, offering the advantages of both formats.
Introducing Veritas File System Veritas File System features VxFS intent log resizing The VxFS intent log is allocated when the file system is first created. The size of the intent log is based on the size of the file system—the larger the file system, the larger the intent log. The maximum default intent log size for disk layout Versions 4, 5, and 6 is 16 megabytes. The maximum default intent log size for disk layout Version 7 is 64 megabytes.
Introducing Veritas File System Veritas File System features was not flushed to disk before the system failure occurred, uninitialized data can appear in the file. For the most common type of write, delayed extending writes (a delayed write that increases the file size), VxFS avoids the problem of uninitialized data appearing in the file by waiting until the data has been flushed to disk before updating the new file size to disk.
The delaylog option and enhanced performance
The default VxFS logging mode, mount -o delaylog, increases performance by delaying the logging of some structural changes. However, delaylog does not provide the same data integrity as the previously described modes because recent changes may be lost during a system failure.
Introducing Veritas File System Veritas File System features file systems created with the Version 5, 6, or 7 disk layouts, up to 1024 ACL entries can be specified. ACLs are also supported on cluster file systems. See the getacl(1) and setacl(1) manual pages. Storage Checkpoints To increase availability, recoverability, and performance, Veritas File System offers on-disk and online backup and restore capabilities that facilitate frequent and efficient backup strategies.
Introducing Veritas File System Veritas File System features The hard limit represents an absolute limit on data blocks or files. A user can never exceed the hard limit under any circumstances. The soft limit is lower than the hard limit and can be exceeded for a limited amount of time. This allows users to exceed limits temporarily as long as they fall under those limits before the allotted time expires. See “About quota limits” on page 101.
Introducing Veritas File System Veritas File System features To be a cluster mount, a file system must be mounted using the mount -o cluster option. File systems mounted without the -o cluster option are termed local mounts. Cross-platform data sharing Cross-platform data sharing (CDS) allows data to be serially shared among heterogeneous systems where each system has direct access to the physical devices that hold the data.
Introducing Veritas File System Veritas File System performance enhancements Veritas File System performance enhancements Traditional file systems employ block-based allocation schemes that provide adequate random access and latency for small files, but which limit throughput for larger files. As a result, they are less than optimal for commercial environments.
Introducing Veritas File System Using Veritas File System Enhanced I/O clustering I/O clustering is a technique of grouping multiple I/O operations together for improved performance. VxFS I/O policies provide more aggressive clustering processes than other file systems and offer higher I/O throughput when using large files. The resulting performance is comparable to that provided by raw disk.
that is running Volume Manager and VxFS. The client runs on any platform that supports the Java Runtime Environment.
Introducing Veritas File System Using Veritas File System is lost over time as files are created, removed, and resized. The file system is spread farther along the disk, leaving unused gaps or fragments between areas that are in use. This process is known as fragmentation and leads to degraded performance because the file system has fewer options when assigning a free extent to a file (a group of contiguous data blocks).
Chapter 2 VxFS performance: creating, mounting, and tuning file systems This chapter includes the following topics: ■ Creating a VxFS file system ■ Mounting a VxFS file system ■ Tuning the VxFS file system ■ Monitoring free space ■ Tuning I/O Creating a VxFS file system When you create a file system with the mkfs command, you can select the following characteristics: ■ Block size ■ Intent log size Block size The unit of allocation in VxFS is a block.
VxFS performance: creating, mounting, and tuning file systems Mounting a VxFS file system You specify the block size when creating a file system by using the mkfs -o bsize option. The block size cannot be altered after the file system is created. The smallest available block size for VxFS is 1K, which is also the default block size. Choose a block size based on the type of application being run. For example, if there are many small files, a 1K block size may save space.
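For example, a 1K-block file system could be created as follows (the device path and size here are illustrative, not taken from this guide):

# mkfs -F vxfs -o bsize=1024 /dev/vx/rdsk/dg1/vol1 1048576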
■ mincache
■ convosync
■ ioerror
■ largefiles|nolargefiles
■ cio
Caching behavior can be altered with the mincache option, and the behavior of O_SYNC and D_SYNC writes can be altered with the convosync option. See the fcntl(2) manual page. The delaylog and tmplog modes can significantly improve performance.
VxFS performance: creating, mounting, and tuning file systems Mounting a VxFS file system most system calls are not persistent until approximately 30 seconds or more after the call has returned. Fast file system recovery works with this mode. The rename(2) system call flushes the source file to disk to guarantee the persistence of the file data before renaming it. In the log and delaylog modes, the rename is also guaranteed to be persistent when the system call returns.
VxFS performance: creating, mounting, and tuning file systems Mounting a VxFS file system and closesync mount options. However, most commercially available applications work well with the default VxFS mount options, including the delaylog mode. The logiosize mode The logiosize=size option enhances the performance of storage devices that employ a read-modify-write feature.
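For example, a file system could be mounted with a 4096-byte log I/O size as follows (the device path and mount point are illustrative):

# mount -F vxfs -o logiosize=4096 /dev/vx/dsk/dg1/vol1 /mnt1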
VxFS performance: creating, mounting, and tuning file systems Mounting a VxFS file system ■ mincache=closesync ■ mincache=direct ■ mincache=dsync ■ mincache=unbuffered ■ mincache=tmpcache The mincache=closesync mode is useful in desktop environments where users are likely to shut off the power on the machine without halting it first. In this mode, any changes to the file are flushed to disk when the file is closed.
Unlike the other mincache modes, tmpcache does not flush the file to disk when the file is closed. When the mincache=tmpcache option is used, bad data can appear in a file that was being extended when a crash occurred.
VxFS performance: creating, mounting, and tuning file systems Mounting a VxFS file system specify the O_SYNC fcntl in order to write the file data synchronously. However, this has the undesirable side effect of updating inode times and therefore slowing down performance. The convosync=dsync, convosync=unbuffered, and convosync=direct modes alleviate this problem by allowing applications to take advantage of synchronous writes without modifying inode times as well.
VxFS performance: creating, mounting, and tuning file systems Mounting a VxFS file system For file data read and write errors, VxFS sets the VX_DATAIOERR flag in the super-block. For metadata read errors, VxFS sets the VX_FULLFSCK flag in the super-block. For metadata write errors, VxFS sets the VX_FULLFSCK and VX_METAIOERR flags in the super-block and may mark associated metadata as bad on disk. VxFS then prints the appropriate error messages to the console.
Creating a file system with large files
To create a file system with large file capability, type the following command:
# mkfs -F vxfs -o largefiles special_device size
Specifying largefiles sets the largefiles flag. This lets the file system hold files that are two gigabytes or larger. This is the default option.
To switch capabilities on an unmounted file system, type the following command:
# fsadm -F vxfs -o [no]largefiles special_device
You cannot change a file system to nolargefiles if it holds large files. See the mount_vxfs(1M), fsadm_vxfs(1M), and mkfs_vxfs(1M) manual pages.
The cio option
The cio (Concurrent I/O) option specifies that the file system is to be mounted for concurrent readers and writers.
VxFS performance: creating, mounting, and tuning file systems Tuning the VxFS file system # mount -F vxfs -o log,convosync=dsync /dev/dsk/c1t3d0 /mnt This combination can be used to improve the performance of applications that perform O_SYNC writes, but only require data synchronous write semantics. Performance can be significantly improved if the file system is mounted using convosync=dsync without any loss of data integrity.
VxFS performance: creating, mounting, and tuning file systems Tuning the VxFS file system fails, the value of the vx_ninode tunable is not changed. In such a case, the kctune command can be specified with the -h option so that the new value of vx_ninode takes effect after a system reboot. Be careful when changing the value of vx_ninode, as the value can affect file system performance.
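As a sketch of tuning vx_ninode with kctune (the values are illustrative; the -h option defers the change to the next reboot, and assigning the keyword default is assumed here, per standard HP-UX kctune usage, to clear a user-specified value):

# kctune -h vx_ninode=1000000
# kctune vx_ninode=default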
VxFS performance: creating, mounting, and tuning file systems Tuning the VxFS file system This command restores vx_ninode to its default value by clearing the user-specified value. The default value is the value determined by VxFS to be optimal based on the amount of system memory, which is used if vx_ninode is not explicitly set.
You can use the vxfsstat command to monitor buffer cache statistics and inode cache usage. See the vxfsstat(1M) manual page.
Number of links to a file
In VxFS, the number of possible links to a file is determined by the vx_maxlink global tunable. The default value of vx_maxlink is 32767; the maximum value is 65535. This is a static tunable. You can set the value of vx_maxlink using the sam or kctune commands.
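For example, assuming standard kctune syntax (the value is illustrative), the link limit could be raised with:

# kctune vx_maxlink=40000

Because vx_maxlink is a static tunable, the new value takes effect only after a system reboot.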
VxFS performance: creating, mounting, and tuning file systems Monitoring free space Note: The default value of vx_ifree_timelag typically provides optimal VxFS performance. Be careful when adjusting the tunable because incorrect tuning can adversely affect system performance. Veritas Volume Manager maximum I/O size When using VxFS with Veritas Volume Manager (VxVM), VxVM by default breaks up I/O requests larger than 256K.
■ Less than 5 percent of free space in extents of less than 64 blocks in length
■ More than 5 percent of the total file system size available as free extents in lengths of 64 or more blocks
A badly fragmented file system has one or more of the following characteristics:
■ Greater than 5 percent of free space in extents of less than 8 blocks in length
■ More than 50 percent of free space in extents of less than 64 blocks in length
Tuning I/O
The performance of a file system can be enhanced by a suitable choice of I/O sizes and proper alignment of the I/O requests based on the requirements of the underlying special device. VxFS provides tools to tune the file systems.
Note: The following tunables and techniques work on a per-file-system basis.
The vxtunefs command can be used to print the current values of the I/O parameters.
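For example (the mount point is illustrative), to display the current parameters and then change one of them:

# vxtunefs /mnt1
# vxtunefs -o read_pref_io=65536 /mnt1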
Table 2-1 Tunable VxFS I/O parameters (continued)

write_nstream
The number of parallel write requests of size write_pref_io to have outstanding at one time. The file system uses the product of write_nstream and write_pref_io to determine when to do flush behind on writes. The default value for write_nstream is 1.

fcl_maxalloc
Specifies the maximum amount of space that can be allocated to the VxFS File Change Log (FCL). The FCL file is a sparse file that grows as changes occur in the file system.

hsm_write_prealloc
For a file managed by a hierarchical storage management (HSM) application, hsm_write_prealloc preallocates disk blocks before data is migrated back into the file system. An HSM application usually migrates the data back through a series of writes to the file, each of which allocates a few blocks.

inode_aging_count
Specifies the maximum number of inodes to place on an inode aging list. Inode aging is used in conjunction with file system Storage Checkpoints to allow quick restoration of large, recently deleted files. The aging list is maintained in first-in, first-out (FIFO) order up to the maximum number of inodes specified by inode_aging_count.

max_seqio_extent_size
Increases or decreases the maximum size of an extent. When the file system is following its default allocation policy for sequential writes to a file, it allocates an initial extent that is large enough for the first write to the file.

qio_cache_enable
Enables or disables caching on Quick I/O files. The default behavior is to disable caching. To enable caching, set qio_cache_enable to 1. On systems with large memories, the database cannot always use all of the memory as a cache. By enabling file system caching as a second-level cache, performance may be improved.

write_throttle
The write_throttle parameter is useful in special situations where a computer system has a combination of a large amount of memory and slow storage devices. In this configuration, sync operations, such as fsync(), may take long enough to complete that a system appears to hang.
VxFS performance: creating, mounting, and tuning file systems Tuning I/O Note: VxFS does not query VxVM with multiple volume sets. To improve I/O performance when using multiple volume sets, use the vxtunefs command. If the file system is being used with a hardware disk array or volume manager other than VxVM, try to align the parameters to match the geometry of the logical disk.
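For instance, if a hypothetical array presents a 64K stripe unit across eight columns, the preferred I/O parameters could be matched to that geometry (a sketch, with illustrative values):

# vxtunefs -o read_pref_io=65536 /mnt1
# vxtunefs -o read_nstream=8 /mnt1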
Chapter 3 Extent attributes This chapter includes the following topics: ■ About extent attributes ■ Commands related to extent attributes About extent attributes Veritas File System (VxFS) allocates disk space to files in groups of one or more adjacent blocks called extents. VxFS defines an application interface that allows programs to control various aspects of the extent allocation for a given file. The extent allocation policies associated with a file are referred to as extent attributes.
Extent attributes About extent attributes Some of the extent attributes are persistent and become part of the on-disk information about the file, while other attributes are temporary and are lost after the file is closed or the system is rebooted. The persistent attributes are similar to the file's permissions and are written in the inode for the file. When a file is copied, moved, or archived, only the persistent attributes of the source file are preserved in the new file.
smaller pieces. If the file system errs on the side of minimizing fragmentation, files may become so non-contiguous that their I/O characteristics degrade. Fixed extent sizes are particularly appropriate in the following situations:
■ If a file is large and contiguous, a large fixed extent size can minimize the number of extents in the file.
Write operations beyond reservation
A reservation request can specify that no allocations can take place after a write operation fills the last available block in the reservation. This request can be used in a way similar to the function of the ulimit command to prevent a file's uncontrolled growth.
Reservation trimming
A reservation request can specify that any unused reservation be released when the file is closed.
Extent attributes Commands related to extent attributes extent alignment. The extent attribute information may be lost if the destination file system does not support extent attributes, has a different block size than the source file system, or lacks free extents appropriate to satisfy the extent attribute requirements.
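As a hedged illustration (the file name and block counts are invented; see the setext(1M) and getext(1M) manual pages for the exact options), a fixed extent size and a reservation could be set and then displayed with:

# setext -e 256 -r 1024 myfile
# getext myfile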
Chapter 4 VxFS I/O Overview This chapter includes the following topics: ■ About VxFS I/O ■ Buffered and Direct I/O ■ Concurrent I/O ■ Cache advisories ■ Freezing and thawing a file system ■ Getting the I/O size About VxFS I/O VxFS processes two basic types of file system I/O: ■ Sequential ■ Random or I/O that is not sequential For sequential I/O, VxFS employs a read-ahead policy by default when the application is reading data. For writing, it allocates contiguous blocks if possible.
VxFS I/O Overview Buffered and Direct I/O Buffered and Direct I/O VxFS responds with read-ahead for sequential read I/O. This results in buffered I/O. The data is prefetched and retained in buffers for the application. This is the default VxFS behavior. On the other hand, direct I/O does not buffer the data when the I/O to the underlying device is completed. This saves system resources like memory and CPU usage. Direct I/O is possible only when alignment and sizing criteria are satisfied.
VxFS I/O Overview Buffered and Direct I/O Direct I/O versus synchronous I/O Because direct I/O maintains the same data integrity as synchronous I/O, it can be used in many applications that currently use synchronous I/O. If a direct I/O request does not allocate storage or extend the file, the inode is not immediately written. Direct I/O CPU overhead The CPU cost of direct I/O is about the same as a raw disk transfer.
transferred to disk synchronously before the write returns to the user. If the file is not extended by the write, the times are updated in memory, and the call returns to the user. If the file is extended by the operation, the inode is written before the write returns. The direct I/O and VX_DSYNC advisories are maintained on a per-file-descriptor basis.
Data synchronous I/O vs. synchronous I/O
VxFS I/O Overview Freezing and thawing a file system These advisories are in memory only and do not persist across reboots. Some advisories are currently maintained on a per-file, not a per-file-descriptor, basis. Only one set of advisories can be in effect for all accesses to the file. If two conflicting applications set different advisories, both must use the advisories that were last set. All advisories are set using the VX_SETCACHE ioctl command.
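As a minimal C sketch of setting an advisory (the header location and constants follow the vxfsio conventions and should be verified against your release; the file path is illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/fs/vx_ioctl.h>   /* assumed location of VX_SETCACHE and VX_DIRECT */

int main(void)
{
    int fd = open("/mnt1/datafile", O_RDWR);   /* illustrative path */
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Request the direct I/O cache advisory on this file descriptor. */
    if (ioctl(fd, VX_SETCACHE, VX_DIRECT) < 0)
        perror("VX_SETCACHE");
    /* Aligned reads and writes issued here can qualify for direct I/O. */
    close(fd);
    return 0;
}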
Chapter 5 Online backup using file system snapshots This chapter includes the following topics: ■ About snapshot file systems ■ Snapshot file system backups ■ Creating a snapshot file system ■ Backup examples ■ Snapshot file system performance ■ Differences Between Snapshots and Storage Checkpoints ■ About snapshot file system disk structure ■ How a snapshot file system works About snapshot file systems A snapshot file system is an exact image of a VxFS file system, referred to as the snap
Online backup using file system snapshots Snapshot file system backups its snapshots are unmounted. Although it is possible to have multiple snapshots of a file system made at different times, it is not possible to make a snapshot of a snapshot. Note: A snapshot file system ceases to exist when unmounted. If mounted again, it is actually a fresh snapshot of the snapped file system. A snapshot file system must be unmounted before its dependent snapped file system can be unmounted.
Online backup using file system snapshots Creating a snapshot file system Creating a snapshot file system You create a snapshot file system by using the -o snapof= option of the mount command. The -o snapsize= option may also be required if the device you are mounting does not identify the device size in its disk label, or if you want a size smaller than the entire device.
To create a backup using a snapshot file system
1 To back up files changed within the last week using cpio:
# mount -F vxfs -o snapof=/home,snapsize=100000 \
/dev/vx/dsk/fsvol/vol1 /backup/home
# cd /backup
# find home -ctime -7 -depth -print | cpio -oc > /dev/rmt/0m
# umount /backup/home
2 To do a level 3 backup of /dev/vx/dsk/fsvol/vol1 and collect those files that have changed in the current directory:
# vxdump 3f - /dev/
Online backup using file system snapshots Differences Between Snapshots and Storage Checkpoints application running an online transaction processing (OLTP) workload on a snapped file system was measured at about 15 to 20 percent slower than a file system that was not snapped.
Figure 5-1, “The Snapshot Disk Structure,” shows the snapshot disk structure, which consists of a super-block, a bitmap, a blockmap, and data blocks. The super-block is similar to the super-block of a standard VxFS file system, but the magic number is different and many of the fields are not applicable. The bitmap contains one bit for every block on the snapped file system. Initially, all bitmap entries are zero.
Online backup using file system snapshots How a snapshot file system works data for block n can be found on the snapshot file system. The blockmap entry for block n is changed from 0 to the block number on the snapshot file system containing the old data. A subsequent read request for block n on the snapshot file system will be satisfied by checking the bitmap entry for block n and reading the data from the indicated block on the snapshot file system, instead of from block n on the snapped file system.
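The copy-on-write read path described above can be summarized in a short C sketch (the function names are invented for illustration and do not correspond to VxFS internals):

/* Illustrative stubs; the real implementations live inside the file system. */
extern int  bitmap_test(long n);          /* nonzero if block n changed after the snapshot */
extern long blockmap_lookup(long n);      /* snapshot block holding the before-image       */
extern long read_snapped_block(long n);   /* read block n from the snapped file system     */
extern long read_snapshot_block(long n);  /* read a block from the snapshot file system    */

/* Resolve a read of block n issued against the snapshot file system. */
long snapshot_read_block(long n)
{
    if (!bitmap_test(n))
        /* Unchanged since the snapshot: the data still lives on the snapped fs. */
        return read_snapped_block(n);

    /* Changed after the snapshot: the blockmap points at the before-image that
     * copy-on-write saved on the snapshot file system. */
    return read_snapshot_block(blockmap_lookup(n));
}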
Chapter 6 Storage Checkpoints This chapter includes the following topics: ■ About Storage Checkpoints ■ How a Storage Checkpoint works ■ Types of Storage Checkpoints ■ Storage Checkpoint administration ■ Space management considerations ■ Restoring a file system from a Storage Checkpoint ■ Storage Checkpoint quotas About Storage Checkpoints Veritas File System provides a Storage Checkpoint feature that quickly creates a persistent image of a file system at an exact point in time.
Storage Checkpoints About Storage Checkpoints See “How a Storage Checkpoint works” on page 77. Unlike a disk-based mirroring technology that requires a separate storage space, Storage Checkpoints minimize the use of disk space by using a Storage Checkpoint within the same free space available to the file system.
Storage Checkpoints How a Storage Checkpoint works availability and data integrity by increasing the frequency of backup and replication solutions. Storage Checkpoints can be taken in environments with a large number of files, such as file servers with millions of files, with little adverse impact on performance. Because the file system does not remain frozen during Storage Checkpoint creation, applications can access the file system even while the Storage Checkpoint is taken.
Figure 6-1, “Primary fileset and its Storage Checkpoint,” shows a primary fileset /database containing the files emp.dbf and jun.dbf, and a Storage Checkpoint whose directory tree mirrors it. In Figure 6-2, a square represents each block of the file system. This figure shows a Storage Checkpoint containing pointers to the primary fileset at the time the Storage Checkpoint is taken, as in Figure 6-1.
Storage Checkpoints How a Storage Checkpoint works The Storage Checkpoint presents the exact image of the file system by finding the data from the primary fileset. As the primary fileset is updated, the original data is copied to the Storage Checkpoint before the new data is written. When a write operation changes a specific data block in the primary fileset, the old data is first read and copied to the Storage Checkpoint before the primary fileset is updated.
Figure 6-3, “Updates to the primary fileset,” shows the primary fileset blocks A, B, C’, D, and E after block C has been overwritten: the original block C has been copied to the Storage Checkpoint.
Types of Storage Checkpoints
You can create the following types of Storage Checkpoints:
■ Data Storage Checkpoints
■ nodata Storage Checkpoints
■ Removable Storage Checkpoints
■ Non-mountable Storage Checkpoints
Data Storage Checkpoints
A data Storage Checkpoint is a complete image of the file system at the time the Storage Checkpoint is created.
Storage Checkpoints Types of Storage Checkpoints limit the life of data Storage Checkpoints to minimize the impact on system resources. See “Showing the difference between a data and a nodata Storage Checkpoint” on page 87. nodata Storage Checkpoints A nodata Storage Checkpoint only contains file system metadata—no file data blocks. As the original file system changes, the nodata Storage Checkpoint records the location of every changed block.
Storage Checkpoints Storage Checkpoint administration Removable Storage Checkpoints A removable Storage Checkpoint can “self-destruct” under certain conditions when the file system runs out of space. See “Space management considerations” on page 94. After encountering certain out-of-space (ENOSPC) conditions, the kernel removes Storage Checkpoints to free up space for the application to continue running on the file system.
Storage Checkpoints Storage Checkpoint administration ctime mtime flags # of inodes # of blocks . . .
Storage Checkpoints Storage Checkpoint administration mtime flags = Thu 3 Mar 2005 7:00:17 PM PST = nodata, largefiles The following example shows the creation of a removable Storage Checkpoint named thu_8pm on /mnt0 and lists all Storage Checkpoints of the /mnt0 file system: # fsckptadm -r create thu_8pm /mnt0 # fsckptadm list /mnt0 /mnt0 thu_8pm: ctime mtime flags thu_7pm: ctime mtime flags = Thu 3 Mar 2005 8:00:19 PM PST = Thu 3 Mar 2005 8:00:19 PM PST = largefiles, removable = Thu 3 Mar 2005 7:0
Storage Checkpoints Storage Checkpoint administration # fsckptadm list /mnt0 /mnt0 Accessing a Storage Checkpoint You can mount Storage Checkpoints using the mount command with the mount option -o ckpt=ckpt_name. See the mount_vxfs(1M) manual page. Observe the following rules when mounting Storage Checkpoints: ■ Storage Checkpoints are mounted as read-only Storage Checkpoints by default. If you must write to a Storage Checkpoint, mount it using the -o rw option.
Storage Checkpoints Storage Checkpoint administration Note: The vol1 file system must already be mounted before the Storage Checkpoint can be mounted.
Storage Checkpoints Storage Checkpoint administration See “Types of Storage Checkpoints” on page 80. You can use either the synchronous or asynchronous method to convert a data Storage Checkpoint to a nodata Storage Checkpoint; the asynchronous method is the default method. In a synchronous conversion, fsckptadm waits for all files to undergo the conversion process to “nodata” status before completing the operation.
To show the difference between Storage Checkpoints
1 Create a file system and mount it on /mnt0, as in the following example:
# mkfs -F vxfs /dev/vx/rdsk/dg1/test0
version 7 layout 134217728 sectors, 67108864 blocks of size 1024, log size 65536 blocks, largefiles supported
# mount -F vxfs /dev/vx/dsk/dg1/test0 /mnt0
2 Create a small file with a known content, as in the following example:
# echo "hello, world" > /mnt0/file
3 Create a Storage Checkpoint.
Storage Checkpoints Storage Checkpoint administration 7 Unmount the Storage Checkpoint, convert the Storage Checkpoint to a nodata Storage Checkpoint, and mount the Storage Checkpoint again: # umount /mnt0@5_30pm # fsckptadm -s set nodata ckpt@5_30pm /mnt0 # mount -F vxfs -o ckpt=ckpt@5_30pm \ /dev/vx/dsk/dg1/test0:ckpt@5_30pm /mnt0@5_30pm 8 Examine the content of both files.
Storage Checkpoints Storage Checkpoint administration To convert multiple Storage Checkpoints 1 Create a file system and mount it on /mnt0: # mkfs -F vxfs /dev/vx/rdsk/dg1/test0 version 7 layout 13417728 sectors, 67108864 blocks of size 1024, log \ size 65536 blocks largefiles supported # mount -F vxfs /dev/vx/dsk/dg1/test0 /mnt0 2 Create four data Storage Checkpoints on this file system, note the order of creation, and list them: # fsckptadm create oldest /mnt0 # fsckptadm create older /mnt0 # fsc
Storage Checkpoints Storage Checkpoint administration 3 Try to convert synchronously the latest Storage Checkpoint to a nodata Storage Checkpoint. The attempt will fail because the Storage Checkpoints older than the latest Storage Checkpoint are data Storage Checkpoints, namely the Storage Checkpoints old, older, and oldest: # fsckptadm -s set nodata latest /mnt0 UX:vxfs fsckptadm: ERROR: V-3-24632: Storage Checkpoint set failed on latest.
Storage Checkpoints Storage Checkpoint administration To create a delayed nodata Storage Checkpoint 1 Remove the latest Storage Checkpoint.
Storage Checkpoints Storage Checkpoint administration 3 Convert the oldest Storage Checkpoint to a nodata Storage Checkpoint because no older Storage Checkpoints exist that contain data in the file system. Note: This step can be done synchronously.
Storage Checkpoints Space management considerations 4 Remove the older and old Storage Checkpoints.
Storage Checkpoints Restoring a file system from a Storage Checkpoint ■ Remove the oldest Storage Checkpoint first. Restoring a file system from a Storage Checkpoint Mountable data Storage Checkpoints on a consistent and undamaged file system can be used by backup and restore applications to restore either individual files or an entire file system.
To restore a file from a Storage Checkpoint
1 Create the Storage Checkpoint CKPT1 of /home.
$ fsckptadm create CKPT1 /home
2 Mount Storage Checkpoint CKPT1 on the directory /home/checkpoints/mar_4.
$ mount -F vxfs -o ckpt=CKPT1 /dev/vx/dsk/dg1/vol- \
01:CKPT1 /home/checkpoints/mar_4
3 Delete the file MyFile.txt from your home directory.
$ cd /home/users/me
$ rm MyFile.txt
Storage Checkpoints Restoring a file system from a Storage Checkpoint To restore a file system from a Storage Checkpoint 1 Run the fsckpt_restore command: # fsckpt_restore -l /dev/vx/dsk/dg1/vol2 /dev/vx/dsk/dg1/vol2: UNNAMED: ctime = Thu 08 May 2004 06:28:26 PM PST mtime = Thu 08 May 2004 06:28:26 PM PST flags = largefiles, file system root CKPT6: ctime = Thu 08 May 2004 06:28:35 PM PST mtime = Thu 08 May 2004 06:28:35 PM PST flags = largefiles CKPT5: ctime = Thu 08 May 2004 06:28:34 PM PST mtime = Thu
Storage Checkpoints Restoring a file system from a Storage Checkpoint 2 In this example, select the Storage Checkpoint CKPT3 as the new root fileset: Select Storage Checkpoint for restore operation or (EOF) to exit or to list Storage Checkpoints: CKPT3 CKPT3: ctime = Thu 08 May 2004 06:28:31 PM PST mtime = Thu 08 May 2004 06:28:36 PM PST flags = largefiles UX:vxfs fsckpt_restore: WARNING: V-3-24640: Any file system changes or Storage Checkpoints made after Thu 08 May 2004 06:28:31
Storage Checkpoints Restoring a file system from a Storage Checkpoint 3 Type y to restore the file system from CKPT3: Restore the file system from Storage Checkpoint CKPT3 ? (ynq) y (Yes) UX:vxfs fsckpt_restore: INFO: V-3-23760: File system restored from CKPT3 If the filesets are listed at this point, it shows that the former UNNAMED root fileset and CKPT6, CKPT5, and CKPT4 were removed, and that CKPT3 is now the primary fileset. CKPT3 is now the fileset that will be mounted by default.
Storage Checkpoints Storage Checkpoint quotas Storage Checkpoint quotas VxFS provides options to the fsckptadm command interface to administer Storage Checkpoint quotas. Storage Checkpoint quotas set the following limits on the number of blocks used by all Storage Checkpoints of a primary file set: hard limit An absolute limit that cannot be exceeded. If a hard limit is exceeded, all further allocations on any of the Storage Checkpoints fail, but existing Storage Checkpoints are preserved.
Chapter 7 Quotas This chapter includes the following topics: ■ About quota limits ■ About quota files on Veritas File System ■ About quota commands ■ Using quotas About quota limits Veritas File System (VxFS) supports user quotas. The quota system limits the use of two principal resources of a file system: files and data blocks. For each of these resources, you can assign quotas to individual users to limit their usage.
Quotas About quota files on Veritas File System See “About quota files on Veritas File System” on page 102. The quota soft limit can be exceeded when VxFS preallocates space to a file. Quota limits cannot exceed two terabytes on a Version 5 disk layout. See “About extent attributes” on page 55. About quota files on Veritas File System A quotas file (named quotas) must exist in the root directory of a file system for any of the quota commands to work.
Quotas Using quotas Turning on quotas To use the quota functionality on a file system, quotas must be turned on. You can turn quotas on at mount time or after a file system is mounted. To turn on quotas ◆ To turn on user quotas for a VxFS file system, enter: # quotaon /mount_point Turning on quotas at mount time Quotas can be turned on with the mount command when you mount a file system.
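For example (the device and mount point are illustrative), quotas can be turned on as part of the mount:

# mount -F vxfs -o quota /dev/vx/dsk/dg1/vol1 /home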
Quotas Using quotas To modify time limits ◆ Specify the -t option to modify time limits for any user: # edquota -t Viewing disk quotas and usage Use the quota command to view a user's disk quotas and usage on VxFS file systems.
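For example, to display the quotas and current usage of a hypothetical user user1:

# quota -v user1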
Chapter 8 File Change Log This chapter includes the following topics: ■ About File Change Log ■ About the File Change Log file ■ File Change Log administrative interface ■ File Change Log programmatic interface ■ Summary of API functions ■ Reverse path name lookup About File Change Log The VxFS File Change Log (FCL) tracks changes to files and directories in a file system.
File Change Log About the File Change Log file About the File Change Log file File Change Log records file system changes such as creates, links, unlinks, renaming, data appended, data overwritten, data truncated, extended attribute modifications, holes punched, and miscellaneous file property updates. Note: FCL is supported only on disk layout Version 6 and later. FCL stores changes in a sparse file in the file system namespace. The FCL file is located in mount_point/lost+found/changelog.
File Change Log File Change Log administrative interface File Change Log administrative interface The FCL can be set up and tuned through the fcladm and vxtunefs VxFS administrative commands. See the fcladm(1M) and vxtunefs(1M) manual pages. The FCL keywords for fcladm are as follows: clear Disables the recording of the audit, open, close, and statistical events after it has been set. dump Creates a regular file image of the FCL file that can be downloaded too an off-host processing system.
File Change Log File Change Log administrative interface fcl_keeptime Specifies the duration in seconds that FCL records stay in the FCL file before they can be purged. The first records to be purged are the oldest ones, which are located at the beginning of the file. Additionally, records at the beginning of the file can be purged if allocation to the FCL file exceeds fcl_maxalloc bytes. The default value of fcl_keeptime is 0.
# fcladm off mount_point
To remove the FCL file for a mounted file system, on which FCL must be turned off, type the following:
# fcladm rm mount_point
To obtain the current FCL state for a mounted file system, type the following:
# fcladm state mount_point
To enable tracking of the file opens along with access information with each event in the FCL, type the following:
# fcladm set fileopen,accessinfo mount_point
To stop tracking file I/O statistics in the FCL, type the following:
# fcladm clear filestats mount_point
File Change Log File Change Log programmatic interface Backward compatibility Providing API access for the FCL feature allows backward compatibility for applications. The API allows applications to parse the FCL file independent of the FCL layout changes. Even if the hidden disk layout of the FCL changes, the API automatically translates the returned data to match the expected output record.
File Change Log Summary of API functions } if (fclsb.
File Change Log Reverse path name lookup vxfs_fcl_seek() Extracts data from the specified cookie and then seeks to the specified offset. vxfs_fcl_seektime() Seeks to the first record in the FCL after the specified time. Reverse path name lookup The reverse path name lookup feature obtains the full path name of a file or directory from the inode number of that file or directory.
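The lookup is typically driven through the vxlsino administrative command or the vxfs_inotopath(3) library function; as a sketch (the inode number and mount point are illustrative, and the exact syntax should be checked in the vxlsino(1M) manual page):

# vxlsino 23 /mnt1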
Chapter 9 Multi-volume file systems This chapter includes the following topics: ■ About multi-volume support ■ About volume types ■ Features implemented using multi-volume support ■ About volume sets ■ Creating multi-volume file systems ■ Converting a single volume file system to a multi-volume file system ■ Removing a volume from a multi-volume file system ■ About allocation policies ■ Assigning allocation policies ■ Querying allocation policies ■ Assigning pattern tables to director
Multi-volume file systems About multi-volume support About multi-volume support VxFS provides support for multi-volume file systems when used in conjunction with the Veritas Volume Manager. Using multi-volume support (MVS), a single file system can be created over multiple volumes, each volume having its own properties. For example, it is possible to place metadata on mirrored storage while placing file data on better-performing volume types such as RAID-1+0 (striped and mirrored).
Multi-volume file systems Features implemented using multi-volume support functionality is available in the Veritas File System Dynamic Storage Tiering (DST) feature. ■ Placing the VxFS intent log on its own volume to minimize disk head movement and thereby increase performance. This functionality can be used to migrate from the Veritas QuickLog™ feature. ■ Separating Storage Checkpoints so that data allocated to a Storage Checkpoint is isolated from the rest of the file system.
Multi-volume file systems About volume sets ■ Using the vxdump command. Volume availability is supported only on a file system with disk layout Version 7 or later. Note: Do not mount a multi-volume system with the ioerror=disable or ioerror=wdisable mount options if the volumes have different availability properties. Symantec recommends the ioerror=mdisable mount option for cluster mounts and ioerror=mwdisable for local mounts.
3 List the component volumes of the previously created volume set:
# vxvset list myvset
VOLUME  INDEX  LENGTH  STATE   CONTEXT
vol1    0      20480   ACTIVE  -
vol2    1      102400  ACTIVE  -
vol3    2      102400  ACTIVE  -
4 Use the ls command to see that when a volume set is created, the volumes contained by the volume set are removed from the namespace and are instead accessed through the volume set name:
# ls -l /dev/vx/rdsk/rootdg/myvset
1 root 5 root 108,
Multi-volume file systems Creating multi-volume file systems Example of creating a multi-volume file system The following procedure is an example of creating a multi-volume file system.
4 List the volume availability flags using the fsvoladm command:
# fsvoladm queryflags /mnt1
volname  flags
vol1     metadataok
vol2     dataonly
vol3     dataonly
vol4     dataonly
vol5     dataonly
5 Increase the metadata space in the file system using the fsvoladm command:
# fsvoladm clearflags dataonly /mnt1 vol2
# fsvoladm queryflags /mnt1
volname  flags
vol1     metadataok
vol2     metadataok
vol3     dataonly
vol4     dataonly
vol5     dataonly
Multi-volume file systems Removing a volume from a multi-volume file system 4 If the disk layout version is less than 6, upgrade to Version 7.
Multi-volume file systems About allocation policies Forcibly removing a volume If you must forcibly remove a volume from a file system, such as if a volume is permanently destroyed and you want to clean up the dangling pointers to the lost volume, use the fsck -o zapvol=volname command. The zapvol option performs a full file system check and zaps all inodes that refer to the specified volume. The fsck command prints the inode numbers of all files that the command destroys; the file names are not printed.
To assign allocation policies
1 List the volumes in the volume set:
# vxvset -g rootdg list myvset
VOLUME  INDEX  LENGTH  STATE   CONTEXT
vol1    0      102400  ACTIVE  -
vol2    1      102400  ACTIVE  -
2 Create a file system on the myvset volume set and mount it:
# mkfs -F vxfs /dev/vx/rdsk/rootdg/myvset
version 7 layout 204800 sectors, 102400 blocks of size 1024, log size 1024 blocks largefiles supported
# mount -F vxfs /dev/vx/dsk/rootdg/myvset /mnt1
3 Define the allocation policies.
Multi-volume file systems Assigning pattern tables to directories To query allocation policies ◆ Query the allocation policy: # fsapadm query /mnt1 datapolicy Assigning pattern tables to directories A pattern table contains patterns against which a file's name and creating process' UID and GID are matched as a file is created in a specified directory. The first successful match is used to set the allocation policies of the file, taking precedence over inheriting per-file allocation policies.
Multi-volume file systems Allocating data The following example shows how to assign pattern tables to a file system in a volume set that contains two volumes from different classes of storage. The pattern table is contained within the pattern file mypatternfile.
Multi-volume file systems Volume encapsulation Allocating data from vol1 to vol2 ◆ Assign an allocation policy that allocates user data from vol1 to vol2 if space runs out on vol1: # fsapadm define /mnt1 datapolicy vol1 vol2 Volume encapsulation Multi-volume support enables the ability to encapsulate an existing raw volume and make the volume contents appear as a file in the file system. Encapsulating a volume involves the following actions: ■ Adding the volume to an existing volume set.
Multi-volume file systems Volume encapsulation 3 Create a file system on the volume set: # mkfs -F vxfs /dev/vx/rdsk/rootdg/myvset version 7 layout 204800 sectors, 102400 blocks of size 1024, log size 1024 blocks largefiles supported 4 Mount the volume set: # mount -F vxfs /dev/vx/dsk/rootdg/myvset /mnt1 5 Add the new volume to the volume set: # vxvset addvol myvset dbvol 6 Encapsulate dbvol: # fsvoladm encapsulate /mnt1/dbfile dbvol 100m # ls -l /mnt1/dbfile -rw------- 1 root other 104857600 M
To deencapsulate a volume
1 List the volumes:
# vxvset list myvset
VOLUME  INDEX  LENGTH  STATE   CONTEXT
vol1    0      102400  ACTIVE  -
vol2    1      102400  ACTIVE  -
dbvol   2      102400  ACTIVE  -
The volume set has three volumes.
2 Deencapsulate dbvol:
# fsvoladm deencapsulate /mnt1/dbfile
Reporting file extents
The MVS feature provides the capability for file-to-volume mapping and volume-to-file mapping via the fsmap and fsvmap commands.
Multi-volume file systems Load balancing Using the fsvmap command 1 Report the extents of files on multiple volumes: # fsvmap /dev/vx/rdsk/fstest/testvset vol1 vol2 vol1 vol1 vol1 vol1 vol2 vol2 2 /.
Multi-volume file systems Load balancing 129 Note: If a file has both a fixed extent size set and an allocation policy for load balancing, certain behavior can be expected. If the chunk size in the allocation policy is greater than the fixed extent size, all extents for the file are limited by the chunk size. For example, if the chunk size is 16 MB and the fixed extent size is 3 MB, then the largest extent that satisfies both the conditions is 15 MB.
Multi-volume file systems Converting a multi-volume file system to a single volume file system To rebalance extents 1 Define the policy by specifying the -o balance and -c options: # fsapadm define -o balance -c 2m /mnt loadbal vol1 vol2 vol4 \ vol5 vol6 2 Assign the policy: # fsapadm enforcefile -f strict /mnt/filedb Converting a multi-volume file system to a single volume file system Because data can be relocated among volumes in a multi-volume file system, you can convert a multi-volume file s
Multi-volume file systems Converting a multi-volume file system to a single volume file system Converting to a single volume file system 1 Determine if the first volume in the volume set, which is identified as device number 0, has the capacity to receive the data from the other volumes that will be removed: # df /mnt1 /mnt1 2 (/dev/vx/dsk/dg1/vol1):16777216 blocks 3443528 files If the first volume does not have sufficient capacity, grow the volume to a sufficient size: # fsvoladm resize /mnt1 vol1 1
Chapter 10 Dynamic Storage Tiering This chapter includes the following topics: ■ About Dynamic Storage Tiering ■ Placement classes ■ Administering placement policies ■ File placement policy grammar ■ File placement policy rules ■ Calculating I/O temperature and access temperature ■ Multiple criteria in file placement policy rule statements ■ File placement policy rule and statement ordering ■ File placement policies and extending files About Dynamic Storage Tiering VxFS uses multi-tier online storage by way of the Dynamic Storage Tiering (DST) feature, which functions on top of multi-volume file systems.
Note: Some of the commands have been changed or removed between the 4.1 release and the 5.0 release to make placement policy management more user-friendly. The following commands have been removed: fsrpadm, fsmove, and fssweep. The output of the queryfile, queryfs, and list options of the fsapadm command now prints the allocation order by name instead of number.
is known as a volume tag. A volume may have more than one tag, one of which can be the placement class. The placement class tag makes a volume distinguishable by DST. Volume tags are organized as hierarchical name spaces in which the levels of the hierarchy are separated by periods. By convention, the uppermost level in the volume tag hierarchy denotes the Storage Foundation component or application that uses a tag, and the second level denotes the tag’s purpose.
Dynamic Storage Tiering Administering placement policies To tag volumes ◆ Tag the volumes as placement classes: # vxvoladm -g cfsdg settag vsavola vxfs.placement_class.tier1 # vxvoladm -g cfsdg settag vsavolb vxfs.placement_class.tier2 # vxvoladm -g cfsdg settag vsavolc vxfs.placement_class.tier3 # vxvoladm -g cfsdg settag vsavold vxfs.placement_class.tier4 Listing placement classes Placement classes are listed using the vxvoladm listtag command. See the vxvoladm(1M) manual page.
Dynamic Storage Tiering Administering placement policies forces assignment of a placement policy to a file system, the file system's active placement policy is overwritten and any local changes that had been made to the placement policy are lost. Assigning a placement policy The following example uses the fsppadm assign command to assign the file placement policy represented in the XML policy document /tmp/policy1.xml for the file system at mount point /mnt1.
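Assuming the fsppadm assign mount_point policy_document argument order described in the fsppadm(1M) manual page, the command would be:

# fsppadm assign /mnt1 /tmp/policy1.xml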
Dynamic Storage Tiering Administering placement policies To query which files will be affected by enforcing a placement policy ◆ Query the files: # fsppadm query /mnt1/dir1/dir2 /mnt2 /mnt1/dir3 Enforcing a placement policy Enforcing a placement policy for a file system requires that the policy be assigned to the file system. You must assign a placement policy before it can be enforced. Enforce operations are logged in a hidden file, .__fsppadm_enforce.
To enforce a placement policy
◆ Enforce a placement policy for a file system:
# fsppadm enforce -a -r /tmp/report /mnt1
Current  Current  Relocated
Class    Volume   Class
tier3    dstvole  tier3
tier3    dstvole  tier3
tier3    dstvole  tier3
tier3    dstvolf  tier3
.
.
.
Dynamic Storage Tiering File placement policy grammar allocating and relocating files are expressed in the file system's file placement policy. A VxFS file placement policy defines the desired placement of sets of files on the volumes of a VxFS multi-volume file system. A file placement policy specifies the placement classes of volumes on which files should be created, and where and under what conditions the files should be relocated to volumes in alternate placement classes or deleted.
DELETE elements, if any, are recommended to precede RELOCATE elements, so that a file that is meant for deletion is not unnecessarily subjected to relocation. The reason is that if any of the DELETE elements triggers an action, subsequent elements (DELETE and/or RELOCATE elements, if any) are not processed.
The elements of a placement policy document can appear in any order.
File placement policy rules
A VxFS file placement policy consists of one or more rules.
A SELECT statement may designate files by using the following selection criteria:
<DIRECTORY>
A full path name relative to the file system mount point.
<PATTERN>
Either an exact file name or a pattern using a single wildcard character (*). For example, the pattern “abc*” denotes all files whose names begin with “abc”. The pattern “abc.*” denotes all files whose names are exactly “abc” followed by a period and any extension. The pattern “*abc” denotes all files whose names end in “abc”, even if the name is all or part of an extension. The pattern “*.abc” denotes all files whose name extension (following the period) is “abc”.
Dynamic Storage Tiering File placement policy rules A rule may include multiple SELECT statements. In the following example, only files that reside in either the ora/db or the crash/dump directory, and whose owner is either user1 or user2, are selected for possible action:
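A sketch of such a SELECT statement in the policy grammar:

<SELECT>
    <DIRECTORY Flags="nonrecursive">ora/db</DIRECTORY>
    <DIRECTORY Flags="nonrecursive">crash/dump</DIRECTORY>
    <USER>user1</USER>
    <USER>user2</USER>
</SELECT>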
Dynamic Storage Tiering File placement policy rules The last rule in the policy document on which the file system's active placement policy is based should specify * as the only selection criterion in its SELECT statement, and a CREATE statement naming the desired placement class for files not selected by other rules.
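Such a catchall rule might look like this (a sketch; the tier name is illustrative):

<RULE Name="catchall" Flags="data">
    <SELECT>
        <PATTERN>*</PATTERN>
    </SELECT>
    <CREATE>
        <ON>
            <DESTINATION>
                <CLASS>tier2</CLASS>
            </DESTINATION>
        </ON>
    </CREATE>
</RULE>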
Dynamic Storage Tiering File placement policy rules Failing that, VxFS resorts to its internal space allocation algorithms, so file allocation does not fail unless there is no available space anywhere in the file system's volume set. The Flags="any" attribute differs from the catchall rule in that this attribute applies only to files designated by the SELECT statement in the rule, which may be less inclusive than the * file selection specification of the catchall rule.
Dynamic Storage Tiering File placement policy rules The <BALANCE_SIZE> element with a value of one megabyte is specified for allocations on tier2 volumes. For files allocated on tier2 volumes, the first megabyte would be allocated on the first volume, the second on the second volume, and so forth.
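In the grammar, that specification looks like this (a sketch):

<CREATE>
    <ON>
        <DESTINATION>
            <CLASS>tier2</CLASS>
            <BALANCE_SIZE Units="MB">1</BALANCE_SIZE>
        </DESTINATION>
    </ON>
</CREATE>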
Dynamic Storage Tiering File placement policy rules A RELOCATE statement has the following general form:

<RELOCATE>
    <FROM>
        <SOURCE>
            <CLASS>placement_class_name</CLASS>
        </SOURCE>
        ...additional placement class specifications...
    </FROM>
    <TO>
        <DESTINATION>
            <CLASS>placement_class_name</CLASS>
        </DESTINATION>
        ...additional placement class specifications...
    </TO>
    <WHEN>
        ...relocation conditions...
    </WHEN>
</RELOCATE>

A RELOCATE statement contains the following clauses: An optional <FROM> clause that contains a list of placement classes from whose volumes designated files should be relocated if the files meet the conditions specified in the <WHEN> clause.
Dynamic Storage Tiering File placement policy rules A <TO> clause indicates the placement classes to which qualifying files should be relocated. Unlike the source placement class list in a <FROM> clause, placement classes in a <TO> clause are specified in priority order. Files are relocated to volumes in the first specified placement class if possible, to the second if not, and so forth.
Dynamic Storage Tiering File placement policy rules A <WHEN> clause may specify the following relocation criteria:
■ Age (<ACCAGE> for access age, <MODAGE> for modification age): this criterion is met when files are unaccessed or unmodified for a designated period or during a designated period relative to the time at which the fsppadm enforce command was issued.
■ Size (<SIZE>): this criterion is met when files exceed or drop below a designated size or fall within a designated size range.
■ I/O temperature (<IOTEMP>): this criterion is met when files exceed or drop below a designated I/O temperature, or fall within a designated I/O temperature range.
Dynamic Storage Tiering File placement policy rules The temperature conditions in a <WHEN> clause take the following general form:

<WHEN>
    <IOTEMP Type="nrbytes|nwbytes|nrwbytes">
        <MIN Flags="gt">min_I/O_temperature</MIN>
        <MAX Flags="lt">max_I/O_temperature</MAX>
        <PERIOD>days_of_interest</PERIOD>
    </IOTEMP>
    <ACCESSTEMP Type="nreads|nwrites|nrws">
        <MIN Flags="gt">min_access_temperature</MIN>
        <MAX Flags="lt">max_access_temperature</MAX>
        <PERIOD>days_of_interest</PERIOD>
    </ACCESSTEMP>
</WHEN>
Dynamic Storage Tiering File placement policy rules Including an <ACCAGE> element with a <MIN> value in a <WHEN> clause causes VxFS to relocate files to which the rule applies that have been inactive for longer than the specified interval. Such a rule would typically be used to relocate inactive files to less expensive storage tiers. Conversely, including a <MAX> value causes files accessed within the specified interval to be relocated.
Dynamic Storage Tiering File placement policy rules An I/O temperature may be specified as a minimum by using a <MIN> element, as a maximum by using a <MAX> element, or as a range by using both. However, I/O temperature is dimensionless and therefore has no specification for units. VxFS computes files' I/O temperatures over the period between the time when the fsppadm enforce command was issued and the number of days in the past specified in the <PERIOD> element, where a day is a 24-hour period.
Dynamic Storage Tiering File placement policy rules For example:

<RELOCATE>
    <FROM>
        <SOURCE>
            <CLASS>tier1</CLASS>
        </SOURCE>
    </FROM>
    <TO>
        <DESTINATION>
            <CLASS>tier2</CLASS>
        </DESTINATION>
    </TO>
</RELOCATE>

The files designated by the rule's SELECT statement that reside on volumes in placement class tier1 at the time the fsppadm enforce command executes would be unconditionally relocated to volumes in placement class tier2 as long as space permitted.
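Expressed in the same grammar, a relocation with size and access-age conditions would look as follows (a sketch; the values match the description in the next paragraph):

<RELOCATE>
    <FROM>
        <SOURCE>
            <CLASS>tier1</CLASS>
        </SOURCE>
    </FROM>
    <TO>
        <DESTINATION>
            <CLASS>tier2</CLASS>
        </DESTINATION>
    </TO>
    <WHEN>
        <SIZE Units="MB">
            <MIN Flags="gt">1</MIN>
            <MAX Flags="lt">1000</MAX>
        </SIZE>
        <ACCAGE Units="days">
            <MIN Flags="gt">30</MIN>
        </ACCAGE>
    </WHEN>
</RELOCATE>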
Dynamic Storage Tiering File placement policy rules Files designated by the rule's SELECT statement are relocated from tier1 volumes to tier2 volumes if they are between 1 MB and 1000 MB in size and have not been accessed for 30 days. VxFS relocates qualifying files in the order in which it encounters them as it scans the file system's directory tree.
Dynamic Storage Tiering File placement policy rules tier1 volumes are fully occupied, VxFS stops scheduling qualifying files for relocation. VxFS file placement policies are able to control file placement across any number of placement classes.
Dynamic Storage Tiering File placement policy rules For example, the following statements relocate files among three tiers by size:

<RELOCATE>
    <TO>
        <DESTINATION>
            <CLASS>tier1</CLASS>
        </DESTINATION>
    </TO>
    <WHEN>
        <SIZE Units="MB">
            <MAX Flags="lt">10</MAX>
        </SIZE>
    </WHEN>
</RELOCATE>
<RELOCATE>
    <TO>
        <DESTINATION>
            <CLASS>tier2</CLASS>
        </DESTINATION>
    </TO>
    <WHEN>
        <SIZE Units="MB">
            <MIN Flags="gteq">10</MIN>
            <MAX Flags="lt">100</MAX>
        </SIZE>
    </WHEN>
</RELOCATE>
<RELOCATE>
    <TO>
        <DESTINATION>
            <CLASS>tier3</CLASS>
        </DESTINATION>
    </TO>
    <WHEN>
        <SIZE Units="MB">
            <MIN Flags="gteq">100</MIN>
        </SIZE>
    </WHEN>
</RELOCATE>
Dynamic Storage Tiering File placement policy rules DELETE statement The DELETE file placement policy rule statement is very similar to the RELOCATE statement in both form and function, lacking only the <TO> clause. File placement policy-based deletion may be thought of as relocation with a fixed destination. Note: Use DELETE statements with caution. The following XML snippet illustrates the general form of the DELETE statement:

<DELETE>
    <FROM>
        <SOURCE>
            <CLASS>placement_class_name</CLASS>
        </SOURCE>
        ...additional placement class specifications...
    </FROM>
    <WHEN>
        ...deletion conditions...
    </WHEN>
</DELETE>
Dynamic Storage Tiering Calculating I/O temperature and access temperature For example:

<DELETE>
    <FROM>
        <SOURCE>
            <CLASS>tier3</CLASS>
        </SOURCE>
    </FROM>
</DELETE>
<DELETE>
    <FROM>
        <SOURCE>
            <CLASS>tier2</CLASS>
        </SOURCE>
    </FROM>
    <WHEN>
        <ACCAGE Units="days">
            <MIN Flags="gt">120</MIN>
        </ACCAGE>
    </WHEN>
</DELETE>

The first DELETE statement unconditionally deletes files designated by the rule's SELECT statement that reside on tier3 volumes when the fsppadm enforce command is issued. The second deletes files on tier2 volumes that have not been accessed for 120 days.
Dynamic Storage Tiering Calculating I/O temperature and access temperature though it may have been inactive for the month preceding. If the intent of a policy rule is to relocate inactive files to lower tier volumes, it will perform badly against files that happen to be accessed, however casually, within the interval defined by the value of the <ACCAGE> parameter. ■ Access age is a poor indicator of resumption of significant activity.
Dynamic Storage Tiering Calculating I/O temperature and access temperature As its name implies, the File Change Log records information about changes made to files in a VxFS file system. In addition to recording creations, deletions, and extensions, the FCL periodically captures the cumulative amount of I/O activity (number of bytes read and written) on a file-by-file basis.
Dynamic Storage Tiering Calculating I/O temperature and access temperature For example:

<RELOCATE>
    <FROM>
        <SOURCE>
            <CLASS>tier1</CLASS>
        </SOURCE>
    </FROM>
    <TO>
        <DESTINATION>
            <CLASS>tier2</CLASS>
        </DESTINATION>
    </TO>
    <WHEN>
        <IOTEMP Type="nrwbytes">
            <MAX Flags="lt">3</MAX>
            <PERIOD>4</PERIOD>
        </IOTEMP>
    </WHEN>
</RELOCATE>

This snippet specifies that files to which the rule applies should be relocated from tier1 volumes to tier2 volumes if their I/O temperatures fall below 3 over a period of 4 days.
Dynamic Storage Tiering Multiple criteria in file placement policy rule statements For example:

<RELOCATE>
    <FROM>
        <SOURCE>
            <CLASS>tier2</CLASS>
        </SOURCE>
    </FROM>
    <TO>
        <DESTINATION>
            <CLASS>tier1</CLASS>
        </DESTINATION>
    </TO>
    <WHEN>
        <IOTEMP Type="nrbytes">
            <MIN Flags="gt">5</MIN>
            <PERIOD>2</PERIOD>
        </IOTEMP>
    </WHEN>
</RELOCATE>

The statement specifies that files on tier2 volumes whose I/O temperature as calculated using the number of bytes read is above 5 over a 2-day period are to be relocated to tier1 volumes. Bytes written to the file during the period of interest are not part of this calculation.
Dynamic Storage Tiering Multiple criteria in file placement policy rule statements This example is in direct contrast to the treatment of selection criteria clauses of different types.
Dynamic Storage Tiering Multiple criteria in file placement policy rule statements Multiple placement classes in <ON> clauses of CREATE statements and in <TO> clauses of RELOCATE statements Both the <ON> clause of the CREATE statement and the <TO> clause of the RELOCATE statement can specify priority ordered lists of placement classes using multiple <DESTINATION> XML elements. VxFS uses a volume in the first placement class in a list for the designated purpose of file creation or relocation, if possible.
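For instance, a CREATE statement with a prioritized <ON> clause might look like this (a sketch; the tier names are illustrative):

<CREATE>
    <ON>
        <DESTINATION>
            <CLASS>tier1</CLASS>
        </DESTINATION>
        <DESTINATION>
            <CLASS>tier2</CLASS>
        </DESTINATION>
    </ON>
</CREATE>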
Dynamic Storage Tiering File placement policy rule and statement ordering Multiple placement classes in <FROM> clauses of RELOCATE and DELETE statements The <FROM> clause in RELOCATE and DELETE statements can include multiple source placement classes. However, unlike the <ON> and <TO> clauses, no order or priority is implied in <FROM> clauses.
Dynamic Storage Tiering File placement policy rule and statement ordering a file is designated by a SELECT statement is the only rule against which that file is evaluated. Thus, rules whose purpose is to supersede a generally applicable behavior for a special class of files should precede the general rules in a file placement policy document. The following XML snippet illustrates faulty rule placement with potentially unintended consequences:
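A sketch consistent with the discussion that follows, where GeneralRule selects all files and DatabaseRule selects *.db files (the rule and tier names are illustrative):

<PLACEMENT_POLICY Version="5.0" Name="policy1">
    <RULE Name="GeneralRule" Flags="data">
        <SELECT>
            <PATTERN>*</PATTERN>
        </SELECT>
        <CREATE>
            <ON>
                <DESTINATION>
                    <CLASS>tier2</CLASS>
                </DESTINATION>
            </ON>
        </CREATE>
        <!-- other statements -->
    </RULE>
    <RULE Name="DatabaseRule" Flags="data">
        <SELECT>
            <PATTERN>*.db</PATTERN>
        </SELECT>
        <CREATE>
            <ON>
                <DESTINATION>
                    <CLASS>tier1</CLASS>
                </DESTINATION>
            </ON>
        </CREATE>
        <!-- other statements -->
    </RULE>
</PLACEMENT_POLICY>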
Dynamic Storage Tiering File placement policy rule and statement ordering Because the GeneralRule rule appears first and selects all files, the DatabaseRule rule will never apply to any file. This fault can be remedied by exchanging the order of the two rules. If the DatabaseRule rule occurs first in the policy document, VxFS encounters it first when determining where to place new files whose names follow the pattern *.db, and correctly allocates space for them on tier1 volumes.
Dynamic Storage Tiering File placement policies and extending files Statements that specify less inclusive criteria should precede statements that specify more inclusive criteria in a file placement policy document. The GUI automatically produces the correct statement order for the policies it creates. File placement policies and extending files In a VxFS file system with an active file placement policy, the placement class on whose volume a file resides is part of its metadata, and is attached when it is created and updated when it is relocated.
Appendix A Quick Reference This appendix includes the following topics: ■ Command summary ■ Online manual pages ■ Creating a VxFS file system ■ Converting a file system to VxFS ■ Mounting a file system ■ Unmounting a file system ■ Displaying information on mounted file systems ■ Identifying file system types ■ Resizing a file system ■ Backing up and restoring a file system ■ Using quotas Command summary Symbolic links to all VxFS command executables are installed in the /opt/VRTS/bin directory.
Quick Reference Command summary Table A-1 VxFS commands Command Description cfscluster SFCFS cluster configuration command. cfsdgadm Adds or deletes shared disk groups to/from a cluster configuration. cfsmntadm Adds, deletes, modifies, and sets policy on cluster mounted file systems. cfsmount, cfsumount Mounts or unmounts a cluster file system. df Reports the number of free disk blocks and inodes for a VxFS file system. diskusg Generates VxFS disk accounting data by user ID.
Quick Reference Command summary Table A-1 VxFS commands (continued) Command Description fsppmk Creates placement policies. fstag Creates, deletes, or lists file tags. fstyp Returns the type of file system on a specified disk partition. fsvmap Maps volumes of VxFS file systems to files. fsvoladm Administers VxFS volumes. glmconfig Configures Group Lock Managers (GLM). mkfs Constructs a VxFS file system. mount Mounts a VxFS file system.
Quick Reference Online manual pages Table A-1 VxFS commands (continued) Command Description vxrestore Restores a file system incrementally. vxtunefs Tunes a VxFS file system. vxumount Unmounts a VxFS file system. vxupgrade Upgrades the disk layout of a mounted VxFS file system. Online manual pages This release includes the following online manual pages as part of the VRTSvxfs package.
Quick Reference Online manual pages Table A-3 Section 1M manual pages Section 1M Description cfscluster SFCFS cluster configuration command. cfsdgadm Adds or deletes shared disk groups to/from a cluster configuration. cfsmntadm Adds, deletes, modifies, and sets policy on cluster mounted file systems. cfsmount, cfsumount Mounts or unmounts a cluster file system. df_vxfs Reports the number of free disk blocks and inodes for a VxFS file system. extendfs_vxfs Extends the size of a VxFS file system. fcladm Administers VxFS File Change Logs. ff_vxfs Lists file names and inode information for a VxFS file system. fsadm_vxfs Resizes or reorganizes a VxFS file system. fsapadm Administers VxFS allocation policies.
Quick Reference Online manual pages Table A-3 Section 1M manual pages (continued) Section 1M Description glmconfig Configures Group Lock Managers (GLM). This functionality is available only with the Veritas Cluster File System product. mkfs_vxfs Constructs a VxFS file system. mount_vxfs Mounts a VxFS file system. ncheck_vxfs Generates path names from inode numbers for a VxFS file system. newfs_vxfs Creates a new VxFS file system. quot Summarizes ownership on a VxFS file system.
Quick Reference Online manual pages Table A-4 Section 3 manual pages (continued) Section 3 Description fsckpt_fclose Closes a Storage Checkpoint file. fsckpt_finfo Returns the status information from a Storage Checkpoint file. fsckpt_fopen Opens a Storage Checkpoint file. fsckpt_fpromote Promotes a file from a Storage Checkpoint into another fileset. fsckpt_fsclose Closes a mount point opened for Storage Checkpoint management.
Quick Reference Online manual pages Table A-4 Section 3 manual pages (continued) Section 3 Description vxfs_ap_assign_fs_pat Assigns a pattern-based allocation policy for a file system. vxfs_ap_define Defines a new allocation policy. vxfs_ap_define2 Defines a new allocation policy. vxfs_ap_enforce_file Ensures that all blocks in a specified file match the file allocation policy. vxfs_ap_enforce_file2 Reallocates blocks in a file to match allocation policies.
Quick Reference Online manual pages Table A-4 Section 3 manual pages (continued) Section 3 Description vxfs_nattr_check Checks for the existence of named data streams. vxfs_nattr_fcheck vxfs_nattr_link Links to a named data stream. vxfs_nattr_open Opens a named data stream. vxfs_nattr_rename Renames a named data stream. vxfs_nattr_unlink Removes a named data stream. vxfs_nattr_utimes Sets access and modification times for named data streams.
Quick Reference Creating a VxFS file system Table A-5 Section 4 manual pages (continued) Section 4 Description tunefstab Describes the VxFS file system tuning parameters table. Table A-6 describes the VxFS-specific section 7 manual pages. Table A-6 Section 7 manual pages Section 7 Description vxfsio Describes the VxFS file system control functions. Creating a VxFS file system The mkfs command creates a VxFS file system by writing to a special character device file.
Quick Reference Creating a VxFS file system 193 To create a file system ◆ Use the mkfs command to create a file system: mkfs [-F vxfs] [-m] [generic_options] [-o specific_options] \ special [size] -F vxfs Specifies the VxFS file system type. -m Displays the command line that was used to create the file system. The file system must already exist. This option enables you to determine the parameters used to construct the file system. generic_options Options common to most other file system types.
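For example, to create a VxFS file system on a VxVM volume (the device name and size in sectors are illustrative):
# mkfs -F vxfs /dev/vx/rdsk/fsvol/vol1 1048576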
Quick Reference Converting a file system to VxFS Converting a file system to VxFS The vxfsconvert command can be used to convert an HFS file system to a VxFS file system. See the vxfsconvert(1M) manual page. To convert an HFS file system to a VxFS file system ◆ Use the vxfsconvert command to convert an HFS file system to VxFS: vxfsconvert [-l logsize] [-s size] [-efnNvyY] special -e Estimates the amount of space required to complete the conversion.
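For example, to estimate the space needed and then convert an HFS file system (the device name is illustrative):
# vxfsconvert -e /dev/vg00/rlvol3
# vxfsconvert /dev/vg00/rlvol3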
Quick Reference Mounting a file system and Veritas-installed products, the generic mount command executes the VxFS mount command from the directory /sbin/fs/vxfs. If the -F option is not supplied, the command searches the file /etc/fstab for a file system and an fstype matching the special file or mount point provided. If no file system type is specified, mount uses the default file system type (VxFS).
Quick Reference Mounting a file system Support for cluster file If you specify the cluster option, the file system is mounted in systems shared mode. HP Serviceguard Storage Management Suite environments require HP Serviceguard to be configured correctly before a complete clustering environment is enabled. Using Storage Checkpoints The ckpt=checkpoint_name option mounts a Storage Checkpoint of a mounted file system that was previously created by the fsckptadm command.
Quick Reference Unmounting a file system You must specify the following: ■ The special block device name to mount ■ The mount point ■ The file system type (vxfs) ■ The mount options ■ The backup frequency ■ Which fsck pass looks at the file system Each entry must be on a single line. See the fstab(4) manual page. The following is a typical fstab file with the new file system on the last line: # System /etc/fstab file.
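The new file system's entry, following the six fields listed above, might read as follows (the device name and mount point are illustrative):
/dev/vx/dsk/fsvol/vol1 /mnt1 vxfs delaylog 0 2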
Quick Reference Displaying information on mounted file systems Example of unmounting a file system The following are examples of unmounting file systems. To unmount the file system /dev/vx/dsk/fsvol/vol1 ◆ Unmount the file system: # umount /dev/vx/dsk/fsvol/vol1 To unmount the file system mounted at /mnt1 ◆ Unmount the file system: # vxumount /mnt1 Displaying information on mounted file systems Use the mount command to display a list of currently mounted file systems.
Quick Reference Identifying file system types To display information on mounted file systems ◆ Invoke the mount command without options: # mount /dev/vg00/lvol3 on / type vxfs ioerror=mwdisable,delaylog \ Wed Jun 5 3:23:40 2004 /dev/vg00/lvol8 on /var type vxfs ioerror=mwdisable,delaylog Wed Jun 5 3:23:56 2004 /dev/vg00/lvol7 on /usr type vxfs ioerror=mwdisable,delaylog Wed Jun 5 3:23:56 2004 /dev/vg00/lvol6 on /tmp type vxfs ioerror=mwdisable,delaylog Wed Jun 5 3:23:56 2004 /dev/vg00/lvol5 on /opt type v
Quick Reference Resizing a file system Example of determining a file system's type The following example uses the fstyp command to determine the file system type of the /dev/vx/dsk/fsvol/vol1 device.
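A sketch of that command and its output:
# fstyp /dev/vx/dsk/fsvol/vol1
vxfs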
Quick Reference Resizing a file system To extend a VxFS file system ◆ Use the fsadm command to extend a VxFS file system: /usr/lib/fs/vxfs/fsadm [-F vxfs] [-b newsize] [-r rawdev] \ mount_point -F vxfs Specifies the VxFS file system type. newsize The size (in sectors) to which the file system will increase. mount_point The file system's mount point. -r rawdev Specifies the path name of the raw device if there is no entry in /etc/fstab and fsadm cannot determine the raw device.
Quick Reference Resizing a file system mount_point The file system's mount point. -r rawdev Specifies the path name of the raw device if there is no entry in /etc/fstab and fsadm cannot determine the raw device. Example of shrinking a file system The following example shrinks a VxFS file system mounted at /ext to 20480 sectors.
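A sketch of that command:
# fsadm -F vxfs -b 20480 /ext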
Quick Reference Resizing a file system -E Reports on extent fragmentation. mount_point The file system's mount point. -r rawdev Specifies the path name of the raw device if there is no entry in /etc/fstab and fsadm cannot determine the raw device. Example of reorganizing a file system The following example reorganizes the file system mounted at /ext.
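A sketch of such a reorganization, using the directory (-d) and extent (-e) reorganization options:
# fsadm -F vxfs -d -e /ext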
Quick Reference Backing up and restoring a file system To increase the capacity of a file system 1 Unmount the file system: # umount /dev/vg00/lvol7 2 Extend the volume so that the volume can contain the larger file system: # lvextend -L larger_size /dev/vg00/lvol7 3 Extend the file system: # extendfs -F vxfs /dev/vg00/rlvol7 4 Mount the file system: # mount -F vxfs /dev/vg00/lvol7 mount_point Backing up and restoring a file system To back up a VxFS file system, you first create a read-only s
Quick Reference Backing up and restoring a file system destination The name of the special device on which to create the snapshot. size The size of the snapshot file system in sectors. snap_mount_point Location where to mount the snapshot; snap_mount_point must exist before you enter this command. Example of creating and mounting a snapshot of a VxFS file system The following example creates a snapshot file system of the file system at /home on /dev/vx/dsk/fsvol/vol1, and mounts it at /snapmount.
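A sketch of that command (the snapshot size in sectors is illustrative):
# mount -F vxfs -o snapof=/home,snapsize=32768 /dev/vx/dsk/fsvol/vol1 /snapmount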
Quick Reference Using quotas To back up a VxFS snapshot file system ◆ Back up the VxFS snapshot file system mounted at /snapmount to the tape drive with device name /dev/rmt/: # vxdump -cf /dev/rmt /snapmount Restoring a file system After backing up the file system, you can restore it using the vxrestore command. First, create and mount an empty file system.
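A sketch of a restore from tape into an empty file system mounted at /restore (the tape device name is illustrative):
# cd /restore
# vxrestore -rf /dev/rmt/0m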
Quick Reference Using quotas Turning on quotas You can enable quotas at mount time or after a file system is mounted. The root directory of the file system must contain a file named quotas that is owned by root.
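For example, to create the quotas file and turn quotas on for a mounted file system, or to turn them on at mount time (device and mount point are illustrative):
# touch /mnt1/quotas
# quotaon /mnt1
# mount -F vxfs -o quota /dev/vx/dsk/fsvol/vol1 /mnt1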
Quick Reference Using quotas limits or assign them specific values. Users are allowed to exceed the soft limit, but only for a specified time. Disk usage can never exceed the hard limit. The default time limit for exceeding the soft limit is seven days on VxFS file systems. edquota creates a temporary file for a specified user. This file contains on-disk quotas for each mounted VxFS file system that has a quotas file.
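For example, to edit a user's limits and then the time limits (the user name is illustrative):
# edquota user1
# edquota -t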
Quick Reference Using quotas To turn off quotas for a file system ◆ Turn off quotas for a file system: quotaoff mount_point
Appendix B Diagnostic messages This appendix includes the following topics: ■ File system response to problems ■ About kernel messages ■ Kernel messages ■ About unique message identifiers ■ Unique message identifiers File system response to problems When the file system encounters problems, it responds in one of the following ways: Marking an inode bad Inodes can be marked bad if an inode update or a directory-block update fails.
Diagnostic messages About kernel messages Disabling a file system If an error occurs that compromises the integrity of the file system, VxFS disables itself. If the intent log fails or an inode-list error occurs, the super-block is ordinarily updated (setting the VX_FULLFSCK flag) so that the next fsck does a full structural check. If this super-block update fails, any further changes to the file system can cause inconsistencies that are undetectable by the intent log replay.
Diagnostic messages Kernel messages instance of the message to guarantee that the sequence of events is known when analyzing file system problems. Each message is also written to an internal kernel buffer that you can view in the file /var/adm/syslog/syslog.log. In some cases, additional data is written to the kernel buffer. For example, if an inode is marked bad, the contents of the bad inode are written.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 002 WARNING: msgcnt x: mesg 002: V-2-2: vx_snap_strategy mount_point file system write attempt to read-only file system WARNING: msgcnt x: mesg 002: V-2-2: vx_snap_copyblk mount_point file system write attempt to read-only file system Description The kernel tried to write to a read-only file system. This is an unlikely problem, but if it occurs, the file system is disabled.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 006, 007 WARNING: msgcnt x: mesg 006: V-2-6: vx_sumupd - mount_point file system summary update in au aun failed WARNING: msgcnt x: mesg 007: V-2-7: vx_sumupd - mount_point file system summary update in inode au iaun failed Description An I/O error occurred while writing the allocation unit or inode allocation unit bitmap summary to disk. This sets the VX_FULLFSCK flag on the file system.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 010 WARNING: msgcnt x: mesg 010: V-2-10: vx_ialloc - mount_point file system inode inumber not free Description When the kernel allocates an inode from the free inode bitmap, it checks the mode and link count of the inode. If either is non-zero, the free inode bitmap or the inode list is corrupted.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 013 WARNING: msgcnt x: mesg 013: V-2-13: vx_iposition - mount_point file system inode inumber invalid inode list extent Description For a Version 2 and above disk layout, the inode list is dynamically allocated. When the kernel tries to read an inode, it must look up the location of the inode in the inode list file. If the kernel finds a bad extent, the inode cannot be accessed.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 015 WARNING: msgcnt x: mesg 015: V-2-15: vx_ibadinactive mount_point file system cannot mark inode inumber bad WARNING: msgcnt x: mesg 015: V-2-15: vx_ilisterr - mount_point file system cannot mark inode inumber bad Description An attempt to mark an inode bad on disk, and the super-block update to set the VX_FULLFSCK flag, failed.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 017 219
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition WARNING: msgcnt x: mesg 017: V-2-17: vx_attr_getblk mount_point file system inode inumber marked bad in core WARNING: msgcnt x: mesg 017: V-2-17: vx_attr_iget - mount_point file system inode inumber marked bad in core WARNING: msgcnt x: mesg 017: V-2-17: vx_attr_indadd mount_point file system inode inumber marked bad in core WARNING: msgcnt x: mesg 017: V-2-17: vx_attr_indtrunc mount_point
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition WARNING: msgcnt x: mesg 017: V-2-17: vx_get_alloc - mount_point file system inode inumber marked bad in core 017 (continued) WARNING: msgcnt x: mesg 017: V-2-17: vx_ilisterr - mount_point file system inode inumber marked bad in core WARNING: msgcnt x: mesg 017: V-2-17: vx_indtrunc - mount_point file system inode inumber marked bad in core WARNING: msgcnt x: mesg 017: V-2-17: vx_iread - mount_p
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 017 (continued) Description When inode information is no longer dependable, the kernel marks it bad in memory. This is followed by a message to mark it bad on disk as well unless the mount command ioerror option is set to disable, or there is subsequent I/O failure when updating the inode on disk. No further operations can be performed on the inode.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 019 WARNING: msgcnt x: mesg 019: V-2-19: vx_log_add - mount_point file system log overflow Description Log ID overflow. When the log ID reaches VX_MAXLOGID (approximately one billion by default), a flag is set so the file system resets the log ID at the next opportunity.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 021 WARNING: msgcnt x: mesg 021: V-2-21: vx_fs_init - mount_point file system validation failure ■ Description When a VxFS file system is mounted, the structure is read from disk. If the file system is marked clean, the structure is correct and the first block of the intent log is cleared.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 022 WARNING: msgcnt x: mesg 022: V-2-22: vx_mountroot - root file system remount failed Description The remount of the root file system failed. The system will not be usable if the root file system cannot be remounted for read/write access. When a root Veritas File System is first mounted, it is mounted for read-only access.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 024 WARNING: msgcnt x: mesg 024: V-2-24: vx_cutwait - mount_point file system current usage table update error Description Update to the current usage table (CUT) failed. For a Version 2 disk layout, the CUT contains a fileset version number and total number of blocks used by each fileset. The VX_FULLFSCK flag is set in the super-block.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 027 WARNING: msgcnt x: mesg 027: V-2-27: vx_snap_bpcopy mount_point snapshot file system write error Description A write to the snapshot file system failed. As the primary file system is updated, copies of the original data are read from the primary file system and written to the snapshot file system. If one of these writes fails, the snapshot file system is disabled.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 029, 030 WARNING: msgcnt x: mesg 029: V-2-29: vx_snap_getbp mount_point snapshot file system block map write error WARNING: msgcnt x: mesg 030: V-2-30: vx_snap_getbp mount_point snapshot file system block map read error Description During a snapshot backup, each snapshot file system maintains a block map on disk.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 033 WARNING: msgcnt x: mesg 033: V-2-33: vx_check_badblock mount_point file system had an I/O error, setting VX_FULLFSCK Description When the disk driver encounters an I/O error, it sets a flag in the super-block structure. If the flag is set, the kernel will set the VX_FULLFSCK flag as a precautionary measure.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 036 WARNING: msgcnt x: mesg 036: V-2-36: vx_lctbad - mount_point file system link count table lctnumber bad Description Update to the link count table (LCT) failed. For a Version 2 and above disk layout, the LCT contains the link count for all the structural inodes. The VX_FULLFSCK flag is set in the super-block. If the super-block cannot be written, the file system is disabled.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 038 WARNING: msgcnt x: mesg 038: V-2-38: vx_dataioerr - volume_name file system file data [read|write] error in dev/block device_ID/block Description A read or a write error occurred while accessing file data. The message specifies whether the disk I/O that failed was a read or a write. File data includes data currently in files and free blocks.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 039 WARNING: msgcnt x: mesg 039: V-2-39: vx_writesuper - file system super-block write error Description An attempt to write the file system super block failed due to a disk I/O error. If the file system was being mounted at the time, the mount will fail.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 056 WARNING: msgcnt x: mesg 056: V-2-56: vx_mapbad - mount_point file system extent allocation unit state bitmap number number marked bad ■ Description If there is an I/O failure while writing a bitmap, the map is marked bad. The kernel considers the maps to be invalid, so does not do any more resource allocation from maps.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 058 WARNING: msgcnt x: mesg 058: V-2-58: vx_isum_bad - mount_point file system inode allocation unit summary number number marked bad Description An I/O error occurred reading or writing an inode allocation unit summary. The VX_FULLFSCK flag is set. If the VX_FULLFSCK flag cannot be set, the file system is disabled. ■ Action Check the console log for I/O errors.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 060 WARNING: msgcnt x: mesg 060: V-2-60: vx_snap_getbitbp mount_point snapshot file system bitmap read error Description An I/O error occurred while reading the snapshot file system bitmap. There is no problem with snapped file system, but the snapshot file system is disabled. ■ Action Check the console log for I/O errors. If the problem is a disk failure, replace the disk.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 063 WARNING: msgcnt x: mesg 063: V-2-63: vx_fset_markbad mount_point file system mount_point fileset (index number) marked bad Description An error occurred while reading or writing a fileset structure. VX_FULLFSCK flag is set. If the VX_FULLFSCK flag cannot be set, the file system is disabled. ■ Action Unmount the file system and use fsck to run a full structural check.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 067 WARNING: msgcnt x: mesg 067: V-2-67: mount of device_path requires HSM agent Description The file system mount failed because the file system was marked as being under the management of an HSM agent, and no HSM agent was found during the mount. ■ Action Restart the HSM agent and try to mount the file system again.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 071 NOTICE: msgcnt x: mesg 071: V-2-71: cleared data I/O error flag in mount_point file system Description The user data I/O error flag was reset when the file system was mounted. This message indicates that a read or write error occurred while the file system was previously mounted. See Message Number 038. ■ Action Informational only, no action required.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 076 NOTICE: msgcnt x: mesg 076: V-2-76: checkpoint asynchronous operation on mount_point file system still in progress ■ Description An EBUSY message was received while trying to unmount a file system. The unmount failure was caused by a pending asynchronous fileset operation, such as a fileset removal or fileset conversion to a nodata Storage Checkpoint.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 079
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition WARNING: msgcnt x: mesg 017: V-2-79: vx_attr_getblk mount_point file system inode inumber marked bad on disk WARNING: msgcnt x: mesg 017: V-2-79: vx_attr_iget - mount_point file system inode inumber marked bad on disk WARNING: msgcnt x: mesg 017: V-2-79: vx_attr_indadd mount_point file system inode inumber marked bad on disk WARNING: msgcnt x: mesg 017: V-2-79: vx_attr_indtrunc mount_point file
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition WARNING: msgcnt x: mesg 017: V-2-79: vx_get_alloc - mount_point file system inode inumber marked bad on disk 079 (continued) WARNING: msgcnt x: mesg 017: V-2-79: vx_ilisterr - mount_point file system inode inumber marked bad on disk WARNING: msgcnt x: mesg 017: V-2-79: vx_indtrunc - mount_point file system inode inumber marked bad on disk WARNING: msgcnt x: mesg 017: V-2-79: vx_iread - mo
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 079 (continued) ■ Description When inode information is no longer dependable, the kernel marks it bad on disk. The most common reason for marking an inode bad is a disk I/O failure. If there is an I/O failure in the inode list, on a directory block, or an indirect address extent, the integrity of the data in the inode, or the data the kernel tried to write to the inode list, is questionable.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 081 WARNING: msgcnt x: mesg 081: V-2-81: possible network partition detected Description This message displays when CFS detects a possible network partition and disables the file system locally, that is, on the node where the message appears. ■ Action There are one or more private network links for communication between the nodes in a cluster.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 084 WARNING: msgcnt x: mesg 084: V-2-84: in volume_name quota on failed during assumption. (stage stage_number) Description In a cluster file system, when the primary of the file system fails, a secondary file system is chosen to assume the role of the primary. The assuming node will be able to enforce quotas after becoming the primary.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 088 WARNING: msgcnt x: mesg 088: V-2-88: quotaon on file_system failed; limits exceed limit Description The external quota file, quotas, contains the quota values, which range from 0 up to 2147483647. When quotas are turned on by the quotaon command, this message displays when a user exceeds the quota limit. ■ Action Correct the quota values in the quotas file.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 092 WARNING: msgcnt x: mesg 092: V-2-92: vx_mkfcltran - failure to map offset offset in File Change Log file Description The vxfs kernel was unable to map actual storage to the next offset in the File Change Log file. This is most likely caused by a problem with allocating to the FCL file. Because no new FCL records can be written to the FCL file, the FCL has been deactivated.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 098 WARNING: msgcnt x: mesg 098: V-2-98: VxFS failed to initialize File Change Log for fileset fileset (index number) of mount_point file system Description VxFS mount failed to initialize FCL structures for the current fileset mount. As a result, FCL could not be turned on. The FCL file will have no logging records. ■ Action Reactivate the FCL.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 101 WARNING: msgcnt x: mesg 101: V-2-101: File Change Log on mount_point for file set index approaching max file size supported. File Change Log will be reactivated when its size hits max file size supported. ■ Description The size of the FCL file is approaching the maximum file size supported. This size is platform specific.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 104 WARNING: msgcnt x: mesg 104: V-2-104: File System mount_point device volume_name disabled ■ Description The volume manager detected that the specified volume has failed, and the volume manager has disabled the volume. No further I/O requests are sent to the disabled volume. ■ 105 Action The volume must be repaired.
Diagnostic messages Kernel messages Table B-1 Kernel messages (continued) Message Number Message and Definition 108 WARNING: msgcnt x: mesg 108: V-2-108: vx_dexh_error - error: fileset fileset, directory inode number dir_inumber, bad hash inode hash_inode, seg segment bno block_number ■ Description The supplemental hash for a directory is corrupt. ■ 109 Action If the file system is mounted read/write, the hash for the directory will be automatically removed and recreated.
Diagnostic messages About unique message identifiers Table B-1 Kernel messages (continued) Message Number Message and Definition 111 WARNING: msgcnt x: mesg 111: V-2-111: You have exceeded the authorized usage (maximum maxfs unique mounted user-data file systems) for this product and are out of compliance with your License Agreement. Please email sales_mail@symantec.com or contact your Symantec sales representative for information on how to obtain additional licenses for this product.
Diagnostic messages Unique message identifiers Unique message identifiers Some commonly encountered UMIs and the associated messages are described in the following table: Table B-2 Unique message identifiers and messages Message Number Message and Definition 20002 UX:vxfs command: ERROR: V-3-20002: message Description The command attempted to call stat() on a device path to ensure that the path refers to a character device before opening the device, but the stat() call failed.
Diagnostic messages Unique message identifiers Table B-2 Unique message identifiers and messages (continued) Message Number Message and Definition 20076 UX:vxfs command: ERROR: V-3-20076: message Description The command called stat() on a file, which is usually a file system mount point, but the call failed. ■ Action Check that the path specified is what was intended and that the user has permission to access that path.
Diagnostic messages Unique message identifiers Table B-2 Unique message identifiers and messages (continued) Message Number Message and Definition 21264 UX:vxfs command: ERROR: V-3-21264: message ■ Description The attempt to mount a VxFS file system has failed because either the volume being mounted or the directory which is to be the mount point is busy.
Diagnostic messages Unique message identifiers Table B-2 Unique message identifiers and messages (continued) Message Number Message and Definition 21272 UX:vxfs command: ERROR: V-3-21272: message Description The mount options specified contain mutually-exclusive options, or in the case of a remount, the new mount options differed from the existing mount options in a way that is not allowed to change in a remount.
Appendix C Disk layout This appendix includes the following topics: ■ About disk layouts ■ Supported disk layouts and operating systems ■ VxFS Version 4 disk layout ■ VxFS Version 5 disk layout ■ VxFS Version 6 disk layout ■ VxFS Version 7 disk layout About disk layouts The disk layout is the way file system information is stored on disk. On VxFS, seven different disk layout versions were created to take advantage of evolving technological developments.
Disk layout About disk layouts Version 3 Version 3 disk layout encompasses all file system structural information in files, rather than at fixed locations on disk, allowing for greater scalability. Version 3 supports files and file systems up to one terabyte in size. Not Supported Version 4 Version 4 disk layout encompasses all file system structural information in files, rather than at fixed locations on disk, allowing for greater scalability.
Disk layout Supported disk layouts and operating systems Supported disk layouts and operating systems Table C-1 shows which disk layouts are supported on which operating systems. File system type and operating system versions Table C-1 JFS 3.3, HP-UX 11.11 VxFS 3.5, HP-UX 11.
Disk layout VxFS Version 4 disk layout File system type and operating system versions (continued) Table C-1 JFS 3.3, HP-UX 11.11 VxFS 3.5, HP-UX 11.11 mkfs No No No Yes Yes Yes Local Mount No No No Yes Yes Yes Shared Mount No No No Yes Yes Yes mkfs No No No Yes Yes Yes Local Mount No No No Yes Yes Yes Shared Mount No No No Yes Yes Yes Disk Layout Version 6 Version 7 VxFS VxFS VxFS VxFS 5.0, 3.5.2, 4.1, 5.0, HP-UX 11i HP-UX HP-UX HP-UX Version 3 11.
Disk layout VxFS Version 4 disk layout the file system structures simply requires extending the appropriate structural files. This removes the extent size restriction imposed by the previous layouts. All Version 4 structural files reside in the structural fileset. The structural files in the Version 4 disk layout are: File Description object location table file Contains the object location table (OLT). The OLT, which is referenced from the super-block, is used to locate the other structural files.
Disk layout VxFS Version 4 disk layout File Description quotas files Contains quota information in records. Each record contains resources allocated either per user or per group. The Version 4 disk layout supports Access Control Lists and Block-Level Incremental (BLI) Backup. BLI Backup is a backup method that stores and retrieves only the data blocks changed since the previous backup, not entire files. This saves time, storage space, and the computing resources required to back up large databases.
Disk layout VxFS Version 5 disk layout VxFS Version 4 disk layout Figure C-1 Super-block Object Location Table OLT Extent Addresses Initial Inode Extents Fileset Header/ File Inode Number Fileset Header File Inode Initial Inode List Extent Addresses Inode List Inode Inode Allocation Unit Inode .... .... OLT Replica Primary Fileset Header Fileset Header File Inode List inum Structural Fileset Header Fileset Index and Name Primary Fileset Header max_inodes Features .... ....
Disk layout VxFS Version 6 disk layout Block Size Maximum File System Size 2048 bytes 8,589,934,078 sectors (≈8 TB) 4096 bytes 17,179,868,156 sectors (≈16 TB) 8192 bytes 34,359,736,312 sectors (≈32 TB) If you specify the file system size when creating a file system, the block size defaults to the appropriate value as shown above. See the mkfs(1M) manual page. See “About quota files on Veritas File System” on page 102.
Disk layout VxFS Version 7 disk layout VxFS Version 7 disk layout VxFS disk layout Version 7 is similar to Version 6, except that Version 7 enables support for variable and large size history log records, more than 2048 volumes, large directory hash, and Dynamic Storage Tiering. The Version 7 disk layout can theoretically support files and file systems up to 8 exabytes (263). For a file system to take advantage of VxFS 8-exabyte support, it must be created on a Veritas Volume Manager volume.
Glossary access control list (ACL) The information that identifies specific users or groups and their access privileges for a particular file or directory. agent A process that manages predefined Veritas Cluster Server (VCS) resource types. Agents bring resources online, take resources offline, and monitor resources to report any state changes to VCS. When an agent is started, it obtains configuration information from VCS and periodically monitors the resources and updates VCS with the resource status.
Glossary on the disk before the write returns, but the inode modification times may be lost if the system crashes. defragmentation The process of reorganizing data on disk by making file data blocks physically adjacent to reduce access times. direct extent An extent that is referenced directly by an inode. direct I/O An unbuffered form of I/O that bypasses the kernel’s buffering of data. With direct I/O, the file system transfers data directly between the disk and the user-supplied buffer.
Glossary inode A unique identifier for each file within a file system that contains the data and metadata associated with that file. inode allocation unit A group of consecutive blocks containing inode allocation information for a given fileset. This information is in the form of a resource summary and a free inode map. intent logging A method of recording pending changes to the file system structure. These changes are recorded in a circular intent log file.
Glossary quotas file The quotas commands read and write the external quotas file to get or change usage limits. When quotas are turned on, the quota limits are copied from the external quotas file to the internal quotas file. See quotas, internal quotas file, and external quotas file. reservation An extent attribute used to preallocate space for a file. root disk group A special private disk group that always exists on the system. The root disk group is named rootdg.
Glossary volume A virtual disk which represents an addressable range of disk blocks used by applications such as file systems or databases. volume set A container for multiple different volumes. Each volume can have its own geometry. vxfs The Veritas File System type. Used as a parameter in some commands. VxFS Veritas File System. VxVM Veritas Volume Manager.
Index A access control lists 20 allocation policies 56 default 56 extent 14 extent based 14 multi-volume support 121 B bad block revectoring 31 blkclear 18 blkclear mount option 31 block based architecture 23 block size 14 blockmap for a snapshot file system 72 buffer cache high water mark 40 buffered file systems 17 buffered I/O 63 C cache advisories 64 cio Concurrent I/O 37 closesync 18 cluster mount 22 commands cron 26 fsadm 26 getext 58 setext 58 contiguous reservation 57 converting a data Storage Che
Index E edquota how to set up user quotas 208 encapsulating volumes 115 enhanced data integrity modes 17 ENOENT 215 ENOSPC 94 ENOTDIR 215 expansion 26 extent 14, 55 attributes 55 indirect 15 reorganization 43 extent allocation 14 aligned 56 control 55 fixed size 55 unit state file 261 unit summary file 261 extent size indirect 15 external quotas file 102 F fc_foff 108 fcl_inode_aging_count tunable parameter 49 fcl_inode_aging_size tunable parameter 49 fcl_keeptime tunable parameter 46 fcl_maxalloc tu
Index how to edit the fstab file 196 how to edit the vfstab file 196 how to mount a Storage Checkpoint 85 how to remove a Storage Checkpoint 84 how to reorganize a file system 202 how to resize a file system 200 how to restore a file system 206 how to set up user quotas 208 how to turn off quotas 208 how to turn on quotas 207 how to unmount a Storage Checkpoint 86 how to view quotas 208 HSM agent error message 236–237 hsm_write_prealloc 48 I I/O direct 62 sequential 63 synchronous 63 I/O requests asynchro
Index mounting a Storage Checkpoint 86 mounting a Storage Checkpoint of a cluster file system 86 msgcnt field 213 multi-volume support 114 creating a MVS file system 118 multiple block operations 14 mv 58 N name space preserved by Storage Checkpoints 76 ncheck 112 nodata Storage Checkpoints 86 nodata Storage Checkpoints definition 81 nodatainlog mount option 28, 31 O O_SYNC 29 object location table file 261 P parameters default 44 tunable 45 tuning 44 performance overall 28 snapshot file systems 70
Index Storage Checkpoints (continued) converting a data Storage Checkpoint to a nodata Storage Checkpoint with multiple Storage Checkpoints 89 creating 83 data Storage Checkpoints 80 definition of 75 difference between a data Storage Checkpoint and a nodata Storage Checkpoint 87 freezing and thawing a file system 77 mounting 85 multi-volume support 115 nodata Storage Checkpoints 81, 86 operation failures 94 pseudo device 85 read-only Storage Checkpoints 85 removable Storage Checkpoints 82 removing 84 space
Index VX_THAW 65 VX_UNBUFFERED 63 VxFS storage allocation 27 vxfs_bc_bufhwm tunable parameter 40 vxfs_inotopath 112 vxfsstat 41 vxfsu_fcl_sync 47 vxlsino 112 vxrestore 58, 206 vxtunefs changing extent size 15 vxvset 116 W writable Storage Checkpoints 85 write size 57 write_nstream tunable parameter 46 write_pref_io tunable parameter 45 write_throttle tunable parameter 52