vxtunefs(1M) vxtunefs(1M)
For an application to do efficient direct I/O or discovered direct I/O, it should issue read requests that are equal to the product of read_nstream and read_pref_io. In general, any multiple or factor of read_nstream multiplied by read_pref_io is a good size for performance. For writing, the same general rule applies to the write_pref_io and write_nstream parameters. When tuning a file system, the best approach is to evaluate the tuning parameters under a real workload.
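For example, assuming a hypothetical mount point /mnt whose current settings are read_pref_io=65536 and read_nstream=4 (illustrative values), a well-sized sequential read request is 65536 * 4 = 262144 bytes. The commands below sketch how the values might be checked with vxtunefs and exercised with dd(1):

      vxtunefs -p /mnt              # display the current tunables
      # If the output shows read_pref_io = 65536 and read_nstream = 4,
      # then 65536 * 4 = 262144 bytes is a good sequential request size:
      dd if=/mnt/datafile of=/dev/null bs=262144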
If an application is doing sequential I/O to large files, it should issue requests larger than discovered_direct_iosz. Such requests are performed as discovered direct I/O requests, which are unbuffered like direct I/O but do not require synchronous inode updates when extending the file. If the file is too large to fit in the cache, using unbuffered I/O avoids losing useful data out of the cache and lowers CPU overhead.
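For example, with discovered_direct_iosz at its default of 256K, a large sequential copy can be driven with requests above that threshold so that each request is handled as discovered direct I/O (the file names and mount point below are illustrative):

      # 1048576-byte (1 MB) requests exceed the 256K threshold and are
      # therefore performed as unbuffered discovered direct I/O
      dd if=/mnt/big_input of=/mnt/big_output bs=1048576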
The VxFS tuneable parameters are:
default_indir_size
On VxFS, files can have up to 10 variable-sized extents stored in the inode. After these extents are used, the file must use indirect extents, which are a fixed size that is set when the file first uses indirect extents. These indirect extents are 8K by default. The file system does not use larger indirect extents because it must fail a write and return ENOSPC if there are no extents available that are the indirect extent size. For file systems with many large files, the 8K indirect extent size is too small. The files that get into indirect extents use many smaller extents instead of a few larger ones. By using this parameter, the default indirect extent size can be increased so that large files in indirect extents use fewer, larger extents.
Be careful using this tuneable. If it is too large, then writes fail when they are unable to allocate extents of the indirect extent size to a file. In general, the fewer and the larger the files on a file system, the larger default_indir_size can be. The value of this parameter is generally a multiple of the read_pref_io parameter.
This tuneable does not apply to disk layout Version 4.
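For example, on a file system that holds mostly large files, the default indirect extent size might be raised with vxtunefs. The value and mount point below are illustrative; as noted above, the value is generally kept a multiple of read_pref_io:

      vxtunefs -o default_indir_size=65536 /mnt
      vxtunefs -p /mnt              # verify the new value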
discovered_direct_iosz
Any file I/O request larger than discovered_direct_iosz is handled as discovered direct I/O. A discovered direct I/O is unbuffered like direct I/O, but it does not require a synchronous commit of the inode when the file is extended or blocks are allocated. For larger I/O requests, the CPU time for copying the data into the buffer cache and the cost of using memory to buffer the I/O become more expensive than the cost of doing the disk I/O. For these I/O requests, using discovered direct I/O is more efficient than regular I/O. The default value of this parameter is 256K.
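For example, to keep moderately large requests in the buffer cache, the threshold could be raised above its 256K default (the value and mount point are illustrative):

      # treat only requests larger than 512K as discovered direct I/O
      vxtunefs -o discovered_direct_iosz=524288 /mnt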
hsm_write_prealloc
For a file managed by a hierarchical storage management (HSM) application, hsm_write_prealloc preallocates disk blocks before data is migrated back into the file system. An HSM application usually migrates the data back through a series of writes to the file, each of which allocates a few blocks. By setting hsm_write_prealloc (hsm_write_prealloc=1), a sufficient number of disk blocks are allocated on the first write to the empty file so that no disk block allocation is required for subsequent writes, which improves write performance during migration.
The hsm_write_prealloc parameter is implemented outside of the DMAPI specification, and its usage has limitations depending on how the space within an HSM-controlled file is managed. It is advisable to use hsm_write_prealloc only when recommended by the HSM application controlling the file system.
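If the HSM vendor recommends it, the parameter can be enabled as sketched below (the mount point is illustrative); a setting intended to persist across mounts is normally recorded in /etc/vx/tunefstab rather than set only on the command line:

      vxtunefs -o hsm_write_prealloc=1 /mnt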
initial_extent_size
Changes the default size of the initial extent.
VxFS determines, based on the first write to a new file, the size of the first extent to allocate to the file. Typically the first extent is the smallest power of 2 that is larger than the size of the first write. If that power of 2 is less than 8K, the first extent allocated is 8K. After the initial extent, the file system increases the size of subsequent extents (see max_seqio_extent_size) with each allocation.
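For example, on a file system where most new files receive large initial writes, the initial extent size might be raised (the value and mount point below are illustrative):

      vxtunefs -o initial_extent_size=32768 /mnt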