Common Misconfigured HP-UX Resources (April 2006)
• Read ahead
If file system access is generally sequential, the buffer cache provides enhanced performance
via read ahead. When the file system detects sequential access to a file, it begins doing
asynchronous reads on subsequent blocks so that the data is already available in the buffer
cache when the application requests it.
For HFS file systems, the read-ahead size for sequential reads is configured via the
hfs_ra_per_disk system tunable. If you are using LVM striping, the effective read-ahead
size is the hfs_ra_per_disk value multiplied by the number of stripes.
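As a sketch (kmtune is the tunable tool on HP-UX 11i v1; 11i v2 and later use kctune with the same tunable name; the value 64 and the stripe count of 4 are illustrative assumptions, and whether the change takes effect immediately or requires a kernel rebuild depends on the release):

```shell
# Query the current HFS read-ahead tunable.
kmtune -q hfs_ra_per_disk

# Stage a new value. Assuming hfs_ra_per_disk of 64 (KB) and 4 LVM
# stripes, the effective read-ahead size is 64 KB * 4 = 256 KB.
kmtune -s hfs_ra_per_disk=64
```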
For JFS 3.1, the initial read ahead starts out small and, as the sequential access continues,
JFS reads ahead more aggressively.
For JFS 3.3 and later, the read-ahead range is the product of the read_pref_io and
read_nstream parameters. When sequential access is first detected, four ranges are read
into the buffer cache (4 * read_pref_io * read_nstream). When an application
finishes reading a range, the next range is prefetched. Read ahead can greatly benefit
sequential file access. However, applications that generally do random I/O may
inadvertently trigger this large read ahead by occasionally reading sequential blocks;
the prefetched data will likely go unused given the overall random nature of the reads.
For JFS 3.3 and later, you can control the amount of read ahead with the vxtunefs
read_nstream and read_pref_io parameters; for JFS 3.5/4.1, you can turn read
ahead off entirely by setting the vxtunefs parameter read_ahead to 0. For JFS 3.1,
the read-ahead size cannot be tuned.
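For example, assuming a VxFS (JFS) file system mounted at /mnt/data (the mount point and the values are illustrative), the read-ahead tunables could be inspected and adjusted with vxtunefs:

```shell
# With no parameters, vxtunefs prints the current tunables for the
# mounted file system.
vxtunefs /mnt/data

# Sequential workload: the initial read ahead is
# 4 * read_pref_io * read_nstream, e.g. 4 * 64 KB * 4 = 1 MB here.
vxtunefs -o read_pref_io=65536 /mnt/data
vxtunefs -o read_nstream=4 /mnt/data

# Mostly random workload on JFS 3.5/4.1: disable read ahead entirely.
vxtunefs -o read_ahead=0 /mnt/data
```

Values set this way last only until the file system is unmounted; persistent settings belong in /etc/vx/tunefstab.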
• Hot blocks
If a file system block is repeatedly accessed by the application (either a single process or
multiple processes), then the block will stay in the buffer cache and can be used without
when the data is needed. The buffer cache is particularly helpful when the application
repeatedly searches a large directory, perhaps to create a temporary file. If the directory
blocks are frequently used, they will likely already be in the buffer cache, so physical
disk access is not required.
• Delayed writes
The buffer cache lets applications perform delayed or asynchronous writes. An application
can write the data to the buffer cache and the system call will return without waiting for the
I/O to complete. The buffers are flushed to disk later by the syncer daemon, by the
sync command, or when the application calls fsync(). Performing delayed writes is
sometimes referred to as write behind.
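The effect can be sketched with ordinary commands (the file name below is arbitrary): dd returns as soon as the data reaches the buffer cache, and sync forces the dirty buffers out.

```shell
# Write 1 MB; the command completes once the data is in the buffer cache,
# not necessarily when it reaches the disk (write behind).
dd if=/dev/zero of=/tmp/writebehind.dat bs=64k count=16 2>/dev/null

# Flush all dirty buffers to disk now rather than waiting for syncer.
sync
```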
Disadvantages of Using the Buffer Cache
While it may seem that every application would benefit from using the buffer cache, using the buffer
cache does have some costs, including:
• Memory
Depending on how it is configured, the buffer cache may be the largest single user of
memory. By default, a system with 8 GB of memory may use as much as 4 GB for buffer
cache pages alone (with dbc_max_pct set to 50). Even with a dynamic buffer cache, a
large buffer cache can contribute to overall memory pressure. Remember that the buffer
cache does not return buffer pages until there is memory pressure; once it is present,
buffer pages are aged and stolen by vhand, at three times the rate of user pages.
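The ceiling works out as physical memory times dbc_max_pct; a minimal sketch with assumed example values (8 GB of RAM and the default dbc_max_pct of 50):

```shell
# Worst-case buffer cache size = physical memory * dbc_max_pct / 100.
mem_mb=8192          # assumed: 8 GB of physical memory
dbc_max_pct=50       # default maximum buffer cache percentage
max_bcache_mb=$(( mem_mb * dbc_max_pct / 100 ))
echo "buffer cache may grow to ${max_bcache_mb} MB"

# On HP-UX, the actual setting can be queried with:
#   kmtune -q dbc_max_pct
```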