Designing a High Performance Network File Server

For example, the characteristics of the application used during this customer engagement were:
- Read-intensive workload
- Huge file sizes
- Massively parallel simultaneous access
- Large data transfers
With these application attributes in mind, several vxtunefs(1M) parameters were identified that
dramatically improved application throughput. These parameter settings are listed in Table 2.
Table 2 – vxtunefs Parameters Tuned during the Engagement
vxtunefs Parameter       Description                                        Default   Tuned
discovered_direct_iosz   I/O requests larger than this size are handled     256k      16m
                         as unbuffered (direct) I/O
initial_extent_size      Changes the default initial extent size            1k        32k
max_seqio_extent_size    Changes the maximum size of an extent              2k        32k
max_buf_data_size        Maximum buffer size allocated for file data,       8k        64k
                         either 8k or 64k
read_ahead               Perform additional read operations during          1         0
                         sequential reads
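The tuned values in Table 2 could be applied with vxtunefs(1M) along the following lines. This is a sketch only: the mount point /nfs_data is hypothetical, the byte values assume vxtunefs takes sizes in bytes (or file system blocks for the extent tunables), and exact limits vary by VxFS version.

```shell
# Hypothetical mount point; substitute your VxFS file system.
MNT=/nfs_data

# Treat only I/O requests larger than 16 MB as direct (unbuffered);
# value assumed to be in bytes.
vxtunefs -o discovered_direct_iosz=16777216 $MNT

# Larger initial and maximum extent sizes (assumed to be in
# file system blocks).
vxtunefs -o initial_extent_size=32768 $MNT
vxtunefs -o max_seqio_extent_size=32768 $MNT

# Use the larger 64 KB buffer size for file data.
vxtunefs -o max_buf_data_size=65536 $MNT

# Disable read-ahead.
vxtunefs -o read_ahead=0 $MNT

# Print the current tunables to verify the changes took effect.
vxtunefs -p $MNT
```

Settings applied this way do not survive a remount; to make them persistent they can be recorded in /etc/vx/tunefstab (see tunefstab(4) for the exact file syntax).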
The reasoning behind setting discovered_direct_iosz to a large value, like 16M, is to ensure that
every I/O request generated by this application would be serviced by the buffer cache and not
treated as direct (unbuffered) I/O, thus taking full advantage of the system’s memory resources.
Given the size of the data files used by the application and the amount of contention associated with
having hundreds or thousands of simultaneous threads reading from these files, it was critical to
create the files with the fewest and largest extents possible. This was accomplished by
setting initial_extent_size and max_seqio_extent_size to their maximum values of 32K.
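Beyond the mount-wide tunables, individual data files can be given a fixed extent layout up front with setext(1M). A minimal sketch, assuming a VxFS file system; the path, extent size, and reservation are hypothetical, and both sizes are expressed in file system blocks:

```shell
# Create the file, then fix its extent size and preallocate space so
# it is laid out in as few, large extents as possible.
touch /nfs_data/bigfile
setext -e 32768 -r 4194304 /nfs_data/bigfile

# Confirm the extent attributes that were set.
getext /nfs_data/bigfile
```

Preallocating the reservation at creation time avoids extent fragmentation as the file grows under load.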
This customer’s application issues a huge number of sequential read requests. For this reason, it was
beneficial to set max_buf_data_size to the higher 64K value, allowing NFS to read larger chunks of
data from the file at a time.
Finally, because of the number of simultaneous reading processes all competing for different blocks of
the data files, disabling read_ahead allowed the filesystem to retrieve only the requested data for
each I/O operation. Disabling read_ahead might seem counterintuitive, since the read-ahead
mechanism is designed to improve performance for applications that primarily perform sequential
reads. However, testing confirmed that overall application performance improved significantly
when read_ahead was disabled, because the filesystem was able to complete a higher number of I/O
requests for the reading processes by not pre-fetching blocks for any one specific reader.
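A conclusion like this is best confirmed by measurement rather than assumption. One rough way to A/B the read_ahead setting under concurrent sequential readers is sketched below; the mount point and file name are hypothetical, and repeated runs (and a cold cache between them) are needed for meaningful numbers:

```shell
# Rough A/B comparison of read_ahead with eight concurrent readers.
MNT=/nfs_data
for ra in 1 0; do
    vxtunefs -o read_ahead=$ra $MNT
    echo "read_ahead=$ra"
    time sh -c "for i in 1 2 3 4 5 6 7 8; do
        dd if=$MNT/bigfile of=/dev/null bs=64k &
    done; wait"
done
```

With many readers contending for different regions of the same files, the second (read_ahead=0) pass would be expected to complete faster, matching the behavior observed during the engagement.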
For more information about tuning VxFS filesystems, refer to the “JFS Tuning and Performance”
whitepaper available at http://docs.hp.com.