HP-UX Encrypted Volume and File System Performance and Tuning
The test results in this chapter illustrate the performance difference between direct I/O (for both EVFS
and clear I/O) and using system memory for buffer caching. Using buffer cache is clearly an effective
way to improve application performance with EVFS. However, system memory is neither free nor
inexhaustible, so it is important to observe how buffer cache sizing affects performance.
In the following graphs, buffer cache has been set to a fixed 5%. All prior tests were run with the
default system buffer cache settings of 5% minimum and 50% maximum. With the defaults, buffer cache
sizing starts at 5% of system memory when the system boots; on the test system this is
64 GB * 0.05 = 3.2 GB.
Note: Buffer Cache Initialization
Figure 4 shows the buffer cache at startup as 3.4 GB rather than 3.2 GB because
system startup activity required some additional buffer cache memory.
The 50% upper limit means that buffer cache can grow to 50% of total system memory; on the test
system this is 64 GB * 0.5 = 32 GB (see Figures 5 and 6). For the following tests, buffer cache is
instead fixed at 3.2 GB and cannot grow larger.
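The whitepaper does not reproduce the exact commands used to pin the buffer cache. As a minimal sketch only, assuming the HP-UX 11i kctune interface and the dbc_min_pct/dbc_max_pct dynamic buffer cache tunables (older releases use kmtune and may require a kernel rebuild and reboot), fixing the cache at 5% of memory could look like this:

    # Display the current dynamic buffer cache limits
    kctune dbc_min_pct
    kctune dbc_max_pct

    # Pin buffer cache at 5% of system memory (3.2 GB on the 64 GB test system)
    kctune dbc_min_pct=5
    kctune dbc_max_pct=5

Setting the minimum and maximum to the same value prevents the cache from growing beyond the 3.2 GB starting size described above.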
[Figure: 64k Block Sequential Write, 100 MB File Size. The chart plots throughput (KB/s) and CPU utilization (%) for clear and EVFS writes across 1, 10, 25, 50, and 100 IOZone threads.]
Figure 21: Sequential Writes VxFS Tuning 5% dbc_max
With buffer cache set low, both clear I/O and EVFS throughput drop noticeably after 50 IOZone
threads due to buffer cache exhaustion.
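The IOZone command lines used for these runs are not listed in the source material. A representative invocation for the 64k sequential write test at 50 threads might look like the following sketch, assuming the standard iozone options for test selection, record size, per-thread file size, and throughput-mode thread count:

    # Sequential write test (-i 0), 64 KB records, 100 MB file per thread, 50 threads
    iozone -i 0 -r 64k -s 100m -t 50

For the direct I/O comparison runs, adding the -I option requests direct I/O (bypassing buffer cache) on file systems that support it; the thread count would be varied across 1, 10, 25, 50, and 100 to reproduce the series shown in the figure.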