HP-UX Encrypted Volume and File System Performance and Tuning
EVFS Direct I/O and Buffer Cache
Some applications use direct I/O and therefore cannot benefit from the system buffer cache. In
many cases, reads do not benefit from the buffer cache anyway, unless the cache has been
intentionally populated with the data ahead of time, so the read results reported earlier are
representative of how EVFS and clear I/O perform when fetching data from the storage device
without significant help from the buffer cache. Write performance, by contrast, is almost always
improved by the buffer cache. With direct I/O, however, the buffer cache is bypassed for both
writes and reads, so the results below show how the EVFS pseudo-driver and the clear I/O file
system stack perform without any assistance from the cache.
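To make the direct I/O path concrete, the sketch below shows one way an application can bypass the buffer cache on a VxFS file: the VX_SETCACHE ioctl with the VX_DIRECT caching advisory, as documented in vxfsio(7). This is an illustrative sketch only, not taken from the test setup; the mount point, file name, transfer size, and loop count are invented placeholders, and header locations can vary between VxFS releases.

    /*
     * Sketch: requesting direct (unbuffered) I/O on a VxFS file via the
     * VX_SETCACHE ioctl (see vxfsio(7)).  Path and sizes are placeholders.
     */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <malloc.h>            /* memalign() */
    #include <sys/ioctl.h>
    #include <sys/fs/vx_ioctl.h>   /* VX_SETCACHE, VX_DIRECT */

    #define IO_SIZE 65536          /* 64k transfers, matching the tests */

    int main(void)
    {
        char *buf;
        int i;
        int fd = open("/mnt/evfs/testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Ask VxFS to bypass the buffer cache on this file descriptor.
         * With VX_DIRECT set, each transfer must be suitably aligned. */
        if (ioctl(fd, VX_SETCACHE, VX_DIRECT) < 0) {
            perror("ioctl(VX_SETCACHE, VX_DIRECT)");
            close(fd);
            return 1;
        }

        buf = memalign(IO_SIZE, IO_SIZE);   /* aligned buffer for direct I/O */
        if (buf == NULL) {
            perror("memalign");
            close(fd);
            return 1;
        }
        memset(buf, 0xAB, IO_SIZE);

        /* Write 1 MB (16 x 64k) straight to the storage device,
         * with no buffer-cache assistance. */
        for (i = 0; i < 16; i++) {
            if (write(fd, buf, IO_SIZE) != IO_SIZE) {
                perror("write");
                break;
            }
        }

        free(buf);
        close(fd);
        return 0;
    }

Direct I/O imposes alignment constraints on the request; transfers that do not meet them are typically handled through a slower synchronous path rather than the direct path, which is why the buffer here is allocated with memalign().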
Note: IOZone and direct I/O
IOZone can be instructed to perform its tests using direct I/O. The following
tests were run with IOZone's direct I/O settings; the test system itself was
not otherwise configured for direct I/O.
The following direct I/O graphs all use “Scale1,” as discussed earlier.
Figure 14: Direct I/O Sequential Writes with VxFS Tuning
[Chart: 64k block sequential write, 100 MB file size. X axis: IOZone threads (1, 10, 25, 50, 100); Y axes: throughput in KB/s (Scale1) and CPU utilization (%). Series: Clear CPU, EVFS CPU, Clear write, EVFS write. CPU-utilization data labels from the original chart: 7%, 7%, 10%, 9%, 10%, 6%, 22%, 32%, 41%, 39%.]