Figure 17: Direct I/O Random Reads with VxFS Tuning
The direct I/O data shows that clear I/O and EVFS throughput are effectively equivalent. Sequential writes drive CPU utilization higher than the other tests, but overall the data shows that the sequential and random methods of committing data to disk and reading it back without caching have very similar throughput and CPU characteristics. These results hold for both clear I/O and EVFS I/O, which confirms the primary finding of the tuning exercise: tuning for either clear I/O or EVFS carries over to the other, so improving EVFS performance will not degrade clear I/O performance, and improving clear I/O performance will not degrade EVFS.
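As an illustration of how direct I/O bypasses the buffer cache on VxFS, the following minimal sketch opens a file and requests the VX_DIRECT cache advisory. This example is not taken from the test harness; the header path and error handling are assumptions that may vary by VxFS release, so consult the vxfsio(7) manual page for the specifics on a given system.

    /*
     * Minimal sketch (not from the test harness): request VxFS direct I/O
     * on one file descriptor via the VX_SETCACHE/VX_DIRECT cache advisory.
     * Header location and advisory availability may vary by VxFS release.
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <sys/fs/vx_ioctl.h>    /* VX_SETCACHE, VX_DIRECT */

    int main(int argc, char *argv[])
    {
        int fd;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <file-on-vxfs>\n", argv[0]);
            return 1;
        }

        fd = open(argv[1], O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Bypass the buffer cache for this descriptor; subsequent reads
         * and writes must be properly aligned and sized for direct I/O. */
        if (ioctl(fd, VX_SETCACHE, VX_DIRECT) < 0) {
            perror("ioctl(VX_SETCACHE, VX_DIRECT)");
            close(fd);
            return 1;
        }

        /* ... issue 64 KB reads and writes here, as the benchmark does ... */

        close(fd);
        return 0;
    }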
Note how closely the direct I/O throughput and CPU data for clear and EVFS sequential and random reads resemble the same data for default reads (which use buffer cache). Up to this point in the testing, all read results have been plotted on Scale1, while most write results appear on Scale20 because of the effect of buffer cache. This indicates that buffer cache by itself does little to enhance read performance. However, many read-intensive applications pre-fetch the bulk of their read data and store it in buffer cache, and data encrypted with EVFS responds favorably to this application scenario.
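A minimal sketch of that pre-fetch pattern follows. The file name and the 64 KB block size are illustrative assumptions, not details of any application from the tests: the program simply reads the file sequentially once so that subsequent random reads are served from the buffer cache rather than from disk, which is the scenario in which EVFS data responds favorably.

    /*
     * Minimal sketch (illustrative only): warm the HP-UX buffer cache by
     * reading a data file sequentially once before issuing random reads.
     * The path "datafile" and the 64 KB block size are assumptions chosen
     * to match the record size used in the benchmark runs.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define PREFETCH_BLOCK (64 * 1024)

    static int prefetch_file(const char *path)
    {
        char *buf;
        int fd;
        ssize_t n;

        buf = malloc(PREFETCH_BLOCK);
        if (buf == NULL)
            return -1;

        fd = open(path, O_RDONLY);
        if (fd < 0) {
            free(buf);
            return -1;
        }

        /* One sequential pass pulls the file into the buffer cache, so the
         * random reads that follow are satisfied from memory. */
        while ((n = read(fd, buf, PREFETCH_BLOCK)) > 0)
            ;

        close(fd);
        free(buf);
        return (n < 0) ? -1 : 0;
    }

    int main(void)
    {
        if (prefetch_file("datafile") != 0) {
            perror("prefetch_file");
            return 1;
        }
        /* ... random reads against "datafile" now hit the buffer cache ... */
        return 0;
    }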
In the following three graphs, the performance progression of reading EVFS data is illustrated on Scale30. The default-tuning data is nearly invisible at that scale, but Scale30 is required to show how the read operations benefit from buffer cache utilization and VxFS tuning.
[Chart: 64k Block Random Read - 100mb File Size. X-axis: IOZone Threads (1, 10, 25, 50, 100); Y-axes: Throughput KBs (Scale1) and CPU Utilization %. Series: Clear CPU, EVFS CPU, Clear Random Read, EVFS Random Read.]