HP-UX Encrypted Volume and File System Performance and Tuning
IOZone Results: VxFS Tuning
HP-UX system tuning for EVFS performance would intuitively call for both kernel tuning and file system
tuning. However, empirical testing showed that HP-UX kernel tuning has very little influence on
EVFS performance or on clear (unencrypted) I/O performance as measured with IOZone, while VxFS
tuning produced dramatic performance improvement. Tuning for these tests is therefore limited
specifically to VxFS parameters. Notably, no combination of tuning variables improved EVFS while
degrading clear I/O.
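Before changing anything, the current per-file-system tunable values can be listed with vxtunefs; a minimal sketch, assuming a hypothetical mount point /evfs:

    # Print the current VxFS tunable parameters for the file
    # system mounted at /evfs (hypothetical mount point)
    vxtunefs /evfs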
After exhaustive testing, the following VxFS 4.1 tuning variables and values proved most beneficial
for both clear I/O and EVFS throughput; a sketch of applying them follows the list.
• read_ahead = 2 (default = 1)
o (enables enhanced "fancy" read-ahead)
o (detects multi-threaded and non-sequential access patterns)
• max_buf_data_size = 65536 (default = 8192)
o (64 KB; matches the application read request size)
o (matches the array LUN stripe size)
• read_nstream = 20 (default = 1)
o (increases the amount of read-ahead)
• write_nstream = 20 (default = 1)
o (increases the amount of write-behind)
• max_diskq = 104857600 (default = 1048576, i.e., 1 MB)
o (100 MB; matches the largest file in the test)
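These values can be applied at run time with vxtunefs; the mount point /evfs and the device name below are hypothetical, and the values are taken from the list above. An entry in /etc/vx/tunefstab reapplies them at mount time (see vxtunefs(1M) for the exact file format):

    # Apply the tuned values to the mounted file system;
    # run-time settings are lost at unmount unless persisted
    vxtunefs -o read_ahead=2 /evfs
    vxtunefs -o max_buf_data_size=65536 /evfs
    vxtunefs -o read_nstream=20 /evfs
    vxtunefs -o write_nstream=20 /evfs
    vxtunefs -o max_diskq=104857600 /evfs

    # Example /etc/vx/tunefstab entry (device name hypothetical)
    /dev/vx/dsk/evfsdg/evfsvol read_ahead=2,max_buf_data_size=65536,read_nstream=20,write_nstream=20,max_diskq=104857600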
[Figure 11 chart: 64 KB block sequential writes on a 100 MB file; clear and EVFS write throughput (KB/s, scaled) and CPU utilization (%) plotted against IOZone thread counts of 1, 10, 25, 50, and 100; series: Clear CPU, EVFS CPU, Clear write, EVFS write.]
Figure 11 – Tuned Sequential Writes
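The exact IOZone command line is not given here; what follows is a hedged reconstruction of the 10-thread data point, using standard IOZone options and the hypothetical mount point /evfs:

    # Run from the file system under test so the per-thread
    # files land on it (hypothetical mount point)
    cd /evfs

    # Sequential write test in IOZone throughput mode:
    #   -i 0    write/rewrite test
    #   -r 64k  64 KB record size
    #   -s 100m 100 MB file per thread
    #   -t 10   10 parallel threads (repeat with 1, 25, 50, 100)
    #   -+u     report CPU utilization with the throughput figures
    iozone -i 0 -r 64k -s 100m -t 10 -+u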
Comparing these Tuned Sequential Write results with the Default Sequential Write results shows that clear
I/O throughput is tripled and CPU utilization is reduced by a factor of 3 to 4 (except for the curious
10-thread test case). For EVFS, the gains in throughput are significant, and CPU utilization is reduced
as well. The tuning is so effective that at first glance it is possible to misinterpret the data, because
the differential between clear and EVFS throughput has actually grown. This is because the clear I/O
performance improved by a larger factor than EVFS performance, so the absolute gap widens even
though both are faster than before.