
You can tune the maximum size of the JFS inode cache using the vx_ninode tunable. With JFS 4.1
on HP-UX 11i v2, vx_ninode can be tuned dynamically using kctune.
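For example, you can display the current value of vx_ninode and, on HP-UX 11i v2, change it
dynamically with kctune (the value shown here is only an illustration, not a recommendation):

# kctune vx_ninode
# kctune vx_ninode=50000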
At a minimum, you need one JFS inode cache entry for each file that is open at any given time on
your system. If you are concerned about the amount of memory that JFS can potentially consume,
try tuning vx_ninode down so that the cache takes only about 1-2 percent of overall memory. Most
systems work fine with vx_ninode tuned to 20,000-50,000. However, you need to consider how many
processes are running on the system and how many files each process has open on average. Systems
used as file servers and Web servers may benefit from a large JFS inode cache, and for these systems
the defaults are usually sufficient.
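Before lowering vx_ninode, it can also help to check how many inodes your workload actually keeps
in the cache. As a sketch (option support may vary by JFS version), the vxfsstat command reports
inode cache statistics for a mounted VxFS file system:

# vxfsstat -i /

The inode cache counters it reports include the current and maximum number of inodes in the cache.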
Note that tuning the HFS ninode tunable does not affect the JFS inode cache, as the JFS inode cache
is maintained separately from the HFS inode cache. If your only HFS file system is /stand, then
ninode can usually be tuned to a low value (for example, 400).
Tuning the Maximum Size of the JFS Inode Cache on JFS 4.1 or Later
The introduction of allocating inodes in chunks adds a new factor to the tuning of vx_ninode.
On previous JFS versions, it was safe to tune vx_ninode down to a smaller value, such as 20,000
on some systems. Now the number of free lists becomes a major factor, especially on large memory
systems (>8GB of physical memory) which have a large number of free lists. For example, if you
have 1000 free lists and vx_ninode is tuned to 20,000, then there are only 20 inodes allocated per
free list. Note that 20 is an average. Some free lists will have 11 and some free lists will have 22 as
the inodes are allocated in chunks of 11. Because JFS inodes are rarely moved from one free list to
another, the chance of prematurely running out of inodes is greater when too few inodes are spread
over a large number of free lists. As a general rule of thumb, you should have at least 250 inodes
per free list. If memory pressure is an issue, vx_ninode can be tuned down to as few as 100 inodes
per free list. The number of free lists therefore greatly limits how far vx_ninode can be reduced. So
before tuning vx_ninode down to a smaller value, be sure to check the number of free lists on the
system using the following adb command for JFS 4.1 (available on 11.23 and 11.31):
# echo "vx_nfreelists/D" | adb -o /stand/vmunix /dev/kmem
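For example, suppose (hypothetically) that the command reports 64 free lists. The guidelines above
then translate into a practical floor for vx_ninode:

    64 free lists x 250 inodes per free list = 16,000  (general rule of thumb)
    64 free lists x 100 inodes per free list =  6,400  (only if memory pressure is an issue)

On such a system, tuning vx_ninode much below 16,000 risks prematurely running out of inodes on
individual free lists.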
Tuning Your System to Use a Static JFS Inode Cache
By default, the JFS inode cache is dynamic in size. It grows and shrinks as needed. However, since
the inodes are freed to the kernel memory allocator’s free object chain, the memory may not be
available for other uses (other than same-sized memory allocations). The freed inodes on the object
freelists are still considered “used” system memory. Also, the massive kernel memory allocations
and subsequent frees add overhead to the kernel.
The dynamic nature of the JFS inode cache does not provide much benefit. This may change in the
future as the kernel memory allocator continues to evolve. However, using a statically sized JFS inode
cache has the advantage of keeping inodes in the cache longer and reducing overhead of continued
kernel memory allocations and frees. Instead, the unused inodes are retained on the JFS inode cache
freelist chains. If a new JFS inode needs to be brought in from disk, the oldest inactive inode is reused. Using
a static JFS inode cache also avoids the long kernel memory object free chains for each CPU. Another
benefit of a static JFS inode cache is that the vxfsd daemon will not use as much CPU. On large-memory
systems, vxfsd can use a considerable amount of CPU time while reducing the size of the JFS inode
cache.
Beginning with HP-UX 11i v2 (with JFS 3.5 and above), a new system-wide tunable
vxfs_ifree_timelag was introduced to vary the length of time an inode stays in the cache
before it is considered for removal. Setting vxfs_ifree_timelag to -1 effectively makes the JFS
inode cache a static cache. Setting vxfs_ifree_timelag is especially useful on large memory
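A minimal example of making the cache static on HP-UX 11i v2, assuming vxfs_ifree_timelag is
exposed as a kctune tunable on your system:

# kctune vxfs_ifree_timelag=-1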