HP-UX 11i v3 Memory Management Subsystem

filecache size of 50%. Two thirds of the systems tuned the filecache_max parameter downward into the range
of 5% to 10% of physical memory. A small minority of systems sized the filecache to be more than half of memory.
Tuning JFS (VxFS)
The Journaled File System, also known as HP OnlineJFS, JFS, or VxFS, is often a major consumer of system memory,
in particular for its inode cache. The amount of memory used for the JFS inode cache is controlled by the tunable
parameters vx_ninode and vxfs_ifree_timelag.
The JFS inode cache is a holding location for inodes from disk. Each inode in memory is a superset of the inode
from disk. The disk inode stores information for each file such as the file type, permissions, timestamps, size of file,
number of blocks, and extent map. The in-memory inode stores the on-disk inode information along with
information such as pointers to other structures, pointers used to maintain linked lists, and lock primitives used to
manage the inode in memory. The inode does not store the file data itself: that goes in the filecache.
Once an inode is brought into memory, subsequent accesses to it can be satisfied from the cache without
reading the inode from, or writing it back to, disk.
For JFS 3.5 and above, you can use the vxfsstat command to display the current number of inodes in the inode
cache:
# /opt/VRTS/bin/vxfsstat / | grep inodes
93775 inodes current 93874 peak 1331354 maximum
96910 inodes alloced 3135 freed
# /opt/VRTS/bin/vxfsstat -v / | grep curino
vxi_icache_curino 93566 vxi_iaccess 1244468
This shows that the current number of inodes in the cache is 93566, and this count includes both the inodes actively
in use and the inactive inodes still stored in the cache.
For JFS 3.5 and above, you can use vxfsstat to determine the actual number of JFS inodes actively in use:
# /opt/VRTS/bin/vxfsstat -v / | grep inuse
vxi_icache_inuseino 1096 vxi_icache_maxino 1331354
The inode cache holds 93566 inodes, but only 1096 are in use. The remaining inodes are inactive; if they stay
inactive, one of the vxfsd daemon threads starts freeing them after a period of time (controlled by the
vxfs_ifree_timelag tunable).
Each inode consumes approximately 2 KB of memory. Having an inode cache larger than is needed by the system
workload is wasteful of memory resources. In such a case, it would be appropriate to reduce the size of the cache.
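As a rough sanity check, the memory footprint of the cache shown earlier can be estimated from the vxfsstat count and the approximately 2 KB per-inode figure. This is back-of-the-envelope arithmetic only; the exact per-inode size varies by JFS release:

```shell
# Estimate JFS inode cache memory: inode count x ~2 KB each.
# 93566 is the vxi_icache_curino value from the vxfsstat output above.
inodes=93566
kb_per_inode=2
total_kb=$((inodes * kb_per_inode))
echo "inode cache uses ~${total_kb} KB (~$((total_kb / 1024)) MB)"
```

At roughly 182 MB, a cache of this size is only worth reclaiming on systems where memory is tight or where the workload clearly never touches that many files.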
You can tune the maximum size of the JFS inode cache using the vx_ninode tunable. With JFS 4.1, vx_ninode
can be tuned dynamically using kctune.
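For example, on a JFS 4.1 system the current setting can be displayed and lowered with kctune. The value 40000 below is purely illustrative, not a recommendation; as discussed next, the appropriate value depends on how many files your workload keeps open:

```shell
# kctune vx_ninode
# kctune vx_ninode=40000
```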
At a minimum, you must have one JFS inode cache entry for each file that is open at any given time on
your system. If you are concerned about the amount of memory that JFS can potentially consume, tune vx_ninode
down so that the cache takes only around 1% or 2% of overall memory. Most systems work well with
vx_ninode tuned in the range of 20,000 to 50,000; however, consider how many processes are
running on the system and how many files each process keeps open on average. Systems used as file servers
and Web servers may benefit from a large JFS inode cache, and on such systems the defaults are usually sufficient.
The default value of vx_ninode is 0, which means that the system will size the JFS inode cache automatically.
The automatic values are computed based on physical memory size according to a sliding scale shown on the
vx_ninode(5) manpage. For example, an 8 GB system will devote approximately 6% of physical memory to the
inode cache. Systems with more memory will have a larger inode cache, but the fraction of physical memory
decreases to around 1.5% for systems with memory sizes of 128 GB and above.
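To see how the automatic sizing works out in practice, the 8 GB example above can be approximated with simple arithmetic. This is illustrative only; the authoritative scale is the table in the vx_ninode(5) manpage, and the roughly 2 KB per-inode figure is the approximation used earlier:

```shell
# Approximate default inode cache size on an 8 GB system:
# ~6% of physical memory, at ~2 KB per cached inode.
mem_kb=$((8 * 1024 * 1024))        # 8 GB expressed in KB
cache_kb=$((mem_kb * 6 / 100))     # ~6% of memory for the inode cache
ninodes=$((cache_kb / 2))          # ~2 KB per cached inode
echo "~${ninodes} inodes in ~$((cache_kb / 1024)) MB"
```

That works out to roughly a quarter-million inodes, which is consistent with the large "maximum" figure reported by vxfsstat on bigger systems.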