Common Misconfigured HP-UX Resources (April 2006)

You can use the following table to estimate the memory cost of each JFS inode in the inode cache
(measured in bytes). Each item reflects the size as allocated by the kernel memory allocator:
Structures   JFS 3.3   JFS 3.3   JFS 3.5   JFS 3.5   JFS 4.1   JFS 5.1   JFS 4.1
             11.11     11.11     11.11     11.23     11.23     11.23     11.31
             32-bit    64-bit    64-bit
inode        1024      1364      1364      1364      1490      1490      1490
vnode        128       184       184       248       248       248       376
locks        272       384       352       96        96        120       240
Total        1352      1902      1850      1708      1834      1858      2106
Note that the previous table lists a minimal set of memory requirements. There are also other
supporting structures, such as hash headers and free list headers, and some features consume
more memory still. For example, using Fancy Readahead on a file under JFS 3.3 consumes
approximately 1024 additional bytes per inode. Access control lists (ACLs), quotas, and Cluster
File System information can also take up additional space.
For example, consider an HP-UX 11i v3 system using JFS 4.1 that has 2 GB of memory. If you run
an ll or find command on a file system with a large number of files (greater than 128,000), the
inode cache is likely to fill up. With the default JFS inode cache size of 128,000 inodes, the minimum
memory cost would be approximately 256 MB, or about 12 percent of total memory.
However, if you then add more memory so the system has 8 GB instead of 2 GB, the memory cost
of the JFS inode cache increases to approximately 512 MB, because the default inode cache size
scales up with the amount of physical memory. While the total memory cost increases, the
percentage of overall memory used for the JFS inode cache drops to 6.25 percent.
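The arithmetic behind both examples can be checked with a small sketch. The per-inode figure of 2106 bytes is the JFS 4.1 on 11.31 column of the table above; the 256,000-inode cache size used for the 8 GB case is an assumption chosen to be consistent with the 512 MB figure, on the premise that the default cache size grows with physical memory. The helper names are invented for illustration.

```c
/* Per-inode cost for JFS 4.1 on HP-UX 11.31, from the table above:
   inode (1490) + vnode (376) + locks (240) = 2106 bytes. */
enum { PER_INODE_BYTES = 1490 + 376 + 240 };

/* Minimum inode-cache memory cost, in megabytes, for a cache of
   ninode entries (hypothetical helper, not an HP-UX interface). */
static double inode_cache_mb(long ninode) {
    return (double)PER_INODE_BYTES * (double)ninode / (1024.0 * 1024.0);
}

/* That cost as a percentage of mem_gb gigabytes of physical memory. */
static double inode_cache_pct(long ninode, long mem_gb) {
    return inode_cache_mb(ninode) / ((double)mem_gb * 1024.0) * 100.0;
}
```

Plugging in the defaults: 128,000 inodes cost about 257 MB, roughly 12.5 percent of 2 GB; an assumed 256,000 inodes cost about 514 MB, roughly 6.3 percent of 8 GB.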
Effects of the Kernel Memory Allocator
The internals of the kernel memory allocator have changed over time, but the concept remains the
same. When the kernel requests dynamic memory, the allocator obtains an entire page (4096 bytes)
and subdivides it into equal-sized chunks known as "objects". Different pages are subdivided using
different object sizes, and the allocator maintains a free list for each object size.
As an example, consider a memory page divided into four objects, so that each object is 1024
bytes. In some implementations, there is also some per-page and per-object overhead. Each page
may have both used and freed objects associated with it. All of the free objects of a given size are
linked into a list pointed to by an object free list head. There is typically one object free list head
per CPU on the system for each object size. Therefore, CPU 0 can have an object free list for the
32-byte objects, another for the 64-byte objects, and so on; CPU 1 has its own corresponding
object free lists.
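As a rough user-space illustration of this layout, the sketch below carves 4096-byte pages into equal-sized objects and keeps one free-list head per CPU per size class. All names (obj_alloc, add_page, the two-CPU/four-class sizing) are invented for illustration and do not correspond to actual HP-UX kernel interfaces; per-page and per-object overhead is omitted, and pages are never returned to the system.

```c
#include <stdlib.h>

#define NCPU   2
#define NSIZES 4                     /* example object size classes */
#define PAGE_SIZE 4096

static const size_t obj_sizes[NSIZES] = {32, 64, 128, 256};

struct obj {                         /* a freed object doubles as a list node */
    struct obj *next;
};

/* one object free list head per CPU per object size */
static struct obj *free_head[NCPU][NSIZES];

/* smallest size class that can hold a request of n bytes */
static int size_class(size_t n) {
    for (int i = 0; i < NSIZES; i++)
        if (n <= obj_sizes[i])
            return i;
    return -1;                       /* too large for any class */
}

/* Carve a fresh page into equal-sized objects and push them all
   onto the given CPU's free list for that size class. */
static void add_page(int cpu, int cls) {
    char *page = malloc(PAGE_SIZE);
    size_t sz = obj_sizes[cls];
    for (size_t off = 0; off + sz <= PAGE_SIZE; off += sz) {
        struct obj *o = (struct obj *)(page + off);
        o->next = free_head[cpu][cls];
        free_head[cpu][cls] = o;
    }
}

static void *obj_alloc(int cpu, size_t n) {
    int cls = size_class(n);
    if (cls < 0)
        return NULL;
    if (free_head[cpu][cls] == NULL)  /* list empty: carve a new page */
        add_page(cpu, cls);
    struct obj *o = free_head[cpu][cls];
    free_head[cpu][cls] = o->next;    /* pop from the per-CPU list */
    return o;
}

static void obj_free(int cpu, void *p, size_t n) {
    int cls = size_class(n);
    struct obj *o = p;
    o->next = free_head[cpu][cls];    /* push back, LIFO */
    free_head[cpu][cls] = o;
}
```

A freed object is reused before a new one is carved, which is why a just-freed 64-byte object comes back on the next 64-byte allocation from the same CPU's list.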