MemFS v2 – A Memory-based File System on HP-UX 11i v2

The benchmark was driven with transaction loads of RUN_LOAD = {1000, 10000, 25000, 75000, 125000, 300000, 500000}.
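The paper does not reproduce the full Postmark script, but a run at one of these load points can be sketched as a Postmark session along the following lines; the mount point and the specific transaction count are assumptions for illustration:

```shell
# Hypothetical Postmark session for the 10,000-file / 100-subdirectory
# workload; /mnt/memfs and the transaction count are assumptions.
postmark <<'EOF'
set location /mnt/memfs
set number 10000
set subdirectories 100
set transactions 25000
run
quit
EOF
```

Each filesystem under test would be mounted in turn and the same script replayed for every value in RUN_LOAD.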
The filesystems under test were:
- tmpfs on Red Hat Linux (kernel 2.4.21-4)
- MemFS on HP-UX 11i v2, size 2 GB; dbc_max_pct was set to 50% (4 GB)
- RAMdisk on HP-UX 11i v2, with VxFS 3.5 as the base file system, mounted with the delaylog option
- VxFS 3.5 on HP-UX 11i v2, mounted with the most aggressive caching options: tmplog, mincache=tmpcache, convosync=delay
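The HP-UX side of these configurations could be set up with commands roughly like the following; the device paths, mount points, and the MemFS mount syntax are assumptions, not taken from the paper:

```shell
# Raise the dynamic buffer cache ceiling to 50% of RAM (4 GB on this
# system); kctune is the HP-UX 11i v2 kernel tuning interface.
kctune dbc_max_pct=50

# VxFS 3.5 with the aggressive caching options listed above
# (the logical volume path is hypothetical).
mount -F vxfs -o tmplog,mincache=tmpcache,convosync=delay \
    /dev/vg00/lvol8 /mnt/vxfs

# VxFS with delaylog on a RAM disk (the RAM disk device is hypothetical).
mount -F vxfs -o delaylog /dev/ram0 /mnt/ramdisk

# MemFS of size 2 GB (mount syntax and size option are assumptions).
mount -F memfs -o size=2g memfs /mnt/memfs
```

All of these require root privileges, and the exact option spellings may differ across 11i v2 patch levels.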
The following graph shows Postmark results operating on 10,000 simultaneous files spread across
100 subdirectories. Linux's tmpfs shows the best performance, which can be attributed to its
lightweight filesystem design and the large 16 KB system page size. However, as the transaction
load increases, tmpfs begins to swap and its performance drops.
MemFS, even while swapping, performs better than VxFS and RAMdisk(VxFS).
Also shown is VxFS performance when the buffer cache is filled with MemFS buffers, which causes
only marginal degradation.
Figure 2: Postmark benchmark results for 10,000 simultaneous files and 100 subdirectories
[Chart "Postmark 10000_100": transactions per second (0–25,000) versus transaction load, with series MemFS, Linux/tmpfs, VxFS_MemFSbuffercache, RAMdisk/VxFS, VxFS, and MemFS_swapping]
The following graph shows Postmark results operating on 10,000 simultaneous files, all created
under a single directory. In this configuration MemFS does not perform well. This can be attributed to linear