too low (less than 100).
First, pdflush writes out pages that have been dirty for longer than it deems acceptable. This is
controlled by:
dirty_expire_centisecs (default 3000): in hundredths of a second, how long data can sit in the page cache before it
is considered expired and must be written at the next opportunity. Note that this default is very long: a full 30
seconds. That means that under normal circumstances, unless you write enough to trigger the other pdflush
method, Linux will not actually commit anything you write until 30 seconds later. This may be acceptable for
general desktop and computational applications, but for write-heavy workloads the value of this parameter
should be lowered, although not to extremely low levels. Because of the way the dirty page writing mechanism
works, attempting to lower this value to less than a few seconds is unlikely to work well. Constantly trying to
write dirty pages out will just trigger the I/O congestion code more frequently.
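For example, on a write-heavy server the expiry window could be shortened to a few seconds with sysctl. The
value of 500 (5 seconds) below is only an illustrative starting point, not a recommendation from this paper:

    # Check the current expiry window (in hundredths of a second)
    cat /proc/sys/vm/dirty_expire_centisecs
    # Lower it to 5 seconds on the running system
    sysctl -w vm.dirty_expire_centisecs=500
    # Persist the setting across reboots
    echo "vm.dirty_expire_centisecs = 500" >> /etc/sysctl.conf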
Second, pdflush writes out pages when memory is low. This is controlled by:
dirty_background_ratio (default 10): Maximum percentage of active memory that can be filled with dirty
pages before pdflush begins to write them. In terms of the meminfo output, the active memory is
MemFree + Cached – Mapped
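As a rough sketch of how that trigger point can be estimated on a live system, the formula above can be applied
to the /proc/meminfo counters, here assuming the default ratio of 10:

    # Estimate the pdflush background threshold (in kB) from /proc/meminfo,
    # using active memory = MemFree + Cached - Mapped and the default ratio of 10
    awk '$1=="MemFree:"{f=$2} $1=="Cached:"{c=$2} $1=="Mapped:"{m=$2}
         END {printf "pdflush background threshold: %d kB\n", (f+c-m)*10/100}' /proc/meminfo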
This is the primary tunable to adjust downward on systems with a large amount of memory and write-heavy
applications. The usual issue with these applications on Linux is that too much data is buffered in an attempt to
improve efficiency. This is particularly troublesome for operations that require synchronizing the file system
using system calls like fsync. If there is a lot of data in the buffer cache when this call is made, the system can
freeze for quite some time to process the sync.
Another common issue is that because so much data must be written and cached before any physical writes
start, the I/O appears more in bursts than would seem optimal. Long periods are observed where no physical
writes occur as the large page cache is filled, followed by writes at the highest speed the device can achieve
once one of the pdflush triggers has been tripped. If the goal is to reduce the amount of data Linux keeps cached
in memory so that it writes it more consistently to the disk rather than in a batch, decreasing
dirty_background_ratio is most effective.
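As an illustration, decreasing this ratio uses the same sysctl mechanism; the value of 3 below is hypothetical
and should be validated against the actual workload:

    # Start background writeback once dirty pages exceed 3% of active memory
    sysctl -w vm.dirty_background_ratio=3
    echo "vm.dirty_background_ratio = 3" >> /etc/sysctl.conf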
There is another parameter that affects page cache management:
dirty_ratio (default 40): Maximum percentage of total memory that can be filled with dirty pages before user
processes are forced to write dirty buffers themselves during their time slice instead of being allowed to do
more writes.
Note that all processes are blocked for writes when this happens, not just the one that filled the write buffers.
This can cause what is perceived as unfair behavior, where a single process can “hog” all I/O on the system.
Applications that can cope with their writes being blocked altogether might benefit from substantially decreasing
this value.
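When experimenting with these thresholds, a simple way to watch how much dirty data has accumulated, and
whether writeback is keeping up, is to sample /proc/meminfo; the one-second interval below is arbitrary:

    # Sample the amount of dirty page cache data once per second
    watch -n1 'grep -i dirty /proc/meminfo'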
Summary of Recommended Virtual Memory Management Settings
There is no “one size fits all” best set of kernel VM tuning parameters for database servers because each