Example
sqlf validate-disk-store ds1 hostB/bupDirectory
/partitioned_table entryCount=6 bucketCount=10
Disk store contains 1 compactable records.
Total number of table entries in this disk store is: 6
Compacting Disk Store Log Files
You can configure automatic compaction for an operation log based on percentage of garbage content. You can
also request compaction manually for online and offline disk stores.
The following topics deal with compaction:
How Compaction Works
Online Compaction Diagram
Run Online Compaction
Run Offline Compaction
Performance Benefits of Manual Compaction
Directory Size Limits
Example Compaction Run
How Compaction Works
When a DML operation is added to a disk store, any preexisting operation record for the same record becomes
obsolete, and SQLFire marks it as garbage. For example, when you update a record, the update operation is
added to the store. If you delete the record later, the delete operation is added and the update operation becomes
garbage. SQLFire does not remove garbage records as it goes, but it tracks the percentage of garbage in each
operation log, and provides mechanisms for removing garbage to compact your log files.
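To make the example concrete, the statements below, run against a hypothetical orders table whose data is
persisted to a disk store, each append an operation record to the current operation log; the later DELETE turns
the earlier UPDATE record into garbage:

UPDATE orders SET status = 'SHIPPED' WHERE order_id = 42;
-- the UPDATE operation record is written to the current operation log
DELETE FROM orders WHERE order_id = 42;
-- the DELETE record is written; the UPDATE record for the same row is now garbage
-- and is reclaimed only when that operation log is compacted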
SQLFire compacts an old operation log by copying all non-garbage records into the current log and discarding
the old files. As with logging, oplogs are rolled as needed during compaction to stay within the MAXLOGSIZE
setting.
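The relevant attributes are set when the disk store is created. The following is a minimal sketch only, assuming
the CREATE DISKSTORE attribute names used elsewhere in this guide (MAXLOGSIZE, AUTOCOMPACT,
COMPACTIONTHRESHOLD, ALLOWFORCECOMPACTION); the store name, directory, and values are
placeholders. It covers both the log rolling described above and the automatic compaction described next:

CREATE DISKSTORE store1
  MAXLOGSIZE 128              -- roll to a new operation log at roughly 128 MB
  AUTOCOMPACT true            -- compact closed operation logs automatically
  COMPACTIONTHRESHOLD 60      -- ...once about 60 percent of a log's content is garbage
  ALLOWFORCECOMPACTION true   -- allow manually requested online compaction
  ('/data/store1');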
You can configure the system to automatically compact any closed operation log when its garbage content
reaches a certain percentage. You can also manually request compaction for online and offline disk stores. For
the online disk store, the current operation log is not available for compaction, no matter how much garbage it
contains.
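As a sketch of the two manual paths (the exact options, including how the online command connects to the
running distributed system, are covered in Run Online Compaction and Run Offline Compaction; the store name
ds1 and the directories are placeholders):

To request online compaction of every disk store in a running system:
sqlf compact-all-disk-stores

To compact a named offline disk store, listing the directories that hold its files:
sqlf compact-disk-store ds1 /first/backupDirectory /second/backupDirectory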
Online Compaction Diagram