Reduce Overhead of Eviction to Disk
Reduce both the memory overhead of tables and the performance overhead of eviction.
Reduce the memory overhead of tables. Some applications may benefit from evicting rows to disk in order to
reduce heap space. However, enabling eviction also increases the per-row overhead that SQLFire requires to
perform LRU eviction for the table. As a general rule, table eviction is only helpful for conserving memory if
the non-primary key columns in a table are large: 100 bytes or more.
Reduce the performance overhead of eviction to disk:
• Scale the number of data stores and/or increase the heap size to keep more rows in memory.
• Do not configure eviction on small, frequently used tables.
• Only configure eviction on large tables that contain data that is used infrequently.
• Use the DESTROY action rather than overflowing to disk to clean out data that is no longer needed.
• Tune disk I/O. See Tuning Disk I/O on page 293.
See How LRU Eviction Works on page 179.
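As an illustrative sketch of the first recommendation above, each data store can be started with a larger heap so that more rows stay in memory before eviction begins (directory names, ports, and heap sizes below are placeholders, not recommendations):

```shell
# Start two SQLFire data stores with larger heaps so more rows stay in
# memory; directory names, ports, and heap sizes are illustrative only.
sqlf server start -dir=server1 -client-port=1527 -initial-heap=8g -max-heap=8g
sqlf server start -dir=server2 -client-port=1528 -initial-heap=8g -max-heap=8g
```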
Minimize Update Latency for Replicated Tables
Keep in mind that when an application updates a replicated table, SQLFire performs the update on each member
that hosts a replica of the table. The update latency for replicated tables increases with the number of SQLFire
members that host the table.
SQLFire clients experience the update latency at different times. Peer clients incur the latency cost at execution
time, while thin clients incur the latency cost only at commit time.
When possible, update highly replicated tables outside of larger transactions.
Tune FabricServers
These JVM-related recommendations pertain to the 64-bit version 1.6 JVM running on Linux.
• Use JDK 1.6.0_26 or higher. This provides significantly better performance than earlier versions for some
SQLFire applications.
• Use -server for JVMs that will start servers with the FabricServer API.
• Set initial and maximum heap to the same value. This allocates a contiguous memory segment. When using
the sqlf script to start the server, set -initial-heap equal to -max-heap. When using the FabricServer
API, set -Xms equal to -Xmx.
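For example, both start paths can pin the heap to a single size (the 4 GB value and the application class name below are placeholders, not recommendations):

```shell
# sqlf script: equal initial and maximum heap for a contiguous segment
sqlf server start -dir=server1 -initial-heap=4g -max-heap=4g

# Direct JVM launch (e.g. an application that uses the FabricServer API);
# com.example.MyFabricServerApp is a hypothetical application class.
java -server -Xms4g -Xmx4g com.example.MyFabricServerApp
```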
• Set -XX:+UseConcMarkSweepGC to use the concurrent low-pause garbage collector and the parallel young
generation collector. The low-pause collector sacrifices some throughput in order to minimize stop-the-world
GC pauses for tenured collections. It does require more headroom in the heap, so increase the heap size to
compensate. The sqlf script starts fabric servers with this collector by default.
• Set -XX:+DisableExplicitGC to disable explicitly requested garbage collection. This causes calls to
System.gc() to be ignored, avoiding the associated long latencies.
• Set -XX:CMSInitiatingOccupancyFraction=50 or even lower for high-throughput, latency-sensitive
applications that generate large amounts of garbage in the tenured generation, such as those that have high
rates of updates, deletes, or evictions. This setting tells the concurrent collector to start a collection when tenured
occupancy is at the given percentage. With the default setting, a high rate of tenured garbage creation can
outpace the collector and result in OutOfMemoryError. Too low a setting can reduce throughput by triggering
unnecessary collections, so test to determine the best setting. The sqlf script sets this for fabric servers by
default.
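The sqlf script applies this setting by default; to experiment with a lower value, a flag can be forwarded to the server JVM through sqlf's -J pass-through option, as in this sketch (the value 40 is illustrative):

```shell
# Forward an explicit CMS initiating-occupancy setting to the server JVM
# via sqlf's -J pass-through option; 40 is an illustrative value to test.
sqlf server start -dir=server1 -J-XX:CMSInitiatingOccupancyFraction=40
```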
• If short-lived objects are being promoted to the tenured generation, set -XX:NewSize=<n>, where n is large
enough to prevent this from occurring. Increasing this value tends to increase throughput but also latency, so
test to find the optimum value, usually somewhere between 64m and 1024m.
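Taken together, the JVM settings above might be combined in a direct server launch such as the following sketch (heap sizes, NewSize, and the application class are placeholders to tune and substitute for your own environment):

```shell
# Hypothetical direct launch of a FabricServer JVM with the GC settings
# discussed above; sizes are starting points to tune, not recommendations.
java -server -Xms4g -Xmx4g \
     -XX:+UseConcMarkSweepGC \
     -XX:+DisableExplicitGC \
     -XX:CMSInitiatingOccupancyFraction=50 \
     -XX:NewSize=256m \
     com.example.MyFabricServerApp
```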