If short-lived objects are being promoted to the tenured generation, set -XX:NewSize=<n>, where n is large
enough to prevent this from occurring. Increasing this value tends to increase throughput but also latency, so
test to find the optimum value, usually somewhere between 64m and 1024m.
Tune heap settings so that occupancy stays below 70%. This helps reduce latency.
The parallel compactor in JDK 6 is not available with the concurrent low-pause collector. Churn in the tenured
generation causes fragmentation that can eventually cause stop-the-world compactions. You can postpone the
issue by using the largest heap that fits into memory, after allowing for the operating system.
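For illustration only, the young-generation and heap settings from the preceding items might be passed to a data
store at startup along these lines, assuming the sqlf launcher forwards -J-prefixed arguments to the JVM (the
directory, the sizes, and the choice of the concurrent low-pause collector are placeholders to be tuned for your
workload):

    sqlf server start -dir=/var/sqlfire/server1 \
      -J-Xms8g -J-Xmx8g \
      -J-XX:NewSize=256m -J-XX:MaxNewSize=256m \
      -J-XX:+UseConcMarkSweepGC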
If heap space is an issue, -XX:+UseCompressedOops is turned on by default if you are running with a
64-bit JDK, version 1.6.0_24 or higher. This can reduce heap usage by up to 40% by reducing managed pointers for
certain objects to 32 bits. However, this can lower throughput and increase latency. It also limits the application
to about four billion objects.
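If heap footprint matters less to your workload than throughput and latency, compressed pointers can be disabled
explicitly; a sketch, again assuming the sqlf launcher forwards -J-prefixed JVM options:

    sqlf server start -dir=/var/sqlfire/server1 -J-XX:-UseCompressedOops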
Set conserve-sockets=false in the boot properties. This causes each server to use dedicated threads
to send to and receive from each of its peers. This uses more system resources, but can improve performance
by removing socket contention between threads and allowing SQLFire to optimize certain operations. If your
application has a very large number of servers and/or peer clients, test to see which setting gives the best results.
Peer clients that are read-heavy with very high throughput can benefit from conserving sockets, while
conserve-sockets remains false on the data stores.
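A minimal sketch of the setting, assuming each member reads its boot properties from a sqlfire.properties file in
its working directory:

    # sqlfire.properties on each data store
    conserve-sockets=false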
Set enable-time-statistics=false in the boot properties and set enable-timestats=false
in all connections (including client connections) to turn off time statistics. This eliminates a large number of
calls to gettimeofday.
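A sketch of both settings; the property names come from the item above, while the host, port, and the use of a
sqlfire.properties file and a URL attribute for the connection property are assumptions:

    # sqlfire.properties on each member
    enable-time-statistics=false

    # thin-client connection URL with per-connection time statistics disabled
    jdbc:sqlfire://serverhost:1527/;enable-timestats=false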
For applications that will always use a single server, you can make it a "loner" by setting mcast-port=0
and configuring no locators. Knowing that there will be no distribution allows SQLFire to make a few additional
optimizations. Thin clients must then connect directly to the server. Also, peer clients cannot be used in this
case.
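For example, a stand-alone server and a direct thin-client connection might look like this (the directory, host,
and port are illustrative, and -client-port is assumed to set the thin-client listener port):

    # start a single "loner" server with no locators
    sqlf server start -dir=/var/sqlfire/loner -mcast-port=0 -client-port=1527

    # thin clients connect directly to that server
    jdbc:sqlfire://serverhost:1527/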
Tuning Disk I/O
Most SQLFire applications that access the hard disk work well with synchronous disk stores and reasonably fast
disks. If disk I/O becomes a bottleneck, you can configure the system to minimize seek time.
The degree of tuning depends on your application's data access patterns:
Place the disk store directories used by a SQLFire server on local drives or on a high-performance Storage
Area Network (SAN).
If you have limited disk space, use smaller oplog sizes for disk stores to avoid expensive file deletes. A value
of 512 MB for MAXLOGSIZE is a good starting point.
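As an illustration, a disk store placed on a dedicated local drive with 512 MB oplogs might be declared like this
(the store name and directory are placeholders):

    CREATE DISKSTORE STORE1
      MAXLOGSIZE 512
      ('/data1/sqlfire/store1');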
If space allows, turn off automatic oplog compaction in disk stores by setting AUTOCOMPACT to false. This
prevents latency-sensitive writes to new oplogs from competing with seeks and writes related to compaction.
Oplogs are still removed when they no longer contain current data. When compaction is unavoidable, set
ALLOWFORCECOMPACTION to true and use sqlf to do manual compaction at a time when system activity
is low or the system is offline.
Run backups with sqlf when system activity is low or the system is offline.
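A sketch of these compaction settings and the corresponding manual operations. The CREATE DISKSTORE attributes
are those named above; the exact sqlf subcommand names, argument order, and locator address are assumptions and
should be checked against the sqlf utility reference:

    CREATE DISKSTORE STORE2
      AUTOCOMPACT false
      ALLOWFORCECOMPACTION true
      ('/data1/sqlfire/store2');

    # during a quiet period or while the system is offline
    sqlf compact-all-disk-stores -locators=localhost[10101]
    sqlf backup /backups/sqlfire -locators=localhost[10101]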
ASYNCHRONOUS disk stores can give some latency benefit for applications with infrequent writes that flush
using a TIMEINTERVAL. They can also reduce disk I/O for applications that do very frequent writes to the
same data, because these are conflated in the buffer. But asynchronous disk stores offer insignificant benefit
in most cases and use heap space that can be better utilized elsewhere. Instead, use SYNCHRONOUS disk
stores and let the operating system handle buffering. This simplifies application design and performs well with
modern operating systems and disk storage.
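For contrast, a sketch of both styles, assuming the PERSISTENT clause accepts a disk store name followed by
SYNCHRONOUS or ASYNCHRONOUS (the table definitions and the TIMEINTERVAL value are illustrative):

    -- recommended: synchronous persistence, letting the OS buffer writes
    CREATE TABLE orders (
      order_id INT PRIMARY KEY,
      amount   DECIMAL(10,2)
    ) PERSISTENT 'STORE1' SYNCHRONOUS;

    -- asynchronous alternative: writes queue in memory and flush on a time interval
    CREATE DISKSTORE ASYNC_STORE
      TIMEINTERVAL 1000
      ('/data2/sqlfire/async_store');
    CREATE TABLE ticks (
      symbol VARCHAR(10),
      price  DECIMAL(10,2)
    ) PERSISTENT 'ASYNC_STORE' ASYNCHRONOUS;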
Avoid configuring tables for expiration with overflow to disk, especially when expired data is frequently read.
This increases disk seek times and latency, in addition to fragmenting the heap.
Use different disk stores for persistence and overow, and map them to different physical disks. This practice
isolates the fast sequential writes used in persistence from the higher latency random access caused by faulting
data from disk into memory.
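A sketch of the separation, assuming /data1 and /data2 are on different physical disks; the two stores would then
be referenced from the PERSISTENT clause of persistent tables and from the overflow configuration of evicting
tables, respectively:

    -- store for persistent tables: mostly fast sequential writes
    CREATE DISKSTORE PERSIST_STORE ('/data1/sqlfire/persist');

    -- store for overflow tables: random reads when rows fault back into memory
    CREATE DISKSTORE OVERFLOW_STORE ('/data2/sqlfire/overflow');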