
Preload "hot" data and lazy-load the rest
If the data set in the backend database is very large, a better option is to maintain all the "hot" tables fully in SQLFire and lazily load historical data into "historical" tables. With this design, the entire supported SQL syntax can be used on the fully populated "hot" tables, but only primary key-based queries can be issued on the historical tables. Typically, these historical tables are configured for LRU eviction.
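As a rough sketch of this design, the DDL below creates a fully populated "hot" table alongside a "historical" table that evicts least-recently-used rows and faults in misses through a row loader. The table names, column layout, loader class, and init string are hypothetical; the EVICTION BY clause and the SYS.ATTACH_LOADER procedure follow SQLFire's documented syntax.

```sql
-- "Hot" table: fully populated in SQLFire; arbitrary SQL is supported.
CREATE TABLE orders (
  order_id    INT PRIMARY KEY,
  customer_id INT,
  total       DECIMAL(10,2)
) PARTITION BY PRIMARY KEY;

-- "Historical" table: rows are loaded on primary-key lookups and
-- destroyed again once the LRU entry count is exceeded.
CREATE TABLE orders_history (
  order_id    INT PRIMARY KEY,
  customer_id INT,
  total       DECIMAL(10,2)
) PARTITION BY PRIMARY KEY
  EVICTION BY LRUCOUNT 100000 EVICTACTION DESTROY;

-- Attach a custom RowLoader that fetches cache misses from the backend
-- database. com.example.OrderHistoryLoader and its init string are
-- hypothetical placeholders for an application-provided implementation.
CALL SYS.ATTACH_LOADER('APP', 'ORDERS_HISTORY',
                       'com.example.OrderHistoryLoader',
                       'connection-info-for-backend');
```

With this arrangement, a primary-key query against ORDERS_HISTORY that misses in SQLFire invokes the loader, which fetches the row from the backend database and caches it until eviction.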
Integration with Hibernate
SQLFire can also be integrated into object-relational mapping products such as Hibernate to provide an L2 (second-level) cache: entire query result sets can be cached in SQLFire. Updates made through the Hibernate API are synchronously propagated to the database, and the affected query result sets are invalidated in SQLFire.
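A minimal configuration sketch for enabling such a cache in Hibernate is shown below. The property names are standard Hibernate second-level cache settings; the region factory class is a hypothetical placeholder for whatever class the SQLFire integration supplies.

```
# Enable the second-level cache and the query cache in Hibernate.
hibernate.cache.use_second_level_cache=true
hibernate.cache.use_query_cache=true

# Hypothetical region factory backed by SQLFire; substitute the class
# provided by the actual integration.
hibernate.cache.region.factory_class=com.example.SQLFireRegionFactory
```

Entities and queries must still be marked cacheable (for example, via Hibernate's cacheable annotations or query hints) for their results to be stored in the L2 cache.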
Exporting and Transforming an Existing Schema
Application developers can use the techniques described in Exporting and Importing Data with vFabric SQLFire to export an existing relational database schema, transform it for SQLFire, and import the new schema, along with its data, into a SQLFire cluster.
Caching Data with vFabric SQLFire