for reference ONLY and it is recommended to read and follow the appropriate SAP OSS notes for SAP's
latest recommendation. Whenever possible the SAP OSS note number is given.
More About Hot Standby
A fatal liveCache failure results in a restart attempt of the liveCache instance. This takes place either
locally or, as part of a cluster failover, on a remote node. A key aspect here is that liveCache is an
in-memory database technology. While the bare instance restart itself is quick, reloading the liveCache
in-memory content can take a significant amount of time, depending on the size of the liveCache data spaces.
Modern Supply Chain Management (SCM) business scenarios cannot afford an unplanned loss of liveCache
functionality for the time it takes to reload the liveCache after a failover. The situation is made
worse because a restarted liveCache requires additional runtime to regain its full performance. In many
cases, systems connected to the liveCache cannot continue their information exchange and the
outbound queues fill up. Communication remains stuck until a manual restart of the queues is triggered.
The requirement to enable even the most mission- and time-critical SCM use cases triggered the introduction
of the hot standby liveCache system (hss). Refer to Figure 4-1.
Figure 4-1 Hot Standby liveCache
A hot standby liveCache is a second liveCache instance that runs with the same System ID as the original
master liveCache. During normal operation it waits on the secondary node of the cluster. A failover
of the liveCache cluster package does not require any time-consuming filesystem move operations or instance
restarts; the hot standby is simply notified to promote itself to become the new master. The Serviceguard
cluster software ensures that the primary system is already shut down, preventing a split-brain
situation in which two liveCache systems try to serve the same purpose. A hot standby scenario therefore provides
extremely fast and reliable failover. The delay caused by a failover becomes predictable and tunable, and no
liveCache data inconsistencies can occur during failover.
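
The following Python sketch models the failover ordering described above: the standby is promoted only after
the cluster has confirmed that the former master is down, so a split-brain situation cannot occur. All class
and function names in this sketch are illustrative assumptions; the actual sequencing is enforced internally
by Serviceguard and SGeSAP, not by user code.

# Illustrative sketch only: simplified model of the hot standby failover ordering.
# All names are hypothetical; Serviceguard/SGeSAP perform these steps internally.
from enum import Enum

class Role(Enum):
    MASTER = "master"
    STANDBY = "standby"
    DOWN = "down"

class LiveCacheNode:
    def __init__(self, name: str, role: Role):
        self.name = name
        self.role = role

    def shutdown(self) -> None:
        # Fencing step: the cluster confirms the old master is stopped.
        self.role = Role.DOWN

    def promote(self) -> None:
        # The standby is only notified to take over; no filesystem move or
        # instance restart is needed, so the takeover is fast and predictable.
        self.role = Role.MASTER

def failover(old_master: LiveCacheNode, standby: LiveCacheNode) -> LiveCacheNode:
    """Promote the standby only after the old master is verifiably down."""
    if old_master.role != Role.DOWN:
        old_master.shutdown()      # prevents a split-brain situation
    standby.promote()
    return standby

if __name__ == "__main__":
    primary = LiveCacheNode("node1", Role.MASTER)
    hot_standby = LiveCacheNode("node2", Role.STANDBY)
    new_master = failover(primary, hot_standby)
    print("new master:", new_master.name)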
The hot standby mechanism also includes data replication. The standby maintains its own set of liveCache
data on storage at all times.
SGeSAP provides a runtime library to liveCache that automatically creates a valid local set of
liveCache devspace data via StorageWorks XP Business Copy volume pairs (pvol/svol BCVs) as part of the
standby startup. If required, the master liveCache can remain running during this operation. The copy uses
fast storage replication mechanisms within the storage array hardware to keep the effect on the running
master liveCache minimal. Once the volume pairs are synchronized, they are split immediately. During
normal operation, each of the two liveCache instances operates on its own set of LUNs in SIMPLEX state.
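
The following Python sketch models, at a conceptual level, the Business Copy sequence described above: each
pvol/svol pair holding devspace data is synchronized within the array and then split immediately, so that both
liveCache instances again operate on independent LUNs in SIMPLEX state. The class names and state transitions
are illustrative assumptions; the real operations are performed by the SGeSAP runtime library together with
the XP array.

# Illustrative sketch only: simplified model of the BCV synchronize-and-split
# sequence used when a standby is (re)initialized. Names are hypothetical.
from enum import Enum

class PairState(Enum):
    SIMPLEX = "SIMPLEX"   # volumes are independent (normal operation)
    SYNCING = "SYNCING"   # pvol -> svol replication in progress inside the array
    PAIRED = "PAIRED"     # svol holds a consistent copy of the pvol

class BusinessCopyPair:
    """Models one pvol/svol pair holding liveCache devspace data."""

    def __init__(self, pvol: str, svol: str):
        self.pvol, self.svol = pvol, svol
        self.state = PairState.SIMPLEX

    def create_and_sync(self) -> None:
        # In-array replication; the running master liveCache is barely affected.
        self.state = PairState.SYNCING
        self.state = PairState.PAIRED

    def split(self) -> None:
        # Split immediately after synchronization so each instance again
        # operates on its own independent set of LUNs in SIMPLEX state.
        self.state = PairState.SIMPLEX

def initialize_standby_devspaces(pairs):
    for pair in pairs:
        pair.create_and_sync()
    for pair in pairs:
        pair.split()

if __name__ == "__main__":
    devspaces = [BusinessCopyPair("pvol_data1", "svol_data1"),
                 BusinessCopyPair("pvol_data2", "svol_data2")]
    initialize_standby_devspaces(devspaces)
    print([p.state.value for p in devspaces])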