More About Hot Standby
A fatal liveCache failure results in a restart attempt of the liveCache
instance. This takes place either locally or, as part of a cluster
failover, on a remote node. A key aspect here is that liveCache is an
in-memory database technology. While the bare instance restart itself
is quick, reloading the liveCache in-memory content can take a
significant amount of time, depending on the size of the liveCache data
spaces. Modern Supply Chain Management (SCM) business scenarios
cannot afford an unplanned loss of liveCache functionality for the time
it takes to reload the liveCache after a failover. The situation is made
worse by the fact that a restarted liveCache requires additional runtime
to regain its full performance. In many cases, systems that are
connected to the liveCache cannot continue with the information exchange
and their outbound queues run full. The communication remains stuck
until a manual restart of the queues is triggered.
The requirement to support even the most mission- and time-critical
SCM use cases triggered the introduction of the hot standby liveCache
system (hss). Refer to Figure 4-1.
Figure 4-1 Hot Standby liveCache
[Figure 4-1 shows a master liveCache and a hot standby liveCache, each
with its own non-shared master data volumes and a cluster-shared saplog
volume; the master accesses the saplog read-write as primary, while the
hot standby reads it read-only. On each node, the SGeSAP library and
storage-dependent hss scripts trigger storage-level replication and log
synchronization. The liveCache failover package with health monitoring
holds the configured "master token" and defines the package failover
path. If required, e.g. after a long downtime of the standby, the
standby data is rebuilt via a hardware copy mechanism.]
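As an illustration only, the roles shown in Figure 4-1 can be checked
from the command line with the MaxDB/liveCache administration tool
dbmcli. The instance name LC1, the node names, and the control user
credentials below are hypothetical placeholders and are not taken from
this manual:

    # query the master instance; it should report an ONLINE state
    dbmcli -n node1 -d LC1 -u control,secret db_state
    # query the hot standby instance; it should report a STANDBY state
    dbmcli -n node2 -d LC1 -u control,secret db_state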