Cost-Effective High-Availability Solutions with HP Instant Capacity on HP-UX
Next, the script activates the additional cores on db2, and the package starts up on db2, with all
cores active. Figure 17 shows this failover state.
Figure 17: GiCAP/Serviceguard failover completed
The package can be configured to fail back automatically to db1 when it is available, or the
cmhaltpkg/cmrunpkg commands can be used to perform the failback manually. The customized
scripts should be written generically to work properly with these operations. Sample scripts for this
example are shown in the “Scripts for implementing failover with Serviceguard” sub-section. The
scripts provide the failover capability but assume that failback is done manually using the
cmhaltpkg/cmrunpkg commands as needed. They also assume that the failing partition is powered
down manually as needed. For related considerations, especially with respect to virtual partitions, see
the “Additional Serviceguard scripting considerations” sub-section.
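As a sketch of the manual failback described above, the following command sequence halts the package on the adoptive node and restarts it on the primary node. The package name pkg1 is illustrative; substitute the actual Serviceguard package name configured for the environment.

```shell
# Manual failback sketch (package name "pkg1" is illustrative).
# Halt the package where it is currently running (the adoptive node, db2):
cmhaltpkg pkg1
# Start the package on its primary node, db1:
cmrunpkg -n db1 pkg1
# Re-enable global switching so the package can fail over automatically again:
cmmodpkg -e pkg1
```

Re-enabling switching with cmmodpkg -e is worth remembering, since cmrunpkg starts the package but does not by itself restore automatic failover.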
Note that in this GiCAP HA scenario, there might be no need to do failback because the systems are
symmetrical. This is unlike the TiCAP HA scenario, where failback is more compelling to avoid
continued depletion of the TiCAP balance.
However, if db1 is going to be offline for more than twelve hours, the iCAP software on db3 assumes
that all cores might become active on db1 (it might be running another OS without iCAP software).
For more detail, see the “Recovery from a failure involving one or more nPartitions” sub-section.