Implement high-availability solutions with HP Instant Capacity - easily and effectively

Figure 16. HP GiCAP and Serviceguard failover completed, including failover to standby Group Manager
Extra care must be taken when considering failback of a Group Manager. When ap1 comes back online, the package is restarted on ap1. If the package startup script runs icapmanage -Q -n to resume control of the group while ap2 is still the Group Manager, all database changes made while ap2 was in control are lost. To preserve those changes, the icapmanage -Q -n command must first be re-issued on the currently active Group Manager node, so that both Group Managers synchronize their information, and then issued again on the node that is intended to be the active Group Manager (the node where the startup script is running). This is accomplished by invoking icapmanage -Q -n in both the startup and the shutdown scripts. For sample scripts and additional scripting considerations, see the Appendix.
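As a rough sketch, the startup and shutdown logic described above could be wrapped as customer-defined run and halt commands in the Serviceguard package control script. The function names and the ICAPMANAGE variable are hypothetical conveniences (the variable allows the command path to be overridden); the icapmanage -Q -n invocation itself is the one described in the text:

```shell
#!/bin/sh
# Hypothetical sketch of GiCAP Group Manager package run/halt logic.
# Allow the command path to be overridden; /usr/sbin is an assumed location.
ICAPMANAGE=${ICAPMANAGE:-/usr/sbin/icapmanage}

start_group_manager()
{
    # Runs on the node taking over as active Group Manager. Because the
    # halt function below already ran icapmanage -Q -n on the previously
    # active node, the group database is synchronized before this takeover.
    $ICAPMANAGE -Q -n || return 1
    echo "Group Manager active on $(hostname)"
}

halt_group_manager()
{
    # Runs on the currently active Group Manager as the package halts,
    # so that database changes made on this node are preserved and the
    # next active Group Manager starts from synchronized information.
    $ICAPMANAGE -Q -n || return 1
    echo "Group Manager control released on $(hostname)"
}
```

With the command in both the run and halt paths, a failback from ap2 to ap1 first synchronizes on ap2 (halt) and then resumes control on ap1 (start), which is the ordering the text requires.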
One final note applies only to Serviceguard configurations where the primary node and the failover node are on the same system complex. In this situation failover works, but you should always fail back to the primary instead of making the failover node the new primary (do not use the rotating standby model), because rights that have already been seized cannot be seized again from within the same complex.
[Figure: two servers in a Serviceguard cluster. Server 1 contains db3 (2 blades, 0 active cores) and db1 (2 blades, 1 reserved core usage right). Server 2 contains db4 (2 blades, 10 active and 6 iCAP cores) and db2 (2 blades, 16 active cores). ap1 is the active Group Manager; ap2 is the standby Group Manager. Legend: A = active core, R = reserved core usage right; unmarked cores are inactive (iCAP) cores.]