HP-UX OSRA for Web Services 2.5 Blueprint and Configuration Guide

Figure 4-1 Architecture of MySQL High Availability Using HP Serviceguard
In an HP Serviceguard environment, MySQL must have the same configuration on all cluster
nodes that are configured to run the package. The node currently running the package is called
the primary node. All other nodes are called standby nodes. In the event of a failure on the
primary node, the package fails over to a standby node and the database continues to function.
To ensure that the database can fail over properly, all data must be stored on shared storage,
and this storage must be accessible to all nodes configured to run the package.
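As a hedged illustration of this requirement, a shared LVM volume group for the MySQL data directory might be created on the primary node and made importable on every standby node along the following lines. The device file, volume group name, and sizes here are placeholders for this example, not values from this guide:

```shell
# On the primary node: create the volume group and file system for MySQL data.
# /dev/dsk/c4t0d0 and vgmysql are example names -- substitute your own.
pvcreate -f /dev/rdsk/c4t0d0
mkdir /dev/vgmysql
mknod /dev/vgmysql/group c 64 0x040000
vgcreate vgmysql /dev/dsk/c4t0d0
lvcreate -L 4096 -n lvmysql vgmysql
newfs -F vxfs /dev/vgmysql/rlvmysql

# Deactivate the volume group and create a map file for the standby nodes.
vgchange -a n vgmysql
vgexport -p -s -m /tmp/vgmysql.map vgmysql

# Copy /tmp/vgmysql.map to each standby node, then on each standby node:
#   mkdir /dev/vgmysql
#   mknod /dev/vgmysql/group c 64 0x040000
#   vgimport -s -m /tmp/vgmysql.map vgmysql
```

With the volume group imported on every node configured to run the package, Serviceguard can activate it exclusively on whichever node currently owns the package.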
When the package fails over from one node to another, the following actions occur:
On the primary node:
— The package is halted on the node where it is currently running. As a result, all package
resources are halted.
— The relocatable IP address is removed from this node.
— The file systems are unmounted, and all volume groups assigned to this package are
deactivated.
— If the storage or network fails and the package cannot be halted cleanly, the node itself
is halted so that the standby node can take over safely.
On the standby node:
— The volume groups are activated and the file systems are mounted.
— The relocatable IP address is assigned to the new node.
— All package resources are started, and the database becomes available.
— Clients reconnect through the same relocatable IP address.
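The failover steps above are driven by the package configuration. As a sketch only (variable names follow the legacy package control script format; all volume group, path, and address values are placeholders, and details vary by Serviceguard version), the MySQL package might be described like this:

```shell
# Excerpt from an example legacy package control script (e.g. mysql.cntl).
# All values below are illustrative placeholders.

VG[0]="vgmysql"                        # shared volume group to activate

LV[0]="/dev/vgmysql/lvmysql"           # logical volume holding the MySQL data
FS[0]="/mysql/data"                    # mount point used by mysqld
FS_MOUNT_OPT[0]="-o rw"
FS_TYPE[0]="vxfs"

IP[0]="192.10.25.12"                   # relocatable IP address clients connect to
SUBNET[0]="192.10.25.0"

SERVICE_NAME[0]="mysql_svc"            # service Serviceguard monitors
SERVICE_CMD[0]="/usr/local/mysql/bin/mysqld_safe --datadir=/mysql/data"
SERVICE_RESTART[0]=""
```

On failover, Serviceguard works through these entries in order: volume group activation, file system mounts, IP address assignment, and finally the monitored service command that starts the database.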
Preparing for Installation
Before installing HP Serviceguard, verify the following:
System Environment
Two or more HP-UX systems (for example, HP PA-RISC or Integrity servers) are set up, all
connected to the appropriate external hardware for shared storage. These systems form the
MySQL Serviceguard cluster used in the following test example.
An additional server is set up to test the MySQL Serviceguard cluster. This server has the
MySQL database installed, as well as the MySQL benchmark suite, also called sql-bench.
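To exercise the cluster from this test server, the sql-bench driver script can be pointed at the package's relocatable IP address. A hedged example follows; the IP address, credentials, and installation path are assumptions for illustration:

```shell
# Run the MySQL benchmark suite against the clustered database.
# 192.10.25.12 is the example relocatable IP address of the MySQL package.
cd /usr/local/mysql/sql-bench
perl run-all-tests --server=mysql --host=192.10.25.12 \
     --user=test --password=secret --log
```

Because the client connects through the relocatable IP address, the benchmark can also be used to observe how in-flight connections behave when the package is failed over between nodes.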