Using Serviceguard Extension for RAC Version A.11.20 - (August 2011)
Monitoring Hardware
Good standard practice in managing a high-availability system includes careful fault monitoring,
so that failures can be prevented where possible, or at least reacted to swiftly when they occur.
The following components should be monitored for errors or warnings of all kinds:
• Disks
• CPUs
• Memory
• LAN cards
• Power sources
• All cables
• Disk interface cards
Some monitoring can be done through simple physical inspection, but for the most comprehensive
monitoring, examine the system log file (/var/adm/syslog/syslog.log) periodically for
reports on all configured HA devices. The presence of errors relating to a device indicates the
need for maintenance.
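As an aid to this periodic inspection, a short script can filter the log for entries that look like device errors or warnings. The following is a minimal sketch, assuming a POSIX shell; the keyword pattern is illustrative only and should be extended for the specific messages your configured HA devices produce:

```shell
#!/bin/sh
# scan_log: report lines in a log file that look like device errors
# or warnings. The keyword pattern below is illustrative, not an
# exhaustive list of HA device messages.
scan_log() {
    grep -i -E 'error|warning|fail' "$1"
}

# Typical invocation on HP-UX:
# scan_log /var/adm/syslog/syslog.log
```

Scheduling such a script from cron gives a simple, regular check between full diagnostic runs.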
Using Event Monitoring Service
Event Monitoring Service (EMS) allows you to configure monitors for specific devices and system
resources. You can direct alerts to an administrative workstation, where operators can be notified
and take further action when a problem occurs. For example, you could configure a disk monitor
to report when a mirror is lost from a mirrored volume group being used in a non-RAC package.
For additional information, refer to www.hp.com/go/hpux-ha-monitoring-docs —> HP Event
Monitoring Service.
Using EMS Hardware Monitors
A set of hardware monitors is available for monitoring and reporting on memory, CPU, and many
other system values. For additional information, refer to Online Diagnostics (EMS and STM)
Administrator's Guide at www.hp.com/go/hpux-diagnostics-docs —> Diagnostics and Monitoring
Tools.
Adding Disk Hardware
As your system expands, you may need to add disk hardware. This also means modifying the
logical volume structure. Use the following general procedure:
1. Halt packages.
2. Ensure that the Oracle database is not active on either node.
3. Deactivate and mark as unshareable any shared volume groups.
4. Halt the cluster.
5. Deactivate automatic cluster startup.
7. Shut down and power off the system before installing the new hardware.
7. Install the new disk hardware with connections on all nodes.
8. Reboot all nodes.
9. On the configuration node, add the new physical volumes to existing volume groups, or create
new volume groups as needed.
10. Start up the cluster.
11. Make the volume groups shareable, then import each shareable volume group onto the other
nodes in the cluster.
12. Activate the volume groups in shared mode on all nodes.
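On a typical SGeRAC configuration, the cluster and volume group steps above map onto Serviceguard and LVM commands along the following lines. This is a sketch only: the package, node, and volume group names (pkg1, /dev/vg_rac) are hypothetical, and the exact options should be checked against the manpages for your release:

```shell
# Steps 1-5: quiesce the cluster (run on the configuration node).
cmhaltpkg pkg1                    # halt each package
vgchange -a n /dev/vg_rac         # deactivate the shared volume group
vgchange -c n /dev/vg_rac         # mark it unshareable (clear the cluster bit)
cmhaltcl -f                       # halt the cluster
# Step 5: disable automatic cluster startup, for example by setting
# AUTOSTART_CMCLD=0 in /etc/rc.config.d/cmcluster on each node.

# Steps 10-12: after the hardware is installed and all nodes rebooted.
cmruncl                           # start the cluster
vgchange -c y -S y /dev/vg_rac    # mark the volume group shareable
vgexport -p -s -m /tmp/vg_rac.map /dev/vg_rac   # create a map file
# Copy the map file to the other nodes, then on each of them:
vgimport -s -m /tmp/vg_rac.map /dev/vg_rac
vgchange -a s /dev/vg_rac         # activate in shared mode on every node
```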