
Monitoring LVM Disks Through Event Monitoring Service
If you are using LVM, you can configure disk monitoring to detect a failed disk mechanism by using
the disk monitor capabilities of the EMS HA Monitors, available as a separate product. Monitoring
can be set up to trigger a package failover or to report disk failure events to Serviceguard, to
another application, or by email. For more information, see “Using EMS to Monitor Volume Groups”
(page 96).
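For example, a failover package can be made to depend on an EMS disk resource, so that the
package will not start unless the resource is up and will fail over if the resource goes down. The
following package parameter fragment is a minimal sketch only: the resource path
/vg/vgpkgA/pv_summary and the 60-second polling interval are illustrative values, and the actual
resource name depends on the monitor you configure (see “Using EMS to Monitor Volume Groups”
(page 96)).
resource_name               /vg/vgpkgA/pv_summary
resource_polling_interval   60
resource_start              AUTOMATIC
resource_up_value           = UP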
Monitoring VxVM and CVM Disks
The HP Serviceguard VxVM Volume Monitor provides a means for effective and persistent monitoring
of VxVM and CVM volumes. The Volume Monitor supports Veritas Volume Manager versions 4.1
and 5.0, as well as Veritas Cluster Volume Manager (CVM) versions 4.1 and 5.0.
You can configure the Volume Monitor (cmvxserviced) to run as a service in a package that
requires the monitored volume or volumes. When a monitored volume fails or becomes inaccessible,
the service will exit, causing the package to fail on the current node. (The package’s failover
behavior depends on its configured settings, as with any other failover package.)
For example, the following service_cmd monitors two volumes at the default log level 0, with a
default polling interval of 60 seconds, and prints all log messages to the console:
/usr/sbin/cmvxserviced /dev/vx/dsk/cvm_dg0/lvol1 /dev/vx/dsk/cvm_dg0/lvol2
For more information, see the cmvxserviced (1m) manpage. For information about configuring
package services, see the parameter descriptions starting with service_name (page 232).
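As an illustration, such a monitoring service might be defined in a modular package configuration
file along the following lines. This is a sketch only: the service name vxvm_vol_mon is a
hypothetical example, the service_cmd string is the one shown above, and the restart and timeout
values are typical choices rather than requirements.
service_name                vxvm_vol_mon
service_cmd                 "/usr/sbin/cmvxserviced /dev/vx/dsk/cvm_dg0/lvol1 /dev/vx/dsk/cvm_dg0/lvol2"
service_restart             none
service_fail_fast_enabled   no
service_halt_timeout        300
With service_restart set to none, the service is not restarted locally when it exits, so the failure
of a monitored volume leads directly to package failure and the configured failover behavior.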
Replacing Failed Disk Mechanisms
Mirroring provides data protection, but after a disk failure, the failed disk must be replaced. With
conventional disks, this is done by bringing down the cluster and replacing the mechanism. With
disk arrays and with special HA disk enclosures, it is possible to replace a disk while the cluster
stays up and the application remains online. The process is described under “Replacing Disks”
(page 310).
Replacing Failed I/O Cards
Depending on the system configuration, it is possible to replace failed disk I/O cards while the
system remains online. The process is described under “Replacing I/O Cards” (page 313).
Sample SCSI Disk Configurations
Figure 5 shows a two-node cluster. Each node has one mirrored root disk and one package
for which it is the primary node. Resources have been allocated to each node so that each node
can adopt the package from the other node. Each package has one disk volume group assigned
to it, and the logical volumes in that volume group are mirrored. Note that Package A’s disk
and the mirror of Package B’s disk are on one bus, while Package B’s disk and the mirror of
Package A’s disk are on a separate bus. This arrangement eliminates single points of failure and
keeps either the disk or its mirror available if one of the buses fails.
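The bus-separated mirroring shown in Figure 5 can be set up with MirrorDisk/UX by placing the
disks on each bus in their own physical volume group (PVG) and using PVG-strict allocation, so
that a logical volume and its mirror copy never share a bus. The commands below are a sketch
only, assuming MirrorDisk/UX is installed; the device file names, the volume group name
/dev/vgpkgA, the minor number, and the logical volume size are illustrative.
# Initialize the disks and create the volume group device file
pvcreate /dev/rdsk/c1t2d0
pvcreate /dev/rdsk/c2t2d0
mkdir /dev/vgpkgA
mknod /dev/vgpkgA/group c 64 0x010000
# Group the disk on each bus into its own physical volume group
vgcreate -g bus0 /dev/vgpkgA /dev/dsk/c1t2d0
vgextend -g bus1 /dev/vgpkgA /dev/dsk/c2t2d0
# Create a mirrored logical volume with PVG-strict allocation (-s g),
# so the mirror copy is always placed on the other bus
lvcreate -m 1 -s g -L 1024 -n lvol1 /dev/vgpkgA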