Logical Volume and File System Planning
Use logical volumes in volume groups as the storage infrastructure for package operations on a
cluster. When the package moves from one node to another, it must still be able to access the
same data on the same disk as it did when it was running on the previous node. This is accomplished
by activating the volume group and mounting the file system that resides on it.
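For example, on the adoptive node the package's startup logic performs operations equivalent to the following. The volume group, logical volume, and mount point names here are examples only; in practice the Serviceguard package scripts run these steps for you.
vgchange -a y vg01                        # activate the volume group on this node
mount /dev/vg01/lvoldatabase /applic1     # mount the file system on one of its logical volumes
When the package halts, the reverse is done: the file system is unmounted (umount /applic1) and the volume group is deactivated (vgchange -a n vg01).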
In Serviceguard, high availability applications, services, and data are located in volume groups
that are on a shared bus. When a node fails, the volume groups containing the applications,
services, and data of the failed node are deactivated on the failed node and activated on the
adoptive node (the node the packages move to). In order for this to happen, you must configure
the volume groups so that they can be transferred from the failed node to the adoptive node.
NOTE: To prevent an operator from accidentally activating volume groups on other nodes in the
cluster, versions A.11.16.07 and later of Serviceguard for Linux include a type of VG activation
protection. This is based on the “hosttags” feature of LVM2.
This feature is not mandatory, but HP strongly recommends you implement it as you upgrade
existing clusters and create new ones. See “Enabling Volume Group Activation Protection” (page 133)
for instructions.
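Broadly, the protection relies on standard LVM2 behavior: when host tags are enabled, a node can activate only volume groups that carry a tag it is permitted to activate (typically its own hostname). The commands below illustrate tagging only; the complete procedure, including the required /etc/lvm/lvm.conf settings, is in the referenced section.
vgchange --addtag $(uname -n) vg01     # allow activation of vg01 on this node
vgchange --deltag $(uname -n) vg01     # disallow activation on this node again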
As part of planning, you need to decide the following:
• What volume groups are needed?
• How much disk space is required, and how should this be allocated in logical volumes?
• What file systems need to be mounted for each package?
• Which nodes need to import which logical volume configurations?
• If a package moves to an adoptive node, what effect will its presence have on performance?
• What hardware/software resources need to be monitored as part of the package? You can
then configure these as generic resources in the package and write appropriate scripts to
monitor them.
NOTE: Generic resources influence the package based on their status. The actual monitoring
of a resource should be done in a script, and that script must be configured as a service. The
script sets the status of the resource based on its availability. See “Monitoring Script for
Generic Resources” (page 267).
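As an illustration only, a monitoring script configured as a package service might look like the following sketch. The resource name, the health check, and the interval are hypothetical, and the cmsetresource invocation should be verified against cmsetresource(1m) and the template in the referenced section.
#!/bin/bash
# Illustrative generic resource monitor, run as a package service.
RESOURCE=sg_disk_mon          # hypothetical generic resource name
INTERVAL=30                   # seconds between checks
while true
do
    if [ -b /dev/sdc ]        # hypothetical check: shared disk device is present
    then
        cmsetresource -r $RESOURCE -s up      # report the resource as available
    else
        cmsetresource -r $RESOURCE -s down    # report the resource as unavailable
    fi
    sleep $INTERVAL
done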
Create a list by package of volume groups, logical volumes, and file systems. Indicate which nodes
need to have access to common file systems at different times.
HP recommends that you use customized logical volume names that are different from the default
logical volume names (lvol1, lvol2, etc.). Choosing logical volume names that represent the
high availability applications that they are associated with (for example, lvoldatabase) will
simplify cluster administration.
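For example, descriptive names can be assigned when the logical volumes are created (vg01 and the sizes shown are examples only):
lvcreate -L 8G -n lvoldatabase vg01    # data for the database application
lvcreate -L 2G -n lvoldblogs   vg01    # transaction logs for the same application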
To further document your package-related volume groups, logical volumes, and file systems on
each node, you can add commented lines to the /etc/fstab file. The following is an example
for a database application:
# /dev/vg01/lvoldb1 /applic1 ext3 defaults 0 1 # These six entries are
# /dev/vg01/lvoldb2 /applic2 ext3 defaults 0 1 # for information purposes
# /dev/vg01/lvoldb3 raw_tables ignore ignore 0 0 # only. They record the
# /dev/vg01/lvoldb4 /general ext3 defaults 0 2 # logical volumes that
# /dev/vg01/lvoldb5 raw_free ignore ignore 0 0 # exist for Serviceguard's
# /dev/vg01/lvoldb6 raw_free ignore ignore 0 0 # HA package. Do not uncomment.
Create an entry for each logical volume, indicating its use for a file system or for a raw device.
CAUTION: Do not use /etc/fstab to mount file systems that are used by Serviceguard packages.
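Instead, define the storage in the package configuration file so that Serviceguard activates the volume group and mounts the file systems when the package starts, and reverses this when the package halts. The fragment below illustrates the modular package parameters (vg, fs_name, fs_directory, fs_type); the values are examples only.
vg              vg01
fs_name         /dev/vg01/lvoldb1
fs_directory    /applic1
fs_type         ext3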