Managing HP Serviceguard A.12.00.00 for Linux, June 2014
The control script imports disk groups using the vxdg command with the -tfC options. The -t
option specifies that the disk group is imported with the noautoimport flag, which means that
the disk group will not be automatically re-imported at boot time. Since disk groups included in
the package control script are imported only by Serviceguard packages, they should not be
auto-imported.
The -f option allows the disk group to be imported even if one or more disks (a mirror, for example)
are not currently available. The -C option clears any existing host ID that might be written on the
disk from a prior activation by another node in the cluster. If the disk had been in use on another
node which has gone down with a TOC, then its host ID may still be written on the disk, and this
needs to be cleared so the new node’s ID can be written to the disk. Note that a disk group is
not imported, and its host ID is not cleared, if the host ID is set and matches a node that is not
in a failed state. This prevents the accidental import of a disk group on multiple nodes, which
could result in data corruption.
CAUTION: Although Serviceguard uses the -C option within the package control script framework,
this option should not normally be used from the command line. Chapter 10: “Troubleshooting
Your Cluster” (page 269) shows some situations where you might need to use -C from the command
line.
The following example shows the command with the same options that are used by the control
script:
# vxdg -tfC import dg_01
This command takes over ownership of all the disks in disk group dg_01, even though one or
more of the disks may currently have a different host ID written on them. The command writes the
current node’s host ID on all disks in disk group dg_01 and sets the noautoimport flag for the disks. This flag prevents a disk
group from being automatically re-imported by a node following a reboot. If a node in the cluster
fails, the host ID is still written on each disk in the disk group. However, if the node is part of a
Serviceguard cluster then on reboot the host ID will be cleared by the owning node from all disks
which have the noautoimport flag set, even if the disk group is not under Serviceguard control.
This allows all cluster nodes, which have access to the disk group, to be able to import the disks
as part of the cluster operation.
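To verify the result of an import, you can display the state recorded for the disk group; the detailed
listing includes the flags and the host ID currently written on the disks. (The exact output format
varies by VxVM version; dg_01 is the example disk group used above.)
# vxdg list dg_01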
The control script also uses the vxvol startall command to start up the logical volumes in
each disk group that is imported.
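For example, after importing dg_01, the control script would start that disk group’s volumes with
a command equivalent to the following, where the -g option names the disk group to operate on:
# vxvol -g dg_01 startall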
6.9 Creating a Disk Monitor Configuration
Serviceguard provides disk monitoring for the shared storage that is activated by packages in the
cluster. The monitor daemon on each node tracks the status of all the disks on that node that you
have configured for monitoring.
The configuration must be done separately for each node in the cluster, because each node monitors
only the group of disks that can be activated on that node, and that depends on which packages
are allowed to run on the node.
To set up monitoring, include a monitoring service in each package that uses disks you want to
track. Remember that service names must be unique across the cluster; you can use the package
name in combination with the string cmresserviced. The following shows an entry in the package
configuration file for pkg1:
service_name cmresserviced_pkg1
service_fail_fast_enabled yes
service_halt_timeout 300
service_cmd "$SGSBIN/cmresserviced /dev/sdd1 /dev/sde1"
service_restart none
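As with any change to a package configuration file, the edited file must be verified and applied
before the monitoring service takes effect. Assuming the entries above were added to a package
configuration file named pkg1.conf (an example file name), commands like the following would
be used:
# cmcheckconf -P pkg1.conf
# cmapplyconf -P pkg1.conf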
CAUTION: Because of a limitation in LVM, service_fail_fast_enabled must be set to
yes, forcing the package to fail over to another node if it loses its storage.