To run this script from cron, you would create the following entry in
/var/spool/cron/crontabs/root:
0 8,20 * * * verification.sh
See the cron(1m) manpage for more information.
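A minimal sketch of what such a verification script might contain follows; it assumes you want cmcheckconf -v to verify the existing cluster and package configuration and mail the results to root. The script name, output file, and mail recipient are examples only, so adapt them to your environment.
#!/bin/sh
# Hypothetical verification.sh: verify the cluster and package
# configuration and mail the results to root.
PATH=$PATH:/usr/sbin:/usr/bin
cmcheckconf -v > /tmp/cmcheckconf.output 2>&1
mailx -s "Cluster verification report" root < /tmp/cmcheckconf.output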
Limitations
Serviceguard does not check the following conditions:
• Access Control Policies properly configured (see “Controlling Access to the Cluster”
(page 251) for information about Access Control Policies)
• File systems configured to mount automatically on boot (that is, Serviceguard does
not check /etc/fstab)
• Shared volume groups configured to activate on boot
• Volume group major and minor numbers unique
• Redundant storage paths functioning properly
• Kernel parameters and driver configurations consistent across nodes
• Mount point overlaps (such that one file system is obscured when another is
mounted)
• Unreachable DNS server
• Consistency of settings in .rhosts and /var/admin/inetd.sec
• Consistency of device-file major and minor numbers across the cluster
• Nested mount points
• Staleness of mirror copies
Managing the Cluster and Nodes
This section covers the following tasks:
• Starting the Cluster When All Nodes are Down
• Adding Previously Configured Nodes to a Running Cluster
• Removing Nodes from Operation in a Running Cluster
• Halting the Entire Cluster
• Halting a Node or the Cluster while Keeping Packages Running (page 344)
In Serviceguard A.11.16 and later, these tasks can be performed by non-root users with
the appropriate privileges, except where specifically noted. See “Controlling Access to
the Cluster” (page 251) for more information about configuring access.
You can use Serviceguard Manager or the Serviceguard command line to start or stop
the cluster, or to add or halt nodes. Starting the cluster means running the cluster
daemon on one or more of the nodes in a cluster. You use different Serviceguard
commands to start the cluster, depending on whether all nodes are currently down
(that is, no cluster daemons are running), or whether you are starting the cluster daemon
on an individual node.
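For example (a sketch only; ftsys9 stands for whatever node name applies in your cluster), you would use cmruncl when no cluster daemons are running on any node, and cmrunnode to start the cluster daemon on an individual node of a cluster that is already running:
# Start the cluster daemon on all configured nodes (cluster currently down)
cmruncl -v
# Start the cluster daemon on one node of an already-running cluster
cmrunnode -v ftsys9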