
Redundant storage paths functioning properly
Kernel parameters and driver configurations consistent across nodes
Mount point overlaps (such that one file system is obscured when another is mounted)
Unreachable DNS server
Consistency of settings in .rhosts and /var/adm/inetd.sec
Consistency across the cluster of major and minor device-file numbers
Nested mount points
Staleness of mirror copies
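Verification of conditions like these is typically run with the cmcheckconf command against the cluster and package configuration files. The following invocation is a minimal sketch; the file paths are examples only and should be replaced with your own configuration files:

    cmcheckconf -v -C /etc/cmcluster/cluster.ascii -P /etc/cmcluster/pkg1/pkg1.conf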
Cluster Verification and ccmon
The Cluster Consistency Monitor (ccmon) provides even more comprehensive verification capabilities
than those described in this section. ccmon is a separate product, available for purchase; ask your
HP Sales Representative for details.
NFS-mounted File Systems
As of Serviceguard A.11.20, you can use NFS-mounted (imported) file systems as shared storage
in packages.
The same package can mount more than one NFS-imported file system, and can use both cluster-local
shared storage and NFS imports.
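For example, a modular failover package can combine a locally mounted file system with an NFS import by repeating the filesystem-module parameters once per file system. The fragment below is a sketch only; the parameter names shown (fs_name, fs_server, fs_directory, fs_type) and the exact format of the NFS entries should be confirmed against the template generated by cmmakepkg for your release:

    # Cluster-local shared storage (LVM logical volume with a VxFS file system)
    fs_name         /dev/vg01/lvol1
    fs_directory    /appdata/local
    fs_type         "vxfs"

    # NFS-imported file system; fs_server names the NFS server
    # (parameter names and value formats are assumptions -- verify in the cmmakepkg template)
    fs_name         nfssrv1:/export/appdata
    fs_directory    /appdata/nfs
    fs_type         "nfs"
    fs_server       "nfssrv1"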
The following rules and restrictions apply.
NFS mounts are supported for modular, failover packages.
See chapter 6 of Managing Serviceguard for a discussion of types of packages.
You can create a Multi-Node Package that uses an NFS file share, but this is useful only if
you want to create an HP Integrity Virtual Machine (HPVM) in a Serviceguard Package, where
the virtual machine itself uses a remote NFS share as its backing store.
For details on how to configure NFS as a backing store for HPVM, see the HP-UX vPars and
Integrity VM V6.1 Administrator Guide at http://www.hp.com/go/virtualization-manuals
> HP Integrity Virtual Machines and Online VM Migration.
So that Serviceguard can ensure that all I/O from a node on which a package has failed is
flushed before the package restarts on an adoptive node, all the network switches and routers
between the NFS server and client must support a worst-case timeout, after which packets and
frames are dropped. This timeout is known as the Maximum Bridge Transit Delay (MBTD).
IMPORTANT: Find out the MBTD value for each affected router and switch from the vendors'
documentation; determine all of the possible paths; find the worst-case sum of the MBTD values
on these paths; and use the resulting value to set the Serviceguard
CONFIGURED_IO_TIMEOUT_EXTENSION parameter. For instructions, see the discussion of
this parameter under “Cluster Configuration Parameters” in chapter 4 of Managing
Serviceguard.
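As a purely hypothetical illustration: suppose packets between the NFS server and the cluster nodes can travel either through switch A (MBTD 4 seconds) and router B (MBTD 6 seconds), or through switch C alone (MBTD 8 seconds). The worst-case sum over the two paths is 4 + 6 = 10 seconds, so that is the value on which to base the parameter. Assuming the parameter is expressed in microseconds (confirm the unit in Managing Serviceguard), the cluster configuration file would contain something like:

    # Hypothetical value: worst-case MBTD sum of 10 seconds, expressed in microseconds
    CONFIGURED_IO_TIMEOUT_EXTENSION    10000000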
Switches and routers that do not support MBTD must not be used in a Serviceguard NFS
configuration; such devices can delay packets indefinitely, which in turn could lead to data corruption.
Networking among the Serviceguard nodes must be configured in such a way that a single
failure in the network does not cause a package failure.
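One common way to meet this requirement is to configure a standby LAN interface on each node, on the same bridged network as the primary, so that Serviceguard can fail traffic over locally before a package failover becomes necessary. The fragment below is a sketch of the relevant part of a cluster configuration file, with hypothetical interface names and addresses:

    NODE_NAME node1
      NETWORK_INTERFACE lan1
        HEARTBEAT_IP 192.10.25.18
      # Standby interface on the same bridged network; no IP address is configured on it
      NETWORK_INTERFACE lan2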