Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.05 Administrator's Guide

7. Configure the disk hardware for high availability.
Disks must be protected using HP's MirrorDisk/UX product or an HP High Availability
Disk Array with PV links. Data disks associated with Serviceguard NFS must be external
disks. All the nodes that support the Serviceguard NFS package must have access to the
external disks. For most disks, this means that the disks must be attached to a shared bus
that is connected to all nodes that support the package. For information on configuring disks,
see the Managing Serviceguard manual.
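As a rough sketch, PV links are created in LVM by adding a second bus path to the same physical disk. The device files, volume group name, and minor number below are illustrative examples, not values from this guide; check existing minor numbers with ll /dev/*/group before choosing one:

```shell
# Illustrative device paths: c4t0d0 and c5t0d0 are two bus paths
# to the SAME disk, which is what provides the PV (alternate) link.
pvcreate /dev/rdsk/c4t0d0
mkdir /dev/vgnfs
mknod /dev/vgnfs/group c 64 0x030000   # example minor number; must be unused
vgcreate /dev/vgnfs /dev/dsk/c4t0d0    # primary path
vgextend /dev/vgnfs /dev/dsk/c5t0d0    # alternate path (PV link)
```

If the primary path fails, LVM switches I/O to the alternate link automatically.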
8. Use SAM or LVM commands to set up volume groups, logical volumes, and file systems as
needed for the data that is exported to clients.
The names of the volume groups must be unique within the cluster, and the major and minor
numbers associated with the volume groups must be the same on all nodes. In addition, the
mounting points and exported file system names must be the same on all nodes.
The preceding requirements exist because NFS uses the major number, minor number, inode
number, and exported directory as part of a file handle to uniquely identify each NFS file.
If these values differ between the primary and adoptive nodes, the client's file handle
no longer points to the correct file after the package moves to a different node.
It is recommended that file systems used for NFS be created as journaled file systems (FStype
vxfs). This ensures fast recovery time in the event of a package switch to another node.
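A common way to keep the major and minor numbers identical across nodes is to export the volume group configuration from the primary node and import it on each adoptive node, creating the group file with the same minor number first. A sketch, with illustrative names and numbers:

```shell
# On the primary node: deactivate the VG, then write a map file.
# -p previews (the VG stays configured); -s records the VG ID so
# the adoptive node can locate the disks by ID.
vgchange -a n /dev/vgnfs
vgexport -p -s -m /tmp/vgnfs.map /dev/vgnfs

# On each adoptive node: recreate the group file with the SAME
# minor number used on the primary node (0x030000 is an example),
# then import using the map file.
mkdir /dev/vgnfs
mknod /dev/vgnfs/group c 64 0x030000
vgimport -s -m /tmp/vgnfs.map /dev/vgnfs
```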
9. Make sure the user IDs and group IDs of those who access the Serviceguard NFS file system
are the same on all nodes that can run the package.
Make sure the /etc/passwd and /etc/group files are the same on the primary node and
all adoptive nodes, or use NIS to manage the password and group databases. For information
on configuring NIS, see the NFS Services Administrator's Guide.
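If NIS is not used, a quick consistency check is to compare the name/UID/GID triples from each node's password file. A minimal sketch, using two sample files that stand in for copies gathered from the primary and an adoptive node (the entries and paths are illustrative):

```shell
# Sample data standing in for /etc/passwd copies from two nodes.
printf 'root:x:0:3:root:/:/sbin/sh\nwww:x:30:1:web:/:/sbin/sh\n' > /tmp/passwd.primary
printf 'root:x:0:3:root:/:/sbin/sh\nwww:x:30:1:web:/:/sbin/sh\n' > /tmp/passwd.adoptive

# Reduce each file to name, UID, and GID, then compare; any diff
# output marks a user whose IDs differ between the nodes.
awk -F: '{print $1, $3, $4}' /tmp/passwd.primary  | sort > /tmp/ids.primary
awk -F: '{print $1, $3, $4}' /tmp/passwd.adoptive | sort > /tmp/ids.adoptive
if diff /tmp/ids.primary /tmp/ids.adoptive > /dev/null; then
    echo "user and group IDs match"
fi
```

The same comparison can be applied to /etc/group.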
10. Create an entry for the name of the package in the DNS or NIS name resolution files, or in
/etc/hosts, so that users will mount the exported file systems from the correct node. This
entry maps the package name to the package's relocatable IP address.
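For example, if the package were named nfs_pkg1 with relocatable IP address 192.0.2.10 (both values are illustrative), the /etc/hosts entry would look like this:

```
192.0.2.10    nfs_pkg1
```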
11. Decide whether to place executables locally on each client or on the NFS server. There are
a number of trade-offs to be aware of regarding the location of executables with Serviceguard
NFS.
The advantages of keeping executables local to each client are as follows:
• No failover time. If the executables are local to the client, there is no delay if the NFS
server fails.
• Faster access to the executables than accessing them through the network.
The advantage of putting the executables on the NFS server is as follows:
• Executable management. If the executables are located in one centralized location, the
administrator must update only one copy when changes are made.
If executables are placed on the NFS server, you need to ensure that interrupts are handled
correctly in a Serviceguard environment. The client must mount the filesystem using the
nointr option. This mount option ensures that the executable continues running correctly
on the client after the server failover. For example, enter the following command on the NFS
client:
mount -o nointr relocatable_ip:/usr/src /usr/src
where relocatable_ip is the relocatable IP address of the package, and /usr/src represents
the exported directory on the server and the mount point on the client, respectively.
Without the nointr option, if an interrupt (or a SIGKILL, SIGHUP, SIGINT, SIGQUIT,
SIGTERM, or SIGALRM signal) is sent to an executable while the NFS server is failing over,
NFS will terminate the executable. This is a standard feature of NFS that allows interrupts
such as ^C to kill a “hung” client executable if the NFS server is down. Specifying the nointr
option resolves this problem. See the mount_nfs(1M) man page for more information.
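To make the mount persistent across client reboots, an equivalent entry can be placed in the client's /etc/fstab. The address and paths below follow the illustrative mount command above; hard is shown explicitly, since hard mounts combined with nointr give the retry behavior the failover depends on:

```
relocatable_ip:/usr/src  /usr/src  nfs  nointr,hard  0  0
```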