Serviceguard NFS Toolkit A.11.11.06, A.11.23.05 and A.11.31.
© Copyright 2009 Hewlett-Packard Development Company, L.P.
Legal Notices
The information in this document is subject to change without notice. Hewlett-Packard makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose.
1 Overview of Serviceguard NFS
Serviceguard NFS is a toolkit that enables you to use Serviceguard to set up highly available NFS servers. You must set up a Serviceguard cluster before you can set up Highly Available NFS. For instructions on setting up a Serviceguard cluster, see the Managing Serviceguard manual. Serviceguard NFS is a separately purchased set of configuration files and control scripts, which you customize for your specific needs.
• AutoFS mounts may fail when mounting file systems exported by an HA-NFS package soon after that package has been restarted. To avoid these mount failures, AutoFS clients should wait at least 60 seconds after an HA-NFS package has started before mounting file systems exported from that package.
• Serviceguard supports Cross-Subnet Failover. However, HA-NFS has a few limitations with Cross-Subnet configurations.
Overview of the NFS File Lock Migration Feature
Serviceguard NFS introduced the “File Lock Migration” feature beginning with versions A.11.11.03 and A.11.23.02. The detailed information on this feature is as follows:
• Each HA/NFS package designates a unique holding directory located in one of the filesystems associated with the package. In other words, an empty directory is created in one of the filesystems that moves between servers as part of the package.
directory is a configurable parameter and must be dedicated to hold the v4_state entries only.
• Both holding directories should be located in the same filesystem.
• The nfs.flm script now copies v4_state entries from /var/nfs4/v4_state to the NFSv4 holding directory when copying SM entries from the /var/statmon/sm directory into the NFSv2 and NFSv3 holding directory.
http://docs.hp.com/en/ha.html#Serviceguard
Serviceguard A.11.17 is not available on HP-UX 11i v1 systems, so Serviceguard CFS support is only applicable to HP-UX 11i v2.
Limitations
The following is a list of limitations when using Serviceguard NFS Toolkit A.11.23.05 with Serviceguard A.11.17:
• Serviceguard A.11.17 introduces a new MULTI_NODE package type, which is not supported by Serviceguard NFS Toolkit. The only supported package type is FAILOVER.
• Serviceguard A.11.
Figure 1-1 CFS versus Non-CFS (VxFS) Implementation In a Serviceguard CFS environment, files and filesystems are concurrently accessible on multiple nodes. When a package fails over, the adoptive systems do not have to mount the disks from the failed system because they are already mounted. There is a new multi-node package that runs on each server in the cluster and exports all the cluster filesystems. The exported filesystems do not have to be reexported when a package fails over.
NOTE: The implementation of a load balancer or DNS round-robin scheme is optional and is beyond the scope of this publication. For more information about DNS round-robin addressing, refer to the BIND Name Service Overview section in the HP-UX IP Address and Client Administrator's Guide.
Figure 1-4 SG NFS Servers over CFS — High Availability, File Locking
Supported Configurations
Serviceguard NFS supports the following configurations:
• Simple failover from an active NFS server node to an idle NFS server node.
• Failover from one active NFS server node to another active NFS server node, where the adoptive node supports more than one NFS package after the failover.
• A host configured as an adoptive node for more than one NFS package.
Figure 1-5 Simple Failover to an Idle NFS Server Node_A is the primary node for NFS server package Pkg_1. When Node_A fails, Node_B adopts Pkg_1. This means that Node_B locally mounts the file systems associated with Pkg_1 and exports them. Both Node_A and Node_B must have access to the disks that hold the file systems for Pkg_1. Failover from One Active NFS Server to Another Figure 1-6 shows a failover from one active NFS server node to another active NFS server node.
Figure 1-6 Failover from One Active NFS Server to Another In Figure 1-6, Node_A is the primary node for Pkg_1, and Node_B is the primary node for Pkg_2. When Node_A fails, Node_B adopts Pkg_1 and becomes the server for both Pkg_1 and Pkg_2. A Host Configured as Adoptive Node for Multiple Packages Figure 1-7 shows a three-node configuration where one node is the adoptive node for packages on both of the other nodes. If either Node_A or Node_C fails, Node_B adopts the NFS server package from that node.
Figure 1-7 A Host Configured as Adoptive Node for Multiple Packages When Node_A fails, Node_B becomes the server for Pkg_1. If Node_C fails, Node_B will become the server for Pkg_2. Alternatively, you can set the package control option in the control script, nfs.cntl, to prevent Node_B from adopting more than one package at a time. With the package control option, Node_B may adopt the package of the first node that fails, but if the second node fails, Node_B will not adopt its package.
Figure 1-8 Cascading Failover with Three Adoptive Nodes Server-to-Server Cross Mounting Two NFS server nodes may NFS-mount each other's file systems and still act as adoptive nodes for each other's NFS server packages. Figure 1-9 illustrates this configuration.
Figure 1-9 Server-to-Server Cross Mounting The advantage of server-to-server cross-mounting is that every server has an identical view of the file systems. The disadvantage is that, on the node where a file system is locally mounted, the file system is accessed through an NFS mount, which has poorer performance than a local mount. Each node NFS-mounts the file systems for both packages. If Node_A fails, Node_B mounts the filesystem for Pkg_1, and the NFS mounts are not interrupted.
• Initiates the NFS monitor script to check periodically on the health of NFS services, if you have configured your NFS package to use the monitor script.
• Exports each file system associated with the package so that it can later be NFS-mounted by clients.
• Assigns a package IP address to the LAN card on the current node.
After this sequence, the NFS server is active, and clients can NFS-mount the exported file systems associated with the package.
rpc.statd, rpc.lockd, nfsd, rpc.mountd, rpc.pcnfsd, and nfs.flm processes. You can monitor any or all of these processes as follows:
• To monitor the rpc.statd, rpc.lockd, and nfsd processes, you must set the NFS_SERVER variable to 1 in the /etc/rc.config.d/nfsconf file. If one nfsd process dies or is killed, the package fails over, even if other nfsd processes are running.
• To monitor the rpc.mountd process, you must set the START_MOUNTD variable to 1 in the /etc/rc.config.d/nfsconf file.
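As a minimal sketch (the values shown are illustrative and simply reflect the variables named above), the relevant entries in /etc/rc.config.d/nfsconf would look like this:
NFS_SERVER=1      # required so that rpc.statd, rpc.lockd, and nfsd can be monitored
START_MOUNTD=1    # required so that rpc.mountd can be monitored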
2 Installing and Configuring Serviceguard NFS Legacy Package
This chapter explains how to configure a Serviceguard NFS legacy package. You must set up your Serviceguard cluster before you can configure a legacy package. For instructions on setting up a Serviceguard cluster, see the Managing Serviceguard manual.
cmmakepkg -p /opt/cmcluster/nfs/nfs.conf
2. Run the following command to create the package control template file:
cmmakepkg -s /opt/cmcluster/nfs/nfs.cntl
3. Create a directory, /etc/cmcluster/nfs.
4. Run the following command to copy the Serviceguard NFS template files to the newly created /etc/cmcluster/nfs directory:
cp /opt/cmcluster/nfs/* /etc/cmcluster/nfs
Before Creating a Serviceguard NFS Legacy Package
Before creating a Serviceguard NFS legacy package, perform the following tasks:
7. Configure the disk hardware for high availability. Disks must be protected using HP's MirrorDisk/UX product or an HP High Availability Disk Array with PV links.
8. Data disks associated with Serviceguard NFS must be external disks. All the nodes that support the Serviceguard NFS package must have access to the external disks. For most disks, this means that the disks must be attached to a shared bus that is connected to all nodes that support the package.
Configuring a Serviceguard NFS Legacy Package
To configure a Serviceguard NFS legacy package, complete the following tasks, included in this section:
• “Copying the Template Files”
• “Editing the Control Script (nfs.cntl)”
• “Editing the NFS Control Script (hanfs.sh)”
• “Editing the File Lock Migration Script (nfs.flm)”
• “Editing the NFS Monitor Script (nfs.mon)”
• “Editing the Package Configuration File (nfs.
Editing nfs.cntl for NFS Toolkit A.11.00.05, A.11.11.02 (or above) and A.11.23.01 (or above)
Starting with Serviceguard A.11.13, a package can have LVM volume groups, CVM disk groups, and VxVM disk groups. Example steps:
1. Create a separate VG[n] variable for each LVM volume group that is used by the package:
VG[0]=/dev/vg01
VG[1]=/dev/vg02
...
2. Create a separate VXVM_DG[n] variable for each VxVM disk group that is used by the package:
VXVM_DG[0]=dg01
VXVM_DG[1]=dg02
...
There is a short time, after one package has failed over but before the cmmodpkg command has executed, when the other package can fail over and the host will adopt it. In other words, if two packages fail over at approximately the same time, a host may adopt both packages, even though the package control option is specified. See “Example Two - One Adoptive Node for Two Packages with File Lock Migration” (page 71) for a sample configuration using the package control option.
defaults), the package will switch to the next adoptive node or to a standby network interface in the event of a node or network failure. The NFS monitor script causes the package to fail over if any of the monitored NFS services fails.
6. If you run the NFS monitor script, set the NFS_SERVICE_CMD variable to the full path name of the NFS monitor script.
NFS_SERVICE_CMD[0]=/etc/cmcluster/nfs/nfs.mon
The path name for the executable script does not have to be unique to each package.
happens if the server is an adoptive node for a file system, and the file system is available on the server only after failover of the primary node. “-o fsid=” must be used to force the file system ID portion of the file handle to be consistent when clusters are composed of mixed architectures, such as HP Integrity servers and HP 9000 Series 800 computers, over CFS. A value between 1 and 32767 may be used, but it must be unique among the shared file systems. See share_nfs(1m) for detailed information.
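As an illustration only (the placement of export options in the XFS entry is an assumption; check the export syntax for your toolkit version and share_nfs(1m)), a fixed file system ID could be assigned like this:
XFS[0]="-o fsid=1234 /cfs1"    # illustrative entry; 1234 is an arbitrary value between 1 and 32767, unique to this shared file system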
NOTE: The file name of the NFS_FLM_SCRIPT script must be limited to 13 characters or fewer.
Editing the File Lock Migration Script (nfs.flm)
The File Lock Migration script, nfs.flm, handles the majority of the work involved in maintaining file lock integrity following an HA/NFS failover. The nfs.flm script includes the following configurable parameters:
• NFS_FLM_HOLDING_DIR - Name of a unique directory created in one of the shared volumes associated with this package.
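For illustration, a package whose shared file system is /hanfs/nfsu011 might set the holding directory as follows in nfs.flm (the directory name is an assumption; any dedicated, otherwise empty directory in one of the package's shared file systems can be used):
NFS_FLM_HOLDING_DIR="/hanfs/nfsu011/sm"    # example path only; must be dedicated to lock-migration entries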
NOTE: The file name of the NFS_FLM_SCRIPT script must be limited to 13 characters or fewer.
NOTE: The nfs.mon script uses rpcinfo calls to check the status of various processes. If the rpcbind process is not running, the rpcinfo calls time out after 75 seconds. Because 10 rpcinfo calls are attempted before failover, it takes approximately 12 minutes to detect the failure. This problem has been fixed in release versions 11.11.04 and 11.23.03.
2. You can call the nfs.
Each package must have a unique service name. The SERVICE_NAME variable in the package configuration file must match the NFS_SERVICE_NAME variable in the NFS control script. If you do not want to run the NFS monitor script, comment out the SERVICE_NAME variable: # SERVICE_NAME nfs.monitor 6. Set the SUBNET variable to the subnet that will be monitored for the package. SUBNET 15.13.112.
In order to make a Serviceguard file system available to all servers, all servers must NFS-mount the file system. That way, access to the file system is not interrupted when the package fails over to an adoptive node. An adoptive node cannot access the file system through the local mount, because it would have to unmount the NFS-mounted file system before it could mount it locally. And in order to unmount the NFS-mounted file system, it would have to kill all processes using the file system.
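For example, a cross-mount entry such as the following (the names are taken from the sample configurations later in this manual and are illustrative) lets every server reach the pkg01 file system through the package's relocatable name, so the mount survives a failover:
SNFS[0]="nfs1:/hanfs/nfsu011";CNFS[0]="/nfs/nfsu011"    # example values; nfs1 maps to the package's relocatable IP address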
1. Use the cmquerycl command in the following manner to create the cluster configuration file from your package configuration files. You must run this command on all nodes in the cluster:
cmquerycl -v -C /etc/cmcluster/nfs/cluster.conf -n basil -n sage -n thyme
2. Set the FIRST_CLUSTER_LOCK_VG and MAX_CONFIGURED_PACKAGES variables in the cluster.conf script on each node.
Create a directory on each server in the cluster to hold all of the configuration files (if this directory already exists, you should save the contents before continuing):
# mkdir /etc/cmcluster/nfs
The rest of the configuration depends upon whether or not the cluster requires file locking (as described in the Issues and Limitations with the current CFS Implementation section).
3. Edit the nfs-export.cntl script and set the HA_NFS_SCRIPT_EXTENSION to export.sh. Note that when you edit the configuration scripts referred to throughout this document, you may have to uncomment the lines as you edit them. # HA_NFS_SCRIPT_EXTENSION = "export.sh" This will set the NFS specific control script to be run by the package to hanfs.export.sh as we have named it in the copy command above. No other changes are needed in this script. 4. Edit the hanfs.export.
XFS[0]="/cfs1"
XFS[1]="/cfs2"
CAUTION: Do not modify other variables or contents in this script, since doing so is not supported.
5. Edit the nfs-export.conf file as follows:
a. Set the PACKAGE_NAME variable to SG-NFS-XP-1 (by default this variable is set to FAILOVER):
PACKAGE_NAME    SG-NFS-XP-1
b. Change the PACKAGE_TYPE from FAILOVER to MULTI_NODE:
PACKAGE_TYPE    MULTI_NODE
c.
# cmapplyconf -v -C /etc/cmcluster/cluster.conf -P /etc/cmcluster/nfs/nfs-export.conf
4. Run the export package on a single server with the following command:
# cmrunpkg -v SG-NFS-XP-1
5. You can verify the export package is running with the cmviewcl command.
4. Edit the hanfs.sh scripts (hanfs.1.sh and hanfs.2.sh) if you want to monitor NFS services (by running the NFS monitor script). To monitor NFS services, set the NFS_SERVICE_NAME and NFS_SERVICE_CMD variables:
NFS_SERVICE_NAME[0]=nfs1.monitor
NFS_SERVICE_CMD[0]=/etc/cmcluster/nfs/nfs1.mon
In hanfs.2.sh, set NFS_SERVICE_NAME[0] to nfs2.monitor and set NFS_SERVICE_CMD[0] to /etc/cmcluster/nfs/nfs2.mon. If you do not want to monitor NFS services, leave these variables commented out.
5. Edit the nfs.
3. Verify and apply the cluster package configuration files on a single server:
# cmapplyconf -v -C /etc/cmcluster/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf -P /etc/cmcluster/nfs/nfs2.conf
Figure 2-3 Serviceguard NFS over CFS with file locking Configuring a Serviceguard NFS failover package Configuring a Serviceguard NFS failover package for a CFS environment is similar to configuring the package for a non-CFS environment. The main difference is that you must configure one failover package for each server that exports CFS. Use the following procedure to configure a failover package. 1.
a. Set the exported directory in hanfs.1.sh:
XFS[0]="/cfs1"
b. Set XFS[0] to /cfs2 in hanfs.2.sh.
c. If you want to monitor NFS services (by running the NFS monitor script), set the NFS_SERVICE_NAME and NFS_SERVICE_CMD variables in hanfs.1.sh:
NFS_SERVICE_NAME[0]=nfs1.monitor
NFS_SERVICE_CMD[0]=/etc/cmcluster/nfs/nfs1.mon
d. In hanfs.2.sh, set NFS_SERVICE_NAME[0] to nfs2.monitor and set NFS_SERVICE_CMD[0] to /etc/cmcluster/nfs/nfs2.mon.
6. Edit the nfs.conf scripts (nfs1.conf and nfs2.conf):
a. Specify the package name:
PACKAGE_NAME    SG-NFS1
b. In nfs2.conf, set the PACKAGE_NAME to SG-NFS2.
c. Set the NODE_NAME variables for each node that can run the package. The first NODE_NAME should specify the primary node, followed by adoptive node(s) in the order in which they will be tried:
NODE_NAME    thyme
NODE_NAME    basil
3 Installing and Configuring Serviceguard NFS Modular Package This chapter explains how to configure Serviceguard NFS modular package. You must set up your Serviceguard cluster before you can configure Serviceguard NFS modular package. For more information on setting up a Serviceguard cluster, see the Managing Serviceguard manual.
NOTE: If the package is being configured with the NFS File Lock Migration feature, it is strongly recommended to list the hanfs module as the last module name parameter in the cmmakepkg command.
Example 3-1 In the example below, two module names, mod1 and mod2, are configured with the hanfs module. The hanfs module is specified as the last module name in the cmmakepkg command.
cmmakepkg -m mod1 -m mod2 -m nfs/hanfs /etc/cmcluster/pkg1/nfs.
7. Configure the disk hardware for high availability. Disks must be protected using HP's MirrorDisk/UX product or an HP High Availability Disk Array with PV links. Data disks associated with Serviceguard NFS must be external disks. All the nodes that support the Serviceguard NFS package must have access to the external disks. For most disks, this means that the disks must be attached to a shared bus that is connected to all nodes that support the package.
Configuring a Serviceguard NFS Modular Package
To configure a Serviceguard NFS modular package, complete the following tasks, included in this section:
• “Editing the Package Configuration File (nfs.conf)”
• “Configuring Server-to-Server Cross-Mounts (Optional)”
• “Creating the Cluster Configuration File and Bringing Up the Cluster”
Editing the Package Configuration File (nfs.conf)
The nfs.conf file lists all the Serviceguard and HA-NFS related parameters.
vxvm_dg vxvm_dg 8.
NOTE: The NFS parameters in the package configuration file generated on Serviceguard A.11.18 and Serviceguard A.11.19 are listed in the steps below. The parameters for Serviceguard A.11.19 include the module name (nfs/hanfs_export/ or nfs/hanfs_flm/) as a prefix.
9. Create a separate XFS variable for each NFS directory to be exported. Specify the directory name and any export options. The directories must be defined in the above mounted file system list.
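For instance, an exported directory would be declared as follows (the directory name is taken from the samples in Chapter 6 and is illustrative; the prefixed form follows the module-name pattern described in the note above):
For nfs.conf generated on Serviceguard A.11.18:
XFS "/hanfs/nfsu011"                      # example directory
For nfs.conf generated on Serviceguard A.11.19:
nfs/hanfs_export/XFS "/hanfs/nfsu011"     # example directory, prefixed form assumed from the pattern above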
13. Modify the MONITOR_DAEMONS_RETRY parameter, which represents the number of attempts to ping the rpc.statd, rpc.mountd, nfsd, rpc.pcnfsd, and file lock migration script processes before exiting. The default is 4 attempts. An example for this parameter is as follows:
For nfs.conf generated on Serviceguard A.11.18:
MONITOR_DAEMONS_RETRY 4
For nfs.conf generated on Serviceguard A.11.19:
nfs/hanfs_export/MONITOR_DAEMONS_RETRY 4
NOTE: If you enable the File Lock Migration feature, an NFS client (or a group of clients) may encounter a situation where it requests a file lock on the HA/NFS server but does not receive a crash recovery notification message when the HA/NFS package migrates to an adoptive node.
Figure 3-1 Server-to-Server Cross-Mounting The advantage of server-to-server cross-mounting is that every server has an identical view of the file systems. The disadvantage is that, on the node where a file system is locally mounted, the file system is accessed through an NFS mount, which has poorer performance than a local mount. In order to make a Serviceguard file system available to all servers, all servers must NFS-mount the file system.
server location of the file system, and the CNFS[n] variable is the client mount point of the file system.
SNFS[0]="nfs1:/hanfs/nfsu011";CNFS[0]="/nfs/nfsu011"
In this example, nfs1 is the name that maps to the relocatable IP address of the package. It must be configured in the name service used by the server (DNS, NIS, or the /etc/hosts file). If a server for the package will NFS-mount the package file systems, the client mount point (CNFS) must be different from the server location (SNFS).
You must run this command on all nodes in the cluster.
2. Set the FIRST_CLUSTER_LOCK_VG and MAX_CONFIGURED_PACKAGES variables in the cluster.conf script on each node.
3. Verify the cluster and package configuration files on each node:
cmcheckconf -k -v -C /etc/cmcluster/nfs/cluster.conf -P /etc/cmcluster/nfs/nfs1.conf -P /etc/cmcluster/nfs/nfs2.conf ...
4. Activate the cluster lock volume group (corresponding to the FIRST_CLUSTER_LOCK_VG value) on one node:
vgchange -a y /dev/vg_nfsu01
4 Migration of Serviceguard NFS Legacy Package to Serviceguard NFS Modular Package This chapter explains how to migrate from the current Serviceguard NFS Legacy package to the new Serviceguard NFS Modular package. The migration of the Legacy packages to the Modular package configuration is done using the cmmigratepkg command. The cmmigratepkg command automates the migration of a legacy package to a modular package for standard Serviceguard module parameters.
6. Run the cmmigratepkg command to create a temporary configuration file with the standard Serviceguard parameters derived from the information in the legacy package configuration:
cmmigratepkg -p pkg01 -o /etc/cmcluster/nfs/modular/param.conf
If there are any changes in the customer-defined functions area of the legacy package control script (for example, in the customer_defined_run_cmds function of the nfs1.cntl file), the cmmigratepkg command copies the changes to an external script by using the -x option.
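A sketch of such an invocation is shown below; the external script path (pkg01_ext) is an assumed example location, and the sketch assumes -x takes the path of the external script to be generated, as described above:
cmmigratepkg -p pkg01 -x /etc/cmcluster/nfs/modular/pkg01_ext -o /etc/cmcluster/nfs/modular/param.conf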
9. There are two methods to configure Server-to-Server Cross-Mounts. See the “Configuring Server-to-Server Cross-Mounts (Optional)” (page 56) section for more information. The second method utilizes the external script generated by the -x option of the cmmigratepkg command.
10. Check the configuration of the new modular package using the cmcheckconf command:
cmcheckconf -P /etc/cmcluster/nfs/modular/pkg.conf
11. Apply the new modular package configuration using the cmapplyconf command.
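Mirroring the cmcheckconf invocation above, the apply step would likely be:
cmapplyconf -P /etc/cmcluster/nfs/modular/pkg.conf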
5 Sample Configurations for Legacy Package
This chapter explains how to create sample configuration files for Serviceguard NFS legacy package. It gives sample cluster configuration files, package configuration files, and control scripts for the following configurations:
• Example One - Three-Server Mutual Takeover: This configuration has three servers and three Serviceguard NFS packages. Each server is the primary node for one package and an adoptive node for the other two packages.
Figure 5-1 Three-Server Mutual Takeover Figure 5-2 shows the three-server mutual takeover configuration after host basil has failed and host sage has adopted pkg02. Dotted lines indicate which servers are adoptive nodes for the packages.
Figure 5-2 Three-Server Mutual Takeover after One Server Fails
Cluster Configuration File for Three-Server Mutual Takeover
This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample.
CLUSTER_NAME            MutTakOvr
FIRST_CLUSTER_LOCK_VG   /dev/nfsu01
NODE_NAME               thyme
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            15.13.119.
AUTO_START_TIMEOUT         600000000
NETWORK_POLLING_INTERVAL   2000000
MAX_CONFIGURED_PACKAGES    3
VOLUME_GROUP               /dev/nfsu01
VOLUME_GROUP               /dev/nfsu02
VOLUME_GROUP               /dev/nfsu03
Package Configuration File for pkg01
This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
IP[0]=15.13.114.243
SUBNET[0]=15.13.112.0
The hanfs.sh Control Script
This section shows the NFS control script (hanfs1.sh) for the pkg01 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature.
XFS[0]=/hanfs/nfsu011
NFS_SERVICE_NAME[0]="nfs1.monitor"
NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs.
FS_UMOUNT_COUNT=1
FS_MOUNT_RETRY_COUNT=0
IP[0]=15.13.112.244
SUBNET[0]=15.13.112.0
The hanfs.sh Control Script
This section shows the NFS control script (hanfs2.sh) for the pkg02 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature.
XFS[0]=/hanfs/nfsu021
NFS_SERVICE_NAME[0]="nfs2.
CVM_ACTIVATION_CMD="vxdg -g \$DiskGroup set activation=exclusivewrite"
VG[0]=nfsu03
LV[0]=/dev/nfsu03/lvol1; FS[0]=/hanfs/nfsu031; FS_MOUNT_OPT[0]="-o rw"
VXVOL="vxvol -g \$DiskGroup startall" #Default
FS_UMOUNT_COUNT=1
FS_MOUNT_RETRY_COUNT=0
IP[0]=15.13.114.245
SUBNET[0]=15.13.112.0
The hanfs.sh Control Script
This section shows the NFS control script (hanfs3.sh) for the pkg03 package in this sample configuration.
Figure 5-4 shows this sample configuration after host basil has failed. Host sage has adopted pkg02. The package control option prevents host sage from adopting another package, so host sage is no longer an adoptive node for pkg01. Figure 5-4 One Adoptive Node for Two Packages after One Server Fails This sample configuration also enables the File Lock Migration feature.
AUTO_START_TIMEOUT         600000000
NETWORK_POLLING_INTERVAL   2000000
MAX_CONFIGURED_PACKAGES    2
VOLUME_GROUP               /dev/nfsu01
VOLUME_GROUP               /dev/nfsu02
Package Configuration File for pkg01
This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
function customer_defined_run_cmds
{
    cmmodpkg -d -n `hostname` pkg02 &
}
The function customer_defined_run_cmds calls the cmmodpkg command with the package control option (-d). This command prevents the host that is running pkg01 from adopting pkg02. The ampersand (&) causes the cmmodpkg command to run in the background. It must run in the background to allow the control script to complete.
FAILOVER_POLICY              CONFIGURED_NODE
FAILBACK_POLICY              MANUAL
NODE_NAME                    basil
NODE_NAME                    sage
AUTO_RUN                     YES
LOCAL_LAN_FAILOVER_ALLOWED   YES
NODE_FAIL_FAST_ENABLED       NO
RUN_SCRIPT                   /etc/cmcluster/nfs/nfs2.cntl
RUN_SCRIPT_TIMEOUT           NO_TIMEOUT
HALT_SCRIPT                  /etc/cmcluster/nfs/nfs2.cntl
HALT_SCRIPT_TIMEOUT          NO_TIMEOUT
SERVICE_NAME                 nfs2.monitor
SERVICE_FAIL_FAST_ENABLED    NO
SERVICE_HALT_TIMEOUT         300
SUBNET                       15.13.112.0
NFS Control Scripts for pkg02
The nfs.
The hanfs.sh Control Script
This section shows the NFS control script (hanfs2.sh) for the pkg02 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example enables the File Lock Migration feature.
XFS[0]=/hanfs/nfsu021
NFS_SERVICE_NAME[0]="nfs2.monitor"
NFS_SERVICE_CMD[0]="/etc/cmcluster/nfs/nfs2.mon"
NFS_FILE_LOCK_MIGRATION=1
NFS_FLM_SCRIPT="${0%/*}/nfs2.
Figure 5-5 Cascading Failover with Three Servers Figure 5-6 shows the cascading failover configuration after host thyme has failed. Host basil is the first adoptive node configured for pkg01, and host sage is the first adoptive node configured for pkg02. Figure 5-6 Cascading Failover with Three Servers after One Server Fails Cluster Configuration File for Three-Server Cascading Failover This section shows the cluster configuration file (cluster.conf) for this configuration example.
CLUSTER_NAME            Cascading
FIRST_CLUSTER_LOCK_VG   /dev/nfsu01
NODE_NAME               thyme
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            15.13.119.146
NETWORK_INTERFACE       lan1
FIRST_CLUSTER_LOCK_PV   /dev/dsk/c0t1d0
NODE_NAME               basil
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            15.13.113.168
FIRST_CLUSTER_LOCK_PV   /dev/dsk/c1t1d0
NODE_NAME               sage
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            15.13.115.
NFS Control Scripts for pkg01 The nfs.cntl Control Script This section shows the NFS control script (nfs1.cntl) for the pkg01 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
HALT_SCRIPT                  /etc/cmcluster/nfs/nfs2.cntl
HALT_SCRIPT_TIMEOUT          NO_TIMEOUT
SERVICE_NAME                 nfs2.monitor
SERVICE_FAIL_FAST_ENABLED    NO
SERVICE_HALT_TIMEOUT         300
SUBNET                       15.13.112.0
NFS Control Scripts for pkg02
The nfs.cntl Control Script
This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration. Only the user-configured part of the script is shown; the executable part of the script and most of the comments are omitted.
Figure 5-7 Two Servers with NFS Cross-Mounts Figure 5-8 shows two servers with NFS cross-mounted file systems after server thyme has failed. The NFS mounts on server basil are not interrupted.
Figure 5-8 Two Servers with NFS Cross-Mounts after One Server Fails
Cluster Configuration File for Two-Server NFS Cross-Mount
This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample.
CLUSTER_NAME            XMnt
FIRST_CLUSTER_LOCK_VG   /dev/nfsu01
NODE_NAME               thyme
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            15.13.119.
Package Configuration File for pkg01 This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
The function customer_defined_run_cmds calls a script called nfs1_xmnt. This script NFS-mounts the file system exported by the package pkg01. If you configured the file system in the /etc/fstab file, the package might not be active yet when the servers try to mount the file system at system boot. By configuring the NFS control script to NFS-mount the file system, you ensure that the package is active before the mount command is invoked.
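A minimal sketch of such a script is shown below; the relocatable name (nfs1), exported file system (/hanfs/nfsu011), and client mount point (/nfs/nfsu011) are taken from the cross-mount samples in this manual and are assumptions for illustration:
#!/usr/bin/sh
# nfs1_xmnt (illustrative sketch): NFS-mount the file system exported by pkg01
# through the package's relocatable name so it is reachable from any node.
/usr/sbin/mount -F nfs nfs1:/hanfs/nfsu011 /nfs/nfsu011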
NODE_FAIL_FAST_ENABLED       NO
RUN_SCRIPT                   /etc/cmcluster/nfs/nfs2.cntl
RUN_SCRIPT_TIMEOUT           NO_TIMEOUT
HALT_SCRIPT                  /etc/cmcluster/nfs/nfs2.cntl
HALT_SCRIPT_TIMEOUT          NO_TIMEOUT
SERVICE_NAME                 nfs2.monitor
SERVICE_FAIL_FAST_ENABLED    NO
SERVICE_HALT_TIMEOUT         300
SUBNET                       15.13.112.0
NFS Control Scripts for pkg02
The nfs.cntl Control Script
This section shows the NFS control script (nfs2.cntl) for the pkg02 package in this sample configuration.
The client mount point, specified in the CNFS[0] variable, must be different from the location of the file system on the server (SNFS[0]).
The hanfs.sh Control Script
This section shows the NFS control script (hanfs2.sh) for the pkg02 package in this sample configuration. This example includes only the user-configured part of the script; the executable part of the script and most of the comments are omitted. This example does not enable the File Lock Migration feature.
XFS[0]=/hanfs/nfsu021
NFS_SERVICE_NAME[0]="nfs2.
6 Sample Configurations for Modular Package
This chapter explains how to create sample configuration files for Serviceguard NFS modular package. It gives sample cluster configuration files and package configuration files for the following configurations:
• Example One - Three-Server Mutual Takeover: This configuration has three servers and three Serviceguard NFS packages. Each server is the primary node for one package and an adoptive node for the other two packages.
Figure 6-1 Three-Server Mutual Takeover Figure 6-2 shows the three-server mutual takeover configuration after host basil has failed and host sage has adopted pkg02. Dotted lines indicate which servers are adoptive nodes for the packages.
Figure 6-2 Three-Server Mutual Takeover after One Server Fails
Cluster Configuration File for Three-Server Mutual Takeover
This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample.
CLUSTER_NAME            MutTakOvr
FIRST_CLUSTER_LOCK_VG   /dev/nfsu01
NODE_NAME               thyme
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            15.13.119.
AUTO_START_TIMEOUT         600000000
NETWORK_POLLING_INTERVAL   2000000
MAX_CONFIGURED_PACKAGES    3
VOLUME_GROUP               /dev/nfsu01
VOLUME_GROUP               /dev/nfsu02
VOLUME_GROUP               /dev/nfsu03
Package Configuration File for pkg01
This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
failover_policy              configured_node
failback_policy              manual
node_name                    basil
node_name                    sage
node_name                    thyme
auto_run                     yes
local_lan_failover_allowed   yes
node_fail_fast_enabled       no
service_name                 nfs2.
service_fail_fast_enabled
service_halt_timeout
vgchange_cmd         "vgchange -a e"    # Default
cvm_activation_cmd   "vxdg -g \$DiskGroup set activation=exclusivewrite"
vg                   nfsu03
fs_name              /dev/nfsu03/lvol1
fs_directory         /hanfs/nfsu031
fs_mount_opt         "-o rw"
vxvol_cmd            "vxvol -g \$DiskGroup startall"    #Default
fs_umount_retry_count   1
fs_mount_retry_count    0
ip_address           15.13.114.245
ip_subnet            15.13.112.
XFS
Figure 6-4 One Adoptive Node for Two Packages after One Server Fails This sample configuration also enables the File Lock Migration feature. Cluster Configuration File for Adoptive Node for Two Packages with File Lock Migration This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample.
VOLUME_GROUP   /dev/nfsu01
VOLUME_GROUP   /dev/nfsu02
Package Configuration File for pkg01
This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
function start_command
{
    cmmodpkg -d -n `hostname` pkg02 &
    return 0
}
The function start_command in the external script calls the cmmodpkg command with the package control option (-d). This command prevents the host that is running pkg01 from adopting pkg02. The ampersand (&) causes the cmmodpkg command to run in the background. It must run in the background to allow the control script to complete.
XFS                   "/hanfs/nfsu021"
FILE_LOCK_MIGRATION   1
FLM_HOLDING_DIR       "/hanfs/nfsu021/sm"
PROPAGATE_INTERVAL    5
The external script file for this package can be created in any package-specific location using the external script template file:
cp /etc/cmcluster/examples/external_script.template /pkg02_location/pkg02_ext
Also, specify the external script file location in the package configuration file (nfs2.
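In the package configuration file, this is normally done with the standard Serviceguard external_script parameter (an assumption here, reusing the example path from the cp command above):
external_script      /pkg02_location/pkg02_ext     # assumed parameter name; path from the example above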
Figure 6-5 Cascading Failover with Three Servers Figure 6-6 shows the cascading failover configuration after host thyme has failed. Host basil is the first adoptive node configured for pkg01, and host sage is the first adoptive node configured for pkg02. Figure 6-6 Cascading Failover with Three Servers after One Server Fails Cluster Configuration File for Three-Server Cascading Failover This section shows the cluster configuration file (cluster.conf) for this configuration example.
CLUSTER_NAME            Cascading
FIRST_CLUSTER_LOCK_VG   /dev/nfsu01
NODE_NAME               thyme
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            15.13.119.146
NETWORK_INTERFACE       lan1
FIRST_CLUSTER_LOCK_PV   /dev/dsk/c0t1d0
NODE_NAME               basil
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            15.13.113.168
FIRST_CLUSTER_LOCK_PV   /dev/dsk/c1t1d0
NODE_NAME               sage
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            15.13.115.
fs_name        /dev/nfsu01/lvol1
fs_directory   /hanfs/nfsu011
fs_mount_opt   "-o rw"
vxvol_cmd      "vxvol -g \$DiskGroup startall"    #Default
fs_umount_retry_count   1
fs_mount_retry_count    0
ip_address     15.13.114.243
ip_subnet      15.13.112.0
XFS            "/hanfs/nfsu011"
FILE_LOCK_MIGRATION     0
Package Configuration File for pkg02
This section shows the package configuration file (nfs2.conf) for the package pkg02 in this sample configuration.
Example Four - Two Servers with NFS Cross-Mounts This configuration has two servers and two packages. The primary node for each package NFS-mounts the file systems from its own package and the other package. Figure 6-7 illustrates this configuration. If one server fails, the other server adopts its package. The NFS mounts are not interrupted when a package fails over. Figure 6-7 Two Servers with NFS Cross-Mounts Figure 6-8 shows two servers with NFS cross-mounted file systems after server thyme has failed.
Figure 6-8 Two Servers with NFS Cross-Mounts after One Server Fails
Cluster Configuration File for Two-Server NFS Cross-Mount
This section shows the cluster configuration file (cluster.conf) for this configuration example. The comments available in the generated configuration file are not displayed in this sample.
CLUSTER_NAME            XMnt
FIRST_CLUSTER_LOCK_VG   /dev/nfsu01
NODE_NAME               thyme
NETWORK_INTERFACE       lan0
HEARTBEAT_IP            15.13.119.
Package Configuration File for pkg01
This section shows the package configuration file (nfs1.conf) for the package pkg01 in this sample configuration. The comments available in the generated configuration file are not displayed in this sample.
package_name                 pkg01
package_type                 failover
failover_policy              configured_node
failback_policy              manual
node_name                    thyme
node_name                    basil
auto_run                     yes
local_lan_failover_allowed   yes
node_fail_fast_enabled       no
service_name                 nfs1.
The external script invokes another script called nfs1_xmnt. This script NFS-mounts the file system exported by the package pkg01. If you configured the file system in the /etc/fstab file, the package might not be active yet when the servers try to mount the file system at system boot. By configuring the external script to NFS-mount the file system, you ensure that the package is active before the mount command is invoked.
vxvol_cmd      "vxvol -g \$DiskGroup startall"    #Default
fs_umount_retry_count   1
fs_mount_retry_count    0
ip_address     15.13.114.244
ip_subnet      15.13.112.0
XFS            "/hanfs/nfsu021"
FILE_LOCK_MIGRATION     0
The external script file for this package can be created in any package-specific location using the external script template file:
cp /etc/cmcluster/examples/external_script.template /pkg02_location/pkg02_ext
Also, specify the external script file location in the package configuration file (nfs2.