VERITAS Storage Foundation™ Cluster File System 4.1
Disclaimer The information contained in this publication is subject to change without notice. VERITAS Software Corporation makes no warranty of any kind with regard to this manual, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software Corporation shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this manual.
Contents
Preface .......... ix
How This Guide is Organized .......... x
Getting Help .......... xii
Chapter 1. Technical Overview
Installing the Product .......... 16
Starting VEA .......... 32
Creating a Dynamic (Shared) Disk Group .......... 32
Creating a Dynamic (Shared) Volume
CVM .......... 56
About CFS .......... 56
SFCFS and the Group Lock Manager .......... 57
Asymmetric Mounts
Cluster/Shared Mounts .......... 74
CFS Primary and CFS Secondary .......... 75
Asymmetric Mounts .......... 75
CFS Administration
Chapter 9. CVM Administration .......... 97
Introduction .......... 97
Overview of Cluster Volume Management .......... 98
Private and Shared Disk Groups
Appendix A. Troubleshooting and Recovery .......... 123
Installation Issues .......... 123
Incorrect Permissions for Root on Remote System .......... 123
Resource Temporarily Unavailable .......... 124
Inaccessible System
Preface This guide provides selected extracts from the VERITAS Storage Foundation Cluster File System 4.1 Installation and Administration Guide that are relevant for deployments in HP Serviceguard Storage Management Suite environments. These extracts describe the VERITAS Storage Foundation Cluster File System (SFCFS).
How This Guide is Organized
Chapter 1. “Technical Overview” on page 1 provides a technical overview of the VERITAS Storage Foundation Cluster File System (SFCFS).
Chapter 2. “Installing and Configuring” on page 9 provides lists of key terms, software packages, and prerequisites. It also includes instructions on installing and configuring the product and describes licensing requirements.
Chapter 6.
Conventions
monospace: Computer output, files, directories, and software elements such as command options, function names, and parameters. Example: Read tunables from the /etc/vx/tunefstab file.
monospace (bold): User input. Example: # mount -F vxfs /h/filesys
italic: New terms, book titles, emphasis, and variables replaced with a name or value. Example: See the User’s Guide for details.
Getting Help Getting Help For technical assistance, visit http://support.veritas.com and select phone or email support. This site also provides access to resources such as TechNotes, product alerts, software downloads, hardware compatibility lists, and the VERITAS customer email notification service. Use the Knowledge Base Search feature to access additional product information, including current and past releases of product documentation.
1 Technical Overview The VERITAS Storage Foundation Cluster File System (SFCFS) allows clustered servers to mount and use a file system simultaneously as if all applications using the file system were running on the same server. The VERITAS Volume Manager cluster functionality (CVM) makes logical volumes and raw device applications accessible throughout a cluster.
VERITAS Cluster File System Architecture
Master/Slave File System Design
The VERITAS Cluster File System uses a master/slave, or primary/secondary, architecture to manage file system metadata on shared disk storage. The first server to mount each cluster file system becomes its primary; all other nodes in the cluster become secondaries. Applications access the user data in files directly from the server on which they are running.
VxFS Functionality on Cluster File Systems CFS and the Group Lock Manager CFS uses the VERITAS Group Lock Manager (GLM) to reproduce UNIX single-host file system semantics in clusters. This is most important in write behavior. UNIX file systems make writes appear to be atomic.
Supported Features
Features and Commands Supported on CFS:
Quick I/O: The Quick I/O for Databases feature, using clusterized Oracle Disk Manager (ODM), is supported on CFS. Quick I/O is licensable only through VERITAS Database Editions products.
Storage Checkpoints: Storage Checkpoints are supported on cluster file systems, but are licensed only with other VERITAS products.
Snapshots: Snapshots are supported on cluster file systems.
Unsupported Features
Functionality described as not supported may not be expressly prevented from operating on cluster file systems, but the actual behavior is indeterminate. It is not advisable to use unsupported functionality on CFS, or to alternate mounting file systems with these options as local and cluster mounts.
Features and Commands Not Supported on CFS:
Swap files: Swap files are not supported on cluster mounted file systems.
Cluster File System Benefits and Applications
Advantages to Using CFS
CFS simplifies or eliminates system administration tasks that result from hardware limitations:
◆ The CFS single file system image administrative model simplifies administration by making all file system management operations, except resizing and reorganization (defragmentation), independent of the location from which they are invoked.
VxFS Functionality on Cluster File Systems When to Use CFS You should use CFS for any application that requires the sharing of files, such as for home directories and boot server files, Web pages, and for cluster-ready applications. CFS is also applicable when you want highly available standby data, in predominantly read-only environments where you just need to access data, or when you do not want to rely on NFS for file sharing. Almost all applications can benefit from CFS.
VxFS Functionality on Cluster File Systems Using CFS on File Servers Two or more servers connected in a cluster configuration (that is, connected to the same clients and the same storage) serve separate file systems. If one of the servers fails, the other recognizes the failure, recovers, assumes the primaryship, and begins responding to clients using the failed server’s IP addresses.
2 Installing and Configuring This chapter describes how to install the VERITAS Storage Foundation Cluster File System (SFCFS). SFCFS requires several VERITAS software packages to configure a cluster and to provide messaging services.
Hardware Overview Hardware Overview VxFS cluster functionality runs optimally on a Fibre Channel fabric. Fibre Channel technology provides the fastest, most reliable, and highest bandwidth connectivity currently available. By employing Fibre Channel technology, CFS can be used in conjunction with the latest VERITAS Storage Area Network (SAN) applications to provide a complete data storage and retrieval solution.
Software Components Shared Storage Shared storage can be one or more shared disks or a disk array connected either directly to the nodes of the cluster or through a Fibre Channel Switch. Nodes can also have non-shared or local devices on a local I/O channel. The root file system is on a local device. Fibre Channel Switch Each node in the cluster must have a Fibre Channel I/O channel to access shared storage devices. The primary component of the Fibre Channel fabric is the Fibre Channel switch.
Software Components
Required Patches
Required patches include the following:
PHCO_32385: Enables fscat(1M).
PHCO_32387: Enables getext(1M).
PHCO_32388: Enables setext(1M).
PHCO_32389: Enables vxdump(1M).
PHCO_32390: Enables vxrestore(1M).
PHCO_32391: Enables vxfsstat(1M).
PHCO_32392: Enables vxtunefs(1M).
PHCO_32393: Enables vxupgrade(1M).
PHCO_32488: Enables LIBC for VxFS 4.1 file system.
PHCO_32523: Enhancement to quota(1) for supporting large uids.
Software Components In addition to the above patches the EnableVXFS41 bundle needs to be installed before installing the SFCFS 4.1. This bundle is a HP bundle and contains enhancements to various commands to understand the new Version 6 layout. The EnableVXFS41 bundle contains the following patches: HP-UX Patch ID Description FSLibEnh Enhancement to LIBC libraries to understand VxFS disk layout Version 6. DiskQuota-Enh Enhancements to various quota related commands to support large uids.
Installing the Product To prevent conflicts with VxFS manual pages previously installed with JFS/OnLineJFS 3.5, the VxFS 4.1 manual pages are installed in the /opt/VRTS/vxfs4.1/man directory. The /opt/VRTS/vxfs4.1/man directory is automatically added to /etc/MANPATH when the VxFS 4.1 package is installed. Make sure that the /opt/VRTS/man directory or the /opt/VRTS/vxfs4.
Starting VEA
The VERITAS Enterprise Administrator (VEA) is the graphical administrative interface for configuring shared storage devices used for CFS. The VEA GUI server package, VRTSob, is installed by the installer script on all nodes. To use VEA, the client package, VRTSobgui, must also be installed. The VEA client can be a node in the cluster or a remote system. If installed on a remote system, VEA can be used to configure storage for multiple clusters.
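The commands below are a hedged sketch of starting the VEA service and client from the command line; the /opt/VRTSob/bin paths and binary names are assumptions and may differ on your installation:
# /opt/VRTSob/bin/vxsvc
# /opt/VRTSob/bin/vea &
The first command starts the VEA service on a cluster node; the second launches the graphical client, which can then connect to any node running the service.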
Creating a Dynamic (Shared) Disk Group 4. Enter a name for the dynamic disk group. 5. Check the Create Cluster Group checkbox. 6. Select Activation Mode and choose the mode from the displayed submenu. Click OK. 7. Select the shared disks you want to include in the group. Make sure the disks you want to include are in the right pane of the window, then click Next. 8. The next screen confirms the disks you have selected. Choose Yes to continue if the disk selection is correct.
Creating a Dynamic (Shared) Volume Creating a Dynamic (Shared) Volume ▼ To create a dynamic volume, on the CVM master node 1. Right-click a dynamic disk group in the tree view of the VEA right pane and select New Volume from the context menu. The Create Volume wizard appears. You can also select the command from the Actions menu or click the New Volume tool on the toolbar (the third tool from the left side of the toolbar). Note The Activation Mode must be set to SW (shared write). 2.
Creating a Dynamic (Shared) Volume 5. Select the Concatenated volume type. Note You can choose any type for shared volumes except RAID-5. 6. Select one or more shared disks in the Select disks to use for volume screen. The default setting is for Volume Manager to assign the disks for you. To manually select the disks, click the Manually select disks to create volume radio button. If you select disks manually, the disks that you select will be displayed in the right pane when you click Next.
Creating a Dynamic (Shared) Volume 7. You can create a file system at this time by clicking the Cluster Mount box under Mount File System Details. 8. Check your selections in the final screen and click Finish to create the volume. By clicking the Previous button, you can go back and make changes before you click Finish. Notes Deleting a volume using VEA does not remove the Serviceguard multi-node package for that volume.
Creating a Dynamic (Shared) Volume Concatenated A concatenated volume consists of one or more regions of the specified disks. You have the option of placing a file system on the new volume or mirroring the volume. You can create a regular concatenated volume or a concatenated pro volume. A concatenated pro volume is layered and mirrored. Layout: Choose Concatenated or Concatenated Pro for the volume layout. Options: - To mirror the volume, select Mirrored.
Creating a Dynamic (Shared) Volume Command Line Examples You can also create shared volumes, create shared disk groups, and mount cluster file systems from the command line or using a script as shown in the examples below. Creating a Shared Disk Group from the Command Line You can use the following script to create a new shared disk group, for example, “cfsdg,” and add the disks to it. Fill in the name of your shared disk group, shared_dg_name, and the list of devices and controllers, shared_device_list.
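The script itself is not reproduced in this extract. The following is a minimal sketch of the equivalent commands run on the CVM master, assuming the disk group name cfsdg and the hypothetical device names c4t0d0 and c4t1d0; substitute your own shared_dg_name and shared_device_list:
# vxdisksetup -i c4t0d0
# vxdisksetup -i c4t1d0
# vxdg -s init cfsdg cfsdg01=c4t0d0 cfsdg02=c4t1d0
The -s option to vxdg init creates the disk group as a shared (cluster-shareable) disk group.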
Creating a Dynamic (Shared) Volume Creating a Shared Volume from the Command Line Create a shared volume on the CVM master.
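The commands were not carried over into this extract; the following is a minimal sketch run on the CVM master, assuming the shared disk group cfsdg from the previous example and a hypothetical 1 GB volume named vol1:
# vxassist -g cfsdg make vol1 1g
# mkfs -F vxfs /dev/vx/rdsk/cfsdg/vol1
The vxassist command creates the volume in the shared disk group, and mkfs places a VxFS file system on it; the file system can then be cluster mounted with the cfsmntadm and cfsmount commands described in the CFS Administration chapter.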
3 Upgrading
This chapter has been deleted.
4 Adding and Removing a Node This chapter has been deleted.
5 Uninstalling This chapter has been deleted ▼ To uninstall SFCFS HA 1. Log in as root. 2. Stop VCS on all nodes. For example: # hastop -all 3. Run the uninstallsfcfs command to uninstall SFCFS. For example: # cd /opt/VRTS/install # ./uninstallsfcfs 4. Enter the system names on which to uninstall SFCFS. For example, Enter the system names separated by spaces on which to uninstall SFCFS: system01 system02 . . . Are you sure you want to uninstall SFCFS packages? [y,n,q] (y) 5. Enter y to uninstall the SFCFS packages. 6.
6 SFCFS Architecture The Role of Component Products SFCFS includes VERITAS Volume Manager (VxVM). In HP Serviceguard Storage Management Suite environments, Serviceguard provides the communication, configuration, and membership services required to create a cluster. Serviceguard is the first component installed and configured to set up a cluster file system. GAB/LLT GAB and LLT protocols are implemented directly on an Ethernet data link.
About CFS
Each component in SFCFS registers with a membership port. The port membership identifies nodes that have formed a cluster for the individual components. Port memberships include:
port a: heartbeat membership
port b: I/O fencing membership
port f: Cluster File System membership
port u: temporarily used by CVM
port v: Cluster Volume Manager membership
CVM
The VERITAS Volume Manager cluster functionality (CVM) makes logical volumes accessible throughout a cluster.
About CFS SFCFS and the Group Lock Manager SFCFS uses the VERITAS Group Lock Manager (GLM) to reproduce UNIX single-host file system semantics in clusters. UNIX file systems make writes appear atomic. This means when an application writes a stream of data to a file, a subsequent application reading from the same area of the file retrieves the new data, even if it has been cached by the file system and not yet written to disk. Applications cannot retrieve stale data or partial results from a previous write.
About CFS Mounting the primary with only the -o cluster,ro option prevents the secondaries from mounting in a different mode; that is, read/write. Note that rw implies read/write capability throughout the cluster. Parallel I/O Some distributed applications read and write to the same file concurrently from one or more nodes in the cluster; for example, any distributed application where one thread appends to a file and there are one or more threads reading from various regions in the file.
About CFS SFCFS Backup Strategies The same backup strategies used for standard VxFS can be used with SFCFS because the APIs and commands for accessing the namespace are the same. File System checkpoints provide an on-disk, point-in-time copy of the file system. Because performance characteristics of a checkpointed file system are better in certain I/O patterns, they are recommended over file system snapshots (described below) for obtaining a frozen image of the cluster file system.
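As an illustration, the following minimal sketch takes a Storage Checkpoint of a cluster file system mounted at /mnt1 on the hypothetical volume cfsdg/vol1 and mounts it for backup; check the fsckptadm(1M) and mount_vxfs(1M) manual pages for the options supported by your release:
# fsckptadm create thu_8pm /mnt1
# mkdir /mnt1_thu_8pm
# mount -F vxfs -o cluster,ckpt=thu_8pm /dev/vx/dsk/cfsdg/vol1:thu_8pm /mnt1_thu_8pm
A backup application can then read the frozen image under /mnt1_thu_8pm while the original file system remains in use.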
About CFS In addition to file-level frozen images, there are volume-level alternatives available for shared volumes using mirror split and rejoin. Features such as Fast Mirror Resync and Space Optimized snapshot are also available. See the VERITAS Volume Manager System Administrator’s Guide for details. Synchronizing Time on Cluster File Systems SFCFS requires that the system clocks on all nodes are synchronized using some external component such as the Network Time Protocol (NTP) daemon.
About CFS File System Tuneables Tuneable parameters are updated at the time of mount using the tunefstab file or the vxtunefs command. The file system tuneable parameters are propagated to each cluster node so that they are identical on all nodes. When the file system is mounted on a node, the tuneable parameters of the primary node are used. The tunefstab file on a node is used only if that node is the first to mount the file system. VERITAS recommends that this file be identical on each node.
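A minimal sketch of setting a tuneable, using a hypothetical volume and parameter value; the same /etc/vx/tunefstab entry should normally be present on every node, and the option usage should be verified against the vxtunefs(1M) manual page. An example tunefstab entry:
/dev/vx/dsk/cfsdg/vol1 read_pref_io=131072
To apply the values from the file to a mounted file system, and to display the current values:
# vxtunefs -s -f /etc/vx/tunefstab /mnt1
# vxtunefs /mnt1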
About CFS Recovering from Jeopardy The disabled file system can be restored by a force unmount and the resource can be brought online without rebooting, which also brings the shared disk group resource online. Note that if the jeopardy condition is not fixed, the nodes are susceptible to leaving the cluster again on subsequent node failure. For a detailed explanation of this topic, see the VERITAS Cluster Server User’s Guide.
About CFS LLT links can be added or removed while clients are connected. Shutting down GAB or the high-availability daemon, HAD, is not required.
▼ To add a link
# lltconfig -d device -t tag
▼ To remove a link
# lltconfig -u tag
Changes take effect immediately and are lost on the next reboot. For changes to span reboots you must also update /etc/llttab. Note LLT clients do not recognize the difference unless only one link is available and GAB declares jeopardy.
About CVM
CVM allows up to 4 nodes in a cluster to simultaneously access and manage a set of disks under VxVM control (VM disks). The same logical view of the disk configuration and any changes are available on each node. When the cluster functionality is enabled, all cluster nodes can share VxVM objects. Features provided by the base volume manager, such as mirroring, fast mirror resync, and dirty region logging, are also supported in the cluster environment.
About CVM
[Figure: Example of a Four-Node Cluster. A redundant private network connects Node 0 (master) and Nodes 1-3 (slaves); redundant Fibre Channel connectivity links the nodes to cluster-shareable disks organized into cluster-shareable disk groups.]
To the cluster monitor, all nodes are the same. VxVM objects configured within shared disk groups can potentially be accessed by all nodes that join the cluster.
About CVM Private and Shared Disk Groups There are two types of disk groups: ◆ Private, which belong to only one node. A private disk group is only imported by one system. Disks in a private disk group may be physically accessible from one or more systems, but import is restricted to one system only. The root disk group is always a private disk group. ◆ Shared, which is shared by all nodes. A shared (or cluster-shareable) disk group is imported by all cluster nodes.
About CVM Reconfiguring a shared disk group is performed with the co-operation of all nodes. Configuration changes to the disk group happen simultaneously on all nodes and the changes are identical. Such changes are atomic in nature, which means that they either occur simultaneously on all nodes or not at all.
About CVM The following table summarizes the allowed and conflicting activation modes for shared disk groups: Disk group activated in cluster as... Attempt to activate disk group on another node as...
About CVM To display the activation mode for a shared disk group, use the vxdg list disk_group command. You can also use the vxdg command to change the activation mode on a shared disk group. Connectivity Policy of Shared Disk Groups The nodes in a cluster must always agree on the status of a disk. In particular, if one node cannot write to a given disk, all nodes must stop accessing that disk before the results of the write operation are returned to the caller.
About CVM 70 Installation and Administration Guide
7 SFCFS Administration The VERITAS Cluster File System (CFS) is a shared file system that enables multiple hosts to mount and perform file operations concurrently on the same file. To operate in a cluster configuration, CFS requires the integrated set of VERITAS products included in the VERITAS Storage Foundation Cluster File System (SFCFS). To configure a cluster, CFS requires select HP Serviceguard Storage Management Suite bundles.
CVM Overview CVM Overview The cluster functionality (CVM) of the VERITAS Volume Manager allows multiple hosts to concurrently access and manage a given set of logical devices under VxVM control. A VxVM cluster is a set of hosts sharing a set of devices; each host is a node in the cluster. The nodes are connected across a network. If one node fails, other nodes can still access the devices. The VxVM cluster feature presents the same logical view of the device configurations, including changes, on all nodes.
CFS Overview CFS Primary and CFS Secondary The primary file system handles the metadata intent logging for the cluster file system. The first node to mount a cluster file system is called the primary node. Other nodes are called secondary nodes. If a primary node fails, an internal election process determines which of the secondaries becomes the primary.
CFS Administration
This section describes some of the major aspects of cluster file system administration and the ways in which it differs from single-host VxFS administration.
CFS Resource Management Commands
To make resources easier to manage, five CFS administrative commands were introduced in this release.
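As an illustration of how these commands are typically combined, the following minimal sketch registers a shared disk group, adds a cluster mount, and then mounts and unmounts it; the disk group cfsdg, volume vol1, mount point /mnt1, and the exact option syntax are assumptions to verify against the cfsdgadm(1M) and cfsmntadm(1M) manual pages:
# cfsdgadm add cfsdg all=sw
# cfsmntadm add cfsdg vol1 /mnt1 all=
# cfsmount /mnt1
# cfsumount /mnt1
The cfsdgadm command registers the shared disk group with the cluster configuration, cfsmntadm adds the mount point to the configuration, and cfsmount and cfsumount mount and unmount it on the cluster nodes.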
CFS Administration fsclustadm The fsclustadm command reports various attributes of a cluster file system. Using fsclustadm you can show and set the primary node in a cluster, translate node IDs to host names and vice versa, list all nodes that currently have a cluster mount of the specified file system mount point, and determine whether a mount is a local or cluster mount.
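For example, a minimal sketch of querying and moving the primaryship of a cluster file system mounted at the hypothetical mount point /mnt1; verify the option names against the fsclustadm(1M) manual page:
# fsclustadm -v showprimary /mnt1
# fsclustadm -v setprimary /mnt1
The first command reports which node is currently the CFS primary for /mnt1; the second makes the local node the primary.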
CFS Administration Growing a Cluster File System There is a master node for CVM as well as a primary for CFS. When growing a file system, you grow the volume from the CVM master, and then grow the file system from the CFS primary. The CVM master and the CFS primary can be two different nodes.
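A minimal sketch of the two-step grow, assuming the file system on the hypothetical volume cfsdg/vol1 is cluster mounted at /mnt1 and is being grown to a new size of 4194304 sectors. On the CVM master, grow the volume:
# vxassist -g cfsdg growto vol1 4194304
Then, on the CFS primary, grow the file system to match:
# fsadm -F vxfs -b 4194304 -r /dev/vx/rdsk/cfsdg/vol1 /mnt1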
Snapshots on CFS Using GUIs Use the VERITAS Enterprise Administrator (VEA) for various VxFS functions such as making and mounting file systems, on both local and cluster file systems. Snapshots on CFS A snapshot provides a consistent point-in-time image of a VxFS file system. A snapshot can be accessed as a read-only mounted file system to perform efficient online backups of the file system.
Snapshots on CFS Performance Considerations Mounting a snapshot file system for backup increases the load on the system because of the resources used to perform copy-on-writes and to read data blocks from the snapshot. In this situation, cluster snapshots can be used to do off-host backups. Off-host backups reduce the load of a backup application from the primary server. Overhead from remote snapshots is small when compared to overall snapshot overhead.
Snapshots on CFS 4. Mount the snapshot: # cfsmount /mnt1snap 5. A snapped file system cannot be unmounted until all of its snapshots are unmounted.
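For example, assuming the snapped file system from the preceding steps is mounted at /mnt1 (an assumption, since the earlier steps are not reproduced here), unmount the snapshot before the snapped file system:
# cfsumount /mnt1snap
# cfsumount /mnt1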
Snapshots on CFS 82 Installation and Administration Guide
8 Fencing Administration
This chapter has been deleted.
Because coordinator disks do not store data, cluster nodes need only register with them, not reserve them.
9 CVM Administration Introduction Note See the VERITAS Volume Manager Administrator’s Guide for complete information on VxVM and CVM. Online versions of the VxVM documentation set are installed under the /opt/VRTSvmdoc directory. A cluster consists of a number of hosts or nodes that share a set of disks. The main benefits of cluster configurations are: ◆ Availability—If one node fails, the other nodes can still access the shared disks.
Overview of Cluster Volume Management Overview of Cluster Volume Management Tightly coupled cluster systems have become increasingly popular in enterprise-scale mission-critical data processing. The primary advantage of clusters is protection against hardware failure. If the primary node fails or otherwise becomes unavailable, applications can continue to run by transferring their execution to standby nodes in the cluster.
Overview of Cluster Volume Management
[Figure: Example of a 4-Node Cluster. A redundant private network connects Node 0 (master) and Nodes 1-3 (slaves); redundant Fibre Channel connectivity links the nodes to cluster-shareable disks organized into cluster-shareable disk groups.]
To the cluster monitor, all nodes are the same. VxVM objects configured within shared disk groups can potentially be accessed by all nodes that join the cluster.
Overview of Cluster Volume Management Private and Shared Disk Groups Two types of disk groups are defined: ◆ Private disk groups—belong to only one node. A private disk group is only imported by one system. Disks in a private disk group may be physically accessible from one or more systems, but access is restricted to one system only. The root disk group (rootdg) is always a private disk group. ◆ Shared disk groups—shared by all nodes.
Overview of Cluster Volume Management Whether all members of the cluster have simultaneous read and write access to a cluster-shareable disk group depends on its activation mode setting as discussed in “Activation Modes of Shared Disk Groups.” The data contained in a cluster-shareable disk group is available as long as at least one node is active in the cluster. The failure of a cluster node does not affect access by the remaining active nodes.
Overview of Cluster Volume Management The table “Allowed and Conflicting Activation Modes” summarizes the allowed and conflicting activation modes for shared disk groups: Allowed and Conflicting Activation Modes Disk group activated in cluster as... Attempt to activate disk group on another node as...
Overview of Cluster Volume Management Connectivity Policy of Shared Disk Groups The nodes in a cluster must always agree on the status of a disk. In particular, if one node cannot write to a given disk, all nodes must stop accessing that disk before the results of the write operation are returned to the caller. Therefore, if a node cannot contact a disk, it should contact another node to check on the disk’s status. If the disk fails, no node can access it and the nodes can agree to detach the disk.
10 Agents for SFCFS/SFCFS HA This chapter has been deleted.
A Troubleshooting and Recovery Installation Issues If you encounter any issues installing SFCFS/SFCFS HA, refer to the following paragraphs for typical problems and their solutions. Incorrect Permissions for Root on Remote System The permissions are inappropriate. Make sure you have remote root access permission on each system to which you are installing. Checking communication with system01 .................
Installation Issues Resource Temporarily Unavailable If the installation fails with the following error message on the console: fork() failed: Resource temporarily unavailable The value of the HP-UX nkthread tunable parameter may not be large enough. The nkthread tunable requires a minimum value of 600 on all systems in the cluster.
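On HP-UX 11i v2 the tunable can be checked and raised with kctune, as in the hedged sketch below; on older releases the kmtune or SAM interfaces may apply instead, and 600 is simply the documented minimum:
# kctune nkthread
# kctune nkthread=600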
Installation Issues CFS Problems If there is a device failure, or a failure of the controller for a device, the file system may become disabled cluster-wide. To address the problem, unmount all secondary mounts, unmount the primary, and then run a full fsck; a command sketch appears after the list below. When the file system check completes, mount the file system on all nodes again. Unmount Failures The umount command can fail for the following reasons: ◆ When unmounting shared file systems, you must unmount the secondaries before unmounting the primary.
Installation Issues ◆ If mount fails with an error message: vxfs mount: cannot open mnttab /etc/mnttab is missing or you do not have root privileges. ◆ If mount fails with an error message: vxfs mount: device already mounted, ... The device is in use by mount, mkfs or fsck on the same node. This error cannot be generated from another node in the cluster. ◆ If this error message displays: mount: slow The node may be in the process of joining the cluster.
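The recovery sequence mentioned under CFS Problems above might look like the following minimal sketch, assuming the file system on the hypothetical volume cfsdg/vol1 is cluster mounted at /mnt1. Unmount the secondaries and then the primary, run a full file system check, and remount on all nodes:
# cfsumount /mnt1
# fsck -F vxfs -o full -y /dev/vx/rdsk/cfsdg/vol1
# cfsmount /mnt1
The -o full option forces a full structural check rather than an intent log replay.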
Installation Issues Command Failures ◆ Manual pages not accessible with the man command. Set the MANPATH environment variable as listed under “Setting PATH and MANPATH Environment Variables” on page 15. ◆ The mount, fsck, and mkfs utilities reserve a shared volume. They fail on volumes that are in use. Be careful when accessing shared volumes with other utilities such as dd; it is possible for these commands to destroy data on the disk.
Installation Issues Jeopardy is a condition where a node in the cluster has a problem connecting to other nodes. In this situation, the link or disk heartbeat may be down, so a jeopardy warning may be displayed. Specifically, this message appears when a node has only one remaining link to the cluster and that link is a network link. This is considered a critical event because the node may lose its only remaining connection to the network.