HP StorageWorks Clustered File System 3.2.
Legal and notice information © Copyright 1999–2006 PolyServe, Inc. © Portions copyright 2006 Hewlett-Packard Development Company, L.P. Neither PolyServe, Inc. nor Hewlett-Packard Company makes any warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose.
Contents

1 HP Technical Support
    HP Storage Web Site
    HP NAS Services Web Site
2 Introduction
    Product Features
    Overview
    The Structure of a Cluster
3 Cluster Administration
4 Configure Servers
5 Configure Network Interfaces
6 Configure the SAN
7 Configure Dynamic Volumes
8 Configure PSFS Filesystems
…
12 Configure Device Monitors
…
14 Test Your Configuration
1 HP Technical Support

Telephone numbers for worldwide technical support are listed on the following HP web site: http://www.hp.com/support. From this web site, select the country of origin. For example, the North American technical support number is 800-633-3600.

NOTE: For continuous quality improvement, calls may be recorded or monitored.
HP NAS Services Web Site

The HP NAS Services site allows you to choose from convenient HP Care Pack Services packages or implement a custom support solution delivered by HP ProLiant Storage Server specialists and/or our certified service partners. For more information, see us at http://www.hp.com/hps/storage/ns_nas.html.
2 Introduction

HP StorageWorks Clustered File System provides a cluster structure for managing a group of network servers and a Storage Area Network (SAN) as a single entity.

Product Features

HP Clustered File System includes the following features:

• Fully distributed data-sharing environment. The PSFS filesystem enables all servers in the cluster to directly access shared data stored on a SAN.
• Cluster-wide administration. The HP CFS Management Console (a Java-based graphical user interface) and the corresponding command-line interface enable you to configure and manage the entire cluster either remotely or from any server in the cluster.

• Failover support for network applications. HP Clustered File System uses virtual hosts to provide highly available client access to mission-critical data for Web, e-mail, file transfer, and other TCP/IP-based applications.
Overview

The Structure of a Cluster

A cluster includes the following physical components.

[Figure: cluster topology — Internet, public LANs, administrative network (LAN), servers, FC switch, and RAID subsystems]

Servers. Each server must be running HP Clustered File System.

Public LANs. A cluster can include up to four network interfaces per server.
For performance reasons, we recommend that these networks be isolated from the networks used by external clients to access the cluster.

Storage Area Network (SAN). The SAN includes FibreChannel switches and RAID subsystems. Disks in a RAID subsystem are imported into the cluster and managed from there. After a disk is imported, you can create PSFS filesystems on it.
HP CFS Management Console. Provides a graphical interface for configuring an HP Clustered File System cluster and monitoring its operation. The console can be run either remotely or from any server in the cluster.

ClusterPulse process. Monitors the cluster, controls failover of virtual hosts and devices, handles communications with the HP CFS Management Console, and manages device monitors, service monitors, and event notification.

Distributed Lock Manager (DLM) process.
mxlog module. Allows HP Clustered File System kernel modules to send messages to the mxlogd process.

grpcommd process. Manages HP Clustered File System group communications across the cluster.

Administrative Network. Handles HP Clustered File System administrative traffic. Most HP Clustered File System processes communicate with each other over the administrative network.

Shared SAN Devices

Before a SAN disk can be used, it must be imported into the cluster.
• Support for standard filesystem operations such as assigning or deassigning drive letters. These operations can be performed with the HP CFS Management Console or from the command line.

• Support for existing applications. The PSFS filesystem uses standard read/write semantics and does not require changes to applications.

• Journaling and live crash recovery. Filesystem metadata operations are written to a journal before they are performed.
A virtual host is a hostname/IP address configured on one or more servers. The network interfaces selected on those servers to participate in the virtual host must be on the same subnet. One server is selected as the primary server for the virtual host. The remaining servers are backups. The primary and backup servers do not need to be dedicated to these activities; all servers can support other independent functions.
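The primary/backup arrangement can be sketched as a small selection function. This is only an illustration of the policy described above — the function and server names are invented, and the real ClusterPulse failover logic is internal to the product:

```python
def active_server(primary, backups, healthy):
    """Return the server that should host a virtual host: the
    primary if it is healthy, otherwise the first healthy backup.
    Returns None if no eligible server remains (host goes inactive)."""
    for server in [primary] + list(backups):
        if server in healthy:
            return server
    return None
```

For example, with primary "s1", backups ["s2", "s3"], and only "s2" and "s3" healthy, the virtual host would run on "s2".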
The DISK device monitor can be used to watch local disk drives or to check access to a partition on a SAN disk. The GATEWAY device monitor watches gateway devices. You can also define your own custom device monitors.

By default, when a device monitor is assigned to a server, all virtual hosts on that server are dependent on the device monitor. However, you can select the virtual hosts that will be dependent on the device monitor.
Supported Configurations

HP Clustered File System supports multiple FibreChannel switches configured as a single fabric and multiported SAN disks. The following diagrams show some sample cluster configurations using these components.

Single FC Port, Single FC Switch, Single Fabric

This is the simplest configuration. Each server has a single FibreChannel port connected to the FibreChannel switch. The SAN includes two RAID arrays.
Single FC Port, Dual FC Switch, Single Fabric

In this example, the fabric includes two FibreChannel switches. Servers 1–3 are connected to the first FC switch; servers 4–6 are connected to the second switch. The FC switches are connected to two RAID arrays, which contain multiported disks. If a switch fails, the servers connected to the other switch will survive and access to storage will be maintained.
3 Cluster Administration

HP StorageWorks Clustered File System can be administered either with the HP CFS Management Console or from the command line.

Administrative Considerations

You should be aware of the following when managing HP Clustered File System:

Normal operation of the cluster depends on a reliable network hostname resolution service. If the hostname lookup facility becomes unreliable, this can cause reliability problems for the running cluster.
– If one of these hostnames has already been referenced unsuccessfully, the DNS resolver cache may need to be flushed with “ipconfig /flushdns” (see Microsoft Knowledge Base article 320845).

– Certain Microsoft Knowledge Base articles caution that in the case of Exchange SMTP, and possibly other applications, the use of the hosts file can interfere with mail flow (see Microsoft Knowledge Base article 296215).
– Disable or re-enable a network interface via the Device Manager.
– Update network drivers.
– Hot-swap PCI cards.

• If servers from multiple clusters can access the SAN via a shared FC fabric, avoid importing the same disk into more than one cluster. Filesystem corruption can occur when different clusters attempt to share the same filesystem.
Tested Configuration Limits

HP has tested HP Clustered File System configurations up to the following limits:

• 16 servers per cluster
• 256 imported LUNs per cluster
• 128 filesystems per cluster
• 2048 filesystem mounts per cluster
• 64 virtual hosts per cluster
• 64 service and/or device monitors per cluster (the total number of service and device monitors cannot exceed 64)
• 10 event notifiers per cluster
• 4 network interface cards per server
• 1 FibreChannel port
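The tested limits above can be captured in a simple pre-deployment check. This is a hypothetical helper, not part of the product — the dictionary keys are invented names for the limits listed above:

```python
# Tested configuration limits, per the list above
LIMITS = {
    "servers": 16,
    "imported_luns": 256,
    "filesystems": 128,
    "filesystem_mounts": 2048,
    "virtual_hosts": 64,
    "monitors": 64,          # service + device monitors combined
    "event_notifiers": 10,
    "nics_per_server": 4,
}

def over_limits(config):
    """Return the names of any settings in a planned configuration
    that exceed the tested limits; unknown keys are ignored."""
    return [key for key, value in config.items()
            if value > LIMITS.get(key, float("inf"))]
```

For example, a plan with 20 servers would be flagged, while 64 virtual hosts is still within the tested range.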
When you invoke mxconsole or mx from a local machine, by default the application checks the current software version on the server to which it is being connected and then downloads the software only if that version is not already in the local cache.

Manage a Cluster with the Management Console

The HP CFS Management Console can manipulate all entities in the cluster, including entities that are currently down.
Disconnect from a Cluster

To close the HP CFS Management Console window for the current server, either select File > Disconnect or click the Disconnect icon on the toolbar. You can then use either File > Connect or the Connect icon on the toolbar to connect to a cluster from another server.

Exit a Management Console Session

To end an HP Clustered File System console session, select File > Exit. The cluster will continue to operate after you disconnect from it.
The toolbar at the top of the window can be used to connect or disconnect from a cluster, to add new cluster entities (servers, virtual hosts, notifiers, device monitors, service monitors, and filesystems), to import or deport disks, to collapse or expand the entity lists, and to display the online help. “Management Console Icons” on page 185 describes the icons used to represent cluster entries and their status.
Virtual Hosts Tab

The Virtual Hosts tab shows all virtual hosts in the cluster. For each virtual host, the window lists the network interfaces on which the virtual host is configured, any service monitors configured on that virtual host, and any device monitors associated with that virtual host.
Notifiers Tab

The Notifiers tab shows all notifiers configured in the cluster.
Filesystems Tab

The Filesystems tab shows all PSFS filesystems in the cluster.
Applications Tab

This view shows the application monitors configured in the cluster and provides the ability to manage and monitor them from a single screen. The tab uses a table format, with a column for each server in the cluster. The application monitors appear in the rows of the table. You can reorder the information on this tab or limit the information that is displayed.
Cluster Alerts

The Alerts section at the bottom of the HP CFS Management Console window lists errors that have occurred in cluster operations. Double-click an alert to view the error in the cluster tree structure.

If you receive an alert telling you to reboot a server, the message will remain in the Alerts section until either HP Clustered File System is restarted on the rebooted server or the server is removed from the cluster.
Assign or Change Passwords

You must be user admin to make changes to the configuration of the cluster. Other users can view the cluster configuration but cannot make any changes to it. You can use the Cluster Configuration window to change the password for admin. The HP Clustered File System UserManager or the mxpasswd command can be used to assign or change passwords for admin and other users.
Start HP StorageWorks Clustered File System

By default, HP Clustered File System starts automatically when the system is booted. This feature is controlled by the Startup dialog. If you do not want HP Clustered File System to start when the system is booted, use the Startup dialog to change the service from Automatic to Manual.
4 Configure Servers

Before adding a server to a cluster, verify the following:

• The server is connected to the SAN if it will be accessing PSFS filesystems.

• The server is configured as a fully networked host supporting the services to be monitored. For example, if you want HP Clustered File System to provide failover protection for your Web service, the appropriate Web server software must be installed and configured on the servers.
Server: Enter the name or IP address of the server.

Server Severity: When a server fails completely because of a power failure or other serious event, HP Clustered File System attempts to move any virtual hosts from the network interfaces on the failed server to backup network interfaces on healthy servers in the cluster.
The NOAUTORECOVER setting can be useful when integrating HP Clustered File System with custom applications, where additional actions may be necessary after server recovery and before the server is made available to host services provided by virtual hosts.

After adding a new server, it appears on the Servers window. In the following example, two servers have been added to a cluster.

NOTE: For improved performance, the HP CFS Management Console caches hostname lookups.
When you disable a server, any active virtual hosts and device monitors on the server will become inactive. Those virtual hosts and device monitors will then fail over to a backup server, if one is available. Solution Pack virtual hosts and device monitors on the disabled server are affected in the same manner as standard virtual hosts and device monitors.

To disable servers from the command line, use this command:

mx server disable ...
HP Clustered File System License File

To operate properly, HP Clustered File System requires that a license file be installed on each server in the cluster.

Upgrade the License File

If you receive a new license file from HP, you will need to install it on the cluster servers. You can either copy the file manually to each server, or you can install it from the HP CFS Management Console.
Migrate Existing Servers to HP Clustered File System

In HP Clustered File System, the names of your servers should be different from the names of the virtual hosts they support. A virtual host can then respond regardless of the state of any one of the servers. In some cases, the name of an existing server may have been published as a network host before HP Clustered File System was configured.
• Keep the existing name on the server. If you do not rename the server, clients will need to use the new virtual host name to benefit from failover protection. Clients can still access the server by its name, but those requests are not protected by HP Clustered File System. If the server fails, requests to the server’s hostname fail, whereas requests to the new virtual hostname are automatically redirected by HP Clustered File System to a backup server.
[Figure: two servers, acmd1 and acmd2 — acmd1 is the primary for virtual_acmd1 and the backup for virtual_acmd2; acmd2 is the primary for virtual_acmd2 and the backup for virtual_acmd1; virtual host traffic flows to both]

The addresses on the name server are virtual_acmd1 and virtual_acmd2. Two virtual hosts have also been created with those names. The first virtual host uses acmd1 as the primary server and acmd2 as the backup. The second virtual host uses acmd2 as the primary and acmd1 as the backup.
IP address: The IP addresses for the virtual hosts that you will use for each server in the cluster. These are the IP addresses that the DNS will use to send alternate requests.

With this setup, the domain name server sends messages in a round-robin fashion to the two virtual hosts indicated by the IP addresses, causing them to share the request load.
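The round-robin behavior described above can be sketched as follows. This is a simplified simulation of what a round-robin DNS does, not product code; the IP addresses are placeholders for the two virtual host addresses:

```python
from itertools import cycle

def round_robin(addresses):
    """Hand out addresses in rotation, as a round-robin DNS would
    return the virtual host IPs to successive client lookups."""
    return cycle(addresses)

# Two virtual hosts sharing the request load
rotation = round_robin(["10.0.0.11", "10.0.0.12"])
first_four = [next(rotation) for _ in range(4)]
# Successive clients alternate between the two virtual hosts
```

Each virtual host then fails over independently, so losing one server removes neither address from service.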
5 Configure Network Interfaces

When you add a server to the cluster, HP Clustered File System determines whether each network interface on that server meets the following conditions:

• The network interface is up and running.
• The network interface is multicast-capable.
• 802.3x Ethernet flow control is not used.
• Each network interface card (NIC) is on a separate network.

Network interfaces meeting these conditions are automatically configured into the cluster.
By default, HP Clustered File System administrative traffic is allowed on all network interfaces. However, when you configure the cluster, you can specify the networks that you prefer to use for the administrative traffic. For performance reasons, we recommend that these networks be isolated from the networks used by external clients to access the cluster.
The Servers window on the HP CFS Management Console shows the network interfaces for each server as defined in this file. (Because there can be stale information in the configuration file, the Servers window may not match your current network configuration exactly.) Each network interface is labeled “Hosting Enabled” or “Hosting Disabled,” which indicates whether it can be used for virtual hosts.
The process looks for another network in this order:

• Networks that allow administrative traffic.
• Networks that discourage administrative traffic.

If HP Clustered File System must use a network that was configured to discourage administrative traffic, it will fail over to a network that allows the traffic as soon as that network becomes available to all servers in the cluster.
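The preference order above can be expressed as a small selection function. This is an illustration only — the function name and the "allow"/"discourage" labels are invented stand-ins for the product's per-network settings:

```python
def pick_admin_network(networks):
    """Choose the network for administrative traffic: prefer any
    network marked 'allow', fall back to one marked 'discourage'.
    `networks` maps network name -> policy, in availability order."""
    for preferred_policy in ("allow", "discourage"):
        for name, policy in networks.items():
            if policy == preferred_policy:
                return name
    return None  # no network available
```

A network marked "discourage" is used only while no "allow" network is available to all servers.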
• To modify an existing network interface, select that interface, right-click, and select Properties. The network interface must be down; you cannot modify an “up” network interface.

Server: The name or IP address of the server that will include the new network interface.

IP: Type the IP address for the network interface.

Net Mask: Type the net mask for the network interface.
If you need to remove a network interface from an online server, first physically remove the corresponding cable from the server. PanPulse will then report that the network interface is down and you can perform the delete operation. The mx command to remove a network interface is as follows:

mx netif delete

Allow or Discourage Administrative Traffic

By default, all network interfaces allow administrative traffic.
6 Configure the SAN

SAN configuration includes the following:

• Import SAN disks into the cluster.
• Deport SAN disks from the cluster.
• Display information about SAN disks.

Overview

SAN Configuration Requirements

Be sure that your SAN configuration meets the requirements specified in the HP StorageWorks Clustered File System Setup Guide.

Storage Control Layer Module

The Storage Control Layer (SCL) module manages shared SAN devices.
As part of managing shared SAN devices, the SCL also gives each disk a globally unique device identifier that all servers in the cluster use to access the device. Although the identifiers (such as psd2 or psd2p6) appear on certain HP CFS Management Console windows, they are generally only needed for internal use by HP Clustered File System.
If you want to extend a PSFS filesystem and the underlying partition, you can use the Extend option provided on the HP CFS Management Console. To make other changes to the partition table after a disk has been imported, you will need to deport the disk, make the changes with the Windows Disk Management utility, and then import the disk again.
To import a disk from the command line, use the following command:

mx disk import ...

To determine the uuid for a disk, run the following command, which prints the uuid, the size, and a vendor string for each unimported SAN disk.

mx disk status

You can also use the Disk Info window to import a disk.

Deport SAN Disks

Deporting a disk removes it from cluster control. You cannot deport a disk that contains a membership partition.
To deport a disk from the command line, use the following command:

mx disk deport ...

To determine the uuid for the disk, use the mx disk status --imported command. You can also use the Disk Info window to deport a disk.

Local Disk Information

The Disk Info window displays disk information from the viewpoint of the local server.
When you select a disk, the window displays information about the partitions on the disk. The window also lists any mount paths for PSFS filesystems. To import or deport a disk, select that disk and then click Import or Deport as appropriate.

Display Disk Information with sandiskinfo

The sandiskinfo command can display information for both imported and unimported SAN disks. Under normal operations, the sandiskinfo output should be the same on all servers in the cluster.
sandiskinfo
Disk: \\.\Global\psd2            SAN info: fcswitch5:7
Uid: 20:00:00:04:cf:13:38:3a::0
Vendor: SEAGATE                  Capacity: 34733M

The command syntax is as follows:

sandiskinfo [-i|-u|-v] [-al]

The default is -i, which produces the output shown earlier for imported disks. The -u option produces the same output for unimported disks. The -a option also lists the partitions on each disk. When combined with -u, it displays partition information for unimported disks.
partition 04: size 9421M  type
partition 05: size 16M    type
partition 06: size 9421M  type (PSFS Filesystem)
partition 07: size 1028M  type
partition 08: size 1028M  type (unknown)

Disk: \\.\Global\psd2            SAN info: fcswitch5:7
Uid: 20:00:00:04:cf:13:38:3a::0
Vendor: SEAGATE                  Capacity: 34733M
Local Device Paths: \\.
7 Configure Dynamic Volumes

HP Clustered File System includes a CFS Volume Manager that you can use to create, extend, recreate, or destroy dynamic volumes, if you have purchased the separate license. Dynamic volumes allow large filesystems to span multiple disks, LUNs, or storage arrays.

Overview

Basic and Dynamic Volumes

Volumes are used to store PSFS filesystems. There are two types of volumes: dynamic and basic. Dynamic volumes are created by the CFS Volume Manager.
Types of Dynamic Volumes

HP Clustered File System supports two types of dynamic volumes: striped and concatenated. The volume type determines how data is written to the volume.

• Striping. When a dynamic volume is created with striping enabled, a specific amount of data (called the stripe size) is written to each subdevice in turn. For example, a dynamic volume could include three subdevices and a stripe size of 64 KB.
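The round-robin layout described above can be sketched as a byte-offset calculation. This is a conceptual illustration of striping in general, not the CFS Volume Manager's actual on-disk format:

```python
STRIPE_SIZE = 64 * 1024  # 64 KB, as in the example above

def subdevice_for_offset(offset, num_subdevices, stripe_size=STRIPE_SIZE):
    """Return the 0-based subdevice index holding the byte at `offset`:
    each full stripe advances to the next subdevice, wrapping around."""
    return (offset // stripe_size) % num_subdevices
```

With three subdevices, bytes 0–64 KB land on subdevice 0, the next 64 KB on subdevice 1, the next on subdevice 2, and the pattern then repeats.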
• Striped dynamic volumes can be extended up to 16 times; however, the total number of subdevices cannot exceed 128.

Guidelines for Creating Dynamic Volumes

When creating striped dynamic volumes, follow these guidelines:

• The subdevices used for a striped dynamic volume should be the same size. The CFS Volume Manager uses the same amount of space on each subdevice in the stripeset.
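The same-size guideline follows from how striped capacity works: because the same amount of space is used on every subdevice, the smallest subdevice sets the limit. A hedged sketch of that arithmetic (illustrative only, not a product formula):

```python
def striped_capacity(subdevice_sizes):
    """Usable capacity of a stripeset: the stripe uses an equal
    amount of space on each subdevice, so capacity is the size of
    the smallest subdevice times the number of subdevices."""
    if not subdevice_sizes:
        return 0
    return min(subdevice_sizes) * len(subdevice_sizes)
```

For example, striping across subdevices of 100 GB, 100 GB, and 80 GB yields 240 GB usable, wasting 20 GB on each larger subdevice.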
Filesystem: If you want HP Clustered File System to create a filesystem that will be placed on the dynamic volume, enter a label to identify the filesystem. If you do not want a filesystem to be created, remove the checkmark from “Create filesystem after volume creation.” If you are creating a filesystem, you can also select the options to apply to the filesystem.
Available Subdevices: The display includes all imported subdevices that are not currently in use by another volume and that do not have a filesystem in place. The subdevices that you select will be used in the order in which they appear on the list. Use the arrow keys to reorder the appropriate subdevices, one at a time, and then highlight those subdevices.
The following command lists the available subdevices:

mx dynvolume showcreateopt

Dynamic Volume Properties

To see the configuration for a dynamic volume, select Storage > Dynamic Volume > Volume Properties and then choose the volume that you want to view. If a filesystem is associated with the volume, the Volume Properties window shows information for both the dynamic volume and the filesystem.
• Optimal. The volume has only one stripeset that includes all subdevices. Each subdevice is written to in turn.

• Suboptimal. The volume has been extended and includes more than one stripeset. The subdevices in the first stripeset will be completely filled before writes to the next stripeset begin. To change the Stripe State to optimal, you will need to recreate the dynamic volume.
Extend a Dynamic Volume

The Extend Volume option allows you to add subdevices to an existing dynamic volume. When you extend the volume on which a filesystem is mounted, you can optionally increase the size of the filesystem to fill the size of the volume.

NOTE: The subdevices used for a striped dynamic volume are called a stripeset. When a striped dynamic volume is extended, the new subdevices form another stripeset.
Dynamic Volume Properties: The current properties of this dynamic volume.

Filesystem Properties: The properties for the filesystem located on this dynamic volume.

Available Subdevices: Select the additional subdevices to be added to the dynamic volume. Use the arrow keys to reorder those subdevices if necessary.

Extend Filesystem: To increase the size of the filesystem to match the size of the extended volume, click this checkbox.
To extend a dynamic volume from the command line, use the following command:

mx dynvolume extend

Destroy a Dynamic Volume

When a dynamic volume is destroyed, the filesystem on that volume, and any persistent mounts for the filesystem, are also destroyed. Before destroying a dynamic volume, be sure that the filesystem is no longer needed or has been copied or backed up to another location.
Recreate a Dynamic Volume

Occasionally you may want to recreate a dynamic volume. For example, you might want to implement striping on a concatenated volume or, if a striped dynamic volume has been extended, you might want to recreate the volume to place all of the subdevices in the same stripeset. When a dynamic volume is recreated, the CFS Volume Manager first destroys the volume and then creates it again using the subdevices and options that you select.
You can change or reorder the subdevices used for the volume and enable striping if desired. To recreate a volume from the command line, you will first need to use the dynvolume destroy command and then run the dynvolume create command.
Convert a Basic Volume to a Dynamic Volume

If you have PSFS filesystems that were created directly on an imported disk partition or LUN (a basic volume), you can convert the basic volume to a dynamic volume. The new dynamic volume will contain only the original subdevice; you can use the Extend Volume option to add other subdevices to the dynamic volume.

NOTE: The new dynamic volume is unstriped. It is not possible to add striping to a converted dynamic volume.
To convert a basic volume to a dynamic volume from the command line, use the following command:

mx dynvolume convert
8 Configure PSFS Filesystems

HP StorageWorks Clustered File System provides the PSFS filesystem. This direct-access shared filesystem enables multiple servers to concurrently read and write data stored on shared SAN storage devices. A journaling filesystem, PSFS provides live crash recovery.
The PSFS filesystem does not migrate processes from one server to another. If you want processes to be spread across servers, you will need to take the appropriate actions.

Journaling Filesystem

When you initiate certain filesystem operations such as creating, opening, or moving a file or modifying its size, the filesystem writes the metadata, or structural information, for that event to a transaction journal. The filesystem then performs the operation.
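The journal-then-apply sequence can be sketched in miniature. This is a conceptual model of write-ahead metadata journaling in general — the function names and the dictionary-based "metadata" are invented for illustration, not PSFS internals:

```python
def journaled_set_size(journal, metadata, name, size):
    """Write-ahead journaling in miniature: record the metadata
    change in the journal first, then apply it, then retire it."""
    entry = {"file": name, "size": size}
    journal.append(entry)    # 1. write intent to the journal
    metadata[name] = size    # 2. perform the operation
    journal.remove(entry)    # 3. transaction complete

def replay(journal, metadata):
    """After a crash, complete any operation that was journaled
    but may not have reached the on-disk metadata."""
    while journal:
        entry = journal.pop(0)
        metadata[entry["file"]] = entry["size"]
```

Because the intent is durable before the operation runs, a surviving server can replay the journal and bring the metadata to a consistent state without scanning the whole filesystem.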
Filesystem Management and Integrity

HP Clustered File System uses the SANPulse process to manage PSFS filesystems. SANPulse performs the following tasks.

• Coordinates filesystem mounts, unmounts, and crash recovery operations.

• Checks for cluster partitioning, which can occur when cluster network communications are lost but the affected servers can still access the SAN.
Crash Recovery

When a server using a PSFS filesystem either crashes or stops communicating with the cluster, another server in the cluster will replay the filesystem journal to complete any transactions that were in progress at the time of the crash. Users on the remaining servers will notice a slight delay while the journal is replayed. Typically the recovery procedure takes only a few seconds.
Chapter 8: Configure PSFS Filesystems 69 Create a Filesystem from the Management Console To create a filesystem, select Storage > Filesystem > Add Filesystem on the HP CFS Management Console, or click the Filesystem icon on the toolbar. Label: Type a label that identifies the filesystem. Available Volumes: This part of the window lists the basic or dynamic volumes that are currently unused. Select one of these volumes for the filesystem.
Chapter 8: Configure PSFS Filesystems 70 NOTE: The Create a Filesystem window identifies volumes by their HP Clustered File System names such as psd1p2. To match these names to their local Windows names, open the Disk Info window (select the server on the Servers tab, right-click, and then select Get Local Disk Info). Click the Options button to see the Filesystem Options window, which lists the available options for the filesystem. In this release, the only option is to specify the block size.
Chapter 8: Configure PSFS Filesystems 71 Create a Filesystem from the Command Line To create a filesystem, use one of the following HP Clustered File System commands. The mx Command Use this syntax: mx fs create [--label <label>] ...
Chapter 8: Configure PSFS Filesystems 72 The -o option has the following parameters: • blocksize=# Specify the block size (either 4096 or 8192) for the filesystem. • disable-fzbm Create the filesystem without Full Zone Bit Maps (FZBMs). The FZBM on-disk filesystem format reduces the amount of data that the filesystem needs to read when allocating a block. It is particularly useful for speeding up allocation times on large, relatively full filesystems.
Chapter 8: Configure PSFS Filesystems 73 Assign Drive Letter: HP Clustered File System queries the servers in the cluster to determine the drive letters that are currently unused on all of the servers. You can assign any of these drive letters to the filesystem. NOTE: If, on a cluster server, the drive letter you selected is assigned for another purpose before the Assign Drive Letter operation is complete, the affected server will not be able to access the filesystem via the drive letter.
Chapter 8: Configure PSFS Filesystems 74 You can also view this information from the command line. To see the assignments for a filesystem, use this command: mx fs queryassignments To see the assignments on specific servers, use this command: mx fs getdriveletter --server <server> Remove Drive Letter or Path Assignments If you no longer want to associate a filesystem with a particular drive letter or mount path, you can remove the assignment.
Chapter 8: Configure PSFS Filesystems 75 After the drive letter or path has been unassigned, you will not be able to access the filesystem until you assign a new drive letter or path to it. To remove assignments from the command line, use this command: mx fs unassign Filesystem Mounts In the Windows operating system, a filesystem is automatically mounted on a server the first time the server attempts to access it.
Chapter 8: Configure PSFS Filesystems 76 Extend a Mounted Filesystem If the Volume allocation display shows that there is space remaining on the volume, you can use the “Extend Filesystem” option on the Properties window to increase the size of the PSFS filesystem to the maximum size of the volume. When you click on the Extend Filesystem button, you will see a warning such as the following. When you click Yes, HP Clustered File System will extend the filesystem to use all of the available space.
Chapter 8: Configure PSFS Filesystems 77 Features Tab The Features tab shows whether Full Zone Bit Maps (FZBM) are enabled on the filesystem. View Filesystem Status from the Command Line You can use the following mx command to see status information. mx fs status [--verbose] The command lists the status of each filesystem. The --verbose option also displays the FS type (always PSFS), the size of the filesystem in KB, and the UUID of the parent disk.
Chapter 8: Configure PSFS Filesystems 78 Suspend a Filesystem for Backups The psfssuspend utility suspends a PSFS filesystem in a stable, coherent, and unchanging state. While the filesystem is in this state, you can copy it for backup and/or archival purposes. The filesystem is essentially unusable while it is suspended; however, applications that can tolerate extended waits for I/O do not need to be terminated. The psfsresume utility restores a suspended filesystem.
Chapter 8: Configure PSFS Filesystems 79 NOTE: If an attempt to mount the copied filesystem fails with an “FSID conflict” error, run the following command. In the command, is the partition that contains the copied filesystem, and
Chapter 8: Configure PSFS Filesystems 80 The options are as follows: • --rebuild-tree Rebuilds the filesystem tree using leaf nodes found on the device. Normally you should use this option only if psfscheck reports errors that can be fixed only by --rebuild-tree. We strongly recommend that you make a backup copy of the entire partition before you attempt to run psfscheck with the --rebuild-tree option.
Chapter 8: Configure PSFS Filesystems 81 • --create-bitmap-file filename, -c filename Saves bitmap of found leaves. • -y Causes psfscheck to answer “yes” to all questions. Enable or Disable FZBMs The psfscheck utility also provides options to enable or disable Full Zone Bit Maps (FZBMs). This on-disk filesystem format reduces the amount of data that the filesystem needs to read when allocating a block. It is particularly useful for speeding up allocation times on large, relatively full filesystems.
9 Cluster Operations on the Applications Tab The Applications tab on the Management Console shows all HP Clustered File System applications, virtual hosts, service monitors, and device monitors configured in the cluster and enables you to manage and monitor them from a single screen. Applications Overview An application provides a way to group associated cluster resources (virtual hosts, service monitors, and device monitors) so that they can be treated as a unit.
Chapter 9: Cluster Operations on the Applications Tab 83 If you do not specify an application name when you create a virtual host, the application will use the IP address of the virtual host as the application name. Similarly, if you do not specify an application name for a device monitor, the application will use the same name as the device monitor.
Chapter 9: Cluster Operations on the Applications Tab 84 The servers on which the resources are configured appear in the columns. You can change the order of the server columns by dragging a column to another location. You can also resize the columns. The cells indicate whether a resource is deployed on a particular server, as well as the current status of the resource. If a cell is empty, the resource is not deployed on that server.
Chapter 9: Cluster Operations on the Applications Tab 85 column, all clients can access the application. If the status is Error or Warning, at least one resource in the application has that status. The possible states for the application are:
• OK: Clients can access the application.
• Warning: Clients can access the application, but not from the primary node.
• Error: Clients cannot access the application.
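The aggregation rule just described can be sketched as a small function. This is an illustrative reading of the rule (Error dominates, then Warning, otherwise OK), not the product's actual algorithm.

```python
def application_status(resource_statuses):
    """Aggregate per-resource statuses into one application status.

    Illustrative only: Error wins over Warning, which wins over OK,
    matching the rule that one Error/Warning resource colors the whole
    application.
    """
    if "Error" in resource_statuses:
        return "Error"
    if "Warning" in resource_statuses:
        return "Warning"
    return "OK"

print(application_status(["OK", "Warning", "OK"]))  # → Warning
print(application_status(["OK", "OK"]))             # → OK
```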
Chapter 9: Cluster Operations on the Applications Tab 86 On the Type tab shown above, select the types of virtual hosts, service monitors, and device monitors that you want to see. Click on the State tab to select specific states that you are interested in viewing. (The Applications tab will be updated immediately.) Click OK to close the filter. The filter then appears as a separate tab and will be available to you when you connect to any cluster. (Filters are stored per user in the Windows registry.)
Chapter 9: Cluster Operations on the Applications Tab 87 Applications The following operations affect all entities associated with an HP Clustered File System application. These operations can also be performed from the command line, as described in the HP StorageWorks Clustered File System Command Reference Guide. Rename an application: Right-click on the application cell (in the Name column) and then specify the new name on the Rename Application dialog.
Chapter 9: Cluster Operations on the Applications Tab 88 • Delete the virtual host. To perform these procedures, left-click on the cell for the virtual host (click in the Name column). Then right-click and select the appropriate operation from the menu. See Chapter 10, “Configure Virtual Hosts” on page 89 for more information about these procedures.
10 Configure Virtual Hosts HP StorageWorks Clustered File System uses virtual hosts to provide failover protection for servers and network applications. Overview A virtual host is a hostname/IP address configured on a set of network interfaces. Each interface must be located on a different server. The first network interface configured is the primary interface for the virtual host. The server providing this interface is the primary server.
Chapter 10: Configure Virtual Hosts 90 Cluster Health and Virtual Host Failover To ensure the availability of a virtual host, HP Clustered File System monitors the health of the administrative network, the active network interface, and the underlying server. If you have created service or device monitors, those monitors periodically check the health of the specified services or devices.
Chapter 10: Configure Virtual Hosts 91 The failover operation to another network interface has minimal impact on clients. For example, if clients were downloading Web pages during the failover, they would receive a “transfer interrupted” message and could simply reload the Web page. If they were reading Web pages, they would not notice any interruption. If the active network interface fails, only the virtual hosts associated with that interface are failed over.
Chapter 10: Configure Virtual Hosts 92 Add or Modify a Virtual Host To add or update a virtual host from the HP CFS Management Console, select the appropriate option: • To add a new virtual host, select Cluster > Virtual Host > Add Virtual Host or click the V-Host icon on the toolbar. Then configure the virtual host on the Add Virtual Host window. • To update an existing virtual host, select that virtual host on either the Server or Virtual Hosts window, right-click, and select Properties.
Chapter 10: Configure Virtual Hosts 93 select an existing application name, or leave this field blank. However, if you do not assign a name, HP Clustered File System will use the IP address for the virtual host as the application name. Always active: If you check this box, upon server failure, the virtual host will move to an active server even if all associated service and device monitors are inactive or down.
Chapter 10: Configure Virtual Hosts 94 Network Interfaces: When the “All Servers” box is checked, the virtual host will be configured on all servers having an interface on the network you select for this virtual host. When you add another server to the cluster, the virtual host will automatically be configured on that server. This option can be useful with administrative applications. Available:/Members: The Available column lists all network interfaces that are available for this virtual host.
Chapter 10: Configure Virtual Hosts 95 Configure Applications for Virtual Hosts After creating virtual hosts, you will need to configure your network applications to recognize them. For example, if you are using a Web server, you may need to edit its configuration files to recognize and respond to the virtual hosts. By default, FTP responds to any virtual host request it receives.
Chapter 10: Configure Virtual Hosts 96 Delete a Virtual Host Select the virtual host to be deleted on either the Servers window or the Virtual Hosts window, right-click, and select Delete. Any service monitors configured on that virtual host are also deleted. To delete a virtual host from the command line, use this command: mx vhost delete <vhost> ...
Chapter 10: Configure Virtual Hosts 97 Re-Host Virtual Hosts You can use the Applications tab to modify the configuration of a virtual host. For example, you might want to change the primary server for the virtual host. To re-host a virtual host, right-click in a cell for that virtual host and then select Re-Host. You will then see a message warning that this action will cause the clients of the application to lose their connections. When you continue, the Custom Virtual Host Re-Host window appears.
Chapter 10: Configure Virtual Hosts 98 The status and enablement of the service and device monitors associated with the virtual host also contribute to a server’s health calculation. When a server is completely “healthy,” all of the services associated with the virtual host are up and enabled. When certain events occur on the server where a virtual host is located, the ClusterPulse process will attempt to fail over the virtual host to another server configured for that virtual host.
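As a rough illustration of such a health calculation, consider a score equal to the fraction of a virtual host's associated monitors that are both up and enabled. This sketch is entirely hypothetical; ClusterPulse's real scoring is internal to the product.

```python
def server_health(monitors):
    """Hypothetical health score: fraction of associated monitors that
    are both up and enabled. A completely healthy server scores 1.0."""
    if not monitors:
        return 1.0   # no monitors means nothing counts against the server
    healthy = sum(1 for m in monitors if m["up"] and m["enabled"])
    return healthy / len(monitors)

degraded = server_health([{"up": True, "enabled": True},
                          {"up": False, "enabled": True}])
healthy = server_health([{"up": True, "enabled": True},
                         {"up": True, "enabled": True}])
print(healthy > degraded)  # → True
```

Under a model like this, a failed or disabled monitor makes its server a less attractive failover target than a fully healthy one.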
Chapter 10: Configure Virtual Hosts 99 • A server is considered down if it loses coordinated communication with the cluster (for example, the server crashed or was shut down, HP Clustered File System was shut down on that server, the server failed to schedule a cluster group communication process for an extended period of time, the server disabled the NIC being used for cluster network traffic, and so on). • The PanPulse process controls whether a network interface is marked up or down.
Chapter 10: Configure Virtual Hosts 100 Customize Service and Device Monitors for Failover By default, when a service or device monitor probe fails, indicating that the watched service is down or the monitored device cannot be accessed, ClusterPulse will fail over the associated virtual host to another server where the monitored service or device is up. You can customize this behavior using the Advanced monitor settings.
Chapter 10: Configure Virtual Hosts 101 You can use the following Advanced settings to affect how ClusterPulse selects the network interface for failover. • The Event Severity setting allows you to specify whether ClusterPulse should consider the existence of monitor events (such as a script failure or timeout) when it chooses a network interface for failover. If the events are considered, the network interface for the affected server becomes less desirable.
Chapter 10: Configure Virtual Hosts 102 For example, consider a two-node cluster that is configured with its primary on node 1 and backup on node 2, and that uses the NOFAILBACK option. Three service monitors are configured on the virtual host. When a service monitor probe fails on node 1, the virtual host will fail over to node 2. Following are some possible scenarios: • When the monitored service is restored on node 1, the virtual host will remain on node 2.
Chapter 10: Configure Virtual Hosts 103 Virtual Host Policy
The behavior when a monitor probe reports DOWN depends on the monitor probe severity:
• AUTORECOVER: Failover occurs. The virtual host remains on the backup server until a “healthier” server is available.
• NOAUTORECOVER: Failover occurs and the monitor is disabled on the original server. The virtual host remains on the backup server until a “healthier” server is available.
11 Configure Service Monitors Service monitors are typically used to monitor a network service such as HTTP or FTP. If a service monitor indicates that a network service is not functioning properly on the primary server, HP Clustered File System can transfer the network traffic to a backup server that also provides that network service. Overview Before creating a service monitor for a particular service, you will need to configure that service on your servers.
Chapter 11: Configure Service Monitors 105 Similarly, if a server is removed from the configuration, the service monitors assigned to the virtual host are automatically removed from that server. Service monitor parameters (such as probe severity, Start scripts, and Stop scripts) are consistent across all servers configured for a virtual host.
Chapter 11: Configure Service Monitors 106

Type     Port  Default Probe Timeout  Default Probe Frequency  Script Parameters
TCP      0     5 seconds              30 seconds               none
CUSTOM   NA    60 seconds             60 seconds               user probe script

FTP Service Monitor By default the FTP service monitor probes TCP port 21 of the virtual host address. You can change this port number to the port number configured for your FTP server. The default frequency of the probe is every 30 seconds.
Chapter 11: Configure Service Monitors 107 HTTPS Service Monitor This monitor functions in the same manner as the HTTP service monitor; however, it uses the HTTPS protocol to make the connection to port 443. The URL specified for the monitor should begin with https. SMTP Service Monitor By default, the SMTP service monitor probes TCP port 25 (the sendmail port) of the virtual host address. You can change this port number to the port number configured for your SMTP server.
Chapter 11: Configure Service Monitors 108 Custom Service Monitor HP Clustered File System provides a CUSTOM service monitor type that can be used when the built-in monitor types are not sufficient. Custom monitors can be particularly useful when integrating HP Clustered File System with a custom application. HP Clustered File System treats custom monitors just as it does the built-in monitors, except that you must supply the probe script.
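For illustration, a custom probe script might test a TCP connection itself. The exit-status convention shown here (0 for healthy, non-zero for down) is an assumption for the sketch; check the script contract documented for CUSTOM monitors before relying on it.

```python
import socket
import sys

def probe(host, port, timeout=5.0):
    """Return True if a TCP connection to the service succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__" and len(sys.argv) == 3:
    # Invoked as a probe script with <host> <port> arguments: exit 0 when
    # the service answered, non-zero otherwise (assumed convention).
    sys.exit(0 if probe(sys.argv[1], int(sys.argv[2])) else 1)
```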
Chapter 11: Configure Service Monitors 109 Virtual Host: The service monitor is assigned to this virtual host. Server Port: HP Clustered File System supplies the default port number for the service you select. If your service uses a port other than the default, type that port number here. Monitor Type: Select the type of service that you want to monitor. Timeout: The maximum amount of time that the monitor_agent process will wait for a probe to complete.
Chapter 11: Configure Service Monitors 110 If you do not specify a URL, the probe operation will connect to httpd and wait for a response. If httpd responds, the monitor assumes that the service is operating correctly. • DNS. Specify the address to be resolved. • CUSTOM. Type the pathname for the probe script to be used with the monitor. When you complete the Add Service Monitor form, the new monitor appears on the Management Console.
Chapter 11: Configure Service Monitors 111 Service Monitor Policy The Policy tab lets you specify the failover behavior of the service monitor and set its service priority. Timeout and Failure Severity This setting works with the virtual host policy (either AUTOFAILBACK or NOFAILBACK) to determine what happens when a probe of a monitored service fails.
Chapter 11: Configure Service Monitors 112 NOFAILOVER. When the monitored service fails, ClusterPulse does not fail over to a backup network interface. This option is useful when the monitored resource is not critical, but is important enough that you want to keep a record of its health. AUTORECOVER. This is the default. The virtual host fails over when a monitor probe fails. When the service is recovered on the original node, failback occurs according to the virtual host’s failback policy.
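The three severity settings can be summarized as a simple mapping. This is an illustrative sketch; the action strings are informal summaries of the behavior described above, not product output.

```python
def action_on_probe_failure(severity):
    """Informal summary of failover behavior per probe-severity setting.

    The severity names come from the documentation; the returned strings
    are hypothetical condensations for illustration.
    """
    actions = {
        "NOFAILOVER": "record the failure; do not fail over",
        "AUTORECOVER": "fail over; fail back per the virtual host policy",
        "NOAUTORECOVER": "fail over; disable the monitor on the original server",
    }
    try:
        return actions[severity]
    except KeyError:
        raise ValueError(f"unknown severity: {severity}") from None

print(action_on_probe_failure("AUTORECOVER"))
```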
Chapter 11: Configure Service Monitors 113 Probe Type The probe type applies only to CUSTOM service monitors. These monitors can be configured to be either single-probe or multi-probe. A multi-probe monitor performs the probe function on each node where the monitor is configured, regardless of whether the monitor instance is active or inactive. The built-in monitors work in this manner. Single-probe monitors perform the probe function only on the node where the monitor instance is active.
Chapter 11: Configure Service Monitors 114 Scripts Service monitors can optionally be configured with scripts that are run at various points during cluster operation. The script types are as follows: Recovery script. Runs after a monitor probe failure is detected, in an attempt to restore the service. Start script. Runs as a service is becoming active on a server. Stop script. Runs as a service is becoming inactive on a server.
Chapter 11: Configure Service Monitors 115 without considering this to be an error. In both of these cases, the script should exit with a zero exit status. This behavior is necessary because HP Clustered File System runs the Start and Stop scripts to establish the desired start/stop activity, even though the service may actually have been started by something other than HP Clustered File System before ClusterPulse was started.
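The robustness requirement above can be expressed as a tiny sketch: both operations succeed (exit status 0) regardless of the service's prior state. The service-state bookkeeping is simulated in memory; a real Start or Stop script would query and drive the actual service.

```python
RUNNING = set()   # in-memory stand-in for the system's service state

def start(name):
    """Idempotent Start script body: exit 0 even if already started."""
    if name in RUNNING:
        return 0            # already running counts as success, not an error
    RUNNING.add(name)       # a real script would launch the service here
    return 0

def stop(name):
    """Idempotent Stop script body: exit 0 even if already stopped."""
    RUNNING.discard(name)   # discard() does not raise when absent
    return 0

assert start("myservice") == 0
assert start("myservice") == 0   # second start: still success
assert stop("myservice") == 0
assert stop("myservice") == 0    # second stop: still success
```

The service name "myservice" is, of course, a placeholder.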
Chapter 11: Configure Service Monitors 116 To configure event severity from the command line, use this option: --eventSeverity consider|ignore Script Ordering Script ordering determines the order in which Start and Stop scripts are run when a virtual host moves from one server to another. If you do not configure a monitor with Start and Stop scripts, the script ordering configuration has no effect. There are two settings: SERIAL. This is the default setting.
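One plausible sketch of the two orderings, assuming SERIAL means the Stop script completes before the Start script begins while PARALLEL allows them to overlap; the script bodies here are stand-ins that just record when they ran.

```python
import threading

def make_scripts(log):
    def stop_script():  log.append("stop")    # Stop runs on the old server
    def start_script(): log.append("start")   # Start runs on the new server
    return stop_script, start_script

def move_serial(log):
    # Assumed SERIAL semantics: Stop finishes before Start begins.
    stop_script, start_script = make_scripts(log)
    stop_script()
    start_script()

def move_parallel(log):
    # Assumed PARALLEL semantics: the scripts may run concurrently.
    stop_script, start_script = make_scripts(log)
    threads = [threading.Thread(target=s) for s in (stop_script, start_script)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

serial_log = []
move_serial(serial_log)
print(serial_log)  # → ['stop', 'start']
```

With PARALLEL semantics the relative order of the two entries is not guaranteed, which is exactly why SERIAL is the safer default when a Start script depends on its Stop counterpart having finished.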
Chapter 11: Configure Service Monitors 117 Other Configuration Procedures These procedures can be performed from the Servers, Virtual Hosts, or Applications tabs on the HP CFS Management Console or from the command line. Delete a Service Monitor From the Management Console, select the service monitor to be deleted, right-click, and select Delete.
Chapter 11: Configure Service Monitors 118 View Service Monitor Errors To view the last error for a service monitor, select that service monitor on the Management Console, right-click, and select View Last Error. Clear Service Monitor Errors From the Management Console, select the service monitor where the event occurred, right-click, and select Clear Last Event. To clear a monitor event from the command line, use this command: mx service clear <service> ...
12 Configure Device Monitors HP StorageWorks Clustered File System provides built-in device monitors that can be used to watch local disks or gateway devices or to monitor access to a SAN disk partition containing a PSFS filesystem. You can also create custom device monitors. Overview HP Clustered File System provides the following types of device monitors. To configure a device monitor, you will need to specify the probe timeout and frequency and a monitor-specific value.
Chapter 12: Configure Device Monitors 120 For example, if a probe fails on the primary server for a virtual host, the virtual host may fail over to a backup server. See “Device Monitors and Failover” on page 122 for details about where a device monitor is active. DISK Device Monitor A DISK device monitor periodically attempts to read the first block of the disk partition that you specify. You can use this monitor to determine whether a server can access a SAN partition containing a PSFS filesystem.
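The read-the-first-block probe can be sketched as follows. The function name, path handling, and 512-byte block size are assumptions for illustration, not the monitor's actual implementation.

```python
def probe_disk(device_path, block_size=512):
    """DISK-style probe sketch: attempt to read the first block.

    Returns True only when a full first block can be read; any path that
    cannot be opened or read reports a failed probe.
    """
    try:
        with open(device_path, "rb") as dev:
            return len(dev.read(block_size)) == block_size
    except OSError:
        return False
```

On a cluster server the path would be a partition device such as \\.\psd2p2 rather than an ordinary file.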
Chapter 12: Configure Device Monitors 121 After the gateway device monitor is configured on a server, it pings the gateway device periodically. If a network failure occurs and the ping fails, any active virtual hosts on the server will become inactive and fail over to another server. Custom Device Monitor A CUSTOM device monitor can be used when the built-in DISK type is not sufficient. Custom device monitors can be particularly useful when integrating HP Clustered File System with a custom application.
Chapter 12: Configure Device Monitors 122 Failover If a probe of a monitored device fails, HP Clustered File System attempts to relocate any virtual hosts that depend on the monitored device to a healthier server. A virtual host is never active on a server with an inactive, down, or disabled monitored device. If every server configured for a virtual host has a down device that the virtual host depends on, the virtual host will not be active anywhere in the cluster and thus will be totally down.
Chapter 12: Configure Device Monitors 123 • A server is considered down if it loses coordinated communication with the cluster (for example, the server crashed or was shut down, HP Clustered File System was shut down on the server, the server failed to schedule a cluster group communication process for an extended period of time). 3. If the device monitor is multi-active, it will be active on all servers passing evaluation for steps 1 and 2.
Chapter 12: Configure Device Monitors 124 Device Name: Type the name of the device monitor. You can use up to 32 alphanumeric characters. Application name: Specify the name of the HP Clustered File System application to be associated with this device monitor. HP Clustered File System applications are used to group related virtual hosts, service monitors, and device monitors on the Applications tab.
Chapter 12: Configure Device Monitors 125 If you are creating the monitor for a partition on a shared SAN disk, you can also use the HP Clustered File System name for the partition, such as \\.\psd2p2. Using this name ensures that the correct partition will be monitored, even if the current drive letter or mount path is inadvertently reassigned. • GATEWAY. Specify the IP address of the gateway device (such as a router). The IP address must be on a different subnet than the servers in the cluster.
Chapter 12: Configure Device Monitors 126 Probe Severity The Probe Severity tab lets you specify the failover behavior of the device monitor. The Probe Severity setting works with the virtual host policy (either AUTOFAILBACK or NOFAILBACK) to determine what happens when a monitored device fails.
Chapter 12: Configure Device Monitors 127 NOFAILOVER. When the monitor probe fails, ClusterPulse does not fail over to a backup network interface. This option is useful when the monitored resource is not critical, but is important enough that you want to keep a record of its health. AUTORECOVER. This is the default. The virtual host fails over when a monitor probe fails. When device access is recovered on the original node, failback occurs according to the virtual host’s failback policy. NOAUTORECOVER.
Chapter 12: Configure Device Monitors 128 Custom Scripts The Scripts tab lets you configure custom Recovery, Start, and Stop scripts for a device monitor. Device monitors can optionally be configured with scripts that are run at various points during cluster operation. The script types are as follows: Recovery script. Runs after a monitor probe failure is detected, in an attempt to restore the device. Start script. Runs as a device is becoming active on a server. Stop script.
Chapter 12: Configure Device Monitors 129 Similarly, Stop scripts must be robust enough to run when the device is already stopped, without considering this to be an error. In both of these cases, the script should exit with a zero exit status.
Chapter 12: Configure Device Monitors 130 To configure event severity from the command line, use this option: --eventSeverity consider|ignore Script Ordering Script ordering determines the order in which Start and Stop scripts are run when a shared device or virtual host moves from one server to another. If you do not configure a monitor with Start and Stop scripts, the script ordering configuration has no effect. There are two settings for script ordering: SERIAL and PARALLEL. SERIAL.
Chapter 12: Configure Device Monitors 131 Virtual Hosts The Virtual Hosts tab lets you specify the virtual hosts that will fail over if the monitor detects a failure. When a device monitor detects a failure, HP Clustered File System attempts to fail over the active virtual hosts associated with that monitor. By default, all virtual hosts on the servers used with the device monitor are dependent on the device monitor.
Chapter 12: Configure Device Monitors 132 To specify virtual hosts from the command line, use this option: --vhosts <vhost>,<vhost>,... Servers for Device Monitors The Servers tab allows you to select the servers on which the device monitor will be configured. You can also set some options related to the monitor probe operation and failover. Probe Type. The servers on which the monitor probe will occur. Select Single-Probe to conduct the probe only on the server where the monitor is active.
Chapter 12: Configure Device Monitors 133 • Single-Always-Active. The monitor is active on only one of the selected servers. Upon server failure, the monitor will fail over to an active server even if all associated service and device monitors are down. (“Associated” service and device monitors are those monitors that are associated with the same virtual host as this device monitor.) • Multi-Active. The monitor is active simultaneously on all selected servers.
Chapter 12: Configure Device Monitors 134 Disable a Device Monitor From the Management Console, select the device monitor to be disabled, right-click, and select Disable. To disable a device monitor from the command line, use this command: mx device disable <device> ... Enable a Device Monitor From the Management Console, select the device monitor to be enabled, right-click, and select Enable. To enable a device monitor from the command line, use this command: mx device enable <device> ...
13 Configure Notifiers If you would like certain actions to take place when cluster events occur, you can configure notifiers that define how the events should be handled. Overview HP StorageWorks Clustered File System uses notifiers to enable you to view event information generated by servers, network interfaces, virtual hosts, service monitors, device monitors, and filesystems. Notifiers send events from these entities to user-defined notifier scripts.
Chapter 13: Configure Notifiers 136 When adding a notifier, you will need to specify a name for the notifier and to supply the script to be run when an event is triggered that matches the event and entity combination. The notifier script will be run with any arguments that you included in the script string. The script may read STDIN to accept the event message.
Chapter 13: Configure Notifiers 137 Script: Enter the name of the script that will be run when an event occurs. Event: Check the events for which you want to receive notification. Entity: Check the entities for which you want to receive notification. The USER1 - USER7 entities are user-defined entities for the mxlogger command. See “Add Your Own Messages to the Event Log” on page 175. The notifier now appears in the Notifiers window.
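A minimal notifier script, as described above, reads the event text from STDIN and does something with it. In this Python sketch the log-file name is hypothetical and the sample event string is made up for the demonstration.

```python
import tempfile
from datetime import datetime

def handle_event(message, logfile):
    """Append a timestamped copy of one event message to a log file."""
    line = f"{datetime.now().isoformat()} {message.strip()}\n"
    with open(logfile, "a") as log:
        log.write(line)
    return line

# As a notifier script this would be driven by STDIN, for example:
#     handle_event(sys.stdin.read(), logfile="cluster-events.log")
# Demonstrated here with a throwaway file:
with tempfile.NamedTemporaryFile(suffix=".log") as tmp:
    logged = handle_event("10.10.1.1 Error SERVICEMONITORS test event", tmp.name)
    print(logged.endswith("test event\n"))  # → True
```

Any arguments configured in the Script field would arrive in sys.argv, just as for an ordinary script.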
Chapter 13: Configure Notifiers 138 Enable a Notifier Select the notifier to be enabled from the Notifiers window, right-click, and select Enable. To enable a notifier from the command line, use this command: mx notifier enable <notifier> ... Test a Notifier Select the notifier to be tested from the Notifiers window, right-click, and select Test. The event messages for each configured entity will now be sent to the notifier.
Chapter 13: Configure Notifiers 139 The Test Notifier option causes a test event to be generated for each of the event/entity combinations that you configure for the notifier. Following is an example: 10.10.1.1 Error SERVICEMONITORS 0 Oct 31 2000 13:31:31 TEST Notifier message for A Sample Notifier Script The following batch file can be used in conjunction with a command-line tool to send e-mail when notifier events are generated.
14 Test Your Configuration After you have configured HP Clustered File System, we recommend that you perform a set of basic tests to verify that SAN shared filesystem operation, virtual host operation and failover, DNS load-balancing operation and failover, and LAN administrative network failover all work correctly.
Chapter 14: Test Your Configuration 141 Test SAN Connectivity and Shared Filesystem Operation Use the following procedure to test basic SAN connectivity and shared filesystem operation in your cluster: 1. From the HP CFS Management Console, log into one of the cluster servers. 2. Import an unused SAN disk into your cluster configuration. 3. Create a PSFS filesystem on an unused partition on this disk. 4.
Chapter 14: Test Your Configuration 142 7. Restore the power to the server and then reboot it. Verify that this server, upon rebooting, is able to mount the shared filesystem. Verify that all servers are able to access the shared filesystem. Test Virtual Host Operation and Failover The following procedure tests automatic failover and recovery reintegration. It is best to run these tests in a non-production environment.
Chapter 14: Test Your Configuration 143 3. Verify that all servers are up, that the service you are testing is up, and that the virtual host is active on the primary server and inactive on the backup servers. 4. Stop the service you are testing on the primary server (for example, for HTTP, bring down the HTTP process). 5. Verify that HP Clustered File System detects the service failure. The virtual host should be inactive on the primary server and active on the first backup server. 6.
Chapter 14: Test Your Configuration 144 The DNS name is www.acmd.com and 192.168.100.1 is a virtual host with primary on acmd1 and backup on acmd2. 192.168.100.2 is primary on acmd2 and backup on acmd1. DNS is set up to round robin on the servers acmd1 and acmd2, using the virtual host addresses 192.168.100.1 and 192.168.100.2. Validate Correct Load-Balancing Operation The following procedure validates that DNS round robin and HP Clustered File System are working correctly.
Chapter 14: Test Your Configuration 145 7. Verify that DNS now serves up both IP addresses again and that the ping is returned correctly by both. Test LAN Failover of Administrative Cluster Traffic Use the following procedure to test the LAN administrative traffic failover capability of HP Clustered File System: 1. Connect your cluster servers with at least two physically separate LANs. Configure the Linux network software to enable the interfaces to these networks on each of the cluster servers. 2.
15 Advanced Topics The topics described here provide technical details about HP Clustered File System operations. This information is not required to use HP Clustered File System in typical configurations; however, it may be useful if you want to design custom scripts and monitors, to integrate HP Clustered File System with custom applications, or to diagnose complex configuration problems.
Chapter 15: Advanced Topics 147 The probe mechanism is in one of the following states on each server: Up, Down, Unknown, Timeout. A service monitor also has an activity status on each server. The status can be one of the following: Starting, Active, Suspended, Stopping, Inactive, Failure. The following examples show state transitions for a service monitor that uses the default values for autorecovery, priority, and serial script ordering. Start and Stop scripts are also defined for the monitor.
Chapter 15: Advanced Topics 148 The first example shows the state transitions that occur at startup from an unknown state. At i1, all instances of the monitor have completed stopping. At i2, the virtual host is configured on the Primary. At i3, the monitor start script begins on the Primary and probing begins on the backups. At i4, probing begins on the Primary.
Chapter 15: Advanced Topics 149 At i5 in the following example, the probe fails on the Primary. At i6, the virtual host is deconfigured on the Primary. At i7, the monitor stop script begins on the Primary. At i8, the virtual host is configured on the second backup. At i9, the monitor start script begins on the second backup. At i10, probing begins on the second backup.
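The failover ordering above (i5 through i10) can be replayed as a simple sequence. The sketch below is illustrative only; the event names paraphrase the text, and the real transitions are driven by the ClusterPulse process, not by user code.

```python
# Hypothetical replay of the failover ordering described above: probe
# failure on the Primary, followed by relocation of the virtual host to
# the second backup. Event names paraphrase the manual text.

FAILOVER_ORDER = [
    "probe fails on the Primary",
    "virtual host deconfigured on the Primary",
    "monitor stop script begins on the Primary",
    "virtual host configured on the second backup",
    "monitor start script begins on the second backup",
    "probing begins on the second backup",
]

def next_step(current):
    """Return the event that follows `current`, or None at the end."""
    i = FAILOVER_ORDER.index(current)
    return FAILOVER_ORDER[i + 1] if i + 1 < len(FAILOVER_ORDER) else None
```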
Chapter 15: Advanced Topics 150 A custom device monitor also has an activity status on each server. This status indicates the current activity of the monitor on the server. The status can be one of the following: Starting, Active, Suspended, Stopping, Inactive, Failure.
[Figure: timeline showing the virtual host status, service and device probe status, and service and device monitor activity on the Primary, First Backup, and Second Backup at each time step; not reproduced here.]
Chapter 15: Advanced Topics 152 Integrate Custom Applications There are many ways to integrate custom applications with HP Clustered File System: • Use service monitors or device monitors to monitor the application • Use a predefined monitor or your own user-defined monitor • Use Start, Stop, and Recovery scripts Following are some examples of these strategies.
Chapter 15: Advanced Topics 153 Built-In Monitor or User-Defined Monitor? To decide whether to use a built-in monitor or a user-defined monitor, first determine whether a built-in monitor is available for the service you want to monitor and then consider the degree of content verification that you need.
Chapter 15: Advanced Topics 154 This script connects to port 2468, sends a string specified by the protocol, and determines whether it has received an expected response. You distribute this script to the same location on all servers on virtual host vh1, and then create a custom service monitor that uses that script. This provides not only verification of the connection, but a degree of content verification.
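A probe script of the kind just described can be sketched as follows. Port 2468 and the request and expected-reply strings are placeholders from the example; a real monitor script would be distributed to the same location on all servers and report health through its exit status.

```python
# Hypothetical sketch of the custom probe described above: connect to a
# TCP port, send a protocol-specific request, and check that the reply
# contains expected content. Port 2468 and the strings are placeholders.

import socket

def probe(host, port, request, expected, timeout=5.0):
    """Return True when the service answers with the expected content."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.sendall(request)
            reply = s.recv(4096)
    except OSError:
        return False            # connection refused, timed out, reset, ...
    return expected in reply

# A wrapper invoked by the service monitor would then do, e.g.:
#   sys.exit(0 if probe("127.0.0.1", 2468, b"STATUS\n", b"OK") else 1)
# so that a non-zero exit marks the probe Down.
```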
MX_SERVER=IP address
The primary address of the server that calls the script. The address is specified in dotted decimal format.

MX_TYPE=(SERVICE|DEVICE)
Whether the script is for a service or device monitor.

MX_VHOST=IP address
The IP address of the virtual host. The address is specified in dotted decimal format. (Applies only to service monitors.)

MX_PORT=Port or name
The port or name of the service monitor. (Applies only to service monitors.)
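A monitor script can read these variables from its environment at run time. The sketch below shows one way to do this; the function name and the fallback defaults are illustrative, not part of the product.

```python
# Hypothetical sketch: read the environment variables that HP Clustered
# File System passes to monitor scripts. The variable names come from
# the list above; the defaults are placeholders for standalone testing.

import os

def monitor_context(environ=None):
    """Collect the MX_* variables into a dictionary."""
    env = os.environ if environ is None else environ
    return {
        "server": env.get("MX_SERVER", "0.0.0.0"),
        "type": env.get("MX_TYPE", "SERVICE"),    # SERVICE or DEVICE
        "vhost": env.get("MX_VHOST"),             # service monitors only
        "port": env.get("MX_PORT"),               # service monitors only
    }
```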
16 SAN Maintenance The following information and procedures apply to SANs used with HP StorageWorks Clustered File System. Server Access to the SAN When a server is either added to the cluster or rebooted, HP Clustered File System needs to take some administrative actions to make the server a full member of the cluster with access to the shared filesystems on the SAN. During this time, the HP CFS Management Console reports the message “Joining cluster” for the server.
• Repeated I/O errors when the server tries to write to a PSFS journal. The server then loses access to the affected filesystem. When the disk experiencing the I/O errors is fixed, the server will automatically regain access to the filesystem.

The HP CFS Management Console typically displays an alert message when a server loses access to the SAN. (See Appendix B for more information about these messages.)
Chapter 16: SAN Maintenance 158 mxsanlk displays the status of the SANlock stored in each membership partition. It can be used to determine whether any of the membership partitions need to be repaired. Also, if a network partition occurs, mxsanlk can be used to determine which network partition has control of the SAN. Following is some sample output. The command was issued on host 10.10.30.3. The SDMP administrator is the administrator for the cluster to which the host belongs.
Chapter 16: SAN Maintenance 159 • locked, cannot access The host on which mxsanlk was run held the SANlock but is now unable to access it. The membership partition may need repair. • trying to lock, not yet committed by owner The SANlock is either not held or has not yet been committed by its holder. The host on which mxsanlk was run is trying to acquire the SANlock. • unlocked, trying to lock The SANlock does not appear to be held. The host on which mxsanlk was run is trying to acquire the SANlock.
Chapter 16: SAN Maintenance 160 • trying to lock (lock is corrupt, will repair) The host on which mxsanlk was run is trying to acquire the SANlock. The SANlock was corrupted but will be repaired. • locked (lock is corrupt, will repair) The host on which mxsanlk was run holds the lock. The SANlock was corrupted but will be repaired. If a membership partition cannot be accessed, use the mprepair program to correct the problem.
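When scanning mxsanlk output, the states that mention corruption or inaccessibility are the ones that may call for attention. The helper below is a hypothetical convenience for classifying the state strings listed above; mxsanlk itself does not provide such a classification.

```python
# Hypothetical helper: flag the mxsanlk SANlock states listed above that
# suggest membership-partition trouble. The substrings checked are taken
# from the state descriptions in the manual.

def may_need_repair(state):
    """Return True when a state mentions corruption or lost access."""
    s = state.lower()
    return "cannot access" in s or "corrupt" in s
```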
Chapter 16: SAN Maintenance 161 mprepair Utility The mprepair utility can be used to repair any problems if a failure causes servers to have inconsistent views of the membership partitions. This utility is invoked from the operating system prompt. NOTE: HP Clustered File System cannot be running when you use mprepair. To stop the cluster, issue the command net stop matrixserver from the Command Prompt.
Chapter 16: SAN Maintenance 162 INACCESSIBLE. The mprepair utility cannot access the device containing the membership partition. CORRUPT. The partition is not valid. MISMATCH. The membership partition is valid but its MP list does not match the server’s local MP list. If the status is NOT FOUND or INACCESSIBLE, there may be a problem with the disk or with another SAN component. When the problem is repaired, the status should return to OK. If the status is CORRUPT, you should resilver the partition.
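The status-to-action guidance above can be summarized in a small lookup table. The action strings below paraphrase the manual; they are not produced by mprepair itself.

```python
# Hypothetical summary of the membership-partition statuses reported by
# mprepair and the suggested response. The action text paraphrases the
# surrounding manual text; it is not mprepair output.

MP_STATUS_ACTIONS = {
    "OK":           "no action needed",
    "NOT FOUND":    "check the disk or another SAN component",
    "INACCESSIBLE": "check the disk or another SAN component",
    "CORRUPT":      "resilver the partition",
    "MISMATCH":     "resilver from a partition with the correct MP list",
}

def suggested_action(status):
    """Map an mprepair status string to the suggested response."""
    return MP_STATUS_ACTIONS.get(status.upper(), "unknown status")
```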
Chapter 16: SAN Maintenance 163 To fix this problem, use the --inactivate_mp option (described under “mprepair Options” below) to change the state of the membership partition to “inactive.” You can then import the disk into the cluster. Sizes for Membership Partitions HP Clustered File System stores the size of the smallest membership partition that was created during the HP Clustered File System installation.
Chapter 16: SAN Maintenance 164 The output shows the local membership partition list on the server where you are running mprepair. It then compares this list with the lists located on the disks containing the membership partitions. The output also includes the device database records for the disks containing the membership partitions. Following is an example.
Chapter 16: SAN Maintenance 165 However, in certain situations you may need to perform the resilver operation manually. For example, a membership partition might become corrupt or a local membership list might become out of date. To resilver from a particular partition, type the following command: mprepair --resilver UID/PART# UID is the UID for the device and PART# is the number of the partition on the device.
Chapter 16: SAN Maintenance 166 Change a Host Bus Adapter or Driver HP Clustered File System and the psd driver must be disabled when you add, remove, or update a Host Bus Adapter or its driver. The following procedure describes how to change a Host Bus Adapter or driver on a cluster server. All commands are run from the Command Prompt. 1. Stop HP Clustered File System: net stop matrixserver 2. Disable the HP Clustered File System service: mxservice -uninstall 3.
Chapter 16: SAN Maintenance 167 Server Cannot Be Located If the cluster reports that it cannot locate a server on the SAN but you know that the server is connected, there may be an FC switch problem. On a Brocade FC switch, log into the switch and verify that all F-Port and L-Port IDs specified in switchshow also appear in the local nameserver, nsshow. If the lists of ports are different, reboot the switch. If the reboot does not clear the problem, there may be a problem with the switch.
Chapter 16: SAN Maintenance 168 Online Replacement of a FibreChannel Switch This procedure applies only to sites using EMC PowerPath or HP Secure Path. When a cluster includes multiple FibreChannel switches, you can replace a switch without affecting normal cluster operations. The following conditions must be met when performing online replacement of a FibreChannel switch. • The replacement switch must be the same model as the original switch and must have the same number of ports.
Chapter 16: SAN Maintenance 169 4. Back up the zone configuration information, either from the original switch or from another switch in the fabric. Use the cfgShow command and record its output. 5. Connect the power and either the Ethernet or the serial console cable to the new switch. 6. Log on to the new switch. 7. Disable the switch with the switchDisable command. 8. Disable any stale active configuration on the new switch with the cfgDisable command. 9.
Chapter 16: SAN Maintenance 170 Also verify that no zone conflicts are being reported on the inter-switch links (ISL). To ensure a highly available configuration after the switch has been replaced, verify that all servers have eligible I/O paths through the replaced switch. You can use the PowerPath powermt command or the Secure Path spmgr command to do this. Replace a McDATA FC Switch To replace a McDATA FibreChannel switch, complete these steps: 1.
Chapter 16: SAN Maintenance 171 Any existing zone configuration on the replacement switch should be removed to allow the fabric to properly communicate current zoning when the switch joins the fabric. 6. Add the private community to the Configure > Management > SNMP tab and ensure that it is write enabled. 7. Connect the FC connectors to the new switch. Be sure to plug them into the same ports as on the original switch. 8. Bring the switch online.
17 Other Cluster Maintenance

Although HP Clustered File System requires little special maintenance beyond that which is normally required for your servers and services, you may need to perform the following activities:

• Maintain the HP Clustered File System event log
• Disable a server for maintenance
• Troubleshoot a cluster
• Troubleshoot service and device monitors

Maintain the HP Clustered File System Event Log

HP Clustered File System stores its log messages in the HP Clustered File System event log.
Chapter 17: Other Cluster Maintenance 173 Windows Event Viewer Select Start > Programs > Administration Tools > Event Viewer, and then click on MatrixServer to see the log messages. You can use the options on the Action menu to manipulate the event log. HP CFS Management Console You can also use the HP CFS Management Console to view or maintain the event log on each server. The changes you make affect only the event log for the server selected on the Servers window.
Chapter 17: Other Cluster Maintenance 174 View the Event Log To view the event log for a specific server, select that server, right-click, and then select View Log. The Server Log window displays the most recent messages from the event log. You can select the types of messages that you want to view by checking or unchecking the boxes at the top of the window. Use the scroll bars to move up, down, left, and right in the file, allowing you to see entire messages without resizing the window.
Audit Administrative Commands

HP Clustered File System provides an audit feature that can be used to log administrative commands in the event log. The following types of commands are audited:

• Login authentication for the HP Management Console and mx commands
• Commands invoked via the HP Management Console
• mx commands, with the exception of status commands

The audit entry contains the IP address and port number of the client TCP connection.
Chapter 17: Other Cluster Maintenance 176 The -l level option specifies the severity of the message. level can be ERROR, WARNING, INFO, EVENT, FATAL, FAILUREAUDIT, SUCCESSAUDIT, TRACE, or DEBUG. The -d option allows you to specify a numeric message ID. The default is 100. The -G option specifies that the message is global; the -L option specifies that it is local. The default is local. If the log-text contains special characters, it must be enclosed in quotation marks.
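The option handling described above can be sketched as an argument builder. The logging command's name is not shown in this excerpt, so the sketch uses the placeholder log_tool; the -l, -d, and -G/-L options follow the description above.

```python
# Hypothetical builder for the logging command's argument list. The
# command name "log_tool" is a placeholder (the real name is not shown
# in this excerpt); the options follow the description in the manual.

VALID_LEVELS = {"ERROR", "WARNING", "INFO", "EVENT", "FATAL",
                "FAILUREAUDIT", "SUCCESSAUDIT", "TRACE", "DEBUG"}

def build_log_args(text, level="INFO", msg_id=100, global_scope=False):
    """Assemble the argument list; defaults mirror the documented ones."""
    if level not in VALID_LEVELS:
        raise ValueError("unknown severity: %s" % level)
    args = ["log_tool", "-l", level, "-d", str(msg_id)]
    args.append("-G" if global_scope else "-L")
    args.append(text)   # the shell quoting of special characters is the caller's job
    return args
```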
Chapter 17: Other Cluster Maintenance 177 • Network check: IP network and interface assignments, forward and reverse hostname lookup. • Storage check: Host Bus Adapters, drivers, and settings. • Miscellaneous check: other checks such as the non-paged pool setting. Output from the utility appears on the screen and is also written to the Application Log section of the Event Viewer.
Chapter 17: Other Cluster Maintenance 178 The Server Status Is “Down” If a server is running but HP Clustered File System shows it as down, follow these diagnostic steps: 1. Verify that the server is connected to the network. 2. Verify that the network devices and interfaces are properly configured on the server. 3. Ensure that the ClusterPulse process is running on the server. 4. Verify that the same version of HP Clustered File System is installed on all servers in the cluster.
Chapter 17: Other Cluster Maintenance 179 HP Clustered File System Exits Immediately If the ClusterPulse process exits immediately on starting, first determine whether the process is running. To do this, check the Services applet in the Control Panel or run net start on the command line and check for the HP StorageWorks Clustered File System service. Also use the Event Viewer program (Start > Programs > Administrative Tools > Event Viewer) to view the event log.
Chapter 17: Other Cluster Maintenance 180 “Undefined” Status If the probe has not completed because of a script configuration problem or because HP Clustered File System is still attempting to finish the first probe, the status will be reported as “undefined” instead of Down. “SYSTEM ERROR” Status The “SYSTEM ERROR” status indicates that a serious system functional error occurred while HP Clustered File System was trying to probe the service.
STOP_TIMEOUT. A Stop script was executed but it did not complete within the specified timeout period.

RECOV_TIMEOUT. A Recovery script was executed but it did not complete within the specified timeout period.

START_FAILURE. A Start script was executed but it returned a non-zero exit status.

STOP_FAILURE. A Stop script was executed but returned a non-zero exit status.

RECOV_FAILURE. A Recovery script was executed but returned a non-zero exit status.
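All of these failure codes come down to a script's exit status and running time: exit zero for success, non-zero for failure, and finish within the configured timeout. A skeleton honoring that convention might look like the following; the check itself is a placeholder.

```python
# Hypothetical skeleton for a Start, Stop, or Recovery script: translate
# the outcome of a service check into the exit status HP Clustered File
# System expects (0 = success, non-zero = *_FAILURE). The check is a
# placeholder; the script must also finish inside the configured timeout
# to avoid a *_TIMEOUT error.

def service_action(check):
    """Run `check` and return the exit status the script should report."""
    try:
        ok = check()
    except Exception:
        return 1                # any error counts as a failure
    return 0 if ok else 1

# A real script would end with, e.g.:
#   sys.exit(service_action(my_check))
```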
Chapter 17: Other Cluster Maintenance 182 Because the error is server-specific, you must clear it on each server in the cluster (just as you had to correct the script on each server that reported a problem). NOTE: An error on a monitor may still be indicated after correcting the problem with the Start, Stop, Recovery, or probe script. Errors can be cleared only with the HP CFS Management Console or the appropriate mx command. An error will not be automatically cleared by the ClusterPulse process.
“Activity Unknown” Status

For a brief period while the monitor_agent process checks the monitor script configuration and creates a thread to serve the monitor, the activity may be displayed as “activity unknown.”
Internal Network Port Numbers

The following network port numbers are used for internal, server-to-server communication. You should need to change the firewall rules for these ports only if you have HP Clustered File System nodes firewalled from each other.
A Management Console Icons The Management Console uses the following icons. HP Clustered File System Entities The following icons represent the HP Clustered File System entities. If an entity is disabled, the color of the icon becomes less intense.
Appendix A: Management Console Icons 186 Additional icons are added to the entity icon to indicate the status of the entity. The following example shows the status icons for the server entity. The status icons are the same for all entities and have the following meanings. Monitor Probe Status The following icons indicate the status of service monitor and device monitor probes. If the monitor is disabled, the color of the icons is less intense.
Appendix A: Management Console Icons 187 On the Applications tab, virtual hosts and single-active monitors use the following icons to indicate the primary and backups. Multi-active monitors use the same icons but do not include the primary or backup indication. Management Console Alerts The Management Console uses the following icons to indicate the severity of the messages that appear in the Alert window.
B Error and Event Log Messages When certain errors occur, HP Clustered File System writes messages to the HP CFS Management Console. Other error messages are written to the HP Clustered File System event log. Management Console Alert Messages NN.NN.NN.NN has lost a significant portion of its SAN access, possibly due to a SAN hardware failure The specified server is unable to write to any of the membership partitions.
Appendix B: Error and Event Log Messages 189 NN.NN.NN.NN should be rebooted ASAP as it stopped cluster network communication DATE HH:MM:SS and was excluded from the SAN to protect filesystem integrity The server was excluded from the cluster because it could no longer communicate over the network. The server should be rebooted at the first opportunity. Also check the network and make sure that the server is not experiencing a resource shortage. NN.NN.NN.
Error connecting to server <server>: unknown host

The server identified as <server> is not responding to the connection request from the Management Console. Verify that you typed a valid hostname or IP address in the login window. This error may indicate that the ClusterPulse process is not running; restart HP Clustered File System on the server.
Appendix B: Error and Event Log Messages 191 Fencing operation failed, reboot NN.NN.NN.NN ASAP. NN.NN.NN.NN stopped cluster communication DATE HH:MM but cannot be excluded from the cluster because of a networking or fencing hardware failure or misconfiguration. To protect filesystem integrity, some or all filesystem operations may be paused until NN.NN.NN.NN is rebooted or until fencing operations can be performed.
Appendix B: Error and Event Log Messages 192 Cluster unable to take control of SAN, because a majority of the membership partitions cannot be written or are corrupt, possibly due to a SAN hardware failure or misconfiguration and/or because servers have been excluded from the SAN. As a result, some or all filesystem operations may be paused throughout the cluster. In addition, filesystem mounts and unmounts and disk imports and deports cannot be performed.
Appendix B: Error and Event Log Messages 193 Cluster unable to take control of SAN, because the servers are unable to write to a majority of the membership partitions, possibly due to a SAN hardware failure or misconfiguration and/or because some servers have been excluded from the SAN. As a result, some or all filesystem operations may be paused throughout the cluster. In addition, filesystem mounts and unmounts and disk imports and deports cannot be performed.
Appendix B: Error and Event Log Messages 194 Membership Partitions are corrupt or inaccessible, preventing SAN access A majority of the membership partitions are either inaccessible or corrupt. HP Clustered File System cannot allow access to the PSFS filesystems while the Membership Partitions are in this state. To obtain more detailed information about the state of each Membership Partition, use the command mprepair -v --get_current_mps.
Appendix B: Error and Event Log Messages 195 Singleton cluster unable to take control of SAN, because the cluster that includes NN.NN.NN.NN currently controls the SAN. Possibly this server has not been added to the cluster or has been deleted from the cluster, or possibly a networking failure or misconfiguration has partitioned this server from the servers that control the SAN. Check the cluster configuration and add the server if it is not currently a member.
Appendix B: Error and Event Log Messages 196 ClusterPulse Messages Bad command -- Could not find device monitor instance for XXX on server YYY The monitor_agent process is reporting status on a device monitor with device name XXX on server YYY but the ClusterPulse process does not recognize this device. Probably the Management Console has removed the device monitor and monitor_agent has already sent the status to ClusterPulse. Therefore, no corrective action is required.
Appendix B: Error and Event Log Messages 197 Internal system error -- Internal error at server X.X.X.X: select returned with an unknown read socket N Internal system error -- Internal error at server X.X.X.X: select returned with an unknown write socket N Internal system error -- Internal select error at server X.X.X.X: [select ?] with errno of N The ClusterPulse process received a system error. Report this error to HP Technical Support at your earliest opportunity.
Appendix B: Error and Event Log Messages 198 Monitor error -- monitor_agent reported N:: The monitor_agent process experienced an error and is copying the error string to the HP Clustered File System event log. Inspect the error string for details about resolving the error. Network error -- set_readable called with unknown socket N Network error -- set_writeable called with unknown socket N If you receive this message, notify HP Technical Support at your earliest convenience.
Appendix B: Error and Event Log Messages 199 The ClusterPulse process experienced an error while trying to write to the monitor_agent process. It will attempt to recover from this failure. Script error -- HP Clustered File System cannot invoke a non executable agent monitor_agent Verify that the execute permission on monitor_agent is set correctly.
Appendix B: Error and Event Log Messages 200 Write error - in default_write_fun: Unknown connection mode for IP %s port %d Read error - in default_read_fun: Unknown connection mode for IP %s port %d If you receive either of these messages, notify HP Technical Support at your earliest convenience. PSFS Filesystem Messages If you receive a panic message from the PSFS filesystem, report it to HP Technical Support at your earliest convenience.
Appendix B: Error and Event Log Messages 201 Fatal messages have this format: [Fatal ] [] SANPulse SERVERS A fatal message indicates that HP Clustered File System has terminated on the specified server. First attempt to restart HP Clustered File System on that server. If the cluster software cannot be restarted, you will see another message asking you to reboot the server.
Appendix B: Error and Event Log Messages 202 If PanPulse determines that one or more local interfaces are down or unavailable, it will report messages such as the following: Interface address
Appendix B: Error and Event Log Messages 203 Port 8940 Only one instance of PanPulse can be running on port 8940 on a server. If another application is using that port or another instance of PanPulse is started, the following error will be reported. Unable to bind on port 8940. Please make sure that this is the only copy of panpulse running on this server.
Index A administrative network defined 5 failover 39 network topology 38 requirements for 38 select 38 administrative traffic 42 alerts cluster 25 on Management Console 25 applications configure to recognize virtual hosts 95 create 82 filter 85 integrate 152 manage 86 name of 82 status 84 Applications tab filter applications 85 format 85 icons 84 manage application monitors 88 manage applications 86 menu operations 87 modify display 84 rehost virtual host 97 reorder columns 84 audit log, for administrative
Index activeness policy 122 activity status 182 CUSTOM monitor 121 defined 10 DISK monitor 120 events 134 GATEWAY monitor 120 multi-active 119 troubleshooting 179 device monitor configuration add or update 123 advanced settings probe severity 126 script ordering 130 servers 132 virtual hosts 131 delete 133 disable 134 enable 134 DISK device monitor 120 disks, SAN deport 46 import 44 online insertion 167 DLM (Distributed Lock Manager) defined 7 filesystem synchronization 67 drive letter, filesystem assignme
Index create 68 drive letter assign 72 remove 74 view 73 extend 76 features 65 features, configured 77 journal 66 mount 75 mount path assign 72 remove 74 view 73 properties 75 relabel 75 restrictions 68 suspend 78 firewall 183 FTP service monitor 106 G GATEWAY device monitor 120 getting help 1 grpcommd process 8 H HP storage web site 1 technical support 1 HTTP service monitor 106 HTTPS service monitor 107 L load balancing, DNS configure 34 test operation and failover 143 M maintenance procedures 172 Ma
Index O OLI, storage 167 P PanPulse process administrative network 38 defined 7 error messages 201 partitions on SAN disks requirement for importing 44 passwords assign or change with mxpasswd 26 ports, network external 183 internal 183 primary server 10 probe severity, failover 102 psd driver 7 PSFS filesystem.
Index TCP 107 shared disks import 44 SMDS (Shared Memory Data Store) 9 SMTP service monitor 107 Start scripts device monitor 128 service monitor 113 Stop scripts device monitor 128 service monitor 113 subdevices, for dynamic volumes 51 T TCP service monitor 107 technical support, HP 1 troubleshooting monitors 179 208 V virtual host activeness policy 98 applications, configure to recognize 95 device monitors, dependency on 131 enable or disable network interface 42 failover 97 guidelines 91 policy for fa