Configuring and Managing a Red Hat Cluster: Red Hat Cluster for Red Hat Enterprise Linux 5.1
Configuring and Managing a Red Hat Cluster describes the configuration and management of Red Hat cluster systems for Red Hat Enterprise Linux 5.1. It does not include information about Linux Virtual Server (LVS). Information about installing and configuring LVS is in a separate document.
Configuring and Managing a Red Hat Cluster: Red Hat Cluster for Red Hat Enterprise Linux 5.1. Copyright © Red Hat, Inc. This material may only be distributed subject to the terms and conditions set forth in the Open Publication License, V1.0 or later, with the restrictions noted below (the latest version of the OPL is presently available at http://www.opencontent.org/openpub/).
Introduction This document provides information about installing, configuring, and managing Red Hat Cluster components. Red Hat Cluster components are part of Red Hat Cluster Suite and allow you to connect a group of computers (called nodes or members) to work together as a cluster. This document does not include information about installing, configuring, and managing Linux Virtual Server (LVS) software. Information about that is in a separate document.
• Global File System: Configuration and Administration — Provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System). • Using Device-Mapper Multipath — Provides information about using the Device-Mapper Multipath feature of Red Hat Enterprise Linux 5. • Using GNBD with Global File System — Provides an overview on using Global Network Block Device (GNBD) with Red Hat GFS.
Italic Courier font represents a variable, such as an installation directory: install_dir/bin/. Bold font represents application programs and text found on a graphical interface. When shown like this: OK, it indicates a button on a graphical application interface. Additionally, the manual uses different strategies to draw your attention to pieces of information.
If you spot a typo, or if you have thought of a way to make this manual better, we would love to hear from you. Please submit a report in Bugzilla (http://bugzilla.redhat.com/bugzilla/) against the component Documentation-cluster. Be sure to mention the manual's identifier: Cluster_Administration RHEL 5.1 (2008-01-10T14:58). By mentioning this manual's identifier, we know exactly which version of the guide you have.
Chapter 1. Red Hat Cluster Configuration and Management Overview Red Hat Cluster allows you to connect a group of computers (called nodes or members) to work together as a cluster. You can use Red Hat Cluster to suit your clustering needs (for example, setting up a cluster for sharing files on a GFS file system or setting up service failover). 1. Configuration Basics To set up a cluster, you must connect the nodes to certain cluster hardware and configure the nodes into the cluster environment.
Chapter 1. Red Hat Cluster Configuration and Management Overview • Fibre Channel switch — A Fibre Channel switch provides access to Fibre Channel storage. Other options are available for storage according to the type of storage interface; for example, iSCSI or GNBD. A Fibre Channel switch can be configured to perform fencing. • Storage — Some type of storage is required for a cluster. The type required depends on the purpose of the cluster. Figure 1.1. Red Hat Cluster Hardware Overview 1.2.
Figure 1.2, “Cluster Configuration Structure” shows an example of the hierarchical relationship among cluster nodes, high-availability services, and resources. The cluster nodes are connected to one or more fencing devices. Nodes can be grouped into a failover domain for a cluster service. The services comprise resources such as NFS exports, IP addresses, and shared GFS partitions.
Figure 1.2. Cluster Configuration Structure
A brief overview of each configuration tool is provided in the following sections: • Section 2, “Conga” • Section 3, “system-config-cluster Cluster Administration GUI” • Section 4, “Command Line Administration Tools” In addition, information about using Conga and system-config-cluster is provided in subsequent chapters of this document. Information about the command line tools is available in the man pages for the tools.
2. Conga
Conga can manage storage on computers whether they belong to a cluster or not. To administer a cluster or storage, an administrator adds (or registers) a cluster or a computer to a luci server. When a cluster or a computer is registered with luci, the FQDN hostname or IP address of each computer is stored in a luci database. You can populate the database of one luci instance from another luci instance.
Figure 1.3. luci homebase Tab
Figure 1.4.
Figure 1.5. luci storage Tab
3. system-config-cluster Cluster Administration GUI
This section provides an overview of the cluster administration graphical user interface (GUI) available with Red Hat Cluster Suite — system-config-cluster. It is for use with the cluster infrastructure and the high-availability service management components. system-config-cluster consists of two major functions: the Cluster Configuration Tool and the Cluster Status Tool.
While system-config-cluster provides several convenient tools for configuring and managing a Red Hat Cluster, the newer tool, Conga, provides greater convenience and flexibility. 3.1. Cluster Configuration Tool You can access the Cluster Configuration Tool (Figure 1.6, “Cluster Configuration Tool”) through the Cluster Configuration tab in the Cluster Administration GUI.
Figure 1.6. Cluster Configuration Tool
The Cluster Configuration Tool represents cluster configuration components in the configuration file (/etc/cluster/cluster.conf) with a hierarchical graphical display in the left panel. A triangle icon to the left of a component name indicates that the component has one or more subordinate components assigned to it. Clicking the triangle icon expands and collapses the portion of the tree below a component.
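For orientation, the following is a minimal sketch of the kind of hierarchy the tool displays, shown as the contents of a hypothetical /etc/cluster/cluster.conf; the cluster, node, fence device, and service names are placeholders, not values from this guide:
# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster name="example_cluster" config_version="1">
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="1">
          <device name="apc-switch" port="1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.example.com" nodeid="2">
      <fence>
        <method name="1">
          <device name="apc-switch" port="2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="apc-switch" agent="fence_apc" ipaddr="10.0.0.1" login="admin" passwd="password"/>
  </fencedevices>
  <rm>
    <failoverdomains/>
    <service name="example_service" autostart="1"/>
  </rm>
</cluster>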
Chapter 1. Red Hat Cluster Configuration and Management Overview recovery policy for the service. Services are represented as subordinate elements under Services. Using configuration buttons at the bottom of the right frame (below Properties), you can create services (when Services is selected) or edit service properties (when a service is selected). 3.2. Cluster Status Tool You can access the Cluster Status Tool (Figure 1.
The nodes and services displayed in the Cluster Status Tool are determined by the cluster configuration file (/etc/cluster/cluster.conf). You can use the Cluster Status Tool to enable, disable, restart, or relocate a high-availability service.
4. Command Line Administration Tools
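As a brief, non-exhaustive sketch (consult each tool's man page for authoritative options), the command-line tools include utilities such as the following; the service name my_service is a placeholder:
# clustat                                      # display the status of the cluster and its services
# clusvcadm -e my_service                      # enable (start) a high-availability service
# clusvcadm -d my_service                      # disable (stop) a high-availability service
# cman_tool status                             # show cluster membership and quorum information
# ccs_tool update /etc/cluster/cluster.conf    # propagate an updated configuration file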
Chapter 2. Before Configuring a Red Hat Cluster
Table 2.1, “Enabled IP Ports on Red Hat Cluster Nodes” lists the IP port numbers, their respective protocols, the components to which the port numbers are assigned, and references to iptables rule examples. At each cluster node, enable IP ports according to Table 2.1, “Enabled IP Ports on Red Hat Cluster Nodes”. (All examples are in Section 2.3, “Examples of iptables Rules”.)
If a cluster node is running luci, port 11111 should already have been enabled.
IP Port Number / Protocol / Component / Reference to Example of iptables Rules:
8084 / TCP / luci (Conga user interface server) / Example 2.2, “Port 8084: luci (Cluster Node or Computer Running luci)”
11111 / TCP / ricci (Conga remote agent) / Example 2.3, “Port 11111: ricci (Cluster Node and Computer Running luci)”
Table 2.2. Enabled IP Ports on a Computer That Runs luci
2.3. Examples of iptables Rules
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 11111 -j ACCEPT
Example 2.3. Port 11111: ricci (Cluster Node and Computer Running luci)
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 14567 -j ACCEPT
Example 2.4. Port 14567: gnbd
-A INPUT -i 10.10.10.200 -m state --state NEW -m multiport -p tcp -s 10.10.10.0/24 -d 10.10.10.0/24 --dports 16851 -j ACCEPT
Example 2.5.
10.10.10.0/24 -d 10.10.10.0/24 --dports 50007 -j ACCEPT
Example 2.9. Port 50007: ccsd (UDP)
3. Configuring ACPI For Use with Integrated Fence Devices
If your cluster uses integrated fence devices, you must configure ACPI (Advanced Configuration and Power Interface) to ensure immediate and complete fencing. Note For the most current information about integrated fence devices supported by Red Hat Cluster Suite, refer to http://www.redhat.
disable ACPI Soft-Off with one of the following alternate methods: • Changing the BIOS setting to "instant-off" or an equivalent setting that turns off the node without delay Note Disabling ACPI Soft-Off with the BIOS may not be possible with some computers. • Appending acpi=off to the kernel boot command line of the /boot/grub/grub.conf file Important This method completely disables ACPI; some computers do not boot correctly if ACPI is completely disabled.
• chkconfig --del acpid — This command removes acpid from chkconfig management. — OR — • chkconfig --level 2345 acpid off — This command turns off acpid. 2. Reboot the node. 3. When the cluster is configured and running, verify that the node turns off immediately when fenced. Tip You can fence the node with the fence_node command or Conga. 3.2. Disabling ACPI Soft-Off with the BIOS The preferred method of disabling ACPI Soft-Off is with chkconfig management (Section 3.1, “Disabling ACPI Soft-Off with chkconfig Management”).
Chapter 2. Before Configuring a Red Hat Cluster Note The equivalents to ACPI Function, Soft-Off by PWR-BTTN, and Instant-Off may vary among computers. However, the objective of this procedure is to configure the BIOS so that the computer is turned off via the power button without delay. 4. Exit the BIOS CMOS Setup Utility program, saving the BIOS configuration. 5. When the cluster is configured and running, verify that the node turns off immediately when fenced.
3.3. Disabling ACPI Completely in the grub.conf File The preferred method of disabling ACPI Soft-Off is with chkconfig management (Section 3.1, “Disabling ACPI Soft-Off with chkconfig Management”). If the preferred method is not effective for your cluster, you can disable ACPI Soft-Off with the BIOS power management (Section 3.2, “Disabling ACPI Soft-Off with the BIOS”).
title Red Hat Enterprise Linux Server (2.6.18-36.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-36.el5 ro root=/dev/VolGroup00/LogVol00 console=ttyS0,115200n8 acpi=off
        initrd /initrd-2.6.18-36.el5.img
In this example, acpi=off has been appended to the kernel boot command line — the line starting with "kernel /vmlinuz-2.6.18-36.el5".
Example 2.11. Kernel Boot Command Line with acpi=off Appended to It
Important Overall, heuristics and other qdiskd parameters for your Red Hat Cluster depend on the site environment and special requirements needed. To understand the use of heuristics and other qdiskd parameters, refer to the qdisk(5) man page. If you require assistance understanding and using qdiskd for your site, contact an authorized Red Hat support representative.
Chapter 2. Before Configuring a Red Hat Cluster Note Using JBOD as a quorum disk is not recommended. A JBOD cannot provide dependable performance and therefore may not allow a node to write to it quickly enough. If a node is unable to write to a quorum disk device quickly enough, the node is falsely evicted from a cluster. 6. Multicast Addresses Red Hat Cluster nodes communicate among each other using multicast addresses.
General Configuration Considerations No-single-point-of-failure hardware configuration Clusters can include a dual-controller RAID array, multiple bonded network channels, multiple paths between cluster members and storage, and redundant un-interruptible power supply (UPS) systems to ensure that no single failure results in application down time or loss of data. Alternatively, a low-cost cluster can be set up to provide less availability than a no-single-point-of-failure cluster.
Chapter 3. Configuring Red Hat Cluster With Conga
9. Configuring storage. Refer to Section 10, “Configuring Cluster Storage”.
2. Starting luci and ricci
To administer Red Hat Clusters with Conga, install and run luci and ricci as follows:
1. At each node to be administered by Conga, install the ricci agent. For example:
# yum install ricci
2. At each node to be administered by Conga, start ricci. For example:
# service ricci start
Starting ricci:                                            [  OK  ]
3.
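Install and initialize the luci server on the computer that will host it. The following is a minimal sketch of those commands; luci_admin init prompts interactively for an administrator password, and the package name assumes the standard Conga packages:
# yum install luci
# luci_admin init          # set the luci admin password and initialize the luci server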
Restart the Luci server for changes to take effect
eg. service luci restart
5. Start luci using service luci restart. For example:
# service luci restart
Shutting down luci:                                        [  OK  ]
Starting luci: generating https SSL certificates...  done
                                                           [  OK  ]
Please, point your web browser to https://nano-01:8084 to access luci
6. At a Web browser, place the URL of the luci server into the URL address box and click Go (or the equivalent).
Chapter 3. Configuring Red Hat Cluster With Conga A progress page shows the progress of those actions for each node in the cluster. When the process of creating a new cluster is complete, a page is displayed providing a configuration interface for the newly created cluster. 4. Global Cluster Properties When a cluster is created, or if you select a cluster to configure, a cluster-specific page is displayed. The page provides an interface for configuring cluster-wide properties and detailed properties.
Global Cluster Properties Note For more information about Post-Join Delay and Post-Fail Delay, refer to the fenced(8) man page. 3. Multicast tab — This tab provides an interface for configuring these Multicast Configuration parameters: Let cluster choose the multicast address and Specify the multicast address manually. Red Hat Cluster software chooses a multicast address for cluster management communication among cluster nodes; therefore, the default setting is Let cluster choose the multicast address.
Use a Quorum Partition: Enables quorum partition. Enables quorum-disk parameters in the Quorum Partition tab.
Interval: The frequency of read/write cycles, in seconds.
Votes: The number of votes the quorum daemon advertises to CMAN when it has a high enough score.
TKO: The number of cycles a node must miss to be declared dead.
Minimum Score: The minimum score for a node to be considered "alive".
5. Configuring Fence Devices
Tip If you are creating a new cluster, you can create fence devices when you configure cluster nodes. Refer to Section 6, “Configuring Cluster Members”. With Conga you can create shared and non-shared fence devices.
Chapter 3. Configuring Red Hat Cluster With Conga • Creating shared fence devices — Refer to Section 5.1, “Creating a Shared Fence Device”. The procedures apply only to creating shared fence devices. You can create non-shared (and shared) fence devices while configuring nodes (refer to Section 6, “Configuring Cluster Members”). • Modifying or deleting fence devices — Refer to Section 5.2, “Modifying or Deleting a Fence Device”. The procedures apply to both shared and non-shared fence devices.
Modifying or Deleting a Fence Device Figure 3.1. Fence Device Configuration 3. At the Add a Sharable Fence Device page, click the drop-down box under Fencing Type and select the type of fence device to configure. 4. Specify the information in the Fencing Type dialog box according to the type of fence device. Refer to Appendix B, Fence Device Parameters for more information about fence device parameters. 5. Click Add this shared fence device. 6.
Chapter 3. Configuring Red Hat Cluster With Conga 5.2. Modifying or Deleting a Fence Device To modify or delete a fence device, follow these steps: 1. At the detailed menu for the cluster (below the clusters menu), click Shared Fence Devices. Clicking Shared Fence Devices causes the display of the fence devices for a cluster and causes the display of menu items for fence device configuration: Add a Fence Device and Configure a Fence Device. 2. Click Configure a Fence Device.
Creating a cluster consists of selecting a set of nodes (or members) to be part of the cluster. Once you have completed the initial step of creating a cluster and creating fence devices, you need to configure cluster nodes. To initially configure cluster nodes after creating a new cluster, follow the steps in this section.
4. Click Submit. Clicking Submit causes the following actions: a. Cluster software packages to be downloaded onto the added node. b. Cluster software to be installed (or verification that the appropriate software packages are installed) onto the added node. c. Cluster configuration file to be updated and propagated to each node in the cluster — including the added node. d. Joining the added node to the cluster.
1. Click the link of the node to be deleted. Clicking the link of the node to be deleted causes a page to be displayed for that link showing how that node is configured. Note To allow services running on a node to fail over when the node is deleted, skip the next step. 2. Disable or relocate each service that is running on the node to be deleted: Note Repeat this step for each service that needs to be disabled or started on another node. a.
Chapter 3. Configuring Red Hat Cluster With Conga be started (either manually or by the cluster software). • Unordered — When a cluster service is assigned to an unordered failover domain, the member on which the cluster service runs is chosen from the available failover domain members with no priority ordering. • Ordered — Allows you to specify a preference order among the members of a failover domain.
7.1. Adding a Failover Domain To add a failover domain, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from Choose a cluster to administer displayed on the cluster tab. 1. At the detailed menu for the cluster (below the clusters menu), click Failover Domains.
1. At the detailed menu for the cluster (below the clusters menu), click Failover Domains. Clicking Failover Domains causes the display of failover domains with related services and the display of menu items for failover domains: Add a Failover Domain and Configure a Failover Domain. 2. Click Configure a Failover Domain.
Adding Cluster Resources 9. To make additional changes to the failover domain, continue modifications at the Failover Domain Form page and click Submit when you are done. 8. Adding Cluster Resources To add a cluster resource, follow the steps in this section. The starting point of the procedure is at the cluster-specific page that you navigate to from Choose a cluster to administer displayed on the cluster tab. 1. At the detailed menu for the cluster (below the clusters menu), click Resources.
Chapter 3. Configuring Red Hat Cluster With Conga File System ID — When creating a new file system resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you click Submit at the File System Resource Configuration dialog box. If you need to assign a file system ID explicitly, specify it in this field.
Adding a Cluster Service to the Cluster Options — Additional client access rights. For more information, refer to the exports(5) man page, General Options NFS Export Name — Enter a name for the NFS export resource. Script Name — Enter a name for the custom user script. File (with path) — Enter the path where this custom script is located (for example, /etc/init.d/userscript) Samba Service Name — Enter a name for the Samba server.
Chapter 3. Configuring Red Hat Cluster With Conga service must be started manually any time the cluster comes up from the stopped state. Tip Use a descriptive name that clearly distinguishes the service from other services in the cluster. 4. Add a resource to the service; click Add a resource to this service. Clicking Add a resource to this service causes the display of two drop-down boxes: Add a new local resource and Use an existing global resource.
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1356 qdisc pfifo_fast qlen 1000
    link/ether 00:05:5d:9a:d8:91 brd ff:ff:ff:ff:ff:ff
    inet 10.11.4.31/22 brd 10.11.7.255 scope global eth0
    inet6 fe80::205:5dff:fe9a:d891/64 scope link
    inet 10.11.4.240/22 scope global secondary eth0
       valid_lft forever preferred_lft forever
10. Configuring Cluster Storage
Chapter 3. Configuring Red Hat Cluster With Conga • Hard Drives • Partitions • Volume Groups Each section is set up as an expandable tree, with links to property sheets for specific devices, partitions, and storage entities. Configure the storage for your cluster to suit your cluster requirements. If you are configuring Red Hat GFS, configure clustered logical volumes first, using CLVM. For more information about CLVM and GFS refer to Red Hat documentation for those products.
Chapter 4. Managing Red Hat Cluster With Conga This chapter describes various administrative tasks for managing a Red Hat Cluster and consists of the following sections: • Section 1, “Starting, Stopping, and Deleting Clusters” • Section 2, “Managing Cluster Nodes” • Section 3, “Managing High-Availability Services” • Section 4, “Diagnosing and Correcting Problems in a Cluster” 1. Starting, Stopping, and Deleting Clusters
Chapter 4. Managing Red Hat Cluster With Conga Selecting Start this cluster starts cluster software. • Delete this cluster — Selecting this action halts a running cluster, disables cluster software from starting automatically, and removes the cluster configuration file from each node. You can select this action for any state the cluster is in. Deleting a cluster frees each node in the cluster for use in another cluster. 2. Select one of the functions and click Go. 3.
• Have node leave cluster/Have node join cluster — Have node leave cluster is available when a node has joined a cluster. Have node join cluster is available when a node has left a cluster. Selecting Have node leave cluster shuts down cluster software and makes the node leave the cluster. Making a node leave a cluster prevents the node from automatically joining the cluster when it is rebooted.
Chapter 4. Managing Red Hat Cluster With Conga 2. At the right of each service listed on the page, click the Choose a task drop-down box. Clicking Choose a task drop-down box reveals the following selections depending on if the service is running: • If service is running — Configure this service, Restart this service, and Stop this service. • If service is not running — Configure this service, Start this service, and Delete this service.
Chapter 5. Configuring Red Hat Cluster With system-config-cluster
Chapter 5. Configuring Red Hat Cluster With system-config-cluster 3. Creating fence devices. Refer to Section 4, “Configuring Fence Devices”. 4. Creating cluster members. Refer to Section 5, “Adding and Deleting Members”. 5. Creating failover domains. Refer to Section 6, “Configuring a Failover Domain”. 6. Creating resources. Refer to Section 7, “Adding Cluster Resources”. 7. Creating cluster services. Refer to Section 8, “Adding a Cluster Service to the Cluster”. 8.
Starting the Cluster Configuration Tool Figure 5.1. Starting a New Configuration File Note The Cluster Management tab for the Red Hat Cluster Suite management GUI is available after you save the configuration file with the Cluster Configuration Tool, exit, and restart the Red Hat Cluster Suite management GUI (system-config-cluster). (The Cluster Management tab displays the status of the cluster service manager, cluster nodes, and resources, and shows statistics concerning cluster service operation.
Chapter 5. Configuring Red Hat Cluster With system-config-cluster dialog box if you enable Use a Quorum disk: Interval, TKO, Votes, Minimum Score, Device, Label, and Quorum Disk Heuristic. Table 5.1, “Quorum-Disk Parameters” describes the parameters. Important Quorum-disk parameters and heuristics depend on the site environment and special requirements needed. To understand the use of quorum-disk parameters and heuristics, refer to the qdisk(5) man page.
Starting the Cluster Configuration Tool Figure 5.2. Creating A New Configuration 4. When you have completed entering the cluster name and other parameters in the New Configuration dialog box, click OK. Clicking OK starts the Cluster Configuration Tool, displaying a graphical representation of the configuration (Figure 5.3, “The Cluster Configuration Tool”).
Figure 5.3. The Cluster Configuration Tool
Use a Quorum Disk: Enables quorum disk. Enables quorum-disk parameters in the New Configuration dialog box.
Interval: The frequency of read/write cycles, in seconds.
TKO: The number of cycles a node must miss in order to be declared dead.
Votes: The number of votes the quorum daemon advertises to CMAN when it has a high enough score.
Device: The quorum disk device; it must be the same on all nodes.
Label: Specifies the quorum disk label created by the mkqdisk utility. If this field contains an entry, the label overrides the Device field. If this field is used, the quorum daemon reads /proc/partitions and checks for qdisk signatures on every block device found, comparing the label against the specified label. This is useful in configurations where the quorum device name differs among nodes.
Quorum Disk Heuristics
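Where a quorum disk label is used, the label is written with the mkqdisk utility. A minimal sketch, assuming a hypothetical shared block device /dev/sdc1 and the label myqdisk:
# mkqdisk -c /dev/sdc1 -l myqdisk    # initialize the device and write the qdisk label
# mkqdisk -L                         # list devices carrying a qdisk label, to verify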
Chapter 5. Configuring Red Hat Cluster With system-config-cluster 5. Specify the Fence Daemon Properties parameters: Post-Join Delay and Post-Fail Delay. a. The Post-Join Delay parameter is the number of seconds the fence daemon (fenced) waits before fencing a node after the node joins the fence domain. The Post-Join Delay default value is 3. A typical setting for Post-Join Delay is between 20 and 30 seconds, but can vary according to cluster and network performance. b.
Adding and Deleting Members Figure 5.4. Fence Device Configuration 2. At the Fence Device Configuration dialog box, click the drop-down box under Add a New Fence Device and select the type of fence device to configure. 3. Specify the information in the Fence Device Configuration dialog box according to the type of fence device. Refer to Appendix B, Fence Device Parameters for more information about fence device parameters. 4. Click OK. 5.
Chapter 5. Configuring Red Hat Cluster With system-config-cluster 2. At the bottom of the right frame (labeled Properties), click the Add a Cluster Node button. Clicking that button causes a Node Properties dialog box to be displayed. The Node Properties dialog box presents text boxes for Cluster Node Name and Quorum Votes (refer to Figure 5.5, “Adding a Member to a New Cluster”). Figure 5.5. Adding a Member to a New Cluster 3. At the Cluster Node Name text box, specify a node name.
box to be displayed. c. At the Fence Configuration dialog box, bottom of the right frame (below Properties), click Add a New Fence Level. Clicking Add a New Fence Level causes a fence-level element (for example, Fence-Level-1, Fence-Level-2, and so on) to be displayed below the node in the left frame of the Fence Configuration dialog box. d. Click the fence-level element. e. At the bottom of the right frame (below Properties), click Add a New Fence to this Level.
Chapter 5. Configuring Red Hat Cluster With system-config-cluster 1. Add the node and configure fencing for it as in Section 5.1, “Adding a Member to a Cluster”. 2. Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster. 3. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node. 4. At the Red Hat Cluster Suite management GUI Cluster Status Tool tab, disable each service listed under Services.
3. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node. 4. Start cluster services on the new node by running the following commands in this order: a. service cman start b. service clvmd start, if CLVM has been used to create clustered volumes c. service gfs start, if you are using Red Hat GFS d. service rgmanager start 5. Start the Red Hat Cluster Suite management GUI.
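As a concrete sketch of steps 3 and 4, assuming the new node is named node3.example.com (a hypothetical name), run the following from one of the existing cluster nodes and then on the new node:
# scp /etc/cluster/cluster.conf root@node3.example.com:/etc/cluster/
# ssh root@node3.example.com
# service cman start
# service clvmd start      # only if CLVM is used for clustered volumes
# service gfs start        # only if Red Hat GFS is used
# service rgmanager start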
Chapter 5. Configuring Red Hat Cluster With system-config-cluster Figure 5.6. Confirm Deleting a Member d. At that dialog box, click Yes to confirm deletion. e. Propagate the updated configuration by clicking the Send to Cluster button. (Propagating the updated configuration automatically saves the configuration.) 4. Stop the cluster software on the remaining running nodes by running the following commands at each node in this order: a. service rgmanager stop b.
Configuring a Failover Domain • Unrestricted — Allows you to specify that a subset of members are preferred, but that a cluster service assigned to this domain can run on any available member. • Restricted — Allows you to restrict the members that can run a particular cluster service. If none of the members in a restricted failover domain are available, the cluster service cannot be started (either manually or by the cluster software).
Chapter 5. Configuring Red Hat Cluster With system-config-cluster • Section 6.1, “Adding a Failover Domain” • Section 6.2, “Removing a Failover Domain” • Section 6.3, “Removing a Member from a Failover Domain” 6.1. Adding a Failover Domain To add a failover domain, follow these steps: 1. At the left frame of the Cluster Configuration Tool, click Failover Domains. 2. At the bottom of the right frame (labeled Properties), click the Create a Failover Domain button.
Adding a Failover Domain Figure 5.7. Failover Domain Configuration: Configuring a Failover Domain 4. Click the Available Cluster Nodes drop-down box and select the members for this failover domain. 5. To restrict failover to members in this failover domain, click (check) the Restrict Failover To This Domains Members checkbox. (With Restrict Failover To This Domains Members checked, services assigned to this failover domain fail over only to nodes in this failover domain.) 6.
Chapter 5. Configuring Red Hat Cluster With system-config-cluster Figure 5.8. Failover Domain Configuration: Adjusting Priority b. For each node that requires a priority adjustment, click the node listed in the Member Node/Priority columns and adjust priority by clicking one of the Adjust Priority arrows. Priority is indicated by the position in the Member Node column and the value in the Priority column.
To remove a failover domain, follow these steps: 1. At the left frame of the Cluster Configuration Tool, click the failover domain that you want to delete (listed under Failover Domains). 2. At the bottom of the right frame (labeled Properties), click the Delete Failover Domain button. Clicking the Delete Failover Domain button causes a warning dialog box to be displayed asking if you want to remove the failover domain.
Chapter 5. Configuring Red Hat Cluster With system-config-cluster • New cluster — If this is a new cluster, choose File => Save to save the changes to the cluster configuration. • Running cluster — If this cluster is operational and running, and you want to propagate the change immediately, click the Send to Cluster button. Clicking Send to Cluster automatically saves the configuration change.
Adding Cluster Resources Options — Mount options. File System ID — When creating a new file system resource, you can leave this field blank. Leaving the field blank causes a file system ID to be assigned automatically after you click OK at the Resource Configuration dialog box. If you need to assign a file system ID explicitly, specify it in this field.
Chapter 5. Configuring Red Hat Cluster With system-config-cluster addresses (with wild-card support), and netgroups. Read-Write and Read Only options — Specify the type of access rights for this NFS client resource: • Read-Write — Specifies that the NFS client has read-write access. The default setting is Read-Write. • Read Only — Specifies that the NFS client has read-only access. Options — Additional client access rights.
Adding a Cluster Service to the Cluster 2. At the bottom of the right frame (labeled Properties), click the Create a Service button. Clicking Create a Service causes the Add a Service dialog box to be displayed. 3. At the Add a Service dialog box, type the name of the service in the Name text box and click OK. Clicking OK causes the Service Management dialog box to be displayed (refer to Figure 5.9, “Adding a Cluster Service”).
Chapter 5. Configuring Red Hat Cluster With system-config-cluster a Failover Domain” for instructions on how to configure a failover domain.) 5. Autostart This Service checkbox — This is checked by default. If Autostart This Service is checked, the service is started automatically when a cluster is started and running. If Autostart This Service is not checked, the service must be started manually any time the cluster comes up from stopped state. 6.
9. If needed, you may also create a private resource that becomes a subordinate resource, by clicking on the Attach a new Private Resource to the Selection button. The process is the same as creating a shared resource described in Section 7, “Adding Cluster Resources”. The private resource will appear as a child of the shared resource with which you associated it.
Chapter 5. Configuring Red Hat Cluster With system-config-cluster Propagating the cluster configuration file this way is necessary for the first time a cluster is created. Once a cluster is installed and running, the cluster configuration file is propagated using the Red Hat cluster management GUI Send to Cluster button. For more information about propagating the cluster configuration using the GUI Send to Cluster button, refer to Section 3, “Modifying the Cluster Configuration”. 10.
Chapter 6. Managing Red Hat Cluster With system-config-cluster
3. service clvmd stop, if CLVM has been used to create clustered volumes 4. service cman stop Stopping the cluster services on a member causes its services to fail over to an active member. 2. Managing High-Availability Services You can manage cluster services with the Cluster Status Tool (Figure 6.1, “Cluster Status Tool”) through the Cluster Management tab in Cluster Administration GUI.
Figure 6.1. Cluster Status Tool
Modifying the Cluster Configuration You can use the Cluster Status Tool to enable, disable, restart, or relocate a high-availability service. The Cluster Status Tool displays the current cluster status in the Services area and automatically updates the status every 10 seconds. To enable a service, you can select the service in the Services area and click Enable. To disable a service, you can select the service in the Services area and click Disable.
3. Modifying the Cluster Configuration To modify the cluster configuration (the cluster configuration file, /etc/cluster/cluster.conf), use the Cluster Configuration Tool. For more information about using the Cluster Configuration Tool, refer to Chapter 5, Configuring Red Hat Cluster With system-config-cluster. Warning Do not manually edit the contents of the /etc/cluster/cluster.conf file.
3. Clicking Send to Cluster causes a Warning dialog box to be displayed. Click Yes to save and propagate the configuration. 4. Clicking Yes causes an Information dialog box to be displayed, confirming that the current configuration has been propagated to the cluster. Click OK. 5. Click the Cluster Management tab and verify that the changes have been propagated to the cluster members. 4. Backing Up and Restoring the Cluster Database
6. Clicking File => Save As causes the system-config-cluster dialog box to be displayed. 7. At the system-config-cluster dialog box, select /etc/cluster/cluster.conf and click OK. (Verify the file selection in the Selection box.) 8. Clicking OK causes an Information dialog box to be displayed. At that dialog box, click OK. 9. Propagate the updated configuration file throughout the cluster by clicking Send to Cluster.
# chkconfig --level 2345 gfs on
# chkconfig --level 2345 clvmd on
# chkconfig --level 2345 cman on
You can then reboot the member for the changes to take effect or run the following commands in the order shown to restart cluster software: 1. service cman start 2. service clvmd start, if CLVM has been used to create clustered volumes 3. service gfs start, if you are using Red Hat GFS 4. service rgmanager start
Appendix A. Example of Setting Up Apache HTTP Server This appendix provides an example of setting up a highly available Apache HTTP Server on a Red Hat Cluster. The example describes how to set up a service to fail over an Apache HTTP Server. Variables in the example apply to this example only; they are provided to assist setting up a service that suits your requirements. Note This example uses the Cluster Configuration Tool (system-config-cluster).
systems from accessing the same data simultaneously, which may result in data corruption. Therefore, do not include the file systems in the /etc/fstab file. 2. Configuring Shared Storage To set up the shared file system resource, perform the following tasks as root on one cluster system: 1. On one cluster node, use the interactive parted utility to create a partition to use for the document root directory.
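A minimal sketch of this storage preparation, assuming a hypothetical shared device /dev/sdb, an ext3 file system, and the default document root (the device name and size are placeholders):
# parted /dev/sdb mkpart primary ext3 0 1024     # create a partition on the shared device
# mkfs -t ext3 /dev/sdb1                         # create the file system
# mkdir -p /var/www/html
# mount /dev/sdb1 /var/www/html                  # test mount only; do not add this to /etc/fstab
# umount /var/www/html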
1. Edit the /etc/httpd/conf/httpd.conf configuration file and customize the file according to your configuration. For example: • Specify the directory that contains the HTML files. Also specify this mount point when adding the service to the cluster configuration. It is only required to change this field if the mount point for the web site's content differs from the default setting of /var/www/html/.
Before the service is added to the cluster configuration, ensure that the Apache HTTP Server directories are not mounted. Then, on one node, invoke the Cluster Configuration Tool to add the service, as follows. This example assumes a failover domain named httpd-domain was created for this service. 1. Add the init script for the Apache HTTP Server service. • Select the Resources tab and click Create a Resource.
• Click Create a Service. Type a Name for the service in the Add a Service dialog. • In the Service Management dialog, select a Failover Domain from the drop-down menu or leave it as None. • Click the Add a Shared Resource to this service button. From the available list, choose each resource that you created in the previous steps. Repeat this step until all resources have been added. • Click OK. 6. Choose File => Save to save your changes.
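Once the configuration has been saved and propagated, the service can be checked and exercised from the command line. A brief sketch, assuming the service was named httpd-service and a second node named node2.example.com (both hypothetical):
# clustat                                            # verify that the new service is listed and started
# clusvcadm -e httpd-service                         # enable (start) the service if it is not running
# clusvcadm -r httpd-service -m node2.example.com    # relocate the service to test failover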
Appendix B. Fence Device Parameters This appendix provides tables with parameter descriptions of fence devices. Note Certain fence devices have an optional Password Script parameter. The Password Script parameter allows specifying that a fence-device password is supplied from a script rather than from the Password parameter. Using the Password Script parameter supersedes the Password parameter, allowing passwords to not be visible in the cluster configuration file (/etc/cluster/cluster.conf).
IP Address: The IP address assigned to the PAP console.
Login: The login name used to access the PAP console.
Password: The password used to authenticate the connection to the PAP console.
Password Script (optional): The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Domain: Domain of the Bull PAP system to power cycle.
Table B.3.
Table B.6. GNBD (Global Network Block Device)
Name: A name for the server with HP iLO support.
Hostname: The hostname assigned to the device.
Login: The login name used to access the device.
Password: The password used to authenticate the connection to the device.
Password Script (optional): The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.7.
Login: The login name of a user capable of issuing power on/off commands to the given IPMI port.
Password: The password used to authenticate the connection to the IPMI port.
Password Script (optional): The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Authentication Type: none, password, md2, or md5.
Use Lanplus: True or 1. If blank, then value is False.
Table B.10.
Port: The switch outlet number.
Table B.13. RPS-10 Power Switch (two-node clusters only)
Name: A name for the SANBox2 device connected to the cluster.
IP Address: The IP address assigned to the device.
Login: The login name used to access the device.
Password: The password used to authenticate the connection to the device.
Password Script (optional): The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Name: A name for the WTI power switch connected to the cluster.
IP Address: The IP address assigned to the device.
Password: The password used to authenticate the connection to the device.
Password Script (optional): The script that supplies a password for access to the fence device. Using this supersedes the Password parameter.
Table B.18.
Appendix C. Upgrading A Red Hat Cluster from RHEL 4 to RHEL 5 This appendix provides a procedure for upgrading a Red Hat cluster from RHEL 4 to RHEL 5. The procedure also includes the changes required for Red Hat GFS and CLVM. For more information about Red Hat GFS, refer to Global File System: Configuration and Administration. For more information about LVM for clusters, refer to LVM Administrator's Guide: Configuration and Administration.
• GULM — Run service lock_gulmd stop. f. Run service ccsd stop. 3. Disable cluster software from starting during reboot. At each node, run /sbin/chkconfig as follows:
# chkconfig --level 2345 rgmanager off
# chkconfig --level 2345 gfs off
# chkconfig --level 2345 clvmd off
# chkconfig --level 2345 fenced off
# chkconfig --level 2345 cman off
# chkconfig --level 2345 ccsd off
4. Edit the cluster configuration file as follows: a.
# gfs_tool sb /dev/my_vg/gfs1 proto lock_dlm
You shouldn't change any of these values if the filesystem is mounted.
Are you sure? [y/n] y
current lock protocol name = "lock_gulm"
new lock protocol name = "lock_dlm"
Done
6. Update the software in the cluster nodes to RHEL 5 and Red Hat Cluster Suite for RHEL 5. You can acquire and update software through Red Hat Network channels for RHEL 5 and Red Hat Cluster Suite for RHEL 5. 7. Run lvmconf --enable-cluster. 8. Enable cluster software to start upon reboot.
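A minimal sketch of enabling the cluster software at boot on RHEL 5, assuming the same set of services referenced earlier in this guide (omit gfs and clvmd if you do not use Red Hat GFS or CLVM):
# chkconfig --level 2345 cman on
# chkconfig --level 2345 clvmd on
# chkconfig --level 2345 gfs on
# chkconfig --level 2345 rgmanager on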