HP Integrity Servers with Microsoft® Windows Server™ 2003 Cluster Installation and Configuration Guide HP Part Number: 5992-4441 Published: April 2008
© Copyright 2008 Hewlett-Packard Development Company, L.P. Legal Notices Confidential computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor's standard commercial license. The information contained herein is subject to change without notice.
Table of Contents
About This Document
    Intended Audience
    New and Changed Information in This Edition
    Document Organization
List of Figures
1-1 NLB Example
1-2 Single Quorum Example
1-3 MNS Quorum Example
2-1 Example cluster hardware cabling scheme
List of Tables
1-1 Server Cluster and NLB Features
2-1 Installation and Configuration Input
About This Document This document describes how to install and configure clustered computing solutions using HP Integrity servers running Microsoft® Windows Server™ 2003. The document printing date and part number indicate the document’s current edition. The printing date changes when a new edition is printed. Minor changes may be made at reprint without changing the printing date. The document part number changes when extensive changes are made.
User input    Commands and other text that you type.
Command    A command name or qualified command phrase.
Ctrl+x    A key sequence. A sequence such as Ctrl+x indicates that you must hold down the key labeled Ctrl while you press another key or mouse button.
[]    The contents are optional in command line syntax. If the contents are a list separated by |, you must choose one of the items.
{}    The contents are required in command line syntax. If the contents are a list separated by |, you must choose one of the items.
...    The preceding element can be repeated an arbitrary number of times.
|    Separates items in a list of choices.
1 Introduction
This document describes how to install and configure clustered computing solutions using HP Integrity servers running Microsoft Windows Server 2003. The clustering improvements for Microsoft Windows Server 2003, 64-bit Edition (over Microsoft Windows 2000) include the following:
• Larger cluster sizes: the 64-bit Enterprise and Datacenter Editions now support up to eight nodes.
Public network
• One or more public networks can be used as a backup for the private network, and can be used both for internal cluster communication and to host client applications.
• Network adapters, known to the cluster as network interfaces, attach nodes to networks.
Each node tracks the cluster configuration. Every node in the cluster is aware when another node joins or leaves the cluster.
Table 1-1 Server Cluster and NLB Features (continued)

Server Cluster: Supports clusters of up to eight nodes. Requires the use of shared or replicated storage.
NLB: Supports clusters of up to 32 nodes. Does not require any special hardware or software.

Server Cluster
Use a server cluster to provide high availability for mission-critical applications through failover. A server cluster uses a shared-nothing architecture, which means that a resource can be active on only one node in the cluster at any given time.
Figure 1-1 NLB Example

Cluster Terminology
A working knowledge of clustering begins with the definition of some common terms. The following terms are used throughout this document.

Nodes
Individual servers or members of a cluster are referred to as nodes or systems (the terms are used interchangeably). A node can be an active or inactive member of a cluster, depending on whether it is currently online and in communication with the other cluster nodes.
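For example, you can check node membership and state from the command line with the cluster.exe utility included with Windows Server 2003. The cluster name MYCLUSTER and node name NODE1 below are placeholders; substitute your own names:

    rem List the membership status (Up, Down, Paused, or Joining) of every node
    cluster /cluster:MYCLUSTER node

    rem Show the status of a single node
    cluster /cluster:MYCLUSTER node NODE1 /status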
If a resource cannot be brought online or taken offline within a specified amount of time, it is set to the failed state. You can specify the amount of time that the Cluster service waits before failing the resource by setting its pending timeout value in Cluster Administrator. Resource state changes can occur either manually (when you use Cluster Administrator to make a state transition) or automatically (during the failover process).
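The pending timeout can also be inspected and changed with cluster.exe. This is a sketch that assumes a hypothetical resource named Disk Q: on a cluster named MYCLUSTER; PendingTimeout is expressed in milliseconds:

    rem Display all common properties of the resource, including PendingTimeout
    cluster /cluster:MYCLUSTER resource "Disk Q:" /prop

    rem Give the resource four minutes (240,000 ms) to come online or go offline
    rem before it is placed in the failed state
    cluster /cluster:MYCLUSTER resource "Disk Q:" /prop PendingTimeout=240000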
Arbitration
The quorum is used as the tie-breaker to avoid split-brain scenarios. A split-brain scenario occurs when all network communication links between two or more cluster nodes fail. In these cases, the cluster can split into two or more partitions that cannot communicate with each other. The quorum then guarantees that any cluster resource is brought online on one node only.
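To see which resource currently arbitrates the quorum, you can query the cluster from the command line; MYCLUSTER is a placeholder cluster name:

    rem Display the quorum resource, the path to the quorum log, and its maximum size
    cluster /cluster:MYCLUSTER /quorum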
Stateful applications
Applications or Windows services that run as a single instance at any time and must store state information typically use single quorum clusters, because the shared storage already gives them a place to keep that state. Connecting all nodes to a single storage device simplifies transferring control of the data to a backup node. Another advantage is that only one node must remain active for the cluster to function. However, this architecture has several weaknesses.
In the case of a failure or split-brain, all partitions that do not contain an MNS quorum (a majority of the nodes) are terminated. This guarantees that if a partition containing a majority of the nodes is running, it is the only partition in the cluster that is running resources, so it can safely start any resources that are not already running. MNS quorums have strict requirements to ensure that they work correctly: a cluster of n nodes remains operational only while a majority of the nodes, n/2 + 1 (rounding the division down), can communicate. For example, a four-node MNS cluster survives the loss of one node but not two.
Failback
Failback is the process of returning a resource or group of resources to the node on which it was running before it failed over. For example, when node A comes back online, IIS can fail back from node B to node A.
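Failback behavior is set per group through its common properties. As a sketch, assuming a group named IIS Group on a cluster named MYCLUSTER (both hypothetical names):

    rem Allow the group to fail back to its preferred node (0 prevents failback)
    cluster /cluster:MYCLUSTER group "IIS Group" /prop AutoFailbackType=1

    rem Restrict failback to a window between 01:00 and 05:00 so it does not
    rem disrupt peak hours (hours are 0-23; the default of -1 means immediately)
    cluster /cluster:MYCLUSTER group "IIS Group" /prop FailbackWindowStart=1
    cluster /cluster:MYCLUSTER group "IIS Group" /prop FailbackWindowEnd=5

Failback occurs only to a node in the group's preferred owners list, so verify that list (for example, with cluster group "IIS Group" /listowners) before enabling automatic failback.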
2 Administering the Cluster This chapter provides step-by-step installation and configuration directions for HP Integrity clustered systems running Microsoft Windows Server 2003, 64-bit Edition. Verifying Minimum System Requirements To verify that you have all of the required software and firmware and have completed all the necessary setup tasks before beginning your cluster installation, complete the following steps: 1.
9. Verify that you have sufficient administrative rights to install the OS and other software onto each node.
10. Verify that all of the required hardware is properly installed and cabled (see Figure 2-1). For information about best practices for this step, go to:
http://www.microsoft.com/technet/prodtechnol/windowsserver2003/library/ServerHelp/f5abf1f9-1d84-4088-ae54-06da05ac9cb4.mspx
NOTE: Figure 2-1 is an example only. It might not represent the actual cabling required by your system.
11.
Figure 2-1 Example cluster hardware cabling scheme Gathering Required Installation Information Use Table 2-1 to record the input parameters you need to install the OS and configure the cluster. Record the information in the Value column next to each description.
Table 2-1 Installation and Configuration Input (continued)

Input Description: Public network connection, IP address, and subnet mask for each node
Value:
Node 1: IP address: ________  Subnet mask: ________
Node 2: IP address: ________  Subnet mask: ________
Node 3: IP address: ________  Subnet mask: ________
Node 4: IP address: ________  Subnet mask: ________
Node 5: IP address: ________  Subnet mask: ________
Node 6: IP address: ________  Subnet mask: ________
Node 7: IP address: ________  Subnet mask: ________
Node 8: IP address: ________  Subnet mask: ________

Input Description: Private network connection (cluster heartbeat), IP address, and subnet mask for each node
Value:
Node 1: IP address: ________  Subnet mask: ________
(repeat for Node 2 through Node 8)
Table 2-1 Installation and Configuration Input (continued)

Input Description: WWID, slot number, and bus of each FCA
Value:
Node 1: FCA 1 WWID: ________  FCA 1 slot and bus: ________  FCA 2 WWID: ________  FCA 2 slot and bus: ________
Node 2: FCA 1 WWID: ________  FCA 1 slot and bus: ________  FCA 2 WWID: ________  FCA 2 slot and bus: ________
Node 3: FCA 1 WWID: ________  FCA 1 slot and bus: ________  FCA 2 WWID: ________  FCA 2 slot and bus: ________
Node 4: FCA 1 WWID: ________  FCA 1 slot and bus: ________  FCA 2 WWID: ________  FCA 2 slot and bus: ________
Node 5: FCA 1 WWID: ________  FCA 1 slot and bus: ________  FCA 2 WWID: ________  FCA 2 slot and bus: ________
Node 6: FCA 1 WWID: ________  FCA 1 slot and bus: ________  FCA 2 WWID: ________  FCA 2 slot and bus: ________
Node 7: FCA 1 WWID: ________  FCA 1 slot and bus: ________  FCA 2 WWID: ________  FCA 2 slot and bus: ________
Node 8: FCA 1 WWID: ________  FCA 1 slot and bus: ________  FCA 2 WWID: ________  FCA 2 slot and bus: ________
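Much of the network information in Table 2-1 can be read directly from each node. For example:

    rem Record the IP address, subnet mask, and physical (MAC) address
    rem of every network adapter in this node
    ipconfig /all

The FCA worldwide IDs (WWIDs) are typically printed on the adapter itself or reported by the adapter's management utility or the EFI firmware; ipconfig does not report them.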
Configuring the Public and Private Networks
NOTE: Private and public NICs must be configured on different subnets; otherwise, the Cluster service and the Cluster Administrator utility cannot detect the second NIC.
In clustered systems, node-to-node communication occurs across a private network, while client-to-cluster communication occurs across one or more public networks. To review the Microsoft recommendations and best practices for securing your private and public networks, go to:
http://technet2.microsoft.
4. Click the General tab, and be sure that only the Internet Protocol (TCP/IP) checkbox is selected.
5. If you have a network adapter that transmits at multiple speeds, manually specify a speed and duplex mode. Do not use an autoselect setting for speed, because some adapters can drop packets while determining the speed. Hard set the network adapter speed to the same value on all nodes, according to the card manufacturer's specification.
NOTE: If your public network paths are teamed, you must put your teamed connection at the top of the list (instead of the external public network).
8. Repeat Step 1 through Step 7 for each node in the cluster. Be sure to assign a unique IP address to each node while keeping the subnet mask the same for all nodes (a scripted alternative for assigning the addresses is sketched after this list).
9. If you are running multiple public networks (for example, Public-1, Public-2, and so on), repeat Step 1 through Step 8 for each network, until all are configured.
10. Verify network connectivity and name resolution between all nodes.
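If you prefer to script the addressing instead of using the Network Connections GUI, the netsh utility can assign the same static settings. This is a minimal sketch that assumes the private connection has been renamed Private and uses the example heartbeat address 10.10.10.1; substitute your own values from Table 2-1:

    rem Assign a static IP address and subnet mask to the private (heartbeat) NIC;
    rem the private network needs no default gateway
    netsh interface ip set address "Private" static 10.10.10.1 255.255.255.0

    rem The private network should not register with or use DNS
    netsh interface ip set dns "Private" static none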
2. Install and configure your HP StorageWorks MultiPath for Windows software. For an overview and general discussion of the MultiPath software, go to:
http://h18006.www1.hp.com/products/sanworks/secure-path/spwin.html
HP MultiPathing IO (MPIO) Device Specific Module software can be used as an alternative to HP StorageWorks Secure Path to provide multipath support.
NOTE: You must use MultiPath software if more than one host bus adapter (HBA) is installed in each cluster node.
5. Click the Computer Name tab, and click Change.
6. Select Domain Name, and enter the domain name determined by your network administrator.
7. Reboot when prompted, and log into the new domain.
8. Install the MultiPath software on this node. All other nodes should be powered off before completing this step.
Click Start→Programs→Administrative Tools→Computer Management, and select Disk Management.
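To confirm that this node can see the shared storage after the MultiPath software is installed, you can also list the disks from the command line with diskpart (included with Windows Server 2003). A small script file avoids the interactive prompt; the file name listdisks.txt is arbitrary:

    rem listdisks.txt contains the two diskpart commands:
    rem     list disk
    rem     list volume
    diskpart /s listdisks.txt

Each shared LUN should appear exactly once; if a LUN shows up more than once, the multipath software is not yet managing both paths.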
4. In the Action menu list, select Add Nodes to Cluster, and click OK.
5. In the Welcome to Add Nodes wizard, click Next.
6. Under Computer Name, enter the name of the node you want to add, click Add, and then click Next. Cluster analysis begins.
NOTE: You can list all the nodes at the same time by entering the name of each one and clicking Add. This adds all nodes to the cluster in a single step. However, there is a risk with this method.
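After the wizard finishes, you can confirm that every node joined the cluster; MYCLUSTER is a placeholder name:

    rem Each added node should be listed with a status of Up
    cluster /cluster:MYCLUSTER node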
Validating Cluster Operation
To validate your cluster installation, use one or both of the following methods from any node in the cluster.
Method 1: Simulate a Failover
To simulate a failover, complete the following steps:
1. Select Start→Programs→Administrative Tools→Cluster Administrator, and connect to the cluster.
2. If your cluster has only two nodes, right-click one of the cluster groups and select Move Group.
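The same failover test can be run from the command line. This sketch assumes a group named Cluster Group and a target node named NODE2 (placeholder names):

    rem Move the group to another node to force a failover
    cluster /cluster:MYCLUSTER group "Cluster Group" /moveto:NODE2

    rem Confirm that the group came online on the new node
    cluster /cluster:MYCLUSTER group "Cluster Group" /status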
With clustered systems, you can perform maintenance even while users are online. Wait for a convenient, off-peak time when one of the nodes in the cluster can be taken offline for maintenance and its workload distributed among the remaining nodes. Before the upgrade, however, you must evaluate the entire cluster to verify that the remaining nodes can handle the increased workload.
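One way to take a node out of service gracefully is to pause it, which keeps it a cluster member but prevents groups from failing over to it, and then move its workload away. A sketch with placeholder names NODE1, NODE2, and IIS Group:

    rem Stop new groups from being hosted on the node under maintenance
    cluster /cluster:MYCLUSTER node NODE1 /pause

    rem Move its current workload to another node
    cluster /cluster:MYCLUSTER group "IIS Group" /moveto:NODE2

    rem After maintenance is complete, return the node to service
    cluster /cluster:MYCLUSTER node NODE1 /resume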