Dell|EMC AX4-5i iSCSI Storage Arrays With Microsoft® Windows Server® Failover Clusters
Hardware Installation and Troubleshooting Guide
www.dell.com | support.dell.com
Notes, Notices, and Cautions
NOTE: A NOTE indicates important information that helps you make better use of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
CAUTION: A CAUTION indicates a potential for property damage, personal injury, or death.
Information in this document is subject to change without notice.
© 2008 Dell Inc. All rights reserved.
Contents
1 Introduction
    Cluster Solution
    Cluster Storage
    NICs Dedicated to iSCSI
    Ethernet Switches Dedicated to iSCSI
    Supported Cluster Configurations
    Direct-Attached Cluster
2 Cabling Your Cluster Hardware
3 Preparing Your Systems for Clustering
    Cluster Configuration Overview
    Installation Overview
    Installing the iSCSI NICs
    Installing the Microsoft iSCSI Software Initiator
    Configuring the Shared Storage System
A Troubleshooting
B Cluster Data Form
Introduction
A Dell™ Failover Cluster combines specific hardware and software components to provide enhanced availability for applications and services that run on your cluster. A Failover Cluster reduces the possibility of any single point of failure within the system that can cause the clustered applications or services to become unavailable.
Cluster Solution
Your cluster supports a minimum of two nodes to a maximum of either eight nodes (with Windows Server 2003 operating systems) or sixteen nodes (with Windows Server 2008 operating systems) and provides the following features:
• Gigabit Ethernet technology for iSCSI clusters
• High availability of resources to network clients
• Redundant paths to the shared storage
• Failure recovery for applications and services
• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a cluster node or storage system without taking the entire cluster offline
Cluster Nodes
Table 1-1 lists the hardware requirements for the cluster nodes.
Table 1-1. Cluster Node Requirements
Component: Cluster nodes
Minimum Requirement: A minimum of two identical PowerEdge servers are required. The maximum number of nodes that is supported depends on the variant of the Windows Server operating system used in your cluster, and on the physical topology in which the storage system and nodes are interconnected.
Cluster Storage
Cluster nodes can share access to external storage systems. However, only one of the nodes can own any RAID volume in the external storage system at any time. Microsoft Cluster Service (MSCS) controls which node has access to each RAID volume in the shared storage system. Table 1-2 lists the supported storage systems and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems.
Table 1-2.
Table 1-3. Dell|EMC Storage System Requirements
Processor Enclosure: AX4-5i
Minimum Required Storage: One DPE with at least 4 and up to 12 hard drives
Possible Storage Expansion: Up to three DAEs with a maximum of 12 hard drives each
SPS: 1 (required) and 2 (optional)
NOTE: Ensure that the core software version running on the storage system is supported. For specific version requirements, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Cluster website at www.dell.com/ha.
Figure 1-1. Direct-Attached, Single-Cluster Configuration
[Figure callouts: public network, cluster node, cluster node, private network, iSCSI connections, storage system]
EMC PowerPath Limitations in a Direct-Attached Cluster
EMC PowerPath provides failover capabilities and multiple path detection as well as dynamic load balancing between multiple ports on the same storage processor.
Figure 1-2. iSCSI SAN-Attached Cluster
[Figure callouts: public network, cluster node, cluster node, private network, Ethernet switch, Ethernet switch, iSCSI connections, storage system]
Other Documents You May Need
CAUTION: For important safety and regulatory information, see the safety information that shipped with your system. Warranty information may be included within this document or as a separate document.
• The Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide provides more information on deploying your cluster with the Windows Server 2008 operating system.
• The Dell Cluster Configuration Support Matrices provides a list of recommended operating systems, hardware components, and driver or firmware versions for your Failover Cluster.
• Operating system documentation describes how to install (if necessary), configure, and use the operating system software.
Cabling Your Cluster Hardware
NOTE: To configure Dell blade server modules in a Dell™ PowerEdge™ cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.
Cabling the Mouse, Keyboard, and Monitor
When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems and One SPS in the AX4-5i Storage Array
[Figure callouts: primary power supplies on one AC power strip (or on one AC PDU, not shown); SPS; redundant power supplies on one AC power strip (or on one AC PDU, not shown)]
NOTE: This illustration is intended only to demonstrate the power distribution of the components.
Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems and Two SPSs in the AX4-5i Storage Array
[Figure callouts: primary power supplies on one AC power strip (or on one AC PDU, not shown); SPS; redundant power supplies on one AC power strip (or on one AC PDU, not shown)]
NOTE: This illustration is intended only to demonstrate the power distribution of the components.
Table 2-1. Network Connections
Public network: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.
Private network: A dedicated connection for sharing cluster health and status information only.
Cabling the Private Network
The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications. Table 2-2 describes three possible private network configurations.
Table 2-2.
Cabling the Storage Systems
This section provides information for connecting your cluster to a storage system in a direct-attached configuration, or to one or more storage systems in an iSCSI SAN-attached configuration. Connect the management port on each storage processor to the network on which the management station resides, using an Ethernet network cable.
Each cluster node attaches to the storage system using CAT5e or CAT6 LAN cables with RJ45 connectors that attach to Gigabit Ethernet NICs in the cluster nodes and the Gigabit iSCSI storage processor (SP) ports in the Dell|EMC storage system. NOTE: The connections listed in this section are representative of one proven method of ensuring redundancy in the connections between the cluster nodes and the storage system. Other methods that achieve the same type of redundant connectivity may be acceptable.
Figure 2-5. Cabling the Cluster Nodes to an AX4-5i Storage System
[Figure callouts: cluster node 1, cluster node 2, Gigabit Ethernet ports (2), SP-A, SP-B, AX4-5i storage array]
Cabling Storage for Your iSCSI SAN-Attached Cluster
An iSCSI SAN-attached cluster is a cluster configuration where all cluster nodes are attached to a single storage system or to multiple storage systems through a network using a redundant switch fabric.
Figure 2-6.
Figure 2-7.
Cabling One iSCSI SAN-Attached Cluster to a Dell|EMC AX4-5i Storage System
1 Connect cluster node 1 to the iSCSI network.
   a Connect a network cable from iSCSI NIC 0 (or NIC port 0) to the network switch 0 (sw0).
   b Connect a network cable from iSCSI NIC 1 (or NIC port 1) to the network switch 1 (sw1).
2 Repeat step 1 for each cluster node.
3 Connect the storage system to the iSCSI network.
   a Connect a network cable from the network switch 0 (sw0) to SP-A iSCSI port 0.
Figure 2-8.
Cabling Multiple iSCSI SAN-Attached Clusters to a Dell|EMC Storage System
To cable multiple clusters to the storage system, connect the cluster nodes to the appropriate iSCSI switches and then connect the iSCSI switches to the appropriate storage processors on the processor enclosure. For rules and guidelines for iSCSI SAN-attached clusters, see the Dell Cluster Configuration Support Matrices located on the Dell High Availability Cluster website at www.dell.com/ha.
When attaching multiple storage systems with your cluster, the following rules apply:
• There is a maximum of four storage systems per cluster.
• The shared storage systems and firmware must be identical. Using dissimilar storage systems and firmware for your shared storage is not supported.
• MSCS is limited to 22 drive letters. Because drive letters A through D are reserved for local disks, a maximum of 22 drive letters (E to Z) can be used for your storage system disks.
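The drive-letter arithmetic above can be verified with a short illustrative sketch; Python is used here only as a convenience for the calculation and is not part of the cluster installation.

```python
import string

# Drive letters A through D are reserved for local disks, so the letters
# available for shared storage disks run from E through Z.
reserved = set("ABCD")
available = [letter for letter in string.ascii_uppercase if letter not in reserved]

print(available)       # ['E', 'F', ..., 'Z']
print(len(available))  # 22 -- matches the 22 drive letters available to MSCS
```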
Preparing Your Systems for Clustering
CAUTION: Only trained service technicians are authorized to remove and access any of the components inside the system. For complete information about safety precautions, working inside the computer, and protecting against electrostatic discharge, see the safety information that shipped with your system.
Cluster Configuration Overview
1 Ensure that your site can handle the cluster’s power requirements.
4 Establish the physical network topology and the TCP/IP settings for network adapters on each cluster node to provide access to the cluster public and private networks. 5 Configure each cluster node as a member in the same Windows Active Directory Domain. NOTE: You can configure the cluster nodes as Domain Controllers.
11 Test the failover capabilities of your new cluster.
NOTE: For Failover Clusters configured with Windows Server 2008, you can also use the Cluster Validation Wizard.
12 Configure highly available applications and services on your Failover Cluster. Depending on your configuration, this may also require providing additional LUNs to the cluster or creating new cluster resource groups. Test the failover capabilities of the new resources.
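As one illustrative way to exercise the failover test in step 11 from a command prompt, the sketch below drives the cluster.exe command-line tool from Python to move a resource group to another node and display its state. The group name is a placeholder for a group defined in your cluster; on Windows Server 2008 you can instead use the Cluster Validation Wizard or Failover Cluster Management.

```python
import subprocess

# Placeholder name -- replace with a resource group defined in your cluster.
GROUP = "Cluster Group"

def cluster(*args):
    """Run a cluster.exe command and return its console output."""
    result = subprocess.run(["cluster", *args], capture_output=True, text=True, check=True)
    return result.stdout

# Show the current owner and state of the group.
print(cluster("group", GROUP))

# Move the group to another node to exercise failover, then confirm its new owner.
print(cluster("group", GROUP, "/move"))
print(cluster("group", GROUP))
```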
The following sub-sections describe steps that enable you to establish communication between the cluster nodes and your shared Dell|EMC AX4-5i storage array, and to present disks from the storage array to the cluster.
6 Read and accept the license agreement and click Next to install the software.
7 At the completion screen, click Finish to complete the installation.
8 Select the Do not restart now option, so that you can reboot the system after modifying the TCP/IP registry settings in the section "Configuring the Shared Storage System" on page 32.
Modifying the TCP Registry Settings
To modify the TCP Registry:
1 Determine the IP addresses or the DHCP IP addresses that are used for iSCSI traffic.
2 Start the Registry Editor.
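The registry procedure above is abbreviated in this copy. Purely as an illustration of where the per-interface TCP/IP keys live, the sketch below uses Python's standard winreg module to locate the interface keys that match the iSCSI NIC addresses identified in step 1. The value name and data written at the end are placeholders, so substitute the exact setting called for in the full procedure; the script must be run with administrative rights.

```python
import winreg  # standard library; available on Windows only

ISCSI_IPS = {"172.16.0.11", "172.16.1.11"}   # placeholder iSCSI NIC addresses from step 1
INTERFACES = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

def read_values(key):
    """Return all value name/data pairs stored under an open registry key."""
    values, index = {}, 0
    while True:
        try:
            name, data, _ = winreg.EnumValue(key, index)
        except OSError:
            return values
        values[name] = data
        index += 1

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, INTERFACES) as root:
    for i in range(winreg.QueryInfoKey(root)[0]):
        guid = winreg.EnumKey(root, i)
        with winreg.OpenKey(root, guid, 0, winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
            values = read_values(key)
            addresses = set(values.get("IPAddress", [])) | set(values.get("DhcpIPAddress", "").split())
            if addresses & ISCSI_IPS:
                # Placeholder value name and data: write whatever DWORD the full procedure specifies.
                winreg.SetValueEx(key, "ExampleTcpSetting", 0, winreg.REG_DWORD, 1)
                print("Updated interface", guid, "for", addresses & ISCSI_IPS)
```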
4 In the Choose Language Setup screen, select the required language, and click OK.
5 In the Welcome window of the setup wizard, click Next.
6 In the CLARiiON AX-series window, select PowerPath and click Next. Follow the on-screen instructions to complete the installation.
7 Click Yes to reboot the system.
5 Follow the on-screen instructions to complete the installation.
6 To initialize the storage system:
   a From the cluster node or management station, launch the Navisphere Storage System Initialization Utility that you installed. Go to Start→ Programs→ EMC→ Navisphere→ Navisphere Storage System Initialization.
   b Read the license agreement, click I accept, and then click Next.
   c From the Uninitialized Systems list, select the storage system to be initialized, and click Next.
Configuring the Navisphere Server Utility
The Navisphere Server Utility registers the cluster node NICs with the storage systems, allowing the nodes to access the cluster storage data.
Configuring the iSCSI Initiator
Configuring the iSCSI Initiator using iSNS
iSNS includes an iSNS server component and an iSNS client component. The iSNS server must reside within the IP storage network on a host or in the switch firmware. An iSNS client resides on both the iSCSI storage system and any iSCSI systems connected to the storage system. iSNS provides the following services:
• Name registration and discovery services – Targets and initiators register their attributes and addresses.
• Discovery domains and login control service – Resources in a typical storage network are divided into manageable groups called discovery domains. Discovery domains help scale the storage network by reducing the number of unnecessary logins; each initiator logs in only to the subset of targets that are within its domain. Each target can use Login Control to subordinate its access control policy to the iSNS server.
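The procedure that follows uses the Navisphere Server Utility to discover targets through the iSNS server. As an alternative illustration only, the sketch below registers an iSNS server address directly with the Microsoft iSCSI Software Initiator by wrapping its iscsicli command-line tool in Python; the iSNS server address is a placeholder for the one on your IP storage network.

```python
import subprocess

ISNS_SERVER = "172.16.0.50"  # placeholder -- the address of your iSNS server

def iscsicli(*args):
    """Invoke the Microsoft iSCSI Software Initiator command-line interface."""
    return subprocess.run(["iscsicli", *args], capture_output=True, text=True, check=True).stdout

# Register the iSNS server with the initiator, then list the targets it reports.
print(iscsicli("AddiSNSServer", ISNS_SERVER))
print(iscsicli("ListTargets"))
```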
To connect to the storage system:
1 On the cluster node, open the Navisphere Server Utility.
2 Select Configure iSCSI Connections on this cluster node and click Next.
3 Select Configure iSCSI Connections and click Next.
4 In the iSCSI Targets and Connections window, select Discover iSCSI targets using this iSNS server to send a request to the iSNS server for all connected iSCSI storage-system targets, and click Next.
Configuring the iSCSI Initiator without iSNS
On the cluster node:
1 Open the Navisphere Server Utility.
2 Select Configure iSCSI Connections on this cluster node and click Next.
3 Select Configure iSCSI Connections and click Next.
4 In the iSCSI Targets and Connections window, select one of the following options to discover the iSCSI target ports on the connected storage systems:
   – Discover iSCSI targets on this subnet – Scans the current subnet for all connected iSCSI storage-system targets.
7 Click Next. If the Network Interfaces (NICs) window is displayed, go to step 8. If the Server Registration window is displayed, go to step 9.
8 In the Network Interfaces (NICs) window:
   a Deselect any NICs that are used for general network traffic and click Apply.
   b Click OK and then click Next.
9 In the Server Registration window, click Next to send the updated information to the storage system.
10 Click Finish to close the wizard.
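For reference, the discovery and login performed by the wizard above can be approximated from a command prompt. The sketch below is illustrative only: it wraps the Microsoft iSCSI Software Initiator's iscsicli tool in Python to add the storage processor iSCSI ports as target portals and log in to the targets it discovers. All portal addresses are placeholders, the parsing of the ListTargets output is an assumption about its format, and you still need to register the server with the storage system as described in this section.

```python
import subprocess

# Placeholder portal addresses for the SP-A and SP-B iSCSI ports on the AX4-5i.
SP_PORTALS = ["172.16.0.100", "172.16.1.100", "172.16.0.101", "172.16.1.101"]

def iscsicli(*args):
    """Invoke the Microsoft iSCSI Software Initiator command-line interface."""
    return subprocess.run(["iscsicli", *args], capture_output=True, text=True, check=True).stdout

# Add each storage processor iSCSI port as a target portal.
for portal in SP_PORTALS:
    iscsicli("QAddTargetPortal", portal)

# List the target names the initiator has discovered and log in to each one.
discovered = [line.strip() for line in iscsicli("ListTargets").splitlines()
              if line.strip().lower().startswith("iqn.")]
for target in discovered:
    print(iscsicli("QLoginTarget", target))
```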
Advanced Storage Features (Optional)
Your Dell|EMC AX4-5i storage array may be configured to provide advanced features that can be used with your cluster. These features include Snapshot Management, SANCopy, Navisphere Manager, and MirrorView. The following sections describe these features.
Snapshot Management
Snapshot Management captures images of a virtual disk and retains the image independently of subsequent changes to the files.
Installing and Configuring a Failover Cluster
You can configure the operating system services on your Failover Cluster after you have established the private and public networks and have assigned the shared disks from the storage array to the cluster nodes. The procedures for configuring the Failover Cluster differ depending on the Windows Server operating system you use.
Troubleshooting
This appendix provides troubleshooting information for your cluster configuration. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.
Table A-1. General Cluster Troubleshooting
Problem: The nodes cannot access the storage system, or the cluster software is not functioning with the storage system.
Table A-1. General Cluster Troubleshooting (continued)
Probable Cause: One or more nodes may have the Internet Connection Firewall enabled, blocking Remote Procedure Call (RPC) communications between the nodes.
Corrective Action: Configure the Internet Connection Firewall to allow communications that are required by the Microsoft® Cluster Service (MSCS) and the clustered applications or services.
Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable Cause: The Cluster Service has not been started.
Table A-1. General Cluster Troubleshooting (continued)
Probable Cause: The cluster network name is not responding on the network because the Internet Connection Firewall is enabled on one or more nodes.
Corrective Action: Configure the Internet Connection Firewall to allow communications that are required by MSCS and the clustered applications or services.
Probable Cause: The TCP/IP configuration is incorrect.
Table A-1. General Cluster Troubleshooting (continued)
Problem: Unable to add a node to the cluster.
Probable Cause: The new node cannot access the shared disks. The shared disks are enumerated by the operating system differently on the cluster nodes.
Corrective Action: Ensure that the new cluster node can enumerate the cluster disks using Windows Disk Administration. If the disks do not appear in Disk Administration, check the following:
Table A-1. General Cluster Troubleshooting (continued)
Problem: Cluster Services does not operate correctly on a cluster running Windows Server 2003 with the Internet Firewall enabled.
Probable Cause: The Windows Internet Connection Firewall is enabled, which may conflict with Cluster Services.
Corrective Action: Perform the following steps:
1 On the Windows desktop, right-click My Computer and click Manage.
2 In the Computer Management window, double-click Services.
3 In the Services window, double-click Cluster Services.
Table A-1. General Cluster Troubleshooting (continued)
Problem: Public network clients cannot access the applications or services that are provided by the cluster.
Probable Cause: One or more nodes may have the Internet Connection Firewall enabled, blocking RPC communications between the nodes.
Corrective Action: Configure the Internet Connection Firewall to allow communications that are required by the MSCS and the clustered applications or services.
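Several of the problems in Table A-1 reduce to basic IP reachability between the cluster nodes and the storage processor ports. The sketch below is an illustrative connectivity check you can run from a cluster node; every address in it is a placeholder for the values recorded in your cluster data form and iSCSI configuration worksheet.

```python
import platform
import subprocess

# Placeholder addresses -- substitute the values from your cluster data form and iSCSI worksheet.
ENDPOINTS = {
    "node 2, private network": "10.0.0.2",
    "node 2, public network": "192.168.1.102",
    "SP-A iSCSI port 0": "172.16.0.100",
    "SP-B iSCSI port 0": "172.16.1.100",
    "SP-A management port": "192.168.1.200",
}

# Windows ping uses -n for the packet count; other platforms use -c.
count_flag = "-n" if platform.system() == "Windows" else "-c"

for name, address in ENDPOINTS.items():
    result = subprocess.run(["ping", count_flag, "2", address], capture_output=True, text=True)
    status = "reachable" if result.returncode == 0 else "NOT reachable"
    print(f"{name:25s} {address:15s} {status}")
```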
Cluster Data Form
You can attach the following form in a convenient location near each cluster node or rack to record information about the cluster. Use the form when you call for technical support.
Table B-1. Cluster Configuration Information
Cluster Information / Cluster Solution
Cluster name and IP address:
Server type:
Installer:
Date installed:
Applications:
Location:
Notes:
Table B-2.
Table B-3. Additional Network Information Additional Networks Table B-4.
iSCSI Configuration Worksheet
If you need additional space for more than one host server, use an additional sheet.
For each iSCSI NIC port (Server 1, iSCSI NIC port 0; Server 1, iSCSI NIC port 1; Server 2, iSCSI NIC port 0; Server 2, iSCSI NIC port 1; Server 3, iSCSI NIC port 0; Server 3, iSCSI NIC port 1), record the following:
Static IP address (host server): __ . __ . __ . ___
Subnet: __ . __ . __ . ___
Default Gateway: __ . __ . __ . ___