Dell|EMC AX4-5 Fibre Channel Storage Arrays With Microsoft® Windows Server® Failover Clusters
Hardware Installation and Troubleshooting Guide
www.dell.com | support.dell.com
Notes, Notices, and Cautions

NOTE: A NOTE indicates important information that helps you make better use of your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
CAUTION: A CAUTION indicates a potential for property damage, personal injury, or death.

Information in this document is subject to change without notice.
© 2008 Dell Inc. All rights reserved.
Contents

1 Introduction
   Cluster Solution
   Cluster Storage
      Direct-Attached Cluster
      SAN-Attached Cluster
   Other Documents You May Need

2 Cabling Your Cluster Hardware

3 Preparing Your Systems for Clustering
   Cluster Configuration Overview
   Installation Overview
   Installing the Fibre Channel HBAs
   Installing EMC PowerPath
   Implementing Zoning on a Fibre Channel Switched Fabric
      Using Worldwide Port Name Zoning
   Installing and Configuring the Shared Storage System
Introduction

A failover cluster combines specific hardware and software components to provide enhanced availability for applications and services that are run on the cluster. A failover cluster is designed to reduce the possibility of any single point of failure within the system that can cause the clustered applications or services to become unavailable.
Cluster Solution

Your cluster supports a minimum of two nodes and a maximum of either eight nodes (for Windows Server 2003) or sixteen nodes (for Windows Server 2008), and provides the following features:
• 8-Gbps, 4-Gbps, and 2-Gbps Fibre Channel technologies
• High availability of resources to network clients
• Redundant paths to the shared storage
• Failure recovery for applications and services
• Flexible maintenance capabilities, allowing you to repair, maintain, or upgrade a node or storage system without taking the entire cluster offline
Cluster Nodes

Table 1-1 lists the hardware requirements for the cluster nodes.

Table 1-1. Cluster Node Requirements
Component: Cluster nodes
Minimum Requirement: A minimum of two identical Dell™ PowerEdge™ servers are required. The maximum number of nodes that is supported depends on the variant of the Windows Server operating system used in your cluster, and on the physical topology in which the storage system and nodes are interconnected.
Cluster Storage

Cluster nodes can share access to external storage systems. However, only one of the nodes can own any redundant array of independent disks (RAID) volume in the external storage system at any time. Microsoft Cluster Service (MSCS) controls which node has access to each RAID volume in the shared storage system. Table 1-2 lists supported storage systems and the configuration requirements for the cluster nodes and stand-alone systems connected to the storage systems.
Table 1-3 lists hardware requirements for the disk processor enclosure (DPE), disk array enclosure (DAE), and standby power supply (SPS).

Table 1-3. Dell|EMC Storage System Requirements
Storage System: AX4-5
Minimum Required Storage: 1 DPE with at least 4 and up to 12 hard drives
Possible Storage Expansion: Up to 3 DAEs with a maximum of 12 hard drives each
SPS: 1 is required; the second SPS is optional

NOTE: Ensure that the core software version running on the storage system is supported by Dell.
Figure 1-1. Direct-Attached, Single-Cluster Configuration (two cluster nodes on the public network, joined by a private network, each with Fibre Channel connections to the storage system)

EMC PowerPath Limitations in a Direct-Attached Cluster

EMC PowerPath provides failover capabilities and multiple path detection as well as dynamic load balancing between multiple ports on the same storage processor.
Figure 1-2. SAN-Attached Cluster (two cluster nodes on the public network, joined by a private network, connected through redundant Fibre Channel switches to the storage system)

Other Documents You May Need

CAUTION: The safety information that is shipped with your system provides important safety and regulatory information. Warranty information may be included within this document or as a separate document.
• The Dell Cluster Configuration Support Matrices provide a list of recommended operating systems, hardware components, and driver or firmware versions for your Dell Windows Server Failover Cluster.
• The HBA documentation provides installation instructions for the HBAs.
• Systems management software documentation describes the features, requirements, installation, and basic operation of the software.
Cabling Your Cluster Hardware

NOTE: To configure Dell blade server modules in a Dell™ PowerEdge™ cluster, see the Using Dell Blade Servers in a Dell PowerEdge High Availability Cluster document located on the Dell Support website at support.dell.com.

Cabling the Mouse, Keyboard, and Monitor

When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes.
Figure 2-1. Power Cabling Example With One Power Supply in the PowerEdge Systems and One Standby Power Supply (SPS) in an AX4-5 Storage System (the figure shows the primary power supplies on one AC power strip [or on one AC PDU, not shown], the redundant power supplies on another, and the SPS)

NOTE: This illustration is intended only to demonstrate the power distribution of the components.
Figure 2-2. Power Cabling Example With Two Power Supplies in the PowerEdge Systems and Two SPSs in an AX4-5 Storage System (the figure shows the primary power supplies on one AC power strip [or on one AC PDU, not shown] and the redundant power supplies on another)

Cabling Your Cluster for Public and Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node, as described in Table 2-1.
Table 2-1. Network Connections
Public network: All connections to the client LAN. At least one public network must be configured for Mixed mode for private network failover.
Private network: A dedicated connection for sharing cluster health and status information only.
Cabling the Public Network

Any network adapter supported by a system running TCP/IP may be used to connect to the public network segments. You can install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.

Cabling the Private Network

The private network connection to the nodes is provided by a different network adapter in each node. This network is used for intra-cluster communications.
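The public and private networks must not share an IP subnet; the troubleshooting appendix notes that the node-to-node and public networks require static addresses on different subnets. The following is a minimal sketch of how that separation can be checked from one node. The addresses shown are placeholders, not values from this guide.

```python
# Minimal sketch: confirm that the public and private cluster networks use
# static addresses on different IP subnets. Both interface addresses below
# are placeholders; substitute the addresses assigned to your node.
import ipaddress

public_if = ipaddress.ip_interface("192.168.10.11/24")   # example public NIC address
private_if = ipaddress.ip_interface("10.0.0.1/24")       # example private NIC address

if public_if.network == private_if.network:
    print("WARNING: public and private adapters share a subnet; "
          "assign them static addresses on different subnets.")
else:
    print(f"OK: public subnet {public_if.network} and "
          f"private subnet {private_if.network} are separate.")
```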
NIC Teaming

NIC teaming combines two or more NICs to provide load balancing and fault tolerance. Your cluster supports NIC teaming, but only in a public network; NIC teaming is not supported in a private network. You should use the same brand of NICs in a team, and you cannot mix brands of teaming drivers.
Figure 2-4. Direct-Attached Cluster Configuration (two cluster nodes on the public network, joined by a private network, each with Fibre Channel connections to the storage system)

Each cluster node attaches to the storage system using two fibre optic cables with duplex local connector (LC) multimode connectors that attach to the HBA ports in the cluster nodes and the storage processor (SP) ports in the Dell|EMC storage system.
Cabling a Two-Node Cluster to an AX4-5 Storage System

1 Connect cluster node 1 to the storage system.
   a Install a cable from cluster node 1 HBA port 0 to SP-A Fibre port 0 (first fibre port).
   b Install a cable from cluster node 1 HBA port 1 to SP-B Fibre port 0 (first fibre port).
2 Connect cluster node 2 to the storage system.
   a Install a cable from cluster node 2 HBA port 0 to SP-A Fibre port 1 (second fibre port).
   b Install a cable from cluster node 2 HBA port 1 to SP-B Fibre port 1 (second fibre port).

The sketch following these steps summarizes the four connections.
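Writing the connections out as a short checklist makes it easy to confirm, before power-on, that each node reaches both storage processors. This is an illustrative sketch only; the port labels simply mirror the steps above.

```python
# Illustrative checklist of the two-node direct-attached cabling described in
# the steps above. Each node is cabled to both storage processors, so the loss
# of a single HBA, cable, or SP does not remove every path to the storage.
CABLING = [
    ("Cluster node 1", "HBA port 0", "SP-A Fibre port 0"),
    ("Cluster node 1", "HBA port 1", "SP-B Fibre port 0"),
    ("Cluster node 2", "HBA port 0", "SP-A Fibre port 1"),
    ("Cluster node 2", "HBA port 1", "SP-B Fibre port 1"),
]

for node, hba_port, sp_port in CABLING:
    print(f"{node}: {hba_port} -> {sp_port}")
```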
Cabling Storage for Your SAN-Attached Cluster

A SAN-attached cluster is a cluster configuration where all cluster nodes are attached to a single storage system or to multiple storage systems through a SAN using a redundant switch fabric. SAN-attached cluster configurations provide more flexibility, expandability, and performance than direct-attached configurations. Figure 2-6 shows an example of a two-node SAN-attached cluster. Figure 2-7 shows an example of an eight-node SAN-attached cluster.
Figure 2-7. Eight-Node SAN-Attached Cluster (eight cluster nodes on public and private networks, connected through redundant Fibre Channel switches to the storage system)

Each HBA port is cabled to a port on a Fibre Channel switch. One or more cables connect from the outgoing ports on a switch to a storage processor on a Dell|EMC storage system.
Cabling a SAN-Attached Cluster to an AX4-5 Storage System

1 Connect cluster node 1 to the SAN.
   a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).
   b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).
2 Repeat step 1 for each cluster node.
3 Connect the storage system to the SAN.
   a Connect a cable from Fibre Channel switch 0 (sw0) to SP-A Fibre port 0 (first fibre port).
   b Connect a cable from Fibre Channel switch 0 (sw0) to SP-B Fibre port 1 (second fibre port).
Cabling Multiple SAN-Attached Clusters to the AX4-5 Storage System

1 In the first cluster, connect cluster node 1 to the SAN.
   a Connect a cable from HBA port 0 to Fibre Channel switch 0 (sw0).
   b Connect a cable from HBA port 1 to Fibre Channel switch 1 (sw1).
2 In the first cluster, repeat step 1 for each node.
3 For each additional cluster, repeat step 1 and step 2.
4 Connect the storage system to the SAN.
   a Connect a cable from Fibre Channel switch 0 (sw0) to SP-A Fibre port 0 (first fibre port).
• MSCS is limited to 22 drive letters. Because drive letters A through D are reserved for local disks, a maximum of 22 drive letters (E to Z) can be used for your storage system disks (see the short check after this list).
• Windows Server 2003 and 2008 support mount points, allowing more than 22 drives per cluster.
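The drive-letter count in the first bullet can be verified with a one-line check; this is simply a quick piece of arithmetic, not part of any Dell or Microsoft tooling.

```python
# Quick check of the drive-letter arithmetic: with A through D reserved for
# local disks, the letters available for shared storage run from E through Z.
import string

available = [letter for letter in string.ascii_uppercase if letter > "D"]
print(available)       # ['E', 'F', ..., 'Z']
print(len(available))  # 22
```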
Figure 2-10 shows a supported PowerEdge cluster configuration using redundant Fibre Channel switches and a tape library. In this configuration, each of the cluster nodes can access the tape library to provide backup for your local disk resources, as well as your cluster disk resources. Using this configuration allows you to add more servers and storage systems in the future, if needed.

NOTE: While tape libraries can be connected to multiple fabrics, they do not provide path failover.
Configuring Your Cluster With SAN Backup

You can provide centralized backup for your clusters by sharing your SAN with multiple clusters, storage systems, and a tape library. Figure 2-11 provides an example of cabling the cluster nodes to your storage systems and SAN backup with a tape library.
Preparing Your Systems for Clustering

CAUTION: Only trained service technicians are authorized to remove and access any of the components inside the system. See the safety information that is shipped with your system for complete information about safety precautions, working inside the computer, and protecting against electrostatic discharge.

Cluster Configuration Overview

1 Ensure that your site can handle the cluster's power requirements.
5 Configure each cluster node as a member in the same Windows Active Directory domain.

NOTE: You can configure the cluster nodes as Domain Controllers. For more information, see the "Selecting a Domain Model" section of the Dell Failover Clusters with Microsoft Windows Server 2003 Installation and Troubleshooting Guide or the Dell Failover Clusters with Microsoft Windows Server 2008 Installation and Troubleshooting Guide located on the Dell Support website at support.dell.com.
12 Configure highly available applications and services on your failover cluster. Depending on your configuration, this may also require providing additional LUNs to the cluster or creating new cluster resource groups. Test the failover capabilities of the new resources.
13 Configure client systems to access the highly available applications and services that are hosted on your failover cluster (a short client-side check is sketched after this list).
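As part of step 13, it can be useful to confirm from a client system that a clustered service actually answers on the cluster's virtual IP address. The sketch below assumes a hypothetical cluster IP address resource and service port; substitute the values used by your own cluster resource group.

```python
# Minimal client-side sketch: confirm that a clustered service answers on the
# cluster's virtual IP address. Both values below are placeholders.
import socket

CLUSTER_IP = "192.168.10.50"   # hypothetical cluster IP address resource
SERVICE_PORT = 445             # hypothetical port used by the clustered service

try:
    with socket.create_connection((CLUSTER_IP, SERVICE_PORT), timeout=5):
        print(f"Reached {CLUSTER_IP}:{SERVICE_PORT}; the clustered service is reachable.")
except OSError as exc:
    print(f"Could not reach {CLUSTER_IP}:{SERVICE_PORT}: {exc}")
```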
Installing the Fibre Channel HBAs

For dual-HBA configurations, it is recommended that you install the Fibre Channel HBAs on separate peripheral component interconnect (PCI) buses. Placing the adapters on separate buses improves availability and performance. See the Dell Cluster Configuration Support Matrices located on the Dell High Availability Clustering website at www.dell.com/ha for more information about your system's PCI bus configuration and supported HBAs.
6 In the CLARiiON AX-Series window, make your selection and click Next. Follow the on-screen instructions to complete the installation.
7 Click Yes to reboot the system.

Implementing Zoning on a Fibre Channel Switched Fabric

A Fibre Channel switched fabric consists of one or more Fibre Channel switches that provide high-speed connections between servers and storage devices.
Table 3-1 provides a list of WWN identifiers that you can find in the Dell|EMC cluster environment.
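Before the zones are entered on the switches, it can help to lay out the single-initiator, WWPN-based zoning as data, one zone per HBA port. The sketch below is illustrative only: every WWPN and zone name is a placeholder, and you would substitute the identifiers recorded from your own HBAs and storage processor ports (for example, on the Zoning Configuration Form at the end of this guide).

```python
# Illustrative layout of single-initiator, WWPN-based zones. All WWPNs and
# zone names below are placeholders; substitute the values recorded from
# your HBAs and SP fibre ports.
HBA_WWPNS = {
    "node1_hba0": "10:00:00:00:c9:00:00:01",
    "node1_hba1": "10:00:00:00:c9:00:00:02",
    "node2_hba0": "10:00:00:00:c9:00:00:03",
    "node2_hba1": "10:00:00:00:c9:00:00:04",
}
SP_PORT_WWPNS = [
    "50:06:01:60:00:00:00:01",  # SP-A fibre port (placeholder)
    "50:06:01:68:00:00:00:01",  # SP-B fibre port (placeholder)
]

# One zone per HBA port: each zone contains a single initiator (the HBA)
# plus the storage processor ports that the HBA must reach.
zones = {f"zone_{hba}": [wwpn] + SP_PORT_WWPNS for hba, wwpn in HBA_WWPNS.items()}

for name, members in zones.items():
    print(name, "->", ", ".join(members))
```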
• Each host can be connected to a maximum of four storage systems.
• The integrated bridge/SNC or Fibre Channel interface on a tape library can be added to any zone.

Installing and Configuring the Shared Storage System

NOTE: You must configure the network settings and create a user account to manage the AX4-5 storage system from the network.
2 To initialize the storage system:
   a Select Start→Programs→EMC→Navisphere→Navisphere Storage System Initialization.
   b Read the license agreement, click I accept, and then click Next.
   c From the Uninitialized Systems list, select the storage system to be initialized, and click Next.
   d Follow the on-screen instructions to complete the initialization.
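Once the management network settings mentioned in the note above are in place, a quick connectivity check from the management station can confirm that the storage system's management address responds before you open the browser-based management interface. This is a hedged sketch: the address is a placeholder and the port is assumed to be the standard HTTP port, which may not match your configuration.

```python
# Minimal sketch: check that the AX4-5 management address answers on an
# assumed web management port. The address is a placeholder and the port
# is an assumption; adjust both to match your environment.
import socket

STORAGE_MGMT_IP = "192.168.1.100"  # placeholder management address
MGMT_PORT = 80                     # assumed HTTP management port

try:
    with socket.create_connection((STORAGE_MGMT_IP, MGMT_PORT), timeout=5):
        print(f"{STORAGE_MGMT_IP} responds on port {MGMT_PORT}.")
except OSError as exc:
    print(f"Cannot reach {STORAGE_MGMT_IP}:{MGMT_PORT}: {exc}")
```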
Installing Navisphere Server Utility

The Navisphere Server Utility registers the cluster node HBAs with the storage systems, allowing the nodes to access the cluster storage data.
Assigning the Virtual Disks to Cluster Nodes

NOTE: For best practice, have at least one virtual disk for each application. If multiple NTFS partitions are created on a single LUN or virtual disk, these partitions will not be able to fail over individually from node to node.

To perform data I/O to the virtual disks, assign the virtual disks to a cluster node by performing the following steps:
1 Open a Web browser.
2 Type the storage system IP address in the Address field.
Snapshot Management

Snapshot Management captures images of a virtual disk and retains the image independently of subsequent changes to the files. The images can be used to share virtual disks with another system without affecting the contents of the source virtual disk. Snapshot Management creates copies of either virtual disks or snapshots. Snapshots are virtual copies that create an image of the source virtual disk at the time the snapshot was created.
Installing and Configuring a Failover Cluster

You can configure the operating system services on your Dell Windows Server failover cluster after you have established the private and public networks and have assigned the shared disks from the storage array to the cluster nodes. The procedures for configuring the failover cluster differ depending on the Windows Server operating system you use.
Troubleshooting

This appendix provides troubleshooting information for your cluster configuration. Table A-1 describes general cluster problems you may encounter and the probable causes and solutions for each problem.

Table A-1. General Cluster Troubleshooting

Problem: The nodes cannot access the storage system, or the cluster software is not functioning with the storage system.
Problem: One of the nodes takes a long time to join the cluster, or one of the nodes fails to join the cluster.
Probable Cause: The node-to-node network has failed due to a cabling or hardware failure. Long delays in node-to-node communications may be normal. One or more nodes may have the Internet Connection Firewall enabled, blocking Remote Procedure Call (RPC) communications between the nodes.
Problem: Attempts to connect to a cluster using Cluster Administrator fail.
Probable Cause: The Cluster Service has not been started. A cluster has not been formed on the system. The system has just been booted and services are still starting.
Corrective Action: Verify that the Cluster Service is running and that a cluster has been formed.
Problem: You are prompted to configure one network instead of two during MSCS installation.
Probable Cause: The TCP/IP configuration is incorrect.
Corrective Action: The node-to-node network and public network must be assigned static IP addresses on different subnets.
Problem: Unable to add a node to the cluster.
Probable Cause: The new node cannot access the shared disks. The shared disks are enumerated by the operating system differently on the cluster nodes.
Corrective Action: Ensure that the new cluster node can enumerate the cluster disks using Windows Disk Administration. If the disks do not appear in Disk Administration, check the following:
Problem: Cluster Services does not operate correctly on a cluster running Windows Server 2003 with the Internet Connection Firewall enabled.
Probable Cause: The Windows Internet Connection Firewall is enabled, which may conflict with Cluster Services.
Corrective Action: Perform the following steps:
1 On the Windows desktop, right-click My Computer and click Manage.
2 In the Computer Management window, double-click Services.
3 In the Services window, double-click Cluster Services.
Problem: Public network clients cannot access the applications or services that are provided by the cluster.
Probable Cause: One or more nodes may have the Internet Connection Firewall enabled, blocking RPC communications between the nodes.
Corrective Action: Configure the Internet Connection Firewall to allow communications that are required by the MSCS and the clustered applications or services.
Cluster Data Form

You can attach the following form in a convenient location near each cluster node or rack to record information about the cluster. Use the form when you call for technical support.

Table B-1. Cluster Information
Cluster Solution:
Cluster name and IP address:
Server type:
Installer:
Date installed:
Applications:
Location:
Notes:

Additional Networks:
Zoning Configuration Form

Node HBA WWPNs or Alias Names | Storage WWPNs or Alias Names | Zone Name | Zone Set for Configuration Name
Index

C
cable configurations
   cluster interconnect, 17
   for client networks, 17
   for mouse, keyboard, and monitor, 13
   for power supplies, 13
cluster configurations
   connecting to multiple shared storage systems, 25
   connecting to one shared storage system, 9
   direct-attached, 9, 18
   SAN-attached, 10
cluster storage
   requirements, 8
clustering
   overview, 5

D
direct-attached cluster
   about, 18
drivers
   installing and configuring Emulex, 32

E
Emulex HBAs
   installing and configuring, 32
   installing and configuring drivers

M
monitor
   cabling, 13
mouse
   cabling, 13
MSCS
   installing and configuring, 40

N
Navisphere Manager
   about, 39
   hardware view, 39
   storage view, 39
network adapters
   cabling the private network, 16-17
   cabling the public network, 17

P
power supplies
   cabling, 13
private network
   cabling, 15, 17
   hardware components, 17
   hardware components and connections, 17
public network
   cabling, 15

S
SAN
   configuring SAN backup in your cluster, 28
SAN-attached cluster
   about, 21
   configurations, 9
single initiator zoning

W
warranty, 11
worldwide port name zoning, 33

Z
zones
   implementing on a Fibre Channel switched fabric, 33
   using worldwide port names, 33