Dell EqualLogic PS Series iSCSI Storage Arrays With Microsoft Windows Server Failover Clusters Hardware Installation and Troubleshooting Guide
Notes, Cautions, and Warnings

NOTE: A NOTE indicates important information that helps you make better use of your computer.
CAUTION: A CAUTION indicates potential damage to hardware or loss of data if instructions are not followed.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.

Information in this publication is subject to change without notice.
© 2012 Dell Inc. All rights reserved.
Contents

Notes, Cautions, and Warnings
1 Introduction
   Cluster Solution
   Cluster Hardware Requirements
2 Cluster Hardware Cabling
3 Preparing Your Systems For Clustering
4 Troubleshooting
5 Cluster Data Form
6 iSCSI Configuration Worksheet
1 Introduction

A Dell Failover Cluster combines specific hardware and software components to provide enhanced availability for applications and services that run on your cluster. A Failover Cluster reduces the possibility of any single point of failure within the system that can cause the clustered applications or services to become unavailable.
Cluster Hardware Requirements

Your cluster requires the following hardware components:
• Cluster nodes
• Cluster storage

Cluster Nodes

The following section lists the hardware requirements for the cluster nodes.

Component: Cluster nodes
Minimum Requirement: A minimum of two identical Dell PowerEdge systems are required. The maximum number of nodes that are supported depends on the variant of the Windows Server operating system used in your cluster.
Cluster Storage Requirements

Hardware Components: Storage system
Requirement: One or more Dell EqualLogic PS Series groups. Each Dell EqualLogic PS5000/PS5500/PS6000/PS6010/PS6100/PS6110/PS6500/PS6510 group supports up to sixteen storage arrays (members) and each PS4000/PS4100/PS4110 group supports up to two storage arrays.
Network Configuration Recommendations

It is recommended that you follow the guidelines in this section. In addition to these guidelines, all the usual rules for proper network configuration apply to group members.

Recommendation: Network connections between array(s) and hosts
Description: Connect array(s) and hosts to a switched network and ensure that all network connections between hosts and array(s) are Gigabit or 10 Gigabit Ethernet.
Supported Cluster Configurations

iSCSI SAN-Attached Cluster

In an iSCSI switch-attached cluster, all the nodes are attached to a single storage array or to multiple storage arrays through redundant iSCSI SANs for high availability. iSCSI SAN-attached clusters provide superior configuration flexibility, expandability, and performance.

Figure 1. iSCSI SAN-Attached Cluster
1. public network
2. cluster nodes
3. private network
4. iSCSI connections
5. Gigabit or 10 Gigabit Ethernet switches
6. storage system
• The Dell Cluster Configuration Support Matrices provides a list of supported operating systems, hardware components, and driver or firmware versions for your Failover Cluster.
• Operating system documentation describes how to install (if necessary), configure, and use the operating system software.
• Documentation for any hardware and software components you purchased separately provides information to configure and install those options.
2 Cluster Hardware Cabling

This section provides information on cluster hardware cabling.

Mouse, Keyboard, And Monitor Cabling Information

When installing a cluster configuration in a rack, you must include a switch box to connect the mouse, keyboard, and monitor to the nodes. For instructions on cabling the connections of each node to the switch box, see the documentation included with your rack.
Figure 3. Power Cabling Example With Two Power Supplies in the PowerEdge Systems
1. cluster node 1
2. cluster node 2
3. primary power supplies on one AC power strip
4. EqualLogic PS series storage array
5. redundant power supplies on one AC power strip

Cluster Cabling Information For Public And Private Networks

The network adapters in the cluster nodes provide at least two network connections for each node.
Figure 4. Example of Network Cabling Connection
1. public network
2. cluster node 1
3. public network adapter
4. private network adapter
5. private network
6. cluster node 2

For Public Network

Any network adapter supported by a system running TCP/IP may be used to connect to the public network segments. You can install additional network adapters to support additional public network segments or to provide redundancy in the event of a faulty primary network adapter or switch port.
For Private Network

Method: Optical Gigabit or 10 Gigabit Ethernet
Hardware Components: Network adapters with LC connectors
Connection: Connect a multi-mode optical cable between the network adapters in both nodes.

Dual-Port Network Adapters Usage

You can configure your cluster to use the public network as a failover for private network communications. If dual-port network adapters are used, do not use both ports simultaneously to support both the public and private networks.
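After the public and private networks are cabled, you can confirm how the cluster has classified each network. The following is a minimal sketch using the FailoverClusters PowerShell module, assuming Windows Server 2008 R2 or later; the exact network names reported depend on your environment.

    # Minimal sketch, assuming the FailoverClusters module is installed.
    Import-Module FailoverClusters

    # Role 3 = cluster and client (public), 1 = cluster only (private/heartbeat),
    # 0 = excluded from cluster use (typically the iSCSI networks).
    Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask -AutoSize

    # List which physical adapter backs each cluster network on every node.
    Get-ClusterNetworkInterface | Format-Table Node, Network, Adapter -AutoSize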
Figure 5. Two-Node iSCSI SAN-Attached Cluster
1. public network
2. cluster node
3. private network
4. iSCSI connections
5. Gigabit or 10 Gigabit Ethernet switches
6. storage system

Gigabit NICs can access the 10 Gigabit iSCSI ports on the EqualLogic PS4110/PS6010/PS6110/PS6510 storage systems if any one of the following conditions exists:
• The switch supports both Gigabit and 10 Gigabit Ethernet.
Figure 6. Sixteen-Node iSCSI SAN-Attached Cluster
1. public network
2. private network
3. cluster nodes (2-16)
4. Gigabit or 10 Gigabit Ethernet switches
5. storage system

Cabling One iSCSI SAN-Attached Cluster To The Dell EqualLogic PS Series Storage Array(s)
1. Connect cluster node 1 to the iSCSI switches:
   a) Connect a network cable from iSCSI NIC 0 (or NIC port 0) to the network switch 0.
   b) Connect a network cable from iSCSI NIC 1 (or NIC port 1) to the network switch 1.
2.
NOTE: You can use only one of the two 10 Gb Ethernet ports on each control module at a time. With the 10GBASE-T port (left Ethernet 0 port), use CAT6 or better cable. With the SFP+ port (right Ethernet 0 port), use fiber optic cable acceptable for 10GBASE-SR or twinax cable. For more information, see the figures below.

Figure 7. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS4110 Storage Array
1. cluster node 1
2. cluster node 2
3. switch 0
4. switch 1
5.
Figure 8. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6110 Storage Array
1. cluster node 1
2. cluster node 2
3. switch 0
4. switch 1
5. Dell EqualLogic PS6110 storage system
6. control module 0
7. control module 1

Cabling The Dell EqualLogic PS4000/PS4100/PS6010/PS6510 Storage Arrays
1. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 1.
2. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 0.
3. Connect a network cable from the network switch 1 to Ethernet 1 on the control module 1.
4.
Figure 10. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS4100 Storage Array
1. cluster node 1
2. cluster node 2
3. switch 0
4. switch 1
5. Dell EqualLogic PS4100 storage system
6. control module 0
7. control module 1

Figure 11. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6010 Storage Array
1. cluster node 1
2. cluster node 2
3. switch 0
4. switch 1
5. Dell EqualLogic PS6010 storage system
6. control module 1
7. control module 0

Figure 12. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6510 Storage Array
1. cluster node 1
2. cluster node 2
3. switch 0
4. switch 1
5. Dell EqualLogic PS6510 storage system
6. control module 1
7. control module 0

Cabling The Dell EqualLogic PS5000/PS5500 Storage Arrays
1. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 1.
2. Connect a network cable from the network switch 0 to Ethernet 1 on the control module 1.
3.
Figure 13. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS5000 Storage Array
1. cluster node 1
2. cluster node 2
3. switch 0
4. switch 1
5. Dell EqualLogic PS5000 storage system
6. control module 1
7. control module 0

Figure 14. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS5500 Storage Array
1. cluster node 1
2. cluster node 2
3. switch 0
4. switch 1
5. Dell EqualLogic PS5500 storage system
6. control module 1
7. control module 0
Cabling The Dell EqualLogic PS6000/PS6100/PS6500 Storage Arrays
1. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 1.
2. Connect a network cable from the network switch 0 to Ethernet 1 on the control module 1.
3. Connect a network cable from the network switch 1 to Ethernet 2 on the control module 1.
4. Connect a network cable from the network switch 1 to Ethernet 3 on the control module 1.
5.
Figure 16. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6100 Storage Array
1. cluster node 1
2. cluster node 2
3. switch 0
4. switch 1
5. Dell EqualLogic PS6100 storage system
6. control module 0
7. control module 1

Figure 17. Cabling an iSCSI SAN-Attached Cluster to a Dell EqualLogic PS6500 Storage Array
1. cluster node 1
2. cluster node 2
3. switch 0
4. switch 1
5. Dell EqualLogic PS6500 storage system
6. control module 1
7. control module 0

Cabling Multiple iSCSI SAN-Attached Clusters To The Dell EqualLogic PS Series Storage Array(s)

To cable multiple clusters to the storage array(s), connect the cluster nodes to the appropriate iSCSI switches and then connect the iSCSI switches to the control modules on the Dell EqualLogic PS Series storage array(s).
Cabling Multiple iSCSI SAN-Attached Clusters For Dell EqualLogic PS6000/PS6100/PS6500 Storage Arrays
1. Connect a network cable from the network switch 0 to Ethernet 0 on the control module 1.
2. Connect a network cable from the network switch 0 to Ethernet 1 on the control module 1.
3. Connect a network cable from the network switch 1 to Ethernet 2 on the control module 1.
4. Connect a network cable from the network switch 1 to Ethernet 3 on the control module 1.
5.
3 Preparing Your Systems For Clustering

CAUTION: Many repairs may only be done by a certified service technician. You should only perform troubleshooting and simple repairs as authorized in your product documentation, or as directed by the online or telephone service and support team. Damage due to servicing that is not authorized by Dell is not covered by your warranty. Read and follow the safety instructions that came with the product.
12. Configure highly-available applications and services on your Failover Cluster. Depending on your configuration, this may also require providing additional volumes to the cluster or creating new cluster resource groups. Test the failover capabilities of the new resources.
13. Configure client systems to access the highly-available applications and services that are hosted on your Failover Cluster.
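The cluster validation, creation, and failover-testing steps referenced above can also be performed from the command line. The following is a minimal sketch using the FailoverClusters PowerShell module (Windows Server 2008 R2 or later); the node names, cluster name, and IP address are placeholders, not values from this guide.

    # Minimal sketch, assuming the FailoverClusters module is installed.
    Import-Module FailoverClusters

    # Validate the configuration before forming the cluster; review the report.
    Test-Cluster -Node "node1", "node2"

    # Form the cluster with a static administration IP address.
    New-Cluster -Name "EQLCLUSTER" -Node "node1", "node2" -StaticAddress "192.168.10.50"

    # After storage and roles are added, exercise failover of a clustered group.
    Get-ClusterGroup
    Move-ClusterGroup -Name "Cluster Group" -Node "node2"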
NOTE: For more information on using ASM in the cluster, see the Host Integration Tools EqualLogic Auto-Snapshot Manager/Microsoft Edition User Guide at support.dell.com/manuals.
NOTE: ASM is supported in the cluster environment with Host Integration Tools version 3.2 or later.
• VDS Provider — Enables you to use the Microsoft Virtual Disk Service (VDS) and Microsoft Storage Manager for SANs to create and manage volumes in an EqualLogic PS Series group.
Prompt: IP address
Description: Network address for the Ethernet 0 network interface.

Prompt: Netmask
Description: Combines with the IP address to identify the subnet on which the Ethernet 0 network interface resides.

Prompt: Default gateway
Description: Network address for the device used to connect subnets and forward network traffic beyond the local network.
5. Enter the array configuration in the Initialize an Array dialog box. For more information, see Array Configuration.
6. Click a field name link to display help on the field. In addition, choose the option to create a new group. Then, click Next.
7. Enter the group configuration in the Creating a Group dialog box and then click Next. For more information, see Group Configuration. A message is displayed when the array has been initialized.
8. Click OK.
9.
– Click Finish to complete the configuration and exit the wizard.
When you exit the wizard, it configures the group IP address as an iSCSI discovery address on the computer, if not already present. After you join a group, you can use the Group Manager GUI or CLI to create and manage volumes.

Computer Access To A Group

You can use the Remote Setup Wizard to enable Windows computer access to a Dell EqualLogic PS Series group.
1. Start the Remote Setup Wizard.
2. In the Welcome dialog box, select Configure MPIO settings for this computer, and click Next. The Configure MPIO Settings dialog box is displayed.
3. By default, all host adapters on all subnets that are accessible by the PS Series group are configured for multipath I/O. If you want to exclude a subnet, move it from the left panel to the right panel. Also, select whether you want to enable balancing the I/O load across the adapters.
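The Remote Setup Wizard and the Host Integration Tools handle the EqualLogic-specific multipath configuration, including the Dell EqualLogic DSM. As a hedged sketch only, the underlying Windows MPIO feature can be enabled and inspected from PowerShell, assuming Windows Server 2008 R2 or later with the ServerManager module available.

    # Minimal sketch; a restart may be required after enabling the MPIO feature.
    Import-Module ServerManager
    Add-WindowsFeature Multipath-IO

    # mpclaim.exe ships with the MPIO feature; -s -d lists MPIO-managed disks
    # and the DSM that owns each one.
    mpclaim.exe -s -d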
1. Use a web browser and go to the Microsoft Download Center website at microsoft.com/downloads.
2. Search for iscsi initiator.
3. Select and download the latest supported initiator software and related documentation for your operating system.
4. Double-click the executable file. The installation wizard launches.
5. In the Welcome screen, click Next.
6. In the following screens, select the Initiator Service, Software Initiator, and Microsoft MPIO Multipathing Support for iSCSI options.
7.
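The download procedure above applies to operating systems that do not ship with the initiator. On Windows Server 2008 and later, the Microsoft iSCSI initiator is built into the operating system and only the service needs to be enabled; the following PowerShell sketch illustrates that case.

    # Minimal sketch: enable and start the built-in Microsoft iSCSI Initiator service.
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI
    Get-Service -Name MSiSCSI   # should report Running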
• Creating Access Control Records.
• Connecting Hosts to Volumes.
• Advanced Storage Features.

Running The Group Manager GUI

You can use the Group Manager graphical user interface (GUI) to configure the storage array(s) and perform other group administration tasks using one of the following methods:
• Use a Web browser through a standard web connection using HTTP (port 80).
• Install the GUI on a local system and run it as a standalone application.
To run the Group Manager GUI on a web browser:
1.
iSCSI target name that is generated for the volume. Host access to the volume is always through the iSCSI target name, not the volume name.
– Description — The volume description is optional.
– Storage pool — All volume data is restricted to the members that make up the pool. By default, the volume is assigned to the default pool. If multiple pools exist, you can assign the volume to a different pool.
2. Click Next.
The Create Volume – Space Reserve dialog box is displayed.
– Select the Advanced tab.
– Ensure that the Enable shared access to the iSCSI target from multiple initiators check box is selected.

Connecting Hosts To Volumes

This section discusses how to make the proper connection to a PS Series SAN, including adding the Target Portal and connecting to volumes from the host. Using the Microsoft iSCSI Initiator Service, add a Target Portal using the PS Series group IP address if it is not present. This allows the host to discover available targets.
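On Windows Server 2012 or later, the same steps can be scripted with the iSCSI PowerShell cmdlets; on earlier releases, use the iSCSI Initiator control panel or iscsicli.exe as described above. The following is a minimal sketch; the group IP address and target IQN are placeholders, not values from this guide.

    # Minimal sketch (iSCSI module, Windows Server 2012 or later).
    # 1. Add the PS Series group IP address as a discovery (target) portal.
    New-IscsiTargetPortal -TargetPortalAddress "192.168.20.10"

    # 2. List the targets (volumes) the group exposes to this host.
    Get-IscsiTarget | Format-Table NodeAddress, IsConnected -AutoSize

    # 3. Connect to a volume, make the connection persistent across reboots,
    #    and enable multipath I/O for the session.
    Connect-IscsiTarget -NodeAddress "iqn.2001-05.com.equallogic:0-example-volume" `
                        -IsPersistent $true -IsMultipathEnabled $true

    # 4. Verify the session.
    Get-IscsiSession | Format-Table TargetNodeAddress, IsConnected -AutoSize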
CAUTION: If you want to mount the volume of a snapshot, clone, or replica using the Group Manager GUI, mount it to a standalone node or a cluster node in a different cluster. Do not mount the snapshot, clone, or replica of a clustered disk to a node in the same cluster because it has the same disk signature as the original clustered disk. Windows detects two disks with the same disk signature and changes the disk signature on one of them.
Volumes

Cloning a volume creates a new volume with a new name and iSCSI target, having the same size, contents, and Thin Provisioning setting as the original volume. The new volume is located in the same pool as the original volume and is available immediately. Cloning a volume does not affect the original volume, which continues to exist after the cloning operation. A cloned volume consumes 100% of the original volume size from free space in the pool in which the original volume resides.
Replication

Replication enables you to copy volume data across groups, physically located in the same building or separated by some distance. Replication protects the data from failures ranging from destruction of a volume to a complete site disaster, with no impact on data availability or performance. Similar to a snapshot, a replica represents the contents of a volume at a specific point in time. There must be adequate network bandwidth and full IP routing between the groups.
Volume Collections

NOTE: Do not use the Group Manager GUI to mount the clone of a clustered disk to a node in the same cluster.

A volume collection consists of one or more volumes from any pool and simplifies the creation of snapshots and replicas. Volume collections are useful when you have multiple, related volumes. In a single operation, you can create snapshots of the volumes (a snapshot collection) or replicas of the volumes (a replica collection).
4 Troubleshooting

The following section describes general cluster problems you may encounter and the probable causes and solutions for each problem.

Problem: The nodes cannot access the storage system, or the cluster software is not functioning with the storage system.
Probable Cause: The storage system is not cabled properly to the nodes, or the cabling between the storage components is incorrect.

Problem: One of the nodes takes a long time to join the cluster.
Probable Cause:
• The system has just been booted and services are still starting.
• One or more nodes may have the Internet Connection Firewall enabled, blocking RPC communications between the nodes.
Corrective Action:
• Use the Event Viewer and look for the following events logged by the Cluster Service: "Microsoft Cluster Service successfully formed a cluster on this node." or "Microsoft Cluster Service successfully joined the cluster."
• Configure the Internet Connection Firewall to allow communications that are required by the MSCS and the clustered applications or services. For more information, see the article KB883398 at support.microsoft.com.

Problem: The storage array firmware upgrade process using Telnet exits without allowing you to enter y to the following message: Do you want to proceed (y/n)[n]:
Probable Cause: The Telnet program sends an extra line after you press <Enter>.
Corrective Action: Use a serial connection for the array firmware upgrade.

Problem: While running the Cluster Validation Wizard, the Validate IP Configuration test detects that two iSCSI NICs are on the same subnet and a warning is displayed.
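When a node cannot reach the storage system, a few host-side checks can help narrow down whether the problem lies in network connectivity, the iSCSI session, or the cluster service. The following is a hedged PowerShell sketch; the group IP address is a placeholder, and the iSCSI and Test-NetConnection cmdlets assume Windows Server 2012/2012 R2 or later.

    # Minimal sketch of host-side checks; "192.168.20.10" is a placeholder group IP.
    Test-Connection -ComputerName "192.168.20.10" -Count 2          # basic ping
    Test-NetConnection -ComputerName "192.168.20.10" -Port 3260     # iSCSI port

    # Confirm the discovery portal and the sessions the initiator currently has.
    Get-IscsiTargetPortal
    Get-IscsiSession | Format-Table TargetNodeAddress, IsConnected -AutoSize

    # Review recent failover-clustering events on this node.
    Get-WinEvent -LogName "Microsoft-Windows-FailoverClustering/Operational" -MaxEvents 20 |
        Format-Table TimeCreated, Id, LevelDisplayName, Message -AutoSize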
5 Cluster Data Form

You can attach the following form in a convenient location near each cluster node or rack to record information about the cluster. Use the form when you call for technical support.

Table 1. Cluster Configuration Information
Cluster Information | Cluster Solution
Cluster name and IP address
Server type
Installer
Date installed
Applications
Location
Notes

Table 2. Cluster Node Configuration Information
Node Name | Service Tag Number | Public IP Address | Private IP Address

Table 3.
6 iSCSI Configuration Worksheet