Dell EqualLogic Configuration Guide
Dell Storage Engineering
• Configure unified block and file storage solutions based on EqualLogic PS Series arrays and the FS Series family of NAS appliances.
• Recommendations and best practices for iSCSI SAN and scale-out NAS network fabric design.
• Updated capacity guidelines, capabilities, limitations, and feature sets for the EqualLogic product family.
Version 14.1
Abstract This configuration guide provides technical guidance for designing and implementing Dell EqualLogic PS Series storage solutions in iSCSI SAN environments.
Revision history

Revision 14.1, March 2013:
• Added Section 14, Data Center Bridging
• Added volume limits for Synchronous Replication in Table 3
• Modified Section 7.1.2 and Section 7.2
• Changed all references in tables in Section 4 to read PS41x0 and PS61x0
• Added note in Section 5.1
• Updated Appendix D, EqualLogic Upgrade Paths
• Updated the Related Publications

Revision 12.2.1, July 2011:
• Corrected Spanning Tree portfast guidance in Appendix D

Revision 12.2, May 2011:
• Updated figures and content additions to the Replication section

Revision 12.1, March 2011:
• Section 3.3.7: Replication
• Section 4.6: Integrating 1GbE and 10GbE SANs
Introduction With the Dell™ EqualLogic™ PS Series of storage arrays Dell provides a storage solution that delivers the benefits of consolidated networked storage in a self-managing, iSCSI storage area network (SAN) that is affordable and easy to use, regardless of scale. By eliminating complex tasks and enabling fast and flexible storage provisioning, these solutions dramatically reduce the costs of storage acquisition and ongoing operations.
1 PS Series storage arrays PS Series Storage SANs provide a peer storage architecture comprised of one or more independent arrays. Each array contains its own controllers, cache, storage, and interface ports. Grouped together they can create one or more single instance storage pools that are based on the IETF iSCSI standard.
Table 2 PS4100/PS6100 array models

Array model   Drive type           Number of drives
PS4100E       3.5" SAS 7.2K RPM    12
PS4100X       2.5" SAS 10K RPM     24
PS4100XV      2.5" SAS 15K RPM     24
PS4100XV      3.5" SAS 15K RPM     12
PS6100E       3.5" SAS 7.2K RPM    24
PS6100X       2.5" SAS 10K RPM     24
PS6100XV      2.5" SAS 15K RPM     24
PS6100XV      3.5" SAS 15K RPM     24
PS6100S       SSD                  12 or 24
PS6100XS      SSD + SAS 10K RPM    7 SSD + 17 SAS
PS4110E       3.5" SAS 7.2K RPM    12
PS4110X       2.5" SAS 10K RPM     24
PS4110XV      2.5" SAS 15K RPM
Table 3 Supported configuration limits

Configuration                                                           PS4000/PS4100 and PS-M4110 only groups   All other groups
Volumes and replica sets per group                                      512
Volume size                                                             15 TB                                    15 TB
Volumes enabled for replication (outbound)                              32                                       256
Snapshots and replicas per group                                        2,048                                    10,000
Snapshots per volume                                                    128                                      512
Replicas per volume                                                     128                                      512
Volumes that have Synchronous Replication enabled                       4                                        32
Schedules (snapshot or replication) per volume or volume collection     16                                       16
Persistent Reservation registrants per volume                           96                                       96
Replication partners per group
1.3 Array models prior to PS4100/PS6100
Since the EqualLogic PS Series was introduced, there have been several different array models released with new features, better performance and greater storage capacity. The storage array controllers were also improved to take advantage of advances in the underlying networking and storage technologies.
1.3.1 Controller types in all models prior to PS4100/PS6100 Array controllers can be identified and differentiated by the controller "type" designation. Each controller type will have a different colored label to help quickly identify the controller type. Table 4 lists each Dell EqualLogic controller along with some characteristics about each.
Controller type   Network interfaces                 Storage type     Notes
Type 9            2 x 1GbaseT, 1 x 10/100Mb mgmt     SAS, SATA        2nd generation PS4000 controller; 2GB cache; cannot mix SAS and SATA drives in the same array
Type 10           2 x 10Gb SFP+, 1 x 10/100Mb mgmt   SAS, SATA, SSD   10Gb Ethernet, PS6010 – PS6510; 2GB cache

1.3.2 Controller redundancy in all models prior to PS4100/PS6100
Each array can be configured with either a single controller, or dual redundant controllers.
connected to the SAN, then you must use port 0 on the passive controller for the other connection to the SAN. This is illustrated in the partially connected scenario shown in Figure 1.

Figure 1 Partially connected controller failover

Note how IP addresses are reassigned to the ports during the failover processes shown in Figure 1 and Figure 2.
Figure 2 Fully connected controller failover
1.4 Array models PS4100/PS6100 1.4.1 Controller types in PS4100/PS6100 models The new controller types available in the PS4100 and PS6100 model arrays became available starting in August 2011. Table 5 lists each Dell EqualLogic controller along with some characteristics.
1.4.2 Controller redundancy in PS4100/PS6100 controllers Each array can be configured with either a single controller, or dual redundant controllers. The single controller configuration will provide the same level of I/O performance as a dual controller configuration. The dual controller configuration provides for redundancy. Redundant controllers will prevent volume connections between hosts and SAN from being dropped in the event of an active controller failure.
Figure 3 Controller failover process and optimal connection paths 1.4.4 Vertical port failover behavior in PS4100/PS6100 controllers In PS Series controllers prior to PS4100/6100 families, a link failure or a switch failure was not recognized as a failure mode by the controller. Thus a failure of a link or an entire switch would reduce bandwidth available from the array. Referring to Figure 4 or Figure 5, assume that CM0 is the active controller.
Figure 4 PS4100 vertical port failover Figure 5 PS6100 vertical port failover With PS4100/PS6100 family controllers, vertical port failover can ensure continuous full bandwidth is available from the array even if you have a link or switch failure. This is accomplished by combining corresponding physical ports in each controller (vertical pairs) into a single logical port from the point of view of the active controller.
Figure 6 PS4100 vertical port failover and optimal connection paths

IMPORTANT: By alternating switch connection paths between ports in a vertical port pair, port failover allows the array to maintain 100% bandwidth capability in the event of a switch failure.
Figure 7 PS6100 vertical port failover process and optimal connection paths 1.4.5 Vertical port failover behavior in PS4110/PS6110 controllers In PS Series controllers prior to PS4110/6110 families, a link failure or a switch failure was not recognized as a failure mode by the controller. This caused a failure of a link or an entire switch to reduce bandwidth available from the array. Referring to Figure 4 or Figure 5, assume that CM0 is the active controller.
Figure 8 4110/6110 vertical port failover With the PS4110/PS6110 family of controllers, vertical port failover can ensure continuous full bandwidth is available from the array even if you have a link or switch failure. This is accomplished by combining 10GbE “eth0” ports in each controller into a single logical port from the point of view of the active controller. In a fully redundant SAN configuration, you must configure the connection as shown in Figure 9.
Figure 9 4110/6110 Vertical port failover scenario

1.5 Array model PS-M4110
1.5.1 Controller type in the PS-M4110 model
The PS-M4110 controller is designed based on a modified version of the PS4100 controller. The host and SAS cards are combined into a single, fixed I/O module that connects to the M1000e chassis infrastructure.
1.5.2 Configuration options
The PS-M4110 has four basic configuration options. It can be configured on Fabric A or Fabric B, and each fabric configuration can use a 10Gb KR switch or a 10Gb KR Pass-Thru Module (PTM). Figure 10 depicts a basic configuration using the MXL switch; however, any of the supported switches can be used in this configuration.

Figure 10 Basic PS-M4110 configuration for data center-in-a-box
• 10G KR is the only supported IOM
• Switches: Force10 MXL, PowerConnect M8024-k, M8428-k
• Pass-through: 10Gb-K pass-thru modules only with external switches

The following are basic networking recommendations for implementing the PS-M4110 storage blade:
• IOMs must be interconnected
• Datacenter in a Box: IOMs directly interconnected
• External switches can be used to provide interconnection if rack-mounted arrays are needed.
2 Controller firmware
2.1 About member firmware
Each control module in a group member must be running the same version of the PS Series firmware. Firmware is stored on a compact flash card or a microSD card on each control module. Dell recommends the following:
• Always run the latest firmware to take advantage of new features and fixes.
• All group members must run the same firmware version.
In addition to the Release Notes, the process for updating controller firmware is described in detail in the following document (Support ID required for login access):
• PS Series Storage Arrays: Updating Storage Array Firmware, available at: https://www.equallogic.com/support/download_file.aspx?id=594

Supported firmware upgrade paths (up to version 6.0.x) are shown in Table 6 below. If you are starting with v4.2.* or v4.3.*, you can update straight to v5.0.4.
2.2.1 PS Series Firmware Compatibility with EqualLogic Tools
The following table provides a quick reference of EqualLogic product version compatibility for the recent major firmware releases.

Table 7 PS Series firmware compatibility

Product                                Firmware V6.0.x   Firmware V5.2.x   Firmware V5.1.x   Firmware V5.0.x              Firmware V4.3.x
SAN HeadQuarters                       2.2.x             2.2.x, 2.1.x      2.2.x, 2.1.x      2.2.x, 2.1.x                 2.2.x, 2.1.x
Host Integration Tools for Microsoft   4.0.x             4.0.x, 3.5.x      4.0.x, 3.5.x      4.0.x, 3.5.x, 3.4.x, 3.3.x   4.0.x, 3.5.x, 3.4.x
2.3 Optimizing for High Availability and preparing for Array Firmware updates Business critical data centers must be designed to sustain various types of service interruptions. In an EqualLogic SAN environment, software, servers, NICs, switches, and storage are all interdependent variables that need tuning and configuration with best practices in order to ensure high availability (HA) and non-disruptive operations.
2.3.4 Storage Heartbeat on vSphere 5.0, 4.1, and 4.0
Note: This recommendation for using Storage Heartbeat applies only to vSphere 4.1 and 5.0. It is not necessary with vSphere 5.1.

In the VMware virtual networking model, certain types of vmkernel network traffic are sent out on a default vmkernel port for each subnet.
3 RAID policies
Each array in an EqualLogic array group is configured with a single RAID policy. Arrays (or group members) within the same storage pool that have the same RAID policy will cooperatively work to host volumes by distributing those volumes over multiple arrays. Two things are defined by the RAID policy:
• RAID level
• Hot-spare configuration

Each array implements a default RAID policy that includes a hot-spare.
3.2.1 RAID level Each RAID level offers varying combinations of performance, data protection and capacity utilization. Choose your RAID preference carefully based on reliability, capacity, and performance requirements. PS Series arrays support the following RAID types: • RAID 5 (not recommended for business-critical data) • RAID 6 • RAID 6 Accelerated • RAID 10 • RAID 50 See RAID Level Characteristics in Table 8 below for detailed information about the RAID types and their recommended uses. 3.2.
Table 8 RAID level characteristics

RAID policy   Recommended usage scenarios
RAID 10       Applications and workloads requiring the highest levels of I/O performance for random writes. Systems containing 10K and 15K RPM drives.
RAID 6        Situations in which I/O performance for random writes is not a key factor. Applications requiring the highest levels of data protection and reliability. Systems containing 24 or more drives.

3.3.1 Recommended drive configurations
Table 9 RAID level characteristic comparison

Workload requirement                                          RAID 5      RAID 6      RAID 10     RAID 50
Capacity                                                      Excellent   Good        Average     Good
Availability                                                  Poor        Excellent   Good        Fair
Sequential reads                                              Excellent   Excellent   Excellent   Excellent
Sequential writes                                             Good        Good        Good        Good
Random reads                                                  Excellent   Excellent   Excellent   Excellent
Random writes                                                 Good        Fair        Excellent   Good
Performance impact of drive failure or RAID reconstruction    Longest     Longest     Shortest    Medium
To convert from a no-spares RAID policy to a policy that uses spare drives, you must use the CLI. Refer to the Dell EqualLogic Group Manager CLI Reference Guide. You can also use the CLI to convert to a RAID policy that does not use spare drives, but Dell recommends against doing this.
4 Capacity planning 4.1 RAID 6 drive layouts and total reported usable storage RAID6 (striped set with dual distributed parity) combines N disks in an arrangement where each stripe consists of N-2 disks capacity for data blocks and two disks capacity for parity blocks. Each parity block generates parity using a different view of the data blocks depending on the RAID 6 implementation. RAID 6 can tolerate up to two drive failures per RAID stripe set at the same time without data loss.
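The layouts in the tables that follow can be sanity-checked with a quick calculation. The sketch below is a simplified, hypothetical estimator (not Dell's capacity tool): it assumes one optional hot spare per member and two drives' worth of parity per RAID 6 set, and it ignores array firmware metadata and formatting overhead, so the reported usable-storage figures in this guide will be somewhat lower.

```python
def raid6_usable_drives(total_drives: int, hot_spare: bool = True) -> int:
    """Return the number of drives' worth of capacity left for user data.

    RAID 6 keeps two drives' worth of parity per stripe set; one drive may
    be set aside as a hot spare (the default policy).
    """
    spares = 1 if hot_spare else 0
    data_parity_drives = total_drives - spares   # e.g. 24 -> 23 Data/Parity + 1 spare
    return data_parity_drives - 2                # subtract the dual parity


def estimate_usable_tb(total_drives: int, drive_tb: float, hot_spare: bool = True) -> float:
    """Rough usable capacity in TB, before firmware/metadata overhead."""
    return raid6_usable_drives(total_drives, hot_spare) * drive_tb


if __name__ == "__main__":
    # 24 x 1 TB drives with the default hot spare: (24 - 1 - 2) = 21 drives of data capacity.
    print(estimate_usable_tb(24, 1.0))   # 21.0 TB (raw estimate)
```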
Table 11 RAID 6 drive layouts and total storage available with hot spares (in GB)

Disk drives   Hot spare                          No hot spare
6             5 Data/Parity + 1 hot spare        6 Data/Parity
7             6 Data/Parity + 1 hot spare        7 Data/Parity
8             7 Data/Parity + 1 hot spare        8 Data/Parity
12            11 Data/Parity + 1 hot spare       12 Data/Parity
14            13 Data/Parity + 1 hot spare       14 Data/Parity
16            15 Data/Parity + 1 hot spare       16 Data/Parity
16            15 Data/Parity(d) + 1 hot spare    16 Data/Parity
24            23 Data/Parity + 1 hot spare       24 Data/Parity
4.2 RAID 10 drive layouts and total reported usable storage Using a RAID 10 policy, Table 12 shows the drive layouts that are enforced based on the number of drives in each array/hot spare configuration, and the total usable storage available for each model. RAID 10 (mirrored sets in a striped set) combines two high performance RAID types: RAID 0 and RAID 1. A RAID 10 is created by first building a series of two disk RAID 1 mirrored sets, and then distributing data over those mirrors.
4.3 RAID 50 drive layouts and total reported usable storage Table 13 shows the drive layouts that are enforced when using a RAID 50 policy based on the number of drives in each array/hot spare configuration and the total usable storage available for each model. RAID 50 (RAID 5 sets in a striped set) is created by first creating two or more RAID 5 sets and then striping data over those RAID5 sets. RAID 50 implementations can tolerate a single drive failure per RAID5 set.
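The same style of estimate applies to RAID 50, with one drive's worth of parity per underlying RAID 5 set. The number of RAID 5 sets and the hot-spare count used below are illustrative assumptions; the array firmware determines the actual set layout, and metadata overhead further reduces the reported capacity.

```python
def raid50_usable_drives(total_drives: int, raid5_sets: int = 2, hot_spares: int = 1) -> int:
    """Drives' worth of user-data capacity for a RAID 50 layout.

    RAID 50 stripes across two or more RAID 5 sets; each RAID 5 set
    consumes one drive's worth of capacity for parity.
    """
    data_parity = total_drives - hot_spares
    return data_parity - raid5_sets   # one parity drive per RAID 5 set


# Example: 24 drives, 1 hot spare, striped over two RAID 5 sets -> 21 drives of data capacity.
print(raid50_usable_drives(24, raid5_sets=2, hot_spares=1))
```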
4.4 RAID 5 drive layouts and total reported usable storage RAID 5 (striped disks with distributed parity) will combine N disks in an arrangement where each stripe consists of N–1 disks that contain data blocks plus 1 disk that contains a parity block. For each stripe, the parity block will be placed on a different disk ensuring that the parity blocks are not located on a single disk in the RAID set. RAID 5 implementations can tolerate a single drive failure without data loss.
4.5 Array RAID configurations and associated RAID sets
The tables show a logical drive layout when an array is initialized for the first time. The actual physical layout of drives can change and evolve due to maintenance and administrative actions. Spare drives can move as they are utilized to replace failed drives and newly added drives become the spares. It is not possible to determine which physical drives are associated with each RAID set.
5 PS Series array concepts 5.1 Groups and pools A PS Series SAN Group is a Storage Area Network (SAN) comprised of one or more PS Series arrays connected to an IP network. Each array in a group is called a group member. Each member is assigned to a storage pool. There can be up to 4 pools within the group. A group can consist of up to 16 arrays of any family or model as long as all arrays in the group are running firmware with the same major and minor release number.
• If all members in the pool are running PS Series firmware v5.0 or later then you can mix PS5500E, PS6500E/X and PS6510E/X models together with other array models in the same pool.
• If you are running a PS Series firmware version prior to v5.0 then PS5500E, PS6500E/X and PS6510E/X models must reside in a separate pool from other array types.

Figure 3 shows a PS Series group with the maximum of four pools. Note the use of Pool 3 for containing PS5500/PS6500 series arrays only.
• Do not mix arrays with different drive technologies (SATA, SAS, SSD) within a single pool unless they are running a unique RAID policy.
• Do not mix arrays with different controller speeds (1GbE, 10GbE) within a single pool unless they are each running unique RAID policies.

5.2 Volumes
Volumes provide the storage allocation structure within an EqualLogic SAN.
Note: IQN names are assigned to volumes automatically when they are created. They cannot be changed for the life of the volume. If a volume name is changed, the IQN name associated with the volume will remain unchanged.
• Snapshot reserve space for any volume can be decreased at any time. The minimum size allowed is based on the current space used by existing snapshots using the snapshot reserve.
• Snapshot reserve space for any volume can be increased at any time, assuming there is available free space in the storage pool hosting the volume.

5.3.1 Clones
Cloning creates a new volume by copying an existing volume. The new volume has the same reported size, contents and thin-provision settings as the original volume.
• A minimum physical allocation of 10% of the logical allocation is required.
• If a volume is converted to a thin provisioned volume, the physical allocation cannot be less than the amount of physical space already used within the volume.
• Any pool free space allocated to a thin provisioned volume is not returned to the free pool if the host's file system usage of that volume is reduced (due to file system defragmentation, data removal, etc.).

A small worked example of these allocation rules follows.
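This is an illustrative sketch only; the function name and the 10% floor check are written from the bullets above, not from any EqualLogic API, and the Group Manager GUI/CLI enforce the real rules for you.

```python
def validate_thin_provision(logical_gb: float, physical_gb: float, in_use_gb: float) -> list[str]:
    """Return a list of rule violations for a proposed thin-provisioned volume."""
    problems = []
    if physical_gb < 0.10 * logical_gb:
        problems.append("physical allocation is below the 10% minimum of the logical size")
    if physical_gb < in_use_gb:
        problems.append("physical allocation is below the space already used in the volume")
    return problems


# Converting a 500 GB volume that already holds 120 GB of data:
print(validate_thin_provision(logical_gb=500, physical_gb=40, in_use_gb=120))
# -> both rules are violated; at least 120 GB must remain physically allocated.
```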
6 Array firmware features 6.1 Replication Replication is a powerful feature that can help you manage and implement a disaster recovery strategy for your business applications and data. By replicating business-critical volumes, you ensure that your business operations can resume quickly on a partner group in the event of a disaster on the primary group. You also have the ability to restore the configuration to its original state if the problem on the original group can be corrected.
• PS Series groups can have up to 16 replication partners and can support a maximum of 10,000 total snapshots and replicas from all of its replication partners.
• A group can have volumes replicating with multiple partners, but an individual volume can have only one replication partner.
• A maximum of 256 volumes per group can be configured for active replication.
• All volumes that are part of a volume collection can only replicate with a single partner.
6.1.3 Replication process
When a replica is created, the first replication process transfers all of the volume data. For subsequent replicas, only the data that changed between the start time of the previous replication cycle and the start time of the new replication cycle is transferred to the secondary group. Dedicated volume snapshots are created and deleted in the background as necessary to facilitate the replication process.
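Because only changed data is transferred after the first replica, the duration of a replication cycle is driven mainly by the volume change rate and the usable bandwidth between the groups. The following back-of-the-envelope sketch uses assumed values (change rate, link speed, and a 70% efficiency factor) and is a planning aid only, not a measured or guaranteed figure.

```python
def replication_window_hours(changed_gb: float, link_mbps: float, efficiency: float = 0.7) -> float:
    """Estimate hours needed to transfer the changed data of one replication cycle.

    efficiency discounts protocol overhead and competing traffic (assumed value).
    """
    changed_bits = changed_gb * 8 * 1000**3       # decimal GB to bits
    usable_bps = link_mbps * 1000**2 * efficiency
    return changed_bits / usable_bps / 3600


# 50 GB of changed data over a 100 Mbps WAN at ~70% efficiency:
print(round(replication_window_hours(50, 100), 1))   # roughly 1.6 hours
```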
Figure 13 Replication Process
With replication, it does not matter whether the volume is thin provisioned or a traditional volume, since in either case only the data that has changed is copied to the replica. On the secondary side, volumes are always thin provisioned to conserve the available capacity used by the Replica Reserve for that volume.
Figure 14 Replica Reserve, Local Reserve, and Delegated Space

Auto replication requires reserved disk space on both the primary and secondary groups. The amount of space required depends on several factors:
• Volume size.
• The amount of data that changes (on the source volume) between each replication period.
• The number of replicas that need to be retained on the secondary site.
• Whether a failback snapshot is retained on the primary group.
Table 20 Replication space sizing guidelines

Replication space                             Recommended value                                    Space-efficient value
Local Reserve (located on primary group)      No failback snapshot: 100%                           No failback snapshot: 5% + CR
                                              Keep failback snapshot: 200%                         Keep failback snapshot: 10% + CR (a)
Replica Reserve (located on secondary group)  200% (b)
Delegated Space (located on secondary group)

(b) Ensures there is adequate space for the last replica and any replica in progress.
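Table 20's space-efficient guidance translates into a simple sizing calculation, where CR is read here as the amount of data that changes on the source volume between replicas (one of the sizing factors listed above). The sketch below is an illustrative interpretation of the table, not a Dell sizing tool; it covers Local Reserve and the recommended 200% Replica Reserve, and leaves delegated space (the sum of the Replica Reserves hosted for a partner) to the administrator.

```python
def replication_space_gb(volume_gb: float, change_rate_gb: float,
                         keep_failback_snapshot: bool = False) -> dict:
    """Space-efficient sizing per Table 20; all figures in GB.

    Local Reserve:   5% of the volume plus one cycle of changed data,
                     or 10% plus changed data if a failback snapshot is kept.
    Replica Reserve: the recommended (conservative) value of 200% of the volume.
    """
    local_pct = 0.10 if keep_failback_snapshot else 0.05
    return {
        "local_reserve": local_pct * volume_gb + change_rate_gb,
        "replica_reserve_recommended": 2.0 * volume_gb,
    }


# A 500 GB volume that changes ~25 GB between replicas, no failback snapshot:
print(replication_space_gb(500, 25))
# {'local_reserve': 50.0, 'replica_reserve_recommended': 1000.0}
```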
Table 21 Firmware replication compatibility

Firmware of "local" group   Firmware on replication partner
V6.0                        V5.1.x, V5.2.x, V6.0.x
V5.1.x, V5.2.x              V5.0.x, V5.1.x, V5.2.x
V5.0.x                      V4.2.x, V4.3.x, V5.0.x, V5.1.x, V5.2.x
V4.2.x, V4.3.x              V4.1.x, V4.2.x, V4.3.x, V5.0.x
V4.1.x                      V4.0.x, V4.1.x, V4.2.x, V4.3.x
V4.0.x                      V3.2.x, V3.3.x, V4.0.x, V4.1.x, V4.2.x
V3.2.x, V3.3.x              V3.0.x, V3.1.x, V3.2.x, V3.3.x, V4.0.x
V3.0.x, V3.1.x              V3.0.x, V3.1.x, V3.2.x, V3.3.x
In SyncRep-enabled volumes, volume data exists simultaneously in two pools: • SyncActive: The pool to which iSCSI initiators connect when reading and writing volume data. • SyncAlternate: When volume data is written to the SyncActive pool, the group simultaneously writes the same data to this pool. You can switch the roles of the SyncActive and SyncAlternate pools. When you switch the SyncActive and SyncAlternate pools, the former SyncActive pool then becomes the SyncAlternate pool, and vice-versa.
• Adequate network bandwidth between pools.
• Free space in each pool to accommodate the volume and snapshot reserve for the volume.
• You cannot enable SyncRep on a volume for which traditional replication is configured, and you cannot enable traditional replication on a volume for which SyncRep is configured. See the "Disabling Replication" topic in the online help for instructions on disabling traditional replication on a volume.
• You cannot enable SyncRep on a volume that is bound to a group member.
1. The SyncAlternate volume becomes unavailable.
2. Initiator access to the volume continues without interruption; the volume is out of sync.
3. The group tracks all changes written to the volume.
4. When the SyncAlternate volume becomes available, tracked changes are written to the SyncAlternate volume.

Figure 16 SyncAlternate volume unavailable

Performance will be temporarily degraded while changes are being tracked or when tracked changes are being written back to the SyncAlternate volume.
Figure 17 Tracked changes written to the SyncAlternate volume 6.2.9 SyncActive volume unavailable If a malfunction occurs in the SyncActive pool, or some other event has occurred causing the volume to go offline, you can safely switch or failover to the SyncAlternate volume by following one of the procedures listed below. • Volume In Sync: If the volume is in sync, you may switch to the SyncAlternate as documented in the online help.
Table 22 Comparing SyncRep and traditional replication

Typical use case
• Traditional replication: A point-in-time process that is conducted between two groups, often in geographically diverse locations. Replication provides protection against a regional disaster such as an earthquake or hurricane. Traditional replication has the advantage of providing multiple recovery points.

Network requirements
• Traditional replication: Replication requires that the network connection between the primary and secondary groups must be able to handle the load of the data transfer and complete the replication in a timely manner.
• SyncRep: Because writes are not acknowledged until they are written to both the SyncActive and SyncAlternate pools, SyncRep is sensitive to network latency.

Impact on applications
• Traditional replication: iSCSI initiators must be reconfigured to connect to the secondary group after the failover, or an alternate set of host resources must be brought online, both of which may cause application disruptions. If you are using the Host Integration Tools, you can coordinate replication with host software to quiesce applications on a schedule and create application-consistent Smart Copies.
Figure 18 IPsec protected traffic 6.3.2 Protected Intra-Group Traffic Once IPsec is enabled, all network traffic between group members is automatically protected with IPsec using IKEv2. No further configuration is required. 6.3.3 IPsec and Replication The PS Series Firmware provides no mechanism for using IPsec to protect traffic between replication partners.
You can generate certificates suitable for use in IPsec connections to the PS Series using any Windows, OpenSSL, or other commercial Certificate Authority product. You can use the Group Manager CLI to import, display, and delete certificates using the IPsec certificate commands. See the Dell EqualLogic Group Manager CLI Reference Guide for more information.

6.3.6 About IPsec Pre-Shared Keys
In addition to using certificates, you can use pre-shared keys to authenticate secured connections.
• Kerberos-based authentication is not supported.
• Multiple Root Certificate Authorities (CA) are not supported.
• Certificate Revocation Lists (CRL) are not supported.
• Only users with group administrator privileges can configure IPsec.
• Perfect Forward Secrecy (PFS) is not supported.
• Encrypted private keys are not supported for X.509 format certificates.
• Dell recommends using a minimum of 3600 seconds and 10GB lifetime rekey values.
7 EqualLogic SAN design An EqualLogic iSCSI SAN can be operated in any network that supports the industry standards and IP subnet design guidelines described in this section. Because of this flexibility, there are many network design and configuration choices that can affect SAN performance. The following sections provide details related to network design and configuration to support the use of an EqualLogic SAN.
7.1.2 General requirements and recommendations For EqualLogic PS Arrays, the following general SAN design requirements apply: • For all members (arrays) in a given SAN Group all ports should be connected to the same subnet. This allows the arrays to communicate with each other as a group of peer members. The arrays must be in the same subnet as the group’s “well known” IP address.
• Any EqualLogic SAN group that is required to send or receive replication traffic to/from another SAN group must have an uninterrupted communications path (i.e., "visibility") between each group.
Note: A detailed and frequently updated list of recommended switches is maintained in a separate document: Validated Components List for EqualLogic PS Series SANs http://www.delltechcenter.com/page/EqualLogic+Validated+Components Note: The FS7500 NAS appliance requires the use of 1Gb switches that meet the requirements in this section. An EqualLogic SAN consists of one or more hosts connected to one or more PS Series arrays through a switched Ethernet network.
• • • more than 2 non-stacking switches, R-STP must be enabled on all ports used for inter-switch trunks. All non-inter-switch trunk ports should be marked as “edge” ports or set to “portfast”. Support for unicast storm control: iSCSI in general, and Dell EqualLogic SANs in particular can send packets in a very “bursty” profile that many switches could misdiagnose as a virally induced packet storm.
7.2.1.1 Stacking Switches Stacking switches provide the preferred method for creating an inter-switch connection within a Layer 2 network infrastructure. Stacking is typically accomplished using a vendor proprietary, highbandwidth, low-latency interconnect that allows two or more switches to be connected in such a way that each switch becomes part of a larger, virtual switch. A stackable switch will provide a set of dedicated stacking ports. Installation of an optional stacking module may be required.
Table 23 Link aggregation types Link aggregation type Notes Static Static link aggregation defines a set of links that provide a point to point connection between two switches. These links may or may not provide failover redundancy or traffic load management. LACP Link Aggregation Control Protocol is based on IEEE 802.3ad or IEEE 802.1AX. LACP is a dynamic LAG technology that automatically adjusts to the appearance or disappearance of links within the defined LACP group.
Figure 20 Using a LAG to interconnect switch stacks

7.2.2 Sizing inter-switch connections
Use the guidelines in Table 24 as a starting point for estimating inter-switch connection sizes.

Table 24 Switch interconnect design guidelines

Connection speeds                                   Interconnection guidelines
1GbE switches attached to 1GbE array controllers    1-5 arrays: 1Gb of IST bandwidth per active array controller port (up to the aggregated maximum bandwidth of the IST).
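For the 1GbE case in Table 24, the inter-switch trunk (IST) sizing rule can be turned into a quick estimate. The sketch below assumes the "1Gb per active array controller port" guideline for one to five arrays and simply compares that requirement with the aggregate bandwidth the trunk physically provides; it is a planning aid, not a substitute for the full table.

```python
def ist_bandwidth_gbps(arrays: int, active_ports_per_array: int,
                       trunk_links: int, link_gbps: float = 1.0) -> dict:
    """Estimate required vs. available inter-switch trunk bandwidth for a 1GbE SAN."""
    required = arrays * active_ports_per_array * 1.0   # 1 Gb per active controller port
    available = trunk_links * link_gbps
    return {"required_gbps": required,
            "available_gbps": available,
            "sufficient": available >= required}


# Three arrays with 4 active 1GbE ports each, connected by an 8 x 1GbE LAG:
print(ist_bandwidth_gbps(arrays=3, active_ports_per_array=4, trunk_links=8))
# The LAG falls short, suggesting a larger trunk or 10GbE uplinks.
```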
Table 25 Stacking vs. inter-switch trunking

Interconnect type   Primary purpose
Stacking            Create a larger, logical switch within an isolated physical location.
Table 26 Redundant server NIC configurations

NIC connections to SAN   LOM NIC   Installed NIC 1   Installed NIC 2
Option 1                 X         X                 -
Option 2                 -         X                 X
Option 3                 X         -                 X

7.3.1 Design guidelines for host connectivity in a redundant SAN
Using the Dell PowerEdge R610 server as an example, you configure redundant connection paths to the SAN switches as shown in Figure 21 below. The R610 server shown in Figure 21 has one additional dual-port PCI-E NIC installed.
Figure 22 Redundant NIC Connections from Server to SAN using two installed PCI-E NICs 7.3.2 Multi-path I/O There are generally two types of multi-path access methods for communicating from a host to an external device. For general networking communications, the preferred method of redundant connections is the teaming of multiple NICs into a single, virtual network connection entity. For storage, the preferred method of redundant connection is the use of Multi-Path IO (MPIO).
• For other operating systems, use the native MPIO functionality.

Configuring Microsoft Windows MPIO
Configure Microsoft Windows MPIO with the following initial configuration settings. Customized settings may be required depending on the supported application(s).
• Change the "Subnets included" field to include ONLY the subnet(s) dedicated to the SAN network infrastructure.
• Change the "Subnets excluded" field to include all other subnets.
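The "Subnets included / Subnets excluded" settings boil down to matching each host NIC against the dedicated SAN subnet. The snippet below illustrates that decision with Python's standard ipaddress module; the subnet and addresses are placeholders, and the actual settings are made in the EqualLogic Host Integration Tools MPIO configuration, not through a script like this.

```python
import ipaddress

SAN_SUBNET = ipaddress.ip_network("192.168.100.0/24")   # placeholder SAN subnet


def mpio_eligible(nic_ip: str) -> bool:
    """True if this NIC address falls in the dedicated SAN subnet (include it);
    every other subnet should land in the 'Subnets excluded' list."""
    return ipaddress.ip_address(nic_ip) in SAN_SUBNET


for ip in ("192.168.100.11", "10.10.20.5"):
    print(ip, "-> include for MPIO" if mpio_eligible(ip) else "-> exclude from MPIO")
```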
• Unless otherwise stated, all reference designs will provide end-to-end host to volume redundant paths.
• A minimal number of switches will be illustrated to allow for the design concept to be understood. Actual implementations will vary depending on your network infrastructure requirements.
• If sharing physical switches with other non-SAN traffic, we assume all switches are VLAN capable.
Figure 24 Redundant SAN Connection Paths: PS4100

Figure 25 Redundant SAN Connection Paths: PS6100

Note: For a production environment, the configuration examples shown above will protect your access to data. These are the ONLY SAN configurations recommended by Dell.
7.3.5 Partially redundant SAN configurations
Each of the SAN configurations shown in this section will allow host connectivity to data stored in the SAN. These configurations are for reference only, and the methods shown apply to both PS3000-PS6000 family controllers and PS4100/PS6100 family controllers. They are not recommended for production deployment since they do not provide end-to-end redundant connection paths.

7.3.5.1 Single array controller configurations
7.3.5.2 Dual array controller configurations
You can configure a Dell EqualLogic array to run using dual controllers. Table 28 below shows configurations using dual array controllers.

Table 28 Dual controller array configurations
• Single NIC, single switch, dual controller
• Single NIC, dual switch, dual controller
• Dual NIC, single switch, dual controller

7.3.6 Minimum cabling scenarios: PS4100 and PS6100
The vertical port failover feature (described in Section 1.4.4)
only one half of the available controller ports. The diagrams in Figure 26 and Figure 27 show how to connect these arrays to accomplish this.

Note: The example configurations shown in this section only apply to the PS4100 and PS6100 family arrays and are recommended only when you do not have available SAN switch ports necessary to support fully cabled configurations.

Figure 26 Minimum cabling scenario: PS4100
Figure 27 Minimum cabling scenario: PS6100
8 Mixed speed environments - Integrating 1GbE and 10GbE SANs With the introduction of 10GbE, there will be situations that require 1Gb arrays and 10Gb arrays coexisting in the same SAN infrastructure. EqualLogic PS Series arrays support operation of 1Gb and 10Gb arrays within the same group.
• Each of the 1Gb switches is configured with one dual-port 10GbE uplink module and one stacking module. The 10GbE uplink modules are used for creating 20Gb LAG uplinks to the 10Gb switches.
• Split interconnect – The 20Gb LAG uplinks between the 1Gb and 10Gb switches are cross-connected so that each 10Gb switch physically connects to both switches in the 1Gb stack.
• FS7500 connection path – The initial FS Series NAS product (FS7500) is a 1Gb only solution.
If you are using switches that do not support a stacking mode then you must use the straight interconnect uplink pattern shown in Figure 29. Note the following design differences between the split uplink pattern in Figure 28 and the straight uplink pattern in Figure 29:
• A LAG is used to create the connection between 1Gb SW#1 and 1Gb SW#2.
• A high rapid spanning tree link cost is assigned to the 1Gb switch LAG (note the location of the RSTP blocked path in Figure 29).
• If you have predominantly 1Gb initiators, start upgrading your arrays to 10Gb for comparable or better performance across almost all situations.
• If you have predominantly 10Gb initiators, you should only access data and volumes residing on 10Gb arrays (from those initiators). You may see high latency and retransmit rates when 10Gb initiators connect to 1Gb targets.
• When adding 10Gb arrays, place them in separate pools from your 1Gb arrays.
9 Blade server chassis integration Integrating the PowerEdge M1000e Blade Server Solution (or any third party blade chassis implementation) requires additional SAN design considerations. Each M1000e can support up to three separate networking “fabrics” that interconnect ports on each blade server to a pair of blade I/O modules within each chassis fabric through an intervening chassis midplane interface. Each fabric is associated with different interfaces on a given blade server as described in Table 29.
Figure 30 Blade I/O modules and M1000e Chassis

There are three primary methods of integrating blade chassis with EqualLogic PS Series arrays:
• Directly attaching EqualLogic arrays to the blade I/O modules on each chassis.
• Using pass-through modules to external switches.
• Utilizing a two-tier design by creating an external SAN switch infrastructure to host all array connections and using the blade I/O modules as "host access" switches.
9.1 Designing a SAN using blade chassis I/O modules with arrays directly attached The following points should be taken into consideration when planning a SAN solution that requires directly attaching EqualLogic arrays to M-Series blade I/O modules: • Limited number of externally accessible ports typically available on blade I/O modules Current M-Series blade I/O modules have limited numbers of externally accessible Ethernet ports.
Table 30 Blade I/O module options for EqualLogic

I/O module             Maximum available external ports   Ports recommended for interconnect               Arrays supportable per M1000e
PowerConnect M6220     8x 1GbE                            0 (stackable)                                    2
PowerConnect M6348     16x 1GbE                           0 (stackable)                                    4
PowerConnect M8428-k   8x 10GbE                           2x 10GbE (PS6010/6510), 3x 10GbE (PS4110/6110)   3 (PS6010/6510), 5 (PS4110/6110)
PowerConnect M8024-k   8x 10GbE                           2x 10GbE (PS6010/6510), 3x 10GbE (PS4110/6110)   3 (PS6010/6510), 5 (PS4110/6110)
                       2x 40GbE fixed + 2x option slots (2x 40GbE, 4x SFP+, 4x 10Gb
• Not all M-Series I/O modules can be stacked together or with external switches. Typically, each M-Series I/O module model can only stack with modules of the same exact model. It may also be possible to stack M-Series I/O modules with some "stack compatible" stand-alone PowerConnect switch models. The table below provides stack compatibility information.
Table 32 Single M1000e enclosure stacking, single fabric

Single fabric stacked configuration

Advantages:
• Consistent M1000e fabric management that adheres to M1000e recommended practices for blade IO configuration
• Reduces administration overhead

Concerns:
• All blade server Ethernet ports used for SAN reside on the same mezzanine card, resulting in a potential single point of failure
• Upgrading switch FW will require scheduled downtime for SAN network
Table 33 Single M1000e enclosure stacking, dual fabric

Dual fabric stacked configuration

Advantages:
• Ensures that Ethernet SAN ports on each blade server will be distributed across two different mezzanine cards for a more highly available solution

Concerns:
• Does not adhere to recommended practices for M1000e blade IO configuration
• Upgrading switch FW will require scheduled downtime for SAN network

As discussed earlier, one of the concerns when creating a SAN that consists of a single stack is that
Table 34 Single enclosure link aggregation, single fabric

Single fabric non-stacked configuration

Advantages:
• Consistent M1000e fabric management that adheres to M1000e recommended practices for blade IO configuration
• Switch FW can be upgraded without requiring the network to be brought offline

Concerns:
• All blade server Ethernet ports used for SAN reside on the same mezzanine card, resulting in a potential single point of failure
• Spanning Tree must be considered if uplinking SAN to external switches
The 10GbE PowerConnect M-Series I/O modules do not provide a dedicated stacking interface and must be interconnected using available, external Ethernet ports in conjunction with a link aggregation protocol such as LACP or “front-port” stacking. Due to the limited number of external ports available on Dell’s PowerConnect M-Series blade I/O modules, SAN growth can be limited.
“host access” switches. In most cases, the latter method of SAN design is preferable to the former, but this section will describe strategies to build a SAN that consists of multiple M1000e enclosures. As in the previous section, there are two methods that must be considered when interconnecting MSeries I/O modules: stacking and non-stacking.
Table 36 Single stack, multiple M1000e enclosure direct connect SAN

Single stacked, multiple M1000e enclosure configuration

Advantages:
• Simplified switch management
• High-bandwidth interconnect

Concerns:
• Not highly available; a single stack solution cannot be maintained without scheduled downtime.
• Additional scalability will require SAN network redesign and possible downtime.

Other notes:
• Limited scalability due to potentially excessive hop counts and latency.
Table 37 Non-stacked, multiple M1000e enclosure direct connect SAN

Non-stacked, multiple M1000e enclosure configuration

Advantages:
• Required for non-stacking M-Series I/O modules to support direct attached arrays.
9.2 Designing a SAN using blade pass-through modules
This SAN design includes configurations where the blade server host ports are directly connected to ToR switches using the 1GbE pass-through IOM in the M1000e blade chassis.
Table 38 M-Series I/O module pass-through to external switch infrastructure

M-Series I/O module pass-through to external switch infrastructure

Advantages:
• Fewer switch hops
• Fewer switches to manage
• SAN can support M1000e enclosures and rack or tower stand-alone servers
• SAN traffic isolated outside of M1000e enclosure
• Less expensive than multi-tiered switch solution
• Supports non-Dell branded switching as a single vendor solution
• No firmware to update on the pass-through modules.
9.3 Designing a SAN using blade chassis I/O modules as host access to external switches for array connection When attempting to have multiple M1000e blade chassis connected to EqualLogic SAN arrays, or if there is a need to also attach traditional rack or tower servers to the SAN, it is strongly recommended that you consider creating a SAN infrastructure that does not directly attach arrays to the blade I/O modules, but rather consider using a set of external switches to host array connections.
Table 39 Horizontal stack with vertical LAG connecting M1000e to stacked external switch infrastructure

Advantages:
• Supports configuring LAGs between M-Series I/O modules across both external switches in the stack (cross-switch LAG)
• SAN array-to-array traffic is isolated outside of the M1000e enclosures
• Supports ability to scale arrays
• Supports ability to scale hosts
• Supports sharing SAN with non-M1000e servers.
Table 40 3-way LAG connecting M1000e to non-stacked external switch infrastructure

Advantages:
• Switch firmware updates can be made on all switches without downtime
• SAN array-to-array traffic is isolated outside of the M1000e enclosures
• Supports ability to scale arrays
• Supports ability to scale hosts
• Supports sharing SAN with non-M1000e servers
• Can support different vendor switches in the external tier

Concerns:
• Limited scalability of LAG bandwidth.
Table 41 4-way stack connecting M1000e to non-stacked external switch infrastructure

Advantages:
• SAN array-to-array traffic is isolated outside of the M1000e enclosures
• Fixed hops between hosts and array volumes regardless of number of M1000e enclosures
• Supports ability to scale arrays
• Supports ability to scale hosts
• Supports sharing SAN with non-M1000e servers
• Ease of administration

Concerns:
• Must bring external switch stack down to apply switch firmware updates.
Table 42 Vertical stack with horizontal LAG connecting M1000e to non-stacked external switch infrastructure

Advantages:
• Switch firmware updates can be made on all switches without downtime
• Fixed hops between hosts and array volumes regardless of number of M1000e enclosures
• Supports ability to scale arrays
• Supports ability to scale hosts
• Supports sharing SAN with non-M1000e servers
• High availability

Concerns:
• Limited scalability of LAG bandwidth.
Table 43 Dual M-Series stacks to stacked external switch infrastructure

M-Series I/O module stack(s) to stacked external switch infrastructure

Advantages:
• Each M-Series I/O module stack can be updated independently without bringing the SAN down
• SAN array-to-array traffic is isolated outside of the M1000e enclosures
• Fixed hops between hosts and array volumes regardless of number of M1000e enclosures
• Supports ability to scale arrays
• Supports ability to scale hosts
• Supports sharing SAN with non-M1000e servers.
Table 44 M-Series I/O module stack(s) to non-stacked external switch infrastructure

M-Series I/O module stack(s) to non-stacked external switch infrastructure

Advantages:
• Each M-Series I/O module stack can be updated independently without bringing the SAN down
• SAN array-to-array traffic is isolated outside of the M1000e enclosures
• Fixed hops between hosts and array volumes regardless of number of M1000e enclosures
• Supports ability to scale arrays
• Supports ability to scale hosts
• Supports sharing SAN with non-M1000e servers.
10 Fluid File System
The Dell EqualLogic FS Series NAS appliance adds scale-out unified file and block Network Attached Storage (NAS) capabilities to any EqualLogic PS Series iSCSI SAN. The key design features and benefits provided by the EqualLogic FS Series appliance include:
• A scalable unified (block and file), virtualized IP based storage platform.
• Seamless capacity and performance scaling to meet growing storage demands.
10.1 FS Series architecture The FS Series appliance connects to an EqualLogic PS Series SAN via standard iSCSI connection paths with iSCSI based block level I/O taking place between the FS Series appliance initiators and the target volumes on the PS Series arrays. The FS Series controllers host the FluidFS based NAS file system and all front-end client protocol connection management and load balancing, as well as manage all backend data protection and high-availability functions.
Figure 33 FS Series NAS (file only)

10.2 Dell FluidFS
Dell EqualLogic FS Series appliances, coupled with PS Series arrays, offer a high performance, high availability, scalable NAS solution. FS Series Firmware V2.0 adds the following features:
• NAS Container Replication
• NAS Antivirus Service for CIFS Shares
• New Monitoring Features
• Support for FS7600 and FS7610 Appliances

Dell FluidFS is a high performance clustered file system.
Snapshots: FluidFS snapshots are read only and redirect-on-write. They are created and managed by the FS Series appliance to provide file system level snapshot capability. They function independently of and have no impact on the operation of PS Series array based snapshots. Note: FS Series FluidFS snapshots and PS Series volume based snapshots function independently and have no impact on each other. Please see the following whitepaper for more information on Dell FluidFS: Dell Fluid File System: http://www.
11 FS Series NAS Appliances
The FS7500 is the premier offering in the Dell FS Series product line. Table 45 lists the basic functional details for each FS Series product.

Table 45 FS Series Models
Table 46 NAS cluster configuration limits

Attribute                       Single NAS appliance cluster (two NAS controllers)   Dual NAS appliance cluster (four NAS controllers)
Maximum NAS reserve             509 TB usable                                        509 TB usable
Minimum NAS reserve             512 GB                                               1024 GB
Maximum file size               4 TB                                                 4 TB
Maximum local groups            300                                                  300
Files in a cluster              64 billion                                           128 billion
Directories in a cluster        34 billion                                           68 billion
Containers in a cluster         256                                                  512
Minimum container size          20 MB                                                20 MB
Maximum container size
Maximum replication partners    16                                                   16
Number of NAS controllers in replication source and destination clusters: each NAS cluster in the replication partnership must contain the same number of NAS controllers (applies to both cluster sizes).
Table 48 Valid add-controllers pairs

Controller pair 1   Controller pair 2
FS7500              FS7500
FS7500              FS7600
FS7600              FS7600
FS7600              FS7500
FS7610              FS7610

11.3 FS7500 system components
The system components in the initial release of the EqualLogic FS Series NAS appliance (the FS7500) consist of two controller node units and one backup power supply (BPS) shelf containing two redundant power supplies. The system components and required power cabling paths are shown in Figure 34.
System controllers The system contains dual active-active controller nodes with large onboard battery-backed caches and 24 GB of battery protected memory per node. They operate in an active-active environment mirroring the system’s cache. Each node regularly monitors the BPS battery status. They require the BPS to maintain a minimum level of power stored during normal operation to ensure that they can execute a proper shutdown procedure in the event of power interruption.
11.6 FS7610 components

Figure 36 FS7610 NAS Appliance (two hot-pluggable NAS controllers)
12 FS Series file level operations In this section we provide an overview of key FS Series Appliance features along with some operational limits. Please refer to the NAS Service Management Tasks section of the EqualLogic PS Series Group Administration Manual (version 5.1 or later) for detailed descriptions and all administrative task procedures. 12.1 NAS cluster The NAS cluster is the logical run-time container in which one or more NAS containers are created.
Figure 37 Containment Model: PS Series Storage and FS Series NAS Reserve The addition of the FS Series NAS appliance to a PS Series group does not change the functional behavior of PS Series groups and pools. PS Series groups and pools are explained in more detail in sections 5.1 and 5.1.1. 12.3 NAS Container To provision NAS storage, you need to create NAS containers within the NAS cluster. Inside a NAS Container you can create multiple CIFS shares and/or NFS exports.
Figure 38 Containment Model: FS Series NAS Reserve, Containers, Shares and Exports

12.3.1 NAS Container security styles
There are three security style options that can be applied to a NAS Container:
• UNIX: Controls file access using UNIX permissions in all protocols. A client can change permissions only by using the chmod and chown commands on the NFS mount point. You can also specify the UNIX permissions for files and directories created in the container by Windows clients.
accessible as CIFS shares and NFS exports, a general rule of thumb for how to assign the security style would be as follows:
• If your users are predominantly Linux/UNIX based, use the UNIX style.
• Likewise, if your users are predominantly Windows/CIFS based, use the NTFS style.
• If you have mixed clients, use the style applicable to the majority of users. Then create a user mapping of your Windows to Linux users or vice versa. The user permissions are equivalent to the mapped user.
• Snapshot reserve space utilization is a function of the data change rate in the container.
• Old snapshots are deleted to make room for new snapshots if enough snapshot reserve is not available.
• NDMP based backup automatically creates a snapshot, from which the backup is created.

12.5 NAS Snapshots and Replication
The FS76x0 further ensures business continuity through support for file system point-in-time snapshots and snapshot-based, asynchronous replication.
Figure 39 NAS replication

Replication requirements:
• Both sites must be EqualLogic FS appliances.
• Replication from FS7600 to any other Dell product using FluidFS is not supported.
• Each site (or EqualLogic group) must have the same number of NAS controllers or NAS appliances.
• FS appliances must run firmware version 2.0 or higher and PS arrays must run version 6.0 or higher.
• There must be a network link between the two sites (or EqualLogic groups) via the SAN side network.
• Temporary promotion: Promote read/write with an ability to demote. However, all writes are lost when the container is demoted to resume replication from the primary site.
• Permanent promotion: Promote read/write in case the primary site or container is not available and clients need to be failed over to the secondary site. All writes are preserved.
13 FS Series NAS Configuration
In Section 11.3 we presented a high level view of the FS7500 NAS component architecture (see Figure 34). In this section we provide detailed connection diagrams demonstrating how to setup fully connected iSCSI SAN and client LAN connection paths for the FS7500 and FS7600/FS7610 appliances.

Note: It is recommended to keep the client and SAN side networks physically separate and deploy two switches on both sides to provide redundancy in the event of a switch failure.
Figure 40 Connection Paths for FS7500 Client LAN Figure 41 below shows the iSCSI SAN, IPMI, and node interconnect paths. Pay careful attention to how the controller ports alternate between redundant switch paths. Note: With the exception of the IPMI connection paths, corresponding ports on each controller node must connect to the same SAN switch. This connection pattern is shown in Figure 41.
Figure 41 Connection Paths for FS7500 iSCSI SAN, IPMI and Controller Interconnect

13.2 FS7600/7610 connection paths
The Dell EqualLogic NAS appliances require the following networks:
• Client network: Used for client access to the NFS exports and CIFS shares hosted by the NAS cluster.
• SAN/internal network: Used for internal communication between the controllers and communication between the controllers and the EqualLogic PS Series SAN. The SAN and internal networks use the same set of switches.
It is recommended to keep the client and SAN side networks physically separate and deploy two switches on both sides to protect against a switch failure.

Figure 42 FS7600 and FS7610 networks

See the figures below for network connections.

Figure 43 FS7600 network
Figure 44 FS7610 network
Installation/Expansion
• If installing the FS7500/FS76x0 into an existing EqualLogic SAN, verify the existing customer network meets the minimum requirements. Refer to the FS76x0 installation guide for more information on network requirements. Early validation helps avoid issues during and after the install.
• FS7500/FS76x0 installation service is mandatory.
• All NAS appliances in a NAS cluster need to have either 1Gb or 10Gb connectivity. Appliances with different connectivity cannot be mixed in a NAS cluster.
14 Data Center Bridging (DCB) The enhancement to the Ethernet Specifications (IEEE 802.3 specifications) called Data Center Bridging (DCB) enables bandwidth allocation and lossless behavior for storage traffic when the same physical network infrastructure is shared between storage and other traffic. The network is the fundamental resource that connects the assorted devices together to form the datacenter Ethernet infrastructure.
the same physical infrastructure to increase operational efficiency, constrain costs, and ease network management.

There are primarily three progressive versions of DCB:
• Cisco, Intel, Nuova (CIN) DCBX
• Converged Enhanced Ethernet (CEE) DCBX, or baseline DCBX
• Institute of Electrical and Electronics Engineers (IEEE) DCB

DCB technologies based on standards include:
• PFC – Priority-based Flow Control (802.1Qbb)
• ETS – Enhanced Transmission Selection (802.1Qaz)
• CN – Congestion Notification (802.1Qau)
Figure 45 Physically separate dedicated network infrastructures A converged network includes carrying both SAN and other network traffic such as server LAN on a single network infrastructure as shown in Figure 46. iSCSI can be converged with non-storage based server LAN traffic, allowing the network ports and the inter-connecting links to carry multiple traffic types or protocols.
Figure 46 DCB enabled converged network infrastructure

14.2 DCB requirements for EqualLogic
It is required that all devices in the EqualLogic SAN support DCB for iSCSI when this functionality is enabled. If any device in the SAN does not support DCB, then DCB needs to be disabled at the switches for the entire SAN. Once all devices in the SAN are DCB compliant, then DCB can be re-enabled.
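The end-to-end requirement above (every device supports DCB, or DCB is disabled for the whole SAN), together with the ETS and PFC parameters described in this chapter, can be expressed as a simple configuration check. This is an illustrative sketch with made-up device records and placeholder priority values; real validation is performed on the switches and CNAs themselves (and with the Dell validation tools referenced later in this section), not with a script like this.

```python
def validate_dcb(devices: list[dict], ets_shares: dict[int, int],
                 iscsi_priority: int, pfc_enabled: set[int]) -> list[str]:
    """Return problems found in a proposed DCB configuration for an EqualLogic SAN."""
    issues = []
    if not all(d["dcb_capable"] for d in devices):
        issues.append("at least one SAN device lacks DCB support: disable DCB on all switches")
    if sum(ets_shares.values()) != 100:
        issues.append("ETS bandwidth shares must total 100%")
    if iscsi_priority not in pfc_enabled:
        issues.append("PFC is not enabled for the iSCSI priority (lossless behavior is lost)")
    return issues


devices = [{"name": "ToR-1", "dcb_capable": True}, {"name": "CNA-host1", "dcb_capable": True}]
# Two ETS priority groups splitting bandwidth 50/50, iSCSI mapped to priority 4 with PFC on:
print(validate_dcb(devices, ets_shares={0: 50, 4: 50}, iscsi_priority=4, pfc_enabled={4}))
```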
The minimum switch and server CNA/NIC requirements to support an end-to-end DCB solution with EqualLogic are:
• Data Center Bridging Exchange (DCBx) – DCB protocol that performs discovery, configuration, and mismatch resolution using Link Layer Discovery Protocol (LLDP).
• Application Priority (iSCSI TLV) – Switches must support configuration of a priority value for the iSCSI protocol and advertisement to peer ports.
The recommended operational mode is that switches use non-willing DCBx mode, while server NIC/CNAs and storage ports operate in willing mode. This is the default behavior on most switches and NICs/CNAs. The DCB parameters (ETS, PFC, iSCSI application priority) are then configured in the switch devices and learned by the end-devices.

14.4 Basic Deployment Topology Example
This is an example topology using a single layer of switching with rack servers.
• Blade IOM switch with ToR switch. 14.5.1 Blade IOM switch only Network ports of both the hosts and storage are connected to the M1000e blade IOM switches. No ToR switches are required. The switch interconnect can be a stack or a LAG, and no uplink is required. Figure 48 Blade IOM switch only 14.5.2 ToR switch only Network ports of both the hosts and the storage are connected to external ToR switches.
Figure 49 ToR switch only

14.5.3 Blade IOM switch with ToR switch
Host network ports are connected to the M1000e blade IOM switches and the storage network ports are connected to ToR switches. The switch interconnect can be a stack, a LAG, or a VLTi and should
connect the ToR switch to better facilitate inter-array member traffic. An uplink stack, LAG, or VLT LAG from the blade IOM switch tier to the ToR switch tier is also required.

Figure 50 Blade IOM switch with ToR switch (two tier design)
Figure 51 M1000e Blade Enclosure full DCB solution

The components used in the solution are listed here:
• M1000e chassis
• M620 blade server
• PS-M4110XV array
• Force10 MXL (2)
• Broadcom 57810 CNA

14.7 VLANs for iSCSI
A non-default VLAN is required for operating prioritized lossless iSCSI traffic in a DCB enabled Ethernet infrastructure. Switch ports that are based on the IEEE 802.1Q VLAN specification forward frames in the default or native VLAN without tags (untagged frames).
For more information on the DCB requirements and configuration guidelines, see the following white paper: Data Center Bridging: Standards, Behavioral Requirements, and Configuration Guidelines with Dell EqualLogic iSCSI SANs For more information on the DCB requirements for EqualLogic, to ensure that DCB is properly enabled and/or disabled across all devices, and to assist with identifying and resolving basic DCB configuration issues in the EqualLogic SAN, see the following white paper: EqualLogic DCB Config
Appendix A Network ports and protocols PS Series groups use a number of TCP and UDP protocols for group management, I/O operations, and internal communication. If you have switches or routers set to block these protocols, you may need to unblock them to allow management or I/O operations to work correctly. The required and optional protocols are listed in the following sections. A.1 Required ports and protocols Table 49 lists the ports and protocols required for operating an EqualLogic iSCSI SAN.
Table 50 Optional ports and protocols

Service                  Type/Port    Protocol                        Access
CLI management           TCP 23       Telnet                          To group IP address
CLI management           TCP 22       SSH                             To group IP address
Web based management     TCP 80       HTTP                            To group IP address
Web based management     TCP 3002     GUI communication               To group IP address
Web based management     TCP 3003     GUI communication (encrypted)   To group IP address
SNMP                     UDP 161      SNMP                            To and from group IP address
Syslog                   UDP 514      Syslog                          From group IP address
EqualLogic diagnostics   TCP 20       FTP                             Software update and diagnostic procedures
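A quick way to confirm that a host can reach these management ports (and the required iSCSI port, TCP 3260) on the group IP address is a simple TCP connect test. The sketch below uses only the Python standard library; the group IP and port list are placeholders to adapt to your environment, and UDP services (SNMP, syslog) cannot be checked this way.

```python
import socket

GROUP_IP = "192.168.100.50"   # placeholder group IP address
PORTS = {3260: "iSCSI", 22: "SSH", 80: "HTTP", 3002: "GUI", 3003: "GUI (encrypted)"}


def tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for port, name in PORTS.items():
    state = "open" if tcp_port_open(GROUP_IP, port) else "blocked/closed"
    print(f"{name:16} TCP {port}: {state}")
```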
Appendix B Recommended switches
The list of recommended switches is now maintained in a separate document.
• EqualLogic Compatibility Matrix http://en.community.dell.com/dellgroups/dtcmedia/m/mediagallery/19856862/download.aspx
Appendix C Supported iSCSI initiators
The list of supported iSCSI initiators is now maintained in a separate document.
• EqualLogic Compatibility Matrix http://en.community.dell.com/dellgroups/dtcmedia/m/mediagallery/19856862/download.aspx
Appendix D Upgrade paths for EqualLogic PS Series arrays

Table 51 EqualLogic upgrade paths

End of sales life arrays   Latest available conversion model   1Gb to 10Gb conversion availability   Drive upgrades availability
PS-50 thru PS3000          None                                None                                  None
PS4000                     Yes, PS6000 – Dellstar              Yes, PS6010 – Dellstar                Yes – cus kit tool
PS5000                     Yes, PS6000 – Dellstar              Yes, PS6010 – Dellstar                Yes – cus kit tool
PS6000                     None                                Yes, PS6010 – Dellstar                Yes – cus kit tool
PS5500                     Yes, PS6500 – Dellstar              Yes, PS6510 –
Related publications
The following locations provide additional background and technical details supporting configuration of EqualLogic SANs.
• EqualLogic Compatibility Matrix: http://en.community.dell.com/dellgroups/dtcmedia/m/mediagallery/19856862/download.aspx
• EqualLogic Technical Content: http://en.community.dell.com/techcenter/storage/w/wiki/2660.equallogic-technicalcontent.aspx
• Rapid EqualLogic Configuration Portal: http://en.community.dell.com/techcenter/storage/w/wiki/3615.
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.