Dell EqualLogic Configuration Guide
Dell Storage Infrastructure and Solutions
Configure unified block and file storage solutions based on EqualLogic PS Series arrays and the FS Series Family of NAS Appliances. Recommendations and best practices for iSCSI SAN and scale-out NAS network fabric design. Updated capacity guidelines, capabilities, limitations, and feature sets for the EqualLogic product family.
Version 13.
Abstract This configuration guide provides technical guidance for designing and implementing Dell EqualLogic PS Series storage solutions in iSCSI SAN environments.
Revision History
13.2 (June 2012)
13.1 (March 2012): Updated for PS6110/4110. Updated Blade Integration in section 4.7. Updated the capacity tables from raw storage sizes to usable storage sizes. Added replication partner compatibility information.
12.4 (November 2011): New sections and other changes throughout the document to include coverage of FS Series NAS Appliance (FS7500)
12.
Table of contents
1 Introduction
2 PS Series Storage Arrays
2.1 Array Models
3.8.2 Replication Paths
3.8.3 Replication Process
3.8.4 Fast failback
5.3.1 FS7500 Supported Configuration Limits
5.3.2 FS7500 System Components
5.3.3 FS7500 file system operation on controller failover
6 FS Series File Level Operations
1 Introduction With the Dell™ EqualLogic™ PS Series of storage arrays Dell provides a storage solution that delivers the benefits of consolidated networked storage in a self-managing, iSCSI storage area network (SAN) that is affordable and easy to use, regardless of scale. By eliminating complex tasks and enabling fast and flexible storage provisioning, these solutions dramatically reduce the costs of storage acquisition and ongoing operations.
2 PS Series Storage Arrays PS Series Storage SANs provide a peer storage architecture comprised of one or more independent arrays. Each array contains its own controllers, cache, storage, and interface ports. Grouped together they can create one or more single instance storage pools that are based on the IETF iSCSI standard.
Table 2 PS4100/PS6100 Array Models (Array Model; Drive Type; Number of Drives)
PS4100E: 3.5” SAS 7.2K RPM; 12
PS4100X: 2.5” SAS 10K RPM; 24
PS4100XV: 2.5” SAS 15K RPM; 24
PS4100XV: 3.5” SAS 15K RPM; 12
PS6100E: 3.5” SAS 7.2K RPM; 24
PS6100X: 2.5” SAS 10K RPM; 24
PS6100XV: 2.5” SAS 15K RPM; 24
PS6100XV: 3.5” SAS 15K RPM; 24
PS6100S: SSD; 12 or 24
PS6100XS: SSD + SAS 10K RPM; 7 SSD + 17 SAS
PS4110E: 3.5” SAS 7.2K RPM; 12
PS4110X: 2.5” SAS 10K RPM; 24
PS4110XV: 2.5” SAS 15K RPM; 24
PS4110XV: 3.
Replication partners per group: 16 / 16
Replication partners per volume: 1 / 1
Members per group: 2 / 16
Members per pool: 2 / 8
Pools per group: 2 / 4
Volumes per collection: 8 / 8
Collections per group (snapshot and replication): 100 / 100
Volume connections (each time an iSCSI initiator connects to a volume counts as a connection) (e,f): 512 per pool, 1024 per group with 2 pools / 1024 per pool, 4096 per group with 4 pools
Access control records per volume and its snapshots: 16 / 16
Simultaneous management s
Table 4 Array Controller Types – all models prior to PS4100/PS6100 (Controller Type; Network Interfaces; Storage Type; Notes)
Type 1: 3 x 1GbaseT, 3 x 1Gb SFP (combo); SATA; Original Controller Design, PS50 – PS2400, 1GB Cache
Type 2: 3 x 1GbaseT, 3 x 1Gb SFP (combo); SATA; PS50 – PS2400, 1GB Cache
Type 3 SAS: 3 x 1GbaseT; SAS; PS3000 – PS5000, 1GB Cache, Cannot mix Type 3 SAS with Type 3 SATA
Type 3 SATA: 3 x 1GbaseT; SATA; PS3000 – PS5000, 1GB Cache, Cannot mix Type 3 SAS with Type 4 controller
Type
prevent volume connections between hosts and SAN from being dropped in the event of an active controller failure. The Active Controller is the controller which is processing all disk and network I/O operations for the array. A second controller in dual controller configurations will always be in a “passive” operating mode. In this mode, the secondary controller will exhibit the following characteristics: • • 2.3.
Figure 1 Partially Connected Controller Failover
Note how IP addresses are reassigned to the ports during the failover processes shown in Figure 1 and Figure 2.
Figure 2 Fully Connected Controller Failover
2.3.4 Controller Types in PS4100/PS6100 models
New controller types for the PS4100 and PS6100 model arrays became available starting in August 2011. Table 5 lists each Dell EqualLogic controller along with some characteristics.
Table 5 PS4100/PS6100 controller types: Type 11, Type 12, Type 14, Type 17
2.3.
2.3.6 Controller failover behavior: PS41x0/PS61x0
In the event of a controller failure the following changes occur:
• The passive controller immediately activates and continues to process all data requests to the array.
• Vertical port pairing ensures that IP addresses assigned to each of the failed controller's Ethernet ports apply to the corresponding ports on the second controller.
As stated in Section 2.3.
2.3.7 Vertical Port Failover behavior in PS4100/PS6100 Controllers In PS Series controllers prior to PS4100/6100 families, a link failure or a switch failure was not recognized as a failure mode by the controller. Thus a failure of a link or an entire switch would reduce bandwidth available from the array. Referring to Figure 4 or Figure 5, assume that CM0 is the active controller.
In a redundant switch SAN configuration, to optimize the system response in the event you have a vertical port failover you must split the vertical port pair connections between both SAN switches. The connection paths illustrated in Figure 6 and Figure 7 show how to alternate the port connection paths between the two controllers. Also note how IP addresses are assigned to vertical port pairs.
Figure 7 PS6100 Vertical Port Failover Process and Optimal Connection Paths
2.3.8 Vertical Port Failover behavior in PS4110/PS6110 Controllers
In PS Series controllers prior to the PS4110/6110 families, a link failure or a switch failure was not recognized as a failure mode by the controller. Consequently, a failure of a link or an entire switch reduced the bandwidth available from the array. Referring to Figure 4 or Figure 5, assume that CM0 is the active controller.
Figure 8 4110/6110 Vertical Port Failover With the PS4110/PS6110 family of controllers, vertical port failover can ensure continuous full bandwidth is available from the array even if you have a link or switch failure. This is accomplished by combining 10GbE “eth0” ports in each controller into a single logical port from the point of view of the active controller. In a fully redundant SAN configuration, you must configure the connection as shown in Figure 9.
Figure 9 4110/6110 Vertical Port Failover Scenario
2.3.9 Controller Firmware
Each EqualLogic PS Series array runs a core operating system in firmware that provides all of the PS Series features and functionality. The firmware version is defined using a version number and will be updated from time to time as new features are added or for general maintenance improvements. The firmware version number takes the following form: "X.Y.Z":
• "X" is used to identify the "major" release number.
In addition to the Release Notes, the process for updating controller firmware is described in detail in the following document (Support ID required for login access):
• PS Series Storage Arrays: Updating Storage Array Firmware, available at: https://www.equallogic.com/support/download_file.aspx?id=594
Supported firmware upgrade paths (up to version 5.0.x) are shown in Table 6 below. If you are starting with v4.2.* or v4.3.* then you can update straight to v5.0.4. If the array is already running v5.
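The same-major-and-minor rule that governs group membership (see Section 3.1) can be checked mechanically from the "X.Y.Z" version string described above. The following is a minimal sketch in Python; the helper names are illustrative and not part of any Dell tool.

def parse_firmware(version: str) -> tuple[int, int, int]:
    """Split an "X.Y.Z" firmware string into (major, minor, maintenance)."""
    major, minor, maintenance = (int(part) for part in version.split("."))
    return major, minor, maintenance

def same_major_minor(version_a: str, version_b: str) -> bool:
    """Arrays grouped together must run the same major and minor release."""
    return parse_firmware(version_a)[:2] == parse_firmware(version_b)[:2]

print(parse_firmware("5.0.4"))              # (5, 0, 4)
print(same_major_minor("5.2.1", "5.2.4"))   # True: same major.minor release
print(same_major_minor("5.2.1", "5.1.2"))   # False: minor release differs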
Table 7 PS Series Firmware Compatibility Host Integration Tools for Microsoft Firmware V5.2.x 2.2.x 2.1.x 4.0.x 3.5.x Firmware V5.1.x 2.2.x 2.1.x 4.0.x 3.5.x Host Integration Tools for VMware 3.1.x 3.0.x EqualLogic Storage Replication Adapter for VMware Site Recovery Manager EqualLogic Multipathing Extension Module for VMware vSphere Host Integration Tools for Linux Manual Transfer Utility SAN HeadQuarters 2.4 Firmware V4.3.x 2.2.x 2.1.x 4.0.x 3.5.x 3.4.x 3.3.x 2.0.x 2.1.x 2.0.x 1.0.x 3.1.x 3.0.
Table 8 shows the drive layouts that are enforced when using a RAID 5 policy based on the number of drives in each array/hot spare configuration, and the total usable storage available for each model.
2.4.2 RAID 6 Drive Layouts and Total Usable Storage
Table 9 shows the drive layouts that are enforced when using a RAID 6 policy based on the number of drives in each array/hot spare configuration, and the total usable storage available for each model.
2.4.3 RAID 10 Drive Layouts and Total Usable Storage Using a RAID 10 policy, Table 10 shows the drive layouts that are enforced based on the number of drives in each array/hot spare configuration, and the total usable storage available for each model. RAID 10 (mirrored sets in a striped set) combines two high performance RAID types: RAID 0 and RAID 1. A RAID 10 is created by first building a series of two disk RAID 1 mirrored sets, and then distributing data over those mirrors.
2.4.4 RAID 50 Drive Layouts and Total Usable Storage Table 11 shows the drive layouts that are enforced when using a RAID 50 policy based on the number of drives in each array/hot spare configuration and the total usable storage available for each model. RAID 50 (RAID 5 sets in a striped set) is created by first creating two or more RAID 5 sets and then striping data over those RAID5 sets. RAID 50 implementations can tolerate a single drive failure per RAID5 set.
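As a rough way to see how these layouts translate into capacity, the sketch below estimates usable space for each policy from drive count, drive size, and spares. It deliberately ignores array metadata, formatting overhead, and the exact set sizes EqualLogic firmware chooses, so it will not match Tables 8 through 11 exactly; use those tables for real sizing. Function and parameter names are illustrative assumptions.

def usable_capacity_gb(drive_count, drive_size_gb, raid_policy,
                       hot_spares=1, raid5_set_size=12):
    """Approximate usable capacity; ignores metadata and vendor layout details."""
    data_drives = drive_count - hot_spares
    if raid_policy == "raid10":     # two-disk mirrors, striped together
        return (data_drives // 2) * drive_size_gb
    if raid_policy == "raid6":      # one set with double parity
        return (data_drives - 2) * drive_size_gb
    if raid_policy == "raid5":      # one set with single parity
        return (data_drives - 1) * drive_size_gb
    if raid_policy == "raid50":     # striped RAID 5 sets, one parity drive each
        full_sets = data_drives // raid5_set_size
        return full_sets * (raid5_set_size - 1) * drive_size_gb
    raise ValueError("unknown RAID policy: " + raid_policy)

# Example: 24 x 600 GB drives with one hot spare under single-set policies.
for policy in ("raid5", "raid6", "raid10"):
    print(policy, usable_capacity_gb(24, 600, policy))
# RAID 50 built from two 11-drive RAID 5 sets (two drives left as spares):
print("raid50", usable_capacity_gb(24, 600, "raid50", hot_spares=2, raid5_set_size=11))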
3 PS Series Block Level Operations 3.1 Groups A PS Series SAN Group is a Storage Area Network (SAN) comprised of one or more PS Series arrays connected to an IP network. Each array in a group is called a group member. Each member is assigned to a storage pool. There can be up to 4 pools within the group. A group can consist of up to 16 arrays of any family or model as long as all arrays in the group are running firmware with the same major and minor release number.
• If all members in the pool are running PS Series firmware v5.0 or later, then you can mix PS5500E, PS6500E/X and PS6510E/X models together with other array models in the same pool.
• If you are running a PS Series firmware version prior to v5.0, then PS5500E, PS6500E/X and PS6510E/X models must reside in a separate pool from other array types.
Figure 3 shows a PS Series group with the maximum of 4 pools. Note the use of Pool 3 for containing PS5500/PS6500 series arrays only.
• Do not mix arrays with different drive technologies (SATA, SAS, SSD) within a single pool unless they are running a unique RAID policy.
• Do not mix arrays with different controller speeds (1GbE, 10GbE) within a single pool unless they are each running unique RAID policies.
3.3 Volumes
Volumes provide the storage allocation structure within an EqualLogic SAN.
Note: IQN names are assigned to volumes automatically when they are created. They cannot be changed for the life of the volume. If a volume name is changed, the IQN name associated with the volume will remain unchanged. 3.3.
• Use Pool Free Space, not Group Free Space, when making all determinations of thin provisioned volume physical capacity allocation.
• Create regular volumes before creating thin provisioned volumes. This provides the administrator with a better view of the remaining available free space in the pool.
• Snapshots of volumes with a high data change rate will require a larger snapshot reserve space (a rough sizing sketch follows this list).
• Snapshots have access control lists that are inherited from the parent volume by default.
• Snapshot reserve space for any volume can be decreased at any time. The minimum size allowed will be based on the current space usage consumed by existing snapshots using the snapshot reserve.
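As a rough illustration of the change-rate point above, the sketch below estimates snapshot reserve as a percentage of volume size from an assumed daily change rate, snapshot frequency, and retention. The formula and the 20% safety margin are illustrative assumptions, not Dell-published sizing rules; actual reserve consumption should be monitored per volume.

def snapshot_reserve_pct(change_rate_pct_per_day, snapshots_per_day,
                         days_retained, safety_margin=1.2):
    """Return an estimated snapshot reserve as a percent of volume size."""
    changed_per_snapshot = change_rate_pct_per_day / snapshots_per_day
    retained_snapshots = snapshots_per_day * days_retained
    return changed_per_snapshot * retained_snapshots * safety_margin

# 5% daily change, 4 snapshots per day, 7 days retained:
print(round(snapshot_reserve_pct(5.0, 4, 7), 1), "% of volume size")   # 42.0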
3.8 Replication Replication is a powerful feature that can help you manage and implement a disaster recovery strategy for your business applications and data. By replicating business-critical volumes, you ensure that your business operations can resume quickly on a partner group in the event of a disaster on the primary group. You also have the ability to restore the configuration to its original state if the problem on the original group can be corrected.
3.8.2 Replication Paths
The example replication paths shown in Figure 11 are described below.
• Basic partnership: one partner group hosts the primary copy of the volume and a second partner group hosts the replica copy of the same volume. We also show the reverse direction of the path for the Fast Failback replication feature, if it is enabled for the replica set.
A volume replica set consists of a full copy of the primary volume, with data synchronized to the beginning of the most current completed replication, plus a time-sequenced set of replicas, where each replica corresponds to the state of the volume at the beginning of a prior replication. The number of prior replicas in the replica set that can be stored on the secondary group is limited by the size of the Replica Reserve allocated for that volume and the amount of data that changes.
Figure 12 Replication Process
With replication, it does not matter whether the volume is thin provisioned or a traditional volume, since in either case only the data that has changed will be copied to the replica. On the secondary side, volumes are always thin provisioned to conserve the available capacity used by the Replica Reserve for that volume. 3.8.
Figure 13 Replica Reserve, Local Reserve, and Delegated Space
Auto-replication requires reserved disk space on both the primary and secondary groups. The amount of space required depends on several factors:
• Volume size.
• The amount of data that changes (on the source volume) between each replication period.
• The number of replicas that need to be retained on the secondary site.
• Whether a failback snapshot is retained on the primary group.
Table 14 Replication Space Sizing Guidelines (Replication Space; Recommended Value; Space Efficient Value)
Local Reserve (located on the primary group): Recommended: No Failback Snapshot: 100%, Keep Failback Snapshot: 200%; Space efficient: No Failback Snapshot: 5% + CR, Keep Failback Snapshot: 10% + CR (a)
Replica Reserve (located on the secondary group): Recommended: 200% (b)
Delegated Space (located on the secondary group; the replica reserve space for all replica sets coming from a single group)
(a) (b)
3.8.
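The sketch below applies the values from Table 14 to a concrete volume, interpreting "CR" as the amount of data changed on the source volume between replications. The space-efficient Replica Reserve value and the table's footnotes were not captured above, so only the figures shown are used; treat this as an illustrative calculation rather than a sizing tool, with hypothetical helper names.

def replication_space_gb(volume_gb, change_gb, keep_failback_snapshot, space_efficient):
    """Local Reserve and (recommended) Replica Reserve per the Table 14 values."""
    if space_efficient:
        pct = 0.10 if keep_failback_snapshot else 0.05
        local_reserve = volume_gb * pct + change_gb          # "5% / 10% + CR"
    else:
        local_reserve = volume_gb * (2.0 if keep_failback_snapshot else 1.0)
    replica_reserve = volume_gb * 2.0                        # recommended 200%
    return {"local_reserve_gb": local_reserve, "replica_reserve_gb": replica_reserve}

def delegated_space_gb(replica_reserves_gb):
    """Delegated space on the secondary must hold all inbound replica reserves."""
    return sum(replica_reserves_gb)

# 500 GB volume, about 25 GB changed per replication, failback snapshot kept:
print(replication_space_gb(500, 25, keep_failback_snapshot=True, space_efficient=True))
# {'local_reserve_gb': 75.0, 'replica_reserve_gb': 1000.0}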
Table 15 Firmware Replication Compatibility (Firmware of “Local” Group: compatible Firmware on Replication Partner)
V5.1.x, V5.2.x: V5.0.x, V5.1.x, V5.2.x
V5.0.x: V4.2.x, V4.3.x, V5.0.x, V5.1.x, V5.2.x
V4.2.x, V4.3.x: V4.1.x, V4.2.x, V4.3.x, V5.0.x
V4.1.x: V4.0.x, V4.1.x, V4.2.x, V4.3.x
V4.0.x: V3.2.x, V3.3.x, V4.0.x, V4.1.x, V4.2.x
V3.2.x, V3.3.x: V3.0.x, V3.1.x, V3.2.x, V3.3.x, V4.0.x
V3.0.x, V3.1.x: V3.0.x, V3.1.x, V3.2.x, V3.3.x
3.
4 EqualLogic SAN Design An EqualLogic iSCSI SAN can be operated in any network that supports the industry standards and IP subnet design guidelines described in this section. Because of this flexibility, there are many network design and configuration choices that can affect SAN performance. The following sections provide details related to network design and configuration to support the use of an EqualLogic SAN.
Note: Hosts can be in a different subnet as long as those hosts have layer 3 routing available to the subnet containing the arrays and the group's well known IP address.
• Rapid Spanning Tree Protocol must be enabled if the SAN infrastructure has more than two switches in a non-stacked configuration, and portfast must be enabled on all edge device ports (hosts, FS Series appliances and arrays).
• For best performance and reliability, we recommend that all interconnection paths between non-stacking switches (LAGs) use a dynamic link aggregation protocol such as LACP.
4.2.1 Quality of Service (QoS)
Quality of service is described as either of the following:
• The ability to provide different priority levels to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
• Due to the clustered storage traffic patterns used by the EqualLogic SAN architecture, switches that support “cut-through” mode are not suitable for use in an EqualLogic SAN and may actually result in lower overall SAN performance.
• Support for IEEE 802.3x flow control (passive and/or active) on ALL ports: Switches and network interface controllers used in an EqualLogic SAN must be able to passively respond to any “pause” frames received.
fabric. Figure 14 shows the two common methods for interconnecting switches, using either stacking switches or non-stacking switches. Figure 14 Switch Interconnects 4.3.1.1 Stacking Switches Stacking switches provide the preferred method for creating an inter-switch connection within a Layer 2 network infrastructure.
Table 16 Link Aggregation Types
Static: Static link aggregation defines a set of links that provide a point to point connection between two switches. These links may or may not provide failover redundancy or traffic load management.
LACP: Link Aggregation Control Protocol is based on IEEE 802.3ad or IEEE 802.1AX. LACP is a dynamic LAG technology that automatically adjusts to the appearance or disappearance of links within the defined LACP group.
Figure 15 Using a LAG to Interconnect Switch Stacks
4.3.2 Sizing Inter-Switch Connections
Use the guidelines in Table 17 as a starting point for estimating inter-switch connection sizes.
Table 17 Switch Interconnect Design Guidelines (Connection Speeds; Interconnection Guidelines)
1-5 arrays: 1Gb of IST bandwidth per active array controller port (up to the aggregated maximum bandwidth of the IST.
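Applying the small-group guideline above (1 Gb of inter-switch bandwidth per active array controller port for 1 to 5 arrays) reduces to simple arithmetic, sketched below. The guidance for larger groups was cut off in the text above, so the function covers only the 1-5 array case, and the names are illustrative.

def ist_bandwidth_gbps(arrays, active_ports_per_array, port_speed_gbps=1.0):
    """Starting-point IST sizing for small groups: 1 Gb per active controller port."""
    if not 1 <= arrays <= 5:
        raise ValueError("this guideline covers groups of 1 to 5 arrays only")
    return arrays * active_ports_per_array * port_speed_gbps

# Example: three arrays, each with four active 1GbE controller ports:
print(ist_bandwidth_gbps(3, 4), "Gbps of inter-switch bandwidth")   # 12.0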
Table 18 Stacking vs. Inter-Switch Trunking Interconnect Type Stacking Link Aggregation Groups (LAG) Primary Purpose Analysis Create a larger, logical switch within an isolated physical location.
Table 19 Redundant Server NIC Configurations (columns: LOM NIC, Installed NIC 1, Installed NIC 2; an X marks a NIC connected to the SAN)
X X -
- X X
X - X
4.4.1 Design guidelines for host connectivity in a redundant SAN
Using the Dell PowerEdge R610 server as an example, you configure redundant connection paths to the SAN switches as shown in Figure 16 below. The R610 server shown in Figure 16 has one additional dual-port PCI-E NIC installed.
Figure 17 Redundant NIC Connections from Server to SAN using two installed PCI-E NICs 4.4.2 Multi-Path I/O There are generally two types of multi-path access methods for communicating from a host to an external device. For general networking communications, the preferred method of redundant connections is the teaming of multiple NICs into a single, virtual network connection entity. For storage, the preferred method of redundant connection is the use of Multi-Path IO (MPIO).
4.4.2.3 Configuring Microsoft Windows MPIO
Configure Microsoft Windows MPIO with the following initial configuration settings. Customized settings may be required depending on the supported application(s).
• Change the “Subnets included” field to include ONLY the subnet(s) dedicated to the SAN network infrastructure.
• Change the “Subnets excluded” field to include all other subnets (see the sketch after this list).
• The “Load balancing policy” should remain set to the default value of “Least queue depth”.
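The subnet split those settings express can be illustrated with the Python standard library: interfaces on the dedicated SAN subnet(s) should carry MPIO iSCSI sessions, and everything else should be excluded. This stand-alone sketch is not the EqualLogic Host Integration Tools API; the addresses and subnet are placeholders.

import ipaddress

SAN_SUBNETS = [ipaddress.ip_network("10.10.5.0/24")]   # the "Subnets included" list

def classify_interfaces(interface_ips):
    """Split host interface IPs into SAN-included and excluded groups."""
    included, excluded = [], []
    for ip in interface_ips:
        addr = ipaddress.ip_address(ip)
        (included if any(addr in net for net in SAN_SUBNETS) else excluded).append(ip)
    return {"included": included, "excluded": excluded}

host_ips = ["10.10.5.21", "10.10.5.22", "192.168.1.40"]   # example NIC addresses
print(classify_interfaces(host_ips))
# {'included': ['10.10.5.21', '10.10.5.22'], 'excluded': ['192.168.1.40']}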
• A minimal number of switches will be illustrated to allow for the design concept to be understood. Actual implementations will vary depending on your network infrastructure requirements.
• If sharing physical switches with other non-SAN traffic, we assume all switches are VLAN capable.
4.5.1 Redundant SAN Configuration
In a redundant iSCSI SAN, each component of the SAN infrastructure has a redundant connection or path.
Figure 19 Redundant SAN Connection Paths: PS4100 Figure 20 Redundant SAN Connection Paths: PS6100 Note: For a production environment, the configuration examples shown above will protect your access to data. These are the ONLY SAN configurations recommended by Dell. 4.5.2 Partially Redundant SAN Configurations Each of the SAN configurations shown in this section will allow host connectivity to data stored in the SAN.
PS6000 family controllers and PS4100/PS6100 family controllers. They are not recommended for production deployment since they do not provide end-to-end redundant connection paths.
4.5.2.1 Single Array Controller Configurations
Table 20 below shows configurations using a single array controller.
Table 20 Single Controller Array Configurations: Single NIC Single Switch Single Controller; Dual NIC Single Switch Single Controller; Dual NIC Dual Switch Single Controller
4.5.2.
Table 21 Dual Controller Array Configurations: Single NIC Single Switch Dual Controller; Single NIC Dual Switch Dual Controller; Dual NIC Single Switch Dual Controller
4.5.3 Minimum Cabling Scenarios: PS4100 and PS6100
The vertical port failover feature (described in Section 2.3.7) allows you to cable dual controller PS4100 and PS6100 family arrays for maximum I/O bandwidth and controller redundancy while using only one half of the available controller ports.
Note: The example configurations shown in this section only apply to the PS4100 and PS6100 family arrays and are recommended only when you do not have available SAN switch ports necessary to support fully cabled configurations.
Figure 21 Minimum Cabling Scenario: PS4100
Figure 22 Minimum Cabling Scenario: PS6100
4.6 Integrating 1GbE and 10GbE SANs
With the introduction of 10GbE, there will be situations in which 1Gb arrays and 10Gb arrays must coexist in the same SAN infrastructure. EqualLogic PS Series arrays support operation of 1Gb and 10Gb arrays within the same group.
4.6.1 Design considerations
To properly implement a mixed speed SAN, you must pay close attention to the following design and implementation considerations:
• Ethernet switch feature set, port density and required configuration settings
• Optimization of Rapid Spanning Tree Protocol behavior
• Optimal switch interconnect pattern
• Awareness of I/O workload patterns coming from 1Gb and 10Gb initiators vs. target volume locations in the SAN
• I/O performance implications when using mixed speed vs.
Figure 23 Mixed speed redundant SAN using split interconnect between 1Gb and 10Gb switches 4.6.1.1 Optimizing Rapid Spanning Tree Protocol Behavior The LAG between the 10Gb switches in Figure 23 creates a loop in the network. Rapid Spanning Tree Protocol (RSTP) will compensate for this by blocking paths as necessary.
Figure 24 Mixed speed redundant SAN using straight interconnect between 1Gb and 10Gb switches
4.6.2 Mixed SAN best practices
The following list summarizes the important SAN design considerations for integrating 10Gb EqualLogic arrays into existing 1Gb EqualLogic SANs.
• When integrating 10Gb switches into your existing 1Gb environment, how you interconnect the mixed speed switches (split vs.
associated with different interfaces on a given blade server as described in Table 22. Each blade server has a “LAN on Motherboard” capability that is mapped to the IO modules located in the Fabric A IO modules slots on the M1000e chassis and only supports 1Gb or 10Gb Ethernet networking depending on the blade server model. In addition, each blade server has two “mezzanine” sockets for adding additional networking options such as 1Gb or 10Gb Ethernet, Infiniband, or Fibre Channel cards.
• Creating an external SAN switch infrastructure to host all array connections and using the blade IO modules as “host access” switches. For each of these two general SAN design strategies, the user must make design decisions with respect to the type of interconnects to use between the blade IO modules and/or the external standalone switches. Depending on the design recommendation, stacking, link aggregation, or a combination of both types of interconnect may be used. In Section 4.7.
be interconnected. Since the IO modules are not interconnected via the chassis midplane, the only alternative is to use external ports (stacking or Ethernet) to make these inter-switch connections.
• Maximum stack size versus practical stack size: Depending on the IO module model, the maximum number of switches allowed in a single stack may be different. In addition, the number of switches supported by the switch may not be optimal for a SAN using EqualLogic arrays.
Placing the blade IO modules in the same fabric does remove one aspect of high availability in that each blade server will have both of the SAN ports located on the same fabric mezzanine card. This creates a potential single point of failure if the mezzanine card as a whole fails. One alternative configuration would be to place the two blade IO modules into enclosure slots associated with two different enclosure fabrics (B1 and C1 for example).
Table 26 Single M1000e Enclosure Stacking Dual Fabric
Dual Fabric Stacked Configuration
Advantages:
• Ensures that Ethernet SAN ports on each blade server will be distributed across two different mezzanine cards for a more highly available solution
Concerns:
• Does not adhere to recommended practices for M1000e blade IO configuration
• Upgrading switch FW will require scheduled downtime for SAN network
As discussed earlier, one of the concerns when creating a SAN that consists of a single stack is that
Table 27 Single Enclosure Link Aggregation Single Fabric
Single Fabric Non-Stacked Configuration
Advantages:
• Consistent M1000e fabric management that adheres to M1000e recommended practices for blade IO configuration
• Switch FW can be upgraded without requiring network to be brought offline
Concerns:
• All blade server Ethernet ports used for SAN reside on the same mezzanine card resulting in a potential single point of failure
• Spanning Tree must be considered if uplinking SAN to external switches
The 10GbE PowerConnect M-Series IO Modules do not provide a dedicated stacking interface and must be interconnected using available, external Ethernet ports in conjunction with a link aggregation protocol such as LACP or “front-port” stacking. Due to the limited number of external ports available on Dell’s PowerConnect M-Series blade IO modules, SAN growth can be limited.
Table 29 Single Stack Multiple M1000e Enclosure Direct Connect SAN
Single Stacked, Multiple M1000e Enclosure Configuration
Advantages:
• Simplified switch management
• High-bandwidth interconnect
Concerns:
• Not highly available, single stack solution cannot be maintained without scheduled downtime.
• Additional scalability will require SAN network redesign and possible downtime.
Other Notes:
• Limited scalability due to potentially excessive hop-counts and latency.
Table 30 Non-Stacked, Multiple M1000e Enclosure Direct Connect SAN Non-Stacked, Multiple M1000e Enclosure Configuration Advantages • • Concerns Required for non-stacking M-Series IO Modules to support direct attached arrays.
4.7.2 Designing a SAN using Blade Chassis IO Modules as Host Access to External Switches for Array Connection
When connecting multiple M1000e blade chassis to EqualLogic SAN arrays, or when traditional rack or tower servers must also attach to the SAN, we strongly recommend a SAN infrastructure that does not directly attach arrays to the blade IO modules, but instead uses a set of external switches to host the array connections.
Table 31 Connecting M1000e to Stacked External Switch Infrastructure
Advantages:
• Support configuring LAGs between M-series IO modules across both external switches in stack (cross-switch LAG)
• M-Series IO modules' firmware can be updated without bringing SAN network down.
Table 32 Connecting M1000e to Non-Stacked External Switch Infrastructure
Advantages:
• Switch Firmware updates can be made on all switches without downtime
• SAN Array-to-array traffic is isolated outside of the M1000e enclosures
• Fixed hops between hosts and array volumes regardless of number of M1000e enclosures
• Supports ability to scale arrays
• Supports ability to scale hosts
• Supports sharing SAN with non-M1000e servers.
Table 33 Dual M-Series Stacks to Stacked External Switch Infrastructure
M-Series IO Module Stack(s) to Stacked External Switch Infrastructure
Advantages:
• Each M-Series IO Module stack can be updated independently without bringing SAN down.
• SAN Array-to-array traffic is isolated outside of the M1000e enclosures
• Fixed hops between hosts and array volumes regardless of number of M1000e enclosures
• Supports ability to scale arrays
• Supports ability to scale hosts.
Table 34 M-Series IO Module Stack(s) to Non-Stacked External Switch Infrastructure
M-Series IO Module Stack(s) to Non-Stacked External Switch Infrastructure
Advantages:
• Each M-Series IO Module stack can be updated independently without bringing SAN down.
• SAN Array-to-array traffic is isolated outside of the M1000e enclosures
• Fixed hops between hosts and array volumes regardless of number of M1000e enclosures
• Supports ability to scale arrays
• Supports ability to scale hosts.
4.7.3 M-Series Pass-Through Modules Pass-Through modules are supported for use with EqualLogic SAN solutions. The Pass-Through module provides a simple, direct path from each blade server’s optional Ethernet mezzanine card to an externally accessible port. These ports can then be connected to one or more external switches that are hosting PS-Series arrays.
5 FS Series NAS Appliance
The Dell EqualLogic FS7500 adds scale-out unified file and block Network Attached Storage (NAS) capabilities to any EqualLogic PS Series iSCSI SAN. The FS7500 was introduced in July 2011. It is the first EqualLogic based FS Series appliance released by Dell. The key design features and benefits provided by the EqualLogic FS Series include:
• A scalable unified (block and file), virtualized IP based storage platform.
Figure 26 FS7500 Unified Storage Architecture
5.1.1 FS Series solution for file only storage
In a file only scenario the initiators on the FS Series appliance are the only iSCSI clients connecting to the PS Series arrays. The pools and volumes in the arrays provide storage for FS Series appliance file I/O only. This scenario is shown in Figure 27.
Figure 27 FS Series NAS (file only)
5.2 Dell FluidFS
Dell FluidFS is a high performance clustered file system. It provides fully interoperable multi-protocol file sharing for UNIX, Linux, and Windows® clients using standard CIFS and NFS file access protocols and authentication methods (Active Directory, LDAP, NIS).
Snapshots: FluidFS snapshots are read only and redirect-on-write. They are created and managed by the FS Series appliance to provide file system level snapshot capability. They function independently of and have no impact on the operation of PS Series array based snapshots. Note: FS Series FluidFS snapshots and PS Series volume based snapshots function independently and have no impact on each other. Please see the following whitepaper for more information on Dell FluidFS: Dell Fluid File System: http://www.
Table 37 Features and Supported Configuration Limits
Protocol Support: CIFS (SMB 1), NFSv3, NDMP, SNMP, NTP, iSCSI, Active Directory, LDAP, NIS (Network Information Service)
Supported Arrays: Any new or existing EqualLogic PS Series array running controller firmware version 5.
5.3.2 FS7500 System Components
The system components in the initial release of the EqualLogic FS Series NAS appliance (the FS7500) consist of two controller node units and one backup power supply (BPS) shelf containing two redundant power supplies. The system components and required power cabling paths are shown in Figure 28.
Figure 28 FS7500 System Components and Power Cabling
5.3.2.
triggers all I/O normally written to the mirror cache to be written to a journal file instead. Client load balancing in the FS7500 makes this process transparent from a client point of view.
6 FS Series File Level Operations In this section we provide an overview of key FS Series Appliance features along with some operational limits. Please refer to the NAS Service Management Tasks section of the EqualLogic PS Series Group Administration Manual (version 5.1 or later) for detailed descriptions and all administrative task procedures. 6.1 NAS Service The NAS service is the logical run-time container in which one or more NAS file systems are created.
Figure 29 Containment Model: PS Series Storage and FS Series NAS Reserve The addition of the FS Series NAS appliance to a PS Series group does not change the functional behavior of PS Series groups and pools. PS Series groups and pools are explained in more detail in sections 3.1 and 3.2 on page 22. 6.3 NAS File Systems To provision NAS storage, you need to create NAS file systems within the NAS service. Inside a NAS File System you can create multiple CIFS shares and/or NFS exports.
Figure 30 Containment Model: FS Series NAS Reserve, File Systems, Shares and Exports
6.3.1 NAS File System security styles
There are three security style options that can be applied to a NAS File System:
UNIX: Controls file access using UNIX permissions in all protocols. A client can change permissions only by using the chmod and chown commands on the NFS mount point. You can also specify the UNIX permissions for files and directories created in the file system by Windows clients.
Referring to File System 1 in Figure 30, you could assign any of the three file system security styles to it. Given that portions of it are simultaneously accessible as CIFS shares and NFS exports, a general rule of thumb for how to assign the security style would be as follows:
• If your users are predominantly Linux/UNIX based, use the UNIX style.
• Likewise, if your users are predominantly Windows/CIFS based, use the NTFS style.
• If you have a nearly even mix, then use the mixed style.
6.
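For the UNIX security style described above, a client adjusts permissions with chmod and chown against the NFS mount point. A minimal illustration from a Linux/UNIX NFS client, using Python's os module, follows; the mount point, mode, and UID/GID are hypothetical examples.

import os

path = "/mnt/nas_fs1/projects/report.txt"   # a file under an NFS export mount

os.chmod(path, 0o640)        # rw for owner, read for group, none for others
os.chown(path, 1001, 1001)   # set owner uid/gid (requires sufficient privilege)

print(oct(os.stat(path).st_mode & 0o777))   # verify the new permission bits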
• Snapshot reserve space utilization is a function of the data change rate in the file system.
• Old snapshots are deleted to make room for new snapshots if enough snapshot reserve is not available.
• NDMP based backup automatically creates a snapshot, from which the backup is created.
7 FS Series NAS Configuration In Section 5.1 we presented a high level view of the FS7500 NAS component architecture (See Figure 26 and Figure 27). In this section we provide detailed connection diagrams demonstrating how to setup fully connected iSCSI SAN and client LAN connection paths for the FS7500 appliance. 7.1 FS7500 Connection Paths The FS7500 appliance is comprised of two peer system controller nodes.
Figure 31 Connection Paths for FS7500 Client LAN Figure 32 below shows the iSCSI SAN, IPMI and node interconnect paths. Pay careful attention to how the controller ports alternate between redundant switch paths. Note: With the exception of the IPMI connection paths, corresponding ports on each controller node must connect to the same SAN switch. This connection pattern is shown in Figure 32.
Figure 32 Connection Paths for FS7500 iSCSI SAN, IPMI and Controller Interconnect
Appendix A Network Ports and Protocols PS Series groups use a number of TCP and UDP protocols for group management, I/O operations, and internal communication. If you have switches or routers set to block these protocols, you may need to unblock them to allow management or I/O operations to work correctly. The required and optional protocols are listed in the following sections. A.1 Required Ports and Protocols Table 38 lists the ports and protocols required for operating an EqualLogic iSCSI SAN.
EqualLogic Diagnostics:
TCP 20 (FTP): Software update and diagnostic procedures; to all individual member IP addresses
TCP 25 (SMTP): E-mail and diagnostic notifications; from all individual member IP addresses to the configured SMTP server
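A quick way to confirm that intermediate switches or routers are not blocking the SAN's required ports is a simple TCP probe from a host. The sketch below checks the standard iSCSI port, TCP 3260, against the group's well-known IP address; the address is a placeholder, and the authoritative required and optional port lists are the ones in Tables 38 and 39 above.

import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

group_ip = "10.10.5.10"   # example group well-known IP address
print("iSCSI TCP 3260:", "reachable" if port_open(group_ip, 3260) else "blocked or closed")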
Appendix B Recommended Switches
The list of recommended switches is now maintained in a separate document. Please see: Validated Components List for EqualLogic PS Series SANs, http://www.delltechcenter.com/page/EqualLogic+Validated+Components
Appendix C Supported iSCSI Initiators
The list of supported iSCSI initiators is now maintained in a separate document. Please see: Validated Components List for EqualLogic PS Series SANs, http://www.delltechcenter.com/page/EqualLogic+Validated+Components
Related Publications The following publications provide additional background and technical details supporting configuration of EqualLogic SANs. In future versions of this document we will continue to extract and include more information from the various white papers and technical reports that are referenced here. All documents listed in Table 40 below are available for internet download, unless noted otherwise.
Dell Reference Architecture Sizing and Best Practices for Microsoft Exchange Server 2007 in a VMware ESX Server Environment using EqualLogic PS Series Storage
Deploying Microsoft Hyper-V with PS Series Arrays
Deploying Thin Provisioning in a PS Series SAN
Monitoring Your PS Series SAN with SAN Headquarters
PS Series Groups: Backup and Recovery Overview
PS Series Storage Arrays: Choosing a Member RAID Policy
Red Hat Linux v5.
THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.