Sun StorEdge™ T3 and T3+ Array Configuration Guide

Sun Microsystems, Inc.
901 San Antonio Road
Palo Alto, CA 94303-4900 U.S.A.
650-960-1300

Part No. 816-0777-10
August 2001, Revision A

Send comments about this document to: docfeedback@sun.com
Copyright 2001 Sun Microsystems, Inc., 901 San Antonio Road, Palo Alto, CA 94303-4900 U.S.A. All rights reserved. This product or document is distributed under licenses restricting its use, copying, distribution, and decompilation. No part of this product or document may be reproduced in any form by any means without prior written authorization of Sun and its licensors, if any. Third-party software, including font technology, is copyrighted and licensed from Sun suppliers.
Contents

Preface

1. Array Configuration Overview
   Product Description
   Controller Card
   Interconnect Cards
   Array Configurations
   Configuration Guidelines and Restrictions
   Configuration Recommendations

2. Configuring Global Parameters
   Logical Volumes
   Guidelines for Configuring Logical Volumes
   Determining How Many Logical Volumes You Need
   Determining Which RAID Level You Need
   Determining Whether You Need a Hot Spare
   Creating and Labeling a Logical Volume
   Setting the LUN Reconstruction Rate
   Configuring RAID Levels
   RAID 1
   RAID 5

3. Configuring Partner Groups

4. Configuration Examples
   Single Host With One Controller Unit
   Single Host With Two Controller Units Configured as a Partner Group
   Host Multipathing Management Software
   Single Host With Four Controller Units Configured as Two Partner Groups
   Single Host With Eight Controller Units Configured as Four Partner Groups
   Hub Host Connection
   Single Host With Two Hubs and Four Controller Units Configured as Two Partner Groups
   Single Host With Two Hubs and Eight Controller Units Configured as Four Partner Groups
   Dual Hosts With Two Hubs and Four Controller Units
   Dual Hosts With Two Hubs and Eight Controller Units
   Dual Hosts With Two Hubs and Four Controller Units Configured as Two Partner Groups
   Dual Hosts With Two Hubs and Eight Controller Units Configured as Four Partner Groups
   Switch Host Connection
   Dual Hosts With Two Switches and Two Controller Units
   Dual Hosts With Two Switches and Eight Controller Units

5. Host Connections
   Sun Enterprise SBus+ and Graphics+ I/O Boards
   Sun StorEdge PCI FC-100 Host Bus Adapter
   Sun StorEdge SBus FC-100 Host Bus Adapter
   Sun StorEdge PCI Single Fibre Channel Network Adapter
   Sun StorEdge PCI Dual Fibre Channel Network Adapter
   Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter

6. Array Cabling
   Overview of Array Cabling
   Administration Path
   Connecting Partner Groups
   Workgroup Configurations
   Enterprise Configurations

Glossary
Figures

FIGURE 1-1  Sun StorEdge T3 Array Controller Card and Ports
FIGURE 1-2  Sun StorEdge T3+ Array Controller Card and Ports
FIGURE 1-3  Interconnect Card and Ports
FIGURE 1-4  Workgroup Configuration
FIGURE 1-5  Enterprise Configuration
FIGURE 3-1  Sun StorEdge T3 Array Partner Group
FIGURE 4-1  Single Host Connected to One Controller Unit
FIGURE 4-2  Single Host With Two Controller Units Configured as a Partner Group
FIGURE 4-3  Failover Configuration
FIGURE 4-4  Single Host With Four Controller Units Configured as Two Partner Groups
FIGURE 4-5  Single Host With Eight Controller Units Configured as Four Partner Groups
FIGURE 4-6  Single Host With Two Hubs and Four Controller Units Configured as Two Partner Groups
FIGURE 4-7  Single Host With Two Hubs and Eight Controller Units Configured as Four Partner Groups
FIGURE 4-8  Dual Hosts With Two Hubs and Four Controller Units
FIGURE 4-9  Dual Hosts With Two Hubs and Eight Controller Units
FIGURE 4-10 Dual Hosts With Two Hubs and Four Controller Units Configured as Two Partner Groups
FIGURE 4-11 Dual Hosts With Two Hubs and Eight Controller Units Configured as Four Partner Groups
FIGURE 4-12 Dual Hosts With Two Switches and Two Controller Units
FIGURE 4-13 Dual Hosts With Two Switches and Eight Controller Units
FIGURE 5-1  Sun Enterprise 6x00/5x00/4x00/3x00 SBus+ I/O Board
FIGURE 5-2  Sun StorEdge PCI FC-100 Host Bus Adapter
FIGURE 5-3  Sun StorEdge SBus FC-100 Host Bus Adapter
FIGURE 5-4  Sun StorEdge PCI Single Fibre Channel Network Adapter
FIGURE 5-5  Sun StorEdge PCI Dual Fibre Channel Network Adapter
FIGURE 5-6  Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter
FIGURE 6-1  Sun StorEdge T3 Array Controller Card and Interconnect Cards
FIGURE 6-2  Sun StorEdge T3+ Array Controller Card and Interconnect Cards
Preface The Sun StorEdge T3 and T3+ Array Configuration Guide describes the recommended configurations for Sun StorEdge T3 and T3+ arrays for high availability, maximum performance, and maximum storage capacity. This guide is intended for Sun™ field sales and technical support personnel. Before You Read This Book Read the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual for product overview information.
Using UNIX Commands

This document contains some information on basic UNIX® commands and procedures, such as booting the devices. For further information, see one or more of the following: ■ AnswerBook2™ online documentation for the Solaris™ software environment ■ Other software documentation that you received with your system

Typographic Conventions

AaBbCc123: The names of commands, files, and directories; on-screen computer output. Example: Edit your .login file.
Shell Prompts

C shell: machine_name%
C shell superuser: machine_name#
Bourne shell and Korn shell: $
Bourne shell and Korn shell superuser: #
Sun StorEdge T3 and T3+ array: :/:

Related Documentation

Latest array updates: Sun StorEdge T3 and T3+ Array Release Notes, part number 816-1983
Installation overview: Sun StorEdge T3 and T3+ Array Start Here, part number 816-0772
Safety procedures: Sun StorEdge T3 and T3+ Array Regulatory and Safety Compliance Manual, part number 816-0774
Site preparation
Sun StorEdge Component Manager installation: Sun StorEdge Component Manager Installation Guide - Solaris, part number 806-6645; Sun StorEdge Component Manager Installation Guide - Windows NT, part number 806-6646
Using Sun StorEdge Component Manager software: Sun StorEdge Component Manager User’s Guide, part number 806-6647
Latest Sun StorEdge Component Manager updates: Sun StorEdge Component Manager Release Notes, part number 806-6648

Accessing Sun Documentation Online

You can find the Sun StorEdge T3 and T3+ array documentation online.
CHAPTER 1 Array Configuration Overview This chapter describes the Sun StorEdge T3 and T3+ arrays, the connection ports, and Fibre Channel connections. It also describes basic rules and recommendations for configuring the array, and it lists supported hardware and software platforms. Note – For installation and cabling information, refer to the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual.
The array can be used either as a standalone storage unit or as a building block, interconnected with other arrays of the same type and configured in various ways to provide a storage solution optimized to the host application. The array can be placed on a table top or rackmounted in a server cabinet or expansion cabinet. The array is sometimes called a controller unit, which refers to the internal RAID controller on the controller card. Arrays without the controller card are called expansion units.
FIGURE 1-1 Sun StorEdge T3 Array Controller Card and Ports (serial port, RJ-11; 10BASE-T Ethernet port, RJ-45; FC-AL data connection port. Note: the FC-AL port requires an MIA for cable connection.)

The Sun StorEdge T3+ array controller card contains: ■ One Fibre Channel-Arbitrated Loop (FC-AL) port using an LC small-form factor (SFF) connector. The fiber-optic cable that provides data channel connectivity to the array has an LC-SFF connector that attaches directly to the port on the controller card.
FIGURE 1-2 Sun StorEdge T3+ Array Controller Card and Ports (serial port, RJ-45; 10/100BASE-T Ethernet port, RJ-45; FC-AL data connection port, LC-SFF)

Interconnect Cards

The interconnect cards are alike on both array models. There are two interconnect ports on each card: one input and one output for interconnecting multiple arrays. The interconnect card provides switch and failover capabilities, as well as an environmental monitor for the array.
FIGURE 1-3 Interconnect Card and Ports (each interconnect card has one input port and one output port)
Array Configurations

Each array uses Fibre Channel-Arbitrated Loop (FC-AL) connections to connect to the application host. An FC-AL connection is a 100-Mbyte/second serial channel that enables multiple devices, such as disk drives and controllers, to be connected. Two array configurations are supported: ■ Workgroup. This standalone array is a high-performance, high-RAS configuration with a single hardware RAID cached controller. ■ Enterprise. Two arrays interconnected as a partner group, which provides controller redundancy and failover from the master controller unit to the alternate master controller unit.
FIGURE 1-5 Enterprise Configuration

Note – Sun StorEdge T3 array workgroup and enterprise configurations require a media-interface adapter (MIA) connected to the Fibre Channel port to connect the fiber-optic cable. Sun StorEdge T3+ array configurations support direct FC-AL connections.
Configuration Guidelines and Restrictions Workgroup Configurations: ■ The media access control (MAC) address is required to assign an IP address to the controller unit. The MAC address uniquely identifies each node of a network. The MAC address is available on the pull-out tab on the front left side of the array. ■ A host-based mirroring solution is necessary to protect data in cache. ■ Sun StorEdge T3 array workgroup configurations are supported in Sun Cluster 2.2 environments.
Configuration Recommendations ■ Use enterprise configurations for controller redundancy. ■ Use host-based software such as VERITAS Volume Manager (VxVM), Sun Enterprise™ Server Alternate Pathing (AP) software, or Sun StorEdge Traffic Manager for multipathing support. ■ Connect redundant paths to separate host adapters, I/O cards, and system buses. ■ Configure active paths over separate system buses to maximize bandwidth.
Supported Software The following software is supported on Sun StorEdge T3 and T3+ arrays: ■ Solaris 2.6, Solaris 7, and Solaris 8 operating environments ■ VERITAS Volume Manager 3.04 and later with DMP ■ Sun Enterprise Server Alternate Pathing (AP) 2.3.1 ■ Sun StorEdge Component Manager 2.1 and later ■ StorTools™ 3.3 Diagnostics ■ Sun Cluster 2.2 and 3.0 software (see “Sun Cluster Support” on page 10) ■ Sun StorEdge Data Management Center 3.0 ■ Sun StorEdge Instant Image 2.
■ Switches are not supported. ■ Hubs must be used. ■ The Sun StorEdge SBus FC-100 (SOC+) HBA and the onboard SOC+ interface in Sun Fire™ systems are supported. ■ On Sun Enterprise 6x00/5x00/4x00/3x00 systems, a maximum of 64 arrays are supported per cluster. ■ On Sun Enterprise 10000 systems, a maximum of 256 arrays are supported per cluster. ■ To ensure full redundancy, host-based mirroring software such as Solstice DiskSuite (SDS) 4.2 or SDS 4.2.1 must be used. ■ Solaris 2.
CHAPTER 2 Configuring Global Parameters When an array is shipped, the global parameters are set to default values. This chapter describes how to reconfigure your array by changing these default values. Caution – If you are planning an enterprise configuration using new factory units, be sure to install and set up the units as a partner group before you power them on, change any parameters, or create or change any logical volumes.
Configuring Cache for Performance and Redundancy Cache mode can be set to the following values: ■ Auto. The cache mode is determined as either write-behind or write-through, based on the I/O profile. If the array has full redundancy available, then caching operates in write-behind mode. If any array component is non-redundant, the caching mode is set to write-through. Read caching is always performed. Auto caching mode provides the best performance while retaining full redundancy protection.
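Cache behavior is set from the array’s command-line interface. The short listing below is a sketch only, assuming the sys command options documented in the Sun StorEdge T3 and T3+ Array Administrator’s Guide; verify the exact syntax there before use.

    :/: sys list           # display current settings, including cache and mirror
    :/: sys cache auto     # let the array choose write-behind or write-through
    :/: sys mirror auto    # mirror cache data when full redundancy is available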
Configuring Data Block Size The data block size is the amount of data written to each drive when striping data across drives. (The block size is also known as the stripe unit size.) The block size can be changed only when there are no volumes defined. The block size can be configured as 16 Kbytes, 32 Kbytes, or 64 Kbytes. The default block size is 64 Kbytes. A cache segment is the amount of data being read into cache. A cache segment is 1/8 of a data block.
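As a worked example of the 1/8 relationship just described (derived arithmetically from the block sizes listed above):

    16-Kbyte data block  ->  2-Kbyte cache segment
    32-Kbyte data block  ->  4-Kbyte cache segment
    64-Kbyte data block  ->  8-Kbyte cache segment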
Note – The data block size must be configured before any logical volumes are created on the units. Remember, this block size is used for every logical volume created on the unit. Therefore it is important to have similar application data configured per unit. Data block size is universal throughout a partner group. Therefore, you cannot change it after you have created a volume. To change the data block size, you must first delete the volume(s), change the data block size, and then create new volume(s).
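The following CLI sketch illustrates that order of operations. The volume name v0 is a placeholder, and the exact command arguments should be checked against the Administrator’s Guide; removing a volume destroys the data stored on it.

    :/: vol unmount v0        # unmount each existing volume
    :/: vol remove v0         # delete the volume; block size can change only with no volumes defined
    :/: sys blocksize 32k     # set the new data block size for the unit
    :/: sys list              # confirm the blocksize setting before recreating volumes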
Note – Individual physical disk drives are not visible from the application host. Refer to the Sun StorEdge T3 and T3+ Array Administrator’s Guide for more information on creating logical volumes. Guidelines for Configuring Logical Volumes Use the following guidelines when configuring logical volumes: ■ The array’s native volume management can support a maximum of two volumes per array unit.
■ Data-intensive NFS, version 3 ■ DSS ■ DW ■ HPC Note – If you are creating new volumes or changing the volume configuration, you must first manually rewrite the label of the previous volume using the autoconfigure option of the format(1M) UNIX host command. For more information on this procedure, refer to the Sun StorEdge T3 and T3+ Array Administrator’s Guide. Caution – Removing and reconfiguring the volume will destroy all data previously stored there.
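The relabeling step mentioned in the note above is performed from the Solaris host with the interactive format(1M) utility. The outline below is a sketch of that session rather than literal output; menu numbering and prompts vary by system, so follow the procedure in the Administrator’s Guide.

    # format                      # select the array LUN from the disk list
    format> type                  # open the type menu
        0. Auto configure         # accept the automatically generated geometry
    format> label                 # write the new label to the LUN
    format> quit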
Note – Only one hot spare is allowed per array and it is only usable for the array in which it is configured. The hot spare must be configured as drive 9. Drive 9 will be the hot spare in the unit. So, for example, should a drive failure occur on drive 7, drive 9 is synchronized automatically with the entire LUN to reflect the data on drive 7. Once the failed drive (7) is replaced, the controller unit will automatically copy the data from drive 9 to the new drive, and drive 9 will become a hot spare again.
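From the array CLI, a hot spare is configured by adding a standby drive when the volume is created; only drive 9 can be the standby. The volume name (v0) and unit number (u1) below are placeholder assumptions; confirm the vol command syntax in the Administrator’s Guide.

    :/: vol add v0 data u1d1-8 raid 5 standby u1d9   # RAID 5 across drives 1-8, drive 9 as hot spare
    :/: vol init v0 data                             # initialize the new volume (can take considerable time)
    :/: vol mount v0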
If the volume has a hot spare configured and that drive is available, the data on the disabled drive is reconstructed on the hot-spare drive. When this operation is complete, the volume is operating with full redundancy protection, so another drive in the volume may fail without loss of data. After a drive has been replaced, the original data is automatically reconstructed on the new drive. If no hot spare was used, the data is regenerated using the RAID redundancy data in the volume.
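The speed of this reconstruction can be tuned from the CLI. A minimal sketch, assuming the recon_rate parameter accepts high, med, and low as described in the Administrator’s Guide:

    :/: sys recon_rate med    # balance rebuild speed against host I/O performance
    :/: sys list              # verify the setting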
■ Second, you can use third-party software on the host system to create as many partitions as desired from a given volume. In the Solaris environment, you can use VERITAS Volume Manager or Solaris Logical Volume Management (SLVM), formerly known as Solstice DiskSuite (SDS), for this purpose. Note – For information on using the format utility, refer to the format (1M) man page. For more information on third-party software or VERITAS Volume Manager, refer to the documentation for that product.
Configuring RAID Levels The Sun StorEdge T3 and T3+ arrays are preconfigured at the factory with a single LUN, RAID level 5 redundancy and no hot spare. Once a volume has been configured, you cannot reconfigure it to change its size, RAID level, or hot spare configuration. You must first delete the volume and create a new one with the configuration values you want.
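A sketch of that delete-and-recreate sequence from the array CLI follows; the volume name, drive range, and RAID level shown are assumptions for illustration, and removing the volume destroys its data.

    :/: vol list                          # note the existing volume name
    :/: vol unmount v0
    :/: vol remove v0                     # destroys all data on the volume
    :/: vol add v0 data u1d1-8 raid 1     # recreate with the desired RAID level (here RAID 1)
    :/: vol init v0 data                  # initialization can take a significant amount of time
    :/: vol mount v0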
CHAPTER 3 Configuring Partner Groups Sun StorEdge T3 and T3+ arrays can be interconnected in partner groups to form a redundant and larger storage system. Note – The terms partner group and enterprise configuration refer to the same type of configuration and are used interchangeably in this document. Note – Partner groups are not supported in Sun Cluster 2.2 configurations.
FIGURE 3-1 Sun StorEdge T3 Array Partner Group

Note – Sun StorEdge T3 arrays require a media-interface adapter (MIA) connected to the Fibre Channel port on the controller card to connect the fiber-optic cable. Sun StorEdge T3+ array configurations support direct FC-AL connections.
Any controller unit will boot from the master controller unit’s drives. All configuration data, including syslog information, is located on the master controller unit’s drives. How Partner Groups Work If the master controller unit fails and the “heartbeat” between it and the alternate master stops, this failure causes a controller failover, where the alternate master assumes the role of the master controller unit.
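The roles and health of the units in a partner group can be checked from the CLI before and after a failover. A brief sketch; the exact output format is described in the Administrator’s Guide.

    :/: sys stat    # shows the state of each unit and which one is currently the master
    :/: fru stat    # shows the status of controllers, interconnect cards, power supplies, and drives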
Creating Partner Groups Partner groups can be created in two ways: ■ From new units ■ From existing standalone units Instructions for installing new array units and connecting them to create partner groups can be found in the Sun StorEdge T3 and T3+ Array Installation, Operation, and Service Manual. To configure existing standalone arrays with data into a partner group, you must go through a qualified service provider. Contact your SunService representative for more information.
CHAPTER 4 Configuration Examples This chapter includes sample reference configurations for Sun StorEdge T3 and T3+ arrays. Although there are many supported configurations, these reference configurations provide the best solution for many installations.
Single Host With One Controller Unit FIGURE 4-1 shows one application host connected through an FC-AL cable to one array controller unit. The Ethernet cable connects the controller to a management host via a LAN on a public or separate network, and requires an IP address. Note – This configuration is not recommended for RAS functionality because the controller is a single point of failure. In this type of configuration, use a host-based mirroring solution to protect data in cache.
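One way to provide the host-based mirroring called for in the note is Solstice DiskSuite, listed under supported software in Chapter 1. The fragment below is only a sketch: the metadevice names and device paths (c1t1d0, c2t1d0) are assumptions standing in for two array LUNs reached through separate controllers.

    # metadb -a -f -c 3 c0t0d0s7     # create state database replicas (one-time prerequisite)
    # metainit d11 1 1 c1t1d0s0      # first submirror on one array LUN
    # metainit d12 1 1 c2t1d0s0      # second submirror on a LUN behind a different controller
    # metainit d10 -m d11            # create the mirror using the first submirror
    # metattach d10 d12              # attach the second submirror; DiskSuite synchronizes it
    # newfs /dev/md/rdsk/d10         # build a file system on the mirrored metadevice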
Single Host With Two Controller Units Configured as a Partner Group FIGURE 4-2 shows one application host connected through FC-AL cables to one array partner group, which consists of two Sun StorEdge T3+ arrays. The Ethernet connection from the master controller unit is on a public or separate network and requires an IP address for the partner group. In the event of a failover, the alternate master controller unit will use the master controller unit’s IP address and MAC address.
Host Multipathing Management Software While Sun StorEdge T3 and T3+ arrays are redundant devices that automatically reconfigure whenever a failure occurs on any internal component, a host-based solution is needed for a redundant data path.
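For example, with Sun StorEdge Traffic Manager (MPxIO) on Solaris 8, the host side of the redundant path is enabled through the scsi_vhci driver and the array is set to the matching multipathing mode. This is a sketch under those assumptions; VERITAS DMP and AP are enabled through their own, separate procedures.

    # vi /kernel/drv/scsi_vhci.conf   # set:  mpxio-disable="no";
    # reboot -- -r                    # reconfiguration reboot so multipathed device nodes are built

    On the array (master controller unit):
    :/: sys mp_support mpxio          # use "rw" instead when running VERITAS DMP or AP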
Single Host With Four Controller Units Configured as Two Partner Groups FIGURE 4-4 shows one application host connected through FC-AL cables to four arrays configured as two separate partner groups. This configuration can be used for capacity and I/O throughput requirements. Host-based Alternate Pathing software is required for this configuration. Note – This configuration is a recommended enterprise configuration for RAS functionality because the controller is not a single point of failure.
Single Host With Eight Controller Units Configured as Four Partner Groups FIGURE 4-5 shows one application host connected through FC-AL cables to eight Sun StorEdge T3+ arrays, forming four partner groups. This configuration is the maximum allowed in a 72-inch cabinet. This configuration can be used for footprint and I/O throughput. Note – This configuration is a recommended enterprise configuration for RAS functionality because the controller is not a single point of failure.
FIGURE 4-5 Single Host With Eight Controller Units Configured as Four Partner Groups
Hub Host Connection The following sample configurations are included in this section: ■ “Single Host With Two Hubs and Four Controller Units Configured as Two Partner Groups” on page 34 ■ “Single Host With Two Hubs and Eight Controller Units Configured as Four Partner Groups” on page 36 ■ “Dual Hosts With Two Hubs and Four Controller Units” on page 38 ■ “Dual Hosts With Two Hubs and Eight Controller Units” on page 40 ■ “Dual Hosts With Two Hubs and Four Controller Units Configured as Two Partner Groups” on page 42 ■ “Dual Hosts With Two Hubs and Eight Controller Units Configured as Four Partner Groups” on page 44
The following three parameters must be set on the master controller unit, as follows:
■ mp_support = rw or mpxio
■ cache mode = auto
■ cache mirroring = auto

Note – For information on setting these parameters, refer to the Sun StorEdge T3 and T3+ Array Administrator’s Guide.

Host-based multipathing software is required for this configuration.
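The three parameters above correspond to sys commands entered on the master controller unit. A minimal sketch; see the Administrator’s Guide for the exact syntax and for when the settings take effect.

    :/: sys mp_support rw    # or: sys mp_support mpxio, depending on the multipathing software
    :/: sys cache auto
    :/: sys mirror auto
    :/: sys list             # confirm the mp_support, cache, and mirror values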
Single Host With Two Hubs and Eight Controller Units Configured as Four Partner Groups FIGURE 4-7 shows one application host connected through FC-AL cables to two hubs and to eight Sun StorEdge T3+ arrays, forming four partner groups. This configuration is the maximum allowed in a 72-inch cabinet. This configuration can be used for footprint and I/O throughput. Note – This configuration is a recommended enterprise configuration for RAS functionality because the controller is not a single point of failure.
FIGURE 4-7 Single Host With Two Hubs and Eight Controller Units Configured as Four Partner Groups
Dual Hosts With Two Hubs and Four Controller Units FIGURE 4-8 shows two application hosts connected through FC-AL cables to two hubs and four Sun StorEdge T3+ arrays. This configuration, also known as a multi-initiator configuration, can be used for footprint and I/O throughput.
FIGURE 4-8 Dual Hosts With Two Hubs and Four Controller Units
Dual Hosts With Two Hubs and Eight Controller Units FIGURE 4-9 shows two application hosts connected through FC-AL cables to two hubs and eight Sun StorEdge T3+ arrays. This configuration, also known as a multi-initiator configuration, can be used for footprint and I/O throughput.
FIGURE 4-9 Dual Hosts With Two Hubs and Eight Controller Units
Dual Hosts With Two Hubs and Four Controller Units Configured as Two Partner Groups FIGURE 4-10 shows two application hosts connected through FC-AL cables to two hubs and four Sun StorEdge T3+ arrays forming two partner groups. This multi-initiator configuration can be used for footprint and I/O throughput. Note – This configuration is a recommended enterprise configuration for RAS functionality because the controller is not a single point of failure.
FIGURE 4-10 Dual Hosts With Two Hubs and Four Controller Units Configured as Two Partner Groups
Dual Hosts With Two Hubs and Eight Controller Units Configured as Four Partner Groups FIGURE 4-11 shows two application hosts connected through FC-AL cables to two hubs and eight Sun StorEdge T3+ arrays forming four partner groups. This multi-initiator configuration can be used for footprint and I/O throughput. This configuration is a recommended enterprise configuration for RAS functionality because the controller is not a single point of failure.
FIGURE 4-11 Dual Hosts With Two Hubs and Eight Controller Units Configured as Four Partner Groups
Switch Host Connection This section contains the following example configurations: ■ “Dual Hosts With Two Switches and Two Controller Units” on page 46 ■ “Dual Hosts With Two Switches and Eight Controller Units” on page 48 Dual Hosts With Two Switches and Two Controller Units FIGURE 4-12 shows two application hosts connected through FC-AL cables to two switches and two Sun StorEdge T3+ arrays. This multi-initiator configuration can be used for footprint and I/O throughput.
FIGURE 4-12 Dual Hosts With Two Switches and Two Controller Units
Dual Hosts With Two Switches and Eight Controller Units FIGURE 4-13 shows two application hosts connected through FC-AL cables to two switches and eight Sun StorEdge T3+ arrays. This multi-initiator configuration can be used for footprint and I/O throughput. Note – This configuration is not recommended for RAS functionality because the controller is a single point of failure.
FIGURE 4-13 Dual Hosts With Two Switches and Eight Controller Units
CHAPTER 5 Host Connections This chapter describes the host bus adapters (HBAs) that are supported by Sun StorEdge T3 and T3+ arrays: ■ “Sun Enterprise SBus+ and Graphics+ I/O Boards” on page 52 ■ “Sun StorEdge PCI FC-100 Host Bus Adapter” on page 53 ■ “Sun StorEdge SBus FC-100 Host Bus Adapter” on page 54 ■ “Sun StorEdge PCI Single Fibre Channel Network Adapter” on page 55 ■ “Sun StorEdge PCI Dual Fibre Channel Network Adapter” on page 56 ■ “Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter” on page 57
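Whichever supported HBA is installed, it is worth confirming from the Solaris host that the adapter and the array LUNs are visible before creating volumes or file systems. A brief sketch; command availability depends on the Solaris release and installed packages.

    # luxadm probe    # list Fibre Channel devices reachable through the installed HBAs
    # format          # array LUNs appear in the disk list like ordinary disks (quit without changes)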
Sun Enterprise SBus+ and Graphics+ I/O Boards The SBus+ and Graphics+ I/O boards each provide mounting for two Gigabit Interface Converters (GBICs). For more detailed information about these I/O boards, refer to the Sun Enterprise 6x00/5x00/4x00/3x00 Systems SBus+ and Graphics+ I/O Boards Installation Guide, part number 805-2704. FIGURE 5-1 shows an Enterprise 6x00/5x00/4x00/3x00 SBus+ I/O board.
Sun StorEdge PCI FC-100 Host Bus Adapter The Sun StorEdge PCI FC-100 host bus adapter is a 33-MHz, 100 Mbytes/second, single-loop Fibre Channel PCI host bus adapter with an onboard GBIC. This host bus adapter is PCI Version 2.1-compliant. For more detailed information about this product, refer to the Sun StorEdge PCI FC-100 Host Adapter Installation Manual, part number 805-3682. FIGURE 5-2 shows a Sun StorEdge PCI FC-100 host bus adapter.
Sun StorEdge SBus FC-100 Host Bus Adapter The Sun StorEdge SBus FC-100 host bus adapter is a single-width Fibre Channel SBus card with a Sun Serial Optical Channel (SOC+) ASIC (application-specific integrated circuit). You can connect up to two loops to each card, using hot-pluggable GBICs. For more detailed information about this product, refer to the Sun StorEdge SBus FC-100 Host Adapter Installation and Service Manual, part number 802-7572. FIGURE 5-3 shows a Sun StorEdge SBus FC-100 host bus adapter.
Sun StorEdge PCI Single Fibre Channel Network Adapter The Sun StorEdge PCI Single Fibre Channel network adapter is a Fibre Channel PCI card with one onboard optical transceiver. This network adapter is PCI Version 2.1-compliant. For more detailed information about this product, refer to the Sun StorEdge PCI Single Fibre Channel Network Adapter Installation Guide, part number 806-7532-xx. FIGURE 5-4 shows a Sun StorEdge PCI Single Fibre Channel network adapter.
Sun StorEdge PCI Dual Fibre Channel Network Adapter The Sun StorEdge PCI Dual Fibre Channel network adapter is a Fibre Channel PCI card with two onboard optical transceivers. This network adapter is PCI Version 2.1-compliant. For more detailed information about this product, refer to the Sun StorEdge PCI Dual Fibre Channel Network Adapter Installation Guide, part number 806-4199-xx. FIGURE 5-5 shows a Sun StorEdge PCI Dual Fibre Channel network adapter.
Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter The Sun StorEdge CompactPCI Dual Fibre Channel network adapter has two 1-Gbit Fibre Channel ports on a cPCI card. For more detailed information about this product, refer to the Sun StorEdge CompactPCI Dual Fibre Channel Network Adapter Installation Guide, part number 816-0241-xx. FIGURE 5-6 shows a Sun StorEdge CompactPCI Dual Fibre Channel network adapter.
CHAPTER 6 Array Cabling This chapter describes the array configurations supported by the Sun StorEdge T3 and T3+ arrays, and it includes the following sections: ■ “Overview of Array Cabling” on page 59 ■ “Workgroup Configurations” on page 62 ■ “Enterprise Configurations” on page 63 Overview of Array Cabling Sun StorEdge T3 and T3+ arrays have the following connections: ■ One FC-AL interface to the application host ■ One Ethernet interface to the management host (via a LAN) for administration purp
■ Switch connection where the FC-AL from the array is connected to a switch on the same network as the data host. Administration Path For the administration path, each controller unit has an Ethernet connector. For each installed controller, an Ethernet connection and IP address are required. The administration server uses this link to set up and manage the arrays using Sun StorEdge Component Manager software. Note – In a partner group, only one of the two Ethernet connections is active at any time.
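A common way to assign that IP address is the RARP method referred to elsewhere in this guide: the management host answers the array’s RARP request. The host-side sketch below assumes the Solaris in.rarpd method and uses a placeholder host name and address (t3-array, 192.168.1.10); the array’s MAC address is printed on its pull-out tab.

    # vi /etc/ethers            # add:  <MAC address from the pull-out tab>   t3-array
    # vi /etc/hosts             # add:  192.168.1.10   t3-array
    # /usr/sbin/in.rarpd -a     # answer RARP requests on all interfaces
    (power on or reset the array so that it broadcasts a RARP request)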
FIGURE 6-1 Sun StorEdge T3 Array Controller Card and Interconnect Cards (controller card with serial port, RJ-11; 10BASE-T Ethernet port, RJ-45; and FC-AL data connection port, which requires an MIA for cable connection; two interconnect cards)
Workgroup Configurations The following configuration rules apply to array workgroup configurations (FIGURE 6-3): ■ The interconnect ports, which are used only in partner group configurations, cannot be used for host connections. ■ The FC-AL connection provides a data path to the application host. ■ The Ethernet connection provides a link to the management host. ■ The serial port is used for diagnostics and service by qualified service personnel only.
Enterprise Configurations The following configuration rules apply to enterprise (partner group) configurations (FIGURE 6-4): ■ The interconnect ports, which are used only in enterprise configurations, cannot be used for host connections. ■ The FC-AL connection provides a data path to the application host. ■ The Ethernet connection provides a link to the management host. ■ The serial port is used for diagnostics and service by qualified service personnel only.
Glossary

A

administrative domain
  Partner groups (interconnected controller units) that share common administration through a master controller.

alternate master controller unit
  Also called “alternate master unit,” the secondary array unit in a partner group that provides failover capability from the master controller unit.

Alternate Pathing (AP)
  A mechanism that reroutes data to the other array controller in a partner group upon failure in the host data path.
C

command-line interface (CLI)
  The interface between the Sun StorEdge T3 and T3+ array’s pSOS operating system and the user, in which the user types commands to administer the array.

controller unit
  A Sun StorEdge T3 and T3+ array that includes a controller card. It can be used as a standalone unit or configured with other Sun StorEdge T3 and T3+ arrays.
F

Fibre Channel-Arbitrated Loop (FC-AL)
  A 100 Mbyte/s serial channel that enables connection of multiple devices (disk drives and controllers).

field-replaceable unit (FRU)
  A component that is easily removed and replaced by a field service engineer or a system administrator.

FLASH memory device (FMD)
  A device on the controller card that stores EPROM firmware.
I

input/output operations per second (IOPS)
  A performance measurement of the transaction rate.

interconnect cable
  An FC-AL cable with a unique switched-loop architecture that is used to interconnect multiple Sun StorEdge T3 and T3+ arrays.

interconnect card
  An array component that contains the interface circuitry and two connectors for interconnecting multiple Sun StorEdge T3 and T3+ arrays.
multi-initiator configuration
  A supported array configuration that connects two hosts to one or more array administrative domains through hub or switch connections.

P

parity
  Additional information stored with data on a disk that enables the controller to rebuild data after a drive failure.

partner group
  A pair of interconnected controller units. Expansion units interconnected to the pair of controller units can also be part of the partner group.
reverse address resolution protocol (RARP)
  A utility in the Solaris operating environment that enables automatic assignment of the array IP address from the host.

S

SC
  An industry standard name used to describe a connector standard.

Simple Network Management Protocol (SNMP)
  A network management protocol designed to give a user the capability to remotely manage a computer network.

small form factor (SFF)

synchronous dynamic random access memory (SDRAM)

system area
W

workgroup configuration
  A standalone array connected to a host system.

world wide name (WWN)
  A number used to identify array volumes in both the array system and Solaris environment.

write caching
  Data used to build up stripes of data, eliminating the read-modify-write overhead. Write caching improves performance for applications that are writing to disk.