Sun StorEdge™ A1000 and A3x00/A3500FC Best Practices Guide

Sun Microsystems, Inc.
4150 Network Circle
Santa Clara, CA 95054 U.S.A.
650-960-1300

Part No. 806-6419-14
November 2002, Revision A

Send comments about this document to: docfeedback@sun.com
Copyright 2002 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved. Sun Microsystems, Inc. has intellectual property rights relating to technology embodied in the product that is described in this document. In particular, and without limitation, these intellectual property rights may include one or more of the U.S. patents listed at http://www.sun.com/patents and one or more additional patents or pending patent applications in the U.S.
Contents

Preface xiii

1. Troubleshooting Overview 1-1

2. Hardware Installation and Configuration 2-1

3. RAID Manager Installation and Configuration 3-1

4. System Software Installation and Configuration 4-1

5. Maintenance and Service 5-1

6. Troubleshooting Common Problems 6-1

A. Reference A-1
Figures

FIGURE 2-1 SCSI Bus Length Calculation 2-4
FIGURE 2-2 Fibre Channel Connection With Long Wave GBIC Support 2-4
Tables

TABLE 1-1 A3x00/A3500FC Commandments - Thou Shalt 1-2
TABLE 1-2 A3x00/A3500FC Commandments - Thou Shalt Not 1-2
TABLE 1-3 Web Sites 1-4
TABLE 1-4 Terminal Emulation Functionality 1-6
TABLE 1-5 FINs Affecting the Sun StorEdge A1000 and RSM 2000/A3x00/A3500FC Product Family 1-7
TABLE 1-6 FCOs Affecting the Sun StorEdge RSM 2000/A3x00/A3500FC Product Family 1-11
TABLE 2-1 Server Configuration and Maximum Controller Modules Supported 2-12
TABLE 5-1 Controller Module SCSI ID Settings 5-7
Preface The Sun StorEdge A1000 and A3x00/A3500FC Best Practices Guide is intended for use by experienced Sun™ engineering personnel (FE, SE, SSE, and CPRE) who have received basic training on the Sun StorEdge™ A1000, A3x00/A3500FC. It is not intended to replace the existing documentation set, but rather to serve as a single point of reference that provides some answers to questions relating to common installation and service tasks.
How This Book Is Organized

This manual is organized as follows:

Chapter 1 introduces some of the tools that are available to help troubleshoot the Sun StorEdge A3x00/A3500FC disk array.

Chapter 2 provides some additional information, guidelines, and tips relating to the installation and configuration of hardware.

Chapter 3 provides some additional information, guidelines, and tips relating to the installation and configuration of RAID Manager.

Chapter 4 provides some additional information, guidelines, and tips relating to the installation and configuration of system software.

Chapter 5 provides maintenance and service information for verifying FRU functionality, guidelines for replacing FRUs, and tips on upgrading to the latest software and firmware levels.

Chapter 6 discusses some common problems encountered in the field and provides additional information and tips for troubleshooting these problems.

Appendix A contains reference information, including scripts and man pages, a template for gathering debug information, RAID Manager bootability support, and electrical specifications.
Typographic Conventions

Typeface or Symbol    Meaning                                           Examples

AaBbCc123             The names of commands, files, and directories;    Edit your .login file.
                      on-screen computer output                         Use ls -a to list all files.
                                                                        % You have mail.

AaBbCc123             What you type, when contrasted with on-screen     % su
                      computer output                                   Password:

AaBbCc123             Book titles, new words or terms, words to be      Read Chapter 6 in the User’s Guide.
                      emphasized                                        These are called class options.
                                                                        You must be superuser to do this.
Related Documentation

Application                 Title                                                      Part Number

Installation and Service    Sun StorEdge A3500/A3500FC Controller Module Guide         805-4980
Installation and Service    Sun StorEdge A3500/A3500FC Hardware Configuration Guide    805-4981
Installation                Sun StorEdge A3500/A3500FC Task Map                        805-4982
Installation and Service    Sun StorEdge A3x00 Controller FRU Replacement Guide        805-7854
Installation                Sun StorEdge A3500FC Controller Upgrade Guide              806-0479
Installation and Service    Sun StorEdge Expansion
Accessing Sun Documentation

You can view, print, or purchase a broad selection of Sun documentation, including localized versions, at:

http://www.sun.com/documentation

Sun Welcomes Your Comments

Sun is interested in improving its documentation and welcomes your comments and suggestions. You can email your comments to Sun at:

docfeedback@sun.com

Please include the part number (806-6419-14) of your document in the subject line of your email.
CHAPTER 1 Troubleshooting Overview This chapter introduces some of the tools that are available to help troubleshoot the Sun StorEdge A3x00/A3500FC disk array, tips for filing a bug, and a listing of the latest field information notices (FINs) and field change orders (FCOs). This chapter contains the following topics: ■ Section 1.1, “A3x00/A3500FC Commandments” on page 1-2 ■ Section 1.2, “Available Tools and Information” on page 1-3 ■ Section 1.3, “Tips for Filing a Bug” on page 1-6 ■ Section 1.
1.1 A3x00/A3500FC Commandments

Tables 1-1 and 1-2 contain PDE recommendations and tips that should be read and followed prior to performing any installation or service tasks on the Sun StorEdge A3x00/A3500FC disk array.

TABLE 1-1 A3x00/A3500FC Commandments - Thou Shalt

Number   Commandment

1        Read the RAID Manager 6 Release Notes and Early Notifier 20029.
2        Upgrade RAID Manager 6 software and firmware only if the controller module, LUNs, and disk drives are all in an optimal state.
TABLE 1-2 A3x00/A3500FC Commandments - Thou Shalt Not (Continued)

Number   Commandment

5        Do not perform boot -r while a controller is held in reset. See Section 6.1, “Controller Held in Reset, Causes, and How to Recover” on page 6-2.
6        Do not enable 16/32 LUN support unless it is necessary (refer to FIN I0589).
7        Do not run A3x00s in a production environment without a LUN 0.
8        Do not move disk drives between hardware arrays (A1000, RSM2000, A3x00, and A3500FC) or in the same array.
1.2.2 Web Sites

The internal and external web sites listed in TABLE 1-3 provide quick access to a wide variety of relevant information.

TABLE 1-3 Web Sites

Web Site Name                       URL

Sonoma Engineering                  http://webhome.sfbay/A3x00
Network Storage                     http://webhome.sfbay/networkstorage
NSTE (QAST) Group                   http://webhome.sfbay/qast
OneStop Sun Storage Products        http://onestop.Eng/storage
Enterprise Services Storage ACES    http://trainme.east
Escalation Web Interface            http://sdn.
Note – Before you attempt to install the RAID Manager software, be sure to read the Sun StorEdge RAID Manager 6.22 and 6.22.1 Upgrade Guide and Early Notifier 20029 for the latest installation and operation procedures. 1.2.5 RAID Manager 6.0, 6.1 and 6.22 Are not Supported RAID Manager 6.0 and 6.1 have been superseded by newer versions of RAID Manager. RAID Manager 6.1.1 is only supported in cases involving data corruption or loss. Upgrade to RAID Manager 6.22.1 as soon as possible. 1.2.
If you use a PC to connect to the serial port of the disk array, you need terminal emulation software. Also, you need to ensure that the Break functionality is available. Although there are many different software applications that provide terminal emulation, you will have the best results if you use the applications listed in TABLE 1-4. TABLE 1-4 1.2.
Tables 1-5 and 1-6 list the current FINs and FCOs affecting the Sun StorEdge RSM 2000/A3x00/A3500FC product family as of January 2001. TABLE 1-5 FINs Affecting the Sun StorEdge A1000 and RSM 2000/A3x00/A3500FC Product Family FIN Number Release Date I0310 Product Description 07/09/97 SSA RSM 21x RSM Array 2000 Updated - Failure to follow documented installation procedures to remove shipping brackets and reseat drives may cause multiple disk errors.
TABLE 1-5 FINs Affecting the Sun StorEdge A1000 and RSM 2000/A3x00/A3500FC Product Family FIN Number Release Date I0511 Product Description 07/28/99 VM DMP interferes with A3x00 RDAC Enabling DMP with RDAC on A3x00 and A1000 may cause private regions to be lost. I0520 04/24/01 Quorum device in Sun Cluster Servicing storage that contains a device that is used as a quorum device in a Sun Cluster environment. I0531 01/07/00 A3500FC parallel LUN modifications UPDATED FIN.
TABLE 1-5 FINs Affecting the Sun StorEdge A1000 and RSM 2000/A3x00/A3500FC Product Family FIN Number Release Date Product Description I0589 06/21/00 Any RM 6 version glm.conf file must be modified to support more than 8 LUNs on any PCI-SCSI connected A1000 or A3x00 using any version of RM 6. I0590 07/20/00 Sun StorEdge A3500FC upgrade kit The SCSI NVSRAM will overwrite the FC controller NVSRAM if following the documented A3500 SCSI to FC upgrade procedure.
TABLE 1-5 FINs Affecting the Sun StorEdge A1000 and RSM 2000/A3x00/A3500FC Product Family FIN Number Release Date Product Description I0685 07/18/02 RAID Manager 6 Software Certain precautions need to be observed when upgrading the Solaris operating environment on systems with RAID Manager 6 software installed. I0688 06/27/01 RAID Manager 6.22 on Solaris 2.5.1 NSTE (QAST) qualified RAID Manager 6.22 to run on Solaris 2.5.1. I0698 07/12/01 RAID Manager 6.
TABLE 1-5 FINs Affecting the Sun StorEdge A1000 and RSM 2000/A3x00/A3500FC Product Family FIN Number Release Date Product Description I0828 05/21/02 RM 6.22.1 LUNs may become inaccessible after upgrading from RM 6.22 to 6.22.1 or after adding unformatted disk drives to RM 6.22.1. I0845 06/25/02 RAID Manager 6 RAID Manager 6 may hang for 3-8 minutes when an IBM drive is in the failed state in a Sun StorEdge A1000/A3x00/A3500FC array.
CHAPTER 2 Hardware Installation and Configuration This chapter provides some additional information, guidelines, and tips relating to the installation and configuration of hardware. This chapter contains the following sections: ■ Section 2.1, “New Installation” on page 2-2 ■ Section 2.2, “Adding or Moving Arrays to a Host With Existing A3x00 Arrays” on page 2-6 ■ Section 2.3, “Adding Disks or Disk Trays” on page 2-6 ■ Section 2.
2.1 New Installation This section contains the following topics: 2.1.1 ■ Section 2.1.1, “Battery Unit” on page 2-2 ■ Section 2.1.2, “Power Cables” on page 2-2 ■ Section 2.1.3, “Power Sequencer” on page 2-3 ■ Section 2.1.4, “Local/Remote Switch” on page 2-3 ■ Section 2.1.5, “SCSI and Fiber-Optic Cables” on page 2-3 ■ Section 2.1.6, “SCSI ID, Loop ID, Controller, and Disk Tray Switch Settings” on page 2-4 ■ Section 2.1.
expansion cabinet power sequencer. If the cabinet’s original factory configuration has not been changed, then the cabinet should contain the correct power sequencer connections. Note – At the bottom of the expansion cabinet are two power sequencers. The front power sequencer is hidden behind the front key switch panel. Remove the front key switch panel to gain access to the power sequencer’s power cable. 2.1.
The total SCSI bus length for this example is 24.4 M. It is calculated as follows: each SCSI cable’s length (Cable no. 1 + Cable no. 2 + Cable no. 3) + the internal SCSI bus length of each device (Host no. 1 + A3x00 no. 1 + A3x00 no. 2 + Host no. 2).

FIGURE 2-1 SCSI Bus Length Calculation: Host no. 1 (0.1 M), Cable no. 1 (8.0 M), A3x00 no. 1 (0.1 M), Cable no. 2 (8.0 M), A3x00 no. 2 (0.1 M), Cable no. 3 (8.0 M), Host no. 2 (0.1 M); 3 x 8.0 M + 4 x 0.1 M = 24.4 M.
If two disk trays have the same tray ID, the system reports a 98/01 ASC/ASCQ error code at boot time. In the 1x2 configuration, two drive channels from a controller share one disk tray, so a tray ID conflict is unavoidable. The 98/01 ASC/ASCQ error code reported at boot time in this case has no impact on system performance.
2.2 Adding or Moving Arrays to a Host With Existing A3x00 Arrays When moving a disk array, ensure that the array being moved has firmware levels that match with the new host. See "Upgrading Controller Firmware" in the Sun StorEdge RAID Manager 6.22 Release Notes. Since the firmware on the controller cannot be downgraded, except in the case of a universal FRU, you should not move an array to a host with a lower RAID Manager release.
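As a rough sketch of how to confirm the controller firmware level from the host before the move, raidutil can report controller and firmware information. The -i option and the device name below are assumptions; confirm the exact syntax against the raidutil man page, or use Module Profile in the RAID Manager 6 GUI instead.

# Report controller inquiry/firmware information (device name is hypothetical)
/usr/lib/osa/bin/raidutil -c c1t5d0 -i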
2.3.1 Adding or Moving Disk Trays to Existing Arrays RAID Manager 6.22 has dynamic capacity expansion capability. If your RAID system has not used all five drive side channels, you can add disk trays to it and expand the capacity of the existing drive groups. The existing LUN capacity does not increase. See “Configuring RAID Modules” in the Sun StorEdge RAID Manager 6.22 User’s Guide.
Note – Refer to Escalation no. 525788, bug 4334761. Refer to FIN I0612 for further information. Caution – If the drives you are adding to the array were previously owned by another controller module, either A1000 or A3x00/A3500FC, ensure that you preformat the disk drives to wipe clean the old DacStore information before inserting them in an A3x00/A3500FC disk tray. Caution – Do not randomly swap drives between drive slots or RAID systems. You must use the Recovery Guru procedure to replace drives.
Note – When you issue a sysWipe command you might see a message indicating that sysWipe is being done in a background process. Wait for a message indicating that sysWipe is complete before issuing a sysReboot command. Once the configuration is reset and the previous DacStore is cleaned up, the drive status should come up Optimal as long as the drive has no internal problem. sysWipe should be run from each controller.
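The serial-shell sequence described above, as a sketch; run it on each controller, and remember that only trained personnel should access the serial port.

-> sysWipe
(wait for the message indicating that sysWipe is complete)
-> sysReboot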
2.6 Cluster, Multi-Initiator, and SCSI Daisy Chaining Configurations This section contains the following topics: 2.6.1 ■ Section 2.6.1, “Cluster Information” on page 2-10 ■ Section 2.6.2, “Multi-Initiator Information” on page 2-11 ■ Section 2.6.3, “SCSI Daisy Chaining Information” on page 2-11 Cluster Information Refer to the Sun StorEdge A3500/A3500FC Hardware Configuration Guide for instructions about cabling and setting up the SCSI ID and/or the FC loop ID.
2.6.2 Multi-Initiator Information Hubs are required for connecting A3500FCs in cluster/multi-initiator configurations. A3x00/A3500FC is supported with Sun Cluster 2.2 in a two node cluster configuration. In a multi-initiator (aka multi-host) connection, both nodes need to be Sun SPARC servers. Only a multi-initiator configuration that runs Sun Cluster is supported by Sun. This applies to A3x00 and A3500FC. Refer to the following web site for details regarding a cluster support matrix: http://suncluster.
2.7.1 Maximum Server Configurations Table 2-1 lists the maximum number of controller modules that are supported for a given server configuration.
■ X2622 (501-4884-05) I/O type 5, 83/90/100 MHz Gigaplane. Both SOC+ connections on one I/O board can be used to connect to an A3500FC concurrently. For better redundancy, do not connect both controllers of the same controller module to the same I/O board. The minimum firmware requirement for a supported I/O board is 1.8.25.

Note – Refer to FIN I0586-1 for details.

2.7.3 Second Port on the SOC+ Card

Other Fibre Channel devices can be connected to the second port. Refer to FIN I0586-1 for details.
2.8 SCSI to FC-AL Upgrade Note – Refer to FIN I0590 and the latest version of the Sun StorEdge A3500/A3500FC Controller Upgrade Guide for more information regarding this procedure. The latest version of the Sun StorEdge A3500/A3500FC Controller Upgrade Guide at the time this document was prepared: part no. 806-0479-11. You need to load NVSRAM to the A3500FC controllers as documented in FIN I0590 and in the Sun StorEdge A3500/A3500FC Controller Upgrade Guide.
CHAPTER 3 RAID Manager Installation and Configuration This chapter provides some additional information, guidelines, and tips relating to the installation and configuration of an array. This chapter contains the following sections: ■ Section 3.1, “Installation and Configuration Tips, Tunable Parameters, and Settings” on page 3-2 ■ Section 3.2, “LUN Creation/RAID Level” on page 3-5 ■ Section 3.3, “LUN Deletion and Modification” on page 3-9 ■ Section 3.
3.1 Installation and Configuration Tips, Tunable Parameters, and Settings This section contains the following topics: 3.1.1 ■ Section 3.1.1, “Software Installation” on page 3-2 ■ Section 3.1.2, “Software Configuration” on page 3-3 ■ Section 3.1.3, “RAID Module Configuration” on page 3-3 ■ Section 3.1.4, “Tunable Parameters and Settings” on page 3-3 ■ Section 3.1.5, “Multi-Initiator/Clustering Environment” on page 3-4 ■ Section 3.1.
3.1.2 3.1.3 Software Configuration ■ RAID Manager 6.1.1—Refer to the Sun StorEdge RAID Manager 6.1.1 Installation and Support Guide for details. ■ RAID Manager 6.22—Refer to the Sun StorEdge RAID Manager 6.22 Installation and Support Guide for details. ■ If the default LUN 0 has to be resized (remove and recreate because the size is too small), see FIN I0573 for procedure. ■ When upgrading to RAID Manager 6.22, you may see warning messages indicating that there are bad disk drives.
3.1.5 Multi-Initiator/Clustering Environment ■ Sun Cluster is the only clustering/multi-initiator environment tested and verified by Sun with the A3x00. A number of parameters should be modified to run an A3x00 under the Sun Cluster 2.1/Sun Cluster 2.2 environment. See Rdac_RetryCount, Rdac_NoAltOffline and Rdac_Fail_Flag in the RAID Manager 6.1.1_u1 or 6.1.1_u2 patch number 106707-03 or later. ■ See the Sun Cluster documentation for specific Sun Cluster requirements. For Sun Cluster 2.
3.1.6 Maximum LUN Support The default setting for maximum LUN support is 8. If more than 8 LUNs are required on each A3x00, refer to "Maximum LUN Support..." in the RAID Manager 6 Release Notes for details. Also see FIN I0589 for PCI HBAs. Note – When performing a RAID Manager upgrade, if extended LUN support is enabled, ensure that you reenable it during the upgrade as described in the Upgrade Guide. Caution – Do not use the add16lun.sh script found on the RAID Manager 6.1.1 CD-ROM on a PCI machine.
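For orientation only, extended LUN support on a SCSI-attached array comes down to additional LUN entries in the appropriate driver .conf file. The sketch below shows the general form of /kernel/drv/sd.conf entries with an assumed target ID of 4; use the procedure in the Release Notes, FIN I0589, or the supplied scripts appropriate to your RAID Manager version and HBA type rather than editing by hand.

# /kernel/drv/sd.conf -- one entry per additional LUN behind the array target (target ID assumed)
name="sd" class="scsi" target=4 lun=8;
name="sd" class="scsi" target=4 lun=9;
# ...continue through the highest LUN required, then perform a reconfiguration reboot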
■ DacStore size is different between LUNs created under RAID Manager 6.0/RAID Manager 6.1 vs. RAID Manager 6.1.1/RAID Manager 6.22x. See Section 3.2.7, “DacStor Size (Upgrades)” on page 3-8. ■ LUN creation under RAID Manager 6.22x. hot_add is a new command introduced in RAID Manager 6.22x and patch 106552-04 in RAID Manager 6.1.1_u1/2. It cleans up the Solaris device tree by running devfsadm (Solaris 8 and later) or the following set of commands: drvconfig, devlinks, and disks.
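The hot_add step above, as a minimal sketch; the installation path is an assumption.

# RAID Manager 6.22x: clean up the Solaris device tree after LUN creation
/usr/lib/osa/bin/hot_add

# What hot_add runs underneath, if you need to do it manually:
devfsadm                          # Solaris 8 and later
drvconfig; devlinks; disks        # earlier Solaris releases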
Caution – If write cache is turned on in dual/active mode you must also have it mirrored. Failure to do so may result in data corruption if a controller fails. 3.2.5 Reconstruction Rate ■ Reconstruction rate depends on the "Reconstruction Rate setting" and the I/O load on the module. Refer to Chapter 7 "Maintenance and Tuning" in the Sun StorEdge RAID Manager User’s Guide.
Note – Do not revive a drive if it is failed by the controller. Refer to the Sun StorEdge RAID Manager User’s Guide. 3.2.6 3.2.7 Creation Process (Serial/Parallel) Time ■ LUNs can be created with either CLI or the GUI. Use the GUI to create LUNs for better coordination between the various back-end utilities. ■ Be sure that an Optimal LUN 0 resides on one of the controllers and an Optimal LUN exists on the other controller before you attempt parallel LUN creation.
RAID Manager 6.22x does not support 2-MByte DacStore. It supports 40-MByte DacStore. 3.3 3.4 LUN Deletion and Modification ■ See the "Guidelines for Creating or Deleting LUNs" section in the Sun StorEdge RAID Manager 6.22 Release Notes for details on restrictions for LUN 0 removal. Refer to FIN I0573-2 for more information regarding the serious consequences of deleting LUN 0. ■ A number of new features are available to modify a LUN/drive group while the LUN/drive group is in production.
Caution – Modifying the NVSRAM settings via the nvutil (1M) command will change the behavior of the controller. Use caution when executing this command. 3.4.2 Parity Check Settings This section contains the following topics: 3.4.2.1 ■ Section 3.4.2.1, “RAID Manager 6.1.1” on page 3-10 ■ Section 3.4.2.2, “RAID Manager 6.22x” on page 3-10 ■ Section 3.4.2.3, “Parity Repair” on page 3-11 ■ Section 3.4.2.4, “Multi-host Environment” on page 3-11 RAID Manager 6.1.1 The RAID Manager 6.1.
3.4.2.3 Parity Repair With RAID 3 and RAID 5, data blocks are assumed to be good. Parity blocks are regenerated by parityck (1M) with the proper options. See the man page parityck (1M) for further details. 3.4.2.4 Multi-host Environment ■ Only 1 host in a cluster should be capable of running parityck. ■ Each host in a box sharing environment can run parityck.
CHAPTER 4 System Software Installation and Configuration This chapter provides some additional information, guidelines, and tips relating to installation and configuration of system software. This chapter contains the following sections: ■ Section 4.1, “Installation” on page 4-2 ■ Section 4.2, “Solaris Kernel Driver” on page 4-2 ■ Section 4.3, “format and lad” on page 4-4 ■ Section 4.4, “Ghost LUNs and Ghost Devices” on page 4-5 ■ Section 4.5, “Device Tree Rearranged” on page 4-9 ■ Section 4.
4.1 Installation This section contains the following topics: 4.1.1 4.1.2 ■ Section 4.1.1, “New Installation” on page 4-2 ■ Section 4.1.2, “All Upgrades to RAID Manager 6.22 or 6.22.1” on page 4-2 New Installation ■ RAID Manager 6.1.1—Refer to Chapter 1 in the Sun StorEdge RAID Manager 6.1.1 Installation and Support Guide for Solaris. ■ RAID Manager 6.22—See the “About the Installation Procedure” section in Chapter 1 in the Sun StorEdge RAID Manager 6.22 Installation and Support Guide for Solaris.
RAID Manager 6.22 supports both SCSI and Fibre Channel interconnect to the A3x00/A3500FC controller module. RAID Manager 6.22 supports only SCSI when you are using the Solaris 2.5.1 11/97 operating environment. See FIN I0688. The SCSI driver stack is the same as RAID Manager 6.1.1. The driver stack for Fibre Channel is:

■ SBus/soc+socal/sf/ssd
■ PCI/QLC2100/ifp/ssd
■ PCI/QLC220x/fcp/ssd

4.2.1 sd_max_throttle Settings

■ sd_max_throttle for the A3x00 is set by sd.
sd_error_level = 2 or 0 (for A3x00 SCSI) ssd_error_level = 2 or 0 (for A3500FC) RdacDebug = 1 (for both SCSI and FC) You can set the variables in two ways: ■ adb -kw ■ You can add the variables to the end of /etc/system, followed by a reboot. See the man page system (4) for further details. With the variables set, all failed command descriptor block (CDB) and retry commands will appear on the console and in /var/adm/messages. Be sure enough space is available on /var/adm, if file system size is limited.
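As a sketch only, the corresponding /etc/system entries could look like the following. The sd and ssd module prefixes are standard; the module that owns RdacDebug is an assumption here, so verify the exact variable names against system (4) and the RAID Manager documentation before rebooting.

* Verbose reporting of failed CDBs and retries for the A3x00 (SCSI)
set sd:sd_error_level = 2
* Same for the A3500FC (Fibre Channel)
set ssd:ssd_error_level = 2
* RDAC debug output (module prefix is an assumption)
set rdriver:RdacDebug = 1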
4.3.1 Volume Labeling At the end of the LUN creation process, format (1M) is called to label the LUN with a volume label. If the LUN creation process is interrupted or the LUN is created via the serial port, a valid Solaris label may not exist on the LUN. In this case, just label the LUN manually using the format (1M) command. 4.4 Ghost LUNs and Ghost Devices The following sample procedure corrects a configuration with a LUN that has a drive defined at location [15,15] (not valid).
■ dev pointer=0x2b4c3ec is the address of the second GHS, under GHS1 GHS ENTRY 0 dev pointer=0x2b5348c (the address of the first Ghost Hot Spare) devnum=2 state=2 status=0 flags 4000 GHS ENTRY 1 dev pointer=0x2b4c3ec devnum 0002 state=2 status=0 flags 4000 value = 5 = 0x5 2. Remove the extra LUN that is part of the Global hot spare list. Utilize shell commands on a laptop connected directly to the RS-232 port. -> m 0x dev pointer address,4 The memory locations are displayed 4 bytes at a time. 3.
5. Modify the global hot spare list in memory. This entry should be the dev pointer address from ghsList. Zero this location out. -> m &globalHotSpare 02f09104: 02b5348c-02b4c3ec (Put address of second GHS here) (packing stack after removal of invalid entry) 02f09108: 02b4c3ec-00000000 (Zero out this location) 02f0910c: 00000000- . (end the command with a "." [period] and a return) value = 1 = 0x1 When the globalHotSpare list is modified, the entries should be packed, removing the false GHS pointer.
4.4.1 Removing Ghost Drives Use this procedure to remove ghost drives that are not Global hot spares. To remove a phantom drive, perform the following steps through the controller shell. Caution – Only trained and experienced Sun personnel should access the serial port. You should have a copy of the latest Debug Guide. There are certain commands that can destroy customer data or configuration information. No warning messages appear if a potentially damaging command has been executed.
7. Enter the string: -> m 0x(ADDRESS from step 4),4 8. After the dash (-), enter the nextphy value from step 2 and press enter. 9. Enter a period (.) and press enter. 10. Enter the string: -> cfgPhy ch,id Verify “number of phydevs = 0”. 11. Enter the string: -> isp cfgSaveFailedDrives 4.5 Device Tree Rearranged This section contains the following topics: ■ Section 4.5.1, “Dynamic Reconfiguration Related Problems” on page 4-10 ■ Section 4.5.1.
the disks (1M) program would remove such failed controllers. In Solaris 7 and later, disks won’t purge failed devices unless they are called as disks -C or devfsadm -C (Solaris 8). ■ 4.5.1 With the change to devfsadm in Solaris 7 and later, the file /dev/cfg also keeps bus numbers and may need to be removed before a reconfiguration reboot in order to clear up persistent misnumbering. Dynamic Reconfiguration Related Problems The first time you add an array, the /kernel/drv/rdriver.
/net/artemas.ebay/global/archive/StorEdge_Products/ sonoma/rm_6.1.1_u2/FCS/Tools or /net/artemas.ebay/global/archive/StorEdge_Products/ sonoma/rm_6.22/Tools Note – Adding support of more LUNs than you need extends the time required for reboot and the response time of the RAID Manager 6 GUI because it has to scan all the potential LUNs. See FIN I0551-1 or later. ■ Common problem—RAID Manager 6 is unable to communicate with the module but lad shows more then 8 LUNs. Solution—Re-run addXXlun.
4.7 Interaction With Other Volume Managers This section contains the following topics: 4.7.1 ■ Section 4.7.1, “VERITAS” on page 4-12 ■ Section 4.7.2, “Solstice Disksuite (SDS)” on page 4-13 ■ Section 4.7.3, “Sun Cluster” on page 4-13 ■ Section 4.7.4, “High Availability (HA)” on page 4-13 ■ Section 4.7.5, “Quorum Device” on page 4-14 VERITAS This section contains the following topics: ■ Section 4.7.1.1, “VERITAS Enabling and Disabling DMP” on page 4-12 ■ Section 4.7.1.
4.7.1.2 HA Configuration Using VERITAS If you have a problem running an A3x00 under a third party cluster environment, you can check with CPRE to see whether they have VIP arrangement with the third party vendor to help you move forward. Because Sun has no access to the source code of third party cluster software, debugging is problematical.
4.7.5 Quorum Device

Quorum is a concept used in distributed systems, particularly in a cluster environment. The requirements and restrictions of a quorum device are specific to the particular cluster environment. Refer to the following web sites for online documentation:

http://suncluster.eng.sun.com/engineering/SC2.1
http://suncluster.eng/engineering/SC2.2/fcs_docs/fcs_docs.html

Using the Sun StorEdge A1000 or A3x00/A3500FC array as a quorum device is not supported. See FIN I0520-02.
CHAPTER 5 Maintenance and Service This chapter provides maintenance and service information for verifying FRU functionality, guidelines for replacing FRUs, and tips on upgrading to the latest software and firmware levels. This chapter contains the following sections: ■ Section 5.1, “Verifying FRU Functionality” on page 5-2 ■ Section 5.2, “FRU Replacement” on page 5-10 ■ Section 5.
5.1 Verifying FRU Functionality This section contains the following topics: ■ Section 5.1.1, “Disk Drives” on page 5-3 ■ Section 5.1.2, “Disk Tray” on page 5-4 ■ Section 5.1.3, “Power Sequencer” on page 5-5 ■ Section 5.1.4, “SCSI Cables” on page 5-6 ■ Section 5.1.5, “SCSI ID Jumper Settings” on page 5-7 ■ Section 5.1.6, “SCSI Termination Power Jumpers” on page 5-7 ■ Section 5.1.7, “LED Indicators” on page 5-7 ■ Section 5.1.8, “Backplane Assembly” on page 5-7 ■ Section 5.1.
Note – Remember to reset the battery date on both controllers after a battery replacement. Refer to Chapter 6 in the Sun StorEdge RAID Manager 6.22 User’s Guide and read the section “Recovering from Battery Failures” for details on resetting the battery date. Note – The power supplies have a thermal protection shutdown feature. To recover from a power supply shutdown, see Section 7.1 “Recovering From a Power Supply Shutdown” in the Sun StorEdge A3500/A3500FC Controller Module Guide. 5.1.
5.1.2 Disk Tray This section contains the following topics: ■ Section 5.1.2.1, “RSM Tray” on page 5-5 ■ Section 5.1.2.2, “D1000 Tray” on page 5-5 Several conditions can cause a disk tray to become inaccessible: a loose or defective SCSI cable, a loose or defective SCSI terminator, a defective SCSI chip on the controller board, or a defective component in the disk tray. The problem can be sometimes difficult to isolate. Check the rmlog.log and system logs for an error sense code or a FRU code.
5.1.2.1 RSM Tray

A common point of failure on the RSM tray is the WD2S card, part number 370-2196 (older version) and part number 370-3375 (newer version). This card is located at the point where the SCSI cable attaches to the RSM disk tray. The function of this card is to convert Wide Differential SCSI to Single Ended SCSI. The other common point of failure on the RSM tray is the SEN card, part number 370-2195. The SEN card should have microcode rev 1.1.
There is a Local/Remote switch located on the front panel of each power sequencer. When the Local/Remote switch is set to Local the sequenced outputs are controlled by a circuit breaker located on the front panel of each power sequencer. When the Local/Remote switch is set to Remote, the sequenced outputs are controlled by the key switch located at the bottom front of the Expansion rack. When the Local/Remote switch is set to OFF, power is removed from the sequenced outputs.
5.1.5 SCSI ID Jumper Settings The controller module SCSI ID can be changed, if necessary, by the use of jumpers. The SCSI ID jumper block is located at the rear of the SCSI controller module. See Section 2.3 “Verifying Controller Module ID Settings” in the Sun StorEdge A3500/A3500FC Hardware Configuration Guide for detailed instructions. The factory default settings are shown in TABLE 5-1. TABLE 5-1 5.1.
5.1.10 5.1.11 Verifying the HBA ■ Refer to Early Notifier 20029 for the latest information regarding HBA support. ■ The UDWIS/SBus host bus adapter (HBA) should be at firmware level 1.28 or higher (Refer to FCO A0163-1 and FIN I0547 for further details). ■ The older SOC+ card, part number 501-3060, is not supported with the A3500FC. You need to check the card part number label located on the SBus connector to determine the part number.
The rdacutil -u command unfails the alternate controller then attempts to communicate with the alternate controller through the I/O path. This is how the RAID Manager 6 GUI unfails a controller. Sometimes by issuing these two commands you can determine if the failure is internal to the controller board or external.
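A minimal sketch of the command form. The module name below is a placeholder and the installation path is an assumption; use lad or Module Profile to identify the actual RAID module first.

# Unfail the alternate controller of the named RAID module
/usr/lib/osa/bin/rdacutil -u <raid_module_name>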
5.1.13 Ethernet Port The Ethernet port located on the back of the controller module is not supported. 5.2 FRU Replacement This section contains the following topics: 5.2.1 ■ Section 5.2.1, “HBA” on page 5-10 ■ Section 5.2.2, “Interconnect Cables” on page 5-11 ■ Section 5.2.3, “Power Cords” on page 5-11 ■ Section 5.2.4, “Power Sequencer” on page 5-11 ■ Section 5.2.5, “Hub” on page 5-12 ■ Section 5.2.6, “Controller Card Guidelines” on page 5-12 ■ Section 5.2.
5.2.2 Interconnect Cables ■ Host SCSI cables Stop all I/O activities to the corresponding data path before replacing a host SCSI cable. ■ SCSI cables See Section 6.1 in the Sun StorEdge A3500/A3500FC Controller Module Guide for further details. ■ Fiber-optic cables See Section 6.2 in the Sun StorEdge A3500/A3500FC Controller Module Guide for further details. ■ Controller module terminators Stop all I/O activities to the corresponding controller module before replacing a controller module terminator.
5.2.5 Hub You need to stop all I/O activities to the hub before replacing it. Refer to the FC-100 Hub Installation and Service Manual for further details. 5.2.6 Controller Card Guidelines ■ With RAID Manager 6.22.1 or patches 109232 and 109233, there are new NVSRAMs. With a Sun StorEdge A1000, download the NVSRAM after the controller card is replaced. See FIN I0709.
■ SCSI Controller Canister w/Memory (RSM), part number 540-3600 ■ Fiber Controller Canister w/Memory (D1000), part number 540-4026 ■ Fiber Controller Canister w/Memory (RSM), part number 540-4027 When returning a controller canister for repair, ensure that the memory SIMMs are returned with the controller canister. If a SCSI controller canister being returned has 128-MB of cache memory, order two memory FRUs, part number F370-2439 in addition to the replacement controller canister FRU.
5.2.9 ■ The battery has a service life of two years. After two years, it needs to be replaced. A fresh battery will guarantee that the data saved in the controller’s cache memory will be kept live for up to the design specification of 72 hours. ■ See the sections “To Replace Old Batteries” and “To Replace New Batteries” in the Sun StorEdge RAID Manager 6.22.1 Release Notes for more information on replacing old and new batteries. Cooling See Section 7.
■ In the RSM tray: the SEN card and the WD2S card Install a jumper at location ID3 to change SCSI address range from 0-7 to 8-15. ■ The entire disk tray RAID Manager 6 Recovery Guru reports drive side channel failure when the failure affects the entire disk tray. After the hardware component has been replaced, run a health check to verify the status of the drive channel, drive tray, and each disk drive.
5.2.13 Reset Configuration and sysWipe Reset Configuration or sysWipe will delete all LUNs and bring the RAID system to a default state: active controller A, passive controller B and one default 10-MB LUN 0. Reset Configuration is a RAID Manager 6 procedure and sysWipe is a serial port command. sysWipe wipes clean all prior DacStore data. You need to issue a sysReboot after a sysWipe command is executed. Caution – Only trained and experienced Sun personnel should access the serial port.
■ Section 5.3.3, “Firmware Upgrade” on page 5-18 Follow these general guidelines to simplify software and firmware upgrades. ■ Do not run an A3x00 with mixed firmware levels. FRU replacements for controller boards come at firmware 02.05.06.32. This level can be upgraded (or downgraded) to an appropriate level after installation. Always check the firmware level. ■ When upgrading from RAID Manager 6.0 to RAID Manager 6.1.1 (or later), upgrading the firmware requires an intermediate step.
5.3.2 RAID Manager 6 Upgrade

You should use the Sun StorEdge RAID Manager 6.22 and 6.22.1 Upgrade Guide (part number 806-7792) to perform any upgrade to RAID Manager 6.22 or 6.22.1. Refer to the Sun StorEdge RAID Manager 6.22 Installation and Support Guide for Solaris for further details. RAID Manager 6.1.1 does not support Solaris 8. Solaris 8 support requires RAID Manager 6.22 and patch no. 108553. For Solaris 9, only RM 6.22.1 is supported. RAID Manager 6.0 and RAID Manager 6.1 have a 2-MB DacStore.
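Before starting an upgrade, it helps to confirm which RAID Manager packages and version are installed. This is a sketch; the SUNWosau package name and SUNWosa prefix are assumptions, so fall back to searching the full package list if they do not match.

# Show the installed RAID Manager utilities package and its version
pkginfo -l SUNWosau | grep -i version
# Or list all RAID Manager (osa) packages
pkginfo | grep SUNWosa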
Note – If you are running Solaris 7 dated 11/99 and plan to upgrade the firmware on an A3x00 controller, you need to ensure that patch no. 106541-10 (KJP 10) for Solaris has been installed. Refer to Sun Early Notifier EN20029 and bug 4334814 for further details.
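A quick host-side check that the kernel patch is present, as a sketch; the grep pattern matches any revision, so confirm that the revision shown is -10 or later.

# Verify that kernel patch 106541 is installed
showrev -p | grep 106541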
CHAPTER 6 Troubleshooting Common Problems This chapter discusses some common problems encountered in the field and provides additional information and tips for troubleshooting these problems. This chapter contains the following sections: ■ Section 6.1, “Controller Held in Reset, Causes, and How to Recover” on page 6-2 ■ Section 6.2, “LUNs Not Seen” on page 6-6 ■ Section 6.3, “Rebuilding a Missing LUN Without Reinitialization” on page 6-7 ■ Section 6.
6.1 Controller Held in Reset, Causes, and How to Recover This section contains the following topics: ■ Section 6.1.1, “Reason Controllers Should be Failed” on page 6-2 ■ Section 6.1.2, “Failing a Controller in Dual/Active Mode” on page 6-3 ■ Section 6.1.3, “Replacing a Failed Controller” on page 6-4 The A3x00/A3500FC controllers do not detect controller failure and fail themselves. The host system via the A3x00/A3500FC drivers or the user must make the decision to fail a controller.
The symptoms of obtrusive array controllers may be the successful completion of some commands, particularly non data access commands. Many data access commands may fail on one or both controller in the subsystem. Other symptoms include frequent command time-outs amidst many successful command operations. ■ Failed Inter-controller Communication (ICON) Path Redundant controllers rely on the ICON channel, which may be a dedicated Application-specific Integrated Circuit (ASIC).
Upon receiving the mode select, the controller will: ■ Attempt to quiesce itself and its pair. ■ New commands to either of the controllers will terminate with a check condition indicating that quiescence is in progress. ■ Write the new controller information to DacStore. ■ Hold the alternate controller in reset. ■ Reset the drive buses. ■ Reconfigure to become the active controller in active/passive mode. ■ Return status to the host for the mode select command.
back to active mode. Once you have done that, you might have to do some LUN rebalancing between the active controllers now. Also check to make sure that the firmware level matches what was on the active controller; you might have to do a firmware upgrade on the replaced controller. If you are going to use the rdacutil -u/U to unfail the controller, and use the device argument form, you must use the controller that is in active mode and not the failed controller device name.
■ 3f/00 target operating conditions have changed
■ 3f/01 microcode has been changed
■ 3f/02 changed operating definition
■ 3f/03 inquiry data has changed

6.2 LUNs Not Seen

There are many possible causes, but the usual scenario is after reconfiguration of the system:

■ After upgrade of RAID Manager 6, see bug 4118532.
■ After upgrade of Solaris: usually sd.conf is lost, so LUNs above 8 are no longer seen. This is described in the Sun StorEdge RAID Manager 6.22 Release Notes.
■ Adding drives from another A3x00 while the system is powered down can cause loss of LUN configuration as described in bug 4133673.

6.3 Rebuilding a Missing LUN Without Reinitialization

This section covers the following topics:

■ Section 6.3.1, “Setting the VKI_EDIT_OPTIONS” on page 6-7
■ Section 6.3.2, “Resetting the VKI_EDIT_OPTIONS” on page 6-9
■ Section 6.3.3, “Deleting a LUN With the RAID Manager GUI” on page 6-9
■ Section 6.3.4, “Recreating a LUN With the RAID Manager GUI” on page 6-9
■ Section 6.3.5, “Disabling the Debug Options” on page 6-10
2. To enter insert mode, type: i Press Return or Enter. 3. Type: writeZerosFlag=1 Press Return or Enter twice. 4. To enable debug options, type: + Press Return or Enter. 5. To quit, type: q Press Return or Enter. 6. To commit changes, type: y Press Return or Enter. 7. From the shell prompt type: -> writeZerosFlag=1 8.
If the flag was set properly the output should indicate: value = 1 and you can proceed to Section 6.3.3, “Deleting a LUN With the RAID Manager GUI” on page 6-9 or Section 6.3.4, “Recreating a LUN With the RAID Manager GUI” on page 6-9. However, if the output says anything like: “new value added to table,” something was done incorrectly within the VKI_EDIT_OPTIONS. Do not proceed. Re-enter the VKI_EDIT_OPTIONS and remove the statement previously entered on both controllers. 6.3.
1. From the Configuration screen, select a module from RAID Module. When the LUN was deleted, all the drives assigned to that LUN should have been moved to the Unassigned drive area under module information. 2. Highlight the Unassigned drive icon. Right click the Unassigned icon and select Create LUN... 3. Select the appropriate input for the RAID Level, Number of Drives, and Number of LUNs options. Create an exact replica from the module profile.
3. To confirm, type: y 4. To disable the options, type: - 5. To confirm, type: y 6. To quit, type: q 7. To confirm, type: y 8. When you are back at the prompt enter: -> writeZerosFlag=0 sysReboot or -> sysReboot 6.4 Dynamic Reconfiguration This section contains the following topics: ■ Section 6.4.1, “Prominent Bugs” on page 6-12 ■ Section 6.4.
6.4.1 6.4.2 Prominent Bugs ■ Bug 4356814 - Dynamic reconfiguration fails with A3500FC, Leadville drivers, Qlogic 2202 on an E10000 The resolution of this bug demonstrates that dynamic reconfiguration works on an E10000 over a PCI bus using QLogic 2202. ■ Bug 4330698 - Unable to detach (dynamic reconfiguration) system board with A3x00/A3500FC connect. Indicates a recent problem with dynamic reconfiguration under Solaris 2.
Both of these bugs might be due to a known problem with Vixel 1000 hubs, Sun’s only hub product as of October 2000. The hub doesn’t propagate the link failure back to the host, so if the path is lost on the array side of the hub, there is no notification sent to the host. Resetting the loop via software will rectify the problem. 6.6 GUI Hang Sometimes certain RAID Manager 6 applications such as Recovery or Maintenance will stop responding, either showing an hour glass or appearing to be dead.
6.8 Phantom Controllers Under RAID Manager 6.22 There have been issues regarding the installation and configuration of Solaris with RAID Manager 6.22 and VERITAS Volume Manager. These issues regard instances of "phantom controllers" or device nodes. This can cause problems for your installation. To avoid these issues perform the installation of your system in the following order: 1. Solaris: a. Install Solaris. b. Install required patches. 2. RAID Manager 6.22: a. Run pkgadd to install the RAID Manager 6.
NOTES: 6.9 ■ You can run the add16lun or add32lun script that comes with RAID Manager 6.22. It will do all the steps needed for 16 or 32 LUNs support (rdriver.conf gets modified). ■ Another new command, rdac_disks, cleans up the device tree so there is no confusion between VERITAS Volume Manager device tree and the /dev/osa device tree. If this step is omitted you will likely find phantom controllers, and that lad and format using different path’s etc. ■ If you are using any version prior to VxVM 3.
6.10 Data Corruption and Known Problems ■ Fujitsu 4/9-GB disk drive firmware 2848 has a bug and should be replaced using patch no. 108873. ■ Turning power off a disk tray when using a RAID Manager version prior to 6.22. Although this is not supported it can be done accidentally. See bug 4307641. RAID Manager 6.22 with firmware 3.1.x has a fix for this problem. ■ RAID Manager 6.0 with firmware 2.4.x doesn’t properly handle internal memory failures completely. It has been EOL’ed.
6.11 Disconcerting Error Messages During ufsdump operations, the following errno 5 message may appear: Apr 10 22:29:40 abc unix: WARNING: The Array driver is returning an Errored I/O, with errno 5, on Module 1, Lun 1, sector 43261180 This message can be ignored if the error only occurs when ufsdump is running. Otherwise the error needs to be further evaluated. See bug 4234852 and bug 4289725 and the related escalations. This message is not encountered when using RAID Manager 6.22x.
APPENDIX A Reference This chapter contains the following topics: ■ Section A.1, “Scripts and man Pages” on page A-2 ■ Section A.2, “Template for Gathering Debug Information for CPRE/PDE” on page A-3 ■ Section A.3, “RAID Manager Bootability Support for PCI/SBus Systems” on page A-4 ■ Section A.4, “A3500/A3500FC Electrical Specifications” on page A-5 ■ Section A.
A.1 Scripts and man Pages A number of scripts are available in the Tools directory of the released CD. The README file in the Tools directory has a description of these scripts. A sample copy of the README file is available in the following directory: /net/artemas.ebay/export/releases/sonoma/rm_6.22/rm6_22_FCS \ /Tools/README The following man pages provide supplementary information for RAID Manager 6.22 array management and administration.
A.2 Template for Gathering Debug Information for CPRE/PDE The following template should be used when submitting information to engineering regarding problems encountered in the field with the A3x00/A3500FC. ■ What is the current version of Solaris that is running on the host processor? Does the problem also occur on previous versions of Solaris, for example, Solaris 2.5.1, Solaris 7, Solaris 8, etc? ■ Record the output of the "Save Module Profile" from RAID Manager 6 GUI.
■ The state of the components in the A3x00/A3500FC (for example are there any failed controllers or drives, have any cables been disconnected, etc) ■ A copy of the output from RAID Manager 6 health check Note – The engineer that is working on low level A3x00/A3500FC firmware may not be very familiar with low level system administration commands, details of the configuration, or how the system operates. The engineer will require detailed information to determine what the problem is.
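A sketch of host-side commands that capture several of the items in this checklist; the RAID Manager installation path and the healthck option are assumptions.

# Solaris version and installed patches
uname -a
showrev -p > /var/tmp/patches.out
# Recent system and driver messages
cp /var/adm/messages* /var/tmp/
# Array and LUN view from the host, plus a health check
/usr/lib/osa/bin/lad > /var/tmp/lad.out
/usr/lib/osa/bin/healthck -a > /var/tmp/healthck.out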
TABLE A-3 A3x00 Bootability on PCI-Based Hosts

RAID Manager Version   Solaris 2.6   Solaris 2.7   Solaris 2.8 (02/2000)   Solaris 2.8 (07/2001)   Solaris 2.9
6.1.1                  Pass          Pass          Not Supported           Not Supported           Not Supported
6.22x                  Pass          Pass          Fail                    Fail                    Not Supported

TABLE A-4 A3x00 Bootability on SBus-Based Hosts

RAID Manager Version   Solaris 2.6   Solaris 2.7   Solaris 2.8 (02/2000)   Solaris 2.8 (07/2001)   Solaris 2.9
6.1.1                  Pass          Pass          Not Supported           Not Supported           Not Supported
The following table provides power consumption information for a given array system configuration (minimum and maximum). The difference in power consumption at 30˚C and 40˚C is due to the cooling fans spinning at a higher speed at 40˚C.
TABLE A-5 Power Consumption Specifications (Continued)

Configuration                                                         Power Consumption at 30˚C    Power Consumption at 40˚C
                                                                      (BTU/Watts)                  (BTU/Watts)

3x15 using 9-GB disk drives (maximum configuration)                   18063/5293                   19573/5735
3x15 using 18-GB or 36-GB (1”) disk drives (minimum configuration)    6618/1939                    8126/2381
3x15 using 18-GB or 36-GB (1”) disk drives (maximum configuration)    19417/5689                   20925/6131
3x15 using 36-GB (1.6”) disk drives (minimum configuration)           7427/2176                    8935/2618
■ SE A3500FCd - Sun StorEdge A3500FC with D1000 disk trays ■ SE A3500FCr - Sun StorEdge A3500FC with RSM disk trays Definitions: ■ Rack Product Name—The marketing name for the rack. This name appears on the brochure or data sheet for the product. ■ Sun StorEdge Controller Name Tag—The name tag is located on the face plate of the controller.