HP SureStore E Disk Array FC60 Advanced User’s Guide This manual was downloaded from http://www.hp.com/support/fc60/ hpHH Edition E1200 Printed in U.S.A.
Notice Safety Notices © Hewlett-Packard Company, 1999, 2000. All rights reserved. Hewlett-Packard Company makes no warranty of any kind with regard to this document, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. HewlettPackard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.
Format Conventions Denotes WARNING A hazard that can cause personal injury Caution A hazard that can cause hardware or software damage Note Significant concepts or operating instructions this font Text to be typed verbatim: all commands, path names, file names, and directory names this font Text displayed on the screen Printing History 1st Edition - September 1999 2nd Edition - October 1999 3rd Edition - February 2000 4th Edition - July 2000 5th Edition - September 2000 6th Edition - October 2000
Manual Revision History December 2000 Change Page Added Figure 87 to clarify operation of the write cache flush thresholds. 253 Added note regarding the impact of LUN binding on performance. 250 Added information on Managing the Universal Transport Mechanism (UTM). 298 Added information on major event logging available with firmware HP08. 307 Added Allocating Space for Disk Array Logs section describing use of environment variable AM60_MAX_LOG_SIZE_MB.
About This Book This guide is intended for use by system administrators and others involved in operating and managing the HP SureStore E Disk Array FC60. It is organized into the following chapters and section. Chapter 1, Product Description Describes the features, controls, and operation of the disk array. Chapter 2, Topology and Array Planning Guidelines for designing the disk array configuration that best meets your needs. Chapter 3, Installation Instruction for moving the disk array.
Related Documents and Information The following items contain information related to the installation and use of the HP SureStore E Disk Array and its management software. • HP SureStore E Disk Array FC60 Advanced User’s Guide - this is the expanded version of the book you are reading. Topics that are discussed in more detail in the Advanced User’s Guide are clearly identified throughout this book. !"Download: www.hp.
1 Product Description Product Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Operating System Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 Management Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 Contents Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Disk Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65 Disk Array Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66 Dynamic Capacity Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67 2 Topology and Array Planning Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Non-High Availability Topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 High Availability Topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137 3 Installation Contents Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144 Host System Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 HP-UX .
Attaching Power Cords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183 Attaching SCSI Cables and Configuring the Disk Enclosure Switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 Full-Bus Cabling and Switch Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188 Split-Bus Switch and Cabling Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . 191 Connecting the Fibre Channel Cables . . . . . . .
4 Managing the Disk Array on HP-UX Tools for Managing the Disk Array FC60. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 System Administration Manager (SAM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 Array Manager 60 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 Contents STM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Managing the Disk Array Using Array Manager 60. . . . . . . . . . . . . . . . . . . . . . . . . . 276 Command Syntax Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 Array Manager 60 man pages. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 Quick Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 Selecting a Disk Array and Its Components . . . . . . . . . .
Support Tools Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 STM User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347 STM Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351 Using the STM Information Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Disk Enclosure Fan Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392 Disk Enclosure Power Supply Module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394 Controller Enclosure Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 396 Front Cover Removal/Replacement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397 Controller Fan Module . . . . . . . . . . . . . . . . . . . . . . . . . . .
AC Power:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425 DC Power Output: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425 Heat Output:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425 Environmental Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426 Acoustics . . . . . . . .
PRODUCT DESCRIPTION Product Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Disk Enclosure Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Array Controller Enclosure Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 Disk Array High Availability Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Product Description The HP SureStore E Disk Array FC60 (Disk Array FC60) is a disk storage system that features high data availability, high performance, and storage scalability. To provide high availability, the Disk Array FC60 uses redundant, hot swappable modules, which can be replaced without disrupting disk array operation should they fail.
Product Description Array Controller FC60 SureStore E Disk System SC10 Figure 1 HP SureStore E Disk Array FC60 (Controller with Six Disk Enclosures) Product Description 19
Operating System Support The Disk Array FC60 is currently supported on the following operating systems: • • • HP-UX 11.0, 11.11, and 10.20 Windows NT 4.0 Windows 2000 Some disk array features are specific to each operating system. These features are clearly identified throughout this book. Note Management Tools HP-UX Tools The following tools are available for managing the disk array on HP-UX. These tools are included with the disk array.
RAID levels 0, 1, 0/1, 3, and 5 (RAID level 3 supported on Windows NT and Windows 2000 only) • EMS hardware monitoring (HP-UX only) Product Description • High Availability High availability is a general term that describes hardware and software systems that are designed to minimize system downtime — planned or unplanned. The Disk Array FC60 qualifies as high-availability hardware, achieving 99.99% availability.
modules. This provides a storage capacity range from 36 Gbytes to over 3 Tbytes of usable storage. LED Status Monitoring Both the controller enclosure and disk enclosure monitor the status of their internal components and operations. At least one LED is provided for each swappable module. If an error is detected in any module, the error is displayed on the appropriate module’s LED. This allows failed modules to be quickly identified and replaced.
Product Description Disk Enclosure Components The SureStore E Disk System SC10, or disk enclosure, is a high availability Ultra2 SCSI storage product. It provides an LVD SCSI connection to the controller enclosure and ten slots on a single-ended backplane for high-speed, high-capacity LVD SCSI disks. Six disk enclosures fully populated with 9.1 Gbtye disks provide 0.54 Tbytes of storage in a 2-meter System/E rack. When fully populated with 73.4 Gbyte disks, the array provides over 3 Tbytes of storage.
BCC Modules Power Supply Modules Fan Modules Disk Modules Chassis (and Backplane) (Front Door Not Shown) Figure 2 24 Disk Enclosure Components, Exploded View Disk Enclosure Components
Product Description Operation Features The disk enclosure is designed to be installed in a standard 19-inch rack and occupies 3.5 EIA units (high). Disk drives mount in the front of the enclosure. Also located in the front of the enclosure are a power switch and status LEDs. A lockable front door shields RFI and restricts access to the disk drives and power button (Figure 3 on page 26). BCCs are installed in the back of the enclosure along with redundant power supplies and fans.
A B C D E F G H I J K Figure 3 system LEDs power button disk module disk module LEDs door lock ESD plug mounting ear power supply BCCs fans component LEDs Disk Enclosure Front and Back View Power Switch The power switch (B in Figure 3) interrupts power from the power supplies to the disk enclosure components. Power to the power supplies is controlled by the power cords and the AC source.
Product Description Disk Enclosure SC10 Modules The disk enclosure hot-swappable modules include the following: • • • Disks and fillers Fans Power supplies Disks and Fillers Hot-swappable disk modules make it easy to add or replace disks. Fillers are required in all unused slots to maintain proper airflow within the enclosure. Figure 4 illustrates the 3.5-inch disks in a metal carrier. The open carrier design allows ten half height (1.
A B C D Figure 4 bezel handle cam latch carrier frame standoffs E circuit board F insertion guide G capacity label Disk Module Disks fit snugly in their slots. The cam latch (B in Figure 4) is used to seat and unseat the connectors on the backplane. A label (G) on the disk provides the following information: • • • Disk mechanism height: 1.6 inch (half height) or 1 inch (low profile) Rotational speed: 10K RPM and 15K RPM (18 Gbyte only) Capacity: 9.1 Gbyte, 18.2 Gbyte, 36.4 Gbyte, or 73.
Product Description BCCs Two Backplane Controller Cards, BCCs, control the disks on one or two buses according to the setting of the Full Bus switch. When the Full Bus switch is set to on, BCC A, in the top slot, accesses the disks in all ten slots. When the Full Bus switch is off, BCC A accesses disks in the even-numbered slots and BCC B accesses disks in the odd-numbered slots. Note A B C D In full bus mode, all ten disks can be accessed through either BCC.
Each BCC provides two LVD SCSI ports (B in Figure 5) for connection to the controller enclosure. The EEPROM on each BCC stores 2 Kbytes of configuration information and user-defined data, including the manufacturer serial number, World Wide Name, and product number. The following are additional features of the BCC: • LEDs (C in Figure 5) show the status of the BCC and the bus.
Product Description Fans Redundant, hot-swappable fans provide cooling for all enclosure components. Each fan has two internal high-speed blowers (A in Figure 6), an LED (B), a pull tab (C), and two locking screws (D). A B C D internal blowers LED pull tab locking screws Figure 6 Fan Internal circuitry senses blower motion and triggers a fault when the speed of either blower falls below a critical level. If a fan failure occurs, the amber fault LED will go on.
Power Supplies Redundant, hot-swappable 450-watt power supplies convert wide-ranging AC voltage from an external main to stable DC output and deliver it to the backplane. Each power supply has two internal blowers, an AC receptacle (A in Figure 7), a cam handle (B) with locking screw, and an LED (C). Internal control prevents the rear DC connector from becoming energized when the power supply is removed from the disk enclosure. NOTE: LED position varies.
Product Description Power supplies share the load reciprocally; that is, each supply automatically increases its output to compensate for reduced output from the other. If one power supply fails, the other delivers the entire load. Internal circuitry triggers a fault when a power supply fan or other power supply part fails. If a power supply failure occurs, the amber fault LED will go on. An alert should also be generated by EMS Hardware Monitoring when a power supply failure occurs.
Array Controller Enclosure Components The array controller enclosure, like the disk enclosure, consists of several modules that can be easily replaced, plus several additional internal assemblies. See Figure 8. Together, these removable modules and internal assemblies make up the field replaceable units (FRUs). Many modules can be removed and replaced without disrupting disk array operation.
Product Description Power Supply Fan Module Power Supply Modules Controller Chassis Controller Fan Controller Module A Controller Module B BBU (Front Cover Not Shown) Figure 8 Controller Enclosure Exploded View During operation, controller enclosure status is indicated by five LEDs on the front left of the controller enclosure. Faults detected by the controller module cause the corresponding controller enclosure fault LED to go on.
Figure 9 36 Controller Enclosure Front View Array Controller Enclosure Components
Product Description Figure 10 Controller Enclosure Rear View Front Cover The controller enclosure has a removable front cover which contains slots for viewing the main operating LEDs. The cover also contains grills that aid air circulation. The controller modules, controller fan, and battery backup unit are located behind this cover. This cover must be removed to gain access to these modules, and also, to observe the controller status and BBU LEDs.
Controller Modules The controller enclosure contains one or two controller modules. See Figure 11. These modules provide the main data and status processing for the Disk Array FC60. The controller modules slide into two controller slots (A and B) and plug directly into the backplane. Two handles lock the modules in place.
Product Description Each controller module has ten LEDs. See Figure 12. One LED identifies the controller module’s power status. A second LED indicates when a fault is detected. The remaining eight LEDs provide detailed fault condition status. The most significant LED, the heartbeat, flashes approximately every two seconds beginning 15 seconds after power-on. "Troubleshooting" on page 359 contains additional information on controller LED operation.
Controller Memory Modules Each controller module contains SIMM and DIMM memory modules. Two 16-Mbyte SIMMs (32 Mbytes total) store controller program and other data required for operation. The standard controller module includes 256-Mbytes of cache DIMM, which is upgradeable to 512 Mbytes. The cache may be configured as either two 128-Mbyte DIMMs, or a single 256Mbyte DIMM. Cache memory serves as temporary data storage during read and write operations, improving I/O performance.
Product Description Figure 13 Controller Fan Module Array Controller Enclosure Components 41
Power Supply Modules Two separate power supplies provide electrical power to the internal components by converting incoming AC voltage to DC voltage. Both power supplies are housed in removable power supply modules that slide into two slots in the back of the controller and plug directly into the power interface board. See Figure 14. Figure 14 Power Supply Modules Each power supply uses a separate power cord. These two power cords are special ferrite bead cords (part no.
Product Description Each power supply is equipped with a power switch to disconnect power to the supply. Turning off both switches turns off power to the controller. This should not be performed unless I/O activity to the disk array has been stopped, and the write cache has been flushed as indicated by the Fast Write Cache LED being off. CAUTION The controller power switches should not be turned off unless all I/O activity to the disk array has been suspended from the host.
Figure 15 Power Supply Fan Module 44 Array Controller Enclosure Components
Product Description Battery Backup Unit The controller enclosure contains one removable battery backup unit (BBU) that houses two rechargeable internal batteries (A and B) and a battery charger board. The BBU plugs into the front of the controller enclosure where it provides backup power to the controller’s cache memory during a power outage. The BBU will supply power to the controllers for up to five days (120 hrs). All data stored in memory will be preserved as long as the BBU supplies power.
The BBU contains four LEDs that identify the condition of the battery. Internally, the BBU consists of two batteries or banks, identified as bank “A” and bank “B.” During normal operation both of the Full Charge LEDs (Full Charge-A and Full Charge-B) are on and the two amber Fault LEDs are off. If one or both of the Fault LEDs are on, refer to "Troubleshooting" on page 359 for information on solving the problem. The Full Charge LEDs flash while the BBU is charging.
Product Description Disk Array High Availability Features High availability systems are designed to provide uninterrupted operation should a hardware failure occur. Disk arrays contribute to high availability by ensuring that user data remains accessible even when a disk or other component within the Disk Array FC60 fails.
The disk array uses hardware mirroring, in which the disk array automatically synchronizes the two disk images, without user or operating system involvement. This is unlike the software mirroring, in which the host operating system software (for example, LVM) synchronizes the disk images. Disk mirroring is used by RAID 1 and RAID 0/1 LUNs. A RAID 1 LUN consists of exactly two disks: a primary disk and a mirror disk.
0 If this bit is now written as 1... Data + 1 Data + 1 Data + 1 Product Description Data Parity + 1 = 0 1 0 0 1 0 0 1 1 1 0 1 0 0 0 1 0 1 0 0 1 0 1 0 0 This bit will also be changed to a 1 so the total still equals 0. Figure 17 Calculating Data Parity Data Striping Data striping, which is used on RAID 0, 0/1, 3 and 5 LUNs, is the performance-enhancing technique of reading and writing data to uniformly sized segments on all disks in a LUN simultaneously.
using a 5-disk RAID 5 LUN, a stripe segment size of 32 blocks (16 KB) would ensure that an entire I/O would fit on a single stripe (16 KB on each of the four data disks). The total stripe size is the number of disks in a LUN multiplied by the stripe segment size. For example, if the stripe segment size is 32 blocks and the LUN comprises five disks, the stripe size is 32 X 5, or 160 blocks (81,920 bytes).
Product Description fails. RAID-0 provides enhanced performance through simultaneous I/Os to multiple disk modules. Software mirroring the RAID-0 group provides high availability. Figure 18 illustrates the distribution of user and parity data in a four-disk RAID 0 LUN. The the stripe segment size is 8 blocks, and the stripe size is 32 blocks (8 blocks times 4 disks).
individual disks. For highest data availability, each disk in the mirrored pair must be located in a different enclosure. When a data disk or disk mirror in a RAID 1 LUN fails, the disk array automatically uses the remaining disk for data access. Until the failed disk is replaced (or a rebuild on a global hot spare is completed), the LUN operates in degraded mode. While in degraded mode the LUN is susceptible to the failure of the second disk.
Product Description pair. For highest data availability, each disk in the mirrored pair must be located in a different enclosure. When a disk fails, the disk array automatically uses the remaining disk of the mirrored pair for data access. A RAID 0/1 LUN can survive the failure of multiple disks, as long as one disk in each mirrored pair remains accessible. Until the failed disk is replaced (or a rebuild on a global hot spare is completed), the LUN operates in degraded mode.
more disks. For highest availability, the disks in a RAID 3 LUN must be in different enclosures. If a disk fails or becomes inaccessible, the disk array can dynamically reconstruct all user data from the data and parity information on the remaining disks. When a failed disk is replaced, the disk array automatically rebuilds the contents of the failed disk on the new disk. The rebuilt LUN contains an exact replica of the information it would have contained had the disk not failed.
Product Description RAID 3 works well for single-task applications using large block I/Os. It is not a good choice for transaction processing systems because the dedicated parity drive is a performance bottleneck. Whenever data is written to a data disk, a write must also be performed to the parity drive. On write operations, the parity disk can be written to four times as often as any other disk module in the group.
Figure 22 RAID 5 LUN With its individual access characteristics, RAID 5 provides high read throughput for small block-size requests (2 KB to 8 KB) by allowing simultaneous read operations from each disk in the LUN. During a write I/O, the disk array must perform four individual operations, which affects the write performance of a RAID 5 LUN. For each write, the disk array must perform the following steps: 1. Read the existing user data from the disks. 2. Read the corresponding parity information. 3.
Product Description RAID Level Comparisons To help you decide which RAID level to select for a LUN, the following tables compare the characteristics for the supported RAID levels. Where appropriate, the relative strengths and weakness of each RAID level are noted. Note Table 1 RAID 3 is supported on Windows NT and Windows 2000 only. RAID Level Comparison: Data Redundancy Characteristics Handle multiple disk failures? RAID Level Disk Striping Mirroring Parity RAID 0 Yes No No No.
Table 2 RAID Level Comparison: Storage Efficiency Characteristics RAID Level Storage Efficiency RAID 0 100%. All disk space is use for data storage. RAID 1 and 0/1 50%. All data is duplicated, requiring twice the disk storage for a given amount of data capacity. RAID 3 and 5 One disk’s worth of capacity from each LUN is required to store parity data. As the number of disks in the LUN increases, so does the storage efficiency.
Product Description Table 4 RAID Level Comparison: General Performance Characteristics RAID Level General Performance Characteristics RAID 0 – Simultaneous access to multiple disks increases I/O performance. In general, the greater the number of mirrored pairs, the greater the increase in performance. RAID 1 – A RAID 1 mirrored pair requires one I/O operation for a read and two I/O operations for a write, one to each disk in the pair.
Table 5 RAID Level Comparison: Application and I/O Pattern Performance Characteristics RAID level Application and I/O Pattern Performance RAID 0 RAID 0 is a good choice in the following situations: – Data protection is not critical. RAID 0 provides no data redundancy for protection against disk failure. – Useful for scratch files or other temporary data whose loss will not seriously impact system operation. – High performance is important.
Product Description Global Hot Spare Disks A global hot spare disk is reserved for use as a replacement disk if a data disk fails. Their role is to provide hardware redundancy for the disks in the array. To achieve the highest level of availability, it is recommended that one global hot spare disk be created for each channel. A global hot spare can be used to replace any failed data disk within the array regardless of what channel it is on.
Settings that give a higher priority to the rebuild process will cause the rebuild to complete sooner, but at the expense of I/O performance. Lower rebuild priority settings favors host I/Os, which will maintain I/O performance but delay the completion of the rebuild. The rebuild priority settings selected reflect the importance of performance versus data availability. The LUN being rebuilt is vulnerable to another disk failure while the rebuild is in progress.
Product Description Data and parity from the remaining disks are used to rebuild the contents of disk 3 on the hot spare disk. The information on the hot spare is copied to the replaced disk, and the hot spare is again available to protect against another disk failure.
Primary and Alternate I/O Paths There are two I/O paths to each LUN on the disk array - one through controller A and one through controller B. Logical Volume Manager (LVM) is used to establish the primary path and the alternate path to a LUN. The primary path becomes the path for all host I/Os to that LUN. If a failure occurs in the primary path, LVM automatically switches to the alternate path to access the LUN.
Product Description Capacity Management Features The disk array uses a number of features to manage its disk capacity efficiently. The use of LUNs allow you to divide the total disk capacity into smaller, more flexible partitions. Caching improves disk array performance by using controller RAM to temporarily store data during I/Os. Note Differences in HP-UX and Windows NT/2000 Capacity Management Capacity management on Windows NT and Windows 2000 offers some unique features.
• Hot spare group – All disks assigned the role of global hot spare become members of this group. Up to six disks (one for each channel) can be assigned as global hot spares. • Unassigned group – Any disk that is neither part of a LUN nor a global hot spare is considered unassigned and becomes a member of this group. Unassigned disks can be used to create a LUN or can be used as global hot spares. Unassigned disks do not contribute to the capacity of the disk array.
Product Description controller with 256 Mbytes of cache will use half of the memory to mirror the other controller, leaving only 128 Mbytes for its own cache. The write cache contents cannot be flushed when both controllers are removed from the disk array simultaneously. In this case the write cache image is lost and data integrity on the disk array is compromised. To avoid this problem, never remove both controllers from the disk array simultaneously.
Capacity Management Features
2 TOPOLOGY AND ARRAY PLANNING Topology and Array Planning Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70 Array Design Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71 Recommended Disk Array Configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77 Topologies for HP-UX. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Overview This chapter provides information to assist you in configuring the Disk Array FC60 to meet your specific storage needs. Factors to be considered when configuring the disk array include high availability requirements, performance, storage capacity, and future expandability. This chapter discusses configuration features of the Disk Array FC60 as it relates to these requirements. In addition it provides information on system topologies following the array configuration.
Array Design Considerations • • • • Topology and Array Planning The Disk Array FC60 provides the versatility to meet varying application storage needs. To meet a specific application need, the array should be configured to optimize the features most important for the application.
enclosures can be added incrementally (up to six) as storage requirements grow. Multiple SCSI channels also increase data throughput. This increased data throughput occurs as a result of the controller’s ability to transfer data simultaneously over multiple data paths (channels). The more channels used, the faster the data throughput. Disk Enclosure Bus Configuration The disk enclosure can connect to either one or two SCSI channels, depending on its bus configuration.
the array for high availability, there must be no single points of failure. This means that the configuration must have at least these minimum characteristics: Two controllers connected to separate Fibre Channel loops (using separate Fibre Channel host I/O adaptors) • Two disk enclosures (minimum) • Eight disk modules, four in each disk enclosure (minimum) • LUNs that use only one disk per disk enclosure.
of the buses must be configured with at least four disk modules (eight disk modules per disk enclosure). This configuration also offers full sequential performance and is more economical to implement. To scale up sequential transfer performance from the host, configure additional disk arrays. This will increase the total I/O bandwidth available to the server. Performance can also be measured by the number of I/O operations per second a system can perform.
Storage Capacity Topology and Array Planning For configurations where maximum storage capacity at minimum cost is a requirement, consider configuring the disk array in RAID 5 (using the maximum number of data drives per parity drives) and only supplying one or two hot spare drives per disk array. Also, purchase the lowest cost/Mbyte drive available (typically the largest capacity drives available at the time of purchase). This configuration allows the maximum volume of storage at the lowest cost.
another, two or one disk enclosures, respectively, can be added by using split-bus mode. However, if you are adding up to four, five, or six enclosure, the enclosures configuration will need to be switched from split-bus to full-bus (refer to “Disk Enclosure Bus Configuration” section, earlier in this chapter, for additional information). Note Typically adding only one enclosure does not provide any options for creating LUNs.
Recommended Disk Array Configurations Topology and Array Planning This section presents recommended configurations for disk arrays using one to six disk enclosures. Configurations are provided for achieving high availability/high performance, and maximum capacity. The configuration recommended by Hewlett-Packard is the high availability/ high performance configuration, which is used for factory assembled disk arrays (A5277AZ).
• Global hot spares - although none of the configurations use global hot spares, their use is recommended to achieve maximum protection against disk failure. For more information, see "Global Hot Spare Disks" on page 61. • Split bus operation - With three or fewer disk enclosures, increased performance can be achieved by operating the disk enclosures in split bus mode, which increases the number of SCSI busses available for data transfer.
• Data Availability – Not recommended for maximum high availability. – Handles a single disk failure, single BCC failure, a single channel failure, or a single controller failure Topology and Array Planning – Expansion requires powering down the disk array, removing terminators and/or cables from the enclosures, and cabling additional disk enclosures.
Two Disk Enclosure Configurations High Availability/ High Performance • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Two disk enclosures with ten 73 GByte disk modules (20 disks total) – Disk enclosures configured for split-bus mode (two SCSI channels per enclosure) • LUN Configuration – Ten RAID 1 LUNs, each comprising two disks (1+1) – Each disk in a LUN is in a separate enclosure • High Availability – Handles a single disk failure, BCC failu
Topology and Array Planning Figure 25 Two Disk Enclosure High Availability/ High Performance Configuration Recommended Disk Array Configurations 81
Maximum Capacity Note • This configuration is not recommended for environments where high availability is critical. To achieve high availability each disk in a LUN should be in a different disk enclosure. This configuration does not achieve that level of protection.
Topology and Array Planning Figure 26 Two Disk Enclosure Maximum Capacity Configuration Recommended Disk Array Configurations 83
Three Disk Enclosure Configurations High Availability/ High Performance • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Three disk enclosures with ten 73 GByte disks each (30 disks total) – Disk enclosures configured for split-bus mode (two SCSI channels per enclosure) • LUN Configuration – 15 RAID 1 LUNs, each comprising two disks (1+1) – Each disk in a LUN is in a separate enclosure • High Availability – Handles a single disk failure, a single
Topology and Array Planning Figure 27 Three Disk Enclosure High Availability/ High Performance Configuration Recommended Disk Array Configurations 85
Maximum Capacity • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Three disk enclosures with ten 73 GByte disks each (30 disks total) – Disk enclosures configured for split-bus mode (two SCSI channels per enclosure) • LUN Configuration – Ten RAID 5 LUNs, each comprising three disks (2 data + 1 parity). – Each disk in a LUN is in a separate enclosure.
Topology and Array Planning Figure 28 Three Disk Enclosure Maximum Capacity Configuration Recommended Disk Array Configurations 87
Four Disk Enclosure Configurations High Availability/High Performance • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Four disk enclosures with ten 73 GByte disks each (40 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) • LUN Configuration – Ten RAID 0/1 LUNs, each comprising four disks (2+2) – Each disk in a LUN is in a separate enclosure.
Topology and Array Planning Figure 29 Four Disk Enclosure High Availability/High Performance Configuration Recommended Disk Array Configurations 89
Maximum Capacity • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Four disk enclosures with ten 73 GByte disks each (40 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) • LUN Configuration – Ten RAID 5 LUNs, each comprising four disks (3 data + 1 parity) – Each disk in a LUN is in a separate enclosure.
Topology and Array Planning Figure 30 Four Disk Enclosure Maximum Capacity Configuration Recommended Disk Array Configurations 91
Five Disk Enclosure Configurations High Availability/High Performance • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Five disk enclosures with ten 73 GByte disks each (50 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) • LUN Configuration – Ten RAID 0/1 LUNs, each comprising four disks (2+2) – Five RAID 1 LUNs, each comprising two disks (1+1) – Each disk in a LUN is in a separate enclosure.
Topology and Array Planning Figure 31 Five Disk Enclosure High Availability/High Performance Configuration Recommended Disk Array Configurations 93
Maximum Capacity • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Five disk enclosures with ten 73 GByte disks each (50 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) • LUN Configuration – Ten RAID 5 LUNs, each comprising five disks (4 data + 1 parity) – Each disk in a LUN is in a separate enclosure.
Topology and Array Planning Figure 32 Five Disk Enclosure Maximum Capacity Configuration Recommended Disk Array Configurations 95
Six Disk Enclosure Configurations High Availability/High Performance • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Six disk enclosures with ten 73 GByte disks each (60 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) • LUN Configuration – Ten RAID 0/1 LUNs, each comprising six disks (3+3) – Each disk in a LUN is in a separate enclosure • High Availability – Handles a single disk failure, single BCC f
Topology and Array Planning Figure 33 Six Disk Enclosure High Availability/High Performance Configuration Recommended Disk Array Configurations 97
Maximum Capacity • Hardware Configuration – Two disk array controllers connected directly to host Fibre Channel adapters – Six disk enclosures with ten 73 GByte disks each (60 disks total) – Disk enclosures configured for full-bus mode (one SCSI channel per enclosure) • LUN Configuration – Ten RAID 5 LUNs, each comprising six disks (5 data + 1 parity) – Each disk in a LUN is in a separate enclosure • High Availability – Handles a single disk failure, single disk enclosure/BCC failure, single channel fa
Topology and Array Planning Figure 34 Six Disk Enclosure High Maximum Capacity Configuration Recommended Disk Array Configurations 99
Total Disk Array Capacity The total capacity provided by the disk array depends on the number and capacity of disks installed in the array, and the RAID levels used. RAID levels are selected to optimize performance or capacity. Table 6 lists the total capacities available when using fully loaded disk enclosures configured for optimum performance. Table 7 lists the same for optimum capacity configurations. The capacities listed reflect the maximum capacity of the LUN.
For high-availability, one disk per SCSI channel is used as a global hot spare. Table 6 Capacities for Optimized Performance Configurations Number of disk enclosures RAID Level No. of LUNs Disks per LUN 2 (split bus) 1 8 2 72.8 GB 145.6 GB 291.2 GB 584 GB 9.1 GB 18.2 GB 36.4 GB 73 GB 3 (split bus) 1 12 2 109.1 GB 218.4 GB 436.8 GB 876 GB 4 (full bus) 0/1 9 4 (2+2) 163.8 GB 327.6 GB 655.2 GB 1314 GB 6 (full bus) 0/1 9 6 (3+3) 245.7 GB 491.4 GB 982.
Topologies for HP-UX The topology of a network or a Fibre Channel Arbitrated Loop (Fibre Channel-AL) is the physical layout of the interconnected devices; that is, a map of the connections between the physical devices. The topology of a Fibre Channel-AL is extremely flexible because of the variety and number of devices, or nodes, that can be connected to the Fibre ChannelAL. A node can be a host system or server, a 10-port HP Fibre Channel-AL hub, or the controller modules in a disk array.
Basic Topology The basic topology covers a number of physical implementations of host systems and disk arrays.
For high availability the hosts and disk arrays can be connected in any of the following ways, with each connection of adapter and controller module creating a separate 2-node Fibre Channel-AL: • One disk array with two controller modules with each controller module connected to a separate adapter in a single host • In K-Class and T-Class systems, two disk arrays with two controller modules per array, connected to a single host with four Fibre Channel adapters.
Topology and Array Planning Figure 35 Basic Topology, High Availability Version: Host with Two Fibre Channel I/O Adapters Figure 36 shows the high availability version of the basic topology implemented on a either a K -Class or T-Class host with four Fibre Channel I/O adapters. Two of the Fibre Channel adapters are connected to one dual-controller module disk array while the other two Fibre Channel adapters are connected to a second dual-controller module disk array.
Figure 36 Basic Topology, High Availability Version: Host with Four Fibre Channel I/O Adapters The non-high availability version of this topology connects a host or server to one or more single-controller module disk arrays. This version provides no hardware redundancy and does not protect against single points of controller module, Fibre Channel cable, host Fibre Channel I/O adapter, or internal Ultra2 SCSI bus failure.
controller modules in four disk arrays. Each connection between adapter and controller module creates a separate Fibre Channel-AL.
Table 8 Basic Topology Error Recovery Failing component Continue after failure Disk module Yes What happens and how to recover Applications continue to run on all supported RAID levels (RAID1, 0/1, and 5). The system administrator or service provider hotswaps the failed disk module. Controller No module in singlecontroller module array (no high availability) The disk array fails.
Table 8 Basic Topology Error Recovery (cont’d) Continue after failure Fibre Channel cable No on path to failed cable; Yes if array has dual controller modules and alternate paths have been configured What happens and how to recover I/O operations fail along the path through the failed cable. If the host has two Fibre Channel adapters connected to a dualcontroller module disk array, the array can be accessed through the operational path if primary and alternate paths have been configured in LVM.
Single-System Distance Topology Each instance of the single-system distance topology generally uses the following hardware components: • One host system or server • Two Fibre Channel I/O adapters in each server • One 10-port HP Fibre Channel-AL Hub • One to four dual-controller module disk arrays (for high availability) or One to eight single-controller module disk arrays (for non-high availability) • Maximum 500 m fibre optic cable distance on each connection between the host and the HP Fibre Channel-AL Hu
chapter for part numbers). Fibre optic cables longer than 100 m must be custom-fabricated for each implementation. Topology and Array Planning Like the basic topology, both high availability versions (two controller modules per disk array) and non-high availability versions (one controller module per disk array) of this topology can be implemented. For high availability implementations, up to four disk arrays with two controller modules per disk array can be connected to an HP Fibre Channel-AL Hub.
Figure 38 illustrates the single-system distance topology with one host with two Fibre Channel I/O adapters and three dual-controller module disk arrays. In this example two of the HP Fibre Channel-AL Hub’s ten ports are unused.
Table 9 Single-System Distance Topology Error Recovery Continue after failure Disk module Yes What happens and how to recover Topology and Array Planning Failing component Applications continue to run on all supported RAID levels (RAID1, 0/1, and 5). The system administrator or service provider hotswaps the failed disk module. Controller No module in singlecontroller module array (no high availability) The disk array fails.
Table 9 Single-System Distance Topology Error Recovery (cont’d) Failing component Continue after failure Fibre Channel I/O adapter No on path to failed adapter; Yes if array has dual controller modules and alternate paths have been configured I/O operations fail along the path through the failed adapter.
High Availability Topology Topology and Array Planning The high availability topology increases the availability of the single system distance topology by protecting against single points of HP Fibre Channel-AL Hub failure with the use of redundant HP Fibre Channel-AL Hubs connected to each other. Adding a second HP Fibre Channel-AL Hub also increases the number of hosts and disk arrays that can be connected to a single Fibre Channel-AL.
Because each HP Fibre Channel-AL Hub has ten ports, either two host adapters and eight controller modules or four host adapters and six controller modules can attach to each HP Fibre Channel-AL Hub. If any hardware component (controller module, host Fibre Channel I/O adapter, HP Fibre Channel-AL Hub, or fibre optic cable) fails in one Fibre Channel-AL, the I/O communication between hosts and disk arrays can continue through the other Fibre Channel-AL.
Topology and Array Planning Figure 39 High Availability Topology Topologies for HP-UX 117
Table 10 High Availability Topology Error Recovery Failing component Continue after failure Disk module Yes Applications continue to run on all supported RAID levels (RAID-1, 0/1, and 5). The system administrator or service provider hotswaps the failed disk module.
Table 10 High Availability Topology Error Recovery (cont’d) Continue after failure Fibre Channel cable No on path to failed cable; Yes if array has dual controller modules and alternate paths have been configured What happens and how to recover I/O operations fail along the path through the failed cable.
High Availability, Distance, and Capacity Topology The high availability, distance, and capacity topology expands on the high availability topology by using cascaded HP Fibre Channel-AL Hubs to increase the distance of each Fibre Channel-AL and the number of devices that can be interconnected on the Fibre Channel-AL. Cascaded HP Fibre Channel-AL Hubs are two HP Fibre Channel-AL Hubs connected together.
Topology and Array Planning Supported cable lengths for each segment of the Fibre Channel-AL include 2 m, 16 m, 50 m, 100 m, and 500 m. The maximum combined cable lengths for all segments, that is, the total length of the Fibre Channel-AL should not exceed 5000 m because performance can degrade due to propagation delay. Because of this it is recommended that the total cable length of the Fibre Channel-AL be as short as possible.
Figure 40 High Availability, Distance, and Capacity Topology 122 Topologies for HP-UX
Table 11 High Availability, Distance, and Capacity Topology Error Recovery Failing component Continue after failure Disk module Yes Applications continue to run on all supported RAID levels (RAID1, 0/1, and 5). The system administrator or service provider hotswaps the failed disk module.
Table 11 High Availability, Distance, and Capacity Topology Error Recovery (cont’d) Failing component Continue after failure Fibre Channel No on path to cable failed cable; Yes if array has dual controller modules and alternate paths have been configured 124 Topologies for HP-UX What happens and how to recover I/O operations fail along the path through the failed cable.
Campus Topology Topology and Array Planning The campus topology uses the same hardware components as the high availability, distance, and capacity topology.
Figure 41 Campus Topology 126 Topologies for HP-UX
Table 12 Campus Topology Error Recovery Continue after failure Disk module Yes Applications continue to run on all supported RAID levels (RAID-1, 0/1, and 5). The system administrator or service provider hot-swaps the failed disk module.
Failing component Continue after failure Fibre Channel No on path to cable failed cable; Yes if array has dual controller modules and alternate paths have been configured 128 Topologies for HP-UX What happens and how to recover I/O operations fail along the path through the failed cable. If the host has two Fibre Channel adapters connected to a dual-controller module disk array, the array can be accessed through the operational path if primary and alternate paths have been configured in LVM.
Performance Topology with Switches Note Topology and Array Planning Previous topologies use Fibre Channel HUBs for interconnecting the arrays with the hosts. In these topologies there is basically one loop with all devices connected to it. The disk array FC60 can be connected to switches. Connecting the array using switches provides for increased performance. Two switch topologies are provided in Figure 42 and Figure 43. Hubs and switches should not be mixed in these array topologies.
Figure 43 Four Hosts Connected to Cascaded Switches 130 Topologies for HP-UX
Topologies for Windows NT and Windows 2000 Topology and Array Planning The topology of a network or a Fibre Channel Arbitrated Loop (Fibre Channel-AL) is the physical layout of the interconnected devices; that is, a map of the connections between the physical devices. The topology of a Fibre Channel-AL is extremely flexible because of the variety and number of devices, or nodes, that can be connected to the Fibre ChannelAL.
Unsupported Windows Topology Because this topology provides four paths from the host to each disk array, it is not supported. Any topology that provides more than two paths from a host to the disk array is not supported.
Non-High Availability Topologies Topology and Array Planning Figure 45 through Figure 47 illustrate non-high availability topologies. These topologies do not achieve the highest level of data availability because they have a hardware component that represents a single point of failure. That is, if the critical component fails, access to the data on the disk array will be interrupted. These topologies are simpler and less expensive to implement than true high availability topologies.
Figure 45 Four Host/Single Hub/ Single Disk Array Non-HA Topology 134 Topologies for Windows NT and Windows 2000
Topology and Array Planning Figure 46 Four Host/Cascaded Hubs/ Dual Disk Array Non-HA Topology Topologies for Windows NT and Windows 2000 135
Figure 47 Four Host/Single Switch/ Dual Disk Array Non-HA Topology 136 Topologies for Windows NT and Windows 2000
High Availability Topologies Topology and Array Planning Figure 48 through Figure 51 illustrate high availability topologies. These topologies achieve the highest level of availability because they have fully redundant hardware data paths to each disk array. There are no single points of failure in these topologies. These topologies are more complex and expensive to implement than non-high availability topologies.
Figure 48 Direct Connect Single Host/Single Disk Array HA Topology 138 Topologies for Windows NT and Windows 2000
Topology and Array Planning Figure 49 Dual Host/Dual Hub/Four Disk Array HA Topology Topologies for Windows NT and Windows 2000 139
Figure 50 Four Host/Dual Hub/Dual Disk Array HA Topology 140 Topologies for Windows NT and Windows 2000
Topology and Array Planning Figure 51 Four Host/Dual Cascaded-Hubs/Four Disk Array HA Topology Topologies for Windows NT and Windows 2000 141
Topologies for Windows NT and Windows 2000
3 INSTALLATION Host System Requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145 Site Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147 Power Distribution Units (PDU/PDRU) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150 Installing the Disk Array FC60 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Overview This chapter explains how to install the Disk Array FC60 enclosures into a cabinet and how to configure and connect the controller enclosure to the disk enclosures. It also covers the Fibre Cable connection to the host. Finally this chapter provides power up instructions and initial software installation requirements for operation of the disk array. Before installing the Disk Array FC60, the topology and array configuration should be established.
Host System Requirements HP-UX The Disk Array FC60 is supported on the following host configurations: • Supported host platforms: Table 13 lists the supported host platforms. The table also identifies which platforms support the disk array as a boot device when running HP-UX 11.x • Supported HP-UX versions: 11.0, 11.11, and 10.20 Note Installation • The Disk Array FC60 is supported on HP-UX 11.11 with array firmware release HP03 only.
Table 13 Supported Host Platform Information (cont’d) Supported Host Boot Support on HP-UX 11.x? Fibre Channel I/O Adapter C-class No A5158A on HP-UX 11.x A3740A on HP-UX 10.20 A4xx-A5xx class Yes A5158A Fibre Channel I/O Adapters The host must have the correct adapter installed. The supported host adapters are listed in Table 13. Fibre Channel drivers are provided with the operating system.
Site Requirements Environmental Requirements The area around the array must be cooled sufficiently so it does not overheat. Chapter 8, Reference and Regulatory, contains environmental specifications for the Disk Array FC60. Refer to that section for the required environmental specifications. Electrical Requirements Installation The site must be able to provide sufficient power to meet the needs of the devices in the cabinet(s).
Table 14 Total Operating and In-Rush Currents Operating Current @ 110v Operating Current @ 220v In-Rush* Current Power Cords Controller w/ 6 Disk Enclosures 41.3A 20.4A 124A 14 Controller w/ 5 Disk Enclosures 34.8A 17.2A 104A 12 Controller w/ 4 Disk Enclosures 28.3A 14.0A 84A 10 Controller w/ 3 Disk Enclosures 21.8A 10.8A 64A 8 Controller w/ 2 Disk Enclosures 15.3A 7.6 44A 6 Controller w/ 1 Disk Enclosure 8.8 4.
Table 16 Disk Enclosure Electrical Requirements Measurement Value Voltage – Range – Frequency 220-240V 50-60Hz Current – Typical Maximum Operating – 100 - 120 V – 200 - 240 V – Maximum In-rush * 100 - 127V 2.9 - 3.2A 2.6 - 3.2A 5.3 - 6.
Power Distribution Units (PDU/PDRU) PDUs provide a sufficient number of receptacles for a large number of devices installed in a cabinet. A PDU connects to the site power source and distributes power to its ten receptacles. The disk array power cords are connected to the PDUs and each PDU is connected to a separate UPS. For high availability, two PDUs should be installed and connected to separate site power sources. PDUs come in a variety of heights, from five feet down to 19 inches.
The following tables show recommended PDU/PDRU combinations for one or more components in a rack. Data assumes 220V AC nominal power and redundant PDU/PDRUs. For nonredundant configurations, divide the number of recommended PDU/PDRUs by 2. Table 18 Recommended PDU/PDRUs for HP Legacy Racks Number of Components 1.1-meter (21 U) Rack 1–4 Two 3-foot/16-amp PDUs* Two 5-foot/16-amp PDUs* or or 1.6-meter (32 U) Rack 2.
Installing PDUs Choose PDU/PDRU locations with the following guidelines in mind: • Place PDU/PDRUs within the reach of power cords. • Place PDUs vertically whenever possible. See sample installations in Figure 52 and Figure 53. Installing PDUs horizontally can interfere with accessibility to units behind the PDU. (PDRUs must be installed vertically, as per the installation instructions.
Installation PDU 16 Amp or PDRU 30 Amp PDU 16 Amp or PDRU 30 Amp Figure 52 PDU Placement in 1.
PDU (16 Amp) or PDRU (30 Amp) Figure 53 PDRU Placement in 2.
Installing the Disk Array FC60 Note The A5277AZ factory assembled disk array is fully configured and requires only connection to the host. Proceed to "Connecting the Fibre Channel Cables" on page 196 to complete the installation. Do not remove any of the factory installed disk enclosures before powering on the disk array, or the disk array will not function properly.
Table 21 EIA Spacing for Racks and Array Enclosures Component Legacy Cabinets Measure (EIA Units) (1 EIA Unit = 1.75”) 1.1 Meter Cabinet 21 EIA Units, Total Available 1.6 Meter Cabinet 32 EIA Units, Total Available 2.0 Meter Cabinet 41 EIA Units, Total Available Controller Enclosure FC60 Disk Enclosure SC10 5 EIA Units Used (includes 1/2 rail space below and remaining 1/2 EIA unit above enclosure) 4 EIA Units Used (3.5 disk enclosure plus 1/2 rail space) System/E Racks (1 EIA Unit = 1.75”) 1.
installation to utilize 1/2 EIA units available from the disk system SC10’s 3.5 EIA unit height. Figure 55 shows rack locations for installation of six disk enclosures and one controller enclosure (positioned on top) for legacy racks. When disk enclosures are installed in legacy racks, an unusable 1/2-EIA space is left at the bottom of the enclosures. This space must be filled with a 1/2-EIA unit filler for each enclosure installed.
Figure 54 Enclosure EIA Positions for System/E Racks
Figure 55 Enclosure EIA Positions for Legacy Cabinets
Installing the Disk Enclosures

Disk enclosures should be installed in the rack starting at the bottom and proceeding upward. When all disk enclosures are installed, the controller enclosure is installed at the top, directly above the uppermost disk enclosure. Installation instructions for the disk enclosure SC10 are provided below; installation instructions for the controller enclosure FC60 follow this section.
Figure 56 Disk Enclosure Contents
Step 3: Install Mounting Rails

Select the rail kit for the appropriate rack and follow the instructions included with the rail kit to install the rails in the rack. The following rail kits are available for use with the disk enclosure:
• HP A5250A for legacy HP racks (C2785A, C2786A, C2787A, A1896A, or A1897A)
• HP A5251A for HP Rack System/E
• HP 5656A for Rittal 9000 racks

Step 4: Install the Disk Enclosure

CAUTION Do not try to lift the disk enclosure using the handles on the power supplies.

1.
Figure 57 Mounting the Disk Enclosure (Rack System/E shown): A - Front Mounting Ears; B - Chassis; C - Rail; D - Rail Clamp
CAUTION To protect the door, do not lift or move the disk enclosure with the door open.

3. Unlock and open the disk enclosure door, using a thin flat-blade screwdriver to turn the lock (Figure 58).

Figure 58 Door Lock

4. Ensure that one hole in each mounting ear (A in Figure 57) aligns with the sheet metal nuts previously installed on the rack front columns.
5. Insert two screws (A in Figure 57) through the matching holes in the disk enclosure mounting ears and rack front columns. Tighten screws.
6.
7. If using an HP rack, fasten the back of the disk enclosure to the rails using the rear hold-down clamps from the rail kit. a. If you are installing the disk enclosure in an HP legacy rack, set the clamp (A in Figure 59) on top of the rail (B) so that the tabs point up and the screw holes are on the slotted side of the rail. Skip to step c. b. If you are installing the disk enclosure in an HP Rack System/E, set the clamp (D in Figure 57) c. Push the clamp tight against the back of the disk enclosure.
Step 5: Install Disks and Fillers CAUTION Touching exposed areas on the disk can cause electrical discharge and disable the disk. Be sure you are grounded and be careful not to touch exposed circuits. Disks are fragile and ESD sensitive. Dropping one end of the disk two inches is enough to cause permanent damage. Static electricity can destroy the magnetic properties of recording surfaces. Grip disks only by their handles (B in Figure 60) and carriers, and follow strict ESD procedures.
1. Open the disk enclosure door.
2. Put on the ESD strap (provided with the accessories) and insert the end into the ESD plug-in (D in Figure 60) near the upper left corner of the disk enclosure.

CAUTION Disks are fragile. Handle them carefully.

3. Remove the bagged disk from the disk pack.

CAUTION Do not touch the exposed circuit board side of the disk module.

4. Remove the disk from the ESD bag, grasping the disk by its handle (B).
5.
6. Open the cam latch (C in Figure 60) by pulling the tab toward you.
7. Align the disk insertion guide (F) with a slot guide (G) and insert the disk into the slot.

Note Typically, install disk modules on the left side of the enclosure and fillers on the right. Installing disks left to right allows you to insert the disk completely without releasing your grip on the handle.

8. Push the disk all the way into the chassis, letting the internal guides control the angle.
9.
Note What if LUN 0 is on disks in the enclosure? If any of the disks in the enclosure are part of LUN 0, you will not be able to unbind the LUN before moving the disks. Instead, you must replace LUN 0 and exclude any of the disks in the enclosures from LUN 0. 4. Power down the disk array, both the array controller enclosure and the disk enclosures. 5. Remove the disk enclosures from the rack. To remove the enclosures, reverse the steps listed in "Installing the Disk Enclosures" on page 160. 6.
Installing the Controller

This procedure describes how to install the Disk Array FC60 controller enclosure into an HP legacy rack or an HP System/E Rack.

Step 1: Gather Required Tools
• Torx T25 screwdriver
• Torx T15 screwdriver
• Small flat-blade screwdriver

Step 2: Unpack the Product
1. Lift off the overcarton and verify the contents (see Table 23 and Figure 62).
Table 23 Controller Package Contents (figure labels refer to Figure 62)

Controller chassis with pre-installed modules
B - Filler panel, 1/2 EIA unit, 2 ea.
C - Rail kit (A5251A) for System/E racks
D - Rail kit (A5250A) for legacy racks, 1 ea.
E - SCSI cables, 2 meter (5064-2492) or 5 meter (5064-2470), length depends on option ordered:
    2 ea. / 1 disk enclosure      4 ea. / 4 disk enclosures
    4 ea. / 2 disk enclosures     5 ea. / 5 disk enclosures
    6 ea. / 3 disk enclosures     6 ea.
Figure 62 Controller Enclosure Package Contents
Step 3: Install Mounting Rails

Select the rail kit for the appropriate rack and follow the instructions included with the rail kit to install the rails in the rack. The following rail kits are available for use with the controller enclosure:
• HP A5250A for legacy HP racks (C2785A, C2786A, C2787A, A1896A, or A1897A)
• HP A5251A for HP Rack System/E
• HP 5656A for Rittal 9000 racks

Step 4: Install the Controller Enclosure

Note
1.
Figure 63 Mounting the Controller Enclosure
5. If installing in an HP rack, secure the back of the enclosure to the rails using the two rail clamps from the rail kit.

In legacy HP racks:
a. Align screw holes and insert the clamp tab into the slot in the upper surface of the rail.
b. Insert a screw through the hole in the clamp and the rail and tighten with a Torx T25 screwdriver.

In HP Rack System/E racks:
a. Set the clamp (E in Figure 63) inside the rail with the holes in the clamp along the slots in the rail.
b.
Configuration Switches

This section describes the configuration switches on the controller enclosure and the disk enclosures. The configuration switches must be set to support the configuration (full-bus or split-bus) being installed, as planned in Chapter 2, Topology and Array Planning.
Note One BCC is inverted with respect to the other. Thus, the settings on one BCC appear inverted and in reverse order from those on the other.
Table 24 Disk Enclosure Switches

Switch                          Setting   Operation
1 - Full-/Split-Bus Mode        1         Full-Bus Mode
                                0         Split-Bus Mode
2 - Stand-Alone/Array Mode      0         Always set to Off (Array Mode)
3 - Bus Reset on Power Fail     0         Must be set to 0
4 - High/Low Bus Addressing     0         Set to 0 (Low addressing)
5 - Not Used                    0         Not used; must be set to 0

Full-Bus/Split-Bus (Switch 1) Configuration

The disk enclosure's internal bus connects the disk drives together and to the BCCs.
a low range of IDs (0, 1, 2, 3, and 4) and a high range of IDs (8, 9, 10, 11, and 12). (BCCs are also provided addresses as shown in Table 25). Note that the SCSI IDs do not correspond to the physical slot number. The assignment of the SCSI IDs differs depending on whether the enclosure is operating in full-bus or split-bus mode. When full-bus mode is selected, the low ID range (0 - 4) is assigned to the even disk slots, and the high range (8 - 12) is assigned to the odd slots. See Table 25.
controller module A (Fibre Channel connector J3) and Host ID BD2 SW2 selects the address for controller module B (Fibre Channel connector J4). Each Fibre Channel Host ID DIP switch contains a bank of seven switches that select the address using a binary value, 000 0000 (0) through 111 1110 (126). To set an address, set the switches in the up position for "1" or down for "0" (refer to Table 26 for binary switch settings). Figure 65 illustrates the loop ID switch set to 42 (010 1010).
Figure 65 Fibre Channel Connectors and Fibre Channel Host (Loop) ID Switches (Fibre Channel Host ID switch shown set to 0101010 = 42)

Note Occasionally two or more ports in an arbitrated loop will arbitrate simultaneously. Priorities are decided according to the loop IDs. The higher the loop ID, the higher the priority.
Table 26 Fibre Channel Addresses (decimal loop ID = binary switch setting)

 0 = 000 0000     1 = 000 0001     2 = 000 0010     3 = 000 0011
 4 = 000 0100     5 = 000 0101     6 = 000 0110     7 = 000 0111
 8 = 000 1000     9 = 000 1001    10 = 000 1010    11 = 000 1011
12 = 000 1100    13 = 000 1101    14 = 000 1110    15 = 000 1111
16 = 001 0000    17 = 001 0001    18 = 001 0010    19 = 001 0011
20 = 001 0100    21 = 001 0101    22 = 001 0110    23 = 001 0111
24 = 001 1000    25 = 001 1001    26 = 001 1010    27 = 001 1011
28 = 001 1100    29 = 001 1101    30 = 001 1110    31 = 001 1111
32 = 010 0000    33 = 010 0001    34 = 010 0010    35 = 010 0011
36 = 010 0100    37 = 010 0101
Attaching Power Cords

Each enclosure (controller and disk enclosures) contains dual power supplies that must be connected to the power source (PDU). When connecting power cords for high availability installations, connect one enclosure power cord to one source (PDU) and the other power cord to an alternate source (PDU). To complete the power connection, follow the steps below.

1. Set the power switch on the disk enclosures to OFF. The switch is located at the front, top, right corner of the enclosure.
2.
letters among all disk enclosures. “Cascading” refers to overload faults that occur on a backup PDU as a result of power surges after the primary PDU fails. – Serviceability. Choose PDU locations that prevent power cords from interfering with the removal and replacement of serviceable components. Also leave a 6-inch service loop to allow for the rotation of PDRUs. The letters A, B, C, D, E and F in Figure 66 and Figure 67 represent independent PDUs or PDU banks.
Figure 66 Wiring Scheme for 1.
Figure 67 Wiring Scheme for 2.
Attaching SCSI Cables and Configuring the Disk Enclosure Switches NOTE! It is critical that all SCSI cables be tightened securely. Use the following steps to ensure the cable connectors are seated properly. 1. Connect the cable to the enclosure connector and tighten the mounting screws finger tight. 2. Push on the connector and retighten the mounting screws. Repeat once more. 3. Use a flat blade screwdriver to tighten the screw appropriately. Be sure the screw is not cross-threaded.
Full-Bus Cabling and Switch Configuration

Cabling for a full-bus configuration requires connecting one SCSI cable from the controller to each disk enclosure and setting the configuration switches. Figure 69 illustrates full-bus cabling connections for a six-disk-enclosure array. It is possible to configure any number of disk enclosures, from one to six, using this method. However, full-bus mode is typically used when four or more disk enclosures are installed in the array.
Figure 68 Full-Bus BCC Configuration Switch Settings (segment 1 set to "1", all other segments set to "0"; the Tray ID is set to the same value on both BCCs in the enclosure and to a unique value for each enclosure; note the inverted orientation of the lower BCC relative to the upper BCC)
Figure 69 Full-Bus Cabling (SCSI terminator required)
Split-Bus Switch and Cabling Configurations Split-bus cabling requires two SCSI cables from each disk enclosure to the controller enclosure. Split-bus cabling is typically used for installations with 3 or fewer disk enclosures. Cabling for a split-bus configuration is shown in Figure 71.
Figure 70 Split-Bus Configuration Switch Settings (all segments set to "0"; the Tray ID is set to the same value on both BCCs in the enclosure and to a unique value for each enclosure; note the inverted orientation of the lower BCC relative to the upper BCC)
Figure 71 Split-Bus Cabling (SCSI terminators required on both BCCs)
Bus Addressing Examples Each disk module within the disk array is identified by its channel number and SCSI ID. These values will differ depending on which type of bus configuration is used for the disk enclosures. See "How are disk modules in the array identified?" on page 244 for more information. Figure 72 is an example of split-bus addressing. Figure 73 is an example of full-bus addressing.
Figure 73 Full-Bus Addressing Example (the indicated disk is on channel 4 with an ID of 12)
Connecting the Fibre Channel Cables Fibre Channel cables provide the I/O path to the disk array. The Fibre Channel cable connects the controller enclosure directly to a host, or to a hub. For operation on HP-UX, the host must contain an HP Fibre Channel Mass Storage/9000 adapter.
Figure 74 MIA, RFI Gasket, and Fibre Channel Installation

3. Connect the Fibre Channel connectors to the controller MIAs.
a. Remove the optical protectors from the ends of the MIAs and the Fibre Channel cables (Figure 74).
b. Insert the Fibre Channel connectors into the MIAs. The fibre optic cable connector is keyed to install only one way.
Applying Power to the Disk Array Once the hardware installation is complete, the disk array can be powered up. It is important that the proper sequence be followed when powering up the components of the disk array. To ensure proper operation, power should be applied to the disk enclosures first and then to the controller enclosure, or all components can be powered up simultaneously. This gives the disks time to spin up, ensuring that the disks are detected when the controller comes on line.
Figure 75 Disk Enclosure Power Switch and System LEDs

3. Check the LEDs on the front of the disk enclosures (see Figure 77). The System Power LED (B in Figure 75) should be on and the Enclosure Fault LED (C) should be off. It is normal for the Enclosure Fault LED (amber) to go on momentarily when the enclosure is first powered on. However, if the Enclosure Fault LED remains on, it indicates that a fault has been detected. Refer to "Troubleshooting" on page 359 for additional information.
4.
Figure 76 Controller Enclosure Power Switches

5. Check the controller enclosure LEDs (see Figure 78). The Power LED should be on and the remaining LEDs should be off. If any fault LED is on, an error has been detected. Refer to "Troubleshooting" on page 359 for additional information.
6. Close and lock the disk enclosure doors.
7. If the host was shut down to install the array, boot the host.
8. Perform an ioscan to verify that the host sees the array.
Table 27 Normal LED Status for the Disk Enclosure

Module            LED              Normal State
Front Enclosure   System Fault     Off
                  System Power     On (green)
                  Disk Activity    Flashing (green) when disk is being accessed
                  Disk Fault       Off
Power Supply      Power Supply     On (green)
BCC Module        Term. Power      On (green)
                  Full Bus         On (green) if single (full) bus; Off if split bus
                  BCC Fault        Off
                  Bus Active       On (green - bus is available for use); Off (isolator chip disabled & bus not avail.)
Figure 77 Disk Enclosure LEDs: A - System fault LED; B - System power LED; C - Disk activity LED; D - Disk fault LED; E - Power On LED; F - Term. Pwr.
Table 28 Normal LED Status for Controller Enclosure

LED                 Normal State
Controller Enclosure:
  Power On          On (green)
  Power Fault       Off
  Fan Fault         Off
  Controller Fault  Off
  Fast Write Cache  On (green) while data is in cache
Controller:
  Controller Power  On (green)
  Controller Fault  Off
  Heartbeat         Blink (green)
  Status            Green. There are 8 status LEDs. The number and pattern of these LEDs depend on how your system is configured.
Figure 78 Controller Enclosure LEDs: A - Power On LED; B - Power Fault LED; C - Fan Fault LED; D - Controller Fault LED; E - Fast Write Cache LED; F - Controller Power LED; G - Controller Fault LED; H - Heartbeat LED; I - Status LEDs; J - Fault B LED; K - Full Charge B LED; L - Fault A LED; M - Full Charge A LED; N - Power 1 LED; O - Power 2 LED; P - Fan Power LED; Q - Fan Fault LED
Powering Down the Array When powering down the disk array, the controller enclosure should be powered down before the disk enclosures. To power down the disk array: 1. Stop all I/Os from the host to the disk array. 2. Wait for the Fast Write Cache LED to go off, indicating that all data in cache has been written to the disks. 3. Power down the controller enclosure. 4. Power down the disk enclosures.
Verifying Disk Array Connection On Windows NT and Windows 2000 The HP Storage Manager 60 software is used to verify that the disk array is visible to the Windows host. See the HP Storage Manager 60 User’s Guide for instructions on installing and using the HP Storage Manager 60 software. On HP-UX To verify that the Disk Array FC60 is visible to the HP-UX host, run an ioscan by typing the following: ioscan -fn An output will be displayed similar to that in Figure 79.
Class I H/W Path Driver State H/W Type Description ============================================================================ 0 0 8/8.8.0.6.0.0 8/8.8.0.6.0.0.0 disk 1 8/8.8.0.6.0.0.1 disk 2 8/8.8.0.6.0.1.0 disk 3 8/8.8.0.6.0.2.0 disk 4 8/8.8.0.6.0.3.0 target disk 4 5 8/8.8.0.7.0.0 8/8.8.0.7.0.0.0 disk 6 8/8.8.0.7.0.0.1 disk 7 8/8.8.0.7.0.1.0 disk 8 8/8.8.0.7.0.2.0 disk 9 8/8.8.0.6.0.3.0 target ctl 8 0 8/8.8.0.255.0.6 8/8.8.0.255.0.6.0 target ctl 9 1 8/8.8.0.255.0.
Interpreting Hardware Paths Each component on the disk array is identified by a unique hardware path. The interpretation of the hardware path differs depending on the type of addressing used to access the component. Two types of addressing are used with the Disk Array FC60: • Peripheral Device Addressing (PDA) - used to address the disk array controller modules.
The port value will always be 255 when using PDA. The loop address, that is, the Fibre Channel Host ID of the disk array controller module (two addresses are possible, one for controller module A and one for controller module B), is encoded in the Bus and Target segments of the hardware path.
VSA is an enhancement that increases the number of LUNs that can be addressed on a Fibre Channel disk array to 16384 (2^14). This compares with the 8-LUN limit imposed by PDA. The HP SureStore E Disk Array FC60 requires that 32 LUNs (0 - 31) be addressable. To implement VSA, the Fibre Channel driver creates four virtual SCSI busses, each capable of supporting up to eight LUNs.
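To check whether a LUN exists at a given address, you can run the HP-UX diskinfo command against the character device file for that address (the device file name shown here is an example only; use the device files reported by ioscan on your system):

diskinfo /dev/rdsk/c9t1d0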
The following information is returned:

SCSI describe of /dev/rdsk/c9t1d0:
    vendor: hp
    product id:
    type: direct access
    size: 10272 Kbytes
    bytes per sector: 512

If the LUN exists, the size will reflect the capacity of the LUN. If the size returned is 0 Kbytes, there is no LUN 0 created for that logical SCSI bus.

Determining LUN numbers from the hardware path

LUN numbers can be determined by using the last three segments of the VSA hardware path, which represent the 14-bit volume number.
A quick way to determine the LUN number is to multiply the value of the next-to-last segment times 8, and add the result to the last segment value.
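For example, for a VSA hardware path whose last three segments are 0.3.5 (an illustrative value, not one taken from the listings above), the LUN number is 3 x 8 + 5 = 29. A path whose last three segments are 0.0.0 corresponds to LUN 0.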
Installing the Disk Array FC60 Software (HP-UX Only) Once the disk array hardware is installed and operating, the disk array management software, diagnostic tools, and system patches must be installed. The software required for the Disk Array FC60 is distributed on the HP-UX Support Plus CD-ROM (9909 or later). System Requirements The following host system requirements must be met to install and use the Array Manager 60 utilities successfully.
Verifying the Operating System

The Disk Array FC60 is supported on the following operating system versions:
• HP-UX 11.0, 11.11, and 10.20

Before installing the Disk Array FC60 system software, verify that you are running the required operating system version. To identify the operating system version, type:

uname -a

A response similar to the following should be displayed, indicating the version of HP-UX the host is running:

HP-UX myhost B.11.0.
swlist 3. Execute the following command to create the required device files (this is not required if the system was re-booted): insf -e 4. Run the Array Manager 60 amdsp command to re-scan the system for the array: amdsp -R Downgrading the Disk Array Firmware for HP-UX 11.11 Hosts Controller firmware HP08 is not supported on HP-UX 11.11. If the disk array is being installed on a host running HP-UX 11.11, it will be necessary to downgrade the controller firmware to HP03 after installing the software.
Configuring the Disk Array HP-UX After installing the disk array software, the following steps must be performed to configure the disk array for operation. These steps should be performed by the service-trained installer with assistance from the customer where appropriate. Step 1. Determine the Disk Array ArrayID The ArrayID is used to identify the disk array when performing the remaining tasks, so the first step is to determine the ArrayID.
Step 3. Reformat Disk Array Media CAUTION This step will destroy all data on the disk array and remove any LUN structure that has been created. If there is data on the disk array, make sure it is backed up before performing this step. If a LUN structure has been created on the disk array either at the factory (A5277AZ) or by a reseller, do not perform steps 3, 4, and 5. Go to step 6 to continue configuration of the disk array. The disk media should be reformatted to its factory default configuration.
Step 5. Replace LUN 0 LUN 0 was created solely to allow the host to communicate with the disk array when it is first powered on. It should be replaced with a usable LUN. If not replaced, a substantial amount of storage will be wasted. To replace LUN 0, type: amcfg -R :0 -d ,.....
settings on the host to ensure valid time stamps. This ensures that any information created by the disk array such as log entries reflect the proper time they occurred. To set the controller date and time, type: ammgr -t Step 8. Bind LUNs If a LUN structure has been created on the disk array either at the factory (A5277AZ) or by a reseller, it may not be necessary to perform this step. If no LUN structure has been created, only LUN 0 will exist.
• For more information, see "Adding a Global Hot Spare" on page 296
• To use SAM, see "Adding a Global Hot Spare" on page 273

Step 10. Install Special Device Files

After binding LUNs, you must install special device files on the LUNs. This makes the LUNs usable by the operating system. To install the special device files, type:

insf -e

Step 11. Check Disk Array Status

The final step is to display the disk array status to ensure that all features are enabled and that the array is working properly.
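A minimal sketch of this check uses the amdsp all-status option listed in Table 37 (the position of the array ID or alias as the final argument is an assumption; see the amdsp man page for the exact syntax):

amdsp -a <ArrayID or alias>

In the output, verify that the array state is READY, that the disks and LUNs report OPTIMAL, and that both controllers are ACTIVE.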
6. Set up storage partitions if this premium feature is enabled. 7.
Using the Disk Array FC60 as a Boot Device (HP-UX Only) The Disk Array FC60 is supported for use as boot device on the following HP 9000 Servers running HP-UX 11.0: – – – – Note K-Class D-Class N-Class L-Class Not all levels of server PDC (processor dependent code) support Fibre Channel boot. Before performing the following procedure, ensure that the level of PDC on the server supports booting from a Fibre Channel device.
Solving Common Installation Problems

Problem. When performing an ioscan, the host sees the disk array controllers but none of the disks or LUNs in the disk array.

Solution. This is typically caused by powering on the disk array controller enclosure before powering on the disk enclosure(s). Turn off power to all disk array components, power on the disk enclosures and wait for the disks to spin up, then power on the controller enclosure.

Problem.
Adding Disk Enclosures to Increase Capacity Scalability is an important part of the design of the HP SureStore E Disk Array FC60. The capacity of the disk array can be increased in a variety of ways to meet growing storage needs. See "Adding Capacity to the Disk Array" on page 254 for more information on scalability options. Adding disk array enclosure(s) is the most effective way of significantly increasing the capacity of the disk array. It is also the most involved.
• Consider Adding More Than One Disk Enclosure - Because the process of adding disk enclosures involves backing up data and powering off the disk array, you should consider adding more than one enclosure to meet your future capacity needs. This will avoid having to redo the procedure each time you add another disk enclosure. And the addition of a single enclosure provides limited flexibility for configuring LUNs on the disk array.
2. Identify the expanded disk array layout by performing the following tasks:
a. Create a detailed diagram of the expanded HP FC60 array layout. Include all Fibre Channel and SCSI cabling connections. This diagram will serve as your configuration guide as you add the new enclosures. The Capacity Expansion Map on page 235 should assist you in identifying where disks will be moved in the new configuration.
b.
CAUTION Do not proceed to the next step if any LUN is not in an optimal state and you intend to move any of the disks which comprise the LUN. Contact HP Support if the LUNs cannot be made OPTIMAL before moving the disk drives.

3. If you intend to move any global hot spares, remove them from the hot spare group as follows:
a. Verify that the hot spare disk to be moved is not in use.
b. Remove the disk from the hot spare group.
4. Unmount any file systems associated with the disk array.
5. Configure the necessary disk enclosures for full-bus operation. See "Configuration Switches" on page 176. Set the disk enclosure DIP switches on both BCC A and BCC B to the following settings for full-bus operation:
sw1=1 (This switch is set to 0 for split-bus mode.)
sw2=0
sw3=0
sw4=0
sw5=0
6. Install SCSI terminators on each disk enclosure. Install a SCSI terminator on the right-most connector on both the BCC A and BCC B cards.

CAUTION
7.
8. Set the disk Enclosure (Tray) ID switches. See "Disk Enclosure (Tray) ID Switch" on page 176. a. Set the Enclosure ID switches on both BCC A and BCC B cards to identify the disk enclosure. The Enclosure ID switch setting must be the same for both BCC A and BCC B. b. The Enclosure ID switch settings are made as follows for the disk enclosures installed. The enclosure connected to channel 1 should be set to 0. The enclosure connected to channel 2 should be set to 1.
Step 5. Completing the Expansion

CAUTION The disk array components must be powered up in the specified sequence: disk enclosures first, followed by the controller enclosure. Failure to follow the proper sequence may result in the host not recognizing LUNs on the disk array.

1. Ensure all power cables are connected to the controller enclosure and disk enclosures.
2. Power up the disk array in the following sequence:
a. Power up all the disk enclosures.
taken not to cross the cables, as this may cause problems with applications that depend on a specific path. 6. Rescan the disk array from the host using the ioscan -fnC disk command. The disk array and the paths to each LUN should be displayed. This completes the process of expanding the disk array. You can now make the capacity provided by the new disks available to the host by binding LUNs.
Capacity Expansion Example

An example of expanding a Disk Array FC60 is shown in Figure 83. In this example, three new disk enclosures are added to a disk array with three fully loaded enclosures. The disk array is configured with five 6-disk LUNs. The original enclosures were operating in split-bus mode, and have been reconfigured to full-bus mode. The disks have been moved from their original locations to slots with the corresponding channel:ID.
Disks are moved to the slot that corresponds to their original channel:ID. High availability is maintained by having no more than one disk per LUN or volume group on each channel.
Figure 84 Capacity Expansion Map (for each disk enclosure, the ten slots map to full-bus IDs 0, 8, 1, 9, 2, 10, 3, 11, 4, 12 or split-bus IDs 0, 0, 1, 1, 2, 2, 3, 3, 4, 4)
4 MANAGING THE DISK ARRAY ON HP-UX Tools for Managing the Disk Array FC60 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 Installing the Array Manager 60 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240 Managing Disk Array Capacity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 Adding Capacity to the Disk Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Tools for Managing the Disk Array FC60

Note On Windows NT and Windows 2000, the disk array is managed using the HP Storage Manager 60 software. See the HP Storage Manager 60 User's Guide for information on managing the disk array on Windows NT and Windows 2000.

There are three tools available for managing the Disk Array FC60 on HP-UX: the HP-UX System Administration Manager (SAM), Array Manager 60, and Support Tools Manager (STM).
Table 30 Management Tools and Tasks

Task                                       SAM   Array Manager 60   STM
Checking disk array status                 Yes   Yes (amdsp)        Yes
Managing LUNs                              Yes   Yes (amcfg)        Yes
Managing global hot spares                 Yes   Yes (ammgr)        Yes
Assigning an alias to the disk array       Yes   Yes (ammgr)        No
Managing cache memory                      No    Yes (ammgr)        No
Managing the rebuild process               No    Yes (amutil)       No
Synchronizing controller date and time     No    Yes (ammgr)        No
Locating disk array components             Yes   Yes (amutil)       Yes
Performing a parity scan
Installing the Array Manager 60 Software

The Array Manager 60 software must be installed on the host to which the disk array is connected. The software should have been installed with the disk array hardware. However, if you decide to move the disk array to a different host, you will need to install the software on the new host. See "Installing the Disk Array FC60 Software (HP-UX Only)" on page 213 for more information.

Note Must Array Manager 60 be installed to use SAM? Yes.
AM60Srvr Daemon The AM60Srvr daemon is the server portion of the Array Manager 60 software. It monitors the operation and performance of all disk arrays, and services requests from clients used to manage the disk arrays. Tasks initiated from SAM or Array Manager 60 are serviced by the AM60Srvr daemon. The AM60Srvr daemon monitors disk array performance and status, maintains disk array logs, initiates diagnostics, and allows clients to examine and change the disk array configuration.
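One way to confirm that the AM60Srvr daemon is running is an ordinary HP-UX process check (this is generic shell usage, not an Array Manager 60 command):

ps -ef | grep AM60Srvr

If no AM60Srvr process is listed, the Array Manager 60 software may not be installed or started; see "Installing the Disk Array FC60 Software (HP-UX Only)" on page 213.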
Managing Disk Array Capacity During installation, a LUN structure is created on the disk array. This structure may meet your initial storage requirements, but at some point additional capacity may be required. This involves adding disks and binding new LUNs. Careful LUN planning will ensure that you achieve the desired levels of data protection and performance from your disk array.
Selecting Disks for a LUN

When binding a LUN, you must select the disks that will be used. The capacity of the LUN is determined by the number and capacity of the disks you select, and the RAID level. When selecting disks for a LUN, consider the following:

To maximize high availability, select disks in different disk enclosures or on different channels. Multiple disks in the same enclosure make a RAID 5 LUN vulnerable to an enclosure failure.
Selecting disks in the incorrect order of 1:2, 2:2, 1:3, and 2:3 results in mirrored pairs of 1:2/1:3 and 2:2/2:3. This puts the primary disk and mirror disk of each pair in the same enclosure, making the LUN vulnerable to an enclosure failure.

Correct Disk Selection Order - primary and mirror disks on separate channels.
Incorrect Disk Selection Order - primary and mirror disks on the same channel.
internal management of enclosure components.) If the disk enclosure is configured for split-bus operation, both the even-numbered slots and the odd-numbered slots are assigned IDs of 0 - 4.
• When viewing status information for the disk array, you may also see the disk enclosure number and slot number displayed. These parameters identify the physical location of the disk module within the disk array.
Figure 86 Disk Module Addressing Parameters. The example shows a disk enclosure with its enclosure ID set to 3 and connected to channel 2. Slot numbers 0 through 9 map to SCSI IDs 0, 8, 1, 9, 2, 10, 3, 11, 4, 12 in a full-bus configuration, or 0, 0, 1, 1, 2, 2, 3, 3, 4, 4 in a split-bus configuration. The indicated disk module (slot 5) therefore uses the following address parameters: Channel 2, SCSI ID 10 (full bus) or 2 (split bus), Enclosure 3, Slot 5.
Assigning LUN Ownership When a LUN is bound, you must identify which disk array controller (A or B) owns the LUN. The controller that is assigned ownership serves as the primary I/O path to the LUN. The other controller serves as the secondary or alternate path to the LUN. If there is a failure in the primary I/O path and alternate links are configured, ownership of the LUN automatically switches to the alternate path, maintaining access to all data on the LUN.
the RAID level used by a LUN, you must unbind the LUN and rebind it using the new RAID level. With the exception of RAID 0, all RAID levels supported by the disk array provide protection against disk failure. However, there are differences in performance and storage efficiency between RAID levels. For more information on RAID levels and their comparative operating characteristics, see "RAID Technology" on page 47.
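As a simple illustration of these storage-efficiency differences, using standard RAID arithmetic rather than figures specific to this disk array: a LUN built from six 17-GB disks provides approximately 5 x 17 GB = 85 GB of usable capacity at RAID 5 (the equivalent of one disk holds parity), approximately 3 x 17 GB = 51 GB at RAID 0/1 (half the disks hold mirror copies), and approximately 6 x 17 GB = 102 GB at RAID 0 (no redundancy).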
• If you choose to limit the number of global hot spares, make sure you are able to respond quickly to replace a failed disk. If an operator is always available to replace a disk, you may not need the added protection offered by multiple global hot spares. Setting Stripe Segment Size Another factor you may have to consider is the stripe segment size you use for a LUN. The stripe segment size determines how much data is written to a disk before moving to the next disk in the LUN to continue writing.
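As a hypothetical illustration of this setting: with a 16-Kbyte stripe segment size, a 64-Kbyte write that starts on a segment boundary is spread across four disks in the LUN, 16 Kbytes to each (assuming the LUN has at least four data disks), while with a 64-Kbyte stripe segment size the same write is serviced by a single disk. Smaller segments spread individual I/Os across more disks; larger segments keep each I/O on fewer disks.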
Evaluating Performance Impact Several disk array configuration settings have a direct impact on I/O performance of the array. When selecting a setting, you should understand how it may affect performance. Table 31 identifies the settings that impact disk array performance and what the impact is. Note The LUN binding process impacts disk array performance. While a LUN is being bound, benchmarking tools should not be used to evaluate performance.
Table 31 Performance Impact of Configuration Settings (cont’d) Setting: Cache flush threshold (default 80%) Function: Sets the level at which the disk array controller begins flushing write cache content to disk media. The setting is specified as the percentage of total available cache that can contain write data before flushing begins. The cache flush threshold can be set independently for each controller. Note that available cache is reduced by half with cache mirroring enabled.
Table 31 Performance Impact of Configuration Settings (cont’d) Setting: Cache flush limit (default 100%) Function: Determines how much data will remain in write cache when flushing stops. It is expressed as a percentage of the cache flush threshold. For optimum performance this value is set to 100% by default. This ensures that the entire amount of cache specified by the cache flush threshold will contain write cache data, increasing the number of write cache hits.
Figure 87 Cache Flush Threshold Example (cache flush threshold set to 80%, cache flush limit set to 100%: flushing starts when write data exceeds the 80% threshold and stops when write data falls back below the threshold)
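As a worked example of these two settings, using the standard 256-Mbyte cache configuration described later in this chapter (the arithmetic is illustrative): with cache mirroring enabled, 256 Mbytes of controller cache leaves 128 Mbytes available for write data, so an 80% cache flush threshold starts flushing when roughly 0.8 x 128 = 102 Mbytes of unwritten data has accumulated, and a 100% cache flush limit allows flushing to stop as soon as the amount of unwritten data drops back below that level.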
Adding Capacity to the Disk Array As your system storage requirements grow, you may need to increase the capacity of your disk array. Disk array capacity can be increased in any of the following ways: • You can add new disk modules to the disk array if there are empty slots in the disk enclosures. • You can add additional disk enclosures to the disk array. • You can replace existing disk modules with higher capacity modules.
2. Bind a LUN with the new disks using the management tool of your choice:
– To use SAM, see "Binding a LUN" on page 267
– To use Array Manager 60, see "Binding a LUN" on page 289
– To use STM, see "Binding a LUN" on page 314

Note After binding a LUN, you must execute the insf -e command to install special device files on the LUN. This makes the LUN usable by the operating system.

3.
Adding Additional Disk Enclosures Adding additional disk enclosures is another way to increase the capacity of the disk array. Up to six disk enclosures can be added to a disk array. Because it requires shutting down and possibly reconfiguring the disk array, adding new disk enclosures should be done by a trained-service representative. See "Adding Disk Enclosures to Increase Capacity" on page 224 for more information on adding disk enclosures.
6. Bind a LUN with the new disks using the management tool of your choice: – To use SAM, see "Binding a LUN" on page 267 – To use Array Manager 60, see "Binding a LUN" on page 289 – To use STM, see "Binding a LUN" on page 314 Note After binding a LUN, you must execute the insf -e command to install special device files on the LUN. This makes the LUN usable by the operating system. 7. Mount the file system on the LUN and restore the data to the LUN. 8.
Upgrading Controller Cache to 512 Mbytes Controller cache can be upgraded from the standard 256 Mbytes of cache to 512 Mbytes. This provides improved I/O performance for write operations. See Table 58 on page 416 for cache upgrade kit part numbers. CAUTION The cache upgrade kit must be installed by service-trained personnel only. Attempting to install the cache upgrade kit without the proper training may damage the disk array controllers.
Table 32 Controller Cache Upgrade Kit Selection

Initial controller configuration                  Cache Upgrade Kit(s)
Dual controllers, each with two 128 MB DIMMs      Two A5279A kits
Dual controllers, each with one 256 MB DIMM       One A5279A kit
Single controller with two 128 MB DIMMs           One A5279A kit
Single controller with one 256 MB DIMM            One A5279A kit (only one of the DIMMs will be used)
Managing the Disk Array Using SAM

Most of the tasks involved in everyday management of the disk array can be performed using SAM. Using SAM you can:
• Check disk array status
• Bind and unbind LUNs
• Add and remove global hot spares

Note Does it make any difference which controller I select when managing the disk array? When using SAM, you must select one of the controllers on the disk array you want to manage.
Checking Disk Array Status All aspects of disk array operation are continually monitored and the current status is stored for viewing. You can selectively view the status of any portion of the disk array configuration. To view disk array status: 1. On the main SAM screen, double-click the Disks and File Systems icon. 2. On the Disks and File Systems screen, double-click the Disk Devices icon. The Disk Devices list is displayed.
3. Select a controller for the appropriate disk array from the Disk Devices list. 4. Select the Actions menu, and the View More Information... menu option. The Main Status screen is displayed showing the overall status of the disk array. 5. Click the appropriate button to view the detailed status for the corresponding disk array component. If you need any help in interpreting the status information, access the online help.
Interpreting Status Indicators

A common set of colored status indicators is used to convey the current operating status of each disk array component. The colors are interpreted as follows:

Green - The component is functioning normally. On a disk it also indicates that the disk is part of a LUN.
Red - The component has failed or is not installed.
Blue - Used only for disks; indicates the disk is functioning normally and is a member of the hot spare disk group.
4. Select the Actions menu, the Disk Array Maintenance menu option, then Modify Array Alias. The Modify Array Alias screen is displayed.
5. Enter the name in the Alias field. An alias can contain up to 16 of the following characters: upper case letters, numbers, pound sign (#), period (.), and underscore (_). All other characters are invalid.
6. Click OK.
5. Click the Disk Module Status button. The Disk Status window is displayed.
6. Select the disk you want to identify. A check mark will appear on the selected disk and all the other disks in the same disk group. For example, if the selected disk is part of a LUN, all disks within the LUN will be checked. If the disk is a global hot spare, all global hot spares will be checked. 7. Click on the Disk LED Flash Options button and select the desired option: – Flash One will flash the Fault LED on the selected disk only.
To bind a LUN: 1. On the main SAM screen, double-click the Disks and File Systems icon. 2. On the Disks and File Systems screen, double-click the Disk Devices icon. The Disk Devices list is displayed. There is an entry for each disk array controller. 3. Select a controller for the appropriate disk array from the Disk Devices list. 4. Select the Actions menu, the Disk Array Maintenance menu option, and then Bind LUN. The LUN Management screen is displayed.
5. Click the LUN # button and select the desired number for the LUN. You can also enter the LUN number directly in the field. 6. Click the RAID Level button and select the desired RAID level for the LUN. 7. Select the disks to include in the LUN. All unassigned disks are identified with a status of white. As you select disks, the capacity of the LUN is calculated and displayed, and the disks are added to the Selected Disks field.
Unbinding a LUN

Unbinding a LUN makes its capacity available for the creation of a new LUN. All disks in the LUN are returned to the Unassigned disk group when the LUN is unbound.

CAUTION All data on a LUN is lost when it is unbound. Make sure you back up any important data on the LUN before unbinding it. Unbinding a LUN will have the same effect on the host as removing a disk.
Note Can I replace any LUN on the disk array? Yes. In addition, the replace command is the only way you can alter the configuration of LUN 0. LUN 0 is unique in that it must exist on the disk array to permit communication with the host. Consequently, you cannot unbind LUN 0. If you want to alter LUN 0, you must use the replace command. To replace a LUN: 1. On the main SAM screen, select Disks and File Systems. 2. On the Disks and File Systems screen, double-click the Disk Devices icon.
Adding a Global Hot Spare Global hot spares provide an additional level of protection for the data on your disk array. A global hot spare automatically replaces a failed disk, restoring redundancy and protecting against a second disk failure. For maximum protection against disk failures it is recommended that you add a global hot spare for each channel. For more information on using global hot spares, see "Global Hot Spares" on page 248. A global hot spare is added using an unassigned disk.
5. Select the disk to be used as a global hot spare. Only unassigned disks, identified by a white status indicator, are available for selection as hot spares. 6. Click OK to add the global hot spare and exit the screen, or click Apply if you want to add more global hot spares. Removing a Global Hot Spare If you need to increase the available capacity of your disk array, you can do so by removing a global hot spare.
Managing the Disk Array Using Array Manager 60 The Array Manager 60 command line utilities allow you to configure, control, and monitor all aspects of disk array operation. Array Manager 60 is intended for performing the more advanced tasks involved in managing the disk array. The Array Manager 60 utilities and the tasks they are used to perform are summarized in Table 33 and Table 34. Note You must log in as superuser or root to use the Array Manager 60 utilities to manage the disk array.
Table 33 Array Manager 60 Task Summary (cont'd)

Task                                                Command
Disk Array Configuration
  Assigning an Alias to the Disk Array              ammgr -D
  Setting Cache Page Size                           ammgr -p {4 | 16}
  Setting the Cache Flush Threshold                 ammgr -T :
  Setting the Cache Flush Limit                     ammgr -L :
  Disabling Disk Module Write Cache Enable (WCE)    amutil -w
  Synchronizing the Controller Date and Time        ammgr -t
  Managing the Universal Transport Mechanism (UTM)
Table 34 Array Manager 60 Command Summary

Command   Tasks
amcfg     Binding a LUN; Unbinding a LUN; Changing LUN Ownership; Replacing a LUN; Calculating LUN Capacity
ammgr     Adding a Global Hot Spare; Removing a Global Hot Spare; Assigning an Alias to the Disk Array; Setting Cache Page Size; Setting the Cache Flush Threshold; Setting the Cache Flush Limit; Synchronizing the Controller Date and Time; Performing a Parity Scan; Resetting Battery Age; Managing the Universal Transport Mechanism (UTM)
amdsp     Identifying Disks
Command Syntax Conventions

The following symbols are used in the command descriptions and examples in this chapter.

Table 35 Syntax Conventions

Symbol   Meaning
< >      Indicates a variable that must be entered by the user.
|        Only one of the listed parameters can be used (exclusive OR).
[ ]      Values enclosed in these braces are optional.
{ }      Values enclosed in these braces are required.

Array Manager 60 man pages

Online man pages are included for each Array Manager 60 command.
Selecting a Disk Array and Its Components When using Array Manager 60, you must select the disk array you will be managing. In addition, many commands also require you to identify the controller, disk, or LUN within the disk array that will be impacted by the command. The command parameters used to select a disk array and its internal components are listed and described in Table 36. Note Does it make any difference which controller I select? There are two paths to the disk array — one for each controller.
Preparing to Manage the Disk Array Before you begin using Array Manager 60 to manage your disk array for the first time, you may want to perform the following procedure. It will locate all the disk arrays on the host and allow you to assign an alias to each one. A short, meaningful alias should be easier to remember than the disk array ID when using the Array Manager 60 commands. 1. Rescan for all disk arrays on the host by typing: amdsp -R 2.
Checking Disk Array Status An important part of managing the disk array involves monitoring its status to ensure it is working properly. Changes in disk array status may indicate a possible hardware failure, so it is important to check disk array status regularly. All aspects of disk array operation are continually monitored and the current status is stored for viewing. You can selectively view the status of any portion of the disk array configuration.
Table 37 Command Options for Displaying Disk Array Status (cont'd)

Option   Status Information Displayed
-a       All status. This option displays all the information returned by the preceding options.
-p       Hardware path information. Displays hardware path information for the controller corresponding to the specified device file.
-r       Rebuild status. Display the progress of any rebuilds occurring on the disk array.
-A       Array server status.
Figure 88 Disk Array Sample Status Output (amdsp)

Vendor ID = HP
Product ID = A5277A
Array ID = 000A00A0B80673A6
Array alias = Array1
-----------------------------------------
Array State = READY
Server name = speedy
Array type = 3
Mfg. Product Code = 348-0040789
--- Disk space usage --------------------
Total physical = 271.4 GB
Allocated to LUNs = 135.4 GB
Used as Hot spare = 0.0 GB
Unallocated (avail for LUNs) = 0.
Vendor ID = HP
Product ID = A5277A
Array ID = 000A00A0B80673A6
Array alias = Array1
-----------------------------------------
SCSI Channel:ID = 1:0
Enclosure = 0
Slot = 0
Disk State = OPTIMAL
Disk Group and Type = 060E86000238C6360F LUN
Capacity = 17.0 GB
Manufacturer and Model = SEAGATE ST318203LC
Serial Number = LRB61150
Firmware Revision = HP01
. . .
Total capacity of all installed physical disks = 271.

Disk Information - the status of each disk should be Optimal for each disk assigned to a LUN.
Information for Controller A - 000A00A0B80673A6:
Controller Status = GOOD
Controller Mode = ACTIVE
Vendor ID = HP
Product ID = A5277A
Serial Number = 1T00310110
Firmware Revision = 04000304
Boot Revision = 04000200
HP Revision = HP08
Loop ID = 5
AL_PA = 0xE0
Preferred AL_PA = 0xE0
Controller Date = 05/08/2000

Controller Information - make sure the following conditions are met:
- Both controllers should be ACTIVE
- The Loop ID must be unique for each controller
- The three levels of firmware revisions must b
Vendor ID = HP
Product ID = A5277A
Array ID = 000A00A0B80673A6
Array alias = Array1
-----------------------------------------
Overall State of Array = READY
Array configuration:
Cache Block Size = 4 KB / 4 KB
Cache Flush Threshold = 80 % / 80 %
Cache Flush Limit = 100 % / 100 %
Cache Size = 256MB / 256MB

Cache settings
Listing Disk Array IDs You may find it useful to list the disk arrays recognized by the host. The list will include both the disk array ID (or S/N) and alias name if one has been assigned. This is a quick way to determine the ID of each disk array on your system. Note What if a disk array is not listed? If the list does not reflect the current disk arrays on your system, rescan for disk array as described in the next procedure. This will update the disk array list.
Managing LUNs Using Array Manager 60 you can perform the following tasks: • • • • Binding and unbinding LUNs Calculating LUN capacity Changing LUN ownership Replacing a LUN Binding a LUN Binding LUNs is one of the most common tasks you will perform in managing the disk array. A number of settings allow you to define the LUN configuration. Before binding a LUN, make sure you understand what each of the settings does and how it will impact LUN behavior.
• RAIDlevel - RAID level used for the LUN. Valid RAID levels are 0, 1, and 5. RAID 0 support requires firmware version HP08 or later. A RAID 0/1 LUN is created by selecting RAID 1 with more than two disks. • - options giving you control over how the LUN is configured. Table 38 lists valid options and what they do. Note After binding a LUN, you must execute the insf -e command to install special device files for the LUN. This makes the LUN usable by the operating system.
Table 38 Command Options for Binding a LUN (cont’d) Option Description Default -force Allows you to bind a LUN using two or more disks in the same enclosure or on the same channel. This option allows you to override the high-availability protection designed into the LUN binding process. Using this option you can specify more than one disk per enclosure or channel. You can also use this option to create a RAID 5 LUN that includes more than six disks.
Identifying Disks Binding a LUN requires the use of unassigned disks. If you are not sure which disks are unassigned, you can determine which disks are available. To identify unassigned disks, type amdsp -d The status of all disks in the array will be returned. The information includes the disk group the disk is a member of. Disks in the Unassigned disk group can be used for binding a LUN. You may want to print the information or write down the unassigned disks before you begin binding the LUN.
Unbinding a LUN

Unbinding a LUN makes its capacity available for the creation of a new LUN. All disks assigned to the LUN are returned to the Unassigned disk group when the LUN is unbound.

CAUTION All data on a LUN is lost when it is unbound. Make sure you back up any important data on the LUN before unbinding it. Unbinding a LUN will have the same impact on the host as removing a disk.
Does the primary path selected using LVM impact LUN ownership? Yes. The primary path established using LVM defines the owning controller for the LUN. This may override the controller ownership defined when the LUN was bound. For example, if controller A was identified as the owning controller when the LUN was bound, and LVM subsequently established the primary path to the LUN through controller B, controller B becomes the owning controller.
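A minimal sketch of how this looks with LVM (the volume group name, device file names, and their mapping to controllers are illustrative assumptions): when a volume group is created or extended with both device file paths to a LUN, the first physical volume path listed becomes the primary link and the second becomes the alternate link, so the controller behind the first path becomes the owning controller.

vgcreate /dev/vg01 /dev/dsk/c9t0d1 /dev/dsk/c10t0d1

Here /dev/dsk/c9t0d1 is assumed to be the path through controller A and /dev/dsk/c10t0d1 the path through controller B; listing the paths in the opposite order would make controller B the primary path and therefore the owning controller.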
To replace a LUN, type: amcfg -R : -d ,..... -r • The parameters and options available when replacing a LUN are the same as those used when binding a LUN. See "Binding a LUN" on page 289. Command Examples The following example replaces existing LUN 0 on disk array 0000005EBD20. The new LUN is RAID 5, uses a stripe segment size of 16 Kbytes, and is owned by controller A.
Adding a Global Hot Spare A global hot spare is added using an unassigned disk. If there are no unassigned disks available, you cannot add a global hot spare unless you install a new disk or unbind an existing LUN. To add a global hot spare, type: ammgr -h channel:ID Command Example The following example adds a global hot spare using disk 1:1 on disk array 0000005EBD20.
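A minimal sketch of this example, assuming the array ID or alias is supplied as the final argument (as in the amlog example later in this chapter):

ammgr -h 1:1 0000005EBD20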
Managing Disk Array Configuration Assigning an Alias to the Disk Array If you have many disk arrays to manage, you may find it useful to assign an alias name to each disk array to help you identify them. A short, meaningful alias should be easier to remember than the disk array ID when using the Array Manager 60 commands. The naming strategy you use may reflect the physical location of the disk array or its function.
Managing the Universal Transport Mechanism (UTM) On firmware HP08 and later, the Universal Transport Mechanism (UTM) serves as the SCSI communication path between the host and the disk array. In earlier versions of firmware, this communication was done using LUN 0. The UTM is configured as a separate LUN, which is used only for communication and not for storing data. Because it consumes one of the available LUNs, only 31 LUNs are available when using the UTM.
Note After executing the above command, the disk array controllers must be manually reset or power cycled before the new setting will be invoked. When the power on completes, execute the following commands:

ioscan
insf -e
amdsp -R

Disabling the UTM

Although it is possible to disable the UTM, it is not recommended that you do so. The benefits provided by the UTM, such as major event logging, are not realized when the UTM is disabled.
see Table 31 on page 250 for details on what performance impact altering these settings may have. Setting Cache Page Size Data caching improves disk array performance by storing data temporarily in cache memory. The cache page size can be set to either 4 Kbytes or 16 Kbytes. The default page size is 4 Kbytes. The page size is set for both controllers in the disk array.
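A minimal sketch of this command, using the ammgr -p option listed in Table 33 and assuming the array ID or alias is supplied as the final argument:

ammgr -p 16 <ArrayID or alias>

This example would select the 16-Kbyte page size; specify 4 to return to the default page size.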
Setting the Cache Flush Limit Sets the amount of unwritten data to remain in cache after a flush is completed on the given controller. The cache flush limit sets the level at which the disk array stops flushing cache contents to the disks. This value is expressed as a percentage of the current cache flush threshold. The default value for this setting is 100%. The cache flush limit can be set independently for each controller.
Disabling Disk Module Write Cache Enable (WCE) Note To ensure optimum protection against data loss, it is recommended that Write Cache Enable be disabled on all disks in the array. Disabling disk WCE will impact disk array performance. However, it reduces the potential for data loss during a power loss. To ensure optimum data integrity, it is recommended that the Write Cache Enable (WCE) feature be disabled on all disk modules in the array.
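A minimal sketch of this command, using the amutil -w option listed in Table 33 and assuming the array ID or alias is supplied as the final argument:

amutil -w <ArrayID or alias>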
Enabling Disk Write Cache Enable (WCE) CAUTION WCE should only be enabled in environments that provide uninterruptible power to the disk array. A loss of power to the disk array may result in data loss with WCE enabled. If maximum I/O performance is critical, disk WCE can be enabled on all the disks in the array. Disk WCE enhances disk array I/O performance, but increases the possibility of data loss during a power loss.
Performing Disk Array Maintenance At some point during operation of the disk array, you may need to perform maintenance tasks that involve error recovery and problem isolation. This section describes the tasks involved in maintaining the disk array. Locating Disk Modules Array Manager 60 provides the means of identifying a disk module by flashing its amber Fault LED. You can flash the Fault LED on an individual disk, or on all the disks in the array.
Managing the Rebuild Process If a disk fails, the disk array automatically begins the rebuild process the first time an I/O is performed to the LUN, provided that a global hot spare is available. If no global hot spare is available, the rebuild will not occur until the failed disk has been replaced. While a rebuild is in progress, you can check its progress and change the rate at which the rebuild occurs. A rebuild must be in progress to perform either of these tasks.
• amount identifies the number of blocks to rebuild at a time. This value can be from 1 to 64K and specifies the number of 512-byte blocks processed during each rebuild command. The higher the setting, the more blocks are processed at a time, which reduces host I/O performance. A lower setting gives priority to host I/Os, delaying the completion of the rebuild. The default value for this setting is 64 blocks, or 32 Kbytes of data.
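For example, raising the amount to 256 blocks processes 128 Kbytes of data (256 x 512 bytes) per rebuild command, completing the rebuild sooner at the expense of host I/O performance; lowering it to 16 blocks (8 Kbytes) favors host I/O and lengthens the rebuild.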
A parity scan compares the data and its associated parity to ensure they match. A parity scan cannot be performed on a LUN that has experienced a disk failure and is operating in degraded mode. Although RAID 1 and RAID 0/1 LUNs do not use parity, you can still perform a parity scan on them. In this case, the parity scan compares the data on the mirrored disks.
previous firmware releases are logged in the major event log. Earlier versions of firmware (prior to HP08) use the standard log file format, also called Asynchronous Event Notification (AEN). Note On firmware HP08 and later, major event logging is enabled by default. If major event logging has been disabled by disabling the UTM, only standard log entries will be available.
Viewing Disk Array Logs To display the disk array controller log files, type: amlog [-s <StartTime>] [-e <EndTime>] [-t <LogType>] [-c] [-d <directory>] [-a <ArrayID>] • StartTime identifies the starting date and time. Log entries with an earlier date and time will not be displayed. The default is the time of the oldest log entry.
actual ArrayID must be used here. An alias cannot be used because alias names are not recorded in the log file. Command Example The following example displays the major event log entries for disk array rack_1. The log entries displayed are limited to only critical entries, and entries made after 0900 on 15 May 2000. amlog -s 05150900 -t mel -c rack_1 Sample Log Entries The following is a sample of a major event log entry.
FRU State = Failed Decoded SCSI Sense: Non-media Component Failure Reporting LUN = 0 For information on interpreting SCSI sense codes, see "SCSI Sense Codes" on page 327. Flushing Disk Array Log Contents Array Manager 60 automatically retrieves the contents of the disk array controller log at regular intervals, typically 15 minutes. However, if necessary you can manually flush (retrieve) the contents of the disk array log to the host.
To purge the oldest log file in the host directory, type: amutil -p Note Always use the amutil -p command to purge the controller logs. This command maintains the catalog pointers used to access the log files. Using a system command such as rm to remove the log files will cause log catalog errors. Management of the log files can be automated by creating a script that purges the oldest log files at regular intervals using amutil -p.
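A minimal sketch of such automation is a crontab entry like the following; the 02:00 schedule is arbitrary, and it is assumed here that amutil is on the PATH and accepts the ArrayID as its final argument (check the amutil man page for the exact syntax):
# Purge the oldest disk array controller log file once a day at 02:00
0 2 * * * amutil -p 0000005EBD20 >/dev/null 2>&1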
Note The patches are not currently included on the HP-UX Support Plus CD-ROM. They must be downloaded from the indicated web sites. Upgrading Disk Firmware The firmware on each disk can be upgraded individually.
Managing the Disk Array Using STM STM is an online diagnostic tool, but it can be used to perform some of the common tasks involved in managing the disk array. The tasks described here are available to all users and do not require the purchase of a license. See "Support Tools Manager" on page 347 for more information on using this tool. Checking Disk Array Status Information The STM Information Tool displays disk array status information.
Unbinding a LUN The STM Expert Tool can be used to unbind a LUN. See "Using the STM Expert Tool" on page 355 for more information on running and using this tool. STM Tool Action xstm, mstm Select: Tools > Expert Tool > Run Select: Utilities > Unbind LUN Adding a Global Hot Spare The STM Expert Tool can be used to add a global hot spare. See "Using the STM Expert Tool" on page 355 for more information on running and using this tool.
Locating Disk Modules The STM Expert Tool can be used to locate disk modules. The LEDs on the disk array components are flashed to aid in identification. See "Using the STM Expert Tool" on page 355 for more information on running and using this tool.
Status Conditions and Sense Code Information The following tables may be useful in interpreting the various types of disk array status information that is returned by the management tools. Where appropriate, any required action is identified. LUN Status Conditions The LUN status condition terminology used by Array Manager 60 (AM60) may differ from that used by STM. Both terms are identified in the table.
Table 40 LUN Status Conditions (cont’d) Status Definition/Action AM60: DEGRADED--REPLACED DISK BEING REBUILT STM: DEGRADED - 2 A rebuild is in progress on the LUN. No action is required. AM60: DEAD--MORE DISK FAILURES THAN REDUNDANT DISKS STM: UNAVAILABLE - 4 Multiple, simultaneous disk failures have occurred on the LUN, causing data to be inaccessible. On a RAID 5 LUN, losing more than one disk will cause this status.
Disk Status Conditions The disk status condition terminology used by Array Manager 60 (AM60) may differ from that used by STM. Both terms are identified in the table. Table 41 Disk Status Conditions Definition/Action AM60: OPTIMAL STM: OPTIMAL (OPT) The disk is operating normally. No action is required. AM60: NON-EXISTENT STM: No Disk (NIN) The disk array controller has no knowledge of a disk in this slot.
Table 41 Disk Status Conditions (cont’d) Status Definition/Action AM60: READ FAILED STM: FLT - 19 The disk array could not read from the disk. Replace the failed disk. AM60: WRONG BLOCK SIZE STM: OFF - 22 The disk uses an incompatible block size (not 512 bytes). Replace with a supported disk. AM60: DISK LOCKED OUT STM: UNSUPP The disk is not supported. Install a supported disk. AM60: NON-SUPPORTED ID STM: FLT - 33 The command made a request using an unsupported ID.
Component Status Conditions Component status conditions are organized into the categories listed in Table 42. The interpretation and action associated with a status depends on the component. See Table 51 on page 379 for more information on Disk System SC10 component status. The component status condition terminology used by Array Manager 60 (AM60) may differ from that used by STM. Both terms are identified in the table.
FRU Codes The FRU codes indicate which disk array component is responsible for the log entry. Log entries that do not involve disk modules typically require you to interpret the FRU Code and the FRU Code Qualifier values to determine which component is identified. To simplify reporting events, components within the disk array have been placed in FRU groups. The FRU Code indicates which FRU group the component is a member of.
Table 43 FRU Code Groups FRU Code Value Group Description 0x08 Disk Enclosure Group - comprises attached disk enclosures. This group includes the power supplies, environmental monitor, and other components in the disk enclosure. See "Disk Enclosure Group FRU Code Qualifier" on page 326 for information on identifying component and status.
Controller Enclosure Group FRU Code Qualifier When the Controller Enclosure group is identified (FRU Code = 0x06), the FRU Code Qualifier byte is interpreted as follows: bit 7 is unused, bits 6 - 4 contain the Component Status, and bits 3 - 0 contain the Component ID. Component Status values: 0 = Optimal, 1 = Warning, 2 = Failed, 3 = Missing.
Component ID values: 0 = Unspecified, 1 = Device, 2 = Power Supply, 3 = Cooling Element, 4 = Temperature Sensors, 6 = Audible Alarm, 7 = Environmental Services Electronics, 8 = Controller Electronics, 9 = Nonvolatile Cache, B = Uninterruptible Power Supply, 0x0C - 0x13 = Reserved, 0x14 = SCSI Target Port, 0x15 = SCSI Initiator Port
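As a worked example (the qualifier value is hypothetical), a log entry reporting FRU Code 0x06 with a FRU Code Qualifier of 0x22 decodes to Component Status 2 (Failed) and Component ID 2 (Power Supply). The two fields can be separated with standard POSIX shell arithmetic:
QUAL=34                                   # 0x22, taken from a hypothetical log entry
echo "Status:    $(( (QUAL / 16) % 8 ))"  # upper field: 2 = Failed
echo "Component: $(( QUAL % 16 ))"        # lower field: 2 = Power Supply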
Disk Enclosure Group FRU Code Qualifier When the Disk Enclosure group is identified (FRU Code = 0x08), the FRU Code Qualifier consists of a Status & Component ID byte (interpreted as described for the Controller Enclosure group) and a Disk Enclosure ID byte. In the Disk Enclosure ID byte, bit 7 is the TIE (Tray Identifier Enable) flag, bits 6 - 4 are reserved, and bits 3 - 0 contain the Disk Enclosure ID. When TIE (Tray Identifier Enable) is set to 0, the Disk Enclosure ID field indicates both the channel and enclosure as follows:
SCSI Sense Codes Table 44 lists the SCSI sense codes that may be returned as part of the log entry. This information may be helpful in interpreting log entries. Only the Additional Sense Code and Additional Sense Code Qualifier fields are required to identify each condition.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 0C 00 If the accompanying sense key = 4, the error is interpreted as follows: Unrecovered Write Error Data could not be written to media due to an unrecoverable RAM, battery or drive error. If the accompanying sense key = 6, the error is interpreted as follows: Caching Disabled Data caching has been disabled due to loss of mirroring capability or low battery capacity.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 11 00 Unrecovered Read Error An unrecovered read operation to a drive occurred and the controller has no redundancy to recover the error (RAID 0, degraded RAID 1, degraded mode RAID 3, or degraded RAID 5). 11 8A Miscorrected Data Error - Due to Failed Drive Read A media error has occurred on a read operation during a reconfiguration operation, User data for the LBA indicated has been lost.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 21 00 Logical Block Address Out of Range The controller received a command that requested an operation at a logical block address beyond the capacity of the logical unit. This error could be in response to a request with an illegal starting address or a request that started at a valid logical block address and the number of blocks requested extended beyond the logical unit capacity.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 29 04 Device Internal Reset The controller has reset itself due to an internal error condition. 29 81 Default Configuration has been Created The controller has completed the process of creating a default logical unit. There is now an accessible logical unit that did not exist previously. The host should execute its device scan to find the new logical unit.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 2F 00 Commands Cleared by Another Initiator The controller received a Clear Queue message from another initiator. This error is to notify the current initiator that the controller cleared the current initiator's commands if it had any outstanding. 31 01 Format Command Failed A Format Unit command issued to a drive returned an unrecoverable error.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 3F 8N Drive No Longer Usable. The controller has set a drive to a state that prohibits use of the drive. The value of N in the ASCQ indicates the reason why the drive cannot be used.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier 3F BD The controller has detected a drive with Mode Select parameters that are not recommended or which could not be changed. Currently this indicates the QErr bit is set incorrectly on the drive specified in the FRU field of the Request Sense data. 3F C3 The controller had detected a failed drive side channel specified in the FRU Qualifier field.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 3F D0 Write Back Cache Battery Has Been Discharged The controller’s battery management has indicated that the cache battery has been discharged. 3F D1 Write Back Cache Battery Charge Has Completed The controller’s battery management has indicated that the cache battery is operational. 3F D8 Cache Battery Life Expiration The cache battery has reached the specified expiration age.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 40 NN Diagnostic Failure on Component NN (0x80 - 0xFF) The controller has detected the failure of an internal controller component. This failure may have been detected during operation as well as during an on-board diagnostic routine. The values of NN supported in this release of the software are listed below.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 44 00 Internal Target Failure The controller has detected a hardware or software condition that does not allow the requested command to be completed. If the accompanying sense key is 0x04: Indicates a hardware failure. The controller has detected what it believes is a fatal hardware or software failure and it is unlikely that a retry would be successful.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 49 80 Drive Reported Reservation Conflict A drive returned a status of reservation conflict. 4B 00 Data Phase Error The controller encountered an error while transferring data to/ from the initiator or to/ from one of the drives.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 85 01 Drive IO Request Aborted IO issued to a failed or missing drive due to a recently failed or removed drive. This error can occur as a result of I/Os in progress at the time of a failed or removed drive. 87 00 Microcode Download Error The controller detected an error while downloading microcode and storing it in non-volatile memory.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 8B 02 Quiescence Is In Progress or Has Been Achieved 8B 03 Quiescence Could Not Be Achieved Within the Quiescence Timeout Period 8B 04 Quiescence Is Not Allowed 8E 01 A Parity/ Data Mismatch was Detected The controller detected inconsistent parity/data during a parity verification. 91 00 General Mode Select Error An error was encountered while processing a Mode Select command.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation 91 36 Command Lock Violation The controller received a Write Buffer Download Microcode, Send Diagnostic, or Mode Select command, but only one such command is allowed at a time and there was another such command active. 91 3B Improper Volume Definition for Auto-Volume Transfer mode AVT is disabled. Controller will operate in normal redundant controller mode without performing Auto-Volume transfers.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier 95 02 Interpretation Controller Removal/Replacement Detected or Alternate Controller Released from Reset The controller detected the activation of the signal/signals used to indicate that the alternate controller has been removed or replaced. 98 01 The controller has determined that there are multiple sub-enclosures with the same ID value selected.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation A6 00 Recovered processor memory failure The controller has detected and corrected a recoverable error in processor memory. A7 00 Recovered data buffer memory error The controller has detected and corrected a recoverable error in the data buffer memory.
Table 44 SCSI Sense Codes (cont’d) Additional Sense Code Additional Sense Code Qualifier Interpretation E0 20/21 Fibre Channel Destination Channel Error ASCQ = 20: Indicates redundant path is not available to devices ASCQ = 21: Indicates destination drive channels are connected to each other Sense byte 26 will contain the Tray ID Sense byte 27 will contain the Channel ID
5 HP-UX DIAGNOSTIC TOOLS Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 346 Support Tools Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Overview STM (Support Tools Manager) is the primary diagnostic tool available for the Disk Array FC60. For diagnosing problems, STM provides the capability to gather and display detailed status information about the disk array. STM can also be used to perform common management tasks.
Support Tools Manager The Support Tools Manager (STM) host-based utility is the primary online diagnostic tool available for the HP SureStore E Disk Array FC60. STM provides the capability for testing, configuring, and evaluating the operational condition of the disk array, and it can also be used to perform common disk array management tasks. STM comes with HP-UX instant ignition and support media.
xstm — the X Windows Interface xstm is the X-Windows screen-based STM interface. Because it is the easiest to use, xstm is the recommended interface for systems that support graphical displays. The main xstm window displays a map representing system resources. The STM system map represents each Disk Array FC60 as two icons labeled “A5277A Array”. See Figure 89. Each icon represents one of the disk array controllers, which are identified by their hardware paths.
mstm — the Menu-based Interface mstm is the menu-based STM interface. It serves as an alternate interface for systems that do not support graphical displays. The main mstm window displays a list of system resources. The Disk Array FC60 is identified as product type “A5277A Array”. See Figure 90. Each entry in the list represents one of the disk array controllers, which are identified by their hardware paths. Select the entry for the disk array you will be testing.
Figure 90 mstm Interface Main Window
STM Tools The STM tools available for use with the HP SureStore E Disk Array FC60 are listed in Table 45. Table 45 Available Support Tools Tool type Description Information Provides detailed configuration and status information for all disk array components. Expert Provides capability to perform common disk array management tasks.
Using the STM Information Tool The STM Information Tool gathers status and configuration information about the selected disk array and stores this information in three logs: the information log, the activity log, and the failure log. Running Information Tool in X Windows 1. At the system prompt: – Type xstm & 2. Click on the desired A5277A Array device icon. 3. To run the Information Tool and view the Information log: – From the menu bar, select Tools – Select Information – Select Run.
Running Information Tool in Menu Mode 1. At the system prompt: – Type mstm – Select Ok 2. To select the desired disk array: – Scroll down using the arrow key, select the A5277A Array – Press Enter. 3. To run the Information Tool and display the Information log: – From the Menubar, select Tools – Select Information – Select Run. The Information Tool builds and displays the Information log. – Select Done when done viewing the log 4.
Interpreting the Information Tool Information Log The Information Log contains status and configuration information for all disk array components.
Using the STM Expert Tool The Expert Tool provides the capability to manage the HP SureStore E Disk Array FC60. Before using the Expert Tool for the first time you are encouraged to read through the Expert Tool help topics. The Step-by-Step instructions in particular provide useful tips on using the Expert Tool. As you perform tasks using the Expert Tool, the status of each operation is displayed in the main window.
Running Expert Tool in Menu Mode 1. At the system prompt: – Type mstm – Select Ok 2. To select the disk array: – Scroll down using the arrow key, select A5277A ARRAY. – Press Enter. 3. To run the Expert tool: – Select Menubar or use the arrow keys to get to the Menubar – Select Tools – Select Expert Tool – Select Run 4. Perform the desired operation using the Expert Tool menus. The Expert Tool menu options are listed in Table 46. 5.
Table 46 Expert Tool Menus and Descriptions Menu Option Property Description Logs View Event Log NA Displays selected event log entries Tests Parity Scan NA Perform a parity scan on a LUN. Utilities Bind LUN NA Bind selected disk modules into a LUN with a specified RAID level. Unbind LUNs NA Unbind a LUN. Replace LUN Zero NA Unbind and rebind LUN 0 Hot Spares Create Create hot spares. Delete Delete hot spares. Drive Flash Fault LED on selected disk.
Troubleshooting 6 TROUBLESHOOTING Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360 Disk Array Installation/Troubleshooting Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . 365 Power-Up Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 366 Controller Enclosure Troubleshooting. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Introduction The modular design of the Disk Array FC60 simplifies the isolation and replacement of failed hardware components. Most disk array components are hot-swappable Field Replaceable Units (FRUs), which can be replaced while the disk array is operating. Some of the FRUs are customer replaceable. Other array components can be replaced in the field, but only by a trained service representative. A complete list of product and part numbers is included in "Replaceable Parts" on page 418.
About Field Replaceable Units (FRUs) The Disk Array FC60 consists of a Controller Enclosure and one or more SureStore E Disk System SC10 enclosures. Table 47 identifies the disk array FRUs and whether they are customer replaceable. See "Removal and Replacement" on page 383 for more information.
HP-UX Troubleshooting Tools There are several tools available for troubleshooting the disk array on an HP-UX host. This includes monitoring the operation of the disk array and gathering information that will help identify and solve the problem. • Array Manager 60 - primarily used to manage the disk array, Array Manager 60 can also be used to check the status of the disk array and to retrieve log information.
EMS Monitor Event Severity Levels Each event detected and reported by the EMS monitor is assigned a severity level, which indicates the impact the event may have on disk array operation. The following severity levels are used for all events: Critical An event that causes host system downtime, or other loss of service. Host system operation will be affected if the disk system continues to be used without correction of the problem. Immediate action is required.
• Probable Cause/Recommended Action – The cause of the event and suggested steps toward a solution. This information should be the first step in troubleshooting. Notification Time: Thu Aug 6 15:18:03 1998 yourserver sent Event Monitor notification information: /peripherals/events/mass_storage/LVD/enclosure/10_12.8.0.255.0.10.0 is !=1. Its current value is CRITICAL(5) Event data from monitor: Event Time: Thu Aug 6 15:18:03 1998 Hostname: yourserver.rose.hp.
Disk Array Installation/Troubleshooting Checklist The following checklist is intended to help isolate and solve problems that may occur when installing the disk array.
Power-Up Troubleshooting When the disk array is powered up, each component performs an internal self-test to ensure it is operating properly. Visual indications of power-up are: • The green Power LED on the controller enclosure is on • The green Power LED on each disk enclosure is on • All fans are operating • No Fault LEDs are on. See Figure 92 on page 368 and Figure 93 on page 377.
Note If no LEDs are ON and the fans are not running, it indicates that no AC power is being supplied to the disk array power supply modules. Check the input AC power to the disk array. See "Applying Power to the Disk Array" on page 198 for information on powering up the disk array. Controller Enclosure Troubleshooting Introduction This section contains information on identifying problems with the disk array controller enclosure.
Controller Enclosure LEDs Figure 92 shows the locations of the status LEDs for the controller enclosure. Table 48 summarizes the operating LED states for all components within the controller enclosure.
Table 48 Normal LED Status for Controller Enclosure Module LED Normal State Controller Enclosure Power On On (green) Power Fault Off Fan Fault Off Controller Fault Off Fast Write Cache On (green) while data is in cache Controller Power On (green) Controller Fault Off Heartbeat Blink (green) Status Green There are 8 status LEDs. The number and pattern of these LEDs depend on how your system is configured.
Master Troubleshooting Table Table 49 contains troubleshooting information for the controller enclosure and modules. Table 49 Master Troubleshooting Controller Symptom Possible Cause Procedure A Controller missing or unplugged Check the power LEDs on both controller modules. If a Power LED is off, make sure that the module is plugged in correctly and its handles are locked in place. B Controller failed If the Fault LED remains on after replacing the controller, go to cause C.
Table 49 Master Troubleshooting Controller (cont’d) Symptom Possible Cause Procedure Controller enclosure and Fan Fault LED (front cover) are on Controller enclosure fan failure caused one or both controller(s) to overheat 1. Stop all activity to the controller module and turn off the power. 2. Replace the failed controller enclosure fan module. 3. Allow the controller to cool down, then turn on the power. 4. Check both controllers for fault LEDs.
Table 49 Master Troubleshooting Controller (cont’d) Symptom Possible Cause Procedure Software errors occur when attempting to access controller or disks A Software function or configuration problems Check the appropriate software and documentation to make sure the system is set up correctly or that the proper command was executed. B Controller enclosure power switches or main circuit breakers in rackmount cabinet turned off Make sure that all power switches are turned on.
Table 49 Master Troubleshooting Controller (cont’d) Symptom Possible Cause Procedure Controller Fan Module Fan Fault LED is on One or both of the fans in the controller fan module has failed. Replace the controller fan module. The power supply fan module is unplugged or has failed. 1. Make sure the power supply fan module is plugged in correctly. Reseat the module if necessary. 2. Check the LEDs on the power supply fan module.
Table 49 Master Troubleshooting Controller (cont’d) Symptom Possible Cause Procedure “Battery Low” error issued by software Power turned OFF for extended period and drained battery power. Turn ON the power and allow controller module to run 7 hours to recharge the batteries. If after 7 hours, the battery low error persists, replace the BBU. Batteries are weak and FRU is due for replacement. Check the last service date for the BBU.
Table 49 Master Troubleshooting Controller (cont’d) Symptom Possible Cause Procedure Power Supply LED (front cover) is on A Power supply module is missing or not plugged in properly. Insert and lock the power supply module into place. If the Fault LED is still on, go to cause B. B Power supply module is overheated or failed.
SureStore E Disk System SC10 Troubleshooting This section contains information on identifying and isolating problems with the Disk System SC10 disk enclosure. Disk Enclosure LEDs Figure 93 shows the locations of the disk enclosure status LEDs. Table 50 summarizes the operating LED states for all components within the disk enclosure. Losing LUN 0 If LUN 0 becomes unavailable because of multiple disk failures, Array Manager 60 may not be able to communicate with the disk array.
A System fault LED B System power LED C Disk activity LED D Disk fault LED E Power On LED F Term. Pwr. LED G Full Bus LED H BCC Fault LED I Bus Active LED J LVD LED K Fan Fault LED Figure 93 Disk Enclosure LEDs Table 50 Disk Enclosure LED Functions LED State Indication System Power Green Power is on. Normal operation. System Fault
Table 50 Disk Enclosure LED Functions (cont’d) LED State Indication BCC Fault Amber Self-test / Fault OFF Normal operation Flashing Peer BCC DIP switch settings do not match LVD Green Bus operating in LVD mode OFF Bus operating in single-ended mode Term. Pwr. Green Termination power is available from the host. Normal operation. OFF There is no termination power. Full Bus Bus Active Fan Power Supply Disk Fault Disk Activity
Note It is normal for the amber Fault LED on a component to go on briefly when the component initially starts up. However, if the Fault LED remains on for more than a few seconds, a fault has been detected. Interpreting Component Status Values (HP-UX Only) Common status terms have specific indications for various disk enclosure components. The component status condition terminology used by Array Manager 60 (AM60) may differ from that used by STM. Both terms are identified in Table 51.
Isolating Causes Table 52 lists the probable causes and solutions for problems you may detect on the disk enclosure. When more than one problem applies to your situation, investigate the first description that applies. The table lists the most basic problems first and excludes them from subsequent problem descriptions.
Table 52 Disk Enclosure Troubleshooting Table (cont’d) Problem Description HW Event Category LED State Status Probable Cause/Solution Power Supply LED is amber Critical Amber Critical – An incompatible or defective component caused a temporary fault. – Power supply hardware is faulty. Unplug the power cord and wait for the LED to turn off. Reinsert the power cord. If fault persists, replace the power supply. Temperature is over limit Critical none Critical Temp is >54.
Table 52 Disk Enclosure Troubleshooting Table (cont’d) Problem Description HW Event Category LED State Status Probable Cause/Solution Peer BCC status, temperature and voltage are Not Available Major Warning none Not Available Internal bus is faulty. Contact HP technical support to replace midplane. Both BCCs: Firmware on BCC A and BCC B are different versions Non-critical none
7 REMOVAL AND REPLACEMENT Removal and Replacement Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 384 Disk Enclosure Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 386 Controller Enclosure Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
Overview This chapter describes removal and replacement procedures for the disk array hot-swappable modules that are customer replaceable. Hot-swappable modules can be replaced without impacting host interaction with the disk array.
Note Is the HP SureStore E Disk Array FC60 customer repairable? Although the modular design of the Disk Array FC60 makes it easy to replace failed components, it is recommended that repair of the product be done by a trained service representative. This includes troubleshooting, and removal and replacement of hot-swappable components.
Disk Enclosure Modules This section describes the procedures for replacing the hot swappable modules in the disk enclosure. Disk Module or Filler Module ! Hot Swappable Component! This procedure describes how to add or replace disk modules and disk slot filler modules. When adding or replacing disk filler modules use the same procedure, ignoring any steps or information that applies to disk modules only.
Note When a disk module is replaced, the new disk inherits the group properties of the original disk. For example, if you replace a disk that was part of LUN 1, the replacement will also become part of LUN 1. If the disk is a replacement for a global hot spare or an unassigned disk, the replacement will become a global hot spare or an unassigned disk. A special feature called drive lockout prevents unsupported disk drives from being used in the disk array.
A - ESD plug-in B - cam latch C - handle Figure 94 Disk Module Removal Installing a Disk Module or Filler Module CAUTION Touching the disk circuit board can cause high energy discharge and damage the disk. Disk modules are fragile and should be handled carefully.
Note If the disk module you are installing has been removed from another Disk Array FC60, you should ensure that the module has a status of Unassigned. This is done by unbinding the LUN the disk module was a part of in the original disk array. See "Moving Disks from One Disk Array to Another" on page 255. 1. Remove the replacement disk from its ESD bag, being careful to grasp the disk by its handle (A in Figure 96). 2. Pull the cam latch (B) away from the disk module. 3.
5. Close the cam latch to seat the module firmly into the backplane. An audible click indicates the latch is closed properly. 6. Check the LEDs (D in Figure 96) above the disk module for the following behavior: – Both LEDs should turn on briefly. – The amber Fault LED should turn off. – The green disk Activity LED should blink for a few seconds and then go out. If the host begins to access the disk, the Activity LED will flash.
A - handle B - cam latch C - capacity label D - LEDs Figure 96 Disk Module Replacement
Disk Enclosure Fan Module ! Hot Swappable Component! A failed fan module should be replaced as soon as possible. There are two fan modules in the enclosure. If a fan fails, the remaining fan module will maintain proper cooling. However, if the remaining fan module fails before the defective fan is replaced, the disk enclosure must be shut down to prevent heat damage. CAUTION Do not remove a disk fan module until you have a replacement.
A - Locking screw B - Pull tab Figure 97 Disk Enclosure Fan Module Removal and Replacement Installing the Fan Module 1. Slide the replacement fan module into the empty slot (C in Figure 97). 2. Tighten the locking screws (A). 3. Check the fan module LED for the following behavior: – The Fan Fault LED should flash amber, and then turn green after a few seconds. If the LED does not turn green, refer to "Troubleshooting" on page 359.
Disk Enclosure Power Supply Module ! Hot Swappable Component! A failed power supply module should be replaced as soon as possible. When one power supply fails, the remaining power supply will maintain the proper operating voltage for the disk enclosure. However, if the remaining power supply fails before the first power supply is replaced, all power will be lost to the disk enclosure. CAUTION Do not remove a power supply module until you have a replacement.
A - cam handle B - locking screw C - power supplies D - power supply slot Figure 98 Disk Enclosure Power Supply Module Removal and Replacement Installing the Power Supply Module 1. With the handle down, slide the replacement power supply into the empty slot (D in Figure 98). The supply begins to engage the backplane with 3/8 inch (8 mm) still exposed. 2. Swing the handle upward to seat the power supply into the backplane. The power supply should be flush with the chassis. 3.
Controller Enclosure Modules This section provides removal and replacement procedures for the controller enclosure modules, plus the controller enclosure front cover. Most controller modules are hot swappable; however, certain restrictions need to be observed for some modules, as identified in these descriptions. The controller modules, the controller fan module, and the BBU are accessed from the front of the controller enclosure. Access to these modules requires that the front cover be removed.
Front Cover Removal/Replacement ! Hot Swappable Component! To gain access to the front of the controller module, the controller fan module, or the battery backup unit (BBU), the front cover must be removed. Removing the Front Cover 1. Pull the bottom of the cover out about one inch to release the pins. See Figure 99. 2. Slide the cover down one inch and pull it away from the controller enclosure.
Installing the Front Cover 1. Slide the top edge of the cover up under the lip of the chassis. 2. Push the cover up as far as it will go, then push the bottom in until the pins snap into the mounting holes. Controller Fan Module ! Hot Swappable Component! CAUTION Do not operate the controller enclosure without adequate ventilation and cooling to the controller modules. Operating without proper cooling to the controller modules may damage them.
To Remove: Loosen captive screw, pull firmly on handle, and remove CRU. To Install: Push controller fan CRU firmly into slot and tighten captive screw. Figure 100 Controller Fan Module Removal and Replacement Installing the Controller Fan Module 1. Slide the new module into the slot and tighten the screw. The captive screw is spring-loaded and will not tighten unless it is inserted all the way into the chassis.
Battery Backup Unit (BBU) Removal/Replacement ! Hot Swappable Component! Note If the Fast Write Cache LED is on when the BBU is removed from the enclosure (or if the BBU fails), write caching will be disabled and the write cache data will be written to disk. However, if a power outage occurs prior to completing the cache write to disk, data may be lost. Therefore, make sure the Fast Write Cache LED is off before replacing the BBU. Removing the BBU 1. Remove the controller enclosure front cover.
Figure 101 BBU Removal and Replacement
Installing the BBU 1. Unpack the new BBU. Save the shipping material for transporting the used BBU to the disposal facility. 2. Fill in the following information on the “Battery Support Information” label on the front of the battery. See Figure 102. a. Record the current date on the blank line next to “Date of Installation.” b. Record the expiration date (two years from the current date) on the line next to “Replacement Date.” 3. Slide the new BBU into the slot and tighten the screws. See Figure 101.
6. Dispose of the old BBU. Note Dispose of the used BBU according to local and federal regulations, which may include hazardous material handling procedures. Power Supply Fan Module Removal/Replacement ! Hot Swappable Component! CAUTION Do not operate the enclosure without adequate ventilation and cooling to the power supplies. Operating the power supplies without proper cooling may damage their circuitry.
Figure 103 Power Supply Fan Module Removal and Replacement Installing the Power Supply Fan Module 1. Slide the power supply fan module into the enclosure. The latch will snap down when the module is seated properly. If the latch remains up, lift up on the ring/latch and push in on the module until it snaps into place. 2. Check the module LEDs for the following behavior: – The green Fan Power LED should be on and the amber Fan Fault LEDs should be off.
Power Supply Module Removal/Replacement ! Hot Swappable Component! A power supply should be replaced as quickly as possible to avoid the possibility of the remaining supply failing and shutting down the disk array. Removing the Power Supply Module 1. Turn off the power switch and unplug the power cord from the failed power supply module. See Figure 104. 2. Lift up on the pull ring to release the latch. See Figure 105. 3. Slide the supply out of the enclosure.
Figure 105 Power Supply Module Removal and Replacement Installing the Power Supply Module 1. Slide the supply into the slot until it is fully seated and the latch snaps into place. 2. Plug in the power cord and turn on the power. See Figure 104. 3. Check the power supply module LED for the following behavior: – The Power LED should go on. Once the power supply is installed and operating, there may be a delay of up to several minutes before the Power Fault LED goes off.
SCSI Cables Removal and Replacement Replacing SCSI cables requires that the disk enclosure be shut down. Shutting down the enclosure will degrade the performance of the array during the replacement. When the replacement is completed and the disk enclosure is powered up, the array will perform a rebuild (since I/O has occurred to the array while the disk enclosure was powered off). Array performance will be reduced until the rebuild is complete. To replace a SCSI cable, complete the following steps: 1.
Once the disk enclosure is powered up, check the status of the disk modules using one of the software management tools. Initially, the disk module status will be either “write failed” or “no_response.” Eventually, all the disk modules should return to “replaced” status. Once this occurs, the disk array will perform a rebuild (a result of I/O occurring during the period the enclosure was powered off). The disk array will operate at reduced performance until the rebuild is completed.
8 REFERENCE / LEGAL / REGULATORY Models and Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 412 PDU/PDRU Products . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 417 Replaceable Parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 418 Reference / Legal / Regulatory A5277A/AZ Controller Enclosure Specifications . . . . . . . . . . . . .
System Requirements Host Systems HP-UX Table 53 Supported HP-UX Host Platform Information Supported Host Boot Support on HP-UX 11.x? Fibre Channel I/O Adapter K-class Yes A3404A V-class Yes A5158A, A3740A L-class Yes A5158A, A3740A D-class Yes A3591B N-class Yes A5158A, A3740A R-class Yes A3591B T-class No A3636A C-class No A5158A on HP-UX 11.x A3740A on HP-UX 10.20 A4xx-A5xx class Yes A5158A Windows NT 4.0 and Windows 2000 Any host running Windows NT 4.0 or Windows 2000.
• Windows 2000 Fibre Channel Host Adapters HP-UX • K-class: A3404A (assy number J2389-60001), 1063 Mbps, short-wave, non-OFC • D- & R-class: A3591A (number A3395-60001), 1063 Mbps, short-wave, non-OFC • T-600 class: A3636A (number A3329-60107), 1063 Mbps, short-wave, non-OFC • V-class: A3740A (number A3740-60001), 1063 Mbps, short-wave, non-OFC Windows NT 4.0 and Windows 2000 See the HP Storage Manager 60 User’s Guide for a list of supported host adapters.
Models and Options The HP SureStore E Disk Array FC60 consists of two products: the A5277A/AZ controller enclosure and the A5294A/AZ SureStore E Disk System SC10, or disk enclosure. Each of these products has its own options as indicated below. A5277A/AZ Controller Enclosure Models and Options • A5277A is a field racked controller enclosure integrated by qualified service-trained personnel. This model can be ordered with up to six A5294A disk enclosures.
Table 54 A5277A/AZ Product Options Option Description Controller Options (must order one option) Single controller module with 256 Mbyte cache, one Media Interface Adaptor, and one controller slot filler module. Configured with HP-UX firmware. 204 Two controller modules with 256 Mbyte cache and two Media Interface Adaptors. Configured with HP-UX firmware. 205 Two controller modules with 256 Mbyte cache and two Media Interface Adaptors. Configured with Windows NT/2000 firmware and Windows NT NVSRAM.
A5294A/AZ Disk Enclosure SC10 Models and Options Order the following product and options as required. Enter the following product and options as sub-items to the A5277A and A5277AZ products above. • A5294A disk enclosure is a field racked Sure Store E Disk System SC10 integrated by a service-trained engineer. This product may be ordered in conjunction with A5277A controller enclosure. To order a disk system SC10 without integration into an array, order A5272A.
Table 55 A5294A Custom Cabling Option Option 701 Description Delete one 2m cable included in A5294A product and add one 5m VHDCI SCSI cable for connection of A5277A to A5294A in a different rack Table 56 A5294A/AZ Storage Capacity Options Option Description Note: All disk enclosures ordered with a single A5277A/AZ must have identical Storage Capacity Options.
Disk Array FC60 Upgrade and Add-On Products Order the following parts to expand or reconfigure your original purchase: Table 58 Upgrade Products Order No. A5276A 9.1-Gbyte disk drive module 10K rpm Ultra 2 LVD A5282A 18.2-Gbyte disk drive module 10K rpm Ultra 2 LVD A5633A 18.2-Gbyte disk drive module 15K rpm Ultra 2 LVD A5595A 36.4-Gbyte disk drive module 10K rpm Ultra 2 LVD A5622A 73.
PDU/PDRU Products Hewlett-Packard offers the following PDUs and PDRUs, with US and international power options, for meeting electrical requirements: Table 59 PDU/PDRU Products Order No.
Replaceable Parts A5277A/AZ Controller Enclosure Replaceable Parts Table 60 Controller Enclosure Replaceable Parts Part Number Field Replaceable Units Exchange Part # A5278-60001 HP-UX1 Controller Module (5v model2) w/32 MB SIMM (no cache DIMMs) This part has been replaced by the A5278-60006. A5278-69001 A5278-60006 HP-UX1 Controller Module (3.
Table 60 Controller Enclosure Replaceable Parts (cont’d) Part Number Field Replaceable Units Exchange Part # A5277-60004 Power Supply Modules n/a A5277-60002 Power Supply Fan Module n/a A5277-60001 Front Door Assembly n/a 5021-1121 Terminator, SCSI, 68 pin, LVD n/a 5064-2464 Media Interface Adapter (MIA) n/a A5294A/AZ Disk Enclosure Replaceable Parts Table 61 Disk Enclosure Replaceable Parts Replacement Part Order No.
A5277A/AZ Controller Enclosure Specifications Dimensions: Height Width Depth 6.75 inches (17.1 cm) 17.5 inches (44.5 cm) 24 inches (61 cm) Weight: Component Weight of Each (lbs) Quantity Subtotal (lbs) Controller modules 6.6 2 13.2 Controller Fan 1.9 1 1.9 21.4 1 21.4 Power Supply 3.3 2 6.6 Power Supply Fan 1.5 1 1.5 Front Cover 2 1 2 31.6 1 31.
AC Power: AC Voltage and Frequency: • 120 VAC (100 - 127 VAC), 50 to 60 Hz single phase • 230 VAC (220 - 240 VAC), 50 to 60 Hz single phase • Auto-ranging Current: Voltage Typical Operating Current Maximum Operating Current In-Rush Current 100 - 127 VAC 1.5 A 2.3 A 21.7 A 220 - 240 VAC 0.8 A 1.2 A 42.9 A Power Consumption: Incoming Voltage AC RMS Typical Power Consumption 100 - 127 VAC 180 watts 200 - 240 VAC 180 watts Heat Output: • 615 BTU/hr.
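As a consistency check on these figures, 180 watts x 3.412 BTU/hr per watt is approximately 614 BTU/hr, which agrees with the 615 BTU/hr heat output listed above.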
Environmental Specifications The HP SureStore E Disk Array FC60 has been tested for proper operation in supported Hewlett-Packard cabinets. If the disk array is installed in an untested rack configuration, care must be taken to ensure that all necessary environmental requirements are met. This includes power, airflow, temperature, and humidity. Failure to meet the required operating specifications may result in product failure.
Non-operating Environmental (shipping and storage): • Temperature: -40º C to 70º C (-40º F to 158º F) • Maximum gradient: 20º C per hour (36º F per hour) • Relative humidity: 10% to 90% RH @ 28º C (wet bulb) • Altitude: 4572 m (0 - 15,000 ft) Acoustics • Meets or exceeds all known international acoustics specifications for computing environments.
A5294A/AZ Disk Enclosure Specifications Dimensions: Height 5.91 in. (15.0 cm) Width 18.9 in. (48.0 cm) Depth 27.2 in. (69.1 cm) Weight: Component Weight of Each (lbs) Quantity Subtotal (lbs) Disk Drive (HH) 2.8 10 28 Fan 3.3 2 7 Power Supply 10.6 2 22 BCC 4.5 2 9 Midplane-Mezzanine 6 1 6 Door 2 1 2 Chassis 35 1 35 Total, Approx. 110 lbs. (50 kg.)
AC Power: AC Voltage and Frequency: • 100 - 127 VAC, 50 to 60 Hz single phase • 220 - 240 VAC, 50 to 60 Hz single phase Current: Voltage Typical Current Maximum Current 100 - 127 VAC 4.8 A 6.5 A 220 - 240 VAC 2.4 A 3.
Environmental Specifications The HP SureStore E Disk Array FC60 has been tested for proper operation in supported Hewlett-Packard cabinets. If the disk array is installed in an untested rack configuration, care must be taken to ensure that all necessary environmental requirements are met. This includes power, airflow, temperature, and humidity. Failure to meet the required operating specifications may result in product failure.
For continuous, trouble-free operation, the disk enclosure should NOT be operated at its maximum environmental limits for extended periods of time. Operating within the recommended operating range, a less stressful operating environment, ensures maximum reliability. Note The environmental limits in a nonoperating state (shipping and storage) are wider: • Temperature: -40º C to 70º C (-40º F to 158º F) • Maximum gradient: 24º C per hour (43.
Warranty and License Information Hewlett-Packard Hardware Limited Warranty HP warrants to you, the end-user Customer, that HP SureStore E Disk Array FC60 hardware components and supplies will be free from defects in material and workmanship under normal use after the date of purchase for three years. If HP or Authorized Reseller receives notice of such defects during the warranty period, HP or Authorized Reseller will, at its option, either repair or replace products that prove to be defective.
Software Product Limited Warranty The HP Software Product Limited Warranty will apply to all Software that is provided to you by HP as part of the HP SureStore E Disk Array FC60 for the NINETY (90) day period specified below. This HP Software Product Limited Warranty will supersede any non-HP software warranty terms that may be found in any documentation or other materials contained in the computer product packaging with respect to covered Software.
This warranty extends only to the original owner in the original country of purchase and is not transferable. Consumables, such as batteries, have no warranty.
THE REMEDIES IN THIS WARRANTY STATEMENT ARE CUSTOMER'S SOLE AND EXCLUSIVE REMEDIES. EXCEPT AS INDICATED ABOVE, IN NO EVENT WILL HP OR ITS SUPPLIERS BE LIABLE FOR LOSS OF DATA OR FOR DIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL (INCLUDING LOST PROFIT OR DATA), OR OTHER DAMAGE, WHETHER BASED IN CONTRACT, TORT, OR OTHERWISE. Some Countries, states or provinces do not allow limitations on the duration of an implied warranty, so the above limitation or exclusion might not apply to you.
Software or disable any licensing or control features of the Software. If the Software is licensed for "concurrent use", you may not allow more than the maximum number of authorized users to Use the Software concurrently. You may not allow the Software to be used by any other party or for the benefit of any other party. Ownership. The Software is owned and copyrighted by HP or its third party suppliers.
Restricted Rights Legend 1.) Use, duplication or disclosure by the U.S. Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause of DFARS 252.227-7013. Hewlett-Packard Company 3000 Hanover Street Palo Alto, Ca 94304 U.S.A. Copyright © 1997, 1998 Hewlett-Packard Company. All Rights Reserved. 2.) Customer further agree that Software is delivered and Licensed as "Commercial Computer Software" as defined in DFARS 252.
Regulatory Compliance Safety Certifications: • UL listed • CUL certified • TUV certified with GS mark • Gost Certified • CE-Mark EMC Compliance • US FCC, Class A • CSA, Class A • VCCI, Class A • BCIQ, Class A • CE-Mark • C-Tick Mark
FCC Statements (USA Only) The Federal Communications Commission (in 47 CFR 15.105) has specified that the following notice be brought to the attention of the users of this product. This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment.
VCCI Statement (Japan) This equipment is in the Class A category information technology equipment based on the rules of Voluntary Control Council For Interference by Information Technology Equipment (VCCI). When used in a residential area, radio interference may be caused.
Spécification ATI Classe A (France Seulement) DECLARATION D'INSTALLATION ET DE MISE EN EXPLOITATION d'un matèriel de traitement de l'information (ATI), classé A en fonction des niveaux de perturbations radioélectriques émis, définis dans la norme européenne EN 55022 concernant la Compatibilité Electromagnétique.
Geräuschemission (For Germany Only) 438 • • • LpA: 45.0 dB (suchend) • • Alle andere Konfigurationen haben geringere Geräuschpegel. Am fiktiven Arbeitsplatz nach DIN 45635 T. 19. Die Daten sind die Ergebnisse von Typprüfungen an Gerätekonfigurationen mit den höchsten Geräuschemissionen:12 Plattenlaufwerke. Für weitere Angaben siehe unter Umgebungsbedingungen.
Declaration of Conformity according to ISO / IEC Guide 22 and EN 45014 Manufacturer Name: Manufacturer Address: Hewlett-Packard Company Enterprise Storage Business Unit P.O.Box 15 Boise, Idaho U.S.A.
GLOSSARY adapter A printed circuit assembly that transmits user data (I/Os) between the host system’s internal bus and the external Fibre Channel link and vice versa. Also called an I/O adapter, FC adapter, or host bus adapter (HBA). ArrayID The value used to identify a disk array when using Array Manager 60. The ArrayID can be either the disk array S/N, or an alias assigned to the disk array.
bind The process of configuring unassigned disks into a LUN disk group. Disks can be bound into one of the following LUN disk groups: RAID 5, RAID 1 (single mirrored pair), RAID 0/1 (multiple mirrored pairs). bootware This controller firmware comprises the bring-up or boot code, the kernel or executive under which the firmware executes, the firmware to run hardware diagnostics, initialize the hardware and to upload other controller firmware/software from Flash memory, and the XMODEM download functionality.
Class of Service The types of services provided by the Fibre Channel topology and used by the communicating port. controller A removable unit that contains an array controller. dacstore A region on each disk used to store configuration information. During the Start Of Day process, this information is used to configure controller NVSRAM and to establish other operating parameters, such as the current LUN configuration.
disk array controller A printed-circuit board with memory modules that manages the overall operation of the disk array. The disk array controllers manage all aspects of disk array operation, including I/O transfers, data recovery in the event of a failure, and management of disk array capacity. There are two controllers (A and B) in the disk array enclosure. Both controllers are active, each assuming ownership of LUNs within the disk array.
EPROM Erasable Programmable Read-Only Memory. fabric A Fibre Channel term that describes a crosspoint switched network, which is one of three existing Fibre Channel topologies. A fabric consists of one or more fabric elements, which are switches responsible for frame routing. A fabric can interconnect a maximum of 244 devices. The fabric structure is transparent to the devices connected to it and relieves them from responsibility for station management. FC-AL See Fibre Channel Arbitrated Loop (FC-AL).
Fibre Channel A high-speed serial interface capable of transferring data at 100 MB/s (full speed), 50 MB/s (half speed), 25 MB/s (quarter speed), or 12.5 MB/s (eighth speed) over distances of up to 100 m over copper media, or up to 10 km over optical links. The disk array operates at full speed. Fibre Channel Arbitrated Loop (FC-AL) One of three existing Fibre Channel topologies in which two to 126 ports are interconnected serially in a single loop circuit. Access to the FC-AL is controlled by an arbitration scheme.
frame The smallest indivisible unit of application-data transfer used by Fibre Channel. Frame size depends on the hardware implementation and is independent of the application software. Frames begin with a 4-byte Start of Frame (SOF), end with a 4-byte End of Frame (EOF), include a 24-byte frame header and 4-byte Cyclic Redundancy Check (CRC), and can carry a variable data payload from 0 to 2112 bytes, the first 64 of which can be used for optional headers.
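The field sizes given in this definition determine the total size of a frame on the link. The following sketch is illustrative only; the frame_size helper is hypothetical and not part of any disk array tool, and it simply applies the sizes stated above.

```python
# Field sizes taken from the glossary definition above; the frame_size
# helper itself is illustrative and not part of any disk array tool.
SOF_BYTES = 4        # Start of Frame delimiter
HEADER_BYTES = 24    # frame header
CRC_BYTES = 4        # Cyclic Redundancy Check
EOF_BYTES = 4        # End of Frame delimiter
MAX_PAYLOAD = 2112   # data payload may range from 0 to 2112 bytes

def frame_size(payload_bytes):
    """Return the total size of a frame carrying the given payload."""
    if not 0 <= payload_bytes <= MAX_PAYLOAD:
        raise ValueError("payload must be between 0 and 2112 bytes")
    return SOF_BYTES + HEADER_BYTES + payload_bytes + CRC_BYTES + EOF_BYTES

print(frame_size(0))     # 36 bytes: framing overhead only
print(frame_size(2112))  # 2148 bytes: largest possible frame
```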
host A processor that runs an operating system using a disk array for data storage and retrieval. hot swappable Hot swappable components can be removed and replaced while the disk array is online without disrupting system operation. Disk array controller modules, disk modules, power supply modules, and fan modules are all hot swappable components. I/O operation An operation initiated by a host computer system during which data is either written to or read from a peripheral. image (disk image) See mirroring.
LUN (logical unit number) A logical entity made up of one or more disks bound together, which appears to the host system as a single, contiguous disk. Multiple LUNs can be created on the same disk array. A numeric value is assigned to a LUN at the time it is created. LVD-SCSI Low voltage differential implementation of SCSI. Also referred to as Ultra2 SCSI. LVM (Logical Volume Manager) The default disk configuration strategy on HP-UX. In LVM, one or more physical disk modules are configured into volume groups that are then configured into logical volumes. loop address The unique ID of a node in Fibre Channel loop topology, sometimes referred to as a Loop ID.
NVSRAM The disk array controller stores operating configuration information in this non-volatile SRAM (referred to as NVSRAM). The contents of NVSRAM can only be accessed or changed using special diagnostic tools. path See primary disk array path or primary path. parity A data protection technique that provides data redundancy by creating extra data based on the original data.
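To illustrate how this extra data provides redundancy, the following minimal sketch assumes the XOR-style parity commonly associated with RAID 5; the block contents and the xor_blocks helper are hypothetical and are not taken from the disk array firmware. It also shows how a lost block can be regenerated from the surviving blocks and the parity, which is the principle behind a rebuild.

```python
# Minimal sketch of XOR parity, as commonly used by RAID 5; block contents
# and the xor_blocks helper are hypothetical, not taken from the firmware.
def xor_blocks(blocks):
    """XOR the corresponding bytes of several equal-length blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

# Three data blocks written to three disks, parity written to a fourth.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks([d0, d1, d2])

# If the disk holding d1 fails, its data can be regenerated from the
# surviving blocks and the parity block (the basis of a rebuild).
recovered = xor_blocks([d0, d2, parity])
assert recovered == d1
```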
PROM (Programmable Read-Only Memory) SP-resident boot code that loads the SP microcode from one of the disk array’s database drives when the disk array is powered up or when an SP is enabled. RAID An acronym for “Redundant Array of Independent Disks.” RAID was developed to provide data redundancy using independent disk drives. RAID is essentially a method for configuring multiple disks into a logical entity (LUN) that appears to the host system as a single, contiguous disk drive.
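The following sketch illustrates how striping lets several disks appear to the host as one contiguous drive. It assumes a simple RAID 0-style layout with a fixed stripe segment size; the disk count, segment size, and locate helper are hypothetical examples and do not describe the disk array's internal mapping (RAID 5, for instance, additionally rotates a parity segment across the disks).

```python
# Hypothetical RAID 0-style mapping used only to illustrate striping; the
# segment size and disk count are examples, not the disk array's defaults.
STRIPE_SEGMENT_BLOCKS = 128   # blocks per stripe segment on each disk
NUM_DISKS = 4                 # disks bound into the LUN

def locate(logical_block):
    """Map a logical block on the LUN to (disk index, block on that disk)."""
    segment, offset = divmod(logical_block, STRIPE_SEGMENT_BLOCKS)
    disk = segment % NUM_DISKS
    segment_on_disk = segment // NUM_DISKS
    return disk, segment_on_disk * STRIPE_SEGMENT_BLOCKS + offset

print(locate(0))    # (0, 0)   first segment lands on the first disk
print(locate(128))  # (1, 0)   next segment lands on the second disk
print(locate(512))  # (0, 128) mapping wraps back to the first disk
```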
rebuild The process of recreating the data from a failed disk onto a replacement disk or global hot spare, using either mirrored data or parity information, depending on the RAID level of the LUN. Until a rebuild is complete, the disk array is operating in degraded mode and is vulnerable to a second disk failure. reconstruction See rebuild. resident controller The last controller to complete the Start-Of-Day process in a given slot. At the completion of the SOD process, the identification of the controller is stored. This controller remains the resident controller until another controller completes the SOD process in the same slot.
SIMM (Single In-line Memory Module) A memory module that provides the local storage (cache) for an SP. An SP must have at least two 4-MB memory modules to support the storage system cache. Start Of Day (SOD) The initialization process used by each disk array controller to configure itself and establish various operating parameters. Each controller runs its own SOD process. The SOD process occurs following a power-on reset, or following the insertion of a controller.
termination The electrical circuitry required at the ends of a SCSI bus. Termination provides the proper load for the drivers on the bus, and also impedance matching to prevent signal reflections at the ends of the cable. The SCSI bus requires termination at both ends of the bus. One end of the SCSI bus is terminated by the adapter's internal termination. The other end should have a terminator placed on the 68-pin high density SCSI connector on the last SCSI peripheral. If this device is not terminated, data errors may occur. topology The physical layout of devices on a network.
INDEX AM60Srvr starting 241 AM60Srvr daemon 241 amcfg binding a LUN 289 changing LUN ownership 293 unbinding a LUN 292 amdsp checking rebuild progress 305 listing disk arrays 292 amlog viewing logs 309 ammgr adding hot spare 296 assigning an alias 297 displaying parity scan status 307 halting parity scan 307 performing a parity scan 306 removing hot spare 296 resetting battery age 312 setting cache flush limit 301 setting cache flush threshold 300 setting cache page size 300 setting controller date and time 297
calculating LUN capacity 292 changing LUN ownership 293 changing rebuild priority settings 305 checking disk array status 282 checking rebuild progress 305 command summary 278 described 238 displaying parity scan status 307 flushing disk array log 311 halting parity scan 307 identifying disk modules 292 installing 240, 241 listing all disk arrays 288 locating disk modules 304 managing disk array logs 309 managing log files 307 performing a parity scan 306 removing a global hot spare 296 replacing a LUN 294
controller fan module described 40 removal and replacement 398 controller memory modules DIMM 40 SIMM 40 controller module described 38 interface connectors 39 LEDs, described 39 controller time synchronizing with host 297 cstm described 351 current disk enclosure 425 inrush 147 steady state 147 total operating and in-rush 148 D data channel, verifying 206 data parity described 48 data striping described 49 device logs STM 351 dimensions controller enclosure 420 disk enclosure 424 DIMM 40 disabling disk WCE
power-down sequence 205 power-up sequence 198 rebuild process 61 upgrade and add-on products 416 using as a boot device 222 ventilation 403 disk array capacity maximum 75 disk array configurations five disk enclosure, high availability and performance 92 five disk enclosure, maximum capacity 94 four disk enclosure, high availability and performance 88 four disk enclosure, maximum capacity 90 one disk enclosure, non-high availability 78 recommended 77 six disk enclosure, high availability and performance 96
primary LUN path 64 drive lockout 387 drivers system 146 E electrical requirements 147 EMC compliance 434 EMS hardware event monitoring 21, 362 enabling disk WCE 303 enclosure number 245 environmental requirements disk enclosure electrical requirements 149 electrical 147 power distribution units (PDUs/ PDRUs) 150 recommended European circuit breakers 149 recommended PDU/PDRU for HP System/E racks 151 site 147 environmental specifications 422, 426 ESD strap part number 160 evaluating performance 250 event me
tips for selecting disks 62 global hot spare disks described 61 H hardware event monitoring See EMS hardware event monitoring hardware path interpreting 208 peripheral device addressing 208 sample ioscan 207 volume set addressing (VSA) 209 heat output controller enclosure 421 disk enclosure 425 high availability features 21, 47 planning 73 high availability topology 102 error recovery 118 hardware components 115 redundant HP FC-AL Hubs 115 high availability, distance, and capacity topology 102, 120–123 err
log files managing 307 logs managing 309 loop ID See Fibre Channel host ID losing LUN 0 376 LUN addressing 208 assigning ownership 247 binding using Array Manager 60 289 binding using SAM 267 binding using STM 314 calculating capacity using Array Manager 60 292 changing ownership using Array Manager 60 293 configuring 242 described 65 replacing using Array Manager 60 294 selecting disks for 243 selecting RAID level for 247 setting stripe segment size 249 unbinding using Array Manager 60 293 unbinding using
recommended for HP System/E racks 151 troubleshooting 380 performance array configuration 73 I/Os per second 74 impact of configuration settings 250 rebuild 61 SCSI channels 72 split-bus configuration 74 peripheral device addressing (PDA) 208 planning a disk array configuration 71 power AC input specifications 425 AC input, disk enclosure 421 DC specifications 422, 426 disk enclosure 425 recommended UPS 152 power cable troubleshooting 380 power consumption controller enclosure 421 disk enclosure 425 power
modules 256 rescanning for disk arrays 288 Rittal rack 18 running Array Manager 60 241 S safety compliance 434 SAM 260 adding a global hot spare 273 assigning an alias 264 binding a LUN 267 checking disk array status 260 interpreting status indicators 264 locating disk modules 265 removing a global hot spare 275 replacing a LUN 271 unbinding a LUN 271 scanning for parity errors 306 SCSI drive connections 39 SCSI cables 187 SCSI cabling full-bus configurations 188 removal and replacement 407 split-bus config
cache flush threshold 300 cache page size 300 configuration switches 176 controller date and time 297 stripe segment size 249 SF21 384 SF88 384 SIMMs 40 single-system distance topology 102, 110 D-Class, K-Class, T-Class, and V-Class 110 error recovery 113 high availability 111 non-high availability 111 site requirements 147 slot number 245 software 21 installing disk array management 213 requirements 213 management 20 software configuration 216 software tools 20 specifications 424 controller enclosure 420 d
T throughput Fibre Channel 71 SCSI channels 72 topologies unsupported Windows 131 topology basic 102, 103 error recovery 108 campus 102, 125 high availability 102 error recovery 118 hardware components 115 redundant HP FC-AL Hubs 115, 118 high availability, distance, and capacity 102, 120–123 error recovery 123 single-system distance 102, 110, 113 error recovery 113 high availability 111 non-high availability 111 switch configurations 129 troubleshooting checklist 365 event notification 363 isolating causes