HARDWARE GUIDE MegaRAID® SCSI 320-2 RAID Controller November 2002
This document contains proprietary information of LSI Logic Corporation. The information contained herein is not to be used by or disclosed to third parties without the express written permission of an officer of LSI Logic Corporation. LSI Logic products are not intended for use in life-support appliances, devices, or systems. Use of any LSI Logic product in such applications without written consent of the appropriate LSI Logic officer is prohibited.
FCC Regulatory Statement This device complies with Part 15 of the FCC Rules. Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received, including interference that may cause undesired operation. Warning: Changes or modifications to this unit not expressly approved by the party responsible for compliance could void the user's authority to operate the equipment.
Copyright © 2002 by LSI Logic Corporation. All rights reserved.
Preface This book is the primary reference and Hardware Guide for the LSI Logic MegaRAID® SCSI 320-2 Controller. It contains instructions for installing the MegaRAID controller and for configuring RAID arrays. It also contains background information on RAID. The MegaRAID SCSI 320-2 controller supports single-ended and low-voltage differential (LVD) SCSI devices on two Ultra320 and Wide SCSI channels with data transfer rates up to 320 Mbytes/s.
• Chapter 6, Hardware Installation, explains how to install the MegaRAID SCSI 320-2 controller. • Chapter 7, Installing and Configuring Clusters, explains how to implement clustering to enable two independent servers to access the same shared data storage. • Chapter 8, Troubleshooting, provides troubleshooting information for the MegaRAID SCSI 320-2 controller. • Appendix A, SCSI Cables and Connectors, describes the SCSI cables and connectors used with the MegaRAID SCSI 320-2 controller.
MegaRAID Problem Report Form (Cont.)

System Information: Motherboard; BIOS manufacturer; BIOS Date; Operating System; Op. Sys. Ver.; MegaRAID Driver Ver.; Video Adapter; CPU Type/Speed; Network Card; System Memory; Other disk controllers installed; Other adapter cards installed; Description of problem; Steps necessary to re-create problem (steps 1 through 4).
Logical Drive Configuration

Use this form to record the configuration details for your logical drives. For each logical drive (LD0 through LD39), record: RAID Level, Stripe Size, Logical Drive Size, Cache Policy, Read Policy, Write Policy, and # of Physical Drives.
Physical Device Layout Use this form to record the physical device layout.
Physical Device Layout (Cont.) For each channel (Channel 0 and Channel 1) and each Target ID, record: device type, logical drive number/drive number, manufacturer/model number, and firmware level.
Contents

Chapter 1, Overview: SCSI Channels; NVRAM and Flash ROM; SCSI Connectors; Single-Ended and Differential SCSI Buses; Maximum Cable Length for SCSI Standards; SCSI Bus Widths and Maximum Throughput; Documentation (MegaRAID SCSI 320-2 Hardware Guide, MegaRAID Configuration Software Guide, MegaRAID Operating System Driver Installation Guide)

Chapter 2, Introduction to RAID: RAID Benefits (Improved I/O, Increased Reliability); MegaRAID SCSI 320-2 – Host-Based RAID Solution; RAID Overview (Disk Spanning, Parity, Hot Spares, Hot Swapping, Disk Rebuild, Logical Drive States, SCSI Drive States, Disk Array Types, Enclosure Management)

Chapter 3, RAID Levels: Selecting a RAID Level; RAID 0; RAID 1; RAID 5; RAID 10; RAID 50

Chapter 4, Features: SCSI Connectors; SCSI Termination; SCSI Firmware; RAID Management (MegaRAID BIOS Configuration Utility, WebBIOS Configuration Utility, Power Console Plus, MegaRAID Manager); Compatibility (Server Management, SCSI Device Compatibility, Software)

Chapter 5, Configuring Physical Drives, Arrays, and Logical Drives: Configuring SCSI Physical Drives (Distributing Drives, Basic Configuration Rules); Configuring Arrays; Creating Logical Drives; Configuring Logical Drives; Planning the Array Configuration

Chapter 6, Hardware Installation: Installation Steps 1 through 13 (Unpack; Power Down; Install Cache Memory; Check Jumper Settings; Set Termination; Set SCSI Terminator Power; Install Battery Pack (Optional); Install MegaRAID SCSI 320-2; Connect SCSI Devices; Set Target IDs; Power Up; Run the MegaRAID BIOS Configuration Utility)

Chapter 7, Installing and Configuring Clusters: Verifying Disk Access and Functionality; Installing Cluster Service Software; Configuring Cluster Disks; Validating the Cluster Installation; Configuring the Second Node; Verifying Installation; Installing SCSI Drives (Configuring the SCSI Devices, Terminating the Shared SCSI Bus)

Chapter 8, Troubleshooting

Appendix A, SCSI Cables and Connectors

Index

Customer Feedback
Chapter 1 Overview This chapter provides an overview of the MegaRAID® SCSI 320-2 controller. It contains the following sections: • Section 1.1, “SCSI Channels” • Section 1.2, “NVRAM and Flash ROM” • Section 1.3, “SCSI Connectors” • Section 1.4, “Single-Ended and Differential SCSI Buses” • Section 1.5, “Maximum Cable Length for SCSI Standards” • Section 1.6, “SCSI Bus Widths and Maximum Throughput” • Section 1.7, “Documentation” The MegaRAID SCSI 320-2 controller has two SCSI channels.
1.2 NVRAM and Flash ROM A 32 KB x 8 NVRAM stores RAID system configuration information. The MegaRAID SCSI 320-2 firmware is stored in flash ROM for easy upgrade. 1.3 SCSI Connectors The MegaRAID SCSI 320-2 has two ultra high-density 68-pin external SCSI connectors and two 68-pin internal SCSI connectors for internal SCSI drives. 1.4 Single-Ended and Differential SCSI Buses The SCSI standard defines two electrical buses: the single-ended bus and the differential bus.
Table 1.1 Maximum Cable Length for SCSI Standards (Cont.)

Standard             Maximum Cable Length (LVD)   Maximum # of Drives
Ultra 2 SCSI         25 m                         1
Ultra 2 SCSI         12 m                         7
Wide Ultra 2 SCSI    25 m                         1
Wide Ultra 2 SCSI    12 m                         15
Ultra320             12 m                         15

1.6 SCSI Bus Widths and Maximum Throughput Table 1.2 lists the SCSI bus widths and maximum throughput, based on the SCSI speeds.
1.7.1 MegaRAID SCSI 320-2 Hardware Guide The Hardware Guide contains the RAID overview, RAID planning, and RAID system configuration information; read this document first. 1.7.2 MegaRAID Configuration Software Guide This manual describes the software configuration utilities that you can use to configure and modify RAID systems.
Chapter 2 Introduction to RAID This chapter introduces important RAID concepts. It contains the following sections: • Section 2.1, “RAID Benefits” • Section 2.2, “MegaRAID SCSI 320-2 – Host-Based RAID Solution” • Section 2.3, “RAID Overview” RAID (Redundant Array of Independent Disks) is a data storage method in which data, along with parity information, is distributed among two or more hard disks (called an array) to improve performance and reliability.
2.1.2 Increased Reliability The electromechanical components of a disk subsystem operate more slowly, require more power, and generate more noise and vibration than electronic devices. These factors reduce the reliability of data stored on disks. RAID provides a way to achieve much better fault tolerance and data reliability. 2.2 MegaRAID SCSI 320-2 – Host-Based RAID Solution RAID products are either host-based or external. The MegaRAID SCSI 320-2 controller is a host-based RAID solution.
2.2.2 SCSI-to-SCSI External RAID A SCSI-to-SCSI external RAID product puts the RAID intelligence inside the RAID chassis and uses a plain SCSI host adapter installed in the network server. The data transfer rate is limited to the bandwidth of the SCSI channel. A SCSI-to-SCSI external RAID product that has two Wide SCSI channels operating at speeds up to 320 Mbytes/s must squeeze the data into a single Wide SCSI (320 Mbytes/s) channel back to the host computer.
• A combination of any two of the above conditions

2.3.3 Consistency Check A consistency check verifies the correctness of redundant data in a RAID array. For example, in a system with distributed parity, checking consistency means computing the parity of the data drives and comparing the results to the contents of the parity drives. 2.3.4 Fault Tolerance Fault tolerance is achieved through cooling fans, power supplies, and the ability to hot swap drives.
Figure 2.1 Disk Striping [figure: segments 1 through 12 interleaved across four drives attached to the MegaRAID controller]

Disk striping involves partitioning each disk drive’s storage space into stripes that can vary in size from 2 to 128 Kbytes. These stripes are interleaved in a repeated, sequential manner. The combined storage space is composed of stripes from each drive. MegaRAID SCSI 320-2 supports stripe sizes of 2, 4, 8, 16, 32, 64, or 128 Kbytes.
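The interleaving pattern described above can be sketched as a small helper (a hypothetical illustration, not controller firmware): each logical segment maps to a drive and a stripe row by simple modular arithmetic.

```python
# Hypothetical sketch of round-robin striping; locate_block is an
# illustrative name, not a MegaRAID API.
def locate_block(logical_block: int, num_drives: int) -> tuple[int, int]:
    """Return (drive_index, stripe_row) for a logical block number."""
    drive = logical_block % num_drives   # blocks are interleaved across drives
    row = logical_block // num_drives    # each full pass starts a new stripe row
    return drive, row

# With 4 drives, the first four segments land on drives 0-3 in row 0,
# the next four on drives 0-3 in row 1, matching the pattern in Figure 2.1.
assert locate_block(0, 4) == (0, 0)
assert locate_block(5, 4) == (1, 1)
```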
2.3.5.2 Stripe Size The stripe size is the length of the interleaved data segments that MegaRAID SCSI 320-2 writes across multiple drives. MegaRAID SCSI 320-2 supports stripe sizes of 2, 4, 8, 16, 32, 64, or 128 Kbytes. 2.3.6 Disk Mirroring With disk mirroring (used in RAID 1), data written to one disk drive is simultaneously written to another disk drive, as shown in Figure 2.2.
2.3.7 Disk Spanning Disk spanning allows multiple disk drives to function like one big drive. Spanning overcomes lack of disk space and simplifies storage management by combining existing resources or adding relatively inexpensive resources. For example, four 60 Gbyte disk drives can be combined to appear to the operating system as one single 240 Gbyte drive. Disk spanning alone does not provide reliability or performance enhancements.
Table 2.1 describes how disk spanning is used for RAID 10 and RAID 50.

Table 2.1 Spanning for RAID 10 and RAID 50
RAID 10: Configure RAID 10 by spanning two contiguous RAID 1 logical drives. The RAID 1 logical drives must have the same stripe size.
RAID 50: Configure RAID 50 by spanning two contiguous RAID 5 logical drives. The RAID 5 logical drives must have the same stripe size.
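The spanning rules in Table 2.1 amount to a simple validity check, sketched below with a hypothetical helper (the real configuration utilities enforce these rules themselves):

```python
# Illustrative check of the Table 2.1 spanning rules: two contiguous
# logical drives of the base level (1 for RAID 10, 5 for RAID 50) that
# share a single stripe size. can_span is a hypothetical name.
def can_span(logical_drives, base_level: int) -> bool:
    """logical_drives: list of (raid_level, stripe_size_kb) tuples."""
    return (len(logical_drives) == 2
            and all(level == base_level for level, _ in logical_drives)
            and len({stripe for _, stripe in logical_drives}) == 1)

assert can_span([(1, 64), (1, 64)], base_level=1) is True    # valid RAID 10 span
assert can_span([(5, 64), (5, 128)], base_level=5) is False  # stripe mismatch
```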
A hot spare that has a capacity closest to and at least as great as that of the failed drive takes the place of the failed drive. Note: Hot spares are used only in arrays with redundancy—for example, RAID levels 1, 5, 10, and 50. A hot spare connected to a specific MegaRAID SCSI 320-2 controller can be used only to rebuild a drive that is connected to the same controller. 2.3.10 Hot Swapping Hot swapping is the manual replacement of a defective physical disk unit while the computer is still running.
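The hot-spare selection rule stated above (capacity closest to, and at least as great as, the failed drive) can be sketched as follows; `pick_hot_spare` is a hypothetical name, not a controller API:

```python
# Illustrative sketch of hot-spare selection per the rule in this section.
def pick_hot_spare(spare_capacities, failed_capacity):
    """Return the spare capacity closest to, and at least as great as,
    the failed drive's capacity; None if no spare qualifies."""
    eligible = [s for s in spare_capacities if s >= failed_capacity]
    return min(eligible) if eligible else None

# A 60 Gbyte failure picks the 73 Gbyte spare, not the larger 146 Gbyte one.
assert pick_hot_spare([36, 73, 146], 60) == 73
assert pick_hot_spare([36], 60) is None  # no spare is large enough
```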
totally dedicated to rebuilding the failed drive. The MegaRAID SCSI 320-2 rebuild rate can be configured between 0% and 100%. At 0%, the rebuild is done only if the system is not doing anything else. At 100%, the rebuild has a higher priority than any other system activity. 2.3.12 Logical Drive States Table 2.2 describes the logical drive states. Table 2.2 Logical Drive States State Description Optimal Drive operating condition is good. All configured drives are online.
2.3.14 Disk Array Types Table 2.4 describes the RAID disk array types. Table 2.4 Disk Array Types Type Description Software-Based The array is managed by software running in a host computer using the host CPU bandwidth. The disadvantages associated with this method are the load on the host CPU and the need for different software for each operating system. SCSI to SCSI The array controller resides outside of the host computer and communicates with the host via a SCSI adapter in the host.
Chapter 3 RAID Levels This chapter describes each supported RAID level and the factors to consider when choosing a RAID level. It contains the following sections: 3.1 • Section 3.1, “Selecting a RAID Level” • Section 3.2, “RAID 0” • Section 3.3, “RAID 1” • Section 3.4, “RAID 5” • Section 3.5, “RAID 10” • Section 3.6, “RAID 50” Selecting a RAID Level To ensure the best performance, you should select the optimal RAID level when you create a system drive.
3.2 RAID 0 RAID 0 provides disk striping across all drives in the RAID subsystem. RAID 0 does not provide any data redundancy, but does offer the best performance of any RAID level. RAID 0 breaks up data into smaller blocks and then writes a block to each drive in the array. The size of each block is determined by the stripe size parameter, set during the creation of the RAID set. RAID 0 offers high bandwidth.
Figure 3.1 RAID 0 Array [figure: segments 1 through 12 striped across four drives]

3.3 RAID 1 In RAID 1, the MegaRAID SCSI 320-2 duplicates all data from one drive to a second drive. RAID 1 provides complete data redundancy, but at the cost of doubling the required data storage capacity. Uses Use RAID 1 for small databases or any other environment that requires fault tolerance but small capacity.
Figure 3.2 RAID 1 Array [figure: segments 1 through 4 on one drive, each duplicated on a second drive]

3.4 RAID 5 RAID 5 includes disk striping at the byte level and parity. In RAID 5, the parity information is written to several drives. RAID 5 is best suited for networks that perform many small I/O transactions simultaneously. RAID 5 addresses the bottleneck issue for random I/O operations.
exclusive-or assist make RAID 5 performance exceptional in many different environments. Uses Provides high data throughput, especially for large files. Use RAID 5 for transaction processing applications, because each drive can read and write independently. If a drive fails, the MegaRAID SCSI 320-2 uses the distributed parity data to recreate all missing information. Use also for office automation and online customer service that requires fault tolerance.
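The parity-based recovery described above rests on exclusive-or arithmetic: XOR-ing the surviving members of a stripe with the parity block regenerates the missing data. A minimal sketch (illustrative only, not the controller's hardware exclusive-or engine):

```python
# Illustrative XOR parity: the same operation both generates parity and
# rebuilds a lost stripe member from the survivors plus parity.
from functools import reduce

def xor_parity(blocks):
    """XOR a list of equal-length bytes objects column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x0f\x0f", b"\xf0\x0f", b"\xff\x00"]
parity = xor_parity(data)

# "Lose" drive 1 and rebuild its contents from the rest plus parity.
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```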
3.5 RAID 10 RAID 10 is a combination of RAID 0 and RAID 1. RAID 10 has mirrored drives. RAID 10 breaks up data into smaller blocks, and then stripes the blocks of data to each RAID 1 RAID set. Each RAID 1 RAID set then duplicates its data to its other drive. The size of each block is determined by the stripe size parameter, which is set during the creation of the RAID set. RAID 10 can sustain one to four drive failures while maintaining data integrity, if each failed disk is in a different RAID 1 array.
Figure 3.4 RAID 10 Array [figure: data striped (RAID 0) across two RAID 1 mirrored pairs, disks 1 through 4]

3.6 RAID 50 RAID 50 provides the features of both RAID 0 and RAID 5, including both parity and disk striping across multiple drives. RAID 50 is best implemented on two RAID 5 disk arrays with data striped across both disk arrays.
RAID 50 can sustain one to four drive failures while maintaining data integrity, if each failed disk is in a different RAID 5 array. Uses Works best when used with data that requires high reliability, high request rates, high data transfer, and medium to large capacity. Strong Points Provides high data throughput, data redundancy, and very good performance. Weak Points Requires 2 to 4 times as many parity drives as RAID 5. Drives: 6 to 30. The initiator takes one ID per channel.
Chapter 4 Features This chapter explains the features of the MegaRAID SCSI 320-2. It contains the following sections: • Section 4.1, “SMART Technology” • Section 4.2, “Configuration on Disk” • Section 4.3, “Configuration Features” • Section 4.4, “Array Performance Features” • Section 4.5, “RAID Management Features” • Section 4.6, “Fault Tolerance Features” • Section 4.7, “Software Utilities” • Section 4.8, “Operating System Software Drivers”
• SCSI data transfers up to 320 Mbytes/s • Synchronous operation on a wide LVD SCSI bus • Up to 15 LVD SCSI devices on each of the Wide buses • Up to 256 Mbytes of 3.3 V PC100 (or faster) SDRAM cache memory in one single-sided or double-sided DIMM socket (Cache memory is used for read and write-back caching and for RAID 5 parity generation.)
4.1 SMART Technology The MegaRAID Self Monitoring Analysis and Reporting Technology (SMART) feature detects up to 70% of all predictable drive failures. SMART monitors the internal performance of all motors, heads, and drive electronics. You can recover from drive failures through online physical drive migration. 4.
4.4 Array Performance Features Table 4.2 lists the array performance features. Table 4.2 4.5 Array Performance Features Specification Feature Host data transfer rate 532 Mbytes/s Drive data transfer rate 320 Mbytes/s Stripe sizes 2, 4, 8, 16, 32, 64, or 128 Kbytes RAID Management Features Table 4.3 lists the RAID management features. Table 4.
4.6 Fault Tolerance Features Table 4.4 lists the fault tolerance features. Table 4.4 4.7 Fault Tolerance Features Specification Feature Support for SMART Yes Optional battery backup for cache memory Standard. Provided on the MegaRAID Controller.
4.8 Operating System Software Drivers MegaRAID SCSI 320-2 includes a DOS software configuration utility, and drivers for: • Windows NT 4.0 • Windows 2000 • Windows .NET • Windows XP • Novell NetWare 5.1, 6.0 • Red Hat Linux 7.2, 7.3 • DOS version 6.xx or later The DOS drivers for MegaRAID are contained in the firmware on the MegaRAID controller, except for the DOS ASPI and CD drivers. Call your LSI OEM support representative or access the web site at www.lsilogic.
Table 4.6 MegaRAID SCSI 320-2 Specifications (Cont.) Parameter Specification Nonvolatile RAM 32 KB x 8 for storing RAID configuration Memory type One 72-bit 168-pin SDRAM DIMM socket provides write-through or write-back caching on a logical drive basis. It also provides adaptive read-ahead. Operating voltage 5.00 V +/- 0.25 V and 3.3 V +/- 0.
transfers, RAID processing, drive rebuilding, cache management, and error recovery. 4.10.2 Cache Memory Cache memory resides in a single 72-bit DIMM socket that requires one unbuffered 3.3 V SDRAM single-sided or double-sided DIMM. Possible configurations are 32, 64, 128, or 256 Mbytes. MegaRAID supports write-through or write-back caching, which can be selected for each logical drive.
4.10.6 SCSI Bus The MegaRAID SCSI 320-2 controller has two Ultra320 Wide SCSI channels that support low voltage differential SCSI devices with active termination. Both synchronous and asynchronous devices are supported. The MegaRAID controller provides automatic termination disable via cable detection. Each channel supports up to 15 wide or seven non-wide SCSI devices at speeds up to 320 Mbytes/s per SCSI channel. The MegaRAID controller supports up to six non-disk devices per controller.
Table 4.7 SCSI Firmware (Cont.) Feature Description Multi-threading Up to 255 simultaneous commands with elevator sorting and concatenation of requests per SCSI channel Stripe size Variable for all logical drives: 2, 4, 8, 16, 32, 64, or 128 Kbytes Rebuild Multiple rebuilds and consistency checks, with user-definable priority
4.11.3 Power Console Plus Power Console Plus runs in Windows NT, 2000, XP, and .NET. It configures, monitors, and maintains multiple RAID servers from any network node or a remote location. See the MegaRAID Configuration Software Guide for additional information. 4.11.4 MegaRAID Manager MegaRAID Manager is a character-based, non-GUI utility for Linux and Novell NetWare that changes policies and parameters, and monitors RAID systems. See the MegaRAID Configuration Software Guide for additional information.
Chapter 5 Configuring Physical Drives, Arrays, and Logical Drives This chapter explains how to configure SCSI physical drives, arrays, and logical drives connected to the MegaRAID SCSI 320-2 controller. It contains the following sections: • Section 5.1, “Configuring SCSI Physical Drives” • Section 5.2, “Configuring Arrays” • Section 5.3, “Creating Logical Drives” • Section 5.4, “Configuring Logical Drives” • Section 5.5, “Planning the Array Configuration”
5.1.2 Basic Configuration Rules You should observe the following guidelines when connecting and configuring SCSI devices in a RAID array: • Attach non-disk SCSI devices to a single SCSI channel that does not have any disk drives. • Distribute the SCSI hard disk drives equally among all available SCSI channels, except any SCSI channel that is reserved for non-disk drives. • You can place up to 30 physical disk drives in a logical array, depending on the RAID level.
Table 5.1 Physical Device Configuration (Cont.) SCSI ID Device Description Termination? 4 5 6 8 9 10 11 12 13 14 15 SCSI Channel 1 0 1 2 3 4 5 6 8 9 10 11 12 13
Table 5.1 Physical Device Configuration (Cont.) SCSI ID Device Description Termination? 14 15

5.1.4 Logical Drive Configuration Use Table 5.2 to record the configuration for your logical drives.
Table 5.2 Logical Drive Configuration (Cont.) Logical Drive RAID Level Stripe Size Logical Drive Size Cache Policy Read Policy Write Policy # of Physical Drives LD17 LD18 LD19 LD20 LD21 LD22 LD23 LD24 LD25 LD26 LD27 LD28 LD29 LD30 LD31 LD32 LD33 LD34 LD35 LD36 LD37 LD38 LD39
5.1.5 Physical Device Layout Use Table 5.3 to record the physical device layout.
Table 5.3 Physical Device Layout (Cont.) For each channel (Channel 0 and Channel 1) and each Target ID, record: device type, logical drive number/drive number, manufacturer/model number, and firmware level.

5.2 Configuring Arrays Connect the physical drives to the MegaRAID SCSI 320-2, configure the drives, then initialize them. The number of physical disk drives that an array can support depends on the firmware version.
5.2.2 Creating Hot Spares Any drive that is present, formatted, and initialized, but is not included in a array or logical drive is automatically designated as a hot spare. You can also designate drives as hot spares using the MegaRAID BIOS Configuration Utility, the MegaRAID Manager, or Power Console Plus. See the MegaRAID Configuration Software Guide for additional information. 5.3 Creating Logical Drives Logical drives are arrays or spanned arrays that are presented to the operating system.
Table 5.4 describes the RAID levels, including the number of drives required, and the capacity.
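As a rough sketch of the usable-capacity rules usually associated with these RAID levels (assuming equal-size drives; the function name and the two-span RAID 50 assumption are illustrative, not taken from this guide):

```python
# Hypothetical usable-capacity calculator for the supported RAID levels,
# assuming all member drives are the same size.
def usable_capacity(raid_level: int, drives: int, size_gb: float) -> float:
    if raid_level == 0:
        return drives * size_gb          # striping only, no redundancy
    if raid_level == 1:
        return size_gb                   # mirrored pair: half the raw space
    if raid_level == 5:
        return (drives - 1) * size_gb    # one drive's worth of parity
    if raid_level == 10:
        return (drives // 2) * size_gb   # striped mirrors
    if raid_level == 50:
        spans = 2                        # assumes two spanned RAID 5 sets
        return (drives - spans) * size_gb
    raise ValueError("unsupported RAID level")

assert usable_capacity(5, 4, 60.0) == 180.0   # 4 drives, one of parity
assert usable_capacity(10, 4, 60.0) == 120.0  # two mirrored pairs
```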
5.3.1.3 Maximizing Drive Performance You can configure an array for optimal performance. But optimal drive configuration for one type of application will probably not be optimal for any other application. Table 5.6 lists basic guidelines for the performance characteristics for RAID drive arrays at each RAID level. Table 5.6 Performance Characteristics for RAID Levels RAID Level Performance Characteristics 0 Excellent for all types of I/O activity, but provides no data security.
5.4 Configuring Logical Drives After you have installed the MegaRAID SCSI 320-2 controller in the server and have attached all physical disk drives, perform the following actions to prepare a RAID array: 1. Optimize the MegaRAID SCSI 320-2 controller options for your system. See Chapter 3 for additional information. 2. Run the MegaRAID Manager. 3. If necessary, perform a low-level format of the SCSI drives that will be included in the array and the drives to be used for hot spares.
must be available 24 hours per day? Will the information stored in this disk array contain large audio or video files that must be available on demand? Will this disk array contain data from an imaging system? You must identify the purpose of the data to be stored in the disk subsystem before you can confidently choose a RAID level and a RAID configuration. 5.5 Planning the Array Configuration Fill out Table 5.8 to help plan this array. Table 5.
Use Table 5.9 to plan the array configuration.
Chapter 6 Hardware Installation This chapter explains how to install the MegaRAID SCSI 320-2 controller. It contains the following sections: • Section 6.1, “Hardware Requirements” • Section 6.2, “Installation Steps” • Section 6.3, “Summary” 6.1 Hardware Requirements You must have the following in order to install the MegaRAID SCSI 320-2 controller and create arrays: • A host computer with the following: – A motherboard with 5 V/3.3 V PCI expansion slots
6.1.1 Optional Equipment You may also want to install SCSI cables that interconnect the MegaRAID SCSI 320-2 to external SCSI devices. 6.2 Installation Steps The MegaRAID SCSI 320-2 provides extensive customization options. If you need only basic MegaRAID SCSI 320-2 features and your computer does not use other adapter cards with resource settings that may conflict with MegaRAID SCSI 320-2 settings, even custom installation can be quick and easy. Table 6.1 lists the hardware installation steps.
Table 6.1 Step 6.2.1 Hardware Installation Steps (Cont.) Action Additional Information 11 Replace the computer cover and turn the power on. Be sure the SCSI devices are powered up before, or at the same time as, the host computer. 12 Run the MegaRAID BIOS Configuration Utility. Optional. 13 Install software drivers for the desired operating systems. Step 1: Unpack Unpack and install the hardware in a static-free environment.
6.2.3 Step 3: Install Cache Memory Important: A minimum of 32 Mbytes of cache memory is required. The cache memory must be installed before the MegaRAID controller is operational. Install cache memory DIMMs on the MegaRAID controller card in the cache memory socket. Use a 72-bit 3.3 V single-sided or double-sided 168-pin unbuffered DIMM. Lay the controller card component-side up on a clean static-free surface.
6. You can now add or remove DRAM modules from the MegaRAID controller, as described above. 7. Reattach the battery pack harness to J10 on the MegaRAID controller. 8. Reinstall the MegaRAID controller in the computer. Follow the instructions in this chapter. 9. Replace the computer cover and turn the computer power on. 6.2.3.2 Recommended Memory Vendors Call LSI Logic Technical Support at 678-728-1250 for a current list of recommended memory vendors. 6.2.
Figure 6.2 MegaRAID SCSI 320-2 Controller Layout [figure: board layout showing J2, J3, J4, J5, J10, J16, J17, and J18, the DIMM socket, internal high-density SCSI connectors, and external very-high-density SCSI connectors for channels 0 and 1]

6.2.4.1 J2 SCSI Activity LED J2 is a four-pin connector that attaches to a cable that connects to the hard disk LED mounted on the computer enclosure. The LED indicates data transfers (SCSI bus activity). 6.2.4.2 J3 Dirty Cache LED J3 is a two-pin header for the dirty cache LED.
settings. Leave at the default setting (jumper on pins 1 and 2) to allow the MegaRAID controller to automatically set its own SCSI termination. Table 6.3 6.2.4.4 Pinout for J4/J5 Termination Enable Type of SCSI Termination J4/J5 Setting Software control of SCSI termination using drive detection (default). Short pins 1-2 Permanently disable all onboard SCSI termination. Short pins 2-3 Permanently enable all onboard SCSI termination.
6.2.4.6 J17/J18 SCSI Bus Termination Power J17 and J18 are 2-pin jumpers that control the termination power setting for channel 0 and channel 1, respectively. Leave both jumpers at the default setting (jumper installed on pins 1 and 2) to allow the PCI bus to provide termination power. (When the jumpers are removed, the SCSI bus provides termination power.) 6.2.5 Step 5: Set Termination Each MegaRAID SCSI channel can be individually configured for termination enable mode by setting the J4 and J5 jumpers.
Figure 6.3 Termination of Internal SCSI Disk Arrays [figure: boot drive at ID0 and drive at ID1 with no termination, terminator at ID2, MegaRAID SCSI 320-2 controller in the host computer]

6.2.5.3 Terminating External Disk Arrays In most array enclosures, the end of the SCSI cable has an independent SCSI terminator module that is not part of any SCSI drive. In this way,
SCSI termination is not disturbed when any drive is removed, as shown in Figure 6.4.

Figure 6.4 Terminating External Disk Arrays [figure: external SCSI drives at IDs 0 through 6, with termination enabled on the last drive]

6.2.5.4 Terminating Internal and External Disk Arrays You can use both internal and external drives with the MegaRAID SCSI 320-2. You still must make sure that the proper SCSI termination and termination power is preserved, as shown in Figure 6.5.
Figure 6.5 Terminating Internal and External Disk Arrays [figure: internal drives with boot drive at ID0, no termination at ID1, and a terminator at ID2, plus external SCSI drives at IDs 0 through 6 with termination enabled on the last drive; MegaRAID 320-2 in the host computer]
6.2.5.5 Connecting Non-Disk SCSI Devices SCSI tape drives and SCSI CD-ROM drives must each have a unique SCSI ID regardless of the SCSI channel they are attached to. The general rule for Unix systems is: • Tape drive set to SCSI ID 2 • CD-ROM drive set to SCSI ID 5 Make sure that no hard drives are attached to the same SCSI channel as the non-disk SCSI devices. Drive performance will be significantly degraded if SCSI hard disk drives are attached to this channel.
6.2.6 Step 6: Set SCSI Terminator Power J17 and J18 control the termination power setting for the MegaRAID 320-2 SCSI channels, as explained in Section 6.2.4.6, “J17/J18 SCSI Bus Termination Power,” page 6-8. By default (jumper installed on pins 1 and 2 of J17 and J18), the PCI bus supplies termination power. See the documentation for each SCSI device for information about enabling TermPWR. Important: The SCSI channels need termination power to operate.

6.2.7 Step 7: Install Battery Pack (Optional)
Figure 6.7 MegaRAID Controller with Backup Battery Module [figure]

6.2.7.1 Configuring the Battery Pack After you install the MegaRAID controller, you must configure the battery pack in the BIOS Configuration Utility. To do this, select the Objects menu, then select Battery Backup. Table 6.6 explains the battery backup menu options. Table 6.6 Backup Battery Menu Options Menu Item Explanation Battery Pack PRESENT appears if the battery pack is properly installed; ABSENT if it is not.
6.2.7.2 Charging the Battery Pack The battery pack is shipped uncharged, and you must charge it for 6 hours before you can use it. The battery pack will not supply power for the full data retention time until it is fully charged. It is a good idea to set the MegaRAID controller cache write policy option to Write-Through during the battery pack charging period. After the battery pack is fully charged, you can change the cache write policy to Write-Back. 6.2.7.
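The recommendation above can be summarized as a tiny decision rule (a hypothetical sketch, not the actual configuration interface):

```python
# Illustrative sketch of the charging-period advice in this section:
# stay in write-through until the battery pack is fully charged.
def recommended_write_policy(battery_present: bool, fully_charged: bool) -> str:
    if battery_present and fully_charged:
        return "write-back"     # cached writes are protected through a power loss
    return "write-through"      # no battery, or still charging: do not risk the cache

assert recommended_write_policy(True, False) == "write-through"  # charging period
assert recommended_write_policy(True, True) == "write-back"      # fully charged
```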
batteries must be sent to a specific location for proper disposal. Call the Rechargeable Battery Recycling Corporation at 352-376-6693 (FAX: 352-376-6658) for an authorized battery disposal site near you.
Figure 6.8 Installing the MegaRAID SCSI 320-2 Controller (labels: bracket screw, 32-bit slots (3.3 V), 64-bit slots (5 V))

6.2.9 Step 9: Connect SCSI Devices
Use SCSI cables to connect SCSI devices to the MegaRAID SCSI 320-2. The MegaRAID SCSI 320-2 has the following connectors:
• Two internal high-density 68-pin SCSI connectors: J7 is for SCSI channel 0; J8 is for SCSI channel 1.
• Two external very high-density 68-pin SCSI connectors: J9 is for SCSI channel 0; J19 is for SCSI channel 1.
Use this procedure to connect SCSI devices:
1. Disable termination on any SCSI device that does not sit at the end of the SCSI bus.
2. Configure all SCSI devices to supply TermPWR.
3. Set a proper target ID (TID) for each SCSI device.
4. Distribute SCSI devices evenly across the SCSI channels for optimum performance.
5. Do not exceed the maximum cable length for the number and type of SCSI devices you are using. (See Table 1.1 for details.)
6.
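As a sanity check before cabling, the rules above — each device needs a unique TID per channel, and TID 7 is reserved for the controller (see the following section) — can be sketched in code. The `check_scsi_setup` helper below is a hypothetical illustration, not an LSI utility:

```python
def check_scsi_setup(devices):
    """Validate a planned SCSI layout before cabling.

    devices: list of (channel, tid) tuples. Returns a list of problem
    strings; an empty list means the layout passes these basic checks.
    """
    problems = []
    seen = set()
    for ch, tid in devices:
        if tid == 7:
            # The MegaRAID controller itself occupies TID 7 on each channel.
            problems.append("channel %d: TID 7 is reserved for the controller" % ch)
        if (ch, tid) in seen:
            problems.append("channel %d: duplicate TID %d" % (ch, tid))
        seen.add((ch, tid))
    return problems

# Two devices on channel 0 share TID 5, which the check flags.
print(check_scsi_setup([(0, 2), (0, 5), (1, 2), (0, 5)]))
# ['channel 0: duplicate TID 5']
```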
The controller automatically occupies TID 7 on each SCSI channel. Eight-bit SCSI devices can use only TIDs 0 to 6, while 16-bit devices can use TIDs 0 to 15. The arbitration priority of a SCSI device depends on its TID. Table 6.7 shows the relative priority of each target ID:

Table 6.7 Priority of Target IDs
Priority: Highest → Lowest
TID: 7, 6, 5, ..., 2, 1, 0, 15, 14, ..., 9, 8
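The arbitration order in Table 6.7 can be expressed as a small helper; `tid_priority` is a hypothetical function for illustration, not part of any LSI tool:

```python
def tid_priority(tid: int) -> int:
    """Return the arbitration rank of a SCSI target ID.

    0 is the highest rank. Per SCSI arbitration rules, IDs 7..0
    outrank IDs 15..8.
    """
    if not 0 <= tid <= 15:
        raise ValueError("SCSI TID must be 0-15")
    if tid <= 7:
        return 7 - tid          # 7 -> 0 (highest), 0 -> 7
    return 8 + (15 - tid)       # 15 -> 8, 8 -> 15 (lowest)

# Sort a set of device IDs from highest to lowest arbitration priority.
ids = [0, 7, 8, 15, 3]
print(sorted(ids, key=tid_priority))  # [7, 3, 0, 15, 8]
```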
Table 6.8 Example of Mapping for SCSI 320-2 (Cont.)
ID   Channel 0   Channel 1
12   A6-7        A6-8
13   A7-2        A7-3
14   A7-5        A7-6
15   A7-8        A8-1

6.2.11 Step 11: Power Up
Replace the computer cover and reconnect the AC power cords. Turn power on to the host computer. Set up the power supplies so that the SCSI devices are powered up at the same time as, or before, the host computer. If the computer is powered up before a SCSI device, the device might not be recognized.
6.2.13 Step 13: Install the Operating System Driver
MegaRAID can operate under MS-DOS or any DOS-compatible operating system using the standard AT BIOS INT 13h hard disk drive interface. To operate with other operating systems, you must install software drivers. MegaRAID provides software drivers on the Driver and Documentation CD for the following operating systems:
• MS-DOS version 6.xx or later
• Microsoft Windows NT 4.0, Windows 2000, Windows XP, Windows .NET
• Novell NetWare 5.1, 6.
Chapter 7 Installing and Configuring Clusters
This chapter explains how clusters work and how to install and configure them. It has the following sections:
• Section 7.1, “Overview”
• Section 7.2, “Benefits of Clusters”
• Section 7.3, “Installation and Configuration”
• Section 7.4, “Cluster Installation”
• Section 7.
7.2 Benefits of Clusters
Clusters provide three basic benefits:
• Improved application and data availability
• Scalability of hardware resources
• Simplified management of large or rapidly growing systems

7.3 Installation and Configuration
Use the following procedure to install and configure your system as part of a cluster.
1. Unpack the controller, following the instructions in Chapter 6.
2. Set the hardware termination for the controller as “always on”. See Section 6.2.4.
cannot be failed over individually when assigned drive letters in Windows 2000.
11. Follow the on-screen instructions to create arrays and save the configuration.
12. Repeat step 4 – step 7 for the second controller.
13. Power down the second server.
14. Attach the cables for the second controller to the shared enclosure and power up the second server.
15.
5. Repeat step 1 – step 4 to install the device driver on the second system. After the cluster is installed and both nodes are running Microsoft Windows 2000 Advanced Server, the installation detects a SCSI processor device.
6. At the Found New Hardware Wizard prompt, choose to display a list of the known drivers so that you can select a specific driver.
7. Click on Next.
8. Select the driver that you want to install for the device.
configuration is unsupported. HCL certification requires a separate private network adapter.

7.3.3 Shared Disk Requirements
Disks can be shared by the nodes. The requirements for sharing disks are as follows:
• Physically attach all shared disks, including the quorum disk, to the shared bus.
• Make sure that all disks attached to the shared bus are seen from all nodes. You can check this at the setup level in the BIOS Configuration Utility. See Section 7.
Table 7.1 shows which nodes and storage devices should be powered on during each step.

Table 7.1 Nodes and Storage Devices
Step: Set Up Networks — Node 1: On, Node 2: On, Storage: Off
Comments: Make sure that power to all storage devices on the shared bus is turned off. Power on all nodes.

Step: Set Up Shared Disks — Node 1: On, Node 2: Off, Storage: On
Comments: Power down all nodes. Next, power on the shared storage, then power on the first node.

Step: Verify Disk Configuration — Node 1: Off, Node 2: On, Storage: On
Comments: Shut down the first node. Power on the second node.
7.4.2 Installing Microsoft Windows 2000 Install Microsoft Windows 2000 on each node. See your Windows 2000 manual for information. Log on as administrator before you install the Cluster Services. 7.4.3 Setting Up Networks Note: Do not allow both nodes to access the shared storage device before the Cluster Service is installed. In order to prevent this, power down any shared storage devices and then power up nodes one at a time.
7.4.4 Configuring the Cluster Node Network Adapter
Note: Use crossover cables for the network card adapters that access the cluster nodes. If you do not use the crossover cables properly, the system will not detect the network card adapter that accesses the cluster nodes. If the network card adapter is not detected, then you cannot configure the network adapters during the Cluster Service installation.
the connection and correctly assign it. Follow these steps to change the name: 1. Right-click on the Local Area Connection 2 icon. 2. Click on Rename. 3. In the text box, type Private Cluster Connection and then press Enter. 4. Repeat steps 1-3 to change the name of the public LAN network adapter to Public Cluster Connection. 5. The renamed icons should look like those in the picture above. Close the Networking and Dial-up Connections window.
Select the network speed from the drop-down list. Do not use “Autoselect” as the speed setting; some adapters can drop packets while determining the speed. Set the network adapter speed by clicking the appropriate option, such as Media Type or Speed.
10. Configure all network adapters in the cluster that are attached to the same network identically, so that they use the same Duplex Mode, Flow Control, Media Type, and so on. These settings should stay the same even if the hardware is different.
11.
7.4.7 Verifying Connectivity and Name Resolution Perform the following steps to verify that the network adapters are working properly: Note: Before proceeding, you must know the IP address for each network card adapter in the cluster. You can obtain it by using the IPCONFIG command on each node. 1. Click on Start. 2. Click on Run. 3. Type cmd in the text box. 4. Click on OK. 5. Type ipconfig /all and press Enter. IP information displays for all network adapters in the machine. 6.
Ping 192.168.0.172 and Ping 10.1.1.1 from Node 1. Then, from Node 2, you would type Ping 192.168.0.172 and Ping 10.1.1.1. To confirm name resolution, ping each node from a client using the node’s machine name instead of its IP address.

7.4.8 Verifying Domain Membership
All nodes in the cluster must be members of the same domain and capable of accessing a domain controller and a DNS server. You can configure them as either member servers or domain controllers.
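The connectivity checks of Section 7.4.7 can be collected into a small script. The sketch below only builds the command lines to run on one node; `build_ping_checks` and the Node 1 addresses (192.168.0.171, 10.1.1.2) are assumptions for illustration, not values from this guide:

```python
def build_ping_checks(local_ips, peer_ips, peer_name):
    """Return the ping command lines used to verify cluster connectivity.

    Each node pings its own adapters first, then the peer's adapters,
    and finally the peer's machine name to confirm name resolution.
    """
    cmds = ["ping %s" % ip for ip in local_ips]
    cmds += ["ping %s" % ip for ip in peer_ips]
    cmds.append("ping %s" % peer_name)  # name-resolution check
    return cmds

# Assumed example: Node 1's own adapters, then Node 2's adapters and name.
print(build_ping_checks(["192.168.0.171", "10.1.1.2"],
                        ["192.168.0.172", "10.1.1.1"], "node2"))
```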
6. Right-click on Users. 7. Point to New and click on User. 8. Type in the cluster name and click on Next. 9. Set the password settings to User Cannot Change Password and Password Never Expires. 10. Click on Next, then click on Finish to create this user. Note: If your company’s security policy does not allow the use of passwords that never expire, you must renew the password on each node before password expiration. You must also update the Cluster Service configuration. 11.
• Create a small partition. (Use a minimum of 50 Mbytes for the quorum disk; Windows 2000 generally recommends a quorum disk of 500 Mbytes.)
• Dedicate a separate disk for the quorum resource. Failure of the quorum disk would cause the entire cluster to fail; therefore, Windows 2000 strongly recommends that you use a volume on a RAID disk array.
During the Cluster Service installation, you must provide the drive letter for the quorum disk.
7.4.12 Assigning Drive Letters
After you have configured the bus, disks, and partitions, you must assign drive letters to each partition on each clustered disk. Follow these steps to assign drive letters.
Note: Mount points are a feature of the file system that lets you mount a file system using an existing directory without assigning a drive letter. Mount points are not supported on clusters.
13. Highlight the file and press the Del key to delete it from the clustered disk. 14. Repeat the process for all clustered disks to make sure they can be accessed from the first node. After you complete the procedure, shut down the first node, power on the second node and repeat the procedure above. Repeat again for any additional nodes. After you have verified that all nodes can read and write from the disks, turn off all nodes except the first, and continue with this guide. 7.4.
9. Click on I Understand to accept the condition that Cluster Service is supported only on hardware listed on the Hardware Compatibility List. This is the first node in the cluster; therefore, you must create the cluster itself. 10. Select the first node in the cluster, as shown below and then click on Next. 11. Enter a name for the cluster (up to 15 characters), and click on Next. (In our example, the cluster is named ClusterOne.) 12.
14. Click on Next. The Add or Remove Managed Disks screen displays next. This screen is discussed in the following section about configuring cluster disks.

7.4.15 Configuring Cluster Disks
Windows 2000 Managed Disks displays the SCSI disks, as shown on the screen below. It displays only SCSI disks that do not reside on the same bus as the system disk. Because of this, a node that has multiple SCSI buses may also list SCSI disks that are not intended for use as shared storage.
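The filtering rule described above — only disks on a bus other than the system disk’s bus are offered as managed disks — can be sketched as follows. The disk records and the `candidate_managed_disks` helper are hypothetical illustrations, not the actual Windows API:

```python
def candidate_managed_disks(disks, system_disk):
    """Return disks eligible to appear in Add or Remove Managed Disks.

    Windows 2000 lists only SCSI disks that do not share a bus with the
    system disk, so a node with multiple SCSI buses may still list disks
    that are not intended as shared storage.
    """
    return [d for d in disks if d["bus"] != system_disk["bus"]]

sys_disk = {"name": "Disk0", "bus": 0}
disks = [{"name": "Disk1", "bus": 0},   # same bus as the system disk: hidden
         {"name": "Disk2", "bus": 1},
         {"name": "Disk3", "bus": 1}]
print([d["name"] for d in candidate_managed_disks(disks, sys_disk)])
# ['Disk2', 'Disk3']
```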
In production clustering scenarios, you need to use more than one private network for cluster communication to avoid having a single point of failure. Cluster Service can use private networks for cluster status signals and cluster management. This provides more security than using a public network for these roles. In addition, you can use a public network for cluster management, or you can use a mixed network for both private and public communications.
The order in which the Cluster Service Configuration Wizard presents these networks can vary. In this example, the public network is presented first. Follow these steps to configure the clustered disks:
1. The Add or Remove Managed Disks dialog box specifies disks on the shared SCSI bus that will be used by Cluster Service. Add or remove disks as necessary, then click on Next. The following screen displays.
2. Click on Next in the Configure Cluster Networks dialog box. 3. Verify that the network name and IP address correspond to the network interface for the public network. 4. Check the box Enable this network for cluster use. 5. Select the option All communications (mixed network), as shown below, and click on Next. The next dialog box configures the private network. Make sure that the network name and IP address correspond to the network interface used for the private network. 6.
In this example, both networks are configured so that both can be used for internal cluster communication. The next dialog window offers an option to modify the order in which the networks are used. Because Private Cluster Connection represents a direct connection between nodes, it remains at the top of the list. In normal operation, this connection is used for cluster communication.
9. Enter the unique cluster IP address and subnet mask for your network, then click on Next. The Cluster Service Configuration Wizard shown below automatically associates the cluster IP address with one of the public or mixed networks. It uses the subnet mask to select the correct network.
10. Click Finish to complete the cluster configuration on the first node.
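The wizard’s subnet-based matching can be illustrated with Python’s standard `ipaddress` module. The network names and addresses below are made-up examples, and `pick_network` is a hypothetical helper, not part of any Microsoft tool:

```python
import ipaddress

def pick_network(cluster_ip, mask, networks):
    """Pick the network whose subnet contains the cluster IP address.

    networks maps a connection name to a CIDR string; returns the name
    of the matching network, or None if the address fits none of them.
    """
    target = ipaddress.ip_interface("%s/%s" % (cluster_ip, mask)).network
    for name, cidr in networks.items():
        if ipaddress.ip_network(cidr) == target:
            return name
    return None

nets = {"Public Cluster Connection": "192.168.0.0/24",
        "Private Cluster Connection": "10.1.1.0/24"}
print(pick_network("192.168.0.200", "255.255.255.0", nets))
# Public Cluster Connection
```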
The Cluster Service Setup Wizard completes the setup process for the first node by copying the files needed to complete the installation of Cluster Service. After the files are copied, the Cluster Service registry entries are created, the log files on the quorum resource are created, and the Cluster Service is started on the first node.
11. When a dialog box appears telling you that Cluster Service has started successfully, click on OK.
12. Close the Add/Remove Programs window.

7.4.16 Validating the Cluster Installation
Use the Cluster Administrator snap-in to validate the Cluster Service installation on the first node. Follow these steps to validate the cluster installation.
1. Click on Start.
2. Click on Programs.
3. Click on Administrative Tools.
4. Click on Cluster Administrator. The Cluster Administrator screen displays.
Installation of Cluster Service on the second node takes less time than on the first node. Setup configures the Cluster Service network settings on the second node based on the configuration of the first node. Installation of Cluster Service on the second node begins the same way as installation on the first node. The first node must be running during installation of the second node. Follow the same procedures used to install Cluster Service on the first node, with the following differences: 1.
2. Right-click the group Disk Group 1 and select the option Move. This option moves the group and all its resources to another node. After a short period of time, the disks F: and G: will be brought online on the second node. If you watch the screen, you will see this shift. Close the Cluster Administrator snap-in.
Congratulations! You have completed installing Cluster Service on all nodes. The server cluster is fully operational.
The SCSI bus listed in the hardware requirements must be configured prior to installation of Cluster Services. This includes: • Configuring the SCSI devices. • Configuring the SCSI controllers and hard disks to work properly on a shared SCSI bus. • Properly terminating the bus. The shared SCSI bus must have a terminator at each end of the bus. It is possible to have multiple shared SCSI buses between the nodes of a cluster.
Chapter 8 Troubleshooting
This chapter provides troubleshooting information for the MegaRAID SCSI 320-2 controller. It contains the following sections:
• Section 8.1, “General Troubleshooting”
• Section 8.2, “BIOS Boot Error Messages”
• Section 8.3, “Other BIOS Error Messages”
• Section 8.4, “Other Potential Problems”

8.1 General Troubleshooting
Table 8.1 lists the general problems that can occur, along with suggested solutions.
Table 8.1 General Problems and Suggested Solutions
Table 8.1 General Problems and Suggested Solutions (Cont.)
Problem: Pressed . Ran Megaconf.exe and tried to make a new configuration. The system hangs when scanning devices.
Suggested Solution: Check the drive IDs on each channel to make sure each device has a different ID. Check the termination; the device at the end of the channel must be terminated. Replace the drive cable.
Multiple drives connected to the MegaRAID SCSI 320-2 use the same power supply.
8.2 BIOS Boot Error Messages
Table 8.2 describes BIOS error messages that can display when you boot the system, along with suggested solutions.

Table 8.2 BIOS Boot Error Messages
Message: Adapter BIOS Disabled. No Logical Drives Handled by BIOS
Problem: The MegaRAID BIOS is disabled. Sometimes the BIOS is disabled to prevent booting from the BIOS.
Suggested Solution: Enable the BIOS using the MegaRAID BIOS Configuration Utility.
Table 8.2 BIOS Boot Error Messages (Cont.)
Message: X Logical Drives Degraded
Problem: x logical drives signed on in a degraded state.
Suggested Solution: Make sure all physical drives are properly connected and are powered on. Run MegaRAID Manager to find out whether any physical drives are not responding. Reconnect, replace, or rebuild any drive that is not responding.
Message: 1 Logical Drive Degraded
Problem: A logical drive signed on in a degraded state.
8.3 Other BIOS Error Messages
Table 8.3 describes other BIOS error messages, their meaning, and suggested solutions.

Table 8.3 Other BIOS Error Messages
Message: Following SCSI disk not found and no empty slot available for mapping it
Problem: The physical disk roaming feature did not find the physical disk with the displayed SCSI ID, and no slot is available to map the physical drive. MegaRAID cannot resolve the physical drives into the current configuration.
Suggested Solution: Reconfigure the array.
Table 8.4 Other Potential Problems (Cont.)
Topic: CD drives under DOS
Information: At this time, copied CDs are not accessible from DOS, even after loading MEGASPI.SYS and MEGACDR.SYS.
Topic: Physical drive errors
Information: To display the MegaRAID Manager Media Error and Other Error options, select the Objects menu, then Physical Drive. Select a physical drive and press . The window displays the number of errors. A Media Error is an error that occurred while actually transferring data.
Table 8.4 Other Potential Problems (Cont.)
Topic: Windows NT Installation
Information: When Windows NT is installed using a bootable CD, the devices on the MegaRAID SCSI 320-2 are not recognized until after the initial reboot. The Microsoft documented workaround is in SETUP.TXT, which is on the CD. Perform the following steps to install drivers when Setup recognizes a supported SCSI host adapter but does not make the devices attached to it available for use:
1. Restart Windows NT Setup.
2.
Appendix A SCSI Cables and Connectors
The MegaRAID SCSI 320-2 provides several different types of SCSI connectors. The connectors are:
• Two 68-pin high-density internal connectors
• Two 68-pin very high-density external connectors

A.1 68-Pin High-Density SCSI Internal Connector
Each SCSI channel on the MegaRAID SCSI 320-2 controller has a 68-pin high-density 0.050 inch pitch unshielded connector. These connectors provide all signals needed to connect the MegaRAID SCSI 320-2 to Wide SCSI devices.
A.1.1 Cable Assembly for Internal Wide SCSI Devices
The cable assembly for connecting internal Wide SCSI devices is shown below.
Connectors: 68 Position Plug (Male), AMP 786090-7
Cable: Flat Ribbon or Twisted-Pair Flat Cable, 68 Conductor, 0.025 Centerline, 30 AWG
A.1.2 Connecting Internal and External Wide Devices
The cable assembly for connecting internal Wide and external Wide SCSI devices is shown below.
Connector A: 68 Position Panel Mount Receptacle with 4-40 Holes (Female), AMP 786096-7
Note: To convert to 2-56 holes, use screwlock kit 749087-1, 749087-2, or 750644-1 from AMP.
Connectors B: 68 Position Plug (Male), AMP 786090-7
A.1.3 Converting Internal Wide to Internal Non-Wide (Type 2)
The cable assembly for converting internal Wide SCSI connectors to internal non-Wide (Type 2) SCSI connectors is shown below.
A.1.4 Converting Internal Wide to Internal Non-Wide (Type 30)
The cable assembly for converting internal Wide SCSI connectors to internal non-Wide (Type 30) SCSI connectors is shown below.
Connector A: 68 Position Plug (Male), AMP 749925-5
Connector B: 50 Position Plug (Male), AMP 749925-3
Wire: Twisted-Pair Flat Cable or Laminated Discrete Wire Cable, 25 Pair, 0.050 Centerline, 28 AWG
A.2 SCSI Cable and Connector Vendors
Table A.1 lists SCSI cable vendors and contact information.

Table A.1 SCSI Cable Vendors
Manufacturer               Telephone Number
Cables To Go               Voice: 800-826-7904, Fax: 800-331-2841
System Connection          Voice: 800-877-1985
Technical Cable Concepts   Voice: 714-835-1081
GWC                        Voice: 800-659-1599

Table A.2 lists SCSI connector vendors.
Table A.2 SCSI Connector Vendors
A.3 High-Density 68-Pin Connector Pinout for SE SCSI
Table A.3 lists the pinout for the high-density 68-pin connectors for single-ended SCSI.
Table A.3 High-Density 68-Pin Connector Pinout for SE SCSI
Table A.3 High-Density 68-Pin Connector Pinout for SE SCSI (Cont.)
A.4 68-Pin Connector Pinout for LVD SCSI
Table A.4 lists the pinout for the 68-pin connector for LVD SCSI.
Table A.4 68-Pin Connector Pinout for LVD SCSI
Table A.4 68-Pin Connector Pinout for LVD SCSI (Cont.)
Signal   Connector Pin  Cable Pin  Cable Pin  Connector Pin  Signal
+BSY     23             45         46         57             -BSY
+ACK     24             47         48         58             -ACK
+RST     25             49         50         59             -RST
+MSG     26             51         52         60             -MSG
+SEL     27             53         54         61             -SEL
+C/D     28             55         56         62             -C/D
+REQ     29             57         58         63             -REQ
+I/O     30             59         60         64             -I/O
+DB(8)   31             61         62         65             -DB(8)
+DB(9)   32             63         64         66             -DB(9)
+DB(10)  33             65         66         67             -DB(10)
+DB(11)  34             67         68         68             -DB(11)
Appendix B Audible Warnings
The MegaRAID SCSI 320-2 RAID controller has an onboard tone generator that indicates events and errors.
Note: This is available only if the optional series 502 Battery Backup Unit (BBU) is installed.

Table B.1 Audible Warnings and Descriptions
Tone Pattern: Three seconds on and one second off
Meaning: A logical drive is offline.
Examples: One or more drives in a RAID 0 configuration failed. Two or more drives in a RAID 1 or 5 configuration failed.
Two
Appendix C Glossary Array A grouping of individual disk drives that combines the storage space on the disk drives into a single segment of contiguous storage space. MegaRAID can group disk drives on one or more SCSI channels into an array. Array Management Software Software that provides common control and management for a disk array. Array management software most often executes in a disk controller or intelligent host bus adapter, but it can also execute in a host computer.
Channel An electrical path for the transfer of data and control information between a disk and a disk controller. Consistency Check An examination of the disk system to determine whether all conditions are valid for the specified configuration (such as parity). Cold Swap A cold swap requires that you turn the power off before replacing a defective disk drive in a disk subsystem. Data Transfer Capacity The amount of data per unit time moved through a channel.
provides high I/O performance at low cost, but provides lower data reliability than any of its member disks. Disk Subsystem A collection of disks and the hardware that connects them to one or more host computers. The hardware can include an intelligent controller, or the disks can attach directly to a host computer I/O bus adapter. Double Buffering A technique that achieves maximum data transfer bandwidth by constantly keeping two I/O requests for adjacent data outstanding.
Host Computer Any computer to which disks are directly attached. Mainframes, servers, workstations, and personal computers can all be considered host computers. Hot Spare A stand-by disk drive ready for use if a drive in an array fails. A hot spare does not contain any user data. Up to eight disk drives can be assigned as hot spares for an adapter. A hot spare can be dedicated to a single redundant array, or it can be part of the global hot-spare pool for all arrays controlled by the adapter.
Mbyte (Megabyte) An abbreviation for 1,000,000 (10 to the sixth power) bytes. One Mbyte equals 1,000 Kbytes (kilobytes). Multi-threaded Having multiple concurrent or pseudo-concurrent execution sequences. Used to describe processes in computer systems. Multi-threaded processes allow throughput-intensive applications to efficiently use a disk array to increase I/O performance.
RAID Redundant Array of Independent Disks. A data storage method in which data, along with parity information, is distributed among two or more hard disks (called an array) to improve performance and reliability. A RAID disk subsystem improves I/O performance over that of a server using only a single drive. The RAID array appears to the host server as a single storage unit. I/O is expedited because several disks can be accessed simultaneously. RAID Levels A style of redundancy applied to a logical drive.
Redundancy The provision of multiple interchangeable components to perform a single function to cope with failures or errors. Redundancy normally applies to hardware; disk mirroring is a common form of hardware redundancy. Replacement Disk A disk available to replace a failed member disk in a RAID array. Replacement Unit A component or collection of components in a disk subsystem that is always replaced as a unit when any part of the collection fails.
Service Provider The Service Provider (SP) is a program that resides in the desktop system or server and is responsible for all DMI activities. This layer collects management information from products (whether system hardware, peripherals, or software), stores that information in the DMI database, and passes it to management applications as requested. SNMP Simple Network Management Protocol.
Ultra320 A subset of Ultra3 SCSI that allows a maximum throughput of 320 Mbytes/s, which is twice as fast as Ultra160 SCSI. Ultra320 SCSI provides 320 Mbytes/s on a 16-bit connection. Virtual Sizing FlexRAID virtual sizing is used to create a logical drive up to 80 Gbytes. A maximum of 40 logical drives can be configured on a RAID controller, and RAID migration is possible for all logical drives except the fortieth.
Index Numerics 0 DIMM socket 6-4 160M and Wide SCSI 4-1 68-pin connector pinout for LVD SCSI A-9 68-Pin High Density Connectors A-1 A AMI Part Number Battery 6-13 AMPLIMITE .
Disconnect/reconnect 4-10 Disk C-2 Disk Access and Functionality 7-15 Disk array C-2 Disk Array Types 2-11 Disk duplexing C-2 Disk mirroring 2-6, C-2 Disk rebuild 2-9 Disk spanning 2-7, C-2 Disk striping 2-4, C-2 Disk subsystem C-3 Disposing of a Battery Pack 6-15 Documentation 1-3 DOS 4-7 Double buffering C-3 Drive roaming 4-3 Drivers 6-21 E Enclosure management 2-11 F Fail 2-10 Failed 2-10 Failed drive C-3 Fast SCSI C-3 Fault tolerance 2-4 Fault tolerance features 4-6 Features 4-1 Firmware 4-7, C-3 Flas
OS/2 2.
V Virtual sizing C-9 W WebBIOS Configuration Utility 4-11 Wide SCSI C-9 Windows .NET 6-21 Windows 2000 6-21 Windows NT 4-7, 6-21 Windows XP 6-21 Write-back caching 6-15
LSI Logic Confidential Customer Feedback We would appreciate your feedback on this document. Please copy the following page, add your comments, and fax it to us at the number shown. If appropriate, please also fax copies of any marked-up pages from this document. Important: Please include your name, phone number, fax number, and company address so that we may contact you directly for clarification or additional information. Thank you for your help in improving the quality of our documents.
LSI Logic Confidential Reader’s Comments Fax your comments to: LSI Logic Corporation Technical Publications M/S E-198 Fax: 408.433.4333 Please tell us how you rate this document: MegaRAID SCSI 320-2 Hardware Guide. Place a check mark in the appropriate blank for each category.