hp StorageWorks HSG80 ACS Solution Software Version 8.7 for Compaq OpenVMS Installation and Configuration Guide Part Number: AA-RH4BE-TE Fifth Edition (August 2002) Product Version: 8.7 This guide provides installation and configuration instructions and reference material for operation of the HSG80 ACS Solution Software Version 8.7 for Compaq OpenVMS.
© Hewlett-Packard Company, 2002. All rights reserved. Hewlett-Packard Company makes no warranty of any kind with regard to this material, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material. This document contains proprietary information, which is protected by copyright.
Contents

About this Guide
    Intended Audience
    Related Documentation
    Document Conventions
    Configuration Flowchart

    Assigning Unit Numbers Depending on SCSI_VERSION
        Assigning Host Connection Offsets and Unit Numbers in SCSI-3 Mode
        Assigning Host Connection Offsets and Unit Numbers in SCSI-2 Mode
    Assigning Unit Identifiers
        Using CLI to Specify Identifier for a Unit
        Using SWCC to Specify LUN ID Alias for a Virtual Disk

    RAIDset Switches
    Mirrorset Switches
    Partition Switches
    Specifying Initialization Switches
        Chunk Size

4   Installing and Configuring HSG Agent
    Why Use StorageWorks Command Console (SWCC)?  4–1
    Installation and Configuration Overview  4–2
    About the Network Connection for the Agent  4–3
    Before Installing the Agent

    Configuring a Single-Disk Unit (JBOD)
    Configuring a Partition
    Assigning Unit Numbers and Unit Qualifiers
        Assigning a Unit Number to a Storageset
        Assigning a Unit Number to a Single (JBOD) Disk

    Storage Map Template 3 for the third BA370 Enclosure  A–6
    Storage Map Template 4 for the Model 4214R Disk Enclosure  A–7
    Storage Map Template 5 for the Model 4254 Disk Enclosure  A–9
    Storage Map Template 6 for the Model 4310R Disk Enclosure  A–11
    Storage Map Template 7 for the Model 4350R Disk Enclosure

Figures
    1    General configuration flowchart (panel 1)  xviii
    2    General configuration flowchart (panel 2)  xix
    3    Configuring storage with SWCC
    6–2  Example, three non-clustered host systems  6–3
    6–3  Example, logical or virtual disks comprised of storagesets  6–4
    7–1  CLONE utility steps for duplicating unit members  7–2
    B–1  Navigation Window  B–10
    B–2  Navigation window showing storage host system "Atlanta"

Tables
    1    Document Conventions  xv
    2    Summary of Chapter Contents  xvi
    1–1  Unit Assignments and SCSI_VERSION  1–14
    2–1  PTL addressing, single-bus configuration, six Model 4310R enclosures
About this Guide This guide describes how to install and configure the HSG80 ACS Solution Software Version 8.7 for Compaq OpenVMS. This guide describes: • How to plan the storage array subsystem • How to install and configure the storage array subsystem on individual operating system platforms. This book does not contain information about the operating environments to which the controller may be connected; nor does it contain detailed information about subsystem enclosures or their components.
About this Guide • Installation and Configuration Guide (platform-specific) - the guide you are reading • Solution Software Release Notes (platform-specific) • FC-AL Application Note (AA-RS1ZA-TE) - Solution software host support includes the following platforms: — IBM AIX — HP-UX — Linux (Red Hat x86/Alpha, SuSE x86/Alpha, Caldera x86) — Novell NetWare — Open VMS — Sun Solaris — Tru64 UNIX — Windows NT/2000 Additional support required by HSG80 ACS Solution Software Version 8.
Document Conventions
The conventions included in Table 1 apply.

Table 1: Document Conventions
    Cross-reference links - Blue text: Figure 1
    Key names, menu items, buttons, and dialog box titles - Bold
    File names, application names, and text emphasis - Italics
    User input, command names, system responses (output and messages) - Monospace font
    Variables - Monospace, italic font
    Website addresses - Sans serif font (http://www.compaq.
About this Guide Configuration Flowchart A three-part flowchart (Figures 1-3) is shown on the following pages. Refer to these charts while installing and configuring a new storage subsystem. All references in the flowcharts pertain to pages in this guide, unless otherwise indicated. Table 2 below summarizes the content of the chapters. Table 2: Summary of Chapter Contents Chapters Description 1.
Table 2: Summary of Chapter Contents (Continued)
    Appendix A. Subsystem Profile Templates - This appendix contains storageset profiles to copy and use to create your system profiles. It also contains an enclosure template to use to help keep track of the location of devices and storagesets in your shelves. Four (4) templates will be needed for the subsystem.
    Appendix B. The Client monitors and manages a storage subsystem.
Figure 1: General configuration flowchart (panel 1). Unpack the subsystem (see the unpacking instructions on the shipping box), plan a subsystem (Chapter 1), plan storage configurations (Chapter 2), prepare the host system (Chapter 3), and make a local connection (page 5-2). For a single controller, cable the controller (page 5-3) and configure the controller (page 5-4); for a controller pair, cable the controllers (page 5-9) and configure the controllers (page 5-11). If you are not installing SWCC, continue at point A (see Figure 2); if you are installing SWCC, continue at point B (see Figure 3 on page xx).

Figure 2: General configuration flowchart (panel 2). From point A: configure devices (page 5-17); create storagesets and partitions (stripeset, page 5-18; mirrorset, page 5-19; RAIDset, page 5-20; striped mirrorset, page 5-20; single (JBOD) disk, page 5-21; partition, page 5-21), continuing to create units until you have completed your planned configuration; assign unit numbers (page 5-23); review configuration options (page 5-25); and verify the storage setup.

Figure 3: Configuring storage with SWCC. From point B: install the Agent (Chapter 4), install the Client (Appendix B), create storage (see the SWCC online help), and verify the storage setup.
About this Guide Symbols in Text These symbols may be found in the text of this guide. They have the following meanings. WARNING: Text set off in this manner indicates that failure to follow directions in the warning could result in bodily harm or loss of life. CAUTION: Text set off in this manner indicates that failure to follow directions could result in damage to equipment or data. IMPORTANT: Text set off in this manner presents clarifying information or specific instructions.
About this Guide Any surface or area of the equipment marked with these symbols indicates the presence of a hot surface or hot component. Contact with this surface could result in injury. WARNING: To reduce the risk of injury from a hot component, allow the surface to cool before touching. Power supplies or systems marked with these symbols indicate the presence of multiple sources of power.
About this Guide Getting Help If you still have a question after reading this guide, contact an authorized service provider or access our website. Technical Support In North America, call technical support at 1-800-OK-COMPAQ, available 24 hours a day, 7 days a week. NOTE: For continuous quality improvement, calls may be recorded or monitored. Outside North America, call technical support at the nearest location.
1 Planning a Subsystem This chapter provides information that helps you plan how to configure the storage array subsystem. This chapter focuses on the technical terms and knowledge needed to plan and implement storage subsystems. NOTE: This chapter frequently references the command line interface (CLI). For the complete syntax and descriptions of the CLI commands, see the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide.
Defining Subsystems This section describes the terms this controller and other controller. It also presents graphics of the Model 2200 and BA370 enclosures. NOTE: The HSG80 controller uses the BA370 or Model 2200 enclosure. Controller Designations A and B The terms A, B, "this controller," and "other controller" are used to distinguish one controller from the other in a two-controller (also called dual-redundant) subsystem.
BA370 Enclosure
Figure 1–2: Location of controllers and cache modules in a BA370 enclosure (callouts: 1 EMU, 2 PVA, 3 Controller A, 4 Controller B, 5 Cache module A, 6 Cache module B)
Controller Designations "This Controller" and "Other Controller"
Some CLI commands use the terms "this" and "other" to identify one controller or the other in a dual-redundant pair. These designations are a shortened form of "this controller" and "other controller."
Model 2200 Enclosure
Figure 1–3: "This controller" and "other controller" for the Model 2200 enclosure (callouts: 1 this controller, 2 other controller)
BA370 Enclosure
Figure 1–4: "This controller" and "other controller" for the BA370 enclosure (callouts: 1 other controller, 2 this controller)
Planning a Subsystem What is Failover Mode? Failover is a way to keep the storage array available to the host if one of the controllers becomes unresponsive. A controller can become unresponsive because of a controller hardware failure. Failover keeps the storage array available to the hosts by allowing the surviving controller to take over total control of the subsystem.
• All hosts must have operating system software that supports multiple-bus failover mode
Figure 1–5: Typical multiple-bus configuration. Three hosts (RED, GREY, and BLUE), each with two Fibre Channel adapters (FCA1 and FCA2), connect through two switches or hubs to host ports 1 and 2 of controllers A and B. Both host ports of both controllers are active, and all units (D0, D1, D2, D100, D101, and D120) are visible to all ports. (FCA = Fibre Channel Adapter)
Planning a Subsystem Selecting a Cache Mode The cache module supports read, read-ahead, write-through, and write-back caching techniques. The cache technique is selected separately for each unit. For example, you can enable only read and write-through caching for some units while enabling only write-back caching for other units. Read Caching When the controller receives a read request from the host, it reads the data from the disk drives, delivers it to the host, and stores the data in its cache module.
Planning a Subsystem Write-Through Caching Write-through caching is enabled when write-back caching is disabled. When the controller receives a write request from the host, it places the data in its cache module, writes the data to the disk drives, then notifies the host when the write operation is complete. This process is called write-through caching because the data actually passes through—and is stored in—the cache memory on its way to the disk drives.
Planning a Subsystem What is the Command Console LUN? StorageWorks Command Console (SWCC) software communicates with the HSG80 controllers through an existing storage unit, or logical unit number (LUN). The dedicated LUN that SWCC uses is called the Command Console LUN (CCL). The CCL serves as the communication device for the HS-Series Agent and identifies itself to the host by a unique identification string. By default, a CCL device is enabled within the HSG80 controller on host port 1.
Planning a Subsystem Naming Connections It is highly recommended that you assign names to connections that have meaning in the context of your particular configuration. One system that works well is to name each connection after its host, its adapter, its controller, and its controller host port, as follows: HOST1A1 HOST NAME PORT CONTROLLER ADAPTER Examples: A connection from the first adapter in the host named RED that goes to port 1 of controller A would be called RED1A1.
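As a sketch of how this naming scheme could be applied (the existing connection name !NEWCON01 and the new name RED1A1 are hypothetical, and the RENAME command should be verified against the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide), you might list the connections the controller has detected and then rename one to match the convention above:

SHOW CONNECTIONS
RENAME !NEWCON01 RED1A1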
Figure 1–7: Connections in multiple-bus failover mode. Host VIOLET has two Fibre Channel adapters; its four connections (VIOLET1A1, VIOLET1B1, VIOLET2A2, and VIOLET2B2) go through two switches or hubs to host ports 1 and 2 of controllers A and B, which are all active, and all units (D0, D1, D2, D100, D101, and D120) are visible to all ports. (FCA = Fibre Channel Adapter)
Assigning Unit Numbers
The controller keeps track of the unit with the unit number. How a unit is presented to host connections depends on:
Planning a Subsystem • The UNIT_OFFSET switch in the ADD CONNECTIONS (or SET connections) commands • The controller port to which the connection is attached • The SCSI_VERSION switch of the SET THIS_CONTROLLER/OTHER_CONTROLLER command The considerations for assigning unit numbers are discussed in the following sections. Matching Units to Host Connections in Multiple-Bus Failover Mode In multiple-bus failover mode, the ADD UNIT command creates a unit for host connections to access.
Planning a Subsystem Assigning Unit Numbers Depending on SCSI_VERSION The SCSI_VERSION switch of the SET THIS_CONTROLLER/OTHER_CONTROLLER command determines how the CCL is presented. There are two choices: SCSI-2 and SCSI-3. The choice for SCSI_VERSION affects how certain unit numbers and certain host connection offsets interact. IMPORTANT: OpenVMS requires the controllers be set to SCSI-3 mode.
Planning a Subsystem • Offsets should be divisible by 10 (for consistency and simplicity). • Unit numbers should be assigned at connection offsets (so that every host connection has a unit presented at LUN 0). Table 1–1 summarizes the recommendations for unit assignments based on the SCSI_VERSION switch.
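As an illustrative sketch of these recommendations (the connection name RED1A1 and the storageset name R1 are hypothetical), a connection could be given an offset that is divisible by 10 and a unit could then be created at that offset, so the connection sees it at LUN 0:

SET RED1A1 UNIT_OFFSET=10
ADD UNIT D10 R1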
Planning a Subsystem Using SWCC to Specify LUN ID Alias for a Virtual Disk Setting a LUN ID alias for a virtual disk is the same as setting a unit identifier. To set LUN ID alias for a previously created virtual disk perform the following procedure: 1. Open the storage window, where you see the properties for that virtual disk. 2. Click on the Settings Tab to see changeable properties. 3. Click on the “Enable LUN ID Alias” button. 4. Enter the LUN ID alias (identifier) in the appropriate field.
For example: In Figure 1–8, access to unit D101 can be restricted to host 3 (the host named BROWN) by enabling only the connection to host 3. Enter the following commands: SET D101 DISABLE_ACCESS_PATH=ALL SET D101 ENABLE_ACCESS_PATH=BROWN1B2 If the storage subsystem has more than one host connection, carefully specify the access path to avoid providing undesired host connections access to the unit.
Figure 1–8: Limiting host access. Hosts RED, GREY, and BLUE each have two Fibre Channel adapters (FCA1 and FCA2) and connect through two switches or hubs to host ports 1 and 2 of controllers A and B. Each host has four connections (for example, RED1A1, RED1B1, RED2A2, and RED2B2), and all units (D0, D1, D2, D100, D101, and D120) are visible to all ports. (FCA = Fibre Channel Adapter)
Planning a Subsystem For example: Figure 1–8 shows a representative multiple-bus failover configuration. Restricting the access of unit D101 to host BLUE can be done by enabling only the connections to host BLUE. At least two connections must be enabled for multiple-bus failover to work. For most operating systems, it is desirable to have all connections to the host enabled.
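Following the pattern of the earlier access-path example, the commands for this case would look like the following sketch (the connection names are those shown in Figure 1–8):

SET D101 DISABLE_ACCESS_PATH=ALL
SET D101 ENABLE_ACCESS_PATH=BLUE1A1
SET D101 ENABLE_ACCESS_PATH=BLUE1B1
SET D101 ENABLE_ACCESS_PATH=BLUE2A2
SET D101 ENABLE_ACCESS_PATH=BLUE2B2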
For example: In Figure 1–8, assume all host connections initially have the default offset of 0. Giving all connections to host BLUE an offset of 120 will present unit D120 to host BLUE as LUN 0. Enter the following commands: SET BLUE1A1 UNIT_OFFSET=120 SET BLUE1B1 UNIT_OFFSET=120 SET BLUE2A2 UNIT_OFFSET=120 SET BLUE2B2 UNIT_OFFSET=120 Host BLUE cannot see units lower than its offset, so it cannot access any other units.
Planning a Subsystem Restoring Worldwide Names (Node IDs) If a situation occurs that requires you to restore the worldwide name, you can restore it using the worldwide name and checksum printed on the sticker on the frame into which the controller is inserted. Figure 1–9 shows the placement of the worldwide name label for the Model 2200 enclosure, and Figure 1–10 for the BA370 enclosure.
Planning a Subsystem CAUTION: Each subsystem has its own unique worldwide name (node ID). If you attempt to set the subsystem worldwide name to a name other than the one that came with the subsystem, the data on the subsystem will not be accessible. Never set two subsystems to the same worldwide name, or data corruption will occur. Unit Worldwide Names (LUN IDs) In addition, each unit has its own worldwide name, or LUN ID.
2 Planning Storage Configurations This chapter provides information to help you plan the storage configuration of your subsystem. Storage containers are individual disk drives (JBOD), storageset types (mirrorsets, stripesets, and so on), and/or partitioned drives. Use the guidelines found in this section to plan the various types of storage containers needed.
Planning Storage Configurations Where to Start The following procedure outlines the steps to follow when planning your storage configuration. See Appendix A to locate the blank templates for keeping track of the containers being configured. 1. Determine your storage requirements. Use the questions in “Determining Storage Requirements,” page 2–3, to help you. 2. Review configuration rules. See “Configuration Rules for the Controller,” page 2–3. 3.
Planning Storage Configurations — Use the Command Line Interpreter (CLI) commands. This method allows you flexibility in defining and naming your storage containers. See the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide. Determining Storage Requirements It is important to determine your storage requirements.
Planning Storage Configurations • 8 partitions of a storageset or individual disk • 6 physical devices per RAID 1 storageset (mirrorset) • 14 physical devices per RAID 3/5 storageset (RAIDset) • 24 physical devices per RAID 0 storageset (stripeset) • 45 physical devices per RAID 0+1 storageset (striped mirrorset) Addressing Conventions for Device PTL The HSG80 controller has six SCSI device ports, each of which connects to a SCSI bus.
Planning Storage Configurations The HSG80 controller identifies devices based on a Port-Target-LUN (PTL) numbering scheme, shown in Figure 2–2. The physical location of a device in its enclosure determines its PTL. • P—Designates the controller's SCSI device port number (1 through 6). • T—Designates the target ID number of the device. Valid target ID numbers for a single-controller configuration and dual-redundant controller configuration are 0 3 and 8 - 15, respectively.
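For example, under this scheme the drive at device port 1, target 3, LUN 0 is named DISK10300. Devices are normally added automatically (see the CONFIG utility in Chapter 5), but a sketch of adding such a drive manually by its PTL would be:

ADD DISK DISK10300 1 3 0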
The controller operates with BA370 enclosures that are assigned ID numbers 0, 2, and 3. These ID numbers are set through the PVA module. Enclosure ID number 1, which assigns devices to targets 4 through 7, is not supported. Figure 2–3 shows how data is laid out on disks in an extended configuration.
Figure 2–3: How data is laid out on disks in an extended configuration (the operating system sees a single virtual disk of sequential blocks, while the actual device mappings distribute those blocks across member disks Disk 1, Disk 2, and Disk 3).
Planning Storage Configurations Examples - Model 2200 Storage Maps, PTL Addressing The Model 2200 controller enclosure can be combined with the following: • Model 4214R disk enclosure — Ultra2 SCSI with 14 drive bays, single-bus I/O module. • Model 4254 disk enclosure — Ultra2 SCSI with 14 drive bays, dual-bus I/O module. NOTE: The Model 4214R uses the same storage maps as the Model 4314R, and the Model 4254 uses the same storage maps as the Model 4354R disk enclosures.
Table 2–1: PTL addressing, single-bus configuration, six Model 4310R enclosures

Model 4310R Disk Enclosure Shelf 6 (single-bus)
    Bay:      1          2          3          4          5          6          7          8          9          10
    SCSI ID:  00         01         02         03         04         05         08         10         11         12
    DISK ID:  Disk60000  Disk60100  Disk60200  Disk60300  Disk60400  Disk60500  Disk60800  Disk61000  Disk61100  Disk61200

Model 4310R Disk Enclosure Shelf 5 (single-bus)
    Bay:      1          2          3          4          5          6          7          8          9          10
    SCSI ID:  00         01         02         03         04         05         08         10         11         12
    DISK ID:  Disk50000  Disk50100  Disk50200  Disk50300  Disk50400  Disk50500  Disk50800  Disk51000  Disk51100  Disk51200
Model 4310R Disk Enclosure Shelf 2 (single-bus)
    Bay:      1          2          3          4          5          6          7          8          9          10
    SCSI ID:  00         01         02         03         04         05         08         10         11         12
    DISK ID:  Disk20000  Disk20100  Disk20200  Disk20300  Disk20400  Disk20500  Disk20800  Disk21000  Disk21100  Disk21200

Model 4310R Disk Enclosure Shelf 3 (single-bus)
    Bay:      1          2          3          4          5          6          7          8          9          10
    SCSI ID:  00         01         02         03         04         05         08         10         11         12
Table 2–2: PTL addressing, dual-bus configuration, three Model 4350R enclosures

Model 4350R Disk Enclosure Shelf 1 (single-bus)
    SCSI Bus A - Bay:      1          2          3          4          5
                 SCSI ID:  00         01         02         03         04
                 DISK ID:  Disk10000  Disk10100  Disk10200  Disk10300  Disk10400
    SCSI Bus B - Bay:      6          7          8          9          10
                 SCSI ID:  00         01         02         03         04
                 DISK ID:  Disk20000  Disk20100  Disk20200  Disk20300  Disk20400

Model 4350R Disk Enclosure Shelf 2 (single-bus)
    SCSI Bus A and SCSI Bus B, SCSI IDs 00 through 04 on each bus
Table 2–3: PTL addressing, single-bus configuration, six Model 4314R enclosures

Model 4314R Disk Enclosure Shelf 6 (single-bus)
    Bay:      1          2          3          4          5          6          7          8          9          10         11         12         13         14
    SCSI ID:  00         01         02         03         04         05         08         09         10         11         12         13         14         15
    DISK ID:  Disk60000  Disk60100  Disk60200  Disk60300  Disk60400  Disk60500  Disk60800  Disk60900  Disk61000  Disk61100  Disk61200  Disk61300  Disk61400  Disk61500

Model 4314R Disk Enclosure Shelf 5 (single-bus)
Model 4314R Disk Enclosure Shelf 2 (single-bus)
    Bay:      1          2          3          4          5          6          7          8          9          10         11         12         13         14
    SCSI ID:  00         01         02         03         04         05         08         09         10         11         12         13         14         15
    DISK ID:  Disk20000  Disk20100  Disk20200  Disk20300  Disk20400  Disk20500  Disk20800  Disk20900  Disk21000  Disk21100  Disk21200  Disk21300  Disk21400  Disk21500

Model 4314R Disk Enclosure Shelf 3 (single-bus)
    Bay:      1          2          3          4          5          6          7          8          9          10         11         12         13         14
    SCSI ID:  00         01         02         03         04         05         08         09         10         11         12         13         14         15
Planning Storage Configurations Table 2–4: PTL addressing, dual-bus configuration, three Model 4354A enclosures.
Planning Storage Configurations Choosing a Container Type Different applications may have different storage requirements. You probably want to configure more than one kind of container within your subsystem. In choosing a container, you choose between independent disks (JBODs) or one of several storageset types, as shown in Figure 2–4. The independent disks and the selected storageset may also be partitioned. The storagesets implement RAID (Redundant Array of Independent Disks) technology.
Planning Storage Configurations Table 2–5 compares the different kinds of containers to help you determine which ones satisfy your requirements.
Planning Storage Configurations Creating a Storageset Profile Creating a profile for your storagesets, partitions, and devices can simplify the configuration process. Filling out a storageset profile helps you choose the storagesets that best suit your needs and to make informed decisions about the switches you can enable for each storageset or storage device that you configure in your subsystem. For an example of a storageset profile, see Table 2–6.
Planning Storage Configurations Table 2–6: Example of Storageset Profile Type of Storageset: _____ Mirrorset __X_ RAIDset _____ Stripeset _____ Striped Mirrorset ____ JBOD Storageset Name R1.
Planning Storage Configurations Planning Considerations for Storageset This section contains the guidelines for choosing the storageset type needed for your subsystem: • “Stripeset Planning Considerations,” page 2–18 • “Mirrorset Planning Considerations,” page 2–21 • “RAIDset Planning Considerations,” page 2–22 • “Striped Mirrorset Planning Considerations,” page 2–24 • “Storageset Expansion Considerations,” page 2–26 • “Partition Planning Considerations,” page 2–26 Stripeset Planning Considerat
Planning Storage Configurations The relationship between the chunk size and the average request size determines if striping maximizes the request rate or the data-transfer rate. You can set the chunk size or use the default setting (see “Chunk Size,” page 2–30, for information about setting the chunk size). Figure 2–6 shows another example of a three-member RAID 0 stripeset. A major benefit of striping is that it balances the I/O load across all of the disk drives in the storageset.
Planning Storage Configurations • Striping does not protect against data loss. In fact, because the failure of one member is equivalent to the failure of the entire stripeset, the likelihood of losing data is higher for a stripeset than for a single disk drive. For example, if the mean time between failures (MTBF) for a single disk is l hour, then the MTBF for a stripeset that comprises N such disks is l/N hours.
Planning Storage Configurations Mirrorset Planning Considerations Mirrorsets (RAID 1) use redundancy to ensure availability, as illustrated in Figure 2–7. For each primary disk drive, there is at least one mirror disk drive. Thus, if a primary disk drive fails, its mirror drive immediately provides an exact copy of the data. Figure 2–8 shows a second example of a Mirrorset.
Planning Storage Configurations Keep these points in mind when planning mirrorsets • Data availability with a mirrorset is excellent but comes with a higher cost—you need twice as many disk drives to satisfy a given capacity requirement. If availability is your top priority, consider using dual-redundant controllers and redundant power supplies. • You can configure up to a maximum of 20 RAID 3/5 mirrorsets per controller or pair of dual-redundant controllers. Each mirrorset may contain up to 6 members.
(Figure: operating system view of a virtual disk and the corresponding block layout across its member disks.)
Planning Storage Configurations • A RAIDset must include at least 3 disk drives, but no more than 14. • A storageset should only contain disk drives of the same capacity. The controller limits the capacity of each member to the capacity of the smallest member in the storageset. Thus, if you combine 9 GB disk drives with 4 GB disk drives in the same storageset, you waste 5 GB of capacity on each 9 GB member.
Figure 2–10: Striped mirrorset (example 1). Three two-member mirrorsets, each pairing a disk from one device port with a disk from another (Disk10000/Disk20000, Disk10100/Disk20100, and Disk10200/Disk20200), are combined into a single stripeset.
The failure of a single disk drive has no effect on the ability of the storageset to deliver data to the host. Under normal circumstances, a single disk drive failure has very little effect on performance.
Planning Storage Configurations Plan the mirrorset members, and plan the stripeset that will contain them. Review the recommendations in “Planning Considerations for Storageset,” page 2–18, and “Mirrorset Planning Considerations,” page 2–21. Storageset Expansion Considerations Storageset Expansion allows for the joining of two of the same kind of storage containers by concatenating RAIDsets, Stripesets, or individual disks, thereby forming a larger virtual disk which is presented as a single unit.
Planning Storage Configurations Defining a Partition Partitions are expressed as a percentage of the storageset or single disk unit that contains them: • Mirrorsets and single disk units—the controller allocates the largest whole number of blocks that are equal to or less than the percentage you specify. • RAIDsets and stripesets—the controller allocates the largest whole number of stripes that are less than or equal to the percentage you specify.
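For example, a sketch of creating a partition that uses 25 percent of a previously initialized storageset (the storageset name R1 and the percentage are hypothetical; see the CREATE_PARTITION command in the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide):

CREATE_PARTITION R1 SIZE=25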
Planning Storage Configurations The following sections describe how to enable/modify switches. They also contain a description of the major CLI command switches. Enabling Switches If you use SWCC to configure the device or storageset, you can set switches from SWCC during the configuration process, and SWCC automatically applies them to the storageset or device. See the SWCC online help for information about using SWCC.
Planning Storage Configurations • Replacement policy • Reconstruction policy • Remove/replace policy For details on the use of these switches refer to SET RAIDSET and SET RAIDset-name commands in the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide.
Planning Storage Configurations • Destroy/Nodestroy • Geometry Each of these switches is described in the following sections. NOTE: After initializing the storageset or disk drive, you cannot change these switches without reinitializing the storageset or disk drive. Chunk Size With ACS software, a parameter for chunk size (chunksize=default or n) on some storagesets can be set. However, unit performance may be negatively impacted if a non-default value is selected as the chunksize.
Planning Storage Configurations Request A Chunk size = 128k (256 blocks) Request B Request C Request D CXO-5135A-MC Figure 2–13: Large chunk size increases request rate Large chunk sizes also tend to increase the performance of random reads and writes. StorageWorks recommends that you use a chunk size of 10 to 20 times the average request size, rounded to the closest prime number. In general, 113 works well for OpenVMS systems with a transfer size of 8 sectors.
Table 2–7 shows a few examples of chunk size selection.

Table 2–7: Example Chunk Sizes
    Transfer Size (KB)    Small Area of I/O Transfers    Unknown    Random Areas of I/O Transfers
    2                     41                             59         79
    4                     79                             113        163
    8                     157                            239        317

Increasing Sequential Data Transfer Performance
RAID 0 and RAID 0+1 sets intended for high data transfer rates should use a relatively low chunk size (for example: 67 sectors).
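For example, following the OpenVMS recommendation above (a chunk size of 113 for an average transfer size of 8 sectors), a sketch of initializing a hypothetical storageset named R1 with that chunk size would be:

INITIALIZE R1 CHUNKSIZE=113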
Planning Storage Configurations • DESTROY (default) overwrites the user data and forced-error metadata when a disk drive is initialized. • NODESTROY preserves the user data and forced-error metadata when a disk drive is initialized. Use NODESTROY to create a single-disk unit from any disk drive that has been used as a member of a mirrorset. See the REDUCED command in the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide for information on removing disk drives from a mirrorset.
Planning Storage Configurations To make a storage map, fill out the templates provided in Appendix A as you add storagesets, partitions, and JBOD disks to the configuration and assign them unit numbers. Label each disk drive in the map with the higher levels it is associated with, up to the unit level. Using LOCATE Command to Find Devices If you want to complete a storage map at a later time but do not remember where the disk drives and partitions are located, use the CLI command LOCATE.
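For example, assuming a unit named D102 and a disk named DISK10300 (both hypothetical), LOCATE can flash the device fault LEDs of the corresponding drives and then turn them off again; see the LOCATE command in the CLI Reference Guide for the exact qualifiers:

LOCATE D102
LOCATE DISK10300
LOCATE CANCEL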
Planning Storage Configurations Example Storage Map - Model 4310R Disk Enclosure Table 2–8 shows an example of four Model 4310R disk enclosures (single-bus I/O).
Table 2–8 (example storage map, continued): Model 4310R Disk Enclosure Shelf 2 and Shelf 3 (single-bus). Each bay is labeled with its SCSI ID (00-05, 08, 10-12), its DISK ID (Disk20000 through Disk21200 for Shelf 2, and the corresponding Disk3nn00 names for Shelf 3), and the unit and storageset built on it (units D1-D4 and D100-D108; storagesets R1-R3, S1-S5, and M1-M7).
Planning Storage Configurations • Unit D104 is 3-member stripeset named S2. S2 consists of Disk10300, Disk20300, and Disk30300. • Unit D105 is a single (JBOD) disk named Disk40300. • Unit D106 is a 3-member RAID 3/5 storageset named R2. R2 consists of Disk10400, Disk20400, and Disk30400. • Unit D107 is a single (JBOD) disk named Disk40400. • Unit D108 is a 4-member stripeset named S3. S3 consists of Disk10500, Disk20500, Disk30500, and Disk40500.
3 Preparing the Host System This chapter describes how to prepare your OpenVMS host computer to accommodate the HSG80 controller storage subsystem. The following information is included in this chapter: • “Installing RAID Array Storage System,” page 3–1 • “Making a Physical Connection,” page 3–6 • “Verifying/Installing Required Versions,” page 3–6 • “Solution Software Upgrade Procedures,” page 3–7 • “New Features, ACS 8.
Preparing the Host System CAUTION: Controller and disk enclosures have no power switches. Make sure the controller enclosures and disk enclosures are physically configured before turning the PDU on and connecting the power cords. Failure to do so can cause equipment damage. 1. Be sure the enclosures are empty before mounting them into the rack.
Preparing the Host System 4. Connect the six VHDCI UltraSCSI bus cables between the controller and disk enclosures as shown in Figure 3–1 for a dual bus system and Figure 3–2 for a single bus system. Note that the supported cable lengths are 1, 2, 3, 5, and 10 meters. 5. Connect the AC power cords from the appropriate rack AC outlets to the controller and disk enclosures. HSG80 ACS Solution Software Version 8.
Figure 3–1: Dual-Bus Enterprise Storage RAID Array Storage System (callouts: 1 SCSI Bus 1 Cable, 2 SCSI Bus 2 Cable, 3 SCSI Bus 3 Cable, 4 SCSI Bus 4 Cable, 5 SCSI Bus 5 Cable, 6 SCSI Bus 6 Cable, 7 AC Power Inputs, 8 Fibre Channel Ports)
Figure 3–2: Single-Bus Enterprise Storage RAID Array Storage System (callouts: 1 SCSI Bus 1 Cable, 2 SCSI Bus 2 Cable, 3 SCSI Bus 3 Cable, 4 SCSI Bus 4 Cable, 5 SCSI Bus 5 Cable, 6 SCSI Bus 6 Cable, 7 AC Power Inputs, 8 Fibre Channel Ports)
Preparing the Host System Making a Physical Connection To attach a host computer to the storage subsystem, install one or more host bus adapters into the computer. A Fibre Channel (FC) cable goes from the host bus adapter to an FC switch. Preparing to Install Host Bus Adapter Before installing the host bus adapter, perform the following steps: 1. Perform a complete backup of the entire system. 2. Shut down the computer system or perform a hot addition of the adapter based upon directions for that server.
Preparing the Host System Solution Software Upgrade Procedures Use the following procedures for upgrades to your Solution Software. It is considered best practice to follow this order of procedures: 1. Perform backups of data prior to upgrade; 2. Verify operating system versions, upgrade operating systems to supported versions and patch levels; 3. Quiesce all I/O and unmount all file systems before proceeding; 4. Upgrade switch firmware; 5. Upgrade Solution Software 6.
3. Choose 3) Agent Disable/Stop 4. Choose 4) Uninstall Agent NOTE: With this OpenVMS uninstallation, all client and storage files will be preserved. To remove agent software only and save client and storage data: 1. Halt SWCC agent 2. $ @sys$manager:swcc_config 3. Choose 3) Agent Disable/Stop 4. Choose E) Exit configuration procedure. 5. $ product remove swcc The second effort is to upgrade the HS-Series Agent on OpenVMS: 1.
New Features, ACS 8.7 for OpenVMS The following are new features implemented in ACS 8.7.
Viewing Host Connection Table Lock State
The state of the lock can be displayed using:
CLI> SHOW THIS (or CLI> SHOW OTHER)
The following string is displayed just before the port topology information:
Host Connection Table is LOCKED (or NOT locked)
Preparing the Host System Example of Host Connection Table Unlock: (new output shown in bold) AP_Bot> show this Controller: HSG80 (C) DEC CX00000001 Software V87 Hardware 0000 NODE_ID = 5000-1FE1-FF00-0090 ALLOCATION_CLASS = 1 SCSI_VERSION = SCSI-3 Configured for dual-redundancy with ZG02804912 In dual-redundant configuration Device Port SCSI address 6 Time: 10-SEP-2001 15:45:54 Command Console LUN is lun 0 (IDENTIFIER = 99) Host Connection Table is NOT locked Host PORT_1: Reported PORT_ID = 5000-1FE
Preparing the Host System Example of Host Connection Table Locked: (new output shown in bold) AP_Bot> show this Controller: HSG80 (C) DEC CX00000001 Software XC21P-0, Hardware 0000 NODE_ID = 5000-1FE1-FF00-0090 ALLOCATION_CLASS = 1 SCSI_VERSION = SCSI-3 Configured for dual-redundancy with ZG02804912 In dual-redundant configuration Device Port SCSI address 6 Time: 10-SEP-2001 15:48:24 Command Console LUN is lun 0 (IDENTIFIER = 99) Host Connection Table is LOCKED Host PORT_1: Reported PORT_ID = 5000-1F
Preparing the Host System The state of the connection can be displayed using: CLI> SHOW CONN <<< LOCKED >>> appears in the title area when the connection table is locked. If unlocked, or not supported (HOST_FC only), the title area looks the same as it did for ACS version 8.6. The full switch displays the rejected hosts, with an index. Adding Rejected Host Connections to Locked Host Connection Table With ACS version 8.
Preparing the Host System • To Add a new Host to a SAN - A new host is added to the fabric that needs connectivity to the HSG80. Attempts to login are rejected because the connection table is locked. The system administrator is called, and manually adds an entry for the new host by creating a new connection from the rejected host. • To Delete a Host - While the connection table is locked, delete the connection for the selected host.
Preparing the Host System Display Enabled Management Agents The following command displays a list of the systems currently enabled to perform management functions.
Preparing the Host System In the event that all connections are enabled the display appears as follows.
Preparing the Host System Linking WWIDs for Snap and Clone Units LUN WWIDs (World Wide Identifiers) for snap and clone units are different each time they are created. This causes more system data records to keep track of the WWIDs as well as script changes at the customer sites. To eliminate this issue, a linked WWID scheme has been created, which keeps the WWIDs of these units constant each time they are created.
Preparing the Host System Implementation Notes Add Snap with Linked WWID - The user has a script that runs every night to create a snapshot, run a backup to tape from the snapshot, then delete the snapshot. Each time this is done, a new WWID is allocated. When the operating system runs out of room for all of these “orphaned” WWIDs, the host system must be rebooted.
Preparing the Host System SMART Error Eject When a SMART notification is received from a device, it is currently treated as a soft error - the notification is passed to the host and operations continue. A new CLI switch at the controller level changes this behavior. When this switch is enabled, drives in a normalized and redundant set that report a smart error are removed from that set.
Preparing the Host System CLI output - feature disabled: AP_TOP> show this Controller: HSG80 ZG02804912 Software V87S-0, Hardware E12 NODE_ID = 5000-1FE1-FF00-0090 ALLOCATION_CLASS = 1 SCSI_VERSION = SCSI-3 Configured for MULTIBUS_FAILOVER with ZG02804288 In dual-redundant configuration Device Port SCSI address 7 Time: 22-NOV-2001 01:14:32 Command Console LUN is lun 0 (IDENTIFIER = 99) Host Connection Table is NOT locked Smart Error Eject Disabled Host PORT_1: Reported PORT_ID = 5000-1FE1-FF00-0093 POR
Preparing the Host System Battery: NOUPS FULLY CHARGED Expires: WARNING: UNKNOWN EXPIRATION DATE! WARNING: AN UNKNOWN NUMBER OF DEEP DISCHARGES HAVE OCCURRED! HSG80 ACS Solution Software Version 8.
Preparing the Host System CLI Output - feature enabled: AP_TOP> show this Controller: HSG80 ZG02804912 Software V87S-0, Hardware E12 NODE_ID = 5000-1FE1-FF00-0090 ALLOCATION_CLASS = 1 SCSI_VERSION = SCSI-3 Configured for MULTIBUS_FAILOVER with ZG02804288 In dual-redundant configuration Device Port SCSI address 7 Time: 22-NOV-2001 01:17:47 Command Console LUN is lun 0 (IDENTIFIER = 99) Host Connection Table is NOT locked Smart Error Eject Enabled Host PORT_1: Reported PORT_ID = 5000-1FE1-FF00-0093 PORT_
NOUPS FULLY CHARGED Expires: WARNING: UNKNOWN EXPIRATION DATE! WARNING: AN UNKNOWN NUMBER OF DEEP DISCHARGES HAVE OCCURRED! Error Threshold for Drives A new limit for drive errors can be set. Once the limit is reached, the drive is removed from any redundant sets to which it belongs and put into the failed set. Errors counted are medium and recovered errors - there is no need to add hardware errors to this count as the drive fails immediately if a hardware error is encountered.
4 Installing and Configuring HSG Agent StorageWorks Command Console (SWCC) enables real-time configuration of the storage environment and permits the user to monitor and configure the storage connected to the HSG80 controller.
Installing and Configuring HSG Agent The Agent can also be used as a standalone application without Client. In this mode, which is referred to as Agent only, Agent monitors the status of the subsystem and provides local and remote notification in the event of a failure. A subsystem includes the HSG80 controller and its devices. Remote and local notification can be made by email and/or SNMP messages to an SNMP monitoring program.
Installing and Configuring HSG Agent Table 4–2: Installation and Configuration Overview (Continued) Step Procedure 3 Verify that there is a LUN for communications. This can be either the CCL or a LUN that was created with the CLI. See “What is the Command Console LUN?” on page 1–9 in Chapter 1. 4 Install the Agent (TCP/IP network connections) on a system connected to the HSG80 controller. See Chapter 3 for agent installation.
Figure 4–1: An example of a network connection (callouts: 1 Agent system (has the Agent software), 2 TCP/IP network, 3 Client system (has the Client software), 4 Fibre Channel cable, 5 Hub or switch, 6 HSG80 controller and its device subsystem, 7 Servers)
Installing and Configuring HSG Agent Before Installing the Agent The Agent requires the minimum system requirements, as defined in the release notes for your operating system. The program is designed to operate with the Client version 2.5 on Windows 2000 or Windows NT. Options for Running the Agent Agent runs as an OpenVMS process called “SWCC_AGENT.” You can use the Agent configuration program to control the execution of this process. You can: • Immediately start or stop your Agent.
Installing and Configuring HSG Agent Installing and Configuring the Agent For the following examples, you can replace DKB600 and DKB100:[SWCC] with “device names” more suitable for your system. 1. Insert the CD-ROM into the system that is connected to the controller. For the examples in this section, assume the CD-ROM device is DKB600. 2. To mount the CD-ROM, enter the following at the command prompt (Replace DKB600 with the name of your CD-ROM device.): $ MOUNT/OVER=ID/MEDIA=CD DKB600: 3.
Installing and Configuring HSG Agent 9. If you have an OpenVMS cluster running the MultiNet TCP/IP stack, the command procedure SWCC_CONFIG.COM will only upgrade the services of each system disk’s first node. Enter the following to upgrade the services database of the other nodes that share the system disk: $ @MULTINET:INSTALL_DATABASES or Restart the system. 10. Dismount the CD-ROM.
Installing and Configuring HSG Agent You can change your configuration using the SWCC Command Console Agent Configuration menu by entering the following command: $ @sys$manager:swcc_config The following is an example of the Agent Configuration menu: SWCC Agent for HS* Controllers Configuration Menu Agent is enabled as TCP/IP Services for OpenVMS service.
Installing and Configuring HSG Agent Table 4–3: Information Needed to Configure Agent Term/Procedure Description Adding a Client system entry For a client system to receive updates from the Agent, you must add it to the Agent’s list of client system entries. The Agent will only send information to client system entries that are on this list. In addition, adding a client system entry allows you to access the Agent system from the Navigation Tree on that Client system.
Installing and Configuring HSG Agent Table 4–3: Information Needed to Configure Agent (Continued) Term/Procedure Description Client system notification options 0 = No Error Notification−No error notification is provided over network. Note: For all of the client system notification options, local notification is available through an entry in the system error log file and Email (provided that Email notification in PAGEMAIL.COM has not been disabled).
Installing and Configuring HSG Agent Table 4–3: Information Needed to Configure Agent (Continued) Term/Procedure Password Description It must be a text string that has 4 to 16 characters. It can be entered from the client system to gain configuration access. Accessing the SWCC Agent Configuration menu can change it. You can change your configuration using the SWCC Agent Configuration menu.
Installing and Configuring HSG Agent Removing the Agent Instructions on how to remove the HSG Agent from OpenVMS are the following: Warning: This OpenVMS uninstallation will remove all configuration files! To fully remove agent software, including client and storage data: 1. Halt SWCC agent 2. $ @sys$manager:swcc_config 3. Choose 3) Agent Disable/Stop 4. Choose 4) Uninstall Agent NOTE: This OpenVMS uninstallation All client and storage files will be preserved.
NOTE: This option does the following: • Stops all instances of the Agent on all cluster nodes • Deletes all Agent files, except the .PCSI file used to install the Agent.
5 FC Configuration Procedures This chapter describes procedures to configure a subsystem that uses Fibre Channel (FC) fabric topology. In fabric topology, the controller connects to its hosts through switches.
FC Configuration Procedures Establishing a Local Connection A local connection is required to configure the controller until a command console LUN (CCL) is established using the CLI. Communication with the controller can be through the CLI or SWCC. The maintenance port, shown in Figure 5–1, provides a way to connect a maintenance terminal. The maintenance terminal can be an EIA-423 compatible terminal or a computer running a terminal emulator program. The maintenance port accepts a standard RS-232 jack.
FC Configuration Procedures Setting Up a Single Controller Power On and Establish Communication 1. Connect the computer or terminal to the controller as shown in Figure 5–1. The connection to the computer is through the COM1 or COM2 port. 2. Turn on the computer or terminal. 3. Apply power to the storage subsystem. 4. Verify that the computer or terminal is configured as follows: — 9600 baud — 8 data bits — 1 stop bit — no parity — no flow control 5. Press Enter.
Figure 5–2: Single controller cabling (callouts: 1 Controller, 2 Host port 1, 3 Host port 2, 4 Cable from the switch to the host Fibre Channel adapter, 5 FC switch)
Configuring a Single Controller Using CLI
To configure a single controller using CLI involves the following processes:
• Verify the Node ID and Check for Any Previous Connections.
• Configure Controller Settings.
• Restart the Controller.
• Set Time and Verify all Commands.
FC Configuration Procedures The node ID is located in the third line of the SHOW THIS result: HSG80> SHOW THIS Controller: HSG80 ZG80900583 Software V8.7, Hardware E11 NODE_ID = 5000-1FE1-0001-3F00 ALLOCATION_CLASS = 0 If the node ID is present, go to step 5. If the node ID is all zeroes, enter node ID and checksum, which are located on a sticker on the controller enclosure.
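A sketch of entering the node ID follows; the node ID shown is the one from the sample output above, and the two-character checksum (shown here as xx) must be replaced with the value printed on the sticker. Verify the exact syntax of this command in the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide:

SET THIS NODE_ID=5000-1FE1-0001-3F00 xx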
FC Configuration Procedures 6. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units. Use the following syntax: SET THIS IDENTIFIER=N Identifier must be unique among all the controllers attached to the fabric within the specified allocation class. 7. Set the topology for the controller.
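For a switched (fabric) configuration with both host ports in use, step 7 would look something like the following sketch; verify the PORT_1_TOPOLOGY and PORT_2_TOPOLOGY switches of the SET THIS CONTROLLER command in the CLI Reference Guide:

SET THIS PORT_1_TOPOLOGY=FABRIC
SET THIS PORT_2_TOPOLOGY=FABRIC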
FC Configuration Procedures When FRUTIL asks if you intend to replace the battery, answer “Y”: Do you intend to replace this controller's cache battery? Y/N [N] Y FRUTIL will print out a procedure, but will not give you a prompt. Ignore the procedure and press the Enter key. 3. Set up any additional optional controller settings, such as changing the CLI prompt. See the SET THIS CONTROLLER/OTHER CONTROLLER command in the StorageWorks HSG80 Array Controller ACS Version 8.
FC Configuration Procedures The following sample is a result of a SHOW THIS command, with the areas of interest in bold. Controller: HSG80 ZG94214134 Software V8.
FC Configuration Procedures 5. Turn on the switches, if not done previously. If you want to communicate with the Fibre Channel switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details. Plug in the FC Cable and Verify Connections 6. Plug the Fibre Channel cable from the first host bus adapter into the switch. Enter the SHOW CONNECTIONS command to view the connection table: SHOW CONNECTIONS 7.
FC Configuration Procedures Setting Up a Controller Pair Power Up and Establish Communication 1. Connect the computer or terminal to the controller as shown in Figure 5–1. The connection to the computer is through the COM1 or COM2 ports. 2. Turn on the computer or terminal. 3. Apply power to the storage subsystem. 4. Configure the computer or terminal as follows: — 9600 baud — 8 data bits — 1 stop bit — no parity — no flow control 5. Press Enter.
Figure 5–3 shows a controller pair with failover cabling, with one HBA per server and the HSG80 controller in transparent failover mode.
Figure 5–3: Controller pair failover cabling (callouts: 1 Controller A, 2 Controller B, 3 Host port 1, 4 Host port 2, 5 Cable from the switch to the host FC adapter, 6 FC switch)
Configuring a Controller Pair Using CLI
To configure a controller pair using CLI involves the following processes:
• Configure Controller Settings.
• Restart the Controller.
FC Configuration Procedures The node ID is located in the third line of the SHOW THIS result: HSG80> show this Controller: HSG80 ZG80900583 Software V8.7, Hardware E11 NODE_ID = 5000-1FE1-0001-3F00 ALLOCATION_CLASS = 0 If the node ID is present, go to step 5. If the node ID is all zeroes, enter the node ID and checksum, which are located on a sticker on the controller enclosure.
FC Configuration Procedures 6. Assign an identifier for the communication LUN (also called the command console LUN, or CCL). The CCL must have a unique identifier that is a decimal number in the range 1 to 32767, and which is different from the identifiers of all units. Use the following syntax: SET THIS IDENTIFIER=N Identifier must be unique among all the controllers attached to the fabric within the specified allocation class. 7. Set the topology for the controller.
FC Configuration Procedures When FRUTIL asks if you intend to replace the battery, answer “Y”: Do you intend to replace this controller's cache battery? Y/N [N] Y FRUTIL will print out a procedure, but will not give you a prompt. Ignore the procedure and press Enter. 12. Set up any additional optional controller settings, such as changing the CLI prompt. See the SET THIS CONTROLLER/OTHER CONTROLLER command in the StorageWorks HSG80 Array Controller ACS Version 8.
FC Configuration Procedures 14. Verify node ID, allocation class, SCSI version, failover mode, identifier, and port topology. The following display is a sample result of a SHOW THIS command, with the areas of interest in bold. Controller: HSG80 ZG94214134 Software V8.
FC Configuration Procedures 15. Turn on the switches if not done previously. If you want to communicate with the FC switches through Telnet, set an IP address for each switch. See the manuals that came with the switches for details. Plug in the FC Cable and Verify Connections 16. Plug the FC cable from the first host adapter into the switch. Enter a SHOW CONNECTIONS command to view the connection table: SHOW CONNECTIONS The first connection will have one or more entries in the connection table.
FC Configuration Procedures Verify Installation To verify installation for your OpenVMS host, enter the following command: SHOW DEVICES Your host computer should report that it sees a device whose designation matches the identifier (CCL) that you assigned the controllers. For example, if you assigned an identifier of 88, your host computer will see device $1$GGA88. This verifies that your host computer is communicating with the controller pair.
FC Configuration Procedures • “Configuring a Single-Disk Unit (JBOD)” on page 5–21 • “Configuring a Partition” on page 5–21 Containers Partition Stripeset (R0) Single devices (JBOD) Mirrorset (R1) Striped mirrorset (R0+1) RAIDset (R3/5) Storagesets CXO6677A Figure 5–4: Storage container types Configuring a Stripeset 1. Create the stripeset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains.
FC Configuration Procedures For example: The commands to create Stripe1, a stripeset consisting of three disks (DISK10000, DISK20000, and DISK10100) and having a chunksize of 128: ADD STRIPESET STRIPE1 DISK10000 DISK20000 DISK30000 INITIALIZE STRIPE1 CHUNKSIZE=128 SHOW STRIPE1 Configuring a Mirrorset 1. Create the mirrorset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains.
FC Configuration Procedures Configuring a RAIDset 1. Create the RAIDset by adding its name to the controller's list of storagesets and by specifying the disk drives it contains. Optionally, you can specify RAIDset switch values: ADD RAIDSET RAIDSET-NAME DISKNNNNN DISKNNNNN DISKNNNNN SWITCHES NOTE: See the ADD RAIDSET command in the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide for a description of the RAIDset switches. 2.
FC Configuration Procedures See “Specifying Initialization Switches” on page 2–29 for a description of the initialization switches. 4. Verify the striped mirrorset configuration: SHOW STRIPESET-NAME 5. Assign the stripeset mirrorset a unit number to make it accessible by the hosts. See “Assigning Unit Numbers and Unit Qualifiers” on page 5–23.
FC Configuration Procedures See “Specifying Initialization Switches” on page 2–29 for a description of the initialization switches. 2. Create each partition in the storageset or disk drive by indicating the partition's size. Also specify any desired switch settings: CREATE_PARTITION STORAGESET-NAME SIZE=N SWITCHES or CREATE_PARTITION DISK-NAME SIZE=N SWITCHES where N is the percentage of the disk drive or storageset that will be assigned to the partition.
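As a sketch only (assuming R1 is an existing, initialized storageset; the name and percentages are examples), the following commands create two partitions of 25 percent each and display the result; the remaining capacity can be divided with additional CREATE_PARTITION commands:
CREATE_PARTITION R1 SIZE=25
CREATE_PARTITION R1 SIZE=25
SHOW R1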
FC Configuration Procedures Assigning Unit Numbers and Unit Qualifiers Each storageset, partition, or single (JBOD) disk must be assigned a unit number for the host to access. As the units are added, their properties can be specified through the use of command qualifiers, which are discussed in detail under the ADD UNIT command in the StorageWorks HSG80 Array Controller ACS Version 8.7 CLI Reference Guide.
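For example, the configuration example in Chapter 6 assigns unit D102 to RAIDset R1 and unit D101 to the single disk DISK50300, initially disabling all access paths so that access can then be enabled selectively:
ADD UNIT D102 R1 DISABLE_ACCESS_PATH=ALL
SET D102 ENABLE_ACCESS_PATH=(RED1A1, RED1B1, RED2A2, RED2B2)
ADD UNIT D101 DISK50300 DISABLE_ACCESS_PATH=ALL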
FC Configuration Procedures Assigning Unit Identifiers One unique step is required when configuring storage units for OpenVMS: specifying an identifier (or LUN ID alias) for each unit. A unique identifier is required for each unit (virtual disk). This identifier must be unique in the cluster. This section gives two examples of setting an identifier for a previously created unit: one using CLI and one using SWCC.
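For example, the CLI command used in Chapter 6 to give unit D102 the identifier 102 is:
SET D102 IDENTIFIER=102
With that identifier, the unit typically appears to the OpenVMS host as device $1$DGA102 (the exact device name depends on your host configuration).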
FC Configuration Procedures Preferring Units In multiple-bus failover mode, individual units can be preferred to a specific controller. For example, to prefer unit D102 to “this controller,” use the following command: SET D102 PREFERRED_PATH=THIS RESTART commands must be issued to both controllers for this command to take effect: RESTART OTHER_CONTROLLER RESTART THIS_CONTROLLER NOTE: The controllers need to restart together for the preferred settings to take effect.
FC Configuration Procedures • To add one new disk drive to the list of known devices, use the following syntax: ADD DISK DISKNNNNN P T L • To add several new disk drives to the list of known devices, enter the following command: RUN CONFIG Adding a Disk Drive to the Spareset The spareset is a collection of spare disk drives that are available to the controller should it need to replace a failed member of a RAIDset or mirrorset.
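For example, the command used in the Chapter 6 configuration example to place DISK60300 into the spareset is:
ADD SPARESET DISK60300
In the ADD DISK syntax above, P, T, and L are the port, target, and LUN of the inserted drive; for instance, ADD DISK DISK10300 1 3 0 adds a drive at port 1, target 3, LUN 0 (the disk name and location here are illustrative only).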
FC Configuration Procedures Enabling Autospare With AUTOSPARE enabled on the failedset, any new disk drive that is inserted into the PTL location of a failed disk drive is automatically initialized and placed into the spareset. If initialization fails, the disk drive remains in the failedset until you manually delete it from the failedset.
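AUTOSPARE is controlled with the SET FAILEDSET command; the AUTOSPARE and NOAUTOSPARE switches are described in the CLI Reference Guide. For example:
SET FAILEDSET AUTOSPARE
To disable the feature:
SET FAILEDSET NOAUTOSPARE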
FC Configuration Procedures Displaying the Current Switches To display the current switches for a storageset or single-disk unit, enter a SHOW command, specifying the FULL switch: SHOW STORAGESET-NAME or SHOW DEVICE-NAME NOTE: FULL is not required when showing a particular device. It is used when showing all devices, for example, SHOW DEVICES FULL.
FC Configuration Procedures Verifying Storage Configuration from Host This section briefly describes how to verify that multiple paths exist to virtual disk units under OpenVMS. After configuring units (virtual disks) through either the CLI or SWCC, reboot the host to enable access to the new storage and enter the following command to rescan the bus:
$ MC SYSMAN IO AUTO
After the host restarts, verify that the disk is correctly presented to the host.
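For example, assuming a unit was given identifier 102, a command such as the following (the device name is illustrative) displays the device in detail and, on a multipath-capable OpenVMS version, lists the I/O paths to it; confirm that one path appears for each expected controller port:
$ SHOW DEVICE/FULL $1$DGA102: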
6 Using CLI for Configuration This chapter presents an example of how to configure a storage subsystem using the Command Line Interpreter (CLI). The CLI configuration example shown assumes: • A normal, new controller pair, which includes: — NODE ID set — No previous failover mode — No previous topology set • Full array with no expansion cabinet • PCMCIA cards installed in both controllers A storage subsystem example is shown in Figure 6–1.
Using CLI for Configuration Figure 6–1 shows an example storage system map for the BA370 enclosure. Details on building your own map are described in Chapter 2. Templates to help you build your storage map are supplied in Appendix A.
Using CLI for Configuration
Figure 6–2 (CXO7547A) shows the example host-to-controller connections: three hosts ("RED," "GREY," and "BLUE"), each with two Fibre Channel adapters (FCA1 and FCA2), connect through two switches or hubs to the controller host ports. The resulting connections are RED1A1, GREY1A1, BLUE1A1, RED1B1, GREY1B1, BLUE1B1, RED2A2, GREY2A2, BLUE2A2, RED2B2, GREY2B2, and BLUE2B2; the figure marks each controller host port as active or standby, and all units (D0, D1, D2, D101, D102, D120) are visible to all ports. NOTE: FCA = Fibre Channel Adapter.
Using CLI for Configuration "RED" "GREY" "BLUE" D1 D0 D2 D101 D102 D120 CXO7110B Figure 6–3: Example, logical or virtual disks comprised of storagesets CLI Configuration Example Text conventions used in this example are listed below: • Text in italics indicates an action you take. • Text in THIS FORMAT, indicates a command you type. Be certain to press Enter after each command. • Text enclosed within a box, indicates information that is displayed by the CLI interpreter.
Using CLI for Configuration
SET THIS SCSI_VERSION=SCSI-3
SET THIS IDENTIFIER=88
SET THIS PORT_1_TOPOLOGY=FABRIC
SET THIS PORT_2_TOPOLOGY=FABRIC
SET OTHER PORT_1_TOPOLOGY=FABRIC
SET OTHER PORT_2_TOPOLOGY=FABRIC
SET THIS ALLOCATION_CLASS=1
RESTART OTHER
RESTART THIS
SET THIS TIME=10-Mar-2001:12:30:34
RUN FRUTIL
Do you intend to replace this controller's cache battery? Y/N [Y] Y
Plug serial cable from maintenance terminal into bottom controller.
NOTE: Bottom controller (B) becomes “this” controller.
Using CLI for Configuration
RENAME !NEWCON00 RED1B1
SET RED1B1 OPERATING_SYSTEM=VMS
RENAME !NEWCON01 RED1A1
SET RED1A1 OPERATING_SYSTEM=VMS
SHOW CONNECTIONS
NOTE: Connection table sorts alphabetically.
Using CLI for Configuration
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
!NEWCON02        VMS               THIS        2     XXXXXX   OL this   0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
!NEWCON03        VMS               OTHER       2     XXXXXX   OL other  0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1A1           VMS               OTHER       1     XXXXXX   OL other  0
...
Using CLI for Configuration
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
RED1A1           VMS               OTHER       1     XXXXXX   OL other  0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1B1           VMS               THIS        1     XXXXXX   OL this   0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2A2           VMS               OTHER       2     XXXXXX   OL other  0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2B2           VMS               THIS        2     XXXXXX   OL this   0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Using CLI for Configuration
Connection Name  Operating System  Controller  Port  Address  Status    Unit Offset
GREY1A1          VMS               OTHER       1     XXXXXX   OL other  0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY1B1          VMS               THIS        1     XXXXXX   OL this   0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY2A2          VMS               OTHER       2     XXXXXX   OL other  0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
GREY2B2          VMS               THIS        2     XXXXXX   OL this   0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Using CLI for Configuration
(table continued from previous page)
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1A1           VMS               OTHER       1     XXXXXX   OL other  0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED1B1           VMS               THIS        1     XXXXXX   OL this   0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2A2           VMS               OTHER       2     XXXXXX   OL other  0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
RED2B2           VMS               THIS        2     XXXXXX   OL this   0
                 HOST_ID=XXXX-XXXX-XXXX-XXXX   ADAPTER_ID=XXXX-XXXX-XXXX-XXXX
Using CLI for Configuration
SET CONNECTION BLUE1A1 UNIT_OFFSET=100
SET CONNECTION BLUE1B1 UNIT_OFFSET=100
SET CONNECTION BLUE2A2 UNIT_OFFSET=100
SET CONNECTION BLUE2B2 UNIT_OFFSET=100
RUN CONFIG
ADD RAIDSET R1 DISK10000 DISK20000 DISK30000 DISK40000 DISK50000 DISK60000
INITIALIZE R1
ADD UNIT D102 R1 DISABLE_ACCESS_PATH=ALL
SET D102 ENABLE_ACCESS_PATH=(RED1A1, RED1B1, RED2A2, RED2B2)
SET D102 IDENTIFIER=102
ADD RAIDSET R2 DISK10100 DISK20100 DISK30100 DISK40100 DISK50100 DISK60100
INITIALIZE R2
ADD UNIT D12
Using CLI for Configuration
INITIALIZE DISK50300
ADD UNIT D101 DISK50300 DISABLE_ACCESS_PATH=ALL
SET D101 ENABLE_ACCESS_PATH=(BLUE1A1, BLUE1B1, BLUE2A2, BLUE2B2)
SET D101 IDENTIFIER=101
ADD SPARESET DISK60300
SHOW UNITS FULL
7 Backing Up, Cloning, and Moving Data This chapter includes the following topics:
• “Backing Up Subsystem Configurations,” page 7–1
• “Creating Clones for Backup,” page 7–2
• “Moving Storagesets,” page 7–5
Backing Up Subsystem Configurations The controller stores information about the subsystem configuration in its nonvolatile memory. This information could be lost if the controller fails or when you replace a module in the subsystem.
Backing Up, Cloning, and Moving Data Creating Clones for Backup Use the CLONE utility to duplicate the data on any unpartitioned single-disk unit, stripeset, mirrorset, or striped mirrorset in preparation for backup. When the cloning operation is complete, you can back up the clones rather than the storageset or single-disk unit, which can continue to service its I/O load. When you are cloning a mirrorset, CLONE does not need to create a temporary mirrorset.
Backing Up, Cloning, and Moving Data Use the following steps to clone a single-disk unit, stripeset, or mirrorset: 1. Establish a connection to the controller that accesses the unit you want to clone. 2. Start CLONE using the following command: RUN CLONE 3. When prompted, enter the unit number of the unit you want to clone. 4. When prompted, enter a unit number for the clone unit that CLONE will create. 5.
Backing Up, Cloning, and Moving Data The following example shows the commands you would use to clone storage unit D98. The clone command terminates after it creates storage unit D99, a clone or copy of D98.
RUN CLONE
CLONE LOCAL PROGRAM INVOKED
UNITS AVAILABLE FOR CLONING: 98
ENTER UNIT TO CLONE? 98
CLONE WILL CREATE A NEW UNIT WHICH IS A COPY OF UNIT 98.
ENTER THE UNIT NUMBER WHICH YOU WANT ASSIGNED TO THE NEW UNIT? 99
THE NEW UNIT MAY BE ADDED USING ONE OF THE FOLLOWING METHODS: 1.
Backing Up, Cloning, and Moving Data
USE AVAILABLE DEVICE DISK20300(SIZE=832317) FOR MEMBER DISK10000(SIZE=832317) (Y,N) [Y]? Y
MIRROR DISK10000 C_MB
SET C_MB NOPOLICY
SET C_MB MEMBERS=2
SET C_MB REPLACE=DISK20300
COPY IN PROGRESS FOR EACH NEW MEMBER. PLEASE BE PATIENT...
Backing Up, Cloning, and Moving Data CAUTION: Never initialize any container or this procedure will not protect data in the storageset. Use the following procedure to move a storageset, while maintaining the data the storageset contains: 1. Show the details for the storageset you want to move. Use the following command: SHOW STORAGESET-NAME 2. Label each member with its name and PTL location.
Backing Up, Cloning, and Moving Data 8. Recreate the storageset by adding its name to the controller's list of valid storagesets and by specifying the disk drives it contains. (Although you have to recreate the storageset from its original disks, you do not have to add the storagesets in their original order.) Use the following syntax to recreate the storageset: ADD STORAGESET-NAME DISK-NAME DISK-NAME 9. Represent the storageset to the host by giving it a unit number the host can recognize.
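As a sketch only (all names are examples, and the steps assume a stripeset named Stripe1 presented as unit D100), the overall sequence looks like the following; note that no INITIALIZE command is used, because initializing would destroy the data:
SHOW STRIPE1
DELETE D100
DELETE STRIPE1
(Physically move the member disks to their new locations.)
RUN CONFIG
ADD STRIPESET STRIPE1 DISK10000 DISK20000
ADD UNIT D100 STRIPE1
After the move, the member disks are known by names that reflect their new PTL locations, so use the new disk names in the ADD command.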
A Subsystem Profile Templates This appendix contains storageset profiles to copy and use to create your profiles. It also contains an enclosure template to use to help keep track of the location of devices and storagesets in your shelves. Four (4) templates will be needed for the subsystem. NOTE: The storage map templates for the Model 4310R and Model 4214R or 4314R reflect the physical location of the disk enclosures in the rack.
Subsystem Profile Templates
Storageset Profile
Type of Storageset:  _____ Mirrorset   __X_ RAIDset   _____ Stripeset   _____ Striped Mirrorset   _____ JBOD
Storageset Name ______________     Disk Drives ______________     Unit Number ______________
Partitions:  Unit # ____   Unit # ____   Unit # ____   Unit # ____   Unit # ____   Unit # ____   Unit # ____   Unit # ____
RAIDset Switches:
  Reconstruction Policy:  ___ Normal (default)   ___ Fast
  Reduced Membership:     ___ No (default)       ___ Yes, missing: ______
  Replacement Policy:     ___ Best performance (default)   ___ Best fit   ___ None
Mirrorset Switches:
  Replacement Policy      Copy
Subsystem Profile Templates
Unit Switches:
  Caching:  Read caching __________   Read-ahead caching __________   Write-back caching __________   Write-through caching __________
  Access by following hosts enabled:
  _____________________________________________________________________
  _____________________________________________________________________
  _____________________________________________________________________
  _____________________________________________________________________
Subsystem Profile Templates
Storage Map Template 1 for the BA370 Enclosure
Use this template for:
• BA370 single-enclosure subsystems
• first enclosure of multiple BA370 enclosure subsystems
            Port 1    Port 2    Port 3    Port 4    Port 5    Port 6
Target 3    D10300    D20300    D30300    D40300    D50300    D60300
Target 2    D10200    D20200    D30200    D40200    D50200    D60200
Target 1    D10100    D20100    D30100    D40100    D50100    D60100
Target 0    D10000    D20000    D30000    D40000    D50000    D60000
Power supplies flank each row of device bays.
Subsystem Profile Templates Storage Map Template 2 for the second BA370 Enclosure Use this template for the second enclosure of multiple BA370 enclosure subsystems.
Subsystem Profile Templates Storage Map Template 3 for the third BA370 Enclosure Use this template for the third enclosure of multiple BA370 enclosure subsystems.
Subsystem Profile Templates Storage Map Template 4 for the Model 4214R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4214R disk enclosure (single-bus). You can have up to six Model 4214R disk enclosures per controller shelf.
Subsystem Profile Templates
Model 4214R Disk Enclosure Shelf 3 (single-bus)
Bay:      1          2          3          4          5          6          7          8          9          10         11         12         13         14
SCSI ID:  00         01         02         03         04         05         08         09         10         11         12         13         14         15
DISK ID:  Disk30000  Disk30100  Disk30200  Disk30300  Disk30400  Disk30500  Disk30800  Disk30900  Disk31000  Disk31100  Disk31200  Disk31300  Disk31400  Disk31500
Subsystem Profile Templates Storage Map Template 5 for the Model 4254 Disk Enclosure Use this template for a subsystem with a three-shelf Model 4254 disk enclosure (dual-bus). You can have up to three Model 4254 disk enclosures per controller shelf.
Subsystem Profile Templates
continued from previous page
Model 4254 Disk Enclosure Shelf 3 (dual-bus)
Bays 1–7 are on SCSI Bus A; bays 8–14 are on SCSI Bus B.
Bay:      1          2          3          4          5          6          7          8          9          10         11         12         13         14
SCSI ID:  00         01         02         03         04         05         08         00         01         02         03         04         05         08
DISK ID:  Disk50000  Disk50100  Disk50200  Disk50300  Disk50400  Disk50500  Disk50800  Disk60000  Disk60100  Disk60200  Disk60300  Disk60400  Disk60500  Disk60800
Subsystem Profile Templates Storage Map Template 6 for the Model 4310R Disk Enclosure Use this template for a subsystem with a six-shelf Model 4310R disk enclosure (single-bus). You can have up to six Model 4310R disk enclosures per controller shelf.
Subsystem Profile Templates
Model 4310R Disk Enclosure Shelf 4 (single-bus)
Bay:      1          2          3          4          5          6          7          8          9          10
SCSI ID:  00         01         02         03         04         05         08         10         11         12
DISK ID:  Disk40000  Disk40100  Disk40200  Disk40300  Disk40400  Disk40500  Disk40800  Disk41000  Disk41100  Disk41200
Model 4310R Disk Enclosure Shelf 1 (single-bus)
Bay:      1          2          3          4          5          6          7          8          9          10
SCSI ID:  00         01         02         03         04         05         08         10         11         12
DISK ID:  Disk10000  Disk10100  Disk10200  Disk10300  Disk10400  Disk10500  Disk10800  Disk11000  Disk11100  Disk11200
Subsystem Profile Templates
Model 4310R Disk Enclosure Shelf 3 (single-bus)
Bay:      1    2    3    4    5    6    7    8    9    10
SCSI ID:  00   01   02   03   04   05   08   10   11   12
DISK ID:
Subsystem Profile Templates Storage Map Template 7 for the Model 4350R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4350R disk enclosure (single-bus). You can have up to three Model 4350R disk enclosures per controller shelf.
Subsystem Profile Templates
Model 4350R Disk Enclosure Shelf 4 (single-bus)
Bay:      1    2    3    4    5    6    7    8    9    10
SCSI ID:  00   01   02   03   04   05   08   10   11   12
DISK ID:
Subsystem Profile Templates Storage Map Template 8 for the Model 4314R Disk Enclosure Use this template for a subsystem with a six-shelf Model 4314R disk enclosure. You can have a maximum of six Model 4314R disk enclosures with each Model 2200 controller enclosure.
Subsystem Profile Templates
Model 4314R Disk Enclosure Shelf 4 (single-bus)
Bay:      1          2          3          4          5          6          7          8          9          10         11         12         13         14
SCSI ID:  00         01         02         03         04         05         08         09         10         11         12         13         14         15
DISK ID:  Disk40000  Disk40100  Disk40200  Disk40300  Disk40400  Disk40500  Disk40800  Disk40900  Disk41000  Disk41100  Disk41200  Disk41300  Disk41400  Disk41500
continued from previous page
Model 4314R Disk Enclosure Shelf 1 (single-bus)
Bay:      1          2          3          4          5          6          7          8          9          10         11         12         13         14
SCSI ID:  00         01         02         03         04         05         08         09         10         11         12         13         14         15
DISK ID:  Disk10000  Disk10100  Disk10200  Disk10300  Disk10400  Disk10500  Disk10800  Disk10900  Disk11000  Disk11100  Disk11200  Disk11300  Disk11400  Disk11500
Subsystem Profile Templates
Model 4314R Disk Enclosure Shelf 3 (single-bus)
Bay:      1          2          3          4          5          6          7          8          9          10         11         12         13         14
SCSI ID:  00         01         02         03         04         05         08         09         10         11         12         13         14         15
DISK ID:  Disk30000  Disk30100  Disk30200  Disk30300  Disk30400  Disk30500  Disk30800  Disk30900  Disk31000  Disk31100  Disk31200  Disk31300  Disk31400  Disk31500
Subsystem Profile Templates Storage Map Template 9 for the Model 4354R Disk Enclosure Use this template for a subsystem with a three-shelf Model 4354R disk enclosure (dual-bus). You can have up to three Model 4354R disk enclosures per controller shelf.
Subsystem Profile Templates
Model 4354R Disk Enclosure Shelf 3 (dual-bus)
Bays 1–7 are on SCSI Bus A; bays 8–14 are on SCSI Bus B.
Bay:      1          2          3          4          5          6          7          8          9          10         11         12         13         14
SCSI ID:  00         01         02         03         04         05         08         00         01         02         03         04         05         08
DISK ID:  Disk50000  Disk50100  Disk50200  Disk50300  Disk50400  Disk50500  Disk50800  Disk60000  Disk60100  Disk60200  Disk60300  Disk60400  Disk60500  Disk60800
B Installing, Configuring, and Removing the Client The following information is included in this appendix: • “Why Install the Client?,” page B–2 • “Before You Install the Client,” page B–2 • “Installing the Client,” page B–4 • “Installing the Integration Patch,” page B–5 • “Troubleshooting Client Installation,” page B–8 • “Adding Storage Subsystem and its Host to Navigation Tree,” page B–10 • “Removing Command Console Client,” page B–12 • “Where to Find Additional Information,” page B–13 HS
Installing, Configuring, and Removing the Client Why Install the Client? The Client monitors and manages a storage subsystem by performing the following tasks: • Create mirrored device group (RAID 1) • Create striped device group (RAID 0) • Create striped mirrored device group (RAID 0+1) • Create striped parity device group (3/5) • Create an individual device (JBOD) • Monitor many subsystems at once • Set up pager notification Before You Install the Client 1.
Installing, Configuring, and Removing the Client 7. If you have Command Console Client version 1.1b or earlier, remove the program with the Windows Add/Remove Programs utility. 8. If you have a previous version of Command Console, you can save the Navigation Tree configuration by copying the SWCC2.MDB file to another directory. After you have installed the product, move SWCC2.MDB to the directory to which you installed SWCC. 9. Install the HS-Series Agent. For more information, see Chapter 4.
Installing, Configuring, and Removing the Client Installing the Client The following restriction should be observed when installing SWCC on Windows NT 4.0 Workstations. If you select all of the applets during installation, the installation will fail on the HSG60 applet and again on one of the HSG80 applets. The workaround is to install all of the applets you want except for the HSG60 applet and the HSG80 ACS 8.5 applet. You can then return to the setup program and install the one that you need. 1.
Installing, Configuring, and Removing the Client Installing the Integration Patch The integration patch determines which version of firmware the controller is using and launches the appropriate StorageWorks Command Console (SWCC) Storage Window within Insight Manager (CIM) version 4.23. Should I Install the Integration Patch? Install this patch if your HSG80 controller uses ACS 8.6 or later. This patch enables you to use the controller’s SWCC Storage Window within CIM to monitor and manage the controller.
Installing, Configuring, and Removing the Client Integrating Controller’s SWCC Storage Window with CIM You can open the controller’s Storage Window from within the Windows-based CIM version 4.23 by doing the following: 1. Verify that you have installed the following by looking in Add/Remove Programs in Control Panel: • The HSG80 Storage Window for ACS 8.6 or later (Required to open the correct Storage Window for your firmware). • The HSG80 Storage Window version 2.1 (StorageWorks HSG80 V2.
Installing, Configuring, and Removing the Client Insight Manager Unable to Find Controller’s Storage Window If you installed Insight Manager before SWCC, Insight Manager will be unable to find the controller’s Storage Window. To find the controller’s Storage Window, perform the following procedure: 1. Double-click the Insight Agents icon (Start > Settings > Control Panel). A window appears showing you the active and inactive Agents under the Services tab. 2.
Installing, Configuring, and Removing the Client Troubleshooting Client Installation This section provides information on how to resolve some of the problems that may appear when installing the Client software: • Invalid Network Port Assignments During Installation • “There is no disk in the drive” Message Invalid Network Port Assignments During Installation SWCC Clients and Agents communicate by using sockets.
Installing, Configuring, and Removing the Client The following shows how the network port assignments appear in the services file:
spgui                4998/tcp   #Command Console
ccdevmgt             4993/tcp   #Device Management Client and Agent
kzpccconnectport     4991/tcp   #KZPCC Client and Agent
kzpccdiscoveryport   4985/tcp   #KZPCC Client and Agent
ccfabric             4989/tcp   #Fibre Channel Interconnect Agent
spagent              4999/tcp   #HS-Series Client and Agent
spagent3             4994/tcp   #HSZ22 Client and Agent
ccagent              4997/tcp   #RA200 Client
Installing, Configuring, and Removing the Client Adding Storage Subsystem and its Host to Navigation Tree The Navigation Tree enables you to manage storage over the network by using the Storage Window. If you plan to use pager notification, you must add the storage subsystem to the Navigation Tree. 1. Verify that you have properly installed and configured the HS-Series Agent on the storage subsystem host. 2. Click Start > Programs > Command Console > StorageWorks Command Console.
Installing, Configuring, and Removing the Client Figure B–2: Navigation window showing storage host system “Atlanta” 6. Click the plus sign to expand the host icon. When expanded, the Navigation Window displays an icon for the storage subsystem. To access the Storage Window for the subsystem, double-click the Storage Window icon. Figure B–3: Navigation window showing expanded “Atlanta” host icon HSG80 ACS Solution Software Version 8.
Installing, Configuring, and Removing the Client NOTE: You can create virtual disks by using the Storage Window. For more information on the Storage Window, refer to StorageWorks Command Console Version 2.5, User Guide. Removing Command Console Client Before you remove the Command Console Client (CCL) from the computer, remove AES. This will prevent the system from reporting that a service failed to start every time the system is restarted. Steps 2 through 5 describe how to remove the CCL.
Installing, Configuring, and Removing the Client Where to Find Additional Information You can find additional information about SWCC by referring to the online Help and to StorageWorks Command Console Version 2.5, User Guide. About the User Guide StorageWorks Command Console Version 2.5, User Guide contains additional information on how to use SWCC.
Glossary This glossary defines terms pertaining to the ACS solution software. It is not a comprehensive glossary of computer terms. 8B/10B A type of byte definition encoding and decoding to reduce errors in data transmission patented by the IBM Corporation. This process of encoding and decoding data for transmission has been adopted by ANSI. adapter A device that converts the protocol and hardware interface of one bus type into another without changing the function of the bus.
Glossary array controller See controller. array controller software Abbreviated ACS. Software contained on a removable ROM program card that provides the operating system for the array controller. association set A group of remote copy sets that share selectable attributes for logging and failover. Members of an association set transition to the same state simultaneously.
Glossary block Also called a sector. The smallest collection of consecutive bytes addressable on a disk drive. In integrated storage elements, a block contains 512 bytes of data, error codes, flags, and the block address header. bootstrapping A method used to bring a system or device into a defined state by means of its own action. For example, a machine routine whose first few instructions are enough to bring the rest of the routine into the computer from an input device.
Glossary command line interface CLI. A command line entry utility used to interface with the HS-series controllers. CLI enables the configuration and monitoring of a storage subsystem through textual commands. concat commands Concat commands implement storageset expansion features. configuration file A file that contains a representation of a storage subsystem configuration. container 1) Any entity that is capable of storing data, whether it is a physical device or a group of physical devices.
Glossary data striping The process of segmenting logically sequential data, such as a single file, so that segments can be written to multiple physical devices (usually disk drives) in a round-robin fashion. This technique is useful if the processor is capable of reading or writing data faster than a single disk can supply or accept the data. While data is being transferred from the first disk, the second disk can locate the next segment. DDL Dual data link.
Glossary DWZZA A StorageWorks SCSI bus signal converter used to connect 8-bit single-ended devices to hosts with 16-bit differential SCSI adapters. This converter extends the range of a single-ended SCSI cable to the limit of a differential SCSI cable. DWZZB A StorageWorks SCSI bus signal converter used to connect a variety of 16-bit single-ended devices to hosts with 16-bit differential SCSI adapters. ECB External cache battery.
Glossary failedset A group of failed mirrorset or RAIDset devices automatically created by the controller. failover The process that takes place when one controller in a dual-redundant configuration assumes the workload of a failed companion controller. Failover continues until the failed controller is repaired or replaced. The ability for HSG80 controllers to transfer control from one controller to another in the event of a controller failure. This ensures uninterrupted operation.
Glossary FCC Class B This certification label appears on electronic devices that can be used in either a home or a commercial environment within the United States. FCP The mapping of SCSI-3 operations to Fibre Channel. FDDI Fiber Distributed Data Interface. An ANSI standard for 100 megabaud transmission over fiber optic cable. FD SCSI The fast, narrow, differential SCSI bus with an 8-bit data transfer rate of 10 MB/s. See also FWD SCSI and SCSI. fiber A fiber or optical strand.
Glossary FRU Field replaceable unit. A hardware component that can be replaced at the customer location by service personnel or qualified customer service personnel. FRUTIL Field Replacement utility. full duplex (n) A communications system in which there is a capability for 2-way transmission and acceptance between two sites at the same time. full duplex (adj) Pertaining to a communications method in which data can be transmitted and received at the same time.
Glossary host adapter A device that connects a host system to a SCSI bus. The host adapter usually performs the lowest layers of the SCSI protocol. This function may be logically and physically integrated into the host system. HBA Host bus adapter host compatibility mode A setting used by the controller to provide optimal controller performance with specific operating systems. This improves the controller performance and compatibility with the specified operating system.
Glossary initiator A SCSI device that requests an I/O process to be performed by another SCSI device, namely, the SCSI target. The controller is the initiator on the device bus. The host is the initiator on the host bus. instance code A four-byte value displayed in most text error messages and issued by the controller when a subsystem error occurs. The instance code indicates when during software processing the error was detected.
Glossary link A connection between two Fibre Channel ports consisting of a transmit fibre and a receive fibre. local connection A connection to the subsystem using either its serial maintenance port or the host SCSI bus. A local connection enables you to connect to one subsystem controller within the physical range of the serial or host SCSI cable. local terminal A terminal plugged into the EIA-423 maintenance port located on the front bezel of the controller. See also maintenance terminal.
Glossary Mbps Approximately one million (10^6) bits per second—that is, megabits per second. maintenance terminal An EIA-423-compatible terminal used with the controller. This terminal is used to identify the controller, enable host paths, enter configuration information, and check the controller status. The maintenance terminal is not required for normal operations. See also local terminal. member A container that is a storage element in a RAID array.
Glossary nonparticipating mode A mode within an L_Port that inhibits the port from participating in loop activities. L_Ports in this mode continue to retransmit received transmission words but are not permitted to arbitrate or originate frames. An L_Port in non-participating mode may or may not have an AL_PA. See also participating mode. nominal membership The desired number of mirrorset members when the mirrorset is fully populated with active devices.
Glossary offset A relative address referenced from the base element address. Event Sense Data Response Templates use offsets to identify various information contained within one byte of memory (bits 0 through 7). other controller The controller in a dual-redundant pair that is connected to the controller serving the current CLI session. See also this controller. outbound fiber One fiber in a link that carries information away from a port.
Glossary pluggable A replacement method that allows the complete system to remain online during device removal or insertion. The system bus must be halted, or quiesced, for a brief period of time during the replacement procedure. See also hot-pluggable. point-to-point connection A network configuration in which a connection is established between two, and only two, terminal installations. The connection may include switching facilities.
Glossary RAID Redundant Array of Independent Disks. Represents multiple levels of storage access developed to improve performance or availability or both. RAID level 0 A RAID storageset that stripes data across an array of disk drives. A single logical disk spans multiple physical disks, enabling parallel data processing for increased I/O performance. While the performance characteristics of RAID level 0 is excellent, this RAID level is the only one that does not provide redundancy.
Glossary read ahead caching A caching technique for improving performance of synchronous sequential reads by prefetching data from disk. read caching A cache management method used to decrease the subsystem response time to a read request by allowing the controller to satisfy the request from the cache memory rather than from the disk drives. reconstruction The process of regenerating the contents of a failed member data.
Glossary RFI Radio frequency interference. The disturbance of a signal by an unwanted radio signal or frequency. replacement policy The policy specified by a switch with the SET FAILEDSET command indicating whether a failed disk from a mirrorset or RAIDset is to be automatically replaced with a disk from the spareset. The two switch choices are AUTOSPARE and NOAUTOSPARE. SBB StorageWorks building block.
Glossary SCSI-P cable A 68-conductor (34 twisted-pair) cable generally used for differential bus connections. SCSI port (1) Software: The channel controlling communications to and from a specific SCSI bus in the system. (2) Hardware: The name of the logical socket at the back of the system unit to which a SCSI device is connected. serial transmission A method of transmission in which each bit of information is sent sequentially on a single channel rather than simultaneously as in parallel transmission.
Glossary storage unit The general term that refers to storagesets, single-disk units, and all other storage devices that are installed in your subsystem and accessed by the host. A storage unit can be any entity that is capable of storing data, whether it is a physical device or a group of physical devices. StorageWorks A family of modular data storage products that allow customers to design and configure their own storage subsystems.
Glossary tape A storage device supporting sequential access to variable sized data records. target (1) A SCSI device that performs an operation requested by an initiator. (2) Designates the target identification (ID) number of the device. target ID number The address a bus initiator uses to connect with a bus target. Each bus target is assigned a unique target address. this controller The controller that is serving your current CLI session through a local or remote terminal.
Glossary UPS Uninterruptible power supply. A battery-powered power supply guaranteed to provide power to an electrical device in the event of an unexpected interruption to the primary power supply. Uninterruptible power supplies are usually rated by the amount of voltage supplied and the length of time the voltage is supplied. VHDCI Very high-density-cable interface. A 68-pin interface. Required for Ultra-SCSI connections.
Glossary write hole The period of time in a RAID level 1 or RAID level 5 write operation when an opportunity emerges for undetectable RAIDset data corruption. Write holes occur under conditions such as power outages, where the writing of multiple members can be abruptly interrupted. A battery backed-up cache design eliminates the write hole because data is preserved in cache and unsuccessful write operations can be retried.
Index A accessing the CLI, SWCC 1–14, 5–24 accessing the configuration menu Agent 4–8, 4–11 ADD CONNECTIONS multiple-bus failover 1–12 ADD UNIT multiple-bus failover 1–12 adding client system entry Agent 4–8, 4–11 subsystem entry Agent 4–8, 4–11 virtual disks B–13 adding a disk drive to the spareset configuration options 5–26 adding disk drives configuration options 5–25 Agent accessing the configuration menu 4–8, 4–11 client system entry adding 4–8, 4–11 configuration menu 4–8, 4–11 configuring 4–8, 4–11
Index fabric topology 5–27 availability 2–22 B Back up, Clone, Move Data 7–1 backup cloning data 7–2 subsystem configuration 7–1 C cabling controller pair 5–11 multiple-bus failover fabric topology configuration 5–10 single controller 5–4 cache modules location 1–2, 1–3 read caching 1–7 write-back caching 1–7 write-through caching 1–8 caching techniques mirrored 1–8 read caching 1–7 read-ahead caching 1–7 write-back caching 1–7 write-through caching 1–8 changing switches configuration options 5–28 chunk
Index removing a disk drive from the spareset 5–26 configuring Agent 4–8, 4–11 pager notification B–13 configuring devices fabric topology 5–17 configuring storage SWCC 1–14 connections 1–9 naming 1–10 containers attributes 2–14 comparison 2–15 illustrated 2–14, 5–18 mirrorsets 2–21 planning storage 2–14 stripesets 2–19 controller verification of installation 5–17 controller verification installation 5–9, 5–17 controllers cabling 5–4, 5–11 location 1–2, 1–3 node IDs 1–19 verification of installation 5–9, 5
Index G geometry initialize switches 2–33 Geometry parameters 2–33 H Host access restricting in multiple-bus failover mode disabling access paths 1–16 host access restricting by offsets multiple-bus failover 1–18 restricting in multiple-bust failover mode 1–16 restricting in transparent failover mode disabling access paths 1–15 host adapter installation 3–6 preparation 3–6 host connections 1–9 naming 1–10 Host storage configuration verify 5–29 HSG Agent install and configure 4–1 network connection 4–3 ove
Index ADD UNIT command 1–12 ADD UNITcommand 1–12 CLI configuration procedure fabric topology 6–4 fabric topology preferring units 5–25 fabric topology configuration cabling 5–10 host connections 1–12 restricting host access 1–16 disabling access paths 1–16 SET CONNECTIONS command 1–12 SET UNITcommand 1–12 N network port assignments B–8 new features 3–9 node IDs 1–19 restoring 1–20 NODE_ID worldwide name 1–19 NOSAVE_CONFIGURATION 2–32 O offset restricting host access multiple-bus fafilover 1–18 online hel
Index read caching enabled for all storage units 1–7 general description 1–7 read requests decreasing the subsystem response time with read caching 1–7 read-ahead caching 1–7 enabled for all disk units 1–7 removing Client B–12 removing a subsystem entry Agent 4–8, 4–11 request rate 2–30 requirements host adapter installation 3–6 storage configuration 1–14 restricting host access disabling access paths multiple-bus failover 1–16 transparent failover 1–15 multiple-bus failover 1–16 running Agent 4–5 S SAVE_
Index first enclosure of multiple-enclosure subsystem A–14 Storage map template 8 first enclosure of multiple-enclosure subsystem A–16 Storage map template 9 first enclosure of multiple-enclosure subsystem A–19 storageset deleting fabric topology 5–27 fabric topology changing switches 5–27 planning considerations 2–18 mirrorsets 2–21 partitions 2–26 RAIDsets 2–22 striped mirrorsets 2–24 stripesets 2–18 profile 2–16 profiles A–1 storageset profile 2–16 storageset switches SET command 2–28 storagesets creati
Index assigning depending on SCSI version 1–13 assigning in fabric topology partition 5–23 single disk 5–23 unit qualifiers assigning fabric topology 5–23 unit switches changing fabric topology 5–28 units LUN IDs 1–21 Upgrade procedures solution software 3–7 using the configuration menu Agent 4–8, 4–11 V verification controller installation 5–9, 5–17 verification of installation Index–8 controller 5–9, 5–17 Verifying/Installing Required Versions 3–6 virtual disks adding B–13 W where to start 1–1 worldw