Front cover

The IBM TotalStorage DS8000 Series: Concepts and Architecture

Advanced features and performance breakthrough with POWER5 technology
Configuration flexibility with LPAR and virtualization
Highly scalable solutions for on demand storage

Cathy Warrick, Olivier Alluis, Werner Bauer, Heinz Blaschek, Andre Fourie, Juan Antonio Garay, Torsten Knobloch, Donald C Laing
International Technical Support Organization The IBM TotalStorage DS8000 Series: Concepts and Architecture April 2005 SG24-6452-00
Note: Before using this information and the product it supports, read the information in “Notices” on page xiii. First Edition (April 2005) This edition applies to the DS8000 series per the October 12, 2004 announcement. Please note that pre-release code was used for the screen captures and command output; some details may vary from the generally available product. Note: This book is based on a pre-GA version of a product and may not apply when the product becomes generally available.
Contents
Notices  xiii
Trademarks  xiv
Preface  xv
Online resources  414
How to get IBM Redbooks  415
Help from IBM  415
Index
Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: Eserver® Redbooks (logo) ibm.
Preface This IBM® Redbook describes the IBM TotalStorage® DS8000 series of storage servers, its architecture, logical design, hardware design and components, advanced functions, performance features, and specific characteristics. The information contained in this redbook is useful for those who need a general understanding of this powerful new series of disk enterprise storage servers, as well as for those looking for a more detailed understanding of how the DS8000 series is designed and operates.
Werner Bauer is a certified IT specialist in Germany. He has 25 years of experience in storage software and hardware, as well as S/390®. He holds a degree in Economics from the University of Heidelberg. His areas of expertise include disaster recovery solutions in enterprises utilizing the unique capabilities and features of the IBM Enterprise Storage Server, ESS. He has written extensively in various redbooks, including Technical Updates on DFSMS/MVS® 1.3, 1.4, and 1.5, and Transactional VSAM.
working on System/34, System/38™, AS/400®, and iSeries™. Most recently, he has focused on iSeries Storage, and at the beginning of 2004, he transferred into the IBM TotalStorage division. Over the years, Stu has been a co-author for many Redbooks, including “iSeries in Storage Area Networks” and “Moving Applications to Independent ASPs.” His work in these areas has formed a natural base for working with the new TotalStorage DS6000 and DS8000. Torsten Rothenwaldt is a Storage Architect in Germany.
Front row - Cathy, Torsten R, Torsten K, Andre, Toni, Werner, Tetsuroh. Back row - Roland, Olivier, Anthony, Tang, Christine, Alex, Stu, Heinz, Chuck. We want to thank all the members of John Amann’s team at the Washington Systems Center in Gaithersburg, MD for hosting us. Craig Gordon and Rosemary McCutchen were especially helpful in getting us access to beta code and hardware.
Gerry Cote IBM Southfield Dari Durnas IBM Tampa Linda Benhase, Jerry Boyle, Helen Burton, John Elliott, Kenneth Hallam, Lloyd Johnson, Carl Jones, Arik Kol, Rob Kubo, Lee La Frese, Charles Lynn, Dave Mora, Bonnie Pulver, Nicki Rich, Rick Ripberger, Gail Spear, Jim Springer, Teresa Swingler, Tony Vecchiarelli, John Walkovich, Steve West, Glenn Wightwick, Allen Wright, Bryan Wright IBM Tucson Nick Clayton IBM United Kingdom Steve Chase IBM Waltham Rob Jackard IBM Wayne Many thanks to the graphics editor, Emma
Part 1. Introduction

In this part we introduce the IBM TotalStorage DS8000 series and its key features. These include:
- Product overview
- Positioning
- Performance
Chapter 1. Introduction to the DS8000 series

This chapter provides an overview of the features, functions, and benefits of the IBM TotalStorage DS8000 series of storage servers. The topics covered include:
- The IBM on demand marketing strategy regarding the DS8000
- Overview of the DS8000 components and features
- Positioning and benefits of the DS8000
- The performance features of the DS8000
1.1 The DS8000, a member of the TotalStorage DS family
IBM has a wide range of product offerings that are based on open standards and that share a common set of tools, interfaces, and innovative features. The IBM TotalStorage DS family and its new member, the DS8000, give you the freedom to choose the right combination of solutions for your current needs and the flexibility to help your infrastructure evolve as your needs change.
The DS8000 is a flexible and extendable disk storage subsystem because it is designed to add and adapt to new technologies as they become available. In the entirely new packaging there are also new management tools, like the DS Storage Manager and the DS Command-Line Interface (CLI), which allow for the management and configuration of the DS8000 series as well as the DS6000 series.
1.2.1 Hardware overview The hardware has been optimized to provide enhancements in terms of performance, connectivity, and reliability. From an architectural point of view the DS8000 series has not changed much with respect to the fundamental architecture of the previous ESS models and 75% of the operating environment remains the same as for the ESS Model 800. This ensures that the DS8000 can leverage a very stable and well-proven operating environment, offering the optimum in availability.
Storage Hardware Management Console (S-HMC) for the DS8000 The DS8000 offers a new integrated management console. This console is the service and configuration portal for up to eight DS8000s in the future. Initially there will be one management console for one DS8000 storage subsystem. The S-HMC is the focal point for configuration and Copy Services management, which can be done by the integrated keyboard display or remotely via a Web browser.
workloads, and with different operating environments, within a single physical DS8000 storage subsystem. The LPAR functionality is available in the DS8300 Model 9A2. The first application of the pSeries Virtualization Engine technology in the DS8000 will partition the subsystem into two virtual storage system images. The processors, memory, adapters, and disk drives are split between the images. There is a robust isolation between the two images via hardware and the POWER5 Hypervisor™ firmware.
IBM TotalStorage FlashCopy
FlashCopy can help reduce or eliminate planned outages for critical applications. FlashCopy is designed to provide the same point-in-time copy capability for logical volumes on the DS6000 series and the DS8000 series as FlashCopy V2 does for ESS, and allows access to the source data and the copy almost immediately. FlashCopy supports many advanced capabilities, including:

Data Set FlashCopy
Data Set FlashCopy allows a FlashCopy of a data set in a zSeries environment.
IBM TotalStorage Global Mirror (Asynchronous PPRC) Global Mirror copying provides a two-site extended distance remote mirroring function for z/OS and open systems servers. With Global Mirror, the data that the host writes to the storage unit at the local site is asynchronously shadowed to the storage unit at the remote site. A consistent copy of the data is then automatically maintained on the storage unit at the remote site.
physically located (installed) inside the DS8000 subsystem and can automatically monitor the state of your system, notifying you and IBM when service is required. The S-HMC is also the interface for remote services (call home and call back). Remote connections can be configured to meet customer requirements. It is possible to allow one or more of the following: call on error (machine detected), connection for a few days (customer initiated), and remote error investigation (service initiated).
DS8000 compared to DS6000 DS6000 and DS8000 now offer an enterprise continuum of storage solutions. All copy functions (with the exception of z/OS Global Mirror, which is only available on the DS8000) are available on both systems. You can do Metro Mirror, Global Mirror, and Global Copy between the two series. The CLI commands and the GUI look the same for both systems.
The DS CLI is described in detail in Chapter 11, “DS CLI” on page 231. DS Open application programming interface The DS Open application programming interface (API) is a non-proprietary storage management client application that supports routine LUN management activities, such as LUN creation, mapping and masking, and the creation or deletion of RAID-5 and RAID-10 volume spaces. The DS Open API also enables Copy Services functions such as FlashCopy and Remote Mirror and Copy.
offer the most value to the customers. On the list of possible applications are, for example, Backup/Recovery applications (TSM, Legato, Veritas, and so on). 1.4 Performance The IBM TotalStorage DS8000 offers optimally balanced performance, which is up to six times the throughput of the Enterprise Storage Server Model 800.
reduce device queue delays. This is achieved by defining multiple addresses per volume. With Dynamic PAV, the assignment of addresses to volumes can be automatically managed to help the workload meet its performance objectives and reduce overall queuing. PAV is an optional feature on the DS8000 series. Multiple Allegiance expands the simultaneous logical volume access capability across multiple zSeries servers.
Part 2. Architecture

In this part we describe various aspects of the DS8000 series architecture. These include:
- Hardware components
- The LPAR feature
- RAS - Reliability, Availability, and Serviceability
- Virtualization concepts
- Overview of the models
- Copy Services
Chapter 2. Components

This chapter describes the components used to create the DS8000. This chapter is intended for people who wish to get a clear picture of what the individual components look like and the architecture that holds them together. In this chapter we introduce:
- Frames
- Architecture
- Processor complexes
- Disk subsystem
- Host adapters
- Power and cooling
- Management console network
2.1 Frames The DS8000 is designed for modular expansion. From a high-level view there appear to be three types of frames available for the DS8000. However, on closer inspection, the frames themselves are almost identical. The only variations are what combinations of processors, I/O enclosures, batteries, and disks the frames contain. Figure 2-1 is an attempt to show some of the frame variations that are possible with the DS8000.
Between the disk enclosures and the processor complexes are two Ethernet switches, a Storage Hardware Management Console (an S-HMC) and a keyboard/display module. The base frame contains two processor complexes. These eServer p5 570 servers contain the processor and memory that drive all functions within the DS8000. In the ESS we referred to them as clusters, but this term is no longer relevant.
Figure 2-2 Rack operator panel (showing the line cord indicators, fault indicator, and EPO switch cover)

You will note that there is not a power on/off switch on the operator panel. This is because power sequencing is managed via the S-HMC. This is to ensure that all data in non-volatile storage (known as modified data) is de-staged properly to disk prior to power down. It is thus not possible to shut down or power off the DS8000 from the operator panel (except in an emergency, with the EPO switch mentioned previously).
use fast-write, in which the data is written to volatile memory on one complex and persistent memory on the other complex. The server then reports the write as complete before it has been written to disk. This provides much faster write performance. Persistent memory is also called NVS or non-volatile storage.
If you can view Figure 2-3 on page 23 in color, you can use the colors as indicators of how the DS8000 hardware is shared between the servers (the cross hatched color is green and the lighter color is yellow). On the left side, the green server is running on the left-hand processor complex. The green server uses the N-way SMP of the complex to perform its operations. It records its write data and caches its read data in the volatile memory of the left-hand complex.
SARC basically attempts to determine four things:
- When data is copied into the cache.
- Which data is copied into the cache.
- Which data is evicted when the cache becomes full.
- How the algorithm dynamically adapts to different workloads.

The DS8000 cache is organized in 4K byte pages called cache pages or slots. This unit of allocation (which is smaller than the values used in other storage systems) ensures that small I/Os do not waste cache memory.
Figure 2-4 Cache lists of the SARC algorithm for random and sequential data

To follow workload changes, the algorithm trades cache space between the RANDOM and SEQ lists dynamically and adaptively. This makes SARC scan-resistant, so that one-time sequential requests do not pollute the whole cache. SARC maintains a desired size parameter for the sequential list. The desired size is continually adapted in response to the workload.
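To make the interaction between the two lists more concrete, the following Python sketch models a greatly simplified SARC-style cache with a RANDOM list, a SEQ list, and an adaptive desired size for the sequential list. The class, the eviction rule, and the one-page adaptation step are illustrative assumptions; the algorithm in the DS8000 licensed internal code is far more elaborate.

from collections import OrderedDict

PAGE_SIZE = 4096          # SARC manages the cache in 4 KB pages (slots)

class SimplifiedSARC:
    """Toy model of a SARC-style cache with separate RANDOM and SEQ lists."""

    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.random = OrderedDict()              # first item = bottom (LRU)
        self.seq = OrderedDict()
        self.desired_seq = capacity_pages // 2   # adapted at run time

    def access(self, page, sequential):
        lst = self.seq if sequential else self.random
        if page in lst:                          # cache hit: move to MRU end
            lst.move_to_end(page)
            return True
        lst[page] = True                         # miss: stage into the list
        self._evict_if_needed()
        return False

    def _evict_if_needed(self):
        while len(self.random) + len(self.seq) > self.capacity:
            # Evict from whichever list is over its desired share.
            if self.seq and len(self.seq) > self.desired_seq:
                self.seq.popitem(last=False)     # drop SEQ bottom (LRU)
            elif self.random:
                self.random.popitem(last=False)  # drop RANDOM bottom (LRU)
            else:
                self.seq.popitem(last=False)

    def adapt(self, hit_near_seq_bottom):
        # Trade space between the lists: grow SEQ if hits occur near its
        # bottom, otherwise give the space back to RANDOM.
        if hit_near_seq_bottom:
            self.desired_seq = min(self.capacity - 1, self.desired_seq + 1)
        else:
            self.desired_seq = max(1, self.desired_seq - 1)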
For details on the server hardware used in the DS8000, refer to IBM p5 570 Technical Overview and Introduction, REDP-9117, available at: http://www.redbooks.ibm.com The symmetric multiprocessor (SMP) p5 570 system features 2-way or 4-way, copper-based, SOI-based POWER5 microprocessors running at 1.5 GHz or 1.9 GHz with 36 MB off-chip Level 3 cache configurations. The system is based on a concept of system building blocks.
Figure 2-5 Processor complex (front view: power supplies, PCI-X adapters in blind-swap carriers, DVD-ROM drives, SCSI disk drives, operator panel, and processor cards; rear view: PCI-X slots and RIO-G ports)

Processor memory
The DS8100 Model 921 offers up to 128 GB of processor memory and the DS8300 Models 922 and 9A2 offer up to 256 GB of processor memory. Half of this will be located in each processor complex.
prevent any data loss without operating system or firmware involvement. Non-critical environmental events are also logged and reported. 2.3.1 RIO-G The RIO-G ports are used for I/O expansion to external I/O drawers. RIO stands for remote I/O. The RIO-G is evolved from earlier versions of the RIO interconnect. Each RIO-G port can operate at 1 GHz in bidirectional mode and is capable of passing data in each direction on each cycle of the port. It is designed as a high performance self-healing interconnect.
Figure 2-7 I/O enclosures (front and rear views of two I/O enclosures side by side, showing the SPCN ports, RIO-G ports, redundant power supplies, and six slots)

Each I/O enclosure has the following attributes:
- 4U rack-mountable enclosure
- Six PCI-X slots: 3.3 V, keyed, 133 MHz blind-swap hot-plug
- Default redundant hot-plug power and cooling devices
- Two RIO-G and two SPCN ports
integrity it supports metadata creation and checking. The device adapter design is shown in Figure 2-8.

Figure 2-8 DS8000 device adapter

The DAs are installed in pairs because each storage partition requires its own adapter to connect to each disk enclosure for redundancy. This is why we refer to them as DA pairs.

2.4.2 Disk enclosures
Each DS8000 frame contains either 8 or 16 disk enclosures depending on whether it is a base or expansion frame.
Figure 2-9 DS8000 disk enclosure

Non-switched FC-AL drawbacks
In a standard FC-AL disk enclosure all of the disks are arranged in a loop, as depicted in Figure 2-10. This loop-based architecture means that data flows through all disks before arriving at either end of the device adapter (shown here as the Storage Server).

Figure 2-10 Industry standard FC-AL disk enclosure

The main problems with standard FC-AL access to DDMs are:
- The full loop is required to participate in data transfer.
These problems are solved with the switched FC-AL implementation on the DS8000.

Switched FC-AL advantages
The DS8000 uses switched FC-AL technology to link the device adapter (DA) pairs and the DDMs. Switched FC-AL uses the standard FC-AL protocol, but the physical implementation is different. The key features of switched FC-AL technology are:
- Standard FC-AL communication protocol from DA to DDMs.
- Direct point-to-point links are established between DA and DDM.
Figure 2-12 Disk enclosure switched connections (each enclosure contains two Fibre Channel switches, one attaching to the server 0 device adapter and one to the server 1 device adapter)

DS8000 switched FC-AL implementation
For a more detailed look at how the switched disk architecture expands in the DS8000 you should refer to Figure 2-13 on page 35. It depicts how each DS8000 device adapter connects to two disk networks called loops. Expansion is achieved by adding enclosures to the expansion ports of each switch.
Figure 2-13 DS8000 switched disk expansion (up to six front and six rear storage enclosures per loop, each holding 8 or 16 DDMs, connected to the server 0 and server 1 device adapters through FC switches over 2 Gbps FC-AL links)

Expansion
Expansion enclosures are added in pairs and disks are added in groups of 16.
The intention is to only have four spares per DA pair, but this number may increase depending on DDM intermix. We need to have four DDMs of the largest capacity and at least two DDMs of the fastest RPM. If all DDMs are the same size and RPM, then four spares will be sufficient. Arrays across loops Each array site consists of eight DDMs. Four DDMs are taken from the front enclosure in an enclosure pair, and four are taken from the rear enclosure in the pair.
the array is placed on each loop. If the disk enclosures were fully populated with DDMs, there would be four array sites.

Figure 2-15 Array across loop (an array site takes DDMs from both loops; there are two separate switches in each enclosure)

AAL benefits
AAL is used to increase performance.
The ESCON adapter in the DS8000 is a dual ported host adapter for connection to older zSeries hosts that do not support FICON. The ports on the ESCON card use the MT-RJ type connector. Control units and logical paths ESCON architecture recognizes only 16 3990 logical control units (LCUs) even though the DS8000 is capable of emulating far more (these extra control units can be used by FICON). Half of the LCUs (even numbered) are in server 0, and the other half (odd-numbered) are in server 1.
The card itself is PCI-X 64 Bit 133 MHz. The card is driven by a new high function, high performance ASIC. To ensure maximum data integrity, it supports metadata creation and checking. Each Fibre Channel port supports a maximum of 509 host login IDs. This allows for the creation of very large storage area networks (SANs). The design of the card is depicted in Figure 2-16.
Primary power supplies The DS8000 primary power supply (PPS) converts input AC voltage into DC voltage. There are high and low voltage versions of the PPS because of the varying voltages used throughout the world. Also, because the line cord connector requirements vary widely throughout the world, the line cord may not come with a suitable connector for your nation’s preferred outlet. This may need to be replaced by an electrician once the machine is delivered.
Ethernet switches In addition to the Fibre Channel switches installed in each disk enclosure, the DS8000 base frame contains two 16-port Ethernet switches. Two switches are supplied to allow the creation of a fully redundant management network. Each processor complex has multiple connections to each switch. This is to allow each server to access each switch. This switch cannot be used for any equipment not associated with the DS8000.
Chapter 3. Storage system LPARs (Logical partitions)

This chapter provides information about storage system Logical Partitions (LPARs) in the DS8000. The following topics are discussed in detail:
- Introduction to LPARs
- DS8000 and LPARs
  – LPAR and storage facility images (SFIs)
  – DS8300 LPAR implementation
  – Hardware components of a storage facility image
  – DS8300 Model 9A2 configuration options
- LPAR security and protection
- LPAR and Copy Services
- LPAR benefits
3.1 Introduction to logical partitioning Logical partitioning allows the division of a single server into several completely independent virtual servers or partitions. IBM began work on logical partitioning in the late 1960s, using S/360 mainframe systems with the precursors of VM, specifically CP40.
Building block
A building block is a collection of system resources, such as processors, memory, and I/O connections.

Physical partitioning (PPAR)
In physical partitioning, the partitions are divided along hardware boundaries. Each partition might run a different version of the same operating system. The number of partitions depends on the hardware.
Figure 3-1 Logical partition (processors, cache, memory, and I/O resources assigned to logical partitions 0, 1, and 2, managed through a hardware management console)

Software and hardware fault isolation
Because a partition hosts an independent operating system image, there is strong software isolation.
allowing the on-demand allocation of those resources to different partitions and the management of I/O devices. The physical resources are owned by the Virtual I/O server. 3.1.3 Why Logically Partition? There is a demand to provide greater flexibility for high-end systems, particularly the ability to subdivide them into smaller partitions that are capable of running a version of an operating system or a specific set of application workloads.
application on separate smaller partitions can provide better throughput than running a single large instance of the application. Increased flexibility of resource allocation A workload with resource requirements that change over time can be managed more easily within a partition that can be altered to meet the varying demands of the workload. 3.2 DS8000 and LPAR In the first part of this chapter we discussed the LPAR features in general.
Figure 3-2 DS8300 Model 9A2 - LPAR and storage facility image (LPAR01 on processor complex 0 and LPAR11 on processor complex 1 form storage facility image 1; LPAR02 and LPAR12 form storage facility image 2; in LPARxy, x is the processor complex number and y the storage facility image number)

The DS8300 series incorporates two eServer p5 570s. We call each of these a processor complex. Each processor complex supports one or more LPARs. Currently each processor complex on the DS8300 is divided into two LPARs.
Figure 3-3 DS8300 LPAR resource allocation (each storage facility image owns its own storage enclosures, RIO-G loop, and I/O drawers across the two processor complexes)

Each storage facility image has access to:
- 50 percent of the processors
- 50 percent of the processor memory
- 1 loop of the RIO-G interconnection
- Up to 16 host adapters (4 I/O drawers wi
Figure 3-4 (storage facility image hardware: two processors and memory in each processor complex, RIO-G interfaces, I/O drawers 0 to 3 with host and device adapters, SCSI controllers with boot and data disks, and Ethernet ports connecting to the S-HMC)
RIO-G interconnect separation Figure 3-4 on page 51 depicts that the RIO-G interconnection is also split between the two storage facility images. The RIO-G interconnection is divided into 2 loops. Each RIO-G loop is dedicated to a given storage facility image. All I/O enclosures on the RIO-G loop with the associated host adapters and drive adapters are dedicated to the storage facility image that owns the RIO-G loop.
- An additional 256 DDMs
  – Up to 128 DDMs per storage facility image

The second Model 9AE (expansion frame) has:
- An additional 256 DDMs
  – Up to 128 drives per storage facility image

A fully configured DS8300 with storage facility images has one base frame and two expansion frames. The first expansion frame (9AE) has additional I/O drawers and disk drive modules (DDMs), while the second expansion frame contains additional DDMs.
Table 3-1 Model conversions regarding LPAR functionality

From Model                               To Model
921 (2-way processors without LPAR)      9A2 (4-way processors with LPAR)
922 (4-way processors without LPAR)      9A2 (4-way processors with LPAR)
9A2 (4-way processors with LPAR)         922 (4-way processors without LPAR)
92E (expansion frame without LPAR)       9AE (expansion frame with LPAR)
9AE (expansion frame with LPAR)          92E (expansion frame without LPAR)

Note: Every model conversion is a disruptive operation.
Figure: LPAR protection in IBM POWER5 hardware (the hardware and the hypervisor manage the real-to-virtual memory mapping; hypervisor-controlled page tables and TCE tables for DMA keep each partition's processors, memory, and I/O slots isolated from the other partition)
Figure 3-7 DS8300 storage facility images and Copy Services (Remote Mirroring and Copy (PPRC) is supported within a storage facility image or across storage facility images; FlashCopy is supported within a storage facility image only)

FlashCopy
The DS8000 series fully supports the FlashCopy V2 capa
The hardware-based LPAR implementation ensures data integrity. The fact that you can create dual, independent, completely segregated virtual storage systems helps you to optimize the utilization of your investment, and helps to segregate workloads and protect them from one another.
Figure 3-8 Example of storage facility images in the DS8300 (a DS8300 Model 9A2 with 30 TB physical capacity divided into two storage facility images: one with 20 TB of fixed block (FB) capacity at LIC level A and the Point-in-time Copy (FlashCopy) license, serving an open systems host; the other with 10 TB of count key data (CKD) capacity at LIC level B and no Copy function, serving a zSeries host)

This example shows a DS8300 with a total physical c
Figure 3-9 Comparison with ESS Model 800 and DS8300 with and without LPAR

DS8300 addressing capabilities     ESS 800   DS8300    DS8300 with LPAR
Max Logical Subsystems             32        255       510
Max Logical Devices                8K        63.75K    127.5K
Max Logical CKD Devices            4K        63.75K    127.5K
Max Logical FB Devices             4K        63.75K    127.5K
Max N-Port Logins/Port             128       509       509
Max N-Port Logins                  512       8K        16K
Max Logical Paths/FC Port          256       2K        2K
Max Logical Paths/CU Image         256       512       512
Max Path Groups/CU Image           128       256       256
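The device counts in the comparison follow directly from the LSS arithmetic described in Chapter 5 (255 usable logical subsystems per storage image, 256 devices per LSS). The short Python calculation below reproduces the 63.75K and 127.5K figures.

DEVICES_PER_LSS = 256
LSSS_PER_IMAGE = 255                  # 65536 - 256 devices, as noted in Chapter 5

devices_per_image = LSSS_PER_IMAGE * DEVICES_PER_LSS
print(devices_per_image)              # 65280
print(devices_per_image / 1024)       # 63.75 -> quoted as 63.75K

# With the LPAR feature the DS8300 runs two storage facility images:
print(2 * LSSS_PER_IMAGE)             # 510 logical subsystems
print(2 * devices_per_image / 1024)   # 127.5 -> 127.5K logical devices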
Chapter 4. RAS

This chapter describes the RAS (reliability, availability, serviceability) characteristics of the DS8000. It will discuss:
- Naming
- Processor complex RAS
- Hypervisor: Storage image independence
- Server RAS
- Host connection availability
- Disk subsystem
- Power and cooling
- Microcode updates
- Management console
4.1 Naming It is important to understand the naming conventions used to describe DS8000 components and constructs in order to fully appreciate the discussion of RAS concepts. Storage complex This term describes a group of DS8000s managed by a single Management Console. A storage complex may consist of just a single DS8000 storage unit. Storage unit A storage unit consists of a single DS8000 (including expansion frames).
Figure 4-2 Dual image mode (each processor complex hosts two LPARs; the upper pair forms storage facility image 1 and the lower pair forms storage facility image 2, each with its own server 0 and server 1)

In Figure 4-2 we have two storage facility images (SFIs). The upper server 0 and upper server 1 form SFI 1. The lower server 0 and lower server 1 form SFI 2. In each SFI, server 0 is the darker color (green) and server 1 is the lighter color (yellow).
Reliability, availability, and serviceability Excellent quality and reliability are inherent in all aspects of the IBM Server p5 design and manufacturing. The fundamental objective of the design approach is to minimize outages. The RAS features help to ensure that the system performs reliably, and efficiently handles any failures that may occur. This is achieved by using capabilities that are provided by the hardware, by AIX 5L, and by RAS code written specifically for the DS8000.
generate Early Power-Off Warning (EPOW) events. Critical events (for example, a Class 5 AC power loss) trigger appropriate signals from hardware to the affected components to prevent any data loss without operating system or firmware involvement. Non-critical environmental events are logged and reported using Event Scan. The operating system cannot program or access the temperature threshold using the SP. Temperature monitoring is also performed.
Fault masking If corrections and retries succeed and do not exceed threshold limits, the system remains operational with full resources and no client or IBM Service Representative intervention is required. Resource deallocation If recoverable errors exceed threshold limits, resources can be deallocated with the system remaining operational, allowing deferred maintenance at a convenient time. Dynamic deallocation of potentially failing components is non-disruptive, allowing the system to continue to run.
A mechanism must exist to allow this sharing of resources in a seamless way. This mechanism is called the hypervisor. The hypervisor provides the following capabilities:
- Reserved memory partitions allow the setting aside of a certain portion of memory to use as cache and a certain portion to use as NVS.
- Preserved memory support allows the contents of the NVS and cache memory areas to be protected in the event of a server reboot.
disk system. It is also checked by the DS8000 before the data is sent to the host in response to a read I/O request. Further, the metadata also contains information used as an additional level of verification to confirm that the data being returned to the host is coming from the desired location on the disk. 4.4.2 Server failover and failback To understand the process of server failover and failback, we have to understand the logical construction of the DS8000.
Figure 4-3 Normal data flow (server 0 holds the cache for the even LSSs and the NVS for the odd LSSs; server 1 holds the cache for the odd LSSs and the NVS for the even LSSs)

Figure 4-3 illustrates how the cache memory of server 0 is used for all logical volumes that are members of the even LSSs. Likewise, the cache memory of server 1 supports all logical volumes that are members of odd LSSs. But for every write that gets placed into cache, another copy gets placed into the NVS memory located in the alternate server.
Figure 4-4 Server 0 failing over its function to server 1 (after the failover, server 1 holds the cache for both the even and the odd LSSs and splits its NVS between them)

This entire process is known as a failover. After failover the DS8000 now operates as depicted in Figure 4-4. Server 1 now owns all the LSSs, which means all reads and writes will be serviced by server 1. The NVS inside server 1 is now used for both odd and even LSSs.
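Conceptually, the failover is a change of ownership: the surviving server takes over the cache and NVS roles for both the even and the odd LSSs. The Python sketch below is a much-simplified model of that ownership change; the class and method names are illustrative only and do not correspond to DS8000 internals.

class StorageFacilityImage:
    """Toy model of LSS ownership and NVS/cache placement across two servers."""

    def __init__(self):
        # Normal running: server 0 caches the even LSSs, server 1 the odd LSSs,
        # and each server holds the NVS (write) copy for its partner's LSSs.
        self.cache_owner = {"even": 0, "odd": 1}
        self.nvs_owner = {"even": 1, "odd": 0}

    def write(self, lss):
        group = "even" if lss % 2 == 0 else "odd"
        cache_server = self.cache_owner[group]
        nvs_server = self.nvs_owner[group]
        # One copy in volatile cache, one copy in the other server's NVS,
        # then the write is acknowledged to the host.
        return f"cache copy on server {cache_server}, NVS copy on server {nvs_server}"

    def failover(self, failed_server):
        survivor = 1 - failed_server
        # The survivor destages the NVS it holds for the failed server's LSSs,
        # then owns cache and NVS for both LSS groups (its NVS is split in two).
        for group in ("even", "odd"):
            self.cache_owner[group] = survivor
            self.nvs_owner[group] = survivor

sfi = StorageFacilityImage()
print(sfi.write(16))                 # even LSS: cache on server 0, NVS on server 1
sfi.failover(failed_server=0)
print(sfi.write(16))                 # after failover both copies live on server 1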
The single purpose of the batteries is to preserve the NVS area of server memory in the event of a complete loss of input power to the DS8000. If both power supplies in the base frame were to stop receiving input power, the servers would be informed that they were now running on batteries and immediately begin a shutdown procedure. Unless the power line disturbance feature has been purchased, the BBUs are not used to keep the disks spinning.
Figure 4-5 Single pathed host (a host with a single HBA attached to one host port of one I/O enclosure; the RIO-G loop connects the I/O enclosures to the server owning all even LSS logical volumes and the server owning all odd LSS logical volumes)

It is always preferable that hosts that access the
Figure 4-6 Dual pathed host (a host with two HBAs attached to host ports in different I/O enclosures, giving it paths to both the server owning the even LSS logical volumes and the server owning the odd LSS logical volumes)

SAN/FICON/ESCON switches
Because a large number of h
4.5.1 Open systems host connection In the majority of open systems environments, IBM strongly recommends the use of the Subsystem Device Driver (SDD) to manage both path failover and preferred path determination. SDD is a software product that IBM supplies free of charge to all customers who use ESS 2105, SAN Volume Controller (SVC), DS6000, or DS8000. There will be a new version of SDD that will also allow SDD to manage pathing to the DS6000 and DS8000 (Version 1.6).
CUIR allows the DS8000 to request that all attached system images set all paths required for a particular service action to the offline state. System images with the appropriate level of software support will respond to such requests by varying off the affected paths, and either notifying the DS8000 subsystem that the paths are offline, or that it cannot take the paths offline.
Figure 4-7 also shows the connection paths for expansion on the far left and far right. The paths from the switches travel to the switches in the next disk enclosure. Because expansion is done in this linear fashion, the addition of more enclosures is completely non-disruptive. 4.6.2 RAID-5 overview RAID-5 is one of the most commonly used forms of RAID protection. RAID-5 theory The DS8000 series supports RAID-5 arrays.
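As a reminder of what RAID-5 protection means at the data level, the short Python example below builds a parity strip with XOR and uses it to rebuild a lost strip. This is generic RAID-5 arithmetic, not DS8000 device adapter code.

from functools import reduce

def parity(strips):
    """XOR all data strips together to form the parity strip."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*strips))

def rebuild(surviving_strips, parity_strip):
    """Recover a lost strip from the survivors plus parity (the same XOR)."""
    return parity(surviving_strips + [parity_strip])

data = [b"AAAA", b"BBBB", b"CCCC"]        # three data strips in one stride
p = parity(data)

# Simulate losing the second strip and rebuilding it:
recovered = rebuild([data[0], data[2]], p)
assert recovered == data[1]
print(recovered)                           # b'BBBB'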
RAID-10 theory RAID-10 provides high availability by combining features of RAID-0 and RAID-1. RAID-0 optimizes performance by striping volume data across multiple disk drives at a time. RAID-1 provides disk mirroring, which duplicates data between two disk drives. By combining the features of RAID-0 and RAID-1, RAID-10 provides a second optimization for fault tolerance. Data is striped across half of the disk drives in the RAID-1 array.
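The combination of striping and mirroring can likewise be illustrated in a few lines: logical blocks are striped across one half of the drives and mirrored drive-for-drive on the other half. Again, this is a generic illustration of the RAID-10 idea, not the DS8000's internal layout.

def raid10_placement(block_number, drives):
    """Return (primary_drive, mirror_drive) for a logical block.

    The first half of the drive list holds the striped data; the second
    half mirrors it drive-for-drive.
    """
    half = len(drives) // 2
    primary = drives[block_number % half]
    mirror = drives[block_number % half + half]
    return primary, mirror

drives = ["d0", "d1", "d2", "d3", "d4", "d5", "d6", "d7"]   # a 4+4 array
for block in range(6):
    print(block, raid10_placement(block, drives))
# Block 0 -> ('d0', 'd4'), block 1 -> ('d1', 'd5'), and so on:
# data is striped across d0-d3 and mirrored on d4-d7.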
On the ESS 800 the spare creation policy was to have four DDMs on each SSA loop for each DDM type. This meant that on a specific SSA loop it was possible to have 12 spare DDMs if you chose to populate a loop with three different DDM sizes. With the DS8000 the intention is to not do this.
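The sparing intent stated earlier, four spares of the largest capacity and at least two of the fastest RPM per DA pair, can be expressed as a simple check. The following Python sketch is one possible reading of that policy and is not the actual sparing algorithm.

def spares_satisfy_policy(spares, installed_ddms):
    """Check a DA pair's spares against the stated DS8000 sparing intent.

    spares / installed_ddms: lists of (capacity_gb, rpm) tuples.
    """
    largest_capacity = max(cap for cap, _ in installed_ddms)
    fastest_rpm = max(rpm for _, rpm in installed_ddms)

    big_enough = [s for s in spares if s[0] >= largest_capacity]
    fast_enough = [s for s in spares if s[1] >= fastest_rpm]

    # Four spares of the largest capacity, at least two at the fastest RPM.
    return len(big_enough) >= 4 and len(fast_enough) >= 2

installed = [(300, 10_000)] * 48 + [(73, 15_000)] * 16    # intermixed DDM types
spares = [(300, 10_000)] * 4 + [(73, 15_000)] * 2         # six spares needed here
print(spares_satisfy_policy(spares, installed))           # True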
4.6.6 Disk scrubbing The DS8000 will periodically read all sectors on a disk. This is designed to occur without any interference with application performance. If ECC-correctable bad bits are identified, the bits are corrected immediately by the DS8000. This reduces the possibility of multiple bad bits accumulating in a sector beyond the ability of ECC to correct them.
4.7.1 Building power loss The DS8000 uses an area of server memory as non-volatile storage (NVS). This area of memory is used to hold data that has not been written to the disk subsystem. If building power were to fail, meaning that both primary power supplies (PPSs) in the base frame report a loss of AC input power, then the DS8000 must take action to protect that data.
data from NVS memory to a variably sized disk area to preserve that data until power is restored. However, the EPO switch does not allow this destage process to happen and all NVS data is lost. This will most likely result in data loss. If you need to power the DS8000 off for building maintenance, or to relocate it, you should always use the S-HMC to achieve this. 4.8 Microcode updates The DS8000 contains many discrete redundant components. Most of these components have firmware that can be updated.
While the installation process described above may seem complex, it will not require a great deal of user intervention. The code installer will normally just start the process and then monitor its progress using the S-HMC. S-HMC considerations Before updating the DS8000 code, the Storage Hardware Management Consoles should be updated to the latest code version.
Chapter 5. Virtualization concepts

This chapter describes virtualization concepts as they apply to the DS8000. In particular, it covers the following topics:
- Storage system virtualization
- Abstraction layers for disk virtualization
  – Array sites
  – Arrays
  – Ranks
  – Extent pools
  – Logical volumes
  – Logical storage subsystems
  – Address groups
  – Volume groups
  – Host attachments
5.1 Virtualization definition In our fast-paced world, where you have to react quickly to changing business conditions, your infrastructure must allow for on demand changes. Virtualization is key to an on demand infrastructure. However, when talking about virtualization many vendors are talking about different things. One important feature of the DS8000 is the virtualization of a whole storage subsystem.
Figure 5-1 Storage Facility virtualization (the LPAR hypervisor assigns part of the physical storage unit's processors, memory, I/O, and RIO-G resources to each of two virtual storage facility images, each running its own LIC)
Figure 5-2 Physical layer as the base for virtualization (server 0 and server 1 connect through the RIO-G loop to I/O enclosures holding host adapters and device adapters, which attach to the switched loops of a storage enclosure pair)

Figure 5-2 shows the physical layer on which virtualization is based. Compare this with the ESS design, where there was a real loop and having an 8-pack close to a device adapter was an advantage. This is no longer relevant for the DS8000.
Figure 5-3 Array site (an array site takes four DDMs from loop 1 and four from loop 2, connected through the FC switches)

As you can see from Figure 5-3, array sites span loops. Four DDMs are taken from loop 1 and another four DDMs are taken from loop 2. Array sites are the building blocks used to define arrays.

5.3.2 Arrays
An array is created from one array site. Forming an array means defining it for a specific RAID type. The supported RAID types are RAID-5 and RAID-10 (see 4.6.2, “RAID-5 overview” on page 76 and 4.6.3, “RAID-10 overview” on page 76).
Figure 5-4 Creation of an array (the eight DDMs of an array site form a RAID-5 array: six data strips and one rotating parity strip per stride, with one DDM kept as a spare)

So, an array is formed using one array site, and while the array could be accessed by each adapter of the device adapter pair, it is managed by one device adapter.
a Model 1, and a Model 1 has 1113 cylinders which is about 0.94 GB. The extent size of a CKD rank therefore was chosen to be one 3390 Model 1 or 1113 cylinders. One extent is the minimum physical allocation unit when a LUN or CKD volume is created, as we discuss later. It is still possible to define a CKD volume with a capacity that is an integral multiple of one cylinder or a fixed block LUN with a capacity that is an integral multiple of 128 logical blocks (64K bytes).
The DS Storage Manager GUI guides the user to use the same RAID types in an extent pool. As such, when an extent pool is defined, it must be assigned with the following attributes: – Server affinity – Extent type – RAID type The minimum number of extent pools is one; however, you would normally want at least two, one assigned to server 0 and the other assigned to server 1 so that both servers are active.
5.3.5 Logical volumes A logical volume is composed of a set of extents from one extent pool. On a DS8000 up to 65280 (we use the abbreviation 64K in this discussion, even though it is actually 65536 - 256, which is not quite 64K in binary) volumes can be created (64K CKD, or 64K FB volumes, or a mix of both types, but the sum cannot exceed 64K). Fixed Block LUNs A logical volume composed of fixed block extents is called a LUN.
Figure 5-7 Allocation of a CKD logical volume (a 3390 Model 3 volume of 3226 cylinders is allocated from extent pool CKD0 using three 1113-cylinder extents taken from ranks x and y; 113 cylinders of the last extent remain unused)

Figure 5-7 shows how a logical volume is allocated with a CKD volume as an example.
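The rounding behavior shown in Figure 5-7, where 3226 cylinders consume three 1113-cylinder extents and leave 113 cylinders unused, applies to any requested size. The small Python helper below, using the extent sizes given earlier in this chapter, makes the arithmetic explicit; the function names are illustrative.

import math

CKD_EXTENT_CYL = 1113        # one extent = one 3390 Model 1 (1113 cylinders)
FB_EXTENT_GB = 1             # one extent = 1 GB for fixed block volumes

def ckd_allocation(requested_cylinders):
    extents = math.ceil(requested_cylinders / CKD_EXTENT_CYL)
    allocated = extents * CKD_EXTENT_CYL
    return extents, allocated, allocated - requested_cylinders

def fb_allocation(requested_gb):
    extents = math.ceil(requested_gb / FB_EXTENT_GB)
    allocated = extents * FB_EXTENT_GB
    return extents, allocated, allocated - requested_gb

print(ckd_allocation(3226))   # (3, 3339, 113)  -> 113 cylinders unused
print(fb_allocation(2.9))     # (3, 3, ~0.1)    -> roughly 100 MB unused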
Figure 5-8 Creation of an FB LUN (a 2.9 GB LUN is allocated from extent pool FBprod using three 1 GB extents taken from ranks a and b; roughly 100 MB of the last extent remains unused)

iSeries LUNs
iSeries LUNs are also composed of fixed block 1 GB extents. There are, however, some special aspects with iSeries LUNs. LUNs created on a DS8000 are always RAID protected.
allocation of logical volumes across ranks to improve performance, except for the case in which the logical volume needed is larger than the total capacity of the single rank. This construction method of using fixed extents to form a logical volume in the DS8000 allows flexibility in the management of the logical volumes. We can now delete LUNs and reuse the extents of those LUNs to create other LUNs, maybe of different sizes.
created and determines the LSS that it is associated with. The 256 possible logical volumes associated with an LSS are mapped to the 256 possible device addresses on an LCU (logical volume X'abcd' maps to device address X'cd' on LCU X'ab'). When creating CKD logical volumes and assigning their logical volume numbers, users should consider whether Parallel Access Volumes (PAV) are required on the LCU and reserve some of the addresses on the LCU for alias addresses.
Address groups Address groups are created automatically when the first LSS associated with the address group is created, and deleted automatically when the last LSS in the address group is deleted. LSSs are either CKD LSSs or FB LSSs. All devices in an LSS must be either CKD or FB. This restriction goes even further. LSSs are grouped into address groups of 16 LSSs. LSSs are numbered X'ab', where a is the address group and b denotes an LSS within the address group.
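Because the volume number carries the address group, the LSS, and the device address in fixed positions, the mapping is easy to show in code. The Python sketch below decodes a volume number such as X'1A07' and applies the even/odd LSS-to-server convention described in Chapter 4; the function name and output format are illustrative only.

def decode_volume_number(volume_id):
    """Decode a 16-bit logical volume number X'abcd'.

    'a'  -> address group, 'ab' -> LSS (or LCU for CKD),
    'cd' -> device address within that LSS/LCU.
    """
    lss = (volume_id >> 8) & 0xFF           # X'ab'
    address_group = (lss >> 4) & 0x0F       # X'a'
    device = volume_id & 0xFF               # X'cd'
    server = 0 if lss % 2 == 0 else 1       # even LSSs -> server 0, odd -> server 1
    return {
        "address_group": address_group,
        "lss": lss,
        "device": device,
        "managed_by_server": server,
    }

print(decode_volume_number(0x1A07))
# {'address_group': 1, 'lss': 26, 'device': 7, 'managed_by_server': 0}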
server access to logical volumes, the DS8000 introduced the concept of host attachments and volume groups. Host attachment HBAs are identified to the DS8000 in a host attachment construct that specifies the HBAs’ World Wide Port Names (WWPNs). A set of host ports can be associated through a port group attribute that allows a set of HBAs to be managed collectively. This port group is referred to as host attachment within the GUI. A given host attachment can be associated with only one volume group.
FB logical volumes may be defined in one or more volume groups. This allows a LUN to be shared by host HBAs configured to different volume groups. An FB logical volume is automatically removed from all volume groups when it is deleted. The maximum number of volume groups is 8320 for the DS8000.
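The relationships described above, where a host attachment groups a host's WWPNs and points to exactly one volume group, and a volume group lists the logical volumes those ports may access, amount to a simple data model. The Python sketch below captures those rules with illustrative class names; it is not the DS CLI or DS Open API object model.

class VolumeGroup:
    def __init__(self, name):
        self.name = name
        self.luns = set()           # FB logical volumes in this group

class HostAttachment:
    def __init__(self, name, wwpns):
        self.name = name
        self.wwpns = set(wwpns)     # the host's HBA port names
        self.volume_group = None    # at most one volume group per attachment

    def assign(self, volume_group):
        self.volume_group = volume_group

    def accessible_luns(self):
        return set() if self.volume_group is None else self.volume_group.luns

prod = VolumeGroup("prod_vg")
prod.luns.update({"1000", "1001", "1002"})

host = HostAttachment("aix_host1", ["10000000C9000001", "10000000C9000002"])
host.assign(prod)
print(sorted(host.accessible_luns()))     # ['1000', '1001', '1002']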
This new virtualization concept provides for much more flexibility. Logical volumes can dynamically be created and deleted. They can be grouped logically to simplify storage management. Large LUNs and CKD volumes reduce the total number of volumes and this also contributes to a reduction of the management effort. Figure 5-12 summarizes the virtualization hierarchy.
Figure 5-13 Optimal placement of data (a host LVM volume striped over LUNs from extent pools FB-0a to FB-0d on server 0 (LSS 00) and FB-1a to FB-1d on server 1 (LSS 01), spread across both DA pairs and both loops)

5.4 Benefits of virtualization
The DS8000 physical and logical architecture defines new standards for enterprise storage virtualization.
- Any mix of CKD or FB addresses in 4096 address groups.
- Increased logical volume size:
  – CKD: 55.6 GB (65520 cylinders), architected for 219 TB
  – FB: 2 TB, architected for 1 PB
- Flexible logical volume configuration:
  – Multiple RAID types (RAID-5, RAID-10)
  – Storage types (CKD and FB) aggregated into extent pools
  – Volumes allocated from extents of extent pool
  – Dynamically add/remove volumes
- Virtualization reduces storage management requirements.
Chapter 6. IBM TotalStorage DS8000 model overview and scalability

This chapter provides an overview of the IBM TotalStorage DS8000 storage server, which from here on is referred to as the DS8000. We include information on the two models and how well they scale with regard to capacity and performance.
6.1 DS8000 highlights The DS8000 is a member of the DS product family. It offers disk storage servers with high performance and has the capability to scale very well to the highest disk storage capacities. The scalability is designed to support continuous operations. The DS8000 series models follow the 2105 (ESS 800) as device type 2107 and are based on IBM POWER5 server technology. The DS8000 series models consist of a storage unit and one or two management consoles.
In the following sections, we describe these models further:
1. DS8100 Model 921
   This model features a dual two-way processor complex and it includes a base frame and an optional expansion frame.
2. DS8300 Models 922 and 9A2
   The DS8300 is a dual four-way processor complex with a Model 922 base frame and up to two optional Model 92E expansion frames. Model 9A2 is also a dual four-way processor complex, but it offers two IBM TotalStorage storage system Logical Partitions (LPARs) in one machine.
The DS8100 Model 921 can connect to one expansion frame. This expansion frame is called a Model 92E. Figure 6-2 on page 105 displays the front view of a DS8100 Model 921 with the cover off, and the Model 921 with an expansion Model 92E with covers. The base and expansion frame together allow for a maximum capacity for a DS8100 of 384 DDMs. There are 128 DDMs in the base frame and 256 DDMs in the expansion frame. With all DDMs being 300 GB, this results in a maximum disk storage capacity of 115.2 TB.
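The 115.2 TB figure is straightforward arithmetic on the drive counts, as the following two-line Python calculation shows (capacities in decimal terabytes).

DDM_SIZE_GB = 300

print(128 * DDM_SIZE_GB / 1000)    # 38.4 TB in the base frame alone
print(384 * DDM_SIZE_GB / 1000)    # 115.2 TB with the 92E expansion frame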
Both models provide the following features:
- Two processor complexes with a pSeries POWER5 1.9 GHz four-way CEC each.
- Up to 128 DDMs for a maximum of 38.4 TB with 300 GB DDMs. This is the same as for the DS8100 Model 921.
- Up to 256 GB of processor memory, which was referred to as cache in the ESS 800.
- Up to 16 host adapters (HAs).
– Up to 32 host adapters (HAs) with four 2 Gbps Fibre Channel ports on each HA. Each HA can be freely configured to hold: • FCP ports to connect to FCP-based host servers • FCP ports for PPRC links, which in turn can be shared between PPRC and FCP-based hosts through a SAN fabric • FICON ports to connect to one or more zSeries servers • Any combination within a HA – A HA can also have two ESCON ports, but you cannot mix ESCON ports with Fibre Channel ports on the very same HA.
Table 6-1 DS8000 model comparison

Base Mod  Images  Exp Mod   Proc type        DDMs     Proc Mem    HAs
921       1       None      2-way 1.5 GHz    <= 128   <= 128 GB   <= 16
                  1 x 92E                    <= 384               <= 16
922       1       None      4-way 1.9 GHz    <= 128   <= 256 GB   <= 16
                  1 x 92E                    <= 384               <= 32
                  2 x 92E                    <= 640               <= 32
9A2       2       None      4-way 1.9 GHz    <= 128   <= 256 GB   <= 16
                  1 x 9AE                    <= 384               <= 32
                  2 x 9AE                    <= 640               <= 32

Depending on the DDM sizes, which can be different within a 921, 922, or 9A2, and the number of DDMs, the total capacity is calculated accordingly.
Table 6-2 Comparison of models for capacity

                                     2-way         2-way +      4-way or LPAR  4-way or LPAR +  4-way or LPAR +
                                     (base frame   expansion    (base frame    expansion        2 expansion
                                     only)         frame        only)          frame            frames
Device adapters (1 - 4 Fibre Loops)  2 to 8        2 to 8       2 to 8         2 to 16          2 to 16
Drives: 73 GB (15k RPM), 146 GB      16 to 128     16 to 384    16 to 128      16 to 384        16 to 640
(10k RPM), 300 GB (10k RPM),
in increments of 16
Physical capacity                    1.1 to 37.
Linear-scalable architecture
The following two figures illustrate how you can realize the linear scalability in the DS8000 series.

Figure 6-6 2-way model components (hosts attach to host adapters in the I/O enclosures; the 2-way processor complexes with their data cache and persistent memory connect through the RIO-G loop to the device adapters and disks)
Figure 6-7 4-way model components (hosts, host adapters, 4-way processor complexes with data cache and persistent memory, I/O enclosures, the RIO-G loop, device adapters, and disks)

These two figures describe the main components of the I/O controller for the DS8000 series. The main components include the I/O processors, data cache, internal I/O bus (RIO-G loop), host adapters, and device adapters. Figure 6-6 on page 111 is a 2-way model and Figure 6-7 is a 4-way model.
The DS8000 series has a longer life cycle than other storage devices. 6.3.3 Model upgrades The DS8000 series models are modular systems that are designed to be built upon and upgraded from one model to another in the field, helping clients respond swiftly to changing business requirements. IBM service representatives can upgrade a Model 921 in the field when you order a model conversion to a Model 922 or Model 9A2.
Chapter 7. Copy Services

In this chapter, we describe the architecture and functions of Copy Services for the DS8000 series. Copy Services is a collection of functions that provide disaster recovery, data migration, and data duplication functions. Copy Services run on the DS8000 storage unit and they support open systems and zSeries environments.
7.1 Introduction to Copy Services Copy Services is a collection of functions that provide disaster recovery, data migration, and data duplication functions. With the Copy Services functions, for example, you can create backup data with little or no disruption to your application, and you can back up your application data to the remote site for disaster recovery. Copy Services run on the DS8000 storage unit and support open systems and zSeries environments.
Note: In this chapter, track means a piece of data in the DS8000; the DS8000 uses logical tracks to manage the Copy Services functions. See Figure 7-1 for an illustration of FlashCopy concepts.
– If the backup data has not been copied yet, it is first copied to the target volume, and only then is the track updated on the source volume.

Write to the target volume
When you write data to the target volume, it is written to the data cache and persistent memory, and FlashCopy updates the bitmaps so that the later physical background copy does not overwrite this newer data with the point-in-time data from the source.
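The bitmap handling that makes the target usable immediately can be sketched in a few lines of Python. The model below is deliberately simplified, with one flag per track and no cache or background copy scheduling, and its names are illustrative, but it reflects the copy-on-write decisions just described.

class FlashCopyRelationship:
    """Toy copy-on-write model: one 'copied' flag per track."""

    def __init__(self, source, target):
        self.source = source                     # list of track contents
        self.target = target
        self.copied = [False] * len(source)      # bitmap: track already on target?

    def read_target(self, track):
        # Not yet copied -> satisfy the read from the source (T0 data).
        if self.copied[track]:
            return self.target[track]
        return self.source[track]

    def write_source(self, track, data):
        # Preserve the T0 image before overwriting the source track.
        if not self.copied[track]:
            self.target[track] = self.source[track]
            self.copied[track] = True
        self.source[track] = data

    def write_target(self, track, data):
        # The target track now holds newer data; never overwrite it with T0 data.
        self.target[track] = data
        self.copied[track] = True

src = ["s0", "s1", "s2"]
tgt = [None, None, None]
fc = FlashCopyRelationship(src, tgt)
fc.write_source(1, "s1-new")
print(fc.read_target(1), src[1])     # s1 s1-new: the point-in-time copy is preserved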
Incremental FlashCopy

Figure 7-2 Incremental FlashCopy (the initial FlashCopy relationship is established with the change recording and persistent copy options and a control bitmap is created for each volume; when Incremental FlashCopy is started, tracks changed on the target are overwritten by the corresponding tracks from the source, tracks changed on the source are copied to the target, and a reverse operation, in which the target updates the source, is possible)

In the Incremental FlashCopy operations:
1.
Figure 7-3 Data Set FlashCopy (volume-level FlashCopy compared to data set-level FlashCopy)

Multiple Relationship FlashCopy
Multiple Relationship FlashCopy allows a source to have FlashCopy relationships with multiple targets simultaneously. A source volume or extent can be FlashCopied to up to 12 target volumes or target extents, as illustrated in Figure 7-4.
Consistency Group FlashCopy
Consistency Group FlashCopy allows you to freeze (temporarily queue) I/O activity to a LUN or volume. Consistency Group FlashCopy helps you to create a consistent point-in-time copy across multiple LUNs or volumes, and even across multiple storage units.
A more detailed discussion of the concept of data consistency and how to manage the Consistency Group operation is in 7.2.5, “What is a Consistency Group?” on page 132.

Important: Consistency Group FlashCopy creates host-based consistent copies, not application-based consistent copies. The copies have power-fail or crash-level consistency.
Persistent FlashCopy
Persistent FlashCopy allows the FlashCopy relationship to remain even after the copy operation completes. You must explicitly delete the relationship.

Inband Commands over Remote Mirror link
In a remote mirror environment, commands to manage FlashCopy at the remote site can be issued from the local or intermediate site and transmitted over the remote mirror Fibre Channel links. This eliminates the need for a network connection to the remote site solely for the management of FlashCopy.
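These options map directly to DS CLI parameters. As a small illustration (the storage image ID and volume pair below are hypothetical; see Chapter 11, “DS CLI” for the full mkflash syntax), a persistent FlashCopy relationship can be established with:

dscli> mkflash -dev IBM.2107-9999999 -persist 1000:1100

The -persist flag keeps the relationship in place after the point-in-time copy is established, so it must later be withdrawn explicitly.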
Figure 7-7 Metro Mirror (1. server write; 2. write to secondary; 3. write hit to secondary; 4. write acknowledge)

Global Copy (PPRC-XD)
Global Copy copies data non-synchronously and over longer distances than is possible with Metro Mirror. When operating in Global Copy mode, the source volume sends a periodic, incremental copy of updated tracks to the target volume, instead of sending a constant stream of updates.
Figure 7-8 Global Copy (1. server write; 2. write acknowledge; the write to the secondary is performed non-synchronously)

Global Mirror (Asynchronous PPRC)
Global Mirror provides a long-distance remote copy feature across two sites using asynchronous technology. This solution is based on the existing Global Copy and FlashCopy. With Global Mirror, the data that the host writes to the storage unit at the local site is asynchronously shadowed to the storage unit at the remote site.
Efficient synchronization of the local and remote sites with support for failover and failback modes, helping to reduce the time that is required to switch back to the local site after a planned or unplanned outage.

Figure 7-9 Global Mirror (1. server write to volume A; 2. write acknowledge; the write to the secondary B is performed non-synchronously; FlashCopy from B to C is performed automatically; automatic cycle in an active session)

How Global Mirror works
We explain how Global Mirror works in Figure 7-10 on page 127.
Figure 7-10 Global Mirror: how it works (Global Copy from the PPRC primary/FlashCopy source A at the local site to the PPRC secondary/FlashCopy target B at the remote site, with FlashCopy from B to C)

Automatic cycle in an active Global Mirror session:
1. Create a Consistency Group of volumes at the local site.
2. Send the increment of consistent data to the remote site.
3. FlashCopy at the remote site.
4. Resume Global Copy (copy out-of-sync data only).
5.
Note: When you implement Global Mirror, you set up the FlashCopy between the B and C volumes with the No Background copy and Start Change Recording options. This means that before the latest data is written to the B volumes, the last consistent data on the B volumes is moved to the C volumes. Therefore, at any given time, part of the consistent data is on the B volumes and the other part is on the C volumes.
Figure 7-11 z/OS Global Mirror (1. server write to the primary; 2. write acknowledge; the System Data Mover on the secondary server writes asynchronously and manages data consistency)

z/OS Metro/Global Mirror (3-site z/OS Global Mirror and Metro Mirror)
This mirroring capability uses z/OS Global Mirror to mirror primary site data to a location that is a long distance away and also uses Metro Mirror to mirror primary site data to a location within the metropolitan area.
Figure 7-12 z/OS Metro/Global Mirror (1. server write; 2. write to secondary; 3. write hit to secondary; 4. write acknowledge; the System Data Mover writes asynchronously and manages data consistency)

7.2.4 Comparison of the Remote Mirror and Copy functions
In this section we summarize the use of and considerations for the Remote Mirror and Copy functions.

Metro Mirror (Synchronous PPRC)
Description: Metro Mirror is a function for synchronous data copy at a distance.
Note: If you want to use the PPRC copy from an application server that has access to the PPRC primary volume, compare this function with operating system mirroring. In an open systems environment, recovering your system with PPRC secondary volumes involves some disruption, because the secondary volumes are not online to the application servers while the PPRC relationship exists. You may also need to perform some operations before assigning the PPRC secondary volumes.
Considerations: When the link bandwidth capability is exceeded by a heavy workload, the RPO might grow.

Note: Managing Global Mirror involves many fairly complex operations. Therefore, we recommend management utilities (for example, the Global Mirror Utilities) or management software (for example, IBM Multiple Device Manager) for Global Mirror.

z/OS Global Mirror (XRC)
Description: z/OS Global Mirror is an asynchronous copy controlled by z/OS host software called the System Data Mover.
Operation 3
In the Consistency Group operation, data consistency means that this sequence is always kept in the backup data, while the order of non-dependent writes does not necessarily need to be preserved. For example, consider the following two sequences:
1. Deposit paycheck in checking account A
2. Withdraw cash from checking account A
3. Deposit paycheck in checking account B
4. Withdraw cash from checking account B
Figure 7-13 Consistency Group: Example 1 (the first write, to LSS11, is not completed because of the extended long busy condition; because each write operation depends on the previous one, the second and third writes, to LSS12 and LSS13, also wait)

Because of the time lag for Consistency Group operations, some volumes in some LSSs are in an extended long busy state and other volumes in the other LSSs are not.
Figure 7-14 Consistency Group: Example 2 (the first write, to LSS11, is completed because LSS11 is not in an extended long busy condition; the second write, to LSS12, is not completed because LSS12 is in an extended long busy condition; because each write depends on the previous one, the third write, to LSS13, is not completed because the second write is not completed)
Figure 7-15 Consistency Group: Example 3 (with write operations that are independent of each other, the first write, to LSS11, is completed because LSS11 is not in an extended long busy condition; the second write, to LSS12, is not completed because LSS12 is in an extended long busy condition; the third write, to LSS13, is completed because LSS13 is not in an extended long busy condition, so the I/O sequence is not kept)
Through these interfaces, the S-HMC communicates with each server in the storage units via the Ethernet network. Therefore, the S-HMC is a key component for configuring and managing the DS8000. The network components for Copy Services are illustrated in Figure 7-16.
7.3.3 DS Command-Line Interface (DS CLI) The IBM TotalStorage DS Command-Line Interface (CLI) helps enable open systems hosts to invoke and manage the Point-in-Time Copy and Remote Mirror and Copy functions through batch processes and scripts. The CLI provides a full-function command set that allows you to check your storage unit configuration and perform specific application functions when necessary.
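For example, a nightly backup script on an open systems host could establish a no-copy FlashCopy with a single DS CLI invocation. The install path below matches the examples in Chapter 11, while the storage image ID and volume pair are hypothetical:

# invoked from a backup script on an open systems host
/opt/ibm/dscli/dscli mkflash -dev IBM.2107-9999999 -nocp 1000:1100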
IBM will support IBM TotalStorage Multiple Device Manager (MDM) for the DS8000 under the IBM TotalStorage Productivity Center in the future. MDM consists of software components that enable storage administrators to monitor, configure, and manage storage devices and subsystems within a SAN environment. MDM also has a function to manage the Copy Services functions, called the MDM Replication Manager. For further information about MDM, see 15.5, “IBM TotalStorage Productivity Center” on page 326.
Part 3
Planning and configuration
In this part we present an overview of the planning and configuration necessary before installing your DS8000. The topics include:
- Installation planning
- Configuration planning
- Logical configuration
- CLI - Command-Line Interface
- Performance
© Copyright IBM Corp. 2005. All rights reserved.
Chapter 8. Installation planning
This chapter discusses various physical considerations and preparation involved in planning the physical installation of a new DS8000 in your environment. The following topics are covered:
- Installation site preparation
- Host attachments
- Network and SAN requirements
© Copyright IBM Corp. 2005. All rights reserved.
8.1 General considerations
Successful installation of a DS8000 requires careful planning. The main considerations when planning for the physical installation of a new DS8000 include the following:
- Delivery
- Floor loading
- Space occupation
- Electrical power
- Operating environment
- Fans and cooling
- Host attachment
- Network and SAN requirements
Always refer to the most recent information for physical planning in the IBM TotalStorage DS8000 Introduction and Planning Guide, GC35-0495.
Shipping container, packaged dimensions (in centimeters and inches), and maximum packaged weight (in kilograms and pounds):

Model 9AE (expansion unit) pallet or crate:
  Height 207.5 cm (81.7 in.), Width 101.5 cm (40 in.), Depth 137.5 cm (54.2 in.); 1209 kg (2665 lb)

External S-HMC container (if ordered):
  Height 69.0 cm (27.2 in.), Width 80.0 cm (31.5 in.), Depth 120.0 cm (47.3 in.); 75 kg (165 lb)

Attention: A fully configured model in its packaging can weigh over 1406 kg (3100 lbs).
Important: Any expansion units within the storage unit must be attached to the base model on the right side (as you face the front of the units).

Installing on raised or nonraised floors
You can install your DS8000 storage units on a raised or a nonraised floor. However, installing the models on a raised floor provides the following benefits:
- Improves operational efficiency and allows greater flexibility in the arrangement of equipment.
- Increases air circulation for better cooling.
Table 8-2 The DS8000 dimensions

Configuration/Attribute        2107-921/922/9A2         2107-921/922/9A2 +        2107-922/9A2 +
                               (Base frame only)        2107-92E/9AE              2107-92E/9AE
                                                        (Expansion frame)         (2 Expansion frames)
Dimensions with covers,
Height x Width x Depth         76 x 33.3 x 46.7 in.     76 x 69.7 x 46.7 in.      76 x 102.6 x 46.7 in.
(Std/Metric)                   193 x 84.7 x 118.3 cm    193 x 172.7 x 118.3 cm    193 x 260.9 x 118.3 cm
Footprint® (Std/Metric)        10.77 ft2 / 1.002 m2     22.0 ft2 / 2.05 m2        33.23 ft2 / 3.095 m2
Note: Use this switch only in extreme emergencies. Using this switch may result in data loss.
Use the following feature codes when you specify the input voltage for your base or expansion model:
- 9090: AC input voltage 200 V to 240 V
- 9091: AC input voltage 380 V to 480 V

Power connector requirements
The cable connectors supplied with the various line cords, and the required receptacles, are covered in Chapter 4, “Meeting DS8000 delivery and installation requirements” in the IBM TotalStorage DS8000 Introduction and Planning Guide, GC35-0495.
Cooling the storage complex
Adequate airflow needs to be maintained to ensure effective cooling. You can take steps to optimize the air circulation and cooling for your storage units. To optimize the cooling around your storage units, prepare the location of your storage units as recommended in the following steps:
1. Install the storage unit on a raised floor.
Each Fibre Channel adapter has four ports. Each port has a unique worldwide port name (WWPN). You can configure the port to operate with the SCSI-FCP upper-layer protocol. Shortwave adapters and longwave adapters are available on the storage unit.
or point-to-point topologies. With Fibre Channel adapters that are configured for FICON, the storage unit provides the following configurations:
- Either fabric or point-to-point topologies
- A maximum of 64 host ports for the DS8100 Model 921 and a maximum of 128 host ports for the DS8300 Models 922 and 9A2
- A maximum of 2048 logical paths on each Fibre Channel port
- Access to all 64 control-unit images (16,384 CKD devices) over each FICON port
- CNT (INRANGE): http://www.cnt.com/ibm/
- McDATA: http://www.mcdata.com/ibm/
- Cisco: http://www.cisco.com/go/ibm/storage

Channel extension technology products
Channel extension technology product vendor documentation and Web pages should be reviewed to obtain information regarding configuration planning, hardware and software requirements, firmware and driver levels, and release notes.
- Cisco: http://www.cisco.com/go/ibm/storage
- CIENA: http://www.ciena.com/products/transport/shorthaul/cn2000/index.
8.5.2 Remote support connection requirements You must meet the requirements for the modem and for an outside connection if you will use remote support. The DS8000 S-HMC contains a modem to take advantage of remote support, which can include outbound support (call home) or inbound support (remote service performed by an IBM next level support representative).
You should also keep the following considerations in mind:
- Fibre Channel SANs can provide the capability to interconnect open systems and storage in the same network as S/390 and zSeries host systems and storage.
- A single Fibre Channel host adapter can have physical access to multiple Fibre Channel ports on the storage unit.
- For some Fibre Channel attachments, you can establish zones to limit the access of host adapters to storage system adapters.
Chapter 9. Configuration planning
This chapter discusses planning considerations related to implementing the DS8000 series in your environment. The topics covered are:
- Configuration planning overview
- Storage Hardware Management Console (S-HMC)
- DS8000 licensed functions
- Capacity planning
- Data migration planning
- Planning for performance
For a complete discussion of these topics, see the IBM TotalStorage DS8000 Introduction and Planning Guide, GC35-0495. © Copyright IBM Corp. 2005.
9.1 Configuration planning overview
When installing a DS8000 disk system, various physical requirements need to be addressed to successfully integrate the DS8000 into your existing environment. These requirements include:
- The DS Management Console (S-HMC), which is the focal point for configuration, Copy Services management, and maintenance for a DS8000 storage unit.
as shown. To interconnect two DS8000 base frames, FC1190 would provide a pair of 31m Ethernet cables to connect from port 16 of each switch in the second base frame into port 15 of the first frame. If the second S-HMC is installed in the second DS8000, it would remain plugged into port 1 of its Ethernet switches.
9.2.2 S-HMC software components
The S-HMC consists of the following software functions:
- Remote services
- DS Storage Manager
- System and partition management
- Service
- Storage facility RAS
- Storage Management Initiative Specification Common Information Model (SMI-S CIM) server
Figure 9-2 shows the different software components that were available in the ESS 2105 master console, the ESS operating system functions, and the pSeries hardware management console, which are now integrated into the S-HMC.
customer server and can be used for the configuration of a DS8000 series system at initial installation or for reconfiguration activities.
- Online configuration, which provides real-time configuration management support.
- Copy Services, which allows a user to execute Copy Services functions.
The DS Storage Manager on the internal S-HMC provides only real-time configuration, as opposed to offline configuration. The user may also opt to install an additional DS Storage Manager on the user’s own workstation.
System and partition management
Customer access to the S-HMC is provided to allow the management of optional LPAR environments. This access allows for configuration, activation, and deactivation of logical partitions.

Service
The service function of the S-HMC is typically used to perform service actions such as a repair action, the installation of a storage facility or an MES, or the installation or upgrade of licensed internal code releases.
TCP/IP interface network mask (for example, 255.255.254.0). If you plan to use a Domain Name Server (DNS) to resolve network names, you will need the IP address of your DNS server and the name of your DNS. You may have more than one DNS. You can specify a default gateway in dotted decimal form or just the name in case you are using a DNS.
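As an illustration, the values you gather for this planning step might look like the following list (all addresses and names here are hypothetical placeholders except the network mask, which is the example used above):

S-HMC network mask . . . . . 255.255.254.0
DNS server address . . . . . 10.10.10.2  (name: dns1.mycompany.com)
Default gateway  . . . . . . 10.10.10.1  (or its DNS name, for example gw.mycompany.com)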
Figure: S-HMC network topology (the storage facility processor complexes 0 and 1 with their functional service processors (FSPs), the Storage Facility Image (SFI) complexes 0 and 1, the internal 172.16 BLACK network, the management console, and the firewall between the customer network and the Internet)
request an IP connection, or if the modem is unavailable, will use voice phone or e-mail to request an IP connection. Note: The IP connection initiated by the S-HMC will always be to a specific telephone number for modem access or to a specific IP address for internet access. The communication will be by way of VPN and will use data encryption.
Security mechanism 3 - Login security
When the network connection and session are established, the IBM service personnel will be able to log into the S-HMC, without a secure password, for the purpose of collecting problem determination data and sending it to the IBM data collection site.
Note: Data sent using the ftp protocol is neither encrypted nor authenticated. Regardless of the type of connectivity (VPN or ftp), no customer data is transmitted to IBM.

9.3 DS8000 licensed functions
Licensed functions are the storage unit’s operating system and functions. Each licensed function indicator feature selected on a DS8000 base unit enables that function at the system level. These features enable a licensed function subject to you applying a feature activation code made available by IBM.
Table 9-3 Operating environment license feature codes

Feature code   Description
7000           OEL-inactive
7001           OEL-1TB
7002           OEL-5TB
7003           OEL-10TB
7004           OEL-25TB
7005           OEL-50TB
7010           OEL-100TB

Licensed functions are activated and enforced within a defined license scope. License scope refers to the following types of storage and servers with which the function can be used:
- Fixed block (FB) - The function can be used only with data from Fibre Channel-attached servers.
Copy functions, you specify the feature code that represents the physical capacity to authorize for the function. Table 9-5 provides the feature codes for the Point-in-Time Copy function. (The codes apply to models 921, 922, and 9A2.)
Table 9-7 Remote Mirror for zSeries (RMZ) feature codes

Feature code   Description
7600           RMZ-inactive
7601           RMZ-1TB unit
7602           RMZ-5TB unit
7603           RMZ-10TB unit
7604           RMZ-25TB unit
7605           RMZ-50TB unit
7610           RMZ-100TB unit

9.3.5 Parallel Access Volumes (2244 Model PAV)
The Parallel Access Volumes model and features establish the extent of IBM authorization for the use of the Parallel Access Volumes licensed function. Table 9-8 provides the feature codes for the PAV function.
Figure 9-4 User authorized to FlashCopy 25 TB of CKD data (a DS8000 with 45 TB of disk capacity in total, 20 TB FB plus 25 TB CKD, with 25 TB of CKD FlashCopy authorization at a license scope of CKD and 45 TB of OEL licensed function; the user is authorized to FlashCopy up to 25 TB of CKD data but cannot FlashCopy any FB data)

As illustrated in Figure 9-5 on page 172, the user decides to change the license scope from CKD to ALL and FlashCopy the FB data as well.
Figure 9-5 (a DS8000 with 45 TB of disk capacity in total, 45 TB of FlashCopy authorization, 45 TB of OEL licensed function, and a license scope of ALL; the user wants to FlashCopy 10 TB of FB and 12 TB of CKD data and can FlashCopy up to 20 TB of FB user data)

The user has 20 TB of FB data and 25 TB of CKD data allocated. The user has to purchase a Point-in-Time Copy function equal to the total of both the FB and CKD capacity, that is, 20 TB (FB) + 25 TB (CKD), which equals 45 TB.
For Remote Mirror and Copy, both machines require authorization. For example, with a primary DS8000 of 45 TB total disk capacity and a secondary DS8000 of 45 TB disk capacity, you need 45 TB of Remote Mirror and Copy authorization for the primary DS8000 and 45 TB for the secondary DS8000, as well as 45 TB of OEL licensed function for the primary DS8000 and 45 TB of OEL licensed function for the secondary DS8000.
Activation refers to the retrieval and installation of the feature activation code into the DS8000 system. The feature activation code is obtained using the DSFA Web site and is based on the license scope and license value. The high-level steps for storage feature activation are:
1. Have machine-related information available (model, serial number, and machine signature). This information is obtained from the DS8000 Storage Manager.
2. Log onto the DSFA Web site.
DS8000 adopts a virtualization concept, which was introduced in Chapter 5, “Virtualization concepts” on page 83. The logical capacity depends on the RAID type and the number of spare disks in a rank. We explain the capacity with the following figures. Attention: The figures used in this section are still subject to change. Specifically, the exact number of extents per array may be slightly different.
FB RAID rank capacity

Rank            Spare / No spare   Extents   Binary GB   Decimal GB
RAID10 73GB     Spare              192       192         206.16
RAID10 73GB     No spare           256       256         274.88
RAID10 146GB    Spare              386       386         414.46
RAID10 146GB    No spare           519       519         557.27
RAID10 300GB    Spare              785       785         842.89
RAID10 300GB    No spare           1,048     1,048       1,125.28
RAID5 73GB      Spare              386       386         414.46
RAID5 73GB      No spare           450       450         483.18
RAID5 146GB     Spare              779       779         836.44
RAID5 146GB     No spare           909       909         976.03
RAID5 300GB     Spare              1,582     1,582       1,698.66
RAID5 300GB     No spare           1,844     1,844       1,979.98
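As the table implies, one extent corresponds to one binary gigabyte (2^30 bytes), so the decimal capacity is the number of extents multiplied by 2^30 / 10^9 = 1.073741824. For example, a RAID-10 rank built from 73 GB DDMs with a spare provides 192 extents, and 192 x 1.073741824 is approximately 206.16 decimal GB, which matches the first row of the table.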
9.4.3 Sparing examples This section provides some examples for configuring each rank according to the rule of sparing disks.
Sparing example 2 – RAID-10: all arrays of the same capacity and RPM
Figure 9-10 Sparing example 2: RAID-10 - All same capacity, same RPM (the DA pair holds two 4x2 arrays and two 3x2+2S arrays; all devices are assumed to have the same capacity and RPM; there is a minimum of 4 spares per DA pair, with 2 spares per loop and 2 spares in each array group; additional RAID-10 arrays will be 4x2 and any additional RAID-5 arrays will be 7+P; all spares are available to all arrays on the DA pair)

In Figure 9-10, four RAID-10 arrays are in
Sparing example 3 – RAID-5: first 4 arrays 146 GB and next 2 arrays 300 GB (same RPM)
Figure: six 6+P arrays across the DA pair, four on 146 GB DDMs and two on 300 GB DDMs. Notes from the figure: all devices are assumed to have the same RPM; there is a minimum of 4 spares per DA pair, with 2 spares per loop and 2 spares in each array group; additional 146 GB RAID arrays will be 7+P (RAID-5) or 4x2 (RAID-10); a minimum of 4 spares of the largest capacity array site on the DA pair is required, so the next two 300 GB arrays will also be 6 +
Sparing example 4 – RAID-5: first 4 arrays 146 GB 10k RPM and next 4 arrays 73 GB 15k RPM
Figure: 6+P arrays on 146 GB 10k RPM DDMs and on 73 GB 15k RPM DDMs across the DA pair. Notes from the figure: a mixture of devices with different RPMs is assumed; there is a minimum of 4 spares per DA pair, with 2 spares per loop and 2 spares in each array group; additional 146 GB / 10k RPM RAID arrays will be 7+P (RAID-5) or 4x2 (RAID-10); a minimum of 2 spares of capacity an
A Standby CoD disk set contains 16 disk drives of the same capacity and RPM (10000 or 15000 RPM). With this offering, up to four Standby CoD disk drive sets (64 disk drives) can be factory or field installed into your system. To activate, you logically configure the disk drives for use. This is a non-disruptive activity that does not require intervention from IBM. Upon activation of any portion of the Standby CoD disk drive set, you must place an order with IBM to initiate billing for the activated set.
Figure 9-13 DDM to DA mapping (2-way model): each disk enclosure pair can hold 32 DDMs and is labeled with the DA pair it attaches to; an I/O drawer labeled 0/1 contains the device adapters for DA pairs 0 and 1; S0 indicates server 0; the disk enclosure pair installation sequence proceeds across the base frame and the expansion frame.

Model 921 can have four DA pairs and 12 disk enclosure pairs (one disk enclosure pair can have 16 x 2 = 32 DDMs).
Figure 9-14 DDM to DA mapping (4-way model): each disk enclosure pair can hold 32 DDMs and is labeled with the DA pair it attaches to; the I/O drawers contain the device adapters for DA pairs 0/1, 1/0, 4/5, 5/4, 2/3, 3/2, 6/7, and 7/6; S0 indicates server 0; the disk enclosure pair installation sequence proceeds across the base frame and expansion frames 1 and 2.

Model 922 can have eight DA pairs and 20 disk enclosure pairs.
9.5.1 Operating system mirroring
Logical Volume Manager (LVM) mirroring and Veritas Volume Manager mirroring cause little or no application service disruption, and the original copy stays intact while the second copy is being made. The disadvantages of this approach include:
- Host cycles are utilized.
- It is labor intensive to set up and test.
- There is a potential for application delays due to the dual writes that occur.
- It does not allow for point-in-time copy or easy backout once the first copy is removed.
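For illustration only, this is roughly what the approach looks like with AIX LVM commands. The logical volume name datalv and the disks hdisk4 (new DS8000 LUN) and hdisk0 (old disk) are hypothetical, and the exact options should be checked against your AIX documentation:

# add a second copy of the logical volume on the new DS8000 disk and synchronize it
mklvcopy datalv 2 hdisk4
syncvg -l datalv
# after cutover, remove the copy that resides on the old disk, leaving one copy
rmlvcopy datalv 1 hdisk0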
- Minimal host application outages.
The disadvantages of remote copy technologies are:
- The same storage device types are required. For example, in a Metro Mirror configuration you need an ESS 800 mirroring to a DS8000 (or an IBM approved configuration); you cannot have a non-IBM disk system mirroring to a DS8000.
- The physical volume ID (PVID) and device name are not maintained if the volumes are not under LVM.
Data migration methods

Environment                Data migration method
S/390                      IBM TotalStorage Global Mirror, Remote Mirror and Copy (when available)
zSeries                    IBM TotalStorage Global Mirror, Remote Mirror and Copy (when available)
Linux environment          IBM TotalStorage Global Mirror, Remote Mirror and Copy (when available)
z/OS operating system      DFSMSdss (simplest method)
                           DFSMShsm
                           IDCAMS Export/Import (VSAM)
                           IDCAMS Repro (VSAM, SAM, BDAM)
                           IEBCOPY
                           ICEGENER, IEBGENER (SAM)
                           Specialized database utilities for CICS, DB2 or
9.6.1 Disk Magic An IBM representative or an IBM Business Partner can model your workload using Disk Magic before migrating to the DS8000. Modelling should be based on performance data covering several time intervals, and should include peak I/O rate, peak R/T and peak (read and write) MB/second throughput. Disk Magic will provide insight when you are considering deploying remote technologies such as Metro Mirror. Consult your sales representative for assistance with Disk Magic.
For example, in the zSeries environment, the RMF™ RAID rank report can be used to investigate RAID rank saturation when the DS8000 is already installed in your environment. New counters will be reported by RMF that will provide statistics on a volume level for channel and disk analysis. In an open system environment the iostat command is useful to determine whether a system’s I/O load is balanced or whether a single volume is becoming a performance bottleneck.
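For example, sampling iostat for a few intervals gives a quick view of whether the I/O load is spread evenly across the DS8000 volumes. The interval and count below are arbitrary, and option names differ slightly between UNIX variants:

iostat 5 3

A single device that is consistently much busier than the rest in these reports indicates a potential performance bottleneck.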
Chapter 10. The DS Storage Manager - logical configuration
In this chapter, the following topics are discussed:
- Configuration hierarchy, terminology, and concepts
- Summary of the DS Storage Manager logical configuration steps
- Introducing the GUI and logical configuration panels
- The logical configuration process
© Copyright IBM Corp. 2005. All rights reserved.
10.1 Configuration hierarchy, terminology, and concepts
The DS Storage Manager provides a powerful, flexible, and easy to use application for the logical configuration of the DS8000. It is the client’s responsibility to configure the storage server to fit their specific needs. It is not in the scope of this redbook to show detailed steps and scenarios for every possible setup. Help and guidance can be obtained from an IBM FTSS or an IBM Business Partner if the client requires further assistance.
Figure: a host (pSeries1) with four Fibre Channel ports, each with its own WWPN (WWPN 1 through WWPN 4); no host attachment group is defined for host pSeries1.
One set of host definitions may be used for multiple storage images, storage units, and storage complexes. DDM A Disk Drive Module (DDM) is a field replaceable unit that consists of a single disk drive and its associated packaging. DDMs are ordered in a group of 16 called a drive set. A drive set is installed across two disk enclosures, 8 DDMs in the front disk enclosure and 8 DDMs in the rear disk enclosure. When ordering disk enclosures they come in pairs, each able to hold 16 drives.
Figure 10-2 An extent pool containing 2 volumes (volumes 2310 and 7501, built from 1 GB extents on ranks formatted for FB data)

Figure 10-2 is an example of one extent pool composed of ranks formatted for FB data. Two logical volumes are defined (volumes 2310 and 7501). Each volume is made up of 6 extents of 1 GB each, which makes each volume 6 GB. The numbering sequence of LUN ID 2310, shown in the diagram as the top volume, translates into an address.
There are thresholds that warn you when you are nearing the end of space utilization on the extent pool. There is a Reserve space option that will prevent the logical volume from being created in reserved space until the space is explicitly released. Note: A user can't control or specify which ranks in an extent pool are used when allocating extents to a volume.
They contain one or more host attachments from different hosts and one or more LUNs. This allows sharing of the volumes/LUNs in the group with other host port attachments or other hosts that might be, for example, configured in clustering. A specific host attachment can be in only one volume group. Several host attachments can be associated with one volume group. For example, a port with a WWPN number of 10000000C92EF123 can only reside in one volume group, not two or more.
Figure: host systems A, B, and C, each with host attachments identified by WWPNs, associated with volume groups 1 and 2. It is possible to have several host attachments associated with one volume group. For ease of management, we recommend associating only one host attachment with each volume group.
Figure 10-5 shows an example of the relationship between LSSs, extent pools, and volume groups: Extent pool 4, consisting of two LUNs, LUN 2210 and LUN 7401; and extent pool 5, consisting of three LUNs, LUN 2313, 7512, and 7515. Here are some considerations regarding the relationship between LSS, extent pools, and volume groups: Volumes from different LSSs and different extent pools can be in one volume group as shown in Figure 10-5.
– FB LSSs’ definitions are configured during the volume creation. LSSs have no predetermined relation to physical ranks or extent pools other than their server affinity to either Server0 or Server1.
– One LSS can contain volumes/LUNs from different extent pools.
– One extent pool can contain volumes/LUNs that are in different LSSs, as shown in Figure 10-6.
– One LCU can contain CKD volumes/LUNs of different types, for example, type 3390 Model 3 and Model 9.
Tip: We recommend that you reserve all LSSs in address group 0 (LSSs 00-0F) for CKD ESCON attachments because ESCON attachments must use address group 0.

10.1.2 Summary of the DS Storage Manager logical configuration steps
It is our recommendation that you review the following concepts before performing the logical configuration. These recommendations are discussed in this chapter.
Figure 10-7 View of the raw DDM to LUN relationship (virtualization layer hierarchy: raw DDM layer with DDM 16-packs X and Y; array site layer with array sites X and Y; array layer with RAID-5 or RAID-10 arrays X and Y; rank layer with ranks 1 and 2, formatted for CKD or FB; extent pool layer, with each extent equal to 1 GB; and the volume/LUN layer presented to the host)

Raw or physical DDM layer
At the very top of Figure 10-7 you can see the raw DDMs. There are 16 DDMs in a disk drive set.
Figure 10-8 Diagram of how parity is striped across physical disks (each stripe of data chunks includes one parity chunk, with the parity rotated across the disks, plus a spare)

Rank layer
At this level the ranks are formed.
1. Create the Storage Complex.
2. Create the Storage Units (Storage Facility) and define the associated Storage Images (Storage Facility Images).
3. Define the Host Systems and associated Host Attachments.
4. Create Arrays by selecting Array Sites. Array Sites are already automatically pre-defined.
5. Create Ranks and add Arrays to the ranks.
6. Create Extent Pools, add ranks to the Extent Pools, and define the server 0 or server 1 affinity.
7. Create Logical Volumes and add the Logical Volumes to LSSs.
Figure 10-10 Entering the URL using the TCP/IP address for the S-HMC

In Figure 10-10, we show the TCP/IP address followed by the port number 8451.

Figure 10-11 Entering the URL using the fully qualified name of your S-HMC

In Figure 10-11, we show the fully qualified name and the port number 8452, separated by a colon. For ease of identification, you could add a suffix such as 0 or 1 to the selected fully qualified name, for example, SHMC_0 for the default S-HMC as shown in Figure 10-11.
Web browser running directly to the on-board S-HMC, or in a remote machine connected into the user’s network. Once the GUI is started and the user has successfully logged on, the Welcome panel shown in Figure 10-12 is displayed.

Figure 10-12 The Welcome panel (the triangle expands the menu)

Figure 10-12 shows the Welcome panel’s two menu choices. Click the triangle beside either menu item to expand the menu; this displays the options needed to configure the storage.
support. A view of the fully expanded Real-time manager menu choices is shown in Figure 10-13.

Figure 10-13 The fully expanded Real-time manager menu choices

– Copy Services
You can use the Copy Services selections of the DS Storage Manager if you chose Real-time during the installation of the DS Storage Manager and you purchased these optional features. A further requirement to using the Copy Services features is to apply the license activation keys.
Figure 10-14 The fully expanded Simulated manager menu choices

The following items should be considered as first steps in the use of either of these modes.

Log in
Logging in to the DS Storage Manager requires that you provide your user name and password. This function is generally administered through your system administrator and by your company policies.
Figure 10-15 User administration panel

Click Go to advance to the panel shown in Figure 10-16.

Figure 10-16 Add User panel

You can grant user access privileges from the panel shown in Figure 10-16.

Using the help panels (information center)
The information center displays product and application information. The system provides a graphical user interface for browsing and searching online documentation.
Figure 10-17 View of the information center

10.2.3 Navigating the GUI
Knowing which icons, radio buttons, and check boxes to click in the GUI will help you properly navigate your way through the configurator and successfully configure your storage.

Figure 10-18 The DS Storage Manager Welcome panel

The picture icons that appear near the top of the Welcome screen, and that are identified by number in Figure 10-18, have the following meanings:
1.
2. Icon 2 will hide the black banner across the top of the screen, again to increase the space available to display the panel you are working on.
3. Icon 3 allows you to properly log out and exit the DS Storage Manager GUI.
4. Icon 4 accesses the Information Center. You get a help menu screen that prompts you for input on help topics.
Figure 10-19 shows how the screen looks if you expand the work area using icons 1 and 2 from the previous illustration.
2. Deselect All
3. Show Filter Row
4. Clear All Filters
5. Edit Sort
6. Clear All Sorts
The caret button (number 7) is for a simple ascending/descending sort on a single column. Clicking the pull-down (number 8) results in the expanded action list shown in Figure 10-21. Near the center of the list you can access the same selection and filtering options mentioned previously.
In the example shown in Figure 10-22, the radio button is checked to allow specific host attachments for selected storage image I/O ports only. The check box has also been selected to show the recommended location view for the attachment.

10.3 The logical configuration process
We recommend that you configure your storage environment in the following order. This does not mean that you have to follow this guide exactly.
Figure 10-24 The Create Storage Complex panel, with the Nickname and Description defined

Do not click Create new storage unit at the bottom of the screen as shown in Figure 10-24. Click Next, then Finish in the verification step.

10.3.2 Configuring the storage unit
To create the storage unit, expand the Manage Hardware section, click Storage Units (2), click Create from the Select Action pull-down, and click Go. Follow the panel directions with each advancing panel.
Figure 10-25 The General storage unit information panel

Fill in the required fields as shown in Figure 10-25, and choose the following:
1. Click the Machine Type-Model from the pull-down.
2. Fill in the Nickname.
3. Type in the Description.
4. Click the Select Storage complex pull-down, and choose the storage complex on which you wish to create the storage image.
Figure 10-26 View of the Defined licensed function panel (with the configuration advancement steps highlighted)

Fill in the fields shown in Figure 10-26 with the following information:
- The quantity of images
- The number of licensed TBs for the Operating environment
- The quantity of storage covered by the FlashCopy license, in TB
Figure 10-27 View of the Specify DDM packs panel, with the Quantity and DDM type added

5. Click Add and Next to advance to the Specify I/O adapter configuration panel shown in Figure 10-28.

Figure 10-28 Specify I/O adapter configuration panel

Enter the appropriate information and click Next. The storage facility image will be created automatically, by default, as you create the storage unit.
10.3.3 Configuring the logical host systems
To create a logical host for the storage unit that you just created, click Host Systems as shown in Figure 10-29. You may want to expand the work area.

Figure 10-29 Create host systems, screen 1

You can expand the view by clicking the left arrow in the My Work area as shown in Figure 10-29; the expanded view is shown in Figure 10-30.
Figure 10-31 View of the General host information panel

Click Next to advance to the Define host ports panel shown in Figure 10-32.

Figure 10-32 View of the Define host ports panel

Enter the appropriate information on the Define host ports panel shown in Figure 10-32.

Note: Selecting “Group ports to share a common set of volumes” will group the host ports together into one attachment.
Click Add, and the Define host ports panel will be updated with the new information as shown in Figure 10-33.

Figure 10-33 Define host ports panel, with updated host information

Click Next, and the screen will advance to the Select storage images panel shown in Figure 10-34.

Figure 10-34 Select storage images panel

Highlight the Available storage images that you wish to use, then click Add and Next.
The screen will advance to the Specify storage image parameters section shown in Figure 10-35.

Figure 10-35 Specify storage image parameters panel

Make the following entries and selections on the Specify storage image parameters panel:
1. Click the Select volume group for host attachment pull-down and highlight Select volume group later.
2. Click any valid storage image I/O port under the “This host attachment can login to” field.
3. Click Apply assignment and OK.
4. Verify and click Finish.
Figure 10-36 The Definition method panel

From the Definition method panel, if you choose Create arrays automatically, the system will automatically take all the space from the array site and place it into an array. Physical disks from any array site could be placed, through a predetermined algorithm, into the array. It is at this point that you create the RAID-5 or RAID-10 format and striping in the array being created.
Click Next to advance to the Add arrays to ranks panel shown in Figure 10-38. If you click the check box next to Add these arrays to ranks, you will not have to configure the ranks separately at a later time. The ranks can be either FB or CKD; this is specified in the Storage type pull-down shown in Figure 10-38.

Figure 10-38 The Add arrays to ranks panel with FB selected

Click Next and Finish to configure the arrays and ranks in one step.
Figure 10-39 The Definition method panel

The extent pools are given either a server 0 or server 1 affinity at this point, as shown in Figure 10-40.

Figure 10-40 The Define properties panel

Click Next and Finish.

10.3.6 Creating FB volumes from extents
Under the Simulated Manager, expand the Open Systems section and click Volumes. Click Create from the Select Action pull-down and click Go. Follow the panel directions with each advancing window.
Choose the extent pool from which you wish to configure the volumes, as shown in Figure 10-41.

Figure 10-41 The Select extent pool panel

Determine the quantity and size of the volumes. Use the calculators to determine the maximum size versus quantity, as shown in Figure 10-42.

Figure 10-42 The Define volume properties panel
It is here that the volume takes on the LSS numbering affinity.

Note: Since server 0 was selected for the extent pool, only even LSS numbers are selectable, as shown in Figure 10-42.

You can give the volume a unique name and number as shown in Figure 10-43. This can be helpful for managing the volumes.

Figure 10-43 The Create volume nicknames panel

Click Next and Finish to end the process of creating the volumes.
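If you prefer to script this step, the DS CLI provides an equivalent volume-creation command. The following is only a sketch: the storage image ID, extent pool P0, the 6 GB capacity, and the volume IDs 2310-2311 are hypothetical, and the mkfbvol parameters should be verified against the DS CLI documentation for your code level.

dscli> mkfbvol -dev IBM.2107-9999999 -extpool P0 -cap 6 2310-2311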
Figure 10-44 The Define volume group properties panel, filled out

4. Select the host attachment you wish to associate the volume group with. See Figure 10-45.

Figure 10-45 The Select host attachments panel with an attachment selected
5. Select volumes on the Select volumes for group panel shown in Figure 10-46.

Figure 10-46 The Select volumes for group panel

Click Next and Finish.

10.3.8 Assigning LUNs to the hosts
Under Simulated Manager, perform the following steps to configure the volumes:
1. Click Volumes.
2. Select the check box next to the volume that you want to assign.
3. Click the Select Action pull-down, and highlight Add To Volume Group.
4. Click Go.
5. Click the check box next to the desired volume group and click Apply.
4. Click Go.
5. Click OK.

10.3.10 Creating CKD LCUs
Under Simulated Manager, zSeries, perform the following steps:
1. Click LCUs.
2. Click the Select Action pull-down and highlight Create.
3. Click Go.
4. Click the check box next to the LCU ID you wish to create.
5. Click Next.
6. In the panel returned, make the following entries:
   a. Enter the desired SSID.
   b. Select the LCU type.
   c. Accept the defaults on the other input boxes, unless you are using Copy Services.
7. Click Next.
8. Click Finish.
12. Under the Define alias assignments panel, do the following:
    a. Click the check box next to the LCU number.
    b. Enter the starting address.
    c. Specify the order as Ascending or Descending.
    d. Select the number of aliases per volume, for example, 1 alias to every 4 base volumes, or 2 aliases to every 1 base volume.
    e. Click Next.
13. On the Verification panel, click Finish.

10.3.12 Displaying the storage unit WWNN
To display the WWNN of the storage unit:
1. Click Real-time Manager as shown in Figure 10-47.
Figure 10-49 View of the WWNN in the General panel

10.4 Summary
In this chapter we have discussed the configuration hierarchy, terminology, and concepts. We have recommended an order and methodology for configuring the DS8000 storage server. We have included some logical configuration steps and examples and explained how to navigate the GUI.
Chapter 11. DS CLI
This chapter provides an introduction to the DS Command-Line Interface (DS CLI), which can be used to configure and maintain the DS6000 and DS8000 series. It also describes how the DS CLI can be used to manage Copy Services relationships.
11.1 Introduction
The IBM TotalStorage DS Command-Line Interface (the DS CLI) is a software package that allows open systems hosts to invoke and manage Copy Services functions as well as to configure and manage all storage units in a storage complex. The DS CLI is a full-function command set. In addition to the DS6000 and DS8000, the DS CLI can also be used to manage Copy Services on the ESS 750s and 800s, provided they are on ESS code versions 2.4.2.x and above.
- Manage host access to volumes
- Configure host adapter ports
The DS CLI can be used to invoke the following Copy Services functions:
- FlashCopy - Point-in-Time Copy
- IBM TotalStorage Metro Mirror - Synchronous Peer-to-Peer Remote Copy (PPRC)
- IBM TotalStorage Global Copy - PPRC-XD
- IBM TotalStorage Global Mirror - Asynchronous PPRC

Restriction: The Copy Services functions in the December 2004 release of the DS CLI only support the creation of point-in-time copies (FlashCopy).
The install process does not vary significantly by operating system. It consists of the following steps:
1. The DS CLI CD is placed in the CD-ROM drive (and mounted if necessary).
2. If using a command line, the user changes to the root directory of the CD. There is a setup command for each supported operating system. The user issues the relevant command and then follows the prompts. If using a GUI, the user navigates to the CD root directory and clicks the relevant setup executable.
3. The DS CLI is then installed.
Figure 11-1 Command flow for ESS 800 Copy Services commands (a CLI script on an open systems host uses the ESS CS CLI software and the network interface to reach the CS server A or B on cluster 1 or cluster 2 of the ESS 800; the CS server passes the commands to the CS clients on each cluster)

A CS server is now able to manage up to eight F20s and ESS 800s. This means that up to sixteen clusters can be clients of the CS server. All FlashCopy and remote copy commands are sent to the CS server, which then sends them to the relevant client on the relevant ESS.
Figure 11-2 DS CLI Copy Services command flow (a CLI script on an open systems host uses the DS CLI software to reach the CLI interpreter on the Storage HMC through its external network interface; the S-HMC communicates with the CLI interfaces on server 0 and server 1 of the DS8000 over dual internal network interfaces)

DS8000 split network
One thing that you may notice about Figure 11-2 is that the S-HMC has different network interfaces.
Figure 11-3 Command flow for the DS6000 (a CLI script on an open systems host uses the DS CLI software to reach the CLI interpreter on the DS Storage Management PC, which communicates with the CLI interfaces on controller 1 and controller 2 of the DS6000)

For the DS6000, it is possible to install a second network interface card within the DS Storage Manager PC. This would allow you to connect it to two separate switches for improved redundancy.
Figure 11-4 CLI co-existence (ESS 800 tasks on an open systems host use the ESS CLI software to reach the infoservers on clusters 1 and 2 of the ESS 800, while DS8000 tasks use the DS CLI software to reach the CLI interpreter on the Storage HMC, which communicates with server 0 and server 1 of the DS8000)

Storage management
ESS CLI commands that are used to perform storage management on the ESS 800 are issued to a process known as the infoserver.
11.6 User security
The DS CLI software must authenticate with the S-HMC or CS Server before commands can be issued. An initial setup task will be to define at least one userid and password whose authentication details are saved in an encrypted file. A profile file can then be used to identify the name of the encrypted password file. Scripts that execute DS CLI commands can use the profile file to get the password needed to authenticate the commands.
csadmin        op_copy_services
exit status of dscli = 0
C:\Program Files\IBM\dscli>

It is also possible to include single commands in a script, though this is different from the script mode described later. This is because every command that uses the DS CLI would invoke the DS CLI and then exit it. A simple Windows script is shown in Example 11-2.
contain only DS CLI commands. This is because all commands in the script are executed by a single instance of the DS CLI interpreter. Comments can be placed in the script if they are prefixed by a hash (#). A simple example of a script mode script is shown in Example 11-5.

Example 11-5 DS CLI script mode example
# This script issues the 'lsuser' command
lsuser
# end of script

In this example, the script was placed in a file called listAllUsers.cli, located in the scripts folder within the DS CLI folder.
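Such a script is then passed to the DS CLI with the -script parameter. This is a minimal sketch, assuming the userid and password are picked up from the profile or encrypted password file described later in this chapter:

dscli -script scripts/listAllUsers.cli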
Example 11-8 Use of the help -l command
dscli> help -l mkflash
mkflash [ { -help|-h|-? } ] [-fullid] [-dev storage_image_ID] [-tgtpprc] [-tgtoffline]
   [-tgtinhibit] [-freeze] [-record] [-persist] [-nocp] [-wait]
   [-seqnum Flash_Sequence_Num] SourceVolumeID:TargetVolumeID

Man pages
A “man page” is available for every DS CLI command. Man pages are most commonly seen in UNIX-based operating systems to give information about command capabilities.
echo A DS CLI application error occurred.
goto end
:level5
echo An authentication error occurred. Check the userid and password.
goto end
:level4
echo A DS CLI Server error occurred.
goto end
:level3
echo A connection error occurred. Try pinging 10.0.0.1
echo If this fails call network support on 555-1001
goto end
:level2
echo A syntax error. Check the syntax of the command using online help.
goto end
:level0
echo No errors were encountered.
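The same kind of checking can be done on a UNIX or Linux host by testing the exit status directly. This is a minimal sketch that reuses the FlashCopy command and default install path shown later in Example 11-19; the message text is our own:

/opt/ibm/dscli/dscli mkflash -nocp 1004:1005
rc=$?
if [ $rc -ne 0 ]; then
    echo "dscli ended with exit status $rc - check the preceding messages"
fi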
# The following command checks the status of the ranks
lsrank -dev IBM.2107-9999999
# The following command assigns rank0 (R0) to extent pool 0 (P0)
chrank -extpool P0 -dev IBM.2107-9999999 R0
# The following command creates ...
mklcu -dev IBM.2107-9999999 -ss ...
# The following command creates ...
mklcu -dev IBM. ...
Migration considerations If your environment is currently using the ESS CS CLI to manage Copy Services on your model 800s, you could consider migrating your environment to the DS CLI. Your model 800s will need to be upgraded to a microcode level of 2.4.2 or above. If your environment is a mix of ESS F20s and ESS 800s, it may be more convenient to keep using only the ESS CLI. This is because the DS CLI cannot manage the ESS F20 at all, and cannot manage storage on an ESS 800.
Figure 11-6 A portion of the tasks listed by using the GUI

In Example 11-12, the list task command is used. This is an ESS CLI command.

Example 11-12 Using the list task command to list all saved tasks (only the last five are shown)
arielle@aixserv:/opt/ibm/ESScli > esscli list task -s 10.0.0.1 -u csadmin -p passw0rd
Wed Nov 24 10:29:31 EST 2004 IBM ESSCLI 2.4.
Figure 11-7 Using the GUI to get the contents of a FlashCopy task

It makes more sense, however, to use the ESS CLI show task command to list the contents of the tasks, as depicted in Example 11-13.

Example 11-13 Using the command line to get the contents of a FlashCopy task
mitchell@aixserv:/opt/ibm/ESScli > esscli show task -s 10.0.0.1 -u csadmin -p passw0rd -d "name=Flash10041005"
Wed Nov 24 10:37:17 EST 2004 IBM ESSCLI 2.4.
Table 11-3 Converting a FlashCopy task to DS CLI

ESS CS CLI parameter   Saved task parameter   DS CLI conversion      Explanation
Tasktype               FCEstablish            mkflash                An FCEstablish becomes a mkflash.
Options                NoBackgroundCopy       -nocp                  To do a FlashCopy no-copy we use the -nocp parameter.
SourceServer           2105.23953             -dev IBM.2105-23953    The format of the serial number changes. You must use the exact syntax.
TargetServer           2105.23953             N/A                    We only need to use the -dev once, so this is redundant.
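Putting these conversions together, and assuming (as the task name Flash10041005 suggests) that the source and target volumes are 1004 and 1005, the equivalent DS CLI command would be:

dscli> mkflash -dev IBM.2105-23953 -nocp 1004:1005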
11.10.4 Using DS CLI commands via a single command or script Having translated a saved task into a DS CLI command, you may now want to use a script to execute this task upon request. Since all tasks must be authenticated you will need to create a userid. Creating a user ID for use only with ESS 800 When using the DS CLI with an ESS 800, authentication is performed by using a userid and password created with the ESS Specialist.
pwfile: passwd
# Default target Storage Image ID
# If the -dev parameter is needed in a command then it will default to the value here
# "devid" is equivalent to "-dev storage_image_ID"
# the default server that DS CLI commands will be run on is 2105 800 23953
devid: IBM.2105-23953

If you don’t want to create an encrypted password file, or do not have access to a simulator or the real S-HMC, then you can specify the userid and password in plain text.
anthony@aixsrv:/opt/ibm/dscli >

The command can also be placed into a file and that file made executable. An example is shown in Example 11-19.

Example 11-19 Creating an executable file
anthony@aixsrv:/home > echo “/opt/ibm/dscli/dscli mkflash -nocp 1004:1005” > flash1005
anthony@aixsrv:/home > chmod +x flash1005
anthony@aixsrv:/home > ./flash1005
CMUC00137I mkflash: FlashCopy pair 1004:1005 successfully created.
anthony@aixsrv:/home >
Chapter 12. Performance considerations
This chapter discusses early performance considerations regarding the DS8000 series. Disk Magic modelling for the DS8000 is available as of December 03, 2004. Contact your IBM sales representative for more information about this tool and the benchmark testing that was done by the Tucson performance measurement lab. Note that Disk Magic is an IBM internal modelling tool.
12.1 What is the challenge?
In recent years, new storage servers have been developed at a pace that rivals the pace at which processor development introduces new processors. On the other hand, investment protection, as a way to contain total cost of ownership (TCO), dictates inventing smarter architectures that allow for growth at the component level.
Figure 12-1 Host server and storage server comparison: balanced throughput challenge (a host server with processors, memory, and adapters connecting to storage servers; a storage server with adapters, processors, and memory connecting to host servers)

The challenge is obvious: develop a storage server, from the host adapters at the top down to the disk drives, that creates a balanced system with respect to each component within the storage server and with respect to their interconnectivity.
12.2.1 SSA backend interconnection
Serial Storage Architecture (SSA) connectivity, with its SSA loops in the lower level (back end) of the storage server, imposed RAID rank saturation and reached its limit of 40 MB/second for a single-stream file I/O. IBM decided not to pursue SSA connectivity, despite its ability to communicate and transfer data within an SSA loop without arbitration.
relatively small logical volumes, we ran out of device numbers to address an entire LSS. This happens even earlier when configuring not only real devices (3390B) within an LSS, but also alias devices (3390A) within an LSS in z/OS environments. Note that an LSS is equivalent to a logical control unit (LCU) in this context. An LCU is only relevant in z/OS, and the term is not used for open systems operating systems.
To host servers Host server Processor Adapter Adapter Adapter Adapter Processor Memory Processor To storage servers Adapter Storage server Memory Processor Adapter 20 port switch o oo 16 DDM 20 port switch Figure 12-2 Switched FC-AL disk subsystem Performance is enhanced since both DAs connect to the switched Fibre Channel disk subsystem back end as displayed in Figure 12-3 on page 259. Note that each DA port can concurrently send and receive data.
Figure 12-3 High availability and increased bandwidth connecting both DA to two logical loops

These two switched point-to-point loops to each drive, plus connecting both DAs to each switch, account for the following: There is no arbitration competition and interference between one drive and all the other drives, because there is no hardware in common for all the drives in the loop.
12.3.2 Fibre Channel device adapter The DS8000 still relies on eight DDMs to form a RAID-5 or a RAID-10 array. These DDMs are actually spread over two Fibre Channel loops and follow the successful approach to use AAL. With the virtualization approach and the concept of extents, the DAs are mapping the virtualization level over the disk subsystem back end. For more details on the disk subsystem virtualization refer to Chapter 5, “Virtualization concepts” on page 83.
With four ports per HA and up to 16 HAs in the smallest family member of the DS8000 series, the DS8100, you can configure up to 64 FICON channel ports. This still provides 16 FICON channel paths to each single device, which is beyond the zSeries Channel Subsystem limit of eight channel paths per device.
Figure 12-6 Standard pSeries POWER5 p570 2-way SMP processor complexes for DS8100-921

Figure 12-7 provides a less abstract view and outlines some details on the dual 2-way processor complex of a DS8100-921, its gates to host servers through HAs, and its connections to the disk storage back end through the DAs.
complex, server 1. This affinity is established at creation of an extent pool. For details see Chapter 10, “The DS Storage Manager - logical configuration” on page 189.

Each I/O enclosure contains six Fibre Channel adapters:
– Two DAs, which install in pairs
– Four HAs, which install as required

Each adapter contains four Fibre Channel ports. Although each HA can communicate with each server, there is some potential to optimize traffic on the RIO-G interconnect structure.
switch has two ports to connect to the next switch pair with 16 DDMs when vertically growing within a DS8000. As outlined before, this dual two-logical-loop approach allows for multiple concurrent I/O operations to individual DDMs or sets of DDMs and minimizes arbitration through the DDM/switch port mini loop communication.

12.3.5 Vertical growth and scalability
Figure 12-9 shows a simplified view of the basic DS8000 structure and how it accounts for scalability.
12.4.1 Workload characteristics
The answers to questions like “How many host connections do I need?” and “How much cache do I need?” always depend on the workload requirements (such as how many I/Os per second per server, I/Os per second per gigabyte of storage, and so forth).
Note: Database logging usually consists of sequences of synchronous sequential writes. Log archiving functions (copying an active log to an archived space) also tend to consist of simple sequential read and write sequences. You should consider isolating log files on separate arrays. All disks in the storage subsystem should have roughly the equivalent utilization. Any disk that is used more than the other disks will become a bottleneck to performance.
Figure 12-10 Spreading data across ranks (balanced implementation: LVM striping with one rank per extent pool, versus non-balanced implementation: LUNs allocated across ranks with more than one rank per extent pool)

Note: The recommendation is to use host striping wherever possible to distribute the I/O load across several ranks.
12.4.6 Determining the number of paths to a LUN
When configuring the IBM DS8000 for an open systems host, a decision must be made regarding the number of paths to a particular LUN, because the multipathing software allows (and manages) multiple paths to a LUN. There are two opposing factors to consider when deciding on the number of paths to a LUN:
– Increasing the number of paths increases availability of the data, protecting against outages.
Figure: HAs do not have DS8000 server affinity, while each DA (and therefore each extent pool) has an affinity to either server 0 or server 1.
Parallel Sysplex z/OS 1.
the I/O rate and highly sequential read operations for the MB/sec numbers. They also vary depending on the server type used. The 2.8 GB/sec sequential read figure—and even significantly more—is achievable with a properly configured DS8300. A properly configured DS8100 can reach read hit I/O rates in the six-digit range. The DS8300 more than doubles the figures of the DS8100.
It is not just the pure cache size that accounts for good performance figures. Economical use of cache, like 4 KB cache segments and smart, adaptive caching algorithms, is just as important to guarantee outstanding performance. This is implemented in the DS8000 series. Processor memory or cache can grow up to 256 GB in the DS8300 and up to 128 GB in the DS8100.
Another example, with four ESS F20s each with eight FICON channels, might collapse into about 20 FICON ports when changing to a connectivity speed of 2 Gbps on the target DS8100 or DS8300.

Disk array sizing considerations for z/OS environments
You can determine the number of ranks required not only based on the needed capacity, but also on the workload characteristics in terms of access density, read-to-write ratio, and hit rates.
Use Capacity Magic to find out about usable disk capacity. For information on this internal IBM tool, contact your IBM representative.

12.5.4 Configuration recommendations for z/OS
We discuss briefly how to group ranks into extent pools and what the implications are with different grouping approaches. Note the independence of LSSs from ranks. Because an LSS is congruent with a z/OS LCU, we need to understand the implications.
Figure 12-13 Extent pool affinity to processor complex, with one rank per extent pool
Figure 12-14 Extent pool affinity to processor complex with pooled ranks in two extent pools

Again what is obvious here is the affinity between all volumes residing in extent pool 0 to the left
Figure 12-15 Mix of extent pools

Create two general extent pools for all the average workload and the majority of the volumes and subdivide these pools evenly between both processor complexes or servers.
12.6 Summary
This high-performance processor complex configuration is the basis for a maximum number of host I/O operations per second. The DS8000 Model 921 dual 2-way complex can handle roughly the I/O rate that three to four ESS 800s deliver at maximum speed. This allows you to consolidate three to four ESS 800s into a DS8100. As we saw before, the DS8000 series scales very well and in a linear fashion.
Part 4 Implementation and management in the z/OS environment

In this part we discuss considerations for the DS8000 series when used in the z/OS environment. The topics include:
– z/OS software
– Data migration in the z/OS environment

© Copyright IBM Corp. 2005. All rights reserved.
13 Chapter 13. zSeries software enhancements

This chapter discusses z/OS, z/VM, z/VSE and Transaction Processing Facility (TPF) software enhancements that support the DS8000 series. The enhancements include:
– Scalability support
– Large volume support
– Hardware configuration definition (HCD) to recognize the DS8000 series
– Performance statistics
– Resource Measurement Facility (RMF)

© Copyright IBM Corp. 2005. All rights reserved.
13.1 Software enhancements for the DS8000
A number of enhancements have been introduced into the z/OS, z/VM, z/VSE, VSE/ESA and TPF operating systems to support the DS8000. The enhancements are not just to support the DS8000, but also provide additional benefits that are not specific to the DS8000.

13.2 z/OS enhancements
The DS8000 series simplifies system deployment by supporting major server platforms.
has the capability to scale up to 63.75K devices. With the current support, we may have CPU or spin lock contention, or exhaust storage below the 16M line at device failover, or both. Now, with z/OS 1.4 and higher and the DS8000 software support, IOS recovery has been improved by consolidating unit checks at the LSS level instead of at each disconnected device. This consolidation shortens the recovery time after I/O errors.
Today, control unit single-point-of-failure information is specified in a table and must be updated for each new control unit. Instead, we can use the Read Availability Mask (PSF/RSD) command to retrieve the information from the control unit. By doing this, there is no need to maintain a table for this information.

13.2.4 Initial Program Load (IPL) enhancements
During the IPL sequence the channel subsystem selects a channel path to read from the SYSRES device.
DS QD,9882,RDC,DCE IEE459I 11.36.
LISTDATA COUNT
The RAID RANK counters report will not be available on the DS8000. These counters are being replaced with the new RANK and Extent Pool statistics that will be in the RMF reports. Figure 13-3 shows the RAID RANK counters output that is available on the ESS Model 800.
Figure 13-4 shows the LISTDATA COUNTS report output for the DS8000. This report shows the Segment Pool number and the back-end information.

LISTDATA COUNTS VOLUME(IN9882) UNIT(3390) DEVICE
IDCAMS  SYSTEM SERVICES                                   TIME: 11:18:19
2107 STORAGE CONTROL SUBSYSTEM COUNTERS REPORT
VOLUME IN9882   DEVICE ID X'02'   SUBSYSTEM ID X'1011'
CHANNEL OPERATIONS
......SEARCH/READ.....   ..............WRITE...............
LISTDATA STATUS
Figure 13-5 displays the output from the LISTDATA STATUS report. The output is the same, except that 2107 is now displayed for the storage control.

LISTDATA STATUS VOLUME(SHE200) UNIT(3390)
IDCAMS  SYSTEM SERVICES                                   TIME: 12:57:00
2107 STORAGE CONTROL SUBSYSTEM STATUS REPORT
VOLUME SHE200   DEVICE ID X'00'   SUBSYSTEM ID X'3205'
................CAPACITY IN BYTES...............
SETCACHE
The DASD fast write attributes cannot be changed to OFF status on the DS8000. Figure 13-7 on page 289 displays the messages you will receive when the IDCAMS SETCACHE parameters of DEVICE, DFW, SUBSYSTEM, or NVS with OFF are specified.

SETCACHE DEVICE OFF FILE(FILEX)
IDC31562I THE DEVICE PARAMETER IS NOT AVAILABLE FOR THE SPECIFIED
IDC31562I SUBSYSTEM OR DEVICE
IDC3003I FUNCTION TERMINATED.
13.2.9 Migration considerations
A DS8000 will be supported as an IBM 2105 for z/OS systems without the DFSMS and z/OS SPE installed. This allows customers to roll the SPE to each system in a sysplex without having to take a sysplex-wide outage. An IPL will have to be taken to activate the DFSMS and z/OS portions of this support.

13.2.10 Coexistence considerations
Support for the DS8000 running in 2105 mode on systems without this SPE installed will be provided.
13.5 TPF enhancements
TPF is an IBM platform for high-volume, online transaction processing. It is used by industries demanding large transaction volumes, such as airlines and banks. The DS8000 will be supported on TPF 4.1 and higher.

Important: Always review the latest Preventive Service Planning (PSP) 2107DEVICE bucket for software updates. The PSP information can be found at:
http://www-1.ibm.com/servers/resourcelink/svc03100.nsf?OpenDatabase
14 Chapter 14. Data migration in zSeries environments This chapter describes several methods for migrating data from existing disk storage servers onto the DS8000 disk storage server images. This includes migrating data from the ESS 2105 as well as from other disk storage servers to the new DS8000 disk storage server images. The focus is on z/OS environments.
14.1 Define migration objectives in z/OS environments Data migration is an important activity that needs to be planned well to ensure the success of the DS8000 implementation. Because today’s business environment does not allow you to interrupt data processing services, it is crucial to make the data migration onto the new storage servers as smooth as possible.
“To avoid a single point of failure in a sysplex, IBM recommends that, for all couple data sets, you create an alternate couple data set on a different device, control unit, and channel from the primary.” You may interpret this to mean not just having separate LCUs, but rather separate physical control units, if possible. Besides careful planning, and depending on the configuration complexity, it might take weeks or months to complete the planning and to perform the actual migration.

14.1.
Note that some data set types cannot grow beyond 64K tracks. When coming from a 3390-3 and staying with the 50,085 tracks of a model 3, this 64K-track limit on extent allocation is not an issue. If you already work with larger volumes, you are familiar with these considerations, but it may come as a surprise if you do not have this experience.
see on a single 3390-3. Or, to put it differently, we may see on a single volume as many concurrent I/Os as we see on nine 3390-3 volumes. Despite the PAV support, it still might be necessary to balance disk storage activities across disk storage server images. With the DS8000 you will have the flexibility to define LSSs of exactly the size you desire rather than being constrained by the RAID rank topology.
14.1.4 Summary of data migration objectives
To summarize the objectives of data migration: it might be feasible not just to migrate the data from existing storage subsystems to the new storage server images, but also to consolidate the source storage subsystems onto one or a few target storage servers. A second migration layer might be to consolidate multiple source volumes onto larger target volumes, which is also called volume folding.
parameter. When the target volume is larger than the source volume it is usually necessary to adjust the VTOC size on the target volume with the ICKDSF REFORMAT REFVTOC command to make the entire volume size accessible to the system. DFSMSdss also provides full DUMP and full RESTORE commands. With the DUMP command an entire volume is copied to tape cartridges and can then be restored from tape via the RESTORE command to the new target volume.
Figure 14-2 Piper for z/OS environment configuration

Currently this server is a Multiprise 3000, which can connect through ESCON channels only. This excludes this approach for migrating data to the DS6000, which provides only FICON or Fibre Channel connectivity.
Most of these benefits also apply to migration efforts controlled by the customer when utilizing TDMF or FDRPAS in customer-managed systems. Piper for z/OS is an IGS service offering which relieves the customer of the actual migration process and requires customer involvement only in the planning and preparation phase. The actual migration is transparent to the customer’s application hosts and Piper even manages a concurrent switch-over to the target volumes.
This requires an adequate bandwidth for the connectivity between the disk storage servers and the system image which hosts the SDMs. Because XRC in migration mode stores the data through, it mainly requires channel bandwidth, and SDM tends to monopolize its channels. Therefore, the approach with dedicated channel resources is an advantage over a shared channel configuration and has almost no impact on the application I/Os.
To utilize the advantage of PPRC with concurrent data migration on a physical volume level from older ESS models like the ESS F20, an ESS 800 (or in less active configurations an ESS 750) might be used during the migration period to bridge from a PPRC ESCON link storage server to the new disk storage server which supports only PPRC over FCP links.
impact to the application write I/Os at the source storage subsystems from where we migrate the data. This assumes a local data migration and that the distance is within the supported Metro Mirror distance for PPRC over FCP links. You can switch the application I/Os any time from the old to the new disk configuration. This requires you to quiesce or shut down the application server and restart the application servers after terminating the PPRC configuration.
DEVICE DESCRIPTOR = 0A
ADDITIONAL DEVICE INFORMATION = 4A000035
ICK04030I DEVICE IS A PEER TO PEER REMOTE COPY VOLUME

                    QUERY REMOTE COPY - VOLUME
                                           (PRIMARY)            (SECONDARY)
DEVICE  LEVEL    STATE        PATH STATUS  SSID CCA SER #  LSS  SSID CCA SER #  LSS
------  -------  -----------  -----------  -------------------  -------------------
 6102   PRIMARY  PENDING XD   ACTIVE       6100  02 22665   01  8100  02 22673   01

   PATHS  SAID/DEST  STATUS  DESCRIPTION
   -----  ---------  ------  -----------
     1    00A4 0020    13    ESTABLISHED  Fibre Channel

PATH IF PENDING/SUSPEND: COUNT OF
are back to the TSO CQUERY command which does not need any additional JCL statements and does not care whether the volume is ONLINE or OFFLINE to the system. TSO provides a nicely formatted output, as the following examples display, which still might be directed to an output data set, so some REXX procedure could find the specific numbers.
Again all these approaches to utilize microcode-based mirroring capabilities require the right hardware as source and target disk servers. For completeness it is pointed out that Global Mirror is also an option to migrate data from an ESS 750 or ESS 800 to a DS8000. This might apply to certain cases at the receiving site which require consistent data at any time, although Global Copy is used for the actual data movement.
   LIDD(IN001,IN002,IN003) DELETE PURGE CATALOG SELECTMULTI(ANY) SPHERE WAIT(0,0) ADMIN OPT(3) CANCELERROR
/*
//

Example 14-6 depicts how to migrate all data sets from certain volumes. The keyword is LOGINDDNAME, or LIDD, which identifies the volumes from which the data is to be picked. There is no output volume specified, although it is also possible to distribute all data sets from the input or source volumes to a larger output volume or to more than one output volume.
Figure 14-7 SMS Storage Groups - migration source environment

To explain this approach, Figure 14-7 contains two SGs, SG1 and SG2, which are distributed over three storage controllers.
Example 14-7 Execute SMS modify commands through pre-defined job
//DISNEW  JOB ,' DISABLE NEW ',MSGCLASS=T,CLASS=B,
//        MSGLEVEL=(1,1),REGION=0M,USER=&SYSUID
/*JOBPARM S=VSL1
//        COMMAND 'V SMS,VOL(HHHHHH,*),D,N'
//        COMMAND 'V SMS,VOL(JJJJJJ,*),D,N'
//        COMMAND 'V SMS,VOL(KKKKKK,*),D,N'
//        COMMAND 'V SMS,VOL(LLLLLL,*),D,N'
//        COMMAND 'V SMS,VOL(MMMMMM,*),D,N'
//        COMMAND 'D SMS,SG(SG2),LISTVOL'
//* -------------------------------------------------------------- ***
//STEP1   EXEC PGM=IEFBR14
//

This SMS modify com
CDS Name . . . . . : SYS1.DFSMS.SCDS
Storage Group Name : XC
Storage Group Type : POOL
Select One of the           3
  1. Display
  2. Define
  3. Alter
  4.
Example 14-11 Indicate SMS volume status change for all connected system images
SMS VOLUME STATUS ALTER
Command ===>                                                   Page 1 of 2
SCDS Name . . . . . . : SYS1.DFSMS.SCDS
Storage Group Name .
To show how powerful meaningful naming conventions for VOLSERs can be when combined with the selection capabilities in ISMF, Example 14-13 shows how to change the status of 920 volumes at once. This example assumes a contiguous number range for the volume serial numbers.

Example 14-13 Indicate SMS volume status change for 920 volumes
STORAGE GROUP VOLUME SELECTION
Command ===>
CDS Name . . . . . : SYS1.DFSMS.SCDS
Storage Group Name : XC
Storage Group Type : POOL
Select One of the           3
  1. Display
  2.
//SYSPRINT DD DUMMY
//SYSUT1   DD DSN=WB.MIGRATE.CNTL(DSS#SG1),DISP=SHR
//SYSUT2   DD SYSOUT=(A,INTRDR)
//SYSIN    DD DUMMY
//* ------------------------- JOB END ---------------------------- ***
//

You might keep the job executing repeatedly through the second step, AGAIN, where the same job is read into the system again through the internal MVS reader. Eventually there remain a few data sets on the source volumes which are always open.
When a level is reached where no data moves any more because the remaining data sets are in use all the time, some down time has to be scheduled to perform the movement of the remaining data. This might require you to run DFSMSdss jobs from a system which has no active allocations on the volumes which need to be emptied. 14.5 z/VM and VSE/ESA data migration DFSMS/VM® provides a set of software utility and command functions which are suitable for data migration.
impact when the actual switch uses P/DAS, although it is quicker and easier to allow for a brief service interruption and quickly switch to the new disk storage server. Because Metro Mirror provides data consistency at any time, the switch-over to the new disk server is simple and does not require further efforts to ensure data consistency at the receiving site. It is feasible to use the GUI-based approach because migration is usually a one time effort.
Part 5 Implementation and management in the open systems environment

In this part we discuss considerations for the DS8000 series when used in an open systems environment. The topics include:
– Open systems support and software
– Data migration in open systems

© Copyright IBM Corp. 2005. All rights reserved.
15 Chapter 15. Open systems support and software

In this chapter we describe how the DS8000 fits into your open systems environment. In particular, we discuss:
– The extent of the open systems support
– Where to find detailed and accurate information
– Major changes from the ESS 2105
– Software provided by IBM with the DS8000
– IBM solutions and services that integrate DS8000 functionality

© Copyright IBM Corp. 2005. All rights reserved.
15.1 Open systems support
The scope of open systems support of the new DS8000 model is based on that of the ESS 2105, with some exceptions:
– No parallel SCSI attachment support
– Some new operating systems were added
– Some legacy operating systems and the corresponding servers were removed
– Some legacy HBAs were removed

New versions of operating systems, servers, file systems, host bus adapters, clustering products, SAN components, and application software are constantly announced in the market.
updated frequently. Therefore it is advisable to visit these resources regularly and check for updates.

The DS8000 Interoperability Matrix
The DS8000 Interoperability Matrix always provides the latest information about supported platforms, operating systems, HBAs, and SAN infrastructure solutions. It contains detailed specifications about models and versions. It also lists special support items, such as boot support, and exceptions. It can be found at:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html
QLogic maintains a page that lists all the HBAs, drivers, and firmware versions that are supported for attachment to IBM storage systems:
http://www.qlogic.com/support/oem_detail_all.asp?oemid=22

Emulex Corporation
The Emulex home page is:
http://www.emulex.com
They also have a page with content specific to IBM storage systems:
http://www.emulex.com/ts/docoem/framibm.htm

JNI / AMCC
AMCC took over the former JNI, but still markets FC HBAs under the JNI brand name.
Some legacy operating systems and operating system versions were dropped from the support matrix. These are versions that have been withdrawn from marketing, are no longer supported by their vendors, or are not seen as significant enough anymore to justify the testing effort necessary to support them. Major examples include:
– IBM AIX 4.x, OS/400 V5R1, Dynix/ptx
– Microsoft Windows NT®
– Sun Solaris 2.6, 7
– HP UX 10, 11, Tru64 4.x, OpenVMS 5.x
– Novell Netware 4.
– Novell Netware
– VMware 2.5.0
– SuSE SLES9

15.2 Subsystem Device Driver
To ensure maximum availability most customers choose to connect their open systems hosts through more than one Fibre Channel path to their storage systems. With an intelligent SAN layout this protects you from failures of FC HBAs, SAN components, and host ports in the storage subsystem. Some operating systems, however, can’t deal natively with multiple paths to a single disk; they see the same disk multiple times.
15.3 Other multipathing solutions
Some operating systems come with native multipathing software, for example:
– SUN StorEdge Traffic Manager for Sun Solaris
– HP PVLinks for HP UX
– IBM AIX native multipathing (MPIO) (see “IBM AIX” on page 347)
– IBM OS/400 V5R3 multipath support (see Appendix B, “Using DS8000 with iSeries” on page 373)

In addition there are third-party multipathing solutions, such as Veritas DMP, which is part of Veritas Volume Manager.
command will be passed directly to the S-HMC for immediate execution. The return code of the DS CLI program corresponds to the return code of the command it executed. This mode can be used for scripting.

Interactive mode: You start the DS CLI program on your host. It provides you with a shell environment that allows you to enter commands and send them to the S-HMC for immediate execution.
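As a rough illustration of these two modes, the following sketch shows a single-shot invocation followed by an interactive session. The HMC address, user ID, and password are placeholders, and the exact option and command names should be verified against the DS CLI documentation for your release.

# Single command mode: the command is sent to the S-HMC and dscli exits
/opt/ibm/dscli/dscli -hmc1 10.0.0.1 -user admin -passwd mypasswd lssi

# Interactive mode: start a dscli shell, then enter commands at the prompt
/opt/ibm/dscli/dscli -hmc1 10.0.0.1 -user admin -passwd mypasswd
dscli> lsarraysite
dscli> quit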
As a component of the IBM TotalStorage Productivity Center, Multiple Device Manager is designed to reduce the complexity of managing SAN storage devices by allowing administrators to configure, manage, and monitor storage from a single console. The devices managed are not restricted to IBM brand products. In fact, any device compliant with the Storage Network Industry Association (SNIA) Storage Management Initiative Specification (SMI-S) can be managed with the IBM TotalStorage Multiple Device Manager.
15.5.1 Device Manager
The Device Manager (DM) builds on the IBM Director technology. It uses the Service Location Protocol (SLP) to discover supported storage systems on the SAN. SLP enables the discovery and selection of generic services accessible through an IP network. The DM then uses managed objects to manage these devices. DM also provides a subset of configuration functions for the managed devices, primarily LUN allocation and assignment.
15.5.2 TPC for Disk
TPC for Disk, formerly known as MDM Performance Manager, provides the following functions:
– Collect performance data from devices
– Configure performance thresholds
– Monitor performance metrics across storage subsystems from a single console
– Receive timely alerts to enable event action based on customer policies
– View performance data from the performance manager database
– Enable storage optimization through identification of the best performing volumes
The Volume Performance Advisor is an automated tool to help the storage administrator pick the best possible placement of a new LUN to be allocated from a performance perspective. It uses the historical performance statistics collected from the managed devices and locates unused storage capacity on the SAN that exhibits the best estimated performance characteristics.
the DS8000 Storage Hardware Management Console for the execution of the commands. The commands allow you to:
– Create, modify, start, stop, and resume a Global Mirror session
– Manage failover and failback operations, including managing consistency
– Perform planned outages

To monitor the DS8000 volume status and the Global Mirror session status, you can use either the DS Storage Manager or the DS CLI.
16 Chapter 16. Data migration in the open systems environment

In this chapter we discuss important concepts for the migration of data to the new DS8000:
– Data migration considerations
– Data migration and consolidation
– Comparison of the different methods

© Copyright IBM Corp. 2005. All rights reserved.
16.1 Introduction The term data migration has a very diverse scope. We use it here solely to describe the process of moving data from one type of storage to another, or to be exact, from one type of storage to a DS8000. In many cases, this process is not only comprised of the mere copying of the data, but also includes some kind of consolidation.
We describe the most common methods in the next section. Be aware that, in a heterogeneous IT environment, you will most likely have to choose more than one method. Note: When discussing disruptiveness, we don't consider any interruptions that may be caused by adding the new DS8000 LUNs to the host and later by removing the old storage. They vary too much from operating system to operating system, even from version to version. However, they have to be taken into account, too.
Reasons against using these methods could include:
– Different methods are necessary for different data types and operating systems
– Strong involvement of the system administrator is necessary

Today the majority of data migration tasks are performed with one of the methods discussed in the following sections.
This method also requires the disruption of applications writing to the data for the complete process.

Online copy and synchronization with rsync
rsync is an open source tool that is available for all major open systems platforms, including Windows and Novell Netware. Its original purpose is the remote mirroring of file systems with as few network requirements as possible. Once the initial copy is done, it keeps the mirror up to date by copying only the changes.
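A minimal sketch of this approach, assuming the old storage is mounted at /olddata and the new DS8000 LUN at /newdata (both paths are placeholders), could look like this:

# Initial copy while the applications are still running
rsync -avH --delete /olddata/ /newdata/
# Repeat as often as needed; each pass transfers only the changes since the last run
rsync -avH --delete /olddata/ /newdata/
# At cut-over: stop the applications, run one final pass, then switch the mount points
rsync -avH --delete /olddata/ /newdata/

Because only the final pass requires the applications to be stopped, the downtime is limited to the time needed to copy the last set of changes.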
Mirror data for higher availability and migration
The LUNs provided by the DS8000 appear to the LVM as physical SCSI disks. Usually the process is to set up a mirror of the data from the old disks to the new LUNs, wait until it is synchronized, and split it at cut-over time. Some LVMs provide commands that automate this process.
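On AIX, for example, the sequence could look like the following sketch, where datavg is the existing volume group, hdisk2 is the old disk, and hdisk10 is the new DS8000 LUN (all names are placeholders):

extendvg datavg hdisk10      # add the DS8000 LUN to the volume group
mirrorvg datavg hdisk10      # create a second copy of all logical volumes on the new LUN
syncvg -v datavg             # synchronize the copies (can run while applications are active)
unmirrorvg datavg hdisk2     # at cut-over, remove the copies residing on the old disk
reducevg datavg hdisk2       # remove the old disk from the volume group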
Figure 16-5 Migration using backup and restore

The major disadvantage is again the disruptiveness. The applications that write to the data to be migrated must be stopped for the whole migration process. Backup and restore to and from tape usually takes longer than direct copy from disk to disk.
Metro Mirror and Global Copy
From a local data migration point of view both methods are on par with each other, with Global Copy having a smaller impact on the subsystem performance and Metro Mirror requiring almost no time for the final synchronization phase. It is advisable to use Global Copy instead of Metro Mirror if the source system is already at its performance limit even without remote mirroring. Figure 16-6 outlines the migration steps.
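When both the source and the target are DS CLI-managed systems, the copy relationship itself can be set up from the command line. The following is only a hedged sketch: the storage image IDs, WWNN, port pairs, LSS numbers, and volume ranges are placeholders, and the exact options should be checked against the Copy Services documentation for your DS CLI release.

# Create a remote mirror path from source LSS 10 to target LSS 20
mkpprcpath -dev IBM.2107-7512345 -remotedev IBM.2107-7598765 -remotewwnn 5005076303FFC123 -srclss 10 -tgtlss 20 I0010:I0020
# Establish Global Copy (extended distance) pairs for the volumes to be migrated
mkpprc -dev IBM.2107-7512345 -remotedev IBM.2107-7598765 -type gcp 1000-100F:2000-200F
# Monitor the pairs until the out-of-sync tracks approach zero
lspprc -dev IBM.2107-7512345 -l 1000-100F
# After quiescing the applications and completing the final synchronization, remove the pairs
rmpprc -dev IBM.2107-7512345 -remotedev IBM.2107-7598765 1000-100F:2000-200F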
16.2.3 IBM Piper migration Piper is a hardware and software solution to move data between disk systems while production is ongoing. It is used in conjunction with IBM migration services. Piper is available for mainframe and open systems environments. Here we discuss the open systems version only. For mainframe environments see Chapter 14, “Data migration in zSeries environments” on page 293.
16.2.4 Other migration applications
There are a number of applications available from other vendors that can assist in data migration. We don’t discuss them here in detail. Some examples include:
– Softek Data Replicator for Open
– NSI Double-Take
– XoSoft WANSync

There also are storage virtualization products which can be used for data migration in a similar manner to the Piper tool. They are installed on a server which forms a virtualization engine that resides in the data path.
A Appendix A. Open systems operating systems specifics

In this appendix, we describe the particular issues of some operating systems with respect to the attachment to a DS8000. The following subjects are covered:
– Planning considerations
– Common tools
– IBM AIX
– Linux on various platforms
– Microsoft Windows
– HP OpenVMS

© Copyright IBM Corp. 2005. All rights reserved.
General considerations
In this section we cover some topics that are not specific to a single operating system. This includes available documentation, some planning considerations, and common tools.

The DS8000 Host Systems Attachment Guide
The DS8000 Host Systems Attachment Guide, SC26-7628, provides instructions to prepare a host system for DS8000 attachment.
the data, even if this pool spans several ranks. If possible, the extents for one logical volume are taken from the same rank. To get higher throughput values than a single array can deliver, it is necessary to stripe the data across several arrays. This can only be achieved through striping on the host level. To achieve maximum granularity and control for data placement, you will have to create an extent pool for every single rank.
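As an illustration of host-level striping, the sketch below creates an AIX logical volume that is striped across four DS8000 LUNs, each taken from a different extent pool (and therefore a different rank). The volume group name, hdisk names, strip size, and number of logical partitions are placeholders only.

# Volume group containing one LUN from each of four ranks
mkvg -y ds8kvg hdisk4 hdisk5 hdisk6 hdisk7
# Logical volume striped across all four LUNs with a 64 KB strip size
mklv -y datalv -S 64K -u 4 ds8kvg 256 hdisk4 hdisk5 hdisk6 hdisk7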
The output reports the following:
– The %tm_act column indicates the percentage of the measured interval time that the device was busy.
– The Kbps column shows the average data rate, read and write data combined, of this device.
– The tps column shows the transactions per second. Note that an I/O transaction can have a variable transfer size. This field may also appear higher than would normally be expected for a single physical disk device.
Example: A-3 SAR Sample Output
# sar -u 2 5
AIX aixtest 3 4 001750154C00    2/5/03
17:58:15    %usr    %sys    %wio    %idle
17:58:17      43       9       1      46
17:58:19      35      17       3      45
17:58:21      36      22      20      23
17:58:23      21      17       0      63
17:58:25      85      12       3       0
Average       44      15       5      35

As a general rule of thumb, a server with over 40 percent waiting on I/O is spending too much time waiting for I/O. However, you also have to take the type of workload into account.
Other publications
Apart from the DS8000 Host Systems Attachment Guide, SC26-7628, there are two redbooks that cover pSeries storage attachment:
– Practical Guide for SAN with pSeries, SG24-6050, covers all aspects of connecting an AIX host to SAN-attached storage. However, it is not quite up-to-date; the last release was in 2002.
– Fault Tolerant Storage - Multipathing and Clustering Solutions for Open Systems for the IBM ESS, SG24-6295, focuses mainly on high availability and covers SDD and HACMP topics.
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U1.13-P1-I1/Q1

You can also print the WWPN of an HBA directly by running:
lscfg -vl fcs# | grep Network
The # stands for the instance of each FC HBA you want to query.

Managing multiple paths
It is a common and recommended practice to assign a DS8000 volume to the host system through more than one path, to ensure availability in case of a SAN component failure and to achieve higher I/O bandwidth.
Path#   Adapter/Hard Disk   State   Mode      Select   Errors
  0     fscsi2/hdisk17      OPEN    NORMAL         0        0
  1     fscsi2/hdisk19      OPEN    NORMAL     27134        0
  2     fscsi3/hdisk21      OPEN    NORMAL         0        0
  3     fscsi3/hdisk23      OPEN    NORMAL     27352        0

DEV#: 1  DEVICE NAME: vpath4  TYPE: 2107  POLICY: Optimized
SERIAL: 20522873
==========================================================================
Path#   Adapter/Hard Disk   State   Mode      Select   Errors
  0     fscsi2/hdisk18      CLOSE   NORMAL     25734        0
  1     fscsi2/hdisk20      CLOSE   NORMAL         0        0
  2     fscsi3/hdisk22      CLOSE   NORMAL     25500        0
  3     fscsi3/
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/admnconc/hotplug_mgmt.htm#mpioconcepts
The management of MPIO devices is described in the “Managing MPIO-Capable Devices” section of System Management Guide: Operating System and Devices for AIX 5L:
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/baseadmn/manage_mpio.htm

Restriction: A point worth considering when deciding between SDD and MPIO is that the IBM TotalStorage SAN Volume Controller does not support MPIO at this time.
Refer to the manpages of the MPIO commands for more information.

LVM configuration
In AIX, all storage is managed by the AIX Logical Volume Manager (LVM). It virtualizes physical disks to be able to dynamically create, delete, resize, and move logical volumes for application use. To AIX, our DS8000 logical volumes appear as physical SCSI disks. There are some considerations to take into account when configuring LVM.
Tip: If the number of async I/O (AIO) requests is high, then the recommendation is to increase maxservers to approximately the number of simultaneous I/Os there might be. In most cases, it is better to leave the minservers parameter to the default value since the AIO kernel extension will generate additional servers if needed.
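On AIX 5L the AIO parameters are attributes of the aio0 pseudo device and can be changed with chdev, as in the sketch below; the value 256 is only an illustration and should be sized to the expected number of simultaneous I/Os.

lsattr -El aio0                      # display the current AIO settings
chdev -l aio0 -a maxservers=256 -P   # -P applies the change at the next reboot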
The DS8000 requires the following i5 I/O adapters to attach directly to an i5 AIX partition:
– 0611 Direct Attach 2 Gigabit Fibre Channel PCI
– 0625 Direct Attach 2 Gigabit Fibre Channel PCI-X

It is also possible for the AIX partition to have its storage virtualized, whereby a partition running OS/400 hosts the AIX partition’s storage requirements.
------------------------------------------------------------------------
#MBs  #opns  #rds  #wrs  file          volume:inode
------------------------------------------------------------------------
 0.3      1    70     0  unix          :34096
 0.0      1     2     0  ksh.cat       :46237
 0.0      1     2     0  cmdtrace.cat  :45847
 0.0      1     2     0  hosts         :516
 0.0      7     2     0  SWservAt      :594
 0.0      7     2     0  SWservAt.
avg 13.97180  min 0.00004  max 57.54421  sdev 11.78066
time to next req(msec):  avg 89.470  min 0.003  max 949.025  sdev 174.947
throughput:    81.8 KB/sec
utilization:   0.87
...

Linux
Linux is an open source UNIX-like kernel, originally created by Linus Torvalds. The term “Linux” is often used to mean the whole operating system, GNU/Linux.
Existing reference material There is a lot of information available that helps you set up your Linux server to attach it to a DS8000 storage subsystem.
The zSeries connectivity support page lists all supported storage devices and SAN components that can be attached to a zSeries server. There is an extra section for FCP attachment: http://www.ibm.com/servers/eserver/zseries/connectivity/#fcp The whitepaper ESS Attachment to United Linux 1 (IA-32) is available at: http://www.ibm.com/support/docview.
Each SCSI device can have up to 15 partitions, which are represented by the special device files /dev/sda1, /dev/sda2, and so on. The mapping of partitions to special device files and major and minor numbers is shown in Table A-2.

Table A-2 Minor numbers, partitions and special device files
Major number   Minor number   Special device file   Partition
8              0              /dev/sda              all of 1st disk
8              1              /dev/sda1             1st partition of 1st disk
...
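If the distribution does not provide special device files beyond the first group of 16 disks, they can be created manually with mknod, following the numbering scheme of Table A-2. The sketch below creates the 17th disk and its first partition; block major number 65 is used here on the assumption that it covers the second group of 16 SCSI disks.

mknod /dev/sdq  b 65 0   # whole 17th SCSI disk
mknod /dev/sdq1 b 65 1   # first partition of the 17th disk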
After creating the device files you may have to change their owner, group, and file permission settings to be able to use them. Often, the easiest way to do this is by duplicating the settings of existing device files, as shown in Example A-11. Be aware that after this sequence of commands, all special device files for SCSI disks have the same permissions. If an application requires different settings for certain disks, you have to correct them afterwards.
SDD, which creates a persistent relationship between a DS8000 volume and a vpath device regardless of the /dev/sd.. devices

RedHat Enterprise Linux (RH-EL) multiple LUN support
RH-EL by default is not configured for multiple LUN support. It will only discover SCSI disks addressed as LUN 0. The DS8000 provides the volumes to the host with a fixed Fibre Channel address and varying LUN. Therefore RH-EL 3 will see only one DS8000 volume (LUN 0), even if more are assigned to it.
Example: A-12 Sample /etc/modules.conf
scsi_hostadapter aic7xxx
scsi_hostadapter1 aic7xxx
scsi_hostadapter2 qla2300
scsi_hostadapter3 qla2300
options scsi_mod max_scsi_luns=128

Adding FC disks dynamically
The commonly used way to discover newly attached DS8000 volumes is to unload and reload the Fibre Channel HBA driver. However, this action is disruptive to all applications that use Fibre Channel attached disks on this particular host.
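A less disruptive alternative on 2.4 and early 2.6 kernels is to announce the new device to the SCSI mid layer through the /proc interface; the host, channel, SCSI ID, and LUN values below are placeholders that must match the newly assigned DS8000 volume.

echo "scsi add-single-device 2 0 0 1" > /proc/scsi/scsi   # host 2, channel 0, ID 0, LUN 1
cat /proc/scsi/scsi                                       # verify that the new disk is listed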
Example A-14 shows the SCSI disk assignment after one more DS8000 volume is added.
– 2787 2 Gigabit Fibre Channel Disk Controller PCI-X

For more information on OS/400 support for DS8000, please see Appendix B, “Using DS8000 with iSeries” on page 373. More information on running Linux in an iSeries partition can be found in the iSeries Information Center at:
http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm
For running Linux in an i5 partition, check the i5 Information Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v1r2s/en_US/info/iphbi/iphbi.
Example: A-17 Sample /proc/scsi/qla2300/x
knox:~ # cat /proc/scsi/qla2300/2
QLogic PCI to Fibre Channel Host Adapter for ISP23xx:
        Firmware version:  3.01.18, Driver version 6.05.00b9
Entry address = c1e00060
HBA: QLA2312 , Serial# H28468
Request Queue = 0x21f8000, Response Queue = 0x21e0000
Request Queue count= 128, Response Queue count= 512
.
.
Microsoft Windows 2000/2003
Note: Because Windows NT is no longer supported by Microsoft (and DS8000 support is provided on RPQ only), we do not discuss Windows NT here.

DS8000 supports FC attachment to Microsoft Windows 2000/2003 servers. For details regarding operating system versions and HBA types, see the DS8000 Interoperability Matrix, available at:
http://www.ibm.com/servers/storage/disk/ds8000/interop.html
The support includes cluster service and acting as a boot device.
– With Windows 2003, MSCS uses target resets. See the Microsoft technical article Microsoft Windows Clustering: Storage Area Networks at:
http://www.microsoft.com/windowsserver2003/techinfo/overview/san.mspx

Windows Server 2003 allows the boot disk and the cluster server disks to be hosted on the same bus. However, you would need to use Storport miniport HBA drivers for this functionality to work.
For a detailed description of VDS, refer to the Microsoft Windows Server 2003 Virtual Disk Service Technical Reference at:
http://www.microsoft.com/Resources/Documentation/windowsserv/2003/all/techref/en-us/W2K3TR_vds_intro.asp
The DS8000 can act as a VDS hardware provider. The implementation is based on the DS Common Information Model (CIM) agent, a middleware application that provides a CIM-compliant interface.
Instead of writing a special OpenVMS driver, it has been decided to handle this in the DS8000 host adapter microcode. As a result, DS8000 FC ports cannot be shared between OpenVMS and non-OpenVMS hosts. Important: The DS8000 FC ports used by OpenVMS hosts must not be accessed by any other operating system, not even accidentally. The OpenVMS hosts have to be defined for access to these ports only, and it must be ensured that no foreign HBA (without definition as an OpenVMS host) is seen by these ports.
Every FC volume must have a UDID that is unique throughout the OpenVMS cluster that accesses the volume. The same UDID may be used in a different cluster or for a different stand-alone host. If the volume is planned for MSCP serving, then the UDID range is limited to 0–9999 (by operating system restrictions in the MSCP code).
causes subsequent read operations to fail, which is the signal to the shadow driver to execute a repair operation using data from another copy. However, there is no forced error indicator in the SCSI architecture, and the revector operation is nonatomic. As a substitute, the OpenVMS shadow driver exploits the SCSI commands READ LONG (READL) and WRITE LONG (WRITEL), optionally supported by some SCSI devices.
B Appendix B. Using DS8000 with iSeries

In this appendix, the following topics are discussed:
– Supported environment
– Logical volume sizes
– Protected versus unprotected volumes
– Multipath
– Adding units to OS/400 configuration
– Sizing guidelines
– Migration
– Linux and AIX support

© Copyright IBM Corp. 2005. All rights reserved.
Supported environment
Not all hardware and software combinations for OS/400 support the DS8000. This section describes the hardware and software pre-requisites for attaching the DS8000.

Hardware
The DS8000 is supported on all iSeries models which support Fibre Channel attachment for external storage. Fibre Channel was supported on all model 8xx onwards. AS/400 models 7xx and prior only supported SCSI attachment for external storage, so they cannot support the DS8000.
Unprotected   Protected   OS/400 device   Number of     Extents   Unusable      Usable
model type    model type  size (GB)       LBAs                    space (GiB)   space %
2107-A05      2107-A85    35.1            68,681,728    33        0.25          99.24
2107-A04      2107-A84    70.5            137,822,208   66        0.28          99.57
2107-A06      2107-A86    141.1           275,644,416   132       0.56          99.57
2107-A07      2107-A87    282.2           551,288,832   263       0.13          99.
Adding volumes to iSeries configuration Once the logical volumes have been created and assigned to the host, they will appear as non-configured units to OS/400. This may be some time after being created on the DS8000. At this stage, they are used in exactly the same way as non-configured internal units. There is nothing particular to external logical volumes as far as OS/400 is concerned.
4. When adding disk units to a configuration, you can add them as empty units by selecting Option 2, or you can choose to allow OS/400 to balance the data across all the disk units. Normally, we recommend balancing the data. Select Option 8, Add units to ASPs and balance data, as shown in Figure B-3.

Work with Disk Configuration
Select one of the following:
  1.  2.  3.  4.  5.  6.  7.  8.  9.
Confirm Add Units Add will take several minutes for each unit. The system will have the displayed protection after the unit(s) are added. Press Enter to confirm your choice for Add units. Press F9=Capacity Information to display the resulting capacity. Press F12=Cancel to return and change your choice.
2. Expand the iSeries to which you wish to add the logical volume and sign on to that server as shown in Figure B-7.
Figure B-7 iSeries Navigator Signon to iSeries window
3. Expand Configuration and Service, Hardware, and Disk Units as shown in Figure B-8 on page 379.
Figure B-8 iSeries Navigator Disk Units
4. You will be asked to sign on to SST as shown in Figure B-9. Enter your Service tools ID and password and press OK.
Figure B-9 SST Signon 5. Right-click Disk Pools and select New Disk Pool as shown in Figure B-10 on page 380. Figure B-10 Create a new disk pool 6. The New Disk Pool wizard appears as shown in Figure B-11. Click Next.
Figure B-11 New disk pool - welcome 7. On the New Disk Pool dialog shown in Figure B-12, select Primary from the pull-down for the Type of disk pool, give the new disk pool a name and leave Database to default to Generated by the system. Ensure the disk protection method matches the type of logical volume you are adding. If you leave it unchecked, you will see all available disks. Select OK to continue. Figure B-12 Defining a new disk pool 8.
Figure B-13 Confirm disk pool configuration 9. Now you need to add disks to the new disk pool. On the Add to disk pool screen, click the Add disks button as shown in Figure B-14 on page 382. Figure B-14 Add disks to Disk Pool 10.A list of non-configured units similar to that shown in Figure B-15 will appear. Highlight the disks you want to add to the disk pool and click Add.
Figure B-15 Choose the disks to add to the Disk Pool
11. A confirmation screen appears as shown in Figure B-16 on page 383. Click Next to continue.
Figure B-16 Confirm disks to be added to Disk Pool
12. A summary of the Disk Pool configuration similar to Figure B-17 appears. Click Finish to add the disks to the Disk Pool.
Figure B-17 New Disk Pool Summary 13.Take note of and respond to any message dialogs which appear. After taking action on any messages, the New Disk Pool Status panel shown in Figure B-18 on page 384 will appear showing progress. This step may take some time, depending on the number and size of the logical units being added. Figure B-18 New Disk Pool Status 14.When complete, click OK on the information panel shown in Figure B-19.
15. The new Disk Pool can be seen on iSeries Navigator Disk Pools in Figure B-20.
Figure B-20 New Disk Pool shown on iSeries Navigator
16. To see the logical volume, as shown in Figure B-21, expand Configuration and Service, Hardware, Disk Pools and click the disk pool you just created.
Figure B-21 New logical volume shown on iSeries Navigator
Multipath Multipath support was added for external disks in V5R3 of i5/OS (also known as OS/400 V5R3). Unlike other platforms which have a specific software component, such as Subsystem Device Driver (SDD), multipath is part of the base operating system. At V5R3, up to eight connections can be defined from multiple I/O adapters on an iSeries server to a single logical volume in the DS8000. Each connection for a multipath disk unit functions independently.
Figure B-22 Single points of failure (1. I/O frame, 2. bus, 3. IOP, 4. IOA, 5. cable, 6. port, 7. switch, 8. port, 9. ISL, 10. port, 11. switch, 12. port, 13. cable, 14. host adapter, 15. I/O drawer)

When implementing multipath, you should provide as much redundancy as possible. As a minimum, multipath requires two IOAs connecting the same logical volumes. Ideally, these should be on different buses and in different I/O racks in the iSeries. If a SAN is included, separate switches should also be used for each path.
2787 2 Gigabit Fibre Channel Disk Controller PCI-X Both can be used for multipath and there is no requirement for all paths to use the same type of adapter. Both adapters can address up to 32 logical volumes. This does not change with multipath support. When deciding how many I/O adapters to use, your first priority should be to consider performance throughput of the IOA since this limit may be reached before the maximum number of logical units.
3. Option 8, Add units to ASPs and balance data. You will then be presented with a panel similar to Figure B-25 on page 389. The values in the Resource Name column show DDxxx for single path volumes and DMPxxx for those which have more than one path. In this example, the 2107-A85 logical volume with serial number 75-1118707 is available through more than one path and reports in as DMP135. 4. Specify the ASP to which you wish to add the multipath volumes.
Adding volumes to iSeries using iSeries Navigator The iSeries Navigator GUI can be used to add volumes to the System, User or Independent ASPs. In this example, we are adding a multipath logical volume to a private (non-switchable) IASP. The same principles apply when adding multipath volumes to the System or User ASPs. Follow the steps outlined in “Adding volumes to an Independent Auxiliary Storage Pool” on page 378.
Figure B-28 New Disk Pool shown on iSeries Navigator
To see the logical volume, as shown in Figure B-29, expand Configuration and Service, Hardware, Disk Pools and click the disk pool you just created.
Figure B-29 New logical volume shown on iSeries Navigator
Managing multipath volumes using iSeries Navigator All units are initially created with a prefix of DD. As soon as the system detects that there is more than one path to a specific logical unit, it will automatically assign a unique resource name with a prefix of DMP for both the initial path and any additional paths. When using the standard disk panels in iSeries Navigator, only a single (the initial) path is shown. The following steps show how to see the additional paths.
Figure B-31 Selecting properties for a multipath logical unit
You will then see the General Properties tab for the selected unit, as in Figure B-32 on page 394. The first path is shown as Device 1 in the box labelled Storage.
Figure B-32 Multipath logical unit properties To see the other paths to this unit, click the Connections tab, as shown in Figure B-33 on page 395, where you can see the other seven connections for this logical unit.
Figure B-33 Multipath connections Multipath rules for multiple iSeries systems or partitions When you use multipath disk units, you must consider the implications of moving IOPs and multipath connections between nodes. You must not split multipath connections between nodes, either by moving IOPs between logical partitions or by switching expansion units between systems.
Disk unit connections might be missing for a variety of reasons, but especially if one of the preceding rules has been violated. If a connection for a multipath disk unit in any disk pool is found to be missing during an IPL or vary on, a message is sent to the QSYSOPR message queue. If a connection is missing, and you confirm that the connection has been removed, you can update Hardware Service Manager (HSM) to remove that resource.
Performance Tools Reports Workload description Workload statistics Workload characteristics Other requirements: HA, DR etc.
some benefit in larger cache sizes. However, in general, with large iSeries main memory sizes, OS/400 Expert Cache can reduce the benefit of external cache. Number of iSeries Fibre Channel adapters The most important factor to take into consideration when calculating the number of Fibre Channel adapters in the iSeries is the throughput capacity of the adapter and IOP combination.
Recommended number of ranks As a general guideline, you may consider 1500 disk operations/sec for an average RAID rank. When considering the number of ranks, take into account the maximum disk operations per second per rank as shown in Table B-3. These are measured at 100% DDM Utilization with no cache benefit and with the average I/O being 4KB. Larger transfer sizes will reduce the number of operations per second.
Connecting via SAN switches When connecting DS8000 systems to iSeries via switches, you should plan that I/O traffic from multiple iSeries adapters can go through one port on a DS8000 and zone the switches accordingly. DS8000 host adapters can be shared between iSeries and other platforms. Based on available measurements and experiences with the ESS 800 we recommend you should plan no more than four iSeries I/O adapters to one host port in the DS8000.
Note: It is important to ensure that both the Metro Mirror or Global Copy source and target copies are not assigned to the iSeries at the same time as this is an invalid configuration. Careful planning and implementation is required to ensure this does not happen, otherwise unpredictable results may occur.
down time associated with removing a disk unit. This will keep new allocations away from the marked units.

Start ASP Balance (STRASPBAL)
Type choices, press Enter.
Balance type . . . . . . . . . . > *ENDALC       *CAPACITY, *USAGE, *HSM...
Storage unit . . . . . . . . . .                 + for more values
F3=Exit   F4=Prompt   F5=Refresh   F12=Cancel   F24=More keys
Copy Services for iSeries
Due to OS/400 having single-level storage, it is not possible to copy some disk units without copying them all, unless specific steps are taken.

Attention: You should not assume that Copy Services with iSeries works the same as with other open systems.

FlashCopy
When FlashCopy was first available for use with OS/400, it was necessary to copy the entire storage space, including the Load Source Unit (LSU).
storage) being copied, only the application resides in an IASP and in the event of a disaster, the target copy is attached to the DR server. Additional considerations must be taken into account such as maintaining user profiles on both systems, but this is no different from using other availability functions such as switched disk between two local iSeries Servers on a High Speed Link (HSL) and Cross Site Mirroring (XSM) to a remote iSeries.
For more information on running AIX in an i5 partition, refer to the i5 Information Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v1r2s/en_US/index.htm?info/iphat/iphatlparkickoff.htm

Note: AIX will not run in a partition on 8xx and earlier iSeries systems.

Linux on IBM iSeries
Since OS/400 V5R1, it has been possible to run Linux in an iSeries partition. On iSeries models 270 and 8xx, the primary partition must run OS/400 V5R1 or higher and Linux is run in a secondary partition.
Appendix C. Service and support offerings
This appendix provides information about the service offerings which are currently available for the new DS6000 and DS8000 series.
IBM Web sites for service offerings
IBM Global Services (IGS) and the IBM Systems Group offer comprehensive assistance, including planning and design as well as implementation and migration support services. For more information on all of the following service offerings, contact your IBM representative or visit the following Web sites.
The IBM Global Services Web site can be found at:
http://www.ibm.com/services/us/index.wss/home
The IBM Systems Group Web site can be found at:
http://www.ibm.
The IBM Piper hardware-assisted migration offering for the zSeries environment is described in this redbook in “Data migration with Piper for z/OS” on page 299. Additional information about this offering is available at:
http://www.ibm.com/servers/storage/services/featured/hardware_assist.html
For more information about IBM Migration Services for eServer zSeries data, visit:
http://www.ibm.com/services/us/index.
Unplanned outages (disaster recovery, disaster recovery testing)
eRCMF simplifies the disaster recovery implementation and concept. Once eRCMF is configured in the customer environment, it monitors the PPRC states of all specified LUNs/volumes. Visit the following Web site for the latest information:
http://www.ibm.com/services/us/index.
Figure 16-9 Example of the Supported Product List (SPL) from the IBM Support Line
Related publications
The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.
IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks” on page 415. Note that some of the documents referenced here may be available in softcopy only.
Online resources
These Web sites and URLs are also relevant as further information sources:
IBM Disk Storage Feature Activation (DSFA) Web site:
http://www.ibm.com/storage/dsfa
PSP information:
http://www-1.ibm.com/servers/resourcelink/svc03100.nsf?OpenDatabase
Documentation for the DS8000:
http://www.ibm.com/servers/storage/support/disk/2107.html
Supported servers for the DS8000:
http://www.storage.ibm.com/hardsoft/products/DS8000/supserver.
How to get IBM Redbooks
You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications, and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks
Help from IBM
IBM Support and downloads:
ibm.com/support
IBM Global Services:
ibm.
Back cover
The IBM TotalStorage DS8000 Series: Concepts and Architecture
This IBM Redbook describes the IBM TotalStorage DS8000 series of storage servers, its architecture, logical design, hardware design and components, advanced functions, performance features, and specific characteristics.