Front cover

The IBM TotalStorage DS6000 Series: Concepts and Architecture

Enterprise-class storage functions in a compact and modular design
On demand scalability and multi-platform connectivity
Enhanced configuration flexibility with virtualization

Cathy Warrick, Christine O'Sullivan, Olivier Alluis, Stu S Preacher, Werner Bauer, Torsten Rothenwaldt, Heinz Blaschek, Tetsuroh Sano, Andre Fourie, Jing Nan Tang, Juan Antonio Garay, Anthony Vandewerdt, Torsten Knobloch, Alexander Warmuth, Donald C Laing, Roland Wolf
International Technical Support Organization The IBM TotalStorage DS6000 Series: Concepts and Architecture March 2005 SG24-6471-00
Note: Before using this information and the product it supports, read the information in “Notices” on page xiii. First Edition (March 2005) This edition applies to the DS6000 series per the October 12, 2004 announcement. Please note that an early version of DS6000 microcode was used for the screen captures and command output, so some details may vary from the currently available microcode. Note: This book contains detailed information about the architecture of IBM's DS6000 product family.
Contents

Notices
Trademarks
Summary of changes
March 2005, First Edition
2.5.1 Technical details
2.5.2 Device adapter ports
2.5.3 Host adapter ports
2.5.4 SFPs
5.1 DS6000 highlights
5.1.1 DS6800 Model 1750-511
5.1.2 DS6000 Model 1750-EX1
5.2 Designed to scale for capacity
8.3.3 Remote Mirror and Copy functions (RMC)
8.3.4 Parallel Access Volumes (PAV)
8.3.5 Server attachment license
8.3.6 Ordering license functions
8.3.7 Disk storage feature activation
10.3 Supported environments
10.4 Installation methods
10.5 Command flow
10.6 User security
12.2.1 Scalability support
12.2.2 Large Volume Support (LVS)
12.2.3 Read availability mask support
12.2.4 Initial Program Load (IPL) enhancements
12.2.5 DS6000 definition to host software
Chapter 15. Data migration in the open systems environment
15.1 Introduction
15.2 Comparison of migration methods
15.2.1 Host operating system-based migration
15.2.2 Subsystem-based data migration
Multipath
Avoiding single points of failure
Configuring multipath
Adding multipath volumes to iSeries using 5250 interface
Notices This information was developed for products and services offered in the U.S.A. IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
Trademarks The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both: iSeries™ i5/OS™ pSeries® xSeries® z/OS® z/VM® z/VSE™ zSeries® AIX 5L™ AIX® AS/400® BladeCenter® CICS® DB2® DFSMS/MVS® DFSMS/VM® DFSMSdss™ DFSMShsm™ DFSORT™ Enterprise Storage Server® ESCON® FlashCopy® FICON® Geographically Dispersed Parallel Sysplex™ GDPS® HACMP™ IBM® IMS™ Lotus Notes® Lotus® Multiprise® MVS™ Netfinity® Notes® OS/390® OS/400® Parallel Sysplex® Po
Summary of changes This section describes the technical changes made in this edition of the book and in previous editions. This edition may also include minor corrections and editorial changes that are not identified. Summary of Changes for SG24-6471-00 for DS6000 Series: Concepts and Architecture as created or updated on December 13, 2005. March 2005, First Edition This revision reflects the addition, deletion, or modification of new and changed information described below.
Preface This IBM Redbook describes the IBM TotalStorage® DS6000 storage server series, its architecture, its logical design, hardware design and components, advanced functions, performance features, and specific characteristics. The information contained in this redbook is useful for those who need a general understanding of this powerful new disk subsystem, as well as for those looking for a more detailed understanding of how the DS6000 series is designed and operates.
DWDM technology, CWDM technology). His areas of interest include storage remote copy on long-distance connectivity for business continuance and disaster recovery solutions. Werner Bauer is a certified IT specialist in Germany. He has 25 years of experience in storage software and hardware, as well as S/390®. He holds a degree in Economics from the University of Heidelberg.
for tuning and optimizing storage environments. She has written several papers about ESS Copy Services and disaster recovery solutions in an Oracle/pSeries environment. Stu Preacher has worked for IBM for over 30 years, starting as a Computer Operator before becoming a Systems Engineer. Much of his time has been spent in the midrange area, working on System/34, System/38™, AS/400® and iSeries™.
Figure 0-1 Front row - Cathy, Torsten R, Torsten K, Andre, Toni, Werner, Tetsuroh. Back row - Roland, Olivier, Anthony, Tang, Christine, Alex, Stu, Heinz, Chuck. We want to thank all the members of John Amann’s team at the Washington Systems Center in Gaithersburg, MD for hosting us. Craig Gordon and Rosemary McCutchen were especially helpful in getting us access to beta code and hardware.
Dari Durnas IBM Tampa Linda Benhase, Jerry Boyle, Helen Burton, John Elliott, Kenneth Hallam, Lloyd Johnson, Carl Jones, Arik Kol, Rob Kubo, Lee La Frese, Charles Lynn, Dave Mora, Bonnie Pulver, Nicki Rich, Rick Ripberger, Gail Spear, Jim Springer, Teresa Swingler, Tony Vecchiarelli, John Walkovich, Steve West, Glenn Wightwick, Allen Wright, Bryan Wright IBM Tucson Nick Clayton IBM United Kingdom Steve Chase IBM Waltham Rob Jackard IBM Wayne Many thanks to the graphics editor, Emma Jacobs, and the editor, A
Part 1. Introduction

In this part we introduce the IBM TotalStorage DS6000 series and its key features. The topics covered include:
- Product overview
- Positioning
- Performance
Chapter 1. Introducing the IBM TotalStorage DS6000 series

This chapter provides an overview of the features, functions, and benefits of the IBM TotalStorage DS6000 series of storage servers. The topics covered include:
- Overview of the DS6000 series and its benefits
- Positioning the DS6000 series within the whole family of IBM Disk Storage products
- Performance of the DS6000 series
1.1 The DS6000 series, a member of the TotalStorage DS Family

IBM has a wide range of product offerings that are based on open standards and share a common set of tools, interfaces, and innovative features. The IBM TotalStorage DS Family and its new member, the DS6000 series (see Figure 1-1), give you the freedom to choose the right combination of solutions for your current needs and the flexibility to help your infrastructure evolve as your needs change.
Storage Manager for Data Retention, are designed to help you automatically preserve critical data, while preventing deletion of that data before its scheduled expiration. 1.2 IBM TotalStorage DS6000 series unique benefits The IBM TotalStorage DS6000 series is a Fibre Channel based storage system that supports a wide range of IBM and non-IBM server platforms and operating environments. This includes open systems, zSeries, and iSeries servers.
1.2.1 Hardware overview

The DS6000 series consists of the DS6800, Model 1750-511, which has dual Fibre Channel RAID controllers with up to 16 disk drives in the enclosure (see Figure 1-1 on page 4). Capacity can be increased by adding up to 7 DS6000 expansion enclosures, Model 1750-EX1, each with up to 16 disk drives, as shown in Figure 1-3.
The disk drives

The DS6800 controller unit can be equipped with up to 16 internal FC-AL disk drive modules, offering up to 4.8 TB of physical storage capacity in only 3U (5.25”) of standard 19” rack space.

Dense packaging

Calibrated Vectored Cooling technology used in xSeries® and BladeCenter to achieve dense space saving packaging is also used in the DS6800. The DS6800 weighs only 49.6 kg (109 lbs.) with 16 drives.
DS6000 expansion enclosure (Model 1750-EX1)

The size and the front appearance of the DS6000 expansion enclosure (1750-EX1) are the same as those of the DS6800 controller enclosure. At the front it can hold up to 16 disk drives. Aside from the drives, the DS6000 expansion enclosure contains two Fibre Channel switches to connect to the drives and two power supplies with integrated fans. Up to 7 DS6000 expansion enclosures can be added to a DS6800 controller enclosure. The DS6800 supports two dual redundant switched loops.
from any location that has network access to the DS management console using a Web browser. You have the following options to use the DS Storage Manager: Simulated (offline) configuration This application allows the user to create or modify logical configurations when disconnected from the network. After creating the configuration, you can save it and then apply it to a new or un-configured storage unit at a later time.
1.2.5 Business continuance functions

Because data and storage capacity are growing faster year by year, most customers can no longer afford to stop their systems to back up terabytes of data; it simply takes too long. Therefore, IBM has developed fast replication techniques that can provide a point-in-time copy of the customer's data in a few seconds or even less. This function is called FlashCopy on the DS6000 series, as well as on the ESS models and DS8000 series.
Incremental FlashCopy

Incremental FlashCopy provides the capability to refresh a LUN or volume involved in a FlashCopy relationship. When a subsequent FlashCopy is initiated, only the data required to bring the target current to the source's newly established point-in-time is copied. This reduces the load on the back end; the disk drives are less busy and can handle more production I/Os.
Global Copy This is a non-synchronous long distance copy option for data migration and backup. Global Copy was previously called PPRC-XD on the ESS. It is an asynchronous copy of LUNs or zSeries CKD volumes. An I/O is signaled complete to the server as soon as the data is in cache and mirrored to the other controller cache. The data is then sent to the remote storage system. Global Copy allows for copying data to far away remote sites.
large zSeries resiliency requirements. The DS6000 series systems can only be used as a target system in z/OS Global Mirror operations. 1.2.6 Resiliency The DS6000 series has built in resiliency features that are not generally found in small storage devices. The DS6000 series is designed and implemented with component redundancy to help reduce and avoid many potential single points of failure.
Light Path Diagnostics and controls are available for easy failure determination, component identification, and repair if a failure does occur. The DS6000 series can also be remotely configured and maintained when it is installed in a remote location. The DS6800 consists of only five types of customer replaceable units (CRU). Light Path indicators will tell you when you can replace a failing unit without having to shut down your whole environment.
On an ESS there was a predefined association of arrays to Logical Subsystems. This caused some inconveniences, particularly for zSeries customers. Since in zSeries one works with relatively small volume sizes, the available address range for an LSS often was not sufficient to address the whole capacity available in the arrays associated with the LSS.
attachments, along with the flexibility to easily partition the DS6000 series storage capacity among the attached environments, makes the DS6000 series system a very attractive product in dynamic, changing environments. In the midrange market it competes with many other products. The DS6000 series overlaps in price and performance with the DS4500 of the IBM DS4000 series of storage products. But the DS6000 series offers enterprise capabilities not found in other midrange offerings.
high cache hit rate, your cache hit rate on the DS6800 will drop because of the smaller cache. z/OS benefits from a large cache, so for transaction-oriented workloads with high read cache hit ratios, careful planning is required.

DS6000 series compared to DS8000 series

You can think of the DS6000 series as the small brother or sister of the DS8000 series. All Copy Services (with the exception of z/OS Global Mirror) are available on both systems.
1.3.4 Use with other virtualization products

IBM TotalStorage SAN Volume Controller is designed to increase the flexibility of your storage infrastructure by introducing a new layer between the hosts and the storage systems. The SAN Volume Controller can enable a tiered storage environment to increase flexibility in storage management. The SAN Volume Controller combines the capacity from multiple disk storage systems into a single storage pool, which can be managed from a central point.
Sequential prefetching in Adaptive Replacement Cache (SARC) places data in cache based not only on server access patterns, but also on frequency of data utilization.

1.4.3 IBM multipathing software

IBM Multi-path Subsystem Device Driver (SDD) provides load balancing and enhanced data availability capability in configurations with more than one I/O path between the host server and the DS6800. The data path from the host to the RAID controller is pre-determined by the LUN.
Part 2. Architecture

In this part we describe various aspects of the DS6000 series architecture. These include:
- Hardware components
- RAS - reliability, availability, and serviceability
- Virtualization concepts
- Overview of the models
- Copy Services
Chapter 2. Components

This chapter details the hardware components of the DS6000. Here you can read about the DS6000 hardware platform and its components:
- Server enclosure
- Expansion enclosure
- Controller architecture
- Disk subsystem
- Server enclosure RAID controller card
- Expansion enclosure SBOD controller card
- Front panel
- Rear panel
- Power subsystem
- System service card
- Storage Manager console
- Cables
2.1 Server enclosure

The entire DS6800, including disks, controllers, and power supplies, is contained in a single 3U chassis which is called a server enclosure. If additional capacity is needed, it can be added by using a DS6000 expansion enclosure.

Figure 2-1 DS6800 front view (front display panel and disk drive modules)

The front view of the DS6800 server enclosure is shown in Figure 2-1. On the left is the front display panel that provides status indicators. You can also see the disk drive modules or DDMs.
2.2 Expansion enclosure The DS6000 expansion enclosure is used to add capacity to an existing DS6800 server enclosure. From the front view, it is effectively identical to the server enclosure (so it is not pictured). The rear view is shown in Figure 2-3. You can see the left and right power supplies, the rear display panel, and the upper and lower SBOD (Switched Bunch Of Disks) controllers. The power supplies and rear display panel used in the expansion enclosure are identical to the server enclosure.
the data is written to volatile memory on one controller and persistent memory on the other controller. The DS6800 then reports to the host that the write is complete before it has actually been written to disk. This provides much faster write performance. Persistent memory is also called NVS or non-volatile storage.
If you can view Figure 2-4 on page 26 in color, you can use the colors as indicators of how the DS6000 hardware is shared between the controllers (in black and white, the dark color is green and the light color is yellow). On the left side is the green controller. The green controller records its write data and caches its read data in its volatile memory area (in green). For fast-write data it has a persistent memory area on the right controller.
is not feasible in real-life systems), SARC uses prefetching for sequential workloads. Sequential access patterns naturally arise in video-on-demand, database scans, copy, backup, and recovery. The goal of sequential prefetching is to detect sequential access and effectively pre-load the cache with data so as to minimize cache misses. For prefetching, the cache management uses tracks.
cache space and delivers greater throughput and faster response times for a given cache size. Additionally, the algorithm modifies dynamically not only the sizes of the two lists, but also the rate at which the sizes are adapted. In a steady state, pages are evicted from the cache at the rate of cache misses. A larger (respectively, a smaller) rate of misses effects a faster (respectively, a slower) rate of adaptation.
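To make the idea of sequential detection and prefetch staging more concrete, the following Python sketch shows a simplified, hypothetical version of the logic described above. It is not IBM's SARC implementation; the trigger threshold, prefetch depth, and track numbering are invented purely for illustration.

```python
# Simplified illustration of sequential detection and prefetch.
# Not the actual SARC algorithm: thresholds and sizes are invented.

class SequentialPrefetcher:
    def __init__(self, trigger=3, prefetch_depth=8):
        self.trigger = trigger                 # consecutive sequential accesses needed
        self.prefetch_depth = prefetch_depth   # tracks staged ahead once triggered
        self.last_track = None
        self.run_length = 0
        self.cache = set()                     # tracks currently in cache

    def access(self, track):
        hit = track in self.cache
        self.cache.add(track)
        # Detect a sequential run: each access is the successor of the last one
        if self.last_track is not None and track == self.last_track + 1:
            self.run_length += 1
        else:
            self.run_length = 0
        self.last_track = track
        # Once a run is detected, stage the next tracks before they are requested
        if self.run_length >= self.trigger:
            for t in range(track + 1, track + 1 + self.prefetch_depth):
                self.cache.add(t)              # models asynchronous staging from disk
        return hit

p = SequentialPrefetcher()
hits = [p.access(t) for t in range(100, 140)]
print(f"cache hits after warm-up: {sum(hits)} of {len(hits)}")
```

Once the sequential run is recognized, almost every subsequent access in the scan is served from cache, which is the effect the prefetching in SARC is designed to achieve.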
Figure 2-7 Industry standard FC-AL disk enclosure

The main problems with standard FC-AL access to DDMs are:
- The full loop is required to participate in data transfer.
- Full discovery of the loop via LIP (loop initialization protocol) is required before any data transfer.
- Loop stability can be affected by DDM failures.
- In the event of a disk failure, it can be difficult to identify the cause of a loop breakage, leading to complex problem determination.
Figure 2-8 Disk enclosure (the Controller 0 and Controller 1 device adapters attach to the disks through two Fibre Channel switches)

When a connection is made between the device adapter and a disk, the connection is a switched connection that uses arbitrated loop protocol. This means that a mini-loop is created between the device adapter and the disk. Figure 2-9 depicts four simultaneous and independent connections, one from each device adapter port.
Figure 2-10 Disk enclosure cabling: the server enclosure's RAID controllers connect through internal midplane connections and through the DISK CONTRL and DISK EXP ports to the expansion enclosures, forming two switched loops (loop 0 and loop 1) with a maximum of four enclosures per loop, up to 16 DDMs per enclosure, and cables running between the enclosures.
spare. So at least two spares are created per loop, which will serve up to four enclosures, depending on the disk intermix.

2.5 Server enclosure RAID controller card

The RAID controller cards are the heart and soul of the system. Each card is the equivalent of a cluster node in an ESS. IBM has leveraged its extensive development of the ESS host adapter and device adapter function to create a total repackaging.
in Figure 2-10 on page 32 (with loop 0 going upwards and loop 1 going in the downwards direction). You add one expansion enclosure to each loop until both loops are populated with four enclosures each (remembering the server enclosure represents the first enclosure on the first loop). Note that while we use the term disk loops, and the disks themselves are FC-AL disks, each disk is actually attached to two separate Fibre Channel switches.
Figure 2-12 SFP hot-pluggable fibre port with LC connector fiber cable

Ethernet and serial ports

Each controller card has a 10/100 copper Ethernet port to attach to a customer-supplied LAN. Both controllers must be attached to the same LAN and have connectivity to the customer-supplied PC that has the DS Storage Manager software installed on it. This port has both a status and an activity light. In addition, there is a serial port provided for each controller.
Figure 2-13 DS6000 expansion enclosure SBOD controller card

Indicators

On the right-hand side, contained in an orange box, each SBOD controller card has two status indicators located below a chip symbol. The upper indicator is green and indicates that the SBOD controller card is powered on. The lower indicator is amber and indicates that this SBOD controller requires service.

Cabling

Examples of how the expansion enclosures are cabled are shown in Figure 2-14 and Figure 2-15 on page 37.
these cables are pictured in orange and green (which appear darker if viewed in black and white). In each case cables run from the disk contrl ports to the in ports of the SBOD card. A second expansion enclosure has been added by running cables from the out ports on the first expansion enclosure to the in ports on the second expansion enclosure.
Table 2-1 summarizes the purpose of each indicator.

Table 2-1 DS6000 front panel indicators
- System Power (green), lightning bolt symbol: If this indicator is on solid, then DC power is present and the system is powered on. If it is blinking, then AC power is present but the DS6000 is not powered on. If this indicator is off, then AC power is not present.
- System Identify (blue), lighthouse symbol: This indicator is normally off. It can be made to blink by pressing the lightpath identify button.
Figure 2-17 DS6000 rear panel (power control switch; system identify, system power, system alert, data cache on battery, lightpath, and system information indicators; Remind and Identify switches; fault in external enclosure and fault on front CRU indicators; rack identify connector; enclosure identifier)

Table 2-2 DS6000 rear panel push buttons
- Power Control Switch (white): This button can be seen on the left-hand side of the rear panel. You press it once to begin the power on or power off sequence.
2.9 Power subsystem The power subsystem of the DS6800 consists of two redundant power supplies and two battery backup units (BBUs). DS6000 expansion enclosures contain power supplies but not BBUs. The power supplies convert input AC power to 3.3V, 5V, and 12V DC power. The battery units provide DC power, but only to the controller card memory cache in the event of a total loss of all AC power input.
- DC Power (green), DC symbol: If this indicator is on solid, then the power supply is producing correct DC power. If it is blinking, then the DS6000 is not powered on. If it is off, then either the enclosure is powered off or the power supply is faulty.
- Fault detected (yellow), exclamation mark symbol: If this indicator is on solid, then the DS6000 has identified this power supply as being faulty and it requires replacement.
- Battery charging (green), battery symbol: If this indicator is on solid, then the battery backup unit is fully charged. If this indicator is blinking, then the battery is charging. As the battery approaches full charge, the blink speed will slow. If the indicator is off, then the battery is not operational.
- Fault detected (yellow), exclamation mark symbol: If this indicator is on solid, then a fault has been detected and this battery requires service.
2.13 Summary

This chapter has described the various components that make up a DS6000. For additional information, there is documentation available on the Web at:
http://www-1.ibm.com/servers/storage/support/disk/index.html
Chapter 3. RAS

This chapter describes the RAS (reliability, availability and serviceability) characteristics of the DS6000 series. Specific topics covered are:
- Controller RAS
- Host connection availability
- Disk subsystem RAS
- Power subsystem RAS
- Light path guidance strategy
- Microcode updates
3.1 Controller RAS The DS6800 design is built upon IBM’s highly redundant storage architecture. It has the benefit of more than five years of ESS 2105 development. The DS6800, therefore, employs similar methodology to the ESS to provide data integrity when performing fast write operations and controller failover. 3.1.1 Failover and failback To understand the process of controller failover and failback, we have to understand the logical construction of the DS6800.
Figure 3-1 DS6800 normal data flow (controller 0 holds the cache memory for even LSSs and the NVS for odd LSSs; controller 1 holds the cache memory for odd LSSs and the NVS for even LSSs)

Figure 3-1 illustrates how the cache memory of controller 0 is used for all logical volumes that are members of the even LSSs. Likewise, the cache memory of controller 1 supports all logical volumes that are members of odd LSSs.
Figure 3-2 Controller failover (after failover, controller 1 holds the cache memory and NVS for both even and odd LSSs)

This entire process is known as a failover. After failover, controller 1 now owns all the LSSs, which means all reads and writes will be serviced by controller 1. The NVS inside controller 1 is now used for both odd and even LSSs.
input power, the DS6800 controller cards would detect that they were now running on batteries and immediately shut down. The BBUs are not sufficient to keep the disks spinning so there is nowhere to put the modified data. All that the BBUs will do is preserve all data in memory while input power is not available. When power becomes available again, the DS6800 controllers begin the bootup process, but leave the NVS portion of controller memory untouched.
Figure 3-3 A host with a single path to the DS6800

For best reliability and performance, it is recommended that each attached host has two connections, one to each controller as depicted in Figure 3-4. This allows it to maintain connection to the DS6800 through both controller failure and HBA or HA (host adapter) failure.
A logic or power failure in a SAN switch can interrupt communication between hosts and the DS6800. We recommend that more than one SAN switch be provided to ensure continued availability. For example, four of the eight fibre ports in a DS6800 could be configured to go through each of two directors. The complete failure of either director leaves half of the paths still operating.
A physical FICON path is established when the DS6800 port sees light on the FICON fiber (for example, a cable is plugged in to a DS6800 host adapter, or a processor, or the DS6800 is powered on, or a path is configured online by OS/390). At this time, logical paths are established through the FICON port between the host and some or all of the LCUs in the DS6800, controlled by the HCD definition for that host. This happens for each physical path between a zSeries CPU and the DS6800.
array site (where the S stands for spare). A four disk array also effectively uses 1 disk for parity, so it is referred to as a 3+P array. In a DS6000, a RAID-5 array built on two array sites will contain either seven disks or eight disks, again depending on whether the array sites chosen had pre-allocated spares. A seven disk array effectively uses one disk for parity, so it is referred to as a 6+P array.
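As a rough worked example of the array-width arithmetic above, the following Python sketch computes the approximate usable (data) capacity of the RAID-5 array formats named in the text. It is an illustration only: it assumes nominal drive sizes and ignores formatting and metadata overhead, and the 146 GB drive size is just an example value.

```python
# Approximate usable capacity of DS6000 RAID-5 array formats (illustrative only).

def raid5_usable_gb(disks_in_array, drive_gb):
    """In RAID-5, one disk's worth of capacity is consumed by parity."""
    return (disks_in_array - 1) * drive_gb

drive_gb = 146  # for example, 146 GB DDMs

formats = [
    ("3+P (four-disk array)", 4),
    ("6+P (seven-disk array; one DDM of the two array sites was taken as a spare)", 7),
    ("7+P (eight-disk array; no spare taken)", 8),
]

for name, disks in formats:
    print(f"{name}: ~{raid5_usable_gb(disks, drive_gb)} GB usable")
```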
If two array sites are used to make a RAID-10 array and the array sites contain spares, then six DDMs are used to make two RAID-0 arrays which are mirrored. If spares do not exist on the array sites then eight DDMs are used to make two RAID-0 arrays which are mirrored. Drive failure When a disk drive module (DDM) fails in a RAID-10 array, the controller starts an operation to reconstruct the data from the failed drive onto one of the spare drives.
DDM, then approximately half of the 146 GB DDM would be wasted since that space is not needed. The problem here is that the failed 73 GB DDM will be replaced with a new 73 GB DDM. So the DS6000 microcode will most likely migrate the data on the 146 GB DDM onto the recently replaced 73 GB DDM. When this process completes, the 73 GB DDM will rejoin the array and the 146 GB will become the spare again. Another example would be if we fail a 10k RPM DDM onto a 15k RPM DDM.
paths) since two paths to the expansion controller would be available for the remaining controller.

Figure 3-5 DS6000 switched disk connections (each RAID controller's device adapter chipset drives a loop 0 engine and a loop 1 engine, each attached to a Fibre Channel switch through the enclosure midplane)
Important: If you install the DS6000 so that both power supplies are attached to the same power strip, or where two power strips are used but they connect to the same circuit breaker or the same switch-board, then the DS6000 will not be well protected from external power failures. This is a very common cause of unplanned outages. Redundant cooling The DS6000 gets its cooling from the two fan assemblies in each power supply.
Figure 3-6 Failed power supply

If a power supply failure is indicated, the user could then follow this procedure:
1. Review the online component removal instructions. Figure 3-7 on page 59 shows an example of the screen the user may see. On this screen, users are given the ability to do things like:
   a. View an animation of the removal and replacement procedures.
   b. View an informational screen to determine what effect this repair procedure will have upon the DS6000.
   c.
Figure 3-7 Power supply replacement via the GUI

2. Upon arrival of the replacement supply, the user physically removes the faulty power supply and then installs the replacement power supply.
3. Finally, the user checks the component view to review system health after the repair. An example of this is shown in Figure 3-8. In this example we can see that all the components displayed are normal.

Figure 3-8 Power supply replaced

All parts are very easy to remove and replace.
3.5.3 System indicators The DS6000 uses several simple indicators to allow a user to quickly determine the health of the DS6000. These indicators were previewed in Chapter 2, “Components” on page 23. System Identify Indicator (blue light) In addition to the light path diagnostics discussed here, the DS6000 implements a method for the person servicing the system to identify all of the enclosures associated with a given system. This is a blue LED visible on the front and rear of an enclosure.
additional guidance on the CRU replacement procedure is required. This includes a situation in which it is unclear which CRU has failed. This will prevent an incorrect maintenance procedure from taking place. After the defective CRU has been replaced, the CRU fault indicator will turn off. Pressing the Remind button will have no effect on the state of the CRU Endpoint Indicator. 3.5.4 Parts installation and repairs The DS6000 has been designed for ease of maintenance.
3.6 Microcode updates The DS6000 contains several discrete redundant components. Most of these components have firmware that can be updated. This includes the controllers, device adapters, host adapters, and network adapters. Each DS6800 controller also has microcode that can be updated. All of these code releases come as a single package installed all at once.
progress using the DS Management Console GUI. Clearly a multipathing driver (such as SDD) is required for this process to be concurrent. There is also the alternative to load code non-concurrently. This means that both controllers are unavailable for a short period of time. This method can be performed in a smaller window of time. 3.7 Summary This chapter has described the RAS characteristics of the DS6000.
Chapter 4. Virtualization concepts

This chapter describes the virtualization concepts for the DS6000 and the abstraction layers for disk virtualization. The topics covered are:
- Array sites
- Arrays
- Ranks
- Extent pools
- Logical volumes
- Logical storage subsystems
- Address groups
- Volume groups
- Host attachments
4.1 Virtualization definition In our fast changing world, where you have to react quickly to changing business conditions, your infrastructure must allow for on-demand changes. Virtualization is key to an on-demand infrastructure. However, when talking about virtualization many vendors are talking about different things. Our definition of virtualization is the abstraction process going from the physical disk drives to a logical volume that the hosts and servers see as if it were a physical disk. 4.
Figure 4-1 Physical layer as the base for virtualization (a storage enclosure pair attached through switches to the device adapters of server 0 and server 1, on switched loop 1 and switched loop 2, with inter-server communication between the two servers)

When you compare this with the ESS design, where there was a real loop and having an 8-pack close to a device adapter was an advantage, this is no longer relevant for the DS6000.
Figure 4-2 Array sites

Array sites are the building blocks used to define arrays.

4.2.2 Arrays

Arrays are created from one or two array sites. Forming an array means defining it for a specific RAID type. The supported RAID types are RAID-5 and RAID-10 (see 3.3.1, “RAID-5 overview” on page 52 and 3.3.2, “RAID-10 overview” on page 53). For each array site or for a group of two array sites you can select a RAID type.
Figure 4-3 Creation of an array (one or two array sites are formed into a RAID array, with data, parity, and spare roles distributed across the member disks)

So, an array is formed using one or two array sites, and while the array could be accessed by each adapter of the device adapter pair, it is managed by one device adapter.
and a Model 1 has 1113 cylinders, which is about 0.94 GB. The extent size of a CKD rank therefore was chosen to be one 3390 Model 1, or 1113 cylinders. One extent is the minimum physical allocation unit when a LUN or CKD volume is created, as we discuss later. It is still possible to define a CKD volume with a capacity that is an integral multiple of one cylinder or a fixed block LUN with a capacity that is an integral multiple of 128 logical blocks (64K bytes).
The DS Storage Manager GUI guides the user to use the same RAID types in an extent pool. As such, when an extent pool is defined, it must be assigned with the following attributes: – Server affinity – Extent type – RAID type The minimum number of extent pools is one; however, you would normally want at least two, one assigned to server 0 and the other one assigned to server 1 so that both servers are active.
4.2.5 Logical volumes

A logical volume is composed of a set of extents from one extent pool. On a DS6000 up to 8192 (8K) volumes can be created (8K CKD volumes, or 8K FB volumes, or a mix of both types, such as 4K CKD plus 4K FB).

Fixed block LUNs

A logical volume composed of fixed block extents is called a LUN. A fixed block LUN is composed of one or more 1 GB (2^30 bytes) extents from one FB extent pool. A LUN cannot span multiple extent pools, but a LUN can have extents from different ranks within the same extent pool.
Figure 4-6 Allocation of a CKD logical volume (a 3226 cylinder volume is allocated from three 1113 cylinder extents of extent pool CKD0, spanning ranks x and y, with 113 cylinders of the last extent left unused)

Figure 4-6 shows how a logical volume is allocated with a CKD volume as an example.
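The extent arithmetic behind this example is easy to reproduce. The following Python sketch is an illustration only (not DS6000 microcode): it computes how many extents a requested volume needs and how much of the last extent remains unused, for CKD extents of 1113 cylinders and for the FB example shown in Figure 4-7.

```python
import math

# Illustrative extent-allocation arithmetic (not DS6000 microcode).
CKD_EXTENT_CYLS = 1113      # one CKD extent = one 3390 Model 1 = 1113 cylinders
FB_EXTENT_BYTES = 2**30     # one FB extent = 1 GB (2^30 bytes)

def extents_needed(requested, extent_size):
    """Return (number of extents allocated, unused space in the last extent)."""
    n = math.ceil(requested / extent_size)
    return n, n * extent_size - requested

# CKD example from Figure 4-6: a volume of 3226 cylinders
n, waste = extents_needed(3226, CKD_EXTENT_CYLS)
print(f"CKD: {n} extents, {waste} cylinders unused")             # 3 extents, 113 cylinders

# FB example (as in Figure 4-7): a 2.9 GB LUN
n, waste = extents_needed(int(2.9 * FB_EXTENT_BYTES), FB_EXTENT_BYTES)
print(f"FB: {n} extents, about {waste / 2**20:.0f} MiB unused")  # 3 extents, roughly 100 MB
```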
Figure 4-7 Creation of an FB LUN (a 2.9 GB LUN is allocated from three 1 GB extents of extent pool FBprod, spanning ranks a and b, with about 100 MB of the last extent left unused)

iSeries LUNs

iSeries LUNs are also composed of fixed block 1 GB extents. There are, however, some special aspects with iSeries LUNs. LUNs created on a DS6000 are always RAID protected.
algorithm exists, the user may want to consider putting one rank per extent pool to control the allocation of logical volumes across ranks to improve performance. This construction method of using fixed extents to form a logical volume in the DS6000 allows flexibility in the management of the logical volumes. We can now delete LUNs and reuse the extents of that LUN to create another LUN, maybe of a different size.
created and determines the LSS that it is associated with. The 256 possible logical volumes associated with an LSS are mapped to the 256 possible device addresses on an LCU (logical volume X'abcd' maps to device address X'cd' on LCU X'ab'). When creating CKD logical volumes and assigning their logical volume numbers, users should consider whether parallel access volumes are required on the LCU and reserve some of the addresses on the LCU for alias addresses.
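The mapping between a logical volume number and its LCU and device address can be expressed directly. The following Python sketch is illustrative only; the helper name and the sample volume ID are invented, but the X'abcd' decomposition follows the rule described above.

```python
# Decompose a CKD logical volume number X'abcd' as described above:
# X'ab' is the LCU (LSS) number and X'cd' is the device address within that LCU.
# Helper name and sample value are illustrative only.

def decompose_volume_id(volume_id: str):
    """volume_id is a 4-digit hexadecimal string, for example '1E2F'."""
    value = int(volume_id, 16)
    lcu = (value >> 8) & 0xFF        # high byte: LCU / LSS number
    device = value & 0xFF            # low byte: device address on that LCU
    return lcu, device

lcu, device = decompose_volume_id("1E2F")
print(f"volume X'1E2F' -> device X'{device:02X}' on LCU X'{lcu:02X}'")
```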
4.2.7 Address groups Address groups are created automatically when the first LSS associated with the address group is created and deleted automatically when the last LSS in the address group is deleted. LSSs are either CKD LSSs or FB LSSs. All devices in an LSS must be either CKD or FB. This restriction goes even further. LSSs are grouped into address groups of 16 LSSs. LSSs are numbered X'ab', where a is the address group and b denotes an LSS within the address group.
Host attachment HBAs are identified to the DS6000 in a host attachment construct that specifies the HBA's World Wide Port Names (WWPNs). A set of host ports can be associated through a port group attribute that allows a set of HBAs to be managed collectively. This port group is referred to as host attachment within the GUI. A given host attachment can be associated with only one volume group. Each host attachment can be associated with a volume group to define which LUNs that HBA is allowed to access.
Figure 4-10 Host attachments and volume groups (WWPN-1 through WWPN-8 are grouped into the host attachments AIXprod1, AIXprod2, Test, and Prog, which are assigned to the volume groups DB2-1, DB2-2, DB2-test, and docs)

Figure 4-10 shows the relationships between host attachments and volume groups. Host AIXprod1 has two HBAs that are grouped together in one host attachment, and both are granted access to volume group DB2-1.
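To illustrate this access-control relationship in data-structure terms, here is a small Python sketch. It is purely illustrative and not a DS6000 interface: the attachment and volume group names come from Figure 4-10, while the exact WWPN groupings and the volume IDs are invented for the example.

```python
# Illustrative model of host attachments and volume groups (not a DS6000 API).

volume_groups = {
    "DB2-1":    ["1000", "1001", "1002"],
    "DB2-2":    ["1100", "1101"],
    "DB2-test": ["1200"],
    "docs":     ["1300", "1301"],
}

# Each host attachment groups a set of HBA WWPNs and maps to exactly one volume group.
host_attachments = {
    "AIXprod1": {"wwpns": ["WWPN-1", "WWPN-2"], "volume_group": "DB2-1"},
    "AIXprod2": {"wwpns": ["WWPN-3", "WWPN-4"], "volume_group": "DB2-2"},
    "Test":     {"wwpns": ["WWPN-5", "WWPN-6", "WWPN-7"], "volume_group": "DB2-test"},
    "Prog":     {"wwpns": ["WWPN-8"], "volume_group": "docs"},
}

def volumes_for_wwpn(wwpn):
    """Return the host attachment and the volumes a given HBA is allowed to access."""
    for name, att in host_attachments.items():
        if wwpn in att["wwpns"]:
            return name, volume_groups[att["volume_group"]]
    return None, []

print(volumes_for_wwpn("WWPN-2"))   # ('AIXprod1', ['1000', '1001', '1002'])
```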
Figure 4-11 Virtualization hierarchy (from array sites to a RAID array, a rank of 1 GB FB extents, an extent pool on server 0, a logical volume, an LSS within an FB address group, and finally a volume group and host attachment)

4.2.10 Placement of data

As explained in the previous chapters, there are several options on how to create logical volumes.
Figure 4-12 Optimal distribution of data (a host LVM volume striped across extent pools FB-0a and FB-0b on server 0 and FB-1a and FB-1b on server 1, using both loops of the DA pair and LSSs 00 and 01)

4.3 Benefits of virtualization

The DS6000 physical and logical architecture defines new standards for enterprise storage virtualization. The main benefits of the virtualization layers are:
- Flexible LSS definition allows maximization/optimization of the number of devices per LSS.
– Dynamically add/remove volumes Virtualization reduces storage management requirements.
Chapter 5. IBM TotalStorage DS6000 model overview

This chapter provides an overview of the IBM TotalStorage DS6000 storage server, which is from here on referred to as the DS6000. While the DS6000 is physically small, it is a highly scalable, high-performing storage server. Topics covered in this chapter are:
- DS6000 highlights
- DS6800 Model 1750-511
- DS6000 Model 1750-EX1
- Design to scale for capacity
5.1 DS6000 highlights

The DS6000 is a member of the DS product family that offers high reliability and enterprise class performance for mid-range storage solutions with the DS6800 model 1750-511. It is built upon 2 Gbps fibre technology and offers:
- RAID protected storage
- Advanced functionality
- Extensive scalability
- Increased addressing capabilities
- The ability to connect to all relevant host server platforms
Front-end connectivity with two to eight Fibre Channel host ports which auto negotiate to either 2 Gbps or 1 Gbps link speeds. Each port, long-wave or short-wave, can be either configured for: – FCP to connect to open system hosts or PPRC FCP links, or both – FICON host connectivity The DS6800 storage system can connect to a broad range of servers through its intermix of FCP and FICON front-end I/O adapters.
The DS6800 Model 1750-EX1 is also a self-contained unit of 3 Electronic Industries Alliance (EIA) units (3U), as is the 1750-511, and it can also be mounted in a standard 19 inch rack.

Figure 5-3 DS6800 Model 1750-EX1 rear view

Controller model 1750-511 and expansion model 1750-EX1 have the same front appearance. Figure 5-3 displays the rear view of the expansion enclosure, which is a bit different compared to the rear view of the 1750-511 model. Figure 5-4 shows a 1750-511 model with two expansion 1750-EX1 models.
The DS6800 server enclosure can have from 8 up to 16 DDMs and can connect 7 expansion enclosures. Each expansion enclosure also can have 16 DDMs. Therefore, in total a DS6800 storage unit can have 16 + 16 x 7 = 128 DDMs. You can select from four types of DDMs:
- 73 GB 15k RPM
- 146 GB 10k RPM
- 146 GB 15k RPM
- 300 GB 10k RPM
Therefore, a DS6800 can have from 584 GB (73 GB x 8 DDMs) up to 38.4 TB (300 GB x 128 DDMs). Table 5-1 describes the capacity of the DS6800 with expansion enclosures.
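The capacity figures above follow directly from the enclosure and drive counts. The following Python sketch reproduces the arithmetic; it is illustrative only and uses the nominal drive capacities quoted in the text, with no allowance for RAID or formatting overhead.

```python
# Reproduce the DS6800 physical capacity arithmetic from the text (illustrative only).

DDMS_PER_ENCLOSURE = 16
MAX_EXPANSION_ENCLOSURES = 7

max_ddms = DDMS_PER_ENCLOSURE * (1 + MAX_EXPANSION_ENCLOSURES)   # 16 + 16 x 7 = 128

min_capacity_gb = 73 * 8                 # minimum configuration: eight 73 GB DDMs
max_capacity_tb = 300 * max_ddms / 1000  # fully populated with 300 GB DDMs

print(f"maximum DDMs: {max_ddms}")                   # 128
print(f"minimum capacity: {min_capacity_gb} GB")     # 584 GB
print(f"maximum capacity: {max_capacity_tb} TB")     # 38.4 TB
```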
Cabling of the server enclosure and expansion enclosures: the RAID controllers' disk contrl and disk exp ports connect the server enclosure and the expansion enclosures into two switched loops (loop 0 and loop 1), with a maximum of four enclosures per loop, up to 16 DDMs per enclosure, and cables running between the enclosures.
Chapter 6. Copy Services

In this chapter, we describe the architecture and functions of Copy Services for the DS6000. Copy Services is a collection of functions that provide disaster recovery, data migration, and data duplication functions. Copy Services run on the DS6000 server enclosure and they support open systems and zSeries environments.
6.1 Introduction to Copy Services Copy Services is a collection of functions that provides disaster recovery, data migration, and data duplication functions. With the copy services functions, for example, you can create backup data with little or no disruption to your application, and you can back up your application data to a remote site for disaster recovery. Copy Services run on the DS6000 server enclosure and support open systems and zSeries environments.
Figure 6-1 FlashCopy concepts (when the FlashCopy command is issued at time T0, the copy is immediately available; reads and writes to both the source and the target are possible, and when the copy is complete the relationship between source and target ends)

When a FlashCopy operation is invoked, the process of establishing the FlashCopy pair and creating the necessary control bitmaps takes only a few seconds to complete.
The background copy may have a slight impact on your application because the physical copy needs some storage resources, but the impact is minimal because host I/O takes priority over the background copy. And if you want, you can issue FlashCopy with the no background copy option.

No background copy option

If you invoke FlashCopy with the no background copy option, the FlashCopy relationship is established without initiating a background copy. Therefore, you can minimize the impact of the background copy.
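The role of the control bitmap can be illustrated with a small copy-on-write model. The following Python sketch is a simplification and not the DS6000 implementation: a bitmap records which tracks have already been copied, target reads are redirected to the source for uncopied tracks, and a source write forces the old track contents to the target first.

```python
# Simplified copy-on-write model of a FlashCopy relationship (illustrative only).

class FlashCopyPair:
    def __init__(self, source):
        self.source = dict(source)          # track number -> data
        self.target = {}                    # physically copied tracks only
        self.copied = set()                 # the "control bitmap"

    def read_target(self, track):
        # If the track has not been copied yet, satisfy the read from the source.
        return self.target[track] if track in self.copied else self.source[track]

    def write_source(self, track, data):
        # Preserve the point-in-time image before overwriting the source track.
        if track not in self.copied:
            self.target[track] = self.source[track]
            self.copied.add(track)
        self.source[track] = data

    def background_copy_step(self, track):
        # An optional background task copies untouched tracks over time.
        if track not in self.copied:
            self.target[track] = self.source[track]
            self.copied.add(track)

pair = FlashCopyPair({0: "A", 1: "B"})
pair.write_source(0, "A'")                           # source changes after the FlashCopy
print(pair.read_target(0), pair.read_target(1))      # prints: A B  (T0 image preserved)
```

With the no background copy option, only the copy-on-write path is exercised; the background task simply never runs.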
Figure 6-2 Incremental FlashCopy (the initial FlashCopy relationship is established with the change recording and persistent copy options and a control bitmap is created for each volume; when an Incremental FlashCopy is started, tracks changed on the target are overwritten by the corresponding tracks from the source and tracks changed on the source are copied to the target; a reverse operation, in which the target updates the source, is also possible)

In the Incremental FlashCopy operations:
1.
Figure 6-3 Data Set FlashCopy (volume level FlashCopy compared with data set level FlashCopy)

Multiple Relationship FlashCopy

Multiple Relationship FlashCopy allows a source to have FlashCopy relationships with multiple targets simultaneously. A source volume or extent can be FlashCopied to up to 12 target volumes or target extents, as illustrated in Figure 6-4.
Consistency Group FlashCopy Consistency Group FlashCopy allows you to freeze (temporarily queue) I/O activity to a LUN or volume. Consistency Group FlashCopy helps you to create a consistent point-in-time copy across multiple LUNs or volumes, and even across multiple storage units.
A more detailed discussion of the concept of data consistency and how to manage the Consistency Group operation is in 6.2.5, “What is Consistency Group?” on page 105. Important: Consistency Group FlashCopy can create host-based consistent copies; they are not application-based consistent copies. The copies have power-fail or crash level consistency.
Persistent FlashCopy Persistent FlashCopy allows the FlashCopy relationship to remain even after the copy operation completes. You must explicitly delete the relationship. Inband commands over remote mirror link In a remote mirror environment, commands to manage FlashCopy at the remote site can be issued from the local or intermediate site and transmitted over the remote mirror Fibre Channel links. This eliminates the need for a network connection to the remote site solely for the management of FlashCopy.
Figure 6-7 Metro Mirror (1: server write; 2: write to secondary; 3: write hit to secondary; 4: write acknowledge to the server)

Global Copy (PPRC-XD)

Global Copy copies data non-synchronously and over longer distances than is possible with Metro Mirror. When operating in Global Copy mode, the source volume sends a periodic, incremental copy of updated tracks to the target volume, instead of sending a constant stream of updates.
Figure 6-8 Global Copy (1: server write; 2: write acknowledge; the write is sent to the secondary non-synchronously)

Global Mirror (Asynchronous PPRC)

Global Mirror provides a long-distance remote copy feature across two sites using asynchronous technology. This solution is based on the existing Global Copy and FlashCopy. With Global Mirror, the data that the host writes to the server enclosure at the local site is asynchronously shadowed to the server enclosure at the remote site.
Efficient synchronization of the local and remote sites with support for failover and failback modes, helping to reduce the time that is required to switch back to the local site after a planned or unplanned outage.

Figure 6-9 Global Mirror (1: server write to the A volume; 2: write acknowledge; the data is sent to the B volume at the remote site non-synchronously, and B is FlashCopied automatically to the C volume as part of the automatic cycle in an active session)

How Global Mirror works

We explain how Global Mirror works in Figure 6-10 on page 101.
Figure 6-10 Global Mirror - how it works (the A volume at the local site is the PPRC primary; the B volume at the remote site is the PPRC secondary and FlashCopy source; the C volume is the FlashCopy target)

The automatic cycle in an active Global Mirror session is:
1. Create a Consistency Group of volumes at the local site.
2. Send the increment of consistent data to the remote site.
3. FlashCopy at the remote site.
4. Resume Global Copy (copy out-of-sync data only).
5.
Note: When you implement Global Mirror, you set up the FlashCopy between the B and C volumes with the No Background Copy and Start Change Recording options. This means that before the latest data is updated to the B volumes, the last consistent data in the B volume is moved to the C volumes. Therefore, at some point in time, part of the consistent data is in the B volume and the other part of the consistent data is in the C volume.
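The following Python sketch is a highly simplified model of the automatic cycle listed above (consistency group formation, drain to the B volume, FlashCopy to the C volume, resume). It is illustrative only and not how the microcode is structured; the point it demonstrates is that C always holds the last complete consistent image even while B is being updated.

```python
# Simplified model of the Global Mirror automatic cycle (illustrative only).

class GlobalMirrorSession:
    def __init__(self):
        self.a = {}          # local volume (Global Copy primary)
        self.b = {}          # remote volume (Global Copy secondary, FlashCopy source)
        self.c = {}          # remote FlashCopy target: last consistent image
        self.out_of_sync = set()

    def host_write(self, track, data):
        self.a[track] = data
        self.out_of_sync.add(track)      # will be drained by Global Copy

    def cycle(self):
        # 1. Form a consistency group at the local site (freeze the set of updates).
        group = {t: self.a[t] for t in self.out_of_sync}
        self.out_of_sync.clear()
        # 2. Drain the consistency group to the B volumes.
        self.b.update(group)
        # 3. FlashCopy B to C, preserving the new consistent image.
        self.c = dict(self.b)
        # 4. Global Copy resumes; new host writes accumulate for the next cycle.

s = GlobalMirrorSession()
s.host_write(1, "v1"); s.host_write(2, "v1")
s.cycle()
s.host_write(1, "v2")                    # not yet consistent at the remote site
print(s.c)                               # {1: 'v1', 2: 'v1'} - last consistent image
```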
Figure 6-11 z/OS Global Mirror with the DS6000 used as the secondary system (1: server write to the DS8000 or ESS primary; 2: write acknowledge; the System Data Mover on the secondary server writes the data asynchronously to the DS6000 and manages data consistency)

6.2.4 Comparison of the Remote Mirror and Copy functions

In this section we summarize the use of and considerations for Remote Mirror and Copy functions.
Note: If you want to use PPRC for mirroring, you need to compare its function with OS mirroring. Generally speaking, you will have some disruption to recover your system with PPRC secondary volumes in an open systems environment because PPRC secondary volumes are not online to the application servers during the PPRC relationship. You may need some operations before assigning PPRC secondary volumes. For example, in an AIX environment, AIX assigns specific IDs to each volume (PVID).
Considerations

When the link bandwidth capability is exceeded with a heavy workload, the RPO might grow.

Note: Managing Global Mirror involves many complex operations. Therefore, we recommend management utilities (for example, Global Mirror Utilities) or management software (for example, IBM Multiple Device Manager) for Global Mirror.

6.2.5 What is Consistency Group?

With Copy Services, you can create Consistency Groups for FlashCopy and PPRC.
In order for the data to be consistent, the deposit of the paycheck must be applied before the withdrawal of cash for each of the checking accounts. However, it does not matter whether the deposit to checking account A or checking account B occurred first, as long as the associated withdrawals are in the correct order. So for example, the data copy would be consistent if the following sequence occurred at the copy.
Because of the time lag for Consistency Group operations, some volumes in some LSSs are in an extended long busy state and other volumes in the other LSSs are not. In Figure 6-12, the volumes in LSS11 are in an extended long busy state, and the volumes in LSS12 and 13 are not. The 1st operation is not completed because of this extended long busy state, and the 2nd and 3rd operations are not completed, because the 1st operation has not been completed.
Figure: write operations issued independently by the servers to three LSSs. The 1st write operation to LSS11 completes because LSS11 is not in an extended long busy condition; the 2nd write operation to LSS12 must wait because LSS12 is in an extended long busy condition; the 3rd write operation to LSS13 completes because LSS13 is not in an extended long busy condition. (The I/O sequence is not kept.)
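A small Python sketch can make the ordering argument concrete. It is illustrative only (not DS6000 behavior or an API): three dependent writes are issued in order, the middle LSS is put into the extended long busy state by a Consistency Group operation, and because each write is only issued after the previous one completes, the copy never contains a later write without the earlier ones.

```python
# Illustrative model of dependent writes and the extended long busy state.
# Not DS6000 code; it only demonstrates the ordering argument in the text.

frozen_lsss = {"LSS12"}          # LSSs placed in extended long busy by the freeze
copy_image = []                  # what has reached the (consistent) copy

def dependent_writes(writes):
    """Issue each write only after the previous one has completed."""
    for lss, data in writes:
        if lss in frozen_lsss:
            print(f"write {data} to {lss} is queued (extended long busy); stop here")
            return
        copy_image.append((lss, data))
        print(f"write {data} to {lss} completed")

dependent_writes([("LSS11", "1st"), ("LSS12", "2nd"), ("LSS13", "3rd")])
print("copy contains:", copy_image)   # never the 3rd write without the 1st and 2nd
```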
MC communicates with each server in the storage units via the Ethernet network. Therefore, the MC is a key component to configure and manage the DS6000. The client must provide a computer to use as the MC. If they want, they can order a computer from IBM as the MC. An additional MC can be provided for redundancy. For further information about the Management Console, see Chapter 8, “Configuration planning” on page 125. 6.3.
Tip: What has changed from the ESS CLI? There are differences between the ESS CLI and the DS CLI. The ESS CLI needs two steps to issue Copy Services functions:
1. Register the Copy Services task from the Web user interface.
2. Issue the registered task from the CLI.
With the DS CLI there is no need to register a Copy Services task before you issue a command. You can easily implement and dynamically change your Copy Services operation without the GUI. For further information about the DS CLI, see Chapter 10, “DS CLI” on page 195.
6.4 Interoperability with ESS Copy Services also supports the IBM Enterprise Storage Server Model 800 (ESS 800) and the ESS 750. To manage the ESS 800 from the Copy Services for DS6000, you need to install licensed internal code version 2.4.2 or later on the ESS 800. The DS CLI supports the DS6000, DS8000, and ESS 800 at the same time. The DS Storage Manager does not support the ESS 800. Note: The DS6000 does not support PPRC via an ESCON link.
Part 3. Planning and configuration

In this part we present an overview of the planning and configuration necessary before installing your DS6000. The topics include:
- Installation planning
- Configuration planning
- Logical configuration
- Command-Line Interface
- Performance
Chapter 7. Installation planning

This chapter discusses planning for the physical installation of a new DS6000 in your environment. Refer to the latest version of the IBM TotalStorage DS6000 Introduction and Planning Guide, GC26-7679, for further details. In this chapter we cover the following topics:
- General considerations
- Installation site preparation
- Management interfaces
- Network settings
- SAN requirements and considerations
- Software requirements
7.1 General considerations

The successful installation of a DS6000 requires careful planning. The main considerations when planning for the physical installation of a new DS6000 are the following:
- Floor loading
- Floor space
- Electrical power
- Operating environment
- Cooling
- Management console
- Host attachment and cabling
- Network and SAN considerations
Always refer to the most recent information for physical planning in the IBM TotalStorage DS6000 Introduction and Planning Guide, GC26-7679.
3. Ensure that the floor area provides enough stability to support the weight of the fully configured DS6000 series and associated components. 4. Ensure that you have adequate rack space for your hardware. Calculate the amount of space that the storage units will use. Don’t forget to leave enough space for future upgrades. 5. Ensure that your racks have space for the clearances that the DS6000 series requires for service and the requirements for floor loading strength.
Use the following steps to calculate the required space for your storage units:
1. Determine the dimensions of each model configuration in your storage units.
2. Determine the total space that is needed for the storage units by planning where you will place each storage unit in the rack.
3. Verify that the planned space and layout also meets the service clearance requirements for each unit.

7.2.2 Power requirements

We describe here the input voltage requirements for the DS6000 series.
Maximum wet bulb temperature: 27°C (80°F)
Note: The upper limit of wet bulb temperature must be lowered 1.0 degree C for every 274 meters of elevation above 305 meters.
Relative humidity: 8 - 80 percent
Typical heat load: 550 watts or 1880 Btu/hr
Noise level: 5.9 bels
The DS6000 should be maintained within an operating temperature range of 20 to 25 degrees Celsius (68 to 77 degrees Fahrenheit). The recommended operating temperature with the power on is 22 degrees Celsius (72 degrees Fahrenheit).
The DS Storage Manager can be accessed from any location that has network access to the DS management console using a Web browser. Since the DS Storage Manager is required to manage the DS6000, to perform Copy Services operations, or to call home to a service provider, the DS management console should always be on. For more information about the DS Storage Manager refer to the DS6000 Installation, Troubleshooting, and Recovery Guide, GC26-7678.
7.4 Network settings
To install a DS6000 in your environment you have to plan for the Ethernet infrastructure that the DS6000 will be connected to. You have to provide some TCP/IP addresses and you need an Ethernet switch or some free ports on an existing switch (see Figure 7-1). The following settings are required to connect the DS6000 series to a network:
Controller card IP address
You must provide a dotted decimal address that you will assign to each storage server controller card in the DS6800.
7.5 SAN requirements and considerations The DS6000 series provides a variety of host attachments so that you can consolidate storage capacity and workloads for open systems hosts and eServer zSeries hosts. The Fibre Channel adapters of the storage system can be configured to support the Fibre Channel Protocol (FCP) and fibre connection (FICON) protocol.
7.5.1 Attaching to an Open System host Fibre Channel technology supports increased performance, scalability, availability, and distance for attaching storage subsystems to servers as compared to SCSI attachments. Fibre Channel technology supports sharing of large amounts of disk storage between many servers. Fibre Channel ports on the DS6000 can be shared between different Host Bus Adapters (HBAs) and operating systems.
7.6 Software requirements
To see current information on servers, operating systems, host adapters, and connectivity products supported by the DS6000 series, review the Interoperability Matrix at the following DS6000 series Web site:
http://www.ibm.com/servers/storage/disk/ds6000/interop.html
7.6.1 Licensed features
Before you can configure your DS6000 series you must activate your licensed features to enable the functions purchased on your machine.
Chapter 8. Configuration planning
This chapter discusses configuration planning considerations when implementing the DS6000 series in your environment. The topics covered are:
Configuration planning
DS6000 Management Console
DS6000 license functions
Data migration planning
Planning for performance
8.1 Configuration planning considerations
When installing a DS6000 disk system, various physical requirements need to be addressed to successfully integrate the DS6000 into your existing environment. These requirements include:
The DS6800 requires Licensed Machine Code (LMC) level 5.0.0.0 or later.
Appropriate operating environment characteristics such as temperature, humidity, electrical power, and noise levels.
A PC to serve as the DS Management Console (MC), which hosts the DS Storage Manager.
The Management Console is supported on the following operating systems:
Microsoft Windows 2000
Microsoft Windows 2000 Server
Microsoft Windows XP Professional
Microsoft Windows 2003 Server
Linux (Red Hat AS 2.1)
Note: If the user wants to order the management console, consider the IBM 8141 ThinkCentre M51 Model 23U (8141-23U) Desktop system with a 3.0 GHz/800 MHz Intel Pentium 4 Processor.
new or un-configured DS6000 and apply the configuration. The following tasks may be performed when operating in offline configuration mode:
Create or import a new simulated instance of a physical storage unit.
Apply logical configurations to a new or fully deconfigured storage unit.
View and change communication settings for the DS6000.
From a single interface, work with new and view existing multiple DS6000s.
8.2.2 DS Management Console connectivity
Connectivity to the DS6000 series from the DS Management Console is needed to perform configuration updates to the DS6000 series. Connectivity to both DS6800 controllers is required to activate updates. Figure 8-1 illustrates the DS Management Console, optionally connected to the DS6000 by redundant Ethernet switches; redundant switches are not a requirement.
The DS6000 Storage Manager indicates the condition of the system after service. Should the user require maintenance services beyond the standard maintenance services, IBM and IBM Business Partners have fee-based service offerings to fulfill such requirements.
8.2.4 Copy Services management
The DS6000 series may be part of a Remote Mirror and Copy configuration. In Metro Mirror and Global Mirror configurations the DS6000 series may be either primary or secondary.
Figure 8-3 DS6000 series configured for remote support: (1) the customer asks IBM for help; (2) IBM assists by phone or through a remote connection from the IBM Service Center and IBM support server, over the Internet and a VPN tunnel through the customer firewall, to the DS6000 and management GUI on the customer LAN.
8.2.6 Call home
The DS6000 has the capability to enable a call home facility. The call home facility is only available by way of e-mail.
8.3.1 Operating Environment License (OEL) - required feature The user must order an operating environment license (OEL) feature, the IBM TotalStorage DS Operating Environment, for every DS6000 series. The operating environment model and features establish the extent of IBM authorization for the use of the IBM TotalStorage DS Operating Environment. The OEL licenses the operating environment and is based on the total physical capacity of the storage unit (DS6800 plus any DS6000 expansion enclosures).
The deactivation of an activated licensed function, or a lateral change or reduction in the licensed scope, is a disruptive activity and requires an initial microcode load (IML). A lateral change occurs when the user changes license scope from CKD to FB or FB to CKD. A reduction in license scope also occurs when the user decides to change from a license scope of ALL to CKD or FB.
Feature code   Description
5302           RMC-10TB unit
5303           RMC-25TB unit
5304           RMC-50TB unit

8.3.4 Parallel Access Volumes (PAV)
The Parallel Access Volumes model and features establish the extent of IBM authorization for the use of the Parallel Access Volumes licensed function. Table 8-5 provides the feature codes for the PAV function.
CKD volumes will be FlashCopied, then you only need to purchase a license for 5 TB of PTC and not the full 20 TB. The 5 TB of PTC would be set to CKD only. Figure 8-4 shows an example of a FlashCopy feature code authorization. In this case, the user is authorized up to 25 TB of CKD data. The user cannot FlashCopy any FB data.
I want to FlashCopy 10 TB of FB and 12 TB of CKD.
FB solution: a DS6000 with 45 TB disk capacity in total, 45 TB FlashCopy authorization, 45 TB of OEL licensed function, and a license scope of ALL. The user can FlashCopy up to 20 TB of FB user data plus FlashCopy data up to 20 TB.
The user has 20 TB of FB data and 25 TB of CKD data allocated. The user has to purchase a Point-in-Time Copy function equal to the total of both the FB and CKD capacity, that is 20 TB (FB) + 25 TB (CKD), which equals 45 TB.
Primary DS6000 with 45 TB disk capacity in total
Secondary DS6000 with 45 TB disk capacity
45 TB Remote Mirror and Copy authorization for the primary DS6000 and 45 TB Remote Mirror and Copy for the secondary DS6000
45 TB of OEL licensed function for the primary DS6000 and 45 TB of OEL licensed function for the secondary DS6000
The high-level steps for storage feature activation are:
1. Have machine-related information available; that is, model, serial number, and machine signature. This information is obtained from the DS Storage Manager.
2. Log on to the DSFA Web site.
3. Obtain feature activation keys by completing the machine-related information. The feature activation keys can either be directly retrieved, saved onto a diskette, or written down.
A DS CLI sketch for applying the keys follows.
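Once the keys have been obtained from DSFA, they can be applied through the DS Storage Manager or from the command line. The following is a minimal sketch assuming the DS CLI applykey and lskey commands; the storage image ID and the key string are hypothetical placeholders, so verify the exact syntax against the DS CLI reference for your code level:

   dscli> applykey -key 1234-5678-90AB-CDEF-1234-5678-90AB-CDEF IBM.1750-1300861
   dscli> lskey IBM.1750-1300861

The lskey command lists the licensed functions that are active on the storage unit, which is a quick way to confirm that the activation succeeded.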
Figure 8-7 Example of connecting several expansion enclosures: the device adapter chipsets inside controller 1 and controller 2 attach to loop 0 (accessed via the disk exp ports) and loop 1 (accessed via the disk contrl ports) over 4 FC-AL ports; each enclosure holds 16 DDMs behind an FC switch on a 2 Gb FC-AL link, and each loop supports additional enclosures to a maximum of four per loop.
Note: IBM plans to offer Capacity Magic for DS6000 in the future. Capacity Magic calculates the physical and effective storage capacity of a DS6000. IBM sales representatives and IBM Business Partners can download this tool from the IBM Web site. The values presented here are intended to be used only until Capacity Magic is available. The following figures are for an 8-DDM array. The user can also select a 4-DDM array configuration.
FB RAID rank capacity (each extent is 1 binary GB)

RAID type   DDM capacity   Array format   Extents
RAID10      73GB           3+3+2S         190
RAID10      73GB           4+4            254
RAID10      146GB          3+3+2S         386
RAID10      146GB          4+4            515
RAID10      300GB          3+3+2S         787
RAID10      300GB          4+4            1,050
RAID5       73GB           6+P+S          382
RAID5       73GB           7+P            445
RAID5       146GB          6+P+S          773
RAID5       146GB          7+P            902
RAID5       300GB          6+P+S          1,576
RAID5       300GB          7+P            1,837
See Figure 8-10 and Figure 8-11 on page 143. These figures are simple examples for 64 DDM configurations with RAID 5 or RAID 10 arrays. RAID arrays are configured with spare disks in the server enclosure and the first expansion enclosure (each enclosure has two spare disks). Expansion enclosures 2 and 3 don’t have spare disks.
Figure 8-11 Sparing example 2 (RAID 10): all DDMs of the same capacity and same rpm. The server enclosure and expansion enclosure 1 hold 3x2 and 4x2 arrays on one loop, and expansion enclosures 2 and 3 hold 4x2 arrays on the other loop.
Assumes all devices have the same capacity and the same rpm.
A DS6800 will have up to 2 spares on each loop; additional RAID arrays will have no spares (RAID 5: 7+P, RAID 10: 4x2).
All spares are available to all arrays on the loop.
Configuration with different types of DDMs
If you attach other
Sparing example 3 (RAID 5): the first two arrays use 146 GB DDMs and the next two arrays use 300 GB DDMs (same rpm). The server enclosure and expansion enclosure 1 (146 GB) and expansion enclosures 2 and 3 (300 GB) each hold 6+P arrays on the two loops.
Assumes all devices have the same rpm.
A minimum of 2 spares of the largest capacity array site is required on each loop: the first two enclosures (server and Exp 1) need two spare disks each, and the next two enclosures (Exp 2 and Exp 3) also need two spare disks each (because of the larger capacity).
Sparing example 4 (RAID 5): the first two arrays use 146 GB DDMs (10k rpm) and the next two arrays use 73 GB DDMs (15k rpm). The server enclosure and expansion enclosure 1 (146 GB) and expansion enclosures 2 and 3 (73 GB) each hold 6+P arrays on the two loops.
A minimum of 2 spares of the largest capacity array site is required on each loop: the first two enclosures (server and Exp 1) need two spare disks each, and the next two enclosures (Exp 2 and Exp 3) also need two spare disks each (because of the faster rpm). Even if ins
8.5.1 Operating system mirroring
Logical Volume Manager (LVM) mirroring and Veritas Volume Manager cause little or no application service disruption, and the original copy stays intact while the second copy is being made. The disadvantages of this approach include:
Host cycles are utilized.
It is labor intensive to set up and test.
Application delays are possible due to the dual writes occurring.
This method does not allow for point-in-time copy or easy backout once the first copy is removed.
The advantages of remote copy technologies are:
Other than z/OS Global Mirror, they are operating system independent.
Minimal host application outages.
The disadvantages of remote copy technologies include:
The same type of storage devices is required at both ends. For example, in a Metro Mirror configuration you can have an ESS 800 mirroring to a DS6000 (or another IBM-approved configuration), but you cannot have a non-IBM disk system mirroring to a DS6000.
Data migration methods

Environment              Data migration method
S/390                    IBM TotalStorage Global Mirror, Remote Mirror and Copy (when available)
zSeries                  IBM TotalStorage Global Mirror, Remote Mirror and Copy (when available)
Linux environment        IBM TotalStorage Global Mirror, Remote Mirror and Copy (when available)
z/OS operating system    DFSMSdss (simplest method)
                         DFSMShsm
                         IDCAMS Export/Import (VSAM)
                         IDCAMS Repro (VSAM, SAM, BDAM)
                         IEBCOPY
                         ICEGENER, IEBGENER (SAM)
                         Specialized database utilities for CICS, DB2 or
covering several time intervals, and should include peak I/O rate, peak response time, and peak (read and write) MB/sec throughput. Disk Magic will be enhanced to support the DS6000. Disk Magic provides insight when you are considering deploying remote copy technologies such as Metro Mirror. Confer with your sales representative for any assistance with Disk Magic. Note: Disk Magic is available to IBM sales representatives and IBM Business Partners only.
8.6.7 Hot spot avoidance Workload activity concentrated on a limited number of RAID ranks will saturate the RAID ranks. This may result in poor response times, so balancing I/O activity across any disk system is important. Spreading your I/O activity evenly across the available DS6000s will enable you to optimally exploit the DS6000 resources, thus providing better performance.
Chapter 9. The DS Storage Manager: Logical configuration
In this chapter, the following topics are discussed:
Configuration hierarchy, terminology, and concepts
Summary of the DS Storage Manager logical configuration steps
Introducing the GUI and logical configuration panels
The logical configuration process
9.1 Configuration hierarchy, terminology, and concepts
The DS Storage Manager provides a powerful, flexible, and easy-to-use application for the logical configuration of the DS6000. It is the client’s responsibility to configure the storage server to fit their specific needs. It is not in the scope of this redbook to show detailed steps and scenarios for every possible setup. For further assistance, help and guidance can be obtained from an IBM FTSS or an IBM Business Partner.
The figure shows host pSeries1 with four Fibre Channel ports (WWPN 1 through WWPN 4) and no host attachment group defined for host pSeries1.
DDM
A Disk Drive Module (DDM) is a customer-replaceable unit that consists of a single disk drive and its associated packaging.
Array sites
An array site is a predetermined grouping of four individual DDMs of the same speed and capacity.
Arrays
Arrays consist of DDMs from one or two array sites, used to construct one RAID array. An array is given either a RAID-5 or RAID-10 format.
Ranks
One array forms one CKD or Fixed Block (FB) rank.
Figure 9-2 Diagram of an extent pool containing three volumes
Figure 9-2 shows an example of one extent pool composed of ranks formatted for FB data, with 1 GB extents. Three logical volumes are defined (volumes 1310, 0501, and 0701). Two logical volumes (1310 and 0501) are made up of 6 extents of 1 GB each, making each volume 6 GB. Logical volume 0701 is built with 11 extents, making an 11 GB volume.
Any extent can be used to make a logical volume. There are thresholds that warn you when you are nearing the end of space utilization on the extent pool. There is a Reserve space option that will prevent the logical volume from being created in reserved space until the space is explicitly released. Note: A user can't control or specify which ranks in an extent pool are used when allocating extents to a volume.
Volume groups can be thought of as LUN groups. Do not confuse the term volume group here with that of volume groups on pSeries. The DS6000 Storage Manager volume groups have the following properties:
Volume groups enable FB LUN masking.
They contain one or more host attachments from different hosts and one or more LUNs. This allows sharing of the volumes/LUNs in the group with other host port attachments or other hosts that might be, for example, configured in clustering.
The figure shows host systems A, B, and C, each with one or more host attachments (WWPNs 1 through 8), associated with Volume Group 1 and Volume Group 2. It is possible to have several host attachments associated with one volume group.
Figure 9-5 shows an example of the relationship between LSSs, extent pools, and volume groups: Extent Pool 4, consisting of two LUNs, LUNs 0210 and 1401, and Extent Pool 5, consisting of three LUNs, LUNs 0313, 0512, and 1515. Here are some considerations regarding the relationship between LSSs, extent pools, and volume groups: Volumes from different LSSs and different extent pools can be in one volume group as shown in Figure 9-5.
– CKD LSS definitions are configured during the LCU creation.
– FB LSS definitions are configured during the volume creation.
LSSs have no predetermined relation to physical ranks or extent pools other than their server affinity to either server0 or server1.
– One LSS can contain volumes/LUNs from different extent pools.
– One extent pool can contain volumes/LUNs that are in different LSSs, as shown in Figure 9-6.
– One LCU can contain CKD volumes/LUNs of different types.
9.1.2 Summary of the DS Storage Manager logical configuration steps It is our recommendation that the client consider the following concepts before performing the logical configuration. These recommendations are discussed in this chapter. Planning When configuring available space to be presented to the host, we suggest that the client approach the configuration from the host and work up to the DDM (raw physical disk) level.
Raw or physical DDM layer
At the very top, you can see the raw DDMs. There are four DDMs in a group called a four-pack. They are placed into the DS6000 in pairs, making eight DDMs. DDM X represents one four-pack and DDM Y represents a pair from another four-pack. Upon placing the four-packs or eight-packs into the DS6000, each four-pack or eight-pack is grouped into array sites, shown as Layer 2. The DS6000 also supports 4-DDM arrays; these are not described here.
Create the Storage Complex.
Create the Storage Units.
Define the Host Systems and associated Host Attachments.
Create Arrays by selecting Array Sites. Array Sites are already automatically pre-defined.
Create Ranks and add Arrays to the ranks.
Create Extent Pools, add ranks to the Extent Pools, and define the server 0 or server 1 affinity.
The same sequence can also be scripted with the DS CLI, as sketched below.
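For environments where scripting is preferred over the GUI, the equivalent objects can be created with the DS CLI described in Chapter 10. The sketch below is illustrative only: the storage image ID, array site names, capacities, volume IDs, and WWPN are hypothetical, and the exact command names and parameters should be verified against the DS CLI reference for your code level.

   dscli> lsarraysite -dev IBM.1750-1300861
   dscli> mkarray -dev IBM.1750-1300861 -raidtype 5 -arsite S1
   dscli> mkrank -dev IBM.1750-1300861 -array A0 -stgtype fb
   dscli> mkextpool -dev IBM.1750-1300861 -rankgrp 0 -stgtype fb open_pool_0
   dscli> chrank -dev IBM.1750-1300861 -extpool P0 R0
   dscli> mkfbvol -dev IBM.1750-1300861 -extpool P0 -cap 10 1000-1003
   dscli> mkvolgrp -dev IBM.1750-1300861 -type scsimask -volume 1000-1003 aix_vg
   dscli> mkhostconnect -dev IBM.1750-1300861 -wwname 10000000C9123456 -hosttype pSeries -volgrp V0 aix_host1

Each command corresponds to one of the GUI steps above: arrays from array sites, ranks from arrays, extent pools with a server affinity (rank group 0 or 1), volumes from extents, and finally a volume group and host attachment.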
Figure 9-10 Entering the URL using the TCP/IP address for the SMC
In Figure 9-10, we show the TCP/IP address followed by the port number 8452.
Figure 9-11 Entering the URL using the fully qualified name of your MC
In Figure 9-11, we show the fully qualified name and the port number 8452, separated by a colon. For ease of identification, you could add a suffix such as 0 or 1 to the selected fully qualified name, for example, MC_0 for the default MC as shown in Figure 9-11.
Once the GUI is started and the user has successfully logged on, the Welcome panel will display as shown in Figure 9-12. The triangle expands the menu Figure 9-12 View of the Welcome panel Figure 9-12 shows the Welcome panel’s two menu choices. By clicking on the triangles, the menu expands to show the menu options needed to configure the storage.
Figure 9-13 View of the fully expanded Real-time Manager menu choices – Copy Services You can use the Copy Services selections of the DS Storage Manager if you choose Real-time during the installation of the DS Storage Manager and you purchased these optional features. A further requirement for using the Copy Services features is the activation of license activation keys. You need to obtain the Copy Services license activation keys (including the one for the use of PAVs) from the DSFA Web site.
Figure 9-14 View of the fully expanded Simulated Manager menu choices Express configuration Express configuration provides the simplest and fastest method to configure a storage complex. Some configuration methods require extensive time. Because there are many complex functions available to you, you are required to make several decisions during the configuration process. However, with Express configuration, the storage server makes several of those decisions for you.
Create and define the users and passwords
Select User administration → Add user and click Go, as shown in Figure 9-15. The screen that is returned is shown in Figure 9-16. On this screen you can add users and set passwords, subject to the following guidelines:
– The user name can be up to 16 characters.
– Passwords must contain at least 5 alphabetic characters and at least one special character, with an alphabetic character in the first and last positions.
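User IDs can also be defined from the command line. The following is a minimal sketch that assumes the DS CLI mkuser command is available at your code level; the user name, password, and group name are hypothetical, so check the DS CLI reference for the exact parameters and the valid group names:

   dscli> mkuser -group admin -pw tempw0rd storageadmin1

The password guidelines listed above should be followed for CLI-created users as well.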
To use the information center, click the question mark (?) icon shown in Figure 9-17.
Figure 9-17 View of the information center
9.2.3 Navigating the GUI
Knowing which icons, radio buttons, and check boxes to click in the GUI will help you efficiently navigate your way through the configurator and successfully configure your storage.
Figure 9-18 The DS Storage Manager Welcome panel
On the DS Storage Manager Welcome panel, several buttons appear that need some explanation.
With reference to the numbers shown in Figure 9-18, the icons shown have the following meanings:
1. Click icon 1 to hide the My Work menu area. This increases the space for displaying the main work area of the panel.
2. Icon 2 hides the black banner across the top of the screen, again to increase the space to display the panel you're working on.
3. Icon 3 allows you to properly log out and exit the DS Storage Manager GUI.
4. Use icon 4 to access the Info Center.
Figure 9-20 View of the Storage Complexes section
The buttons displayed on the Storage complexes screen, shown in Figure 9-19 and called out in detail in Figure 9-20, have the following meanings. Boxes 1 through 6 are for selecting and filtering:
– 1 - Select all
– 2 - Deselect all
– 3 - Show filter row
– 4 - Clear all filters
– 5 - Edit sort
– 6 - Clear all sorts
The caret called out as item 7 is a simple ascending/descending sort on a single column.
Figure 9-22 View of radio buttons and check boxes in the host attachment panel
In the example shown in Figure 9-22, the radio button is checked to allow specific host attachments for selected storage unit I/O ports only. The check box has also been selected to show the recommended location view for the attachment. Only one radio button can be selected in the group, but multiple check boxes can be selected.
Figure 9-24 View of the Define properties panel, with the Nickname defined Do not click Create new storage unit at the bottom of the screen shown in Figure 9-24. Click Next and Finish in the verification step. 9.3.2 Configuring the storage unit To create the storage unit, expand the Manage Hardware section, click Storage units (2), click Create from the Select action pull-down and click Go. Follow the panel directions with each advancing window.
Figure 9-25 View of the General storage unit information panel
As illustrated in Figure 9-25, fill in the required fields as follows:
1. Click the Machine Type-Model from the pull-down list.
2. Fill in the Nickname.
3. Type in the Description.
4. Click the Select Storage Complex pull-down, and choose the storage complex on which you wish to create the storage unit.
Clicking Next will advance you to the Specify DDM packs panel, as shown in Figure 9-26.
Figure 9-26 View of the Specify DDM packs panel, with the Quantity and DDM type added
Click Next to advance to the Define licensed function panel, under the Create storage unit path, as shown in Figure 9-27.
Figure 9-27 View of the Define licensed function panel
Fill in the fields shown in Figure 9-27 as follows:
The number of licensed TB for the Operating environment.
The quantity of storage covered by a FlashCopy license, in TB.
Follow the steps specified on the panel shown in Figure 9-27. The next panel, shown in Figure 9-28, requires you to enter the storage type details. Figure 9-28 Specify the I/O adapter configuration panel Enter the appropriate information and click OK. 9.3.3 Configuring the logical host systems To create a logical host for the storage unit that you just created, click Host Systems, as shown in Figure 9-29. You may want to expand the work area.
Figure 9-30 View of Host Systems panel, with the Go button selected Click the Select Storage Complex action pull-down, and highlight the storage complex on which you wish to configure, click Create and Go. The screen will advance to the General host information panel shown in Figure 9-31. Figure 9-31 View of the General host information panel Click Next to advance the screen to the Define Host System panel shown in Figure 9-32 on page 178.
Figure 9-32 View of Define Host Systems panel Enter the appropriate information in the Define host ports panel, as shown in Figure 9-32. Note: Selecting Group ports to share a common set of volumes will group the host ports together into one attachment. Each host port will require a WWPN to be entered now, if you are using the Real-time Manager, or later if you are using the Simulated Manager. Click Add, and the Define host ports panel will update the new information as shown in Figure 9-33.
Click Next, and the screen will advance to the Select storage units panel shown in Figure 9-34. Figure 9-34 View of the Select storage unit panel Highlight the Available storage units that you wish, click Add and Next. The screen will advance to the Specify storage units parameters shown in Figure 9-35 on page 180.
Figure 9-35 View of the Specify storage units parameters panel Under the Specify storage units parameters, do the following: 1. Click the Select volume group for host attachment pull-down, and highlight Select volume group later. 2. Click any valid storage unit I/O ports under the This host attachment can login to field. 3. Click Apply assignment and OK. 4. Verify and click Finish. 9.3.4 Creating arrays from array sites Under Configure Storage, click Arrays.
Figure 9-36 View of the Definition method panel (clicking the check box will make an eight disk array)
From the Definition method panel, if you choose Create arrays automatically, the system will automatically take all the space from an array site and place it into an array. You will also notice that by clicking the check box next to Create an 8 disk array, two 4 disk array sites will be placed together to form an eight disk array.
Enter the appropriate information for the quantity of the arrays and the RAID type. Note: If you choose to create eight disk arrays, then you will only have half as many arrays and ranks as if you had chosen to create four disk arrays. Click Next to advance to the Add arrays to ranks panel shown in Figure 9-38. If you click the check box next to Add these arrays to ranks, then you will not have to configure the ranks separately at a later time.
Figure 9-39 View of creating custom arrays from four disk array sites. At this point you can select from the list of four disk array sites to put together to make an eight disk array. If you click Next the second array-site selection panel is displayed, as shown in Figure 9-40 on page 184.
Figure 9-40 View of the second array-site selection panel From this panel you can select the array sites from the pull-down list to make an eight disk array. 9.3.5 Creating extent pools To create extent pools, expand the Configure Storage section, click Extent pools, click Create from the Select Action pull-down and click Go. Follow the panel directions with each advancing window.
The extent pools will take on either a server 0 or server 1 affinity at this point, as shown in Figure 9-42. Figure 9-42 View of Define properties panel Click Next and Finish. 9.3.6 Creating FB volumes from extents Under Simulated Manager, expand the Open systems section and click Volumes. Click Create from the Select Action pull-down and click Go. Follow the panel directions with each advancing window.
Figure 9-43 View of the Select extent pool panel
To determine the quantity and size of the volumes, use the calculators to determine the maximum size versus quantity, as shown in Figure 9-44.
Figure 9-44 The Define volume properties panel (only even-numbered LSSs are shown)
It is here that the volume will take on the LSS numbering affinity.
Note: Since server 0 was selected for the extent pool, only even LSS numbers are selectable, as shown in Figure 9-44. You can give the volume a unique name and number that may help you manage the volumes, as shown in Figure 9-45. Figure 9-45 View of the Create volume nicknames panel Click Finish to end the process of creating the volumes. 9.3.7 Creating volume groups Under Simulated Manager → Open Systems, perform the following steps to configure the volume groups: 1. Click Volume Groups. 2.
Figure 9-46 The Define volume group properties filled out Select the host attachment you wish to associate the volume group with. See Figure 9-47.
Select the volumes for the group panel, as shown in Figure 9-48. Figure 9-48 The Select volumes for group panel Click Finish. 9.3.8 Assigning LUNs to the hosts Under Simulated Manager, perform the following steps to configure the volumes: 1. Click Volumes. 2. Select the check box next to the volume that you want to assign. 3. Click the Select Action pull-down, and highlight Add To Volume Group. 4. Click Go. 5. Click the check box next to the desired volume group and click the Apply button. 6. Click OK.
4. Click Go.
5. Click OK.
9.3.10 Creating CKD LCUs
Under Simulated Manager, zSeries, perform the following steps:
1. Click LCUs.
2. Click the Select Action pull-down, and highlight Create.
3. Click Go.
4. Click the check box next to the LCU ID you wish to create.
5. Click Next.
6. In this panel do the following:
a. Enter the desired SSID.
b. Select the LCU type.
c. Accept the defaults on the other input boxes, unless you are using Copy Services.
7. Click Next.
8. Click Finish.
12. Under the Define alias assignments panel, do the following:
a. Click the check box next to the LCU number.
b. Enter the starting address.
c. Select the order, Ascending or Descending.
d. Select the aliases per volume; for example, 1 alias to every 4 base volumes, or 2 aliases to every 1 base volume.
e. Click Next.
13. Under the Verification panel, click Finish.
9.3.13 Displaying the storage units WWNN in the DS Storage Manager GUI Under Simulated manager, perform the following steps to display the WWNN of the storage unit: 1. Click Simulated manager as shown in Figure 9-50. Figure 9-50 View of the Real-time Manager panel 2. Click Storage units. 3. Select the check box beside the storage unit name, as shown in Figure 9-51. Figure 9-51 View of the storage unit with the radio button and the Properties selected 4.
Figure 9-52 View of the WWNN in the General panel The WWNN number is displayed as shown in Figure 9-52. 9.4 Summary In this chapter we have discussed the configuration hierarchy, terminology and concepts. We have recommended an order and methodology for configuring the DS6000 storage server. We have included some logical configuration steps and examples and explained how to navigate the GUI.
Chapter 10. DS CLI
This chapter provides an introduction to the DS Command-Line Interface (DS CLI), which can be used to configure and maintain the DS6000 and DS8000 series. It also describes how it can be used to manage Copy Services relationships. In this chapter we describe:
Functionality
Supported environments
Installation methods
Command flow
User security
Usage concepts
Usage examples
Mixed device environments and migration
DS CLI migration example
10.1 Introduction The IBM TotalStorage DS Command-Line Interface (the DS CLI) is a software package that allows open systems hosts to invoke and manage Copy Services functions as well as to configure and manage all storage units in a storage complex. The DS CLI is a full-function command set. In addition to the DS6000 and DS8000, the DS CLI can also be used to manage Copy Services on the ESS 750s and 800s, provided they are on ESS code versions 2.4.2.x and above.
Manage host access to volumes
Configure host adapter ports
The DS CLI can be used to invoke the following Copy Services functions:
FlashCopy - Point-in-time Copy
IBM TotalStorage Metro Mirror - Synchronous Peer-to-Peer Remote Copy (PPRC)
IBM TotalStorage Global Copy - PPRC-XD
IBM TotalStorage Global Mirror - Asynchronous PPRC
10.3 Supported environments
The DS CLI will be supported on a very wide variety of open systems operating systems. At present the supported systems are:
AIX 5.1, 5.
and then follows the prompts. If using a GUI, the user navigates to the CD root directory and clicks on the relevant setup executable.
3. The DS CLI is then installed. The default install directory will be:
– /opt/ibm/dscli - for all forms of UNIX
– C:\Program Files\IBM\dscli - for all forms of Windows
– SYS:\dscli - for Novell Netware
10.5 Command flow
To understand migration or co-existence considerations, it is important to understand the flow of commands in both the ESS CLI and the DS CLI.
Figure 10-1 Command flow for ESS 800 Copy Services commands: a CLI script on an open systems host uses the ESS CS CLI software, which communicates over the network interface with Copy Services server A or B; the CS clients run in cluster 1 and cluster 2 of the ESS 800.
A CS server is now able to manage up to eight F20s and ESS 800s. This means that up to sixteen clusters can be clients of the CS server. All FlashCopy and remote copy commands are sent to the CS server, which then sends them to the relevant client on the relevant ESS.
Figure 10-2 DS CLI Copy Services command flow: a CLI script on an open systems host can use the ESS CS CLI to reach the CS servers and clients in cluster 1 and cluster 2 of the ESS 800, or the DS CLI to reach the CLI interpreter on the Storage HMC, which uses dual internal network interfaces to the CLI interfaces on server 0 and server 1 of the DS8000.
DS8000 split network
One thing that you may notice about Figure 10-2 is that the S-HMC has different network interfaces.
Figure 10-3 Command flow for the DS6000: a CLI script on an open systems host uses the DS CLI software, which communicates over the network with the CLI interpreter on the DS Storage Management PC; the interpreter in turn communicates with the CLI interfaces on controller 1 and controller 2 of the DS6000.
For the DS6000, it is possible to install a second network interface card within the DS Storage Manager PC. This would allow you to connect it to two separate switches for improved redundancy.
Figure 10-4 CLI co-existence: ESS 800 tasks are issued via the ESS CLI software to the infoservers in cluster 1 and cluster 2 of the ESS 800, while DS8000 tasks are issued via the DS CLI software to the CLI interpreter on the Storage HMC, which communicates over dual internal network interfaces with the CLI interfaces on server 0 and server 1 of the DS8000.
Storage management
ESS CLI commands that are used to perform storage management on the ESS 800 are issued to a process known as the infoserver.
10.6 User security The DS CLI software must authenticate with the DS MC or CS Server before commands can be issued. An initial setup task will be to define at least one userid and password whose authentication details are saved in an encrypted file. A profile file can then be used to identify the name of the encrypted password file. Scripts that execute DS CLI commands can then use the profile file to get the password needed to authenticate the commands.
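When the DS CLI is invoked from a script, the user, the management console address, and the password file are typically supplied on the command line or in the profile. The following single-command invocation is only a sketch: the console address, user name, and password file name are hypothetical, and the option names (for example, -hmc1 and -pwfile) should be verified against the DS CLI documentation for your code level:

   dscli -hmc1 10.0.0.1 -user csadmin -pwfile security.dat lsuser

Because the password is read from the encrypted file rather than typed on the command line, it does not appear in the shell history or in the process list.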
exit status of dscli = 0 C:\Program Files\IBM\dscli> It is also possible to include single commands in a script, though this is different from the script mode described later. This is because every command that uses the DS CLI would invoke the DS CLI and then exit it. A simple Windows script is shown in Example 10-2.
a single instance of the DS CLI interpreter. Comments can be placed in the script if they are prefixed by a hash (#). A simple example of a script mode script is shown in Example 10-5. Example 10-5 DS CLI script mode example # This script issues the 'lsuser' command lsuser # end of script In this example, the script was placed in a file called listAllUsers.cli, located in the scripts folder within the DS CLI folder. It is then executed by using the dscli -script command, as shown in Example 10-6.
Example 10-8 Use of the help -l command
dscli> help -l mkflash
mkflash [ { -help|-h|-? } ] [-fullid] [-dev storage_image_ID] [-tgtpprc] [-tgtoffline] [-tgtinhibit] [-freeze] [-record] [-persist] [-nocp] [-wait] [-seqnum Flash_Sequence_Num] SourceVolumeID:TargetVolumeID
Man pages
A man page is available for every DS CLI command. Man pages are most commonly seen in UNIX-based operating systems to give information about command capabilities.
echo A DS CLI application error occurred.
goto end
:level5
echo An authentication error occurred. Check the userid and password.
goto end
:level4
echo A DS CLI Server error occurred.
goto end
:level3
echo A connection error occurred. Try pinging 10.0.0.1
echo If this fails call network support on 555-1001
goto end
:level2
echo A syntax error. Check the syntax of the command using online help.
goto end
:level0
echo No errors were encountered.
# The following command checks the status of the ranks
lsrank -dev IBM.2107-9999999
# The following command assigns rank0 (R0) to extent pool 0 (P0)
chrank -extpool P0 -dev IBM.2107-9999999 R0
# The following command creates
mklcu -dev IBM.2107-9999999 -ss
# The following command creates
mklcu -dev IBM.
Migration considerations If your environment is currently using the ESS CS CLI to manage Copy Services on your model 800s, you could consider migrating your environment to the DS CLI. Your model 800s will need to be upgraded to a microcode level of 2.4.2 or above. If your environment is a mix of ESS F20s and ESS 800s, it may be more convenient to keep using only the ESS CLI. This is because the DS CLI cannot manage the ESS F20 at all, and cannot manage storage on an ESS 800.
Figure 10-6 A portion of the tasks listed by using the GUI In Example 10-12, the list task command is used. This is an ESS CLI command. Example 10-12 Using the list task command to list all saved tasks (only the last five are shown) arielle@aixserv:/opt/ibm/ESScli > esscli list task -s 10.0.0.1 -u csadmin -p passw0rd Wed Nov 24 10:29:31 EST 2004 IBM ESSCLI 2.4.
Figure 10-7 Using the GUI to get the contents of a FlashCopy task It makes more sense, however, to use the ESS CLI show task command to list the contents of the tasks, as depicted in Example 10-13. Example 10-13 Using the command line to get the contents of a FlashCopy task mitchell@aixserv:/opt/ibm/ESScli > esscli show task -s 10.0.0.1 -u csadmin -p passw0rd -d "name=Flash10041005" Wed Nov 24 10:37:17 EST 2004 IBM ESSCLI 2.4.
Table 10-3 Converting a FlashCopy task to DS CLI

ESS CS CLI parameter   Saved task parameter   DS CLI conversion     Explanation
Tasktype               FCEstablish            mkflash               An FCEstablish becomes a mkflash.
Options                NoBackgroundCopy       -nocp                 To do a FlashCopy no-copy we use the -nocp parameter.
SourceServer           2105.23953             -dev IBM.2105-23953   The format of the serial number changes. You must use the exact syntax.
TargetServer           2105.23953             N/A                   We only need to use the -dev once, so this is redundant.
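Putting the converted parameters together, the saved ESS task translates into a single DS CLI command along the following lines. The source and target volume IDs (1004 and 1005) are assumed here from the task name Flash10041005 purely for illustration; use the values reported by the show task command for your own tasks:

   dscli> mkflash -dev IBM.2105-23953 -nocp 1004:1005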
You can also confirm the status of the FlashCopy by using the Web Copy Services GUI, as shown in Figure 10-8. Figure 10-8 FlashCopy status via the ESS 800 Web Copy Services GUI 10.10.4 Using DS CLI commands via a single command or script Having translated a saved task into a DS CLI command, you may now want to use a script to execute this task upon request. Since all tasks must be authenticated you will need to create a userid.
Having added the userid called csadmin, the password has been saved in an encrypted file called security.dat. By default, the file is placed in:
c:\Documents and settings\<Windows user name>\DSCLI\security.dat
or
$HOME/dscli/security.dat
You can however use the -pwfile parameter to place this file anywhere.
Setting up a profile
Having created a userid, you will need to edit the profile used by the DS CLI to store the S-HMC IP address (or fully qualified name) and other common parameters.
# Default target Storage Image ID
devid: IBM.2105-23953
An example of a command where the password is entered in plain text is shown in Example 10-17. In this example the lsuser command is issued directly to a DS MC. Note that the password will still be sent using SSL, so a network sniffer would not be able to view it easily. Note also that the syntax between the command and the profile is slightly different.
10.11 Summary This chapter has provided some important information about the DS CLI. This new CLI allows considerable flexibility in how DS6000 and DS8000 series storage servers are configured and managed. It also detailed how an existing ESS 800 customer can benefit from the new flexibility provided by the DS CLI.
Part 4
Implementation and management in the z/OS environment
In this part we discuss considerations for the DS6000 series when used in the z/OS environment. The topics include:
z/OS software
Data migration in the z/OS environment
Chapter 11. Performance considerations
This chapter discusses early performance considerations regarding the DS6000 series. Disk Magic modelling for DS6000 is going to be available in early 2005. Contact your IBM sales representative for more information about this tool and the benchmark testing that was done by the Tucson performance measurement lab. Note that Disk Magic is an IBM internal modelling tool.
11.1 What is the challenge?
In recent years we have seen new storage servers being developed at a speed that competes with the speed at which processor development introduces new processors. On the other hand, investment protection as a goal to contain Total Cost of Ownership (TCO) dictates inventing smarter architectures that allow for growth at a component level.
Figure 11-1 Host server and storage server comparison: Balanced throughput challenge
The challenge is obvious: Develop a storage server, from the top with its host adapters down to its disk drives, that creates a balanced system with respect to each component within this storage server and with respect to their interconnectivity with
11.2.1 SSA backend interconnection
The Serial Storage Architecture (SSA) connectivity, with the SSA loops in the lower level of the storage server or backend, imposed RAID rank saturation and reached its limit of 40 MB/sec for a single stream file I/O. IBM decided not to pursue SSA connectivity, despite its ability to communicate and transfer data within an SSA loop without arbitration.
relatively small logical volumes, we ran out of device numbers to address an entire LSS. This happens even earlier when configuring not only real devices (3390B) within an LSS, but also alias devices (3390A) within an LSS in z/OS environments. By the way, an LSS is congruent to a logical control unit (LCU) in this context. An LCU is only relevant in z/OS, and the term is not used for open systems operating systems.
Figure 11-2 Switched FC-AL disk subsystem: the storage server adapters connect through Fibre Channel switches to the 16 DDMs.
Performance is enhanced as both DAs connect to the switched Fibre Channel disk subsystem backend, as displayed in Figure 11-3 on page 225. Note that each DA port can concurrently send and receive data.
Figure 11-3 High availability and increased bandwidth connecting both DAs to two logical loops
These two switched point-to-point loops to each drive, plus connecting both DAs to each switch, account for the following:
There is no arbitration competition and interference between one drive and all the other drives because there is no hardware
This just outlines the physical structure. A virtualization approach built on top of the high performance architecture contributes even further to enhanced performance. For details see Chapter 4, “Virtualization concepts” on page 65. 11.3.2 Fibre Channel device adapter The DS6000 still relies on eight DDMs to form a RAID-5 or a RAID-10 array. With the virtualization approach and the concept of extents, the DAs are mapping the virtualization level over the disk subsystem back end.
With two sets of HA chip sets the DS6000 series can configure up to eight FICON or FCP ports.
Figure 11-6 Standard PowerPC processor complexes for the DS6800-511: controller card 1 contains the host adapter chipset with its 2 Gbps Fibre Channel ports, the PowerPC chipset with volatile and persistent processor memory, and the device adapter chipset with its 2 Gbps Fibre Channel ports.
The next figure, Figure 11-7, provides a less abstract view.
Figure 11-7 shows both controller cards: server 0 on controller card 0 and server 1 on controller card 1, each with its host adapter chipset and 2 Gbps Fibre Channel ports, PowerPC chipset, volatile and persistent processor memory, and device adapter chipset. The two servers communicate over the interconnect, and their device adapter ports attach through Fibre Channel switches to the 16 DDMs in the server enclosure.
Figure 11-9 DS6000 interconnects to expansion enclosures and scales very well: server 0 and server 1 in the server enclosure attach through Fibre Channel switches to the 16 DDMs of the server enclosure and of each of the three expansion enclosures shown.
Figure 11-9 outlines how expansion enclosures connect through inter-switch links to the server enclosure.
11.4.2 Data placement in the DS6000
Once you have determined the disk subsystem throughput, the disk space, and the number of disks required by your different hosts and applications, you have to make a decision regarding data placement. As is common for data placement, and to optimize DS6000 resource utilization, you should:
Equally spread the LUNs across the DS6000 servers. Spreading the LUNs equally on rank group 0 and 1 will balance the load across the DS6000 servers.
Figure 11-10 Spreading data across ranks: in the balanced implementation (LVM striping), each of four extent pools contains one rank, a 2 GB LUN is allocated in each pool, and the logical volume is striped across the four LUNs; in the non-balanced implementation, an extent pool contains more than one rank and a single 8 GB LUN is allocated across the ranks.
Note: The recommendation is to use host striping wherever possible to dis
11.4.5 Determining the number of paths to a LUN When configuring the IBM DS6000 for an open systems host, a decision must be made regarding the number of paths to a particular LUN, because the multipath software allows (and manages) multiple paths to a LUN. There are two opposing factors to consider when deciding on the number of paths to a LUN: Increasing the number of paths increases availability of the data, protecting against outages.
11.5.1 Connect to zSeries hosts Figure 11-11 displays a configuration fragment on how to connect a DS6800 to a FICON host.
13,600 I/Os per second with the conservative numbers. These numbers vary depending on the server type used. The ESS 800 has an aggregated bandwidth of about 500 MB/sec for highly sequential reads and about 350 MB/sec for sequential writes. The DS6800 can achieve higher data rates than an ESS 800. In a z/OS environment a typical transaction workload might perform on an ESS 800 Turbo II with a large cache configuration slightly better than with a DS6800.
A cache to backstore ratio of 0.2% for high performance open systems
0.2% for z/OS for standard performance
A ratio of 0.1% between cache size and backstore capacity for open system environments for standard performance
S/390 or zSeries channel consolidation
The number of channels plays a role as well when sizing DS6000 configurations and when we know from where we are coming.
With 15K RPM DDMs you need the equivalent of three 8 packs to satisfy the I/O load from the host for this example. Note the DS6000 can also be configured with a RAID array comprised of four DDMs. Depending on the required capacity, you then decide the disk capacity, provided each desired disk capacity has 15k RPM. When the access density is less and you need more capacity, follow the example with higher capacity disks, which usually spin at a slower speed like 10k RPM.
control the placement of each single volume and where it ends up in the disk subsystem. For the DS6000 this would have the advantage that you can plan for proper volume placement with respect to preferred paths. In the example in Figure 11-12 each rank is in its own extent pool. The evenly numbered extent pools have an affinity to the left server, server 0. The odd number extent pools have an affinity to the right server, server 1.
Minimize the number of extent pools The other extreme is to create just two extent pools when the DS6000 is configured as CKD storage only. You would then subdivide the disk subsystem evenly between both processor complexes or servers as Figure 11-13 shows.
Figure 11-14 Mix of extent pools: two large extent pools, one with affinity to server 0 (blue) and one with affinity to server 1 (red), hold the majority of the ranks, while several smaller extent pools each hold individual ranks; I/O follows the preferred path to the owning server.
Create two general extent pools for all the average workload and the majority of the volumes and subdivide these pools evenly between both processor complexes or servers.
128 DDMs. Depending on the DDM size this reaches a total of up to 67.2 TB. Just the base enclosure provides up to 4.8 TB of physical storage capacity with 16 DDMs and 300 GB per DDM. The small and fast DS6000, with its rich functionality and compatibility with the ESS 750, ESS 800, and DS8000 in all functional respects, makes this a very attractive choice.
Chapter 12. zSeries software enhancements
This chapter discusses z/OS, z/VM, z/VSE™, and Transaction Processing Facility (TPF) software enhancements that support the DS6000 series. The enhancements include:
Scalability support
Large volume support
Hardware configuration definition (HCD) to recognize the DS6000 series
Performance statistics
Resource Measurement Facility (RMF)
Preferred pathing
12.1 Software enhancements for the DS6000
A number of enhancements have been introduced into the z/OS, z/VM, z/VSE, VSE/ESA, and TPF operating systems to support the DS6000. The enhancements do not just support the DS6000; they also provide additional benefits that are not specific to the DS6000.
12.2 z/OS enhancements
The DS6000 series simplifies system deployment by supporting major server platforms.
well with a DS6000 that has the capability to scale up to 8192 devices. With the current support, we may have CPU or spin lock contention, or exhaust storage below the 16M line at device failover. With z/OS 1.4 and higher and the DS6000 software support, IOS recovery has been improved by consolidating unit checks at the LSS level instead of at each disconnected device. This consolidation shortens the recovery time resulting from I/O errors.
Today control unit single point of failure information is specified in a table and must be updated for each new control unit. Instead, we can use the Read Availability Mask command to retrieve the information from the control unit. By doing this, there is no need to maintain a table for this information. 12.2.4 Initial Program Load (IPL) enhancements During the IPL sequence the channel subsystem selects a channel path to read from the SYSRES device.
ICKDSF
DFSORT
EREP
12.2.7 New performance statistics
There are two new sets of performance statistics that will be reported by the DS6000. Since a logical volume is no longer allocated on a single RAID rank with a single RAID type or single device adapter pair, the performance data will be provided with a new set of rank performance statistics and extent pool statistics. The RAID rank reports will no longer be reported by RMF and IDCAMS LISTDATA batch reports.
sections: Extent Pool Statistics and Rank Statistics. These statistics are generated from SMF record 74 subtype 8: The ESS Extent Pool Statistics section provides capacity and performance information about allocated disk space. For each extent pool, it shows the real capacity and the number of real extents. The ESS Rank Statistics section provides measurements about read and write operations in each rank of an extent pool. It also shows the number of arrays and the array width of all ranks.
D M=DEV(9E02)
IEE174I 09.14.23 DISPLAY M 945
DEVICE 9E02 STATUS=ONLINE
CHP                    38  42
DEST LINK ADDRESS      2F  2F
ENTRY LINK ADDRESS     27  27
PATH ONLINE            Y   Y
CHP PHYSICALLY ONLINE  Y   Y
PATH OPERATIONAL       Y   Y
PATH ATTRIBUTES        PF  NP
MANAGED                N   N
MAXIMUM MANAGED CHPID(S) ALLOWED: 0
DESTINATION CU LOGICAL ADDRESS = 00
CU ND      = 001750.000.IBM.13.000000048156
DEVICE NED = 001750.000.IBM.13.000000048156
PAV BASE AND ALIASES   7
Figure 12-3 D M=DEV command output
VSE/ESA 2.7 and higher.
Important: Always review the latest Preventive Service Planning (PSP) 1750DEVICE bucket for software updates. The PSP information can be found at:
http://www-1.ibm.com/servers/resourcelink/svc03100.nsf?OpenDatabase
VSE/ESA does not support 64K LVs for the DS6000.
12.5 TPF enhancements
TPF is an IBM platform for high volume, online transaction processing. It is used by industries demanding large transaction volumes, such as airlines and banks.
Chapter 13. Data Migration in zSeries environments
This chapter describes several methods for migrating data from existing disk storage servers onto the DS6000 disk storage server. This includes migrating data from the ESS 2105 as well as from other disk storage servers to the new DS6000 disk storage server. The focus is on z/OS environments.
13.1 Define migration objectives in z/OS environments Data migration is an important activity that needs to be planned well to ensure the success of DS6000 implementation. Because today’s business environment does not allow you to interrupt data processing services, it is crucial to make the data migration onto the new storage servers as smooth as possible.
8-packs. The allocation of a volume happens in extents, or increments of the size of an IBM 3390-1 volume, that is, 1,113 cylinders. So a 3390-3 consists of exactly three extents from an extent pool. A 3390-9 with 10,017 cylinders comprises 9 extents out of an extent pool. There is no longer an affinity to an 8-pack for a logical volume like a 3390-3 or any other sized 3390 volume.
source storage subsystems to one or fewer target storage servers. A second migration layer might be to consolidate multiple source volumes to larger target volumes, which is also called volume folding. The latter is in general more difficult to do and requires data migration on a data set level. It usually needs a few brief service interruptions when moving the remaining data sets, which are usually open and active 24 hours every day.
volume are not available to the application in order to keep data consistency between when the DUMP is run and when the RESTORE command completes. The advantage of this method is that it creates a copy which offers fail-back capabilities. When source and target disk servers are not available at the same time for migration, this might be a feasible approach to migrate the data over to the new hardware. DFSMSdss is optimized to read and write data sequentially as fast as possible.
Figure 13-2 Piper for z/OS environment configuration

Currently this server is a Multiprise 3000, which can connect through ESCON channels only. This rules out using this approach to migrate data directly to the DS6800, which provides only FICON and Fibre Channel connectivity.
Most of these benefits also apply to migration efforts controlled by the customer when utilizing TDMF or FDRPAS in customer-managed systems. To summarize: Piper for z/OS is an IGS service offering which relieves the customer of the actual migration process and requires customer involvement only in the planning and preparation phase. The actual migration is transparent to the customer’s application hosts and Piper even manages a concurrent switch-over to the target volumes.
target volumes in the new disk storage server. It can restart immediately to connect to the new disk storage server after the XRC secondary volumes have been relabeled by a single XRC command per XRC session, XRECOVER. In addition to transparent data replication, the advantage for XRC is extreme scalability. Figure 13-3 shows that XRC can run either in existing system images or in a dedicated LPAR. Each image can host up to five System Data Movers (SDM).
Figure 13-4 Intermediate ESS 800 used to migrate data with PPRC over ESCON

To utilize the advantage of PPRC with concurrent data migration, although on a physical volume level, from older ESS models like the ESS F20, an ESS 800 (or, in less active configurations, an ESS 750) might be used during the migration period to bridge from a PPRC ESCON link storage server to the DS6000, which uses FCP links for Remote Mirror and Copy.
Figure 13-5 Metro Mirror or Global Copy from ESS 750 or ESS 800 to DS6800

Metro Mirror (synchronous PPRC) provides data consistency at any time once the volumes are in full DUPLEX state, although its synchronous approach imposes a slight impact on application write I/Os to the source storage subsystems from which we migrate the data.
Figure 13-6 Check with Global Copy whether all data was replicated to the new volume

This approach is not really practical, though. ICKDSF also allows you to query the status of a Global Copy primary volume and display the amount of data which is not yet replicated, as shown in Example 13-1.
Example 13-2 All data is replicated

PPRCOPY DDNAME(DD02) QUERY
ICK00700I DEVICE INFORMATION FOR 6102 IS CURRENTLY AS FOLLOWS:
          PHYSICAL DEVICE = 3390
          STORAGE CONTROLLER = 2105
          STORAGE CONTROL DESCRIPTOR = E8
          DEVICE DESCRIPTOR = 0A
          ADDITIONAL DEVICE INFORMATION = 4A000035
ICK04030I DEVICE IS A PEER TO PEER REMOTE COPY VOLUME

                  QUERY REMOTE COPY - VOLUME
 DEVICE  PATHS  LEVEL     STATE       PATH STATUS  SAID/DEST  STATUS
 ------  -----  --------  ----------  -----------  ---------  ------
  6102     1    PRIMARY   PENDING XD  ACTIVE       00A4 0020  13
*  PRIMARY....  5005076300C09621  2.4.01.0062                      *
*  SECONDARY.1  5005076300C09629                                   *
********************************************************************
ANTP0001I CQUERY COMMAND COMPLETED FOR DEVICE 6102. COMPLETION CODE: 00

A quick way is to open the data set which received the SYSTSPRT output from TSO in batch and exclude all data. Then an F COMPLETE ALL would display only a single line per volume, and you could quickly spot when a volume is not 100% complete.
The following software products and components support logical data migration:
DFSMS allocation management
Allocation management by CA-ALLOC
DFSMSdss
DFSMShsm™
FDR
System utilities like:
– IDCAMS with REPRO, EXPORT / IMPORT commands
– IEBCOPY to migrate Partitioned Data Sets (PDS) or Partitioned Data Sets Extended (PDSE)
– ICEGENER as part of DFSORT, which can handle sequential data but not VSAM data sets (this also applies to IEBGENER)
CA-Favor
CA-DISK or ASM2
Database utilities for data which is managed by a database management system
selection filters, to copy all data sets onto cartridges (ABACKUP) and then restore the aggregate. The aggregate is simply the group of data sets which have been selected through filtering, put back onto the new DS6000 storage servers. For more information, see the redbook DFSMShsm ABARS and Mainstar Solutions, SG24-5089. System utilities no longer play an important role, since DFSMSdss incorporated the capability to manage all data set types for copy and move operations in a very efficient way.
The figure at this point shows two SMS storage groups, SG1 on the source ESS E20s and SG2 on the target DS6000 servers (DS1 and DS2), together with the system commands used to disable new allocations on the source volumes:

RO *ALL,V SMS,VOL(DDDDDD),D,N
V SMS,VOL(KKKKKK),D,N
IGD010I VOLUME (KKKKKK,MCECEBC) STATUS IS NOW DISABLED,NEW

RO *ALL,V SMS,VOL(AAAAAA),D,N
V SMS,VOL(AAAAAA),D,N
IGD010I VOLUME (AAAAAA,MCECEBC) STATUS IS NOW DISABLED,NEW
3. Alter     - Alter a Storage Group
4. Volume    - Display, Define, Alter or Delete Volume Information

If List Option is chosen, Enter "/" to select option
   Respecify View Criteria
   Respecify Sort Criteria

Use ENTER to Perform Selection; Use HELP Command for Help; Use END Command to Exit.

The next panel which appears is shown in Example 13-7. Option 3 allows you to alter the SMS volume status. Here you select the volumes which ought to receive the change.
MCECEBC   ===> ENABLE   ===>          ===>          ===>
          ===>          ===>          ===>          ===>
MZBCVS2   ===> ENABLE   ===>          ===>          ===>
          ===>          ===>          ===>          ===>
DISALL, DISNEW, QUIALL, QUINEW )
* SYS GROUP = sysplex minus systems in the sysplex explicitly defined in the SCDS

In this panel we overtype the SMS volume status with the desired status change. The result is shown in the next panel, Example 13-9.

Example 13-9 Indicate SMS volume status change for all connected system images

 SMS VOLUME STATUS ALTER                                        Page 1 of 2
 Command ===>
 SCDS Name . .
===>
Use ENTER to Perform Selection; Use HELP Command for Help; Use END Command to Exit.

In this example, all volumes that were selected through the filtering in the previous panel no longer allow any new allocations. But this happens only after the updated SCDS is activated and copied into the Active Control Data Set (ACDS). One way to activate a new SMS configuration is through the ISMF Primary Menu panel with option 8.
//SYSIN    DD *
 COPY STORGRP(SG1)                            -
      DS(INC(**)                              -
      EXCLUDE(SYS1.VTOCIX.*,SYS1.VVDS.*))     -
      DELETE CATALOG SELECTMULTI(ANY) SPHERE  -
      ALLDATA(*) ALLX WAIT(00,00)             -
      ADMIN OPT(3) CANCELERROR
/*
//* ------------------------------------------------------------- ***
//AGAIN    EXEC PGM=IEBGENER
//SYSPRINT DD DUMMY
//SYSUT1   DD DSN=WB.MIGRATE.
EXTVTOC, which requires you to delete and rebuild the VTOC index using EXTINDEX in the REFORMAT command. Then perform the logical data set copy operation to the larger volumes. This allows you to use either DFSMSdss logical copy operations or the system-managed data approach. When a point is reached where no more data moves because the remaining data sets are in use all the time, some down time has to be scheduled to move the remaining data.
13.6 Summary of data migration

The route which an installation takes to migrate data to one or more DS6800 storage servers depends on its service level requirements, on whether application down time is acceptable, on the available tools, and on cost estimates within the given budget limits.
Part 5 Implementation and management in the open systems environment

In this part we discuss considerations for the DS6000 series when used in an open systems environment. The topics include:
Open systems support and software
Data migration in open systems

© Copyright IBM Corp. 2004. All rights reserved.
Chapter 14. Open systems support and software

In this chapter we describe how the DS6000 fits into your open systems environment. In particular, we discuss:
The extent of the open systems support
Where to find detailed and accurate information
Major changes from the ESS 2105
Software provided by IBM with the DS6000
IBM solutions and services that integrate DS6000 functionality

© Copyright IBM Corp. 2004. All rights reserved.
14.1 Open systems support

The scope of open systems support of the new DS6000 model is based on that of the ESS 2105, with some exceptions:
No parallel SCSI attachment support.
Some new operating systems were added.
Some legacy operating systems and the corresponding servers were removed.
Some legacy HBAs were removed.

New versions of operating systems, servers, file systems, host bus adapters, clustering products, SAN components, and application software are constantly announced in the market.
14.1.2 Where to look for updated and detailed information

This section provides a list of online resources where detailed and up-to-date information about supported configurations, recommended settings, device driver versions, and so on, can be found. Due to the high innovation rate in the IT industry, the support information is updated frequently. Therefore it is advisable to visit these resources regularly and check for updates.
QLogic Corporation

The QLogic Web site can be found at:
http://www.qlogic.com
QLogic maintains a page that lists all the HBAs, drivers, and firmware versions that are supported for attachment to IBM storage systems:
http://www.qlogic.com/support/oem_detail_all.asp?oemid=22

Emulex Corporation

The Emulex home page is:
http://www.emulex.com
They also have a page with content specific to IBM storage systems:
http://www.emulex.com/ts/docoem/framibm.
There is no support for IBM TotalStorage SAN Volume Controller Storage Software for Cisco MDS9000 (SVC4MDS) at initial GA.

Some legacy operating systems and operating system versions were dropped from the support matrix. These are either versions that have been withdrawn from marketing or support by their vendors, or versions that are no longer seen as significant enough to justify the testing effort necessary to support them. Major examples include:
– IBM AIX 4.
14.2 Subsystem Device Driver

To ensure maximum availability, most customers choose to connect their open systems hosts through more than one Fibre Channel path to their storage systems. With an intelligent SAN layout this protects you from failures of FC HBAs, SAN components, and host ports in the storage subsystem. Most operating systems, however, can’t deal natively with multiple paths to a single disk - they see the same disk multiple times.
When you click the Subsystem Device Driver downloads link, you will be presented a list of all operating systems for which SDD is available. Selecting one leads you to the download packages, the SDD User’s Guide, SC30-4096, and additional support information. The user’s guide contains all the information that is needed to install, configure, and use SDD for all supported operating systems.
The DS CLI allows you to invoke and manage logical storage configuration tasks and Copy Services functions from an open systems host through batch processes and scripts. It is part of the DS6000 Licensed Internal Code and is delivered with the Customer Software Packet. It is closely tied to the installed version of code and is therefore not available for download. The CLI code must be obtained from the same software bundle as the current microcode update.
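As an illustration only (the profile name, command file name, and storage image ID below are invented placeholders; check the DS CLI documentation delivered with your code level for the exact syntax), a set of CLI commands can be placed in a file and executed in script mode from a host-side batch job:

# create a small command file and run it through the DS CLI in script mode
cat > listvols.cli <<'EOF'
lsfbvol -dev IBM.1750-1300247
EOF
dscli -cfg ds6000.profile -script listvols.cli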
Figure 14-1 IBM TotalStorage Productivity Center

TPC is the integration point for storage and fabric management, and replication, as depicted in Figure 14-1.
Figure 14-2 MDM main panel

For more information about the IBM TotalStorage Multiple Device Manager refer to the redbook IBM TotalStorage Multiple Device Manager Usage Guide, SG24-7097. Updated support summaries, including specific software, hardware, and firmware levels supported, are maintained at:
http://www.ibm.
configuration. Devices that are not SMI-S compliant are not supported. The DM also interacts and provides SAN management functionality when the IBM Tivoli SAN Manager is installed. The DM health monitoring keeps you aware of hardware status changes in the discovered storage devices. You can drill down to the status of the hardware device, if applicable. This enables you to understand which components of a device are malfunctioning and causing an error status for the device.
TPC for Disk collects data from IBM or non-IBM networked storage devices that implement SMI-S. A performance collection task collects performance data from one or more storage groups of one device type. It has individual start and stop times, and a sampling frequency. The sampled data is stored in DB2 database tables. You can use TPC for Disk to set performance thresholds for each device type.
14.5.3 TPC for Replication

TPC for Replication, formerly known as MDM Replication Manager, provides a single point of control for all replication activities. Given a set of source volumes to be replicated, it will find the appropriate targets, perform all the configuration actions required, and ensure the source and target volume relationships are set up. TPC for Replication administers and configures the copy services functions of the managed storage systems and monitors the replication actions.
14.7 Enterprise Remote Copy Management Facility (eRCMF)

eRCMF is a multi-site disaster recovery solution, managing IBM TotalStorage Remote Mirror and Copy as well as FlashCopy functions, while ensuring data consistency across multiple machines. It is a scalable, flexible solution for the DS6000 with the following functions:
When a site failure occurs or may be occurring, eRCMF splits the two sites in a manner that allows the backup site to be used to restart the applications.
Chapter 15. Data migration in the open systems environment

In this chapter we discuss important concepts for the migration of data to the new DS6000:
Data migration considerations
Data migration and consolidation
Comparison of the different methods

© Copyright IBM Corp. 2004. All rights reserved.
15.1 Introduction

The term data migration has a very diverse scope. We use it here solely to describe the process of moving data from one type of storage to another, or to be exact, from one type of storage to a DS6000. In many cases, this process comprises not only the mere copying of the data, but also some kind of consolidation.
Note: When discussing disruptiveness, we don't consider any interruptions that may be caused by adding the new DS6000 LUNs to the host and later by removing the old storage. They vary too much from operating system to operating system, even from version to version. However, they have to be taken into account, too. Once it is decided which method (or methods) to use, the migration process starts with a very careful planning phase.
Strong involvement of the system administrator is necessary. Today the majority of data migration tasks is performed with one of the methods discussed in the following sections.

Basic copy commands

Using copy commands is the simplest way to move data from one storage system to another, for example:
copy, xcopy, drag and drop for Windows
cp, cpio for UNIX
These commands are available on every system supported for DS6000 attachment, but work only with data organized in file systems.
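For example, a simple UNIX sketch (the mount points /olddata and /newdata are illustrative) that copies a complete directory tree while preserving ownership, permissions, and modification times:

# copy the whole tree from the old storage to the new DS6000 file system
cd /olddata
find . -depth -print | cpio -pdmv /newdata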
Online copy and synchronization with rsync

rsync is an open source tool that is available for all major open system platforms, including Windows and Novell NetWare. Its original purpose is the remote mirroring of file systems with as few network requirements as possible. Once the initial copy is done, it keeps the mirror up to date by copying only the changes. Additionally, the incremental copies are compressed. rsync can also be used for local mirroring and therefore for data migration.
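A hedged sketch of how rsync might be used for such a migration (the paths are illustrative and the options should be verified against the installed rsync version):

# initial copy while the applications are still running
rsync -aH --delete /olddata/ /newdata/
# final, short synchronization after the applications have been stopped
rsync -aH --delete /olddata/ /newdata/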
Usually the process is to set up a mirror of the data on the old disks to the new LUNs, wait until it is synchronized and split it at the cut over time. Some LVMs provide commands that automate this process. The biggest advantage of using the LVM for data migration is that the process can be totally non-disruptive, as long as the operating system allows you to add and remove LUNs dynamically. Due to the virtualization nature of LVM, it also allows for all kinds of consolidation.
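On AIX, for instance, the mirror-and-split approach could look like the following sketch (the volume group and hdisk names are invented; other platforms provide equivalent LVM commands):

# add the new DS6000 LUNs to the volume group and mirror the data onto them
extendvg datavg hdisk10 hdisk11
mirrorvg datavg hdisk10 hdisk11
# once the copies are synchronized, remove the mirror copy from the old disks
unmirrorvg datavg hdisk2 hdisk3
reducevg datavg hdisk2 hdisk3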
Figure 15-5 Migration using backup and restore

The major disadvantage is again the disruptiveness. The applications that write to the data to be migrated must be stopped for the whole migration process. Backup and restore to and from tape usually takes longer than direct copy from disk to disk.
Metro Mirror and Global Copy From a local data migration point of view both methods are on par with each other, with Global Copy having a smaller impact on the subsystem performance and Metro Mirror requiring almost no time for the final synchronization phase. It is advisable to use Global Copy instead of Metro Mirror, if the source system is already at its performance limit even without remote mirroring. Figure 15-6 outlines the migration steps.
15.2.3 IBM Piper migration

Piper is a hardware and software solution to move data between disk systems while production is ongoing. It is used in conjunction with IBM migration services. Piper is available for mainframe and open systems environments. Here we discuss the open systems version only. For mainframe environments see Chapter 13, “Data Migration in zSeries environments” on page 251.
15.2.4 Other migration applications

There are a number of applications available from other vendors that can assist in data migration. We don’t discuss them here in detail. Some examples include:
Softek Data Replicator for Open
NSI Double-Take
XoSoft WANSync

There also are storage virtualization products which can be used for data migration in a similar manner to the Piper tool. They are installed on a server which forms a virtualization engine that resides in the data path.
Appendix A. Operating systems specifics

In this appendix, we describe the particular issues of some operating systems with respect to the attachment to a DS6000. The following subjects are covered:
Planning considerations
Common tools
IBM AIX
Linux on various platforms
Microsoft Windows
HP OpenVMS

© Copyright IBM Corp. 2004. All rights reserved.
General considerations

In this section we cover some topics that are not specific to a single operating system. This includes available documentation, some planning considerations, and common tools.

The DS6000 Host Systems Attachment Guide

The DS6000 Host Systems Attachment Guide, SC26-7680, provides instructions to prepare a host system for DS6000 attachment.
the data, even if this pool spans several ranks. If possible, the extents for one logical volume are taken from the same rank. To get higher throughput values than a single array can deliver, it is necessary to stripe the data across several arrays. This can only be achieved through striping on the host level. To achieve maximum granularity and control for data placement, you will have to create an extent pool for every single rank.
The output reports the following: The %tm_act column indicates the percentage of the measured interval time that the device was busy. The Kbps column shows the average data rate, read and write data combined, of this device. The tps column shows the transactions per second. Note that an I/O transaction can have a variable transfer size. This field may also appear higher than would normally be expected for a single physical disk device.
Example A-3 SAR sample output

# sar -u 2 5

AIX aixtest 3 4 001750154C00    2/5/03

17:58:15    %usr    %sys    %wio   %idle
17:58:17      43       9       1      46
17:58:19      35      17       3      45
17:58:21      36      22      20      23
17:58:23      21      17       0      63
17:58:25      85      12       3       0

Average       44      15       5      35

As a general rule of thumb, a server with over 40 percent waiting on I/O is spending too much time waiting for I/O. However, you also have to take the type of workload into account.
Other publications

Apart from the DS6000 Host Systems Attachment Guide, GC26-7680, there are two redbooks that cover pSeries storage attachment:
Practical Guide for SAN with pSeries, SG24-6050, covers all aspects of connecting an AIX host to SAN-attached storage. However, it is not quite up-to-date; the last release was in 2002.
Fault Tolerant Storage - Multipathing and Clustering Solutions for Open Systems for the IBM ESS, SG24-6295, focuses mainly on high availability and covers SDD and HACMP topics.
Device Specific.(ZB)........C2D3.91A1
Device Specific.(YL)........U1.13-P1-I1/Q1

You can also print the WWPN of an HBA directly by running:
lscfg -vl fcs# | grep Network
The # stands for the instance of each FC HBA you want to query.

Managing multiple paths

It is a common and recommended practice to assign a DS6000 volume to the host system through more than one path, to ensure availability in case of a SAN component failure and to achieve higher I/O bandwidth.
==========================================================================
Path#   Adapter/Hard Disk   State   Mode     Select   Errors
    0   fscsi2/hdisk17      OPEN    NORMAL        0        0
    1   fscsi2/hdisk19      OPEN    NORMAL    27134        0
    2   fscsi3/hdisk21      OPEN    NORMAL        0        0
    3   fscsi3/hdisk23      OPEN    NORMAL    27352        0

DEV#:   1  DEVICE NAME: vpath4  TYPE: 1750  POLICY: Optimized
SERIAL: 20522873
==========================================================================
Path#   Adapter/Hard Disk   State   Mode     Select   Errors
    0   fscsi2/hdisk18      CLOSE   NORMAL    25734        0
    1   fscsi2/hdi
The management of MPIO devices is described in the “Managing MPIO-Capable Devices” section of the System Management Guide: Operating System and Devices for AIX 5L:
http://publib16.boulder.ibm.com/pseries/en_US/aixbman/baseadmn/manage_mpio.htm

Restriction: A point worth considering when deciding between SDD and MPIO is that the IBM TotalStorage SAN Volume Controller does not support MPIO at this time. For updated information refer to:
http://www-03.ibm.com/servers/storage/support/software/sanvc/installing.
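When MPIO is used, the state of the individual paths to a disk can be checked with the standard AIX path commands (the hdisk name is illustrative):

# list all paths and their state for one MPIO-capable disk
lspath -l hdisk4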
LVM configuration

In AIX all storage is managed by the AIX Logical Volume Manager (LVM). It virtualizes physical disks to be able to dynamically create, delete, resize, and move logical volumes for application use. To AIX our DS6000 logical volumes appear as physical SCSI disks. There are some considerations to take into account when configuring LVM.
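As a sketch only (the volume group, logical volume, and hdisk names are made up), an inter-disk allocation policy of maximum spreads a logical volume across several DS6000 LUNs, which is one way to implement the host-level striping mentioned earlier:

# create a logical volume of 256 logical partitions spread across four DS6000 hdisks
mklv -y datalv -e x datavg 256 hdisk4 hdisk5 hdisk6 hdisk7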
Tip: If the number of async I/O (AIO) requests is high, then the recommendation is to increase maxservers to approximately the number of simultaneous I/Os there might be. In most cases, it is better to leave the minservers parameter to the default value since the AIO kernel extension will generate additional servers if needed.
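A hedged example of how the maxservers attribute might be raised (the value 200 is purely illustrative and has to be sized for the actual workload):

# raise the maximum number of AIO servers; -P makes the change effective at the next reboot
chdev -l aio0 -a maxservers=200 -P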
0611 Direct Attach 2 Gigabit Fibre Channel PCI
0625 Direct Attach 2 Gigabit Fibre Channel PCI-X

It is also possible for the AIX partition to have its storage virtualized, whereby a partition running OS/400 hosts the AIX partition's storage requirements.
#MBs  #opns  #rds  #wrs  file              volume:inode
------------------------------------------------------------------------
 0.3      1    70     0  unix              :34096
 0.0      1     2     0  ksh.cat           :46237
 0.0      1     2     0  cmdtrace.cat      :45847
 0.0      1     2     0  hosts             :516
 0.0      7     2     0  SWservAt          :594
 0.0      7     2     0  SWservAt.
seek dist (%tot blks):  init 27.84031, avg 13.97180 min 0.00004 max 57.54421 sdev 11.78066
time to next req(msec): avg 89.470 min 0.003 max 949.025 sdev 174.947
throughput:             81.8 KB/sec
utilization:            0.87
...

Linux

Linux is an open source UNIX-like kernel, originally created by Linus Torvalds. The term “Linux” is often used to mean the whole operating system, GNU/Linux.
Existing reference material

There is a lot of information available that helps you set up your Linux server to attach it to a DS6000 storage subsystem.
The zSeries connectivity support page lists all supported storage devices and SAN components that can be attached to a zSeries server. There is an extra section for FCP attachment: http://www.ibm.com/servers/eserver/zseries/connectivity/#fcp The whitepaper ESS Attachment to United Linux 1 (IA-32) is available at: http://www.ibm.com/support/docview.
Table A-2 Minor numbers, partitions and special device files

Major number   Minor number   Special device file   Partition
8              0              /dev/sda              all of 1st disk
8              1              /dev/sda1             1st partition of 1st disk
...
8              15             /dev/sda15            15th partition of 1st disk
8              16             /dev/sdb              all of 2nd disk
8              17             /dev/sdb1             1st partition of 2nd disk
...
8              31             /dev/sdb15            15th partition of 2nd disk
8              32             /dev/sdc              all of 3rd disk
...
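This mapping can be verified on a running system; the output below is abbreviated and the owner and timestamp columns are omitted:

# block devices show their major and minor numbers where the file size would normally appear
ls -l /dev/sda /dev/sda1 /dev/sdb
# brw-rw----  1 root  disk  8,  0  ...  /dev/sda
# brw-rw----  1 root  disk  8,  1  ...  /dev/sda1
# brw-rw----  1 root  disk  8, 16  ...  /dev/sdb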
commands, all special device files for SCSI disks have the same permissions. If an application requires different settings for certain disks, you have to correct them afterwards.
RedHat Enterprise Linux (RH-EL) multiple LUN support RH-EL by default is not configured for multiple LUN support. It will only discover SCSI disks addressed as LUN 0. The DS6000 provides the volumes to the host with a fixed Fibre Channel address and varying LUN. Therefore RH-EL 3 will see only one DS6000 volume (LUN 0), even if more are assigned to it. Multiple LUN support can be added with an option to the SCSI midlayer Kernel module scsi_mod.
scsi_hostadapter3 qla2300
options scsi_mod max_scsi_luns=128

Adding FC disks dynamically

The commonly used way to discover newly attached DS6000 volumes is to unload and reload the Fibre Channel HBA driver. However, this action is disruptive to all applications that use Fibre Channel attached disks on this particular host. A Linux system can recognize newly attached LUNs without unloading the FC HBA driver. The procedure slightly differs depending on the installed FC HBAs.
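As one hedged example for QLogic qla2300 driver versions of that era (the adapter instance and the host/channel/target/LUN numbers are illustrative; other HBAs and driver levels use different interfaces), a rescan can be triggered through the /proc file system:

# ask the qla2300 driver instance 2 to rescan the fabric for new devices
echo "scsi-qlascan" > /proc/scsi/qla2300/2
# then register the new device with the SCSI midlayer (host 2, channel 0, target 0, LUN 3)
echo "scsi add-single-device 2 0 0 3" > /proc/scsi/scsi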
/dev/sdc - 2nd DS6000 volume, seen by HBA 0
/dev/sdd - 1st DS6000 volume, seen by HBA 1
/dev/sde - 2nd DS6000 volume, seen by HBA 1
/dev/sdf - new DS6000 volume, seen by HBA 0
/dev/sdg - new DS6000 volume, seen by HBA 1

The mapping of special device files is now different than it would have been if all three DS6000 volumes had been already present when the HBA driver was loaded. In other words: if the system is now restarted, the device ordering will change to what is shown in Example A-15.
More information on running Linux in an iSeries partition can be found in the iSeries Information Center at:
http://publib.boulder.ibm.com/iseries/v5r2/ic2924/index.htm
For running Linux in an i5 partition, check the i5 Information Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v1r2s/en_US/info/iphbi/iphbi.pdf

Troubleshooting and monitoring

The /proc pseudo file system

The /proc pseudo file system is maintained by the Linux kernel and provides dynamic information about the system.
Request Queue count= 128, Response Queue count= 512
...
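For example, the SCSI devices the kernel currently knows about, including all DS6000 LUNs, can be listed at any time:

# list all SCSI devices known to the kernel
cat /proc/scsi/scsi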
DS6000 supports FC attachment to Microsoft Windows 2000/2003 servers. For details regarding operating system versions and HBA types see the DS6000 Interoperability Matrix, available at: http://www.ibm.com/servers/storage/disk/ds6000/interop.html The support includes cluster service and acting as a boot device. Booting is supported currently with host adapters QLA23xx (32 bit or 64 bit) and LP9xxx (32 bit only).
– If you reboot a system with adapters while the primary path is in a failed state, you must manually disable the BIOS on the first adapter and manually enable the BIOS on the second adapter. You cannot enable the BIOS for both adapters at the same time. If the BIOS for both adapters is enabled at the same time and there is a path failure on the primary adapter, the system will stop with an INACCESSIBLE_BOOT_DEVICE error upon reboot.
information and manage LUNs. See the IBM TotalStorage DS Open Application Programming Interface Reference, GC35-0493, for information on how to install and configure VDS support. The following sections present examples of VDS integration with advanced functions of the DS6000 storage systems that became possible with the implementation of the DS CIM agent.
Important: The DS6000 FC ports used by OpenVMS hosts must not be accessed by any other operating system, not even accidentally. The OpenVMS hosts have to be defined for access to these ports only, and it must be ensured that no foreign HBA (without definition as an OpenVMS host) is seen by these ports. Conversely, an OpenVMS host must have access only to the DS6000 ports configured for OpenVMS compatibility. You must dedicate storage ports for only the OpenVMS host type.
If the volume is planned for MSCP serving, then the UDID range is limited to 0–9999 (by operating system restrictions in the MSCP code). OpenVMS system administrators tend to use elaborate schemes for assigning UDIDs, coding several hints about physical configuration into this logical ID, for instance odd/even values or reserved ranges to distinguish between multiple data centers, storage systems, or disk groups.
However, there is no forced error indicator in the SCSI architecture, and the revector operation is nonatomic. As a substitute, the OpenVMS shadow driver exploits the SCSI commands READ LONG (READL) and WRITE LONG (WRITEL), optionally supported by some SCSI devices. These I/O functions allow data blocks to be read and written together with their disk device error correction code (ECC).
Appendix B. Using the DS6000 with iSeries

In this appendix, the following topics are discussed:
Supported environment
Logical volume sizes
Protected versus unprotected volumes
Multipath
Adding units to OS/400 configuration
Sizing guidelines
Migration
Linux and AIX support

© Copyright IBM Corp. 2004. All rights reserved.
Supported environment

Not all hardware and software combinations for OS/400 support the DS6000. This section describes the hardware and software prerequisites for attaching the DS6000.

Hardware

The DS6000 is supported on all iSeries models which support Fibre Channel attachment for external storage. Fibre Channel was supported on all models 8xx onwards. AS/400 models 7xx and prior only supported SCSI attachment for external storage, so they cannot support the DS6000.
Model type                   OS/400 device   Number of      Extents   Unusable       Usable
Unprotected   Protected      size (GB)       LBAs                     space (GiB)    space %
1750-A85      1750-A05       35.1             68,681,728      33      0.25           99.24
1750-A84      1750-A04       70.5            137,822,208      66      0.28           99.57
1750-A86      1750-A06       141.1           275,644,416     132      0.56           99.57
1750-A87      1750-A07       282.2           551,288,832     263      0.13           99.95
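As a minimal arithmetic sketch based on the 1750-A87/A07 row above (512-byte LBAs and 1 GiB extents), the unusable space is simply the part of the last allocated extent that the reported LBA count does not fill:

# capacity presented to OS/400 in GiB, and the unused remainder of the 263 allocated extents
lbas=551288832
echo "scale=2; $lbas * 512 / 1073741824" | bc          # 262.87 GiB presented
echo "scale=2; 263 - $lbas * 512 / 1073741824" | bc    # .13 GiB unusable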
Adding volumes to iSeries configuration

Once the logical volumes have been created and assigned to the host, they will appear as non-configured units to OS/400. This may be some time after being created on the DS6000. At this stage, they are used in exactly the same way as non-configured internal units. There is nothing particular to external logical volumes as far as OS/400 is concerned.
Work with Disk Units

Select one of the following:
  1. Display disk configuration
  2. Work with disk configuration
  3. Work with disk unit recovery

Selection
  2

F3=Exit   F12=Cancel

Figure B-2 Work with Disk Units menu

4. When adding disk units to a configuration, you can add them as empty units by selecting Option 2, or you can choose to allow OS/400 to balance the data across all the disk units. Normally, we recommend balancing the data.
Specify ASPs to Add Units to

Specify the ASP to add each unit to.

Specify   Serial                                    Resource
  ASP     Number        Type   Model   Capacity     Name
   1      21-662C5      4326   050     35165        DD124
          21-54782      4326   050     35165        DD136
          75-1118707    1750   A85     35165        DD006

F3=Exit   F5=Refresh   F11=Display disk configuration capacity   F12=Cancel

Figure B-4 Specify ASPs to Add Units to

6. The Confirm Add Units panel will appear for review as shown in Figure B-5. If everything is correct, press Enter to continue.
Figure B-6 iSeries Navigator initial panel

2. Expand the iSeries to which you wish to add the logical volume and sign on to that server as shown in Figure B-7.

Figure B-7 iSeries Navigator Signon to iSeries panel

3. Expand Configuration and Service, Hardware, and Disk Units as shown in Figure B-8 on page 336.
Figure B-8 iSeries Navigator Disk Units

4. You will be asked to sign on to SST as shown in Figure B-9. Enter your Service tools ID and password and press OK.

Figure B-9 SST Signon

5. Right-click Disk Pools and select New Disk Pool as shown in Figure B-10 on page 337.
Figure B-10 Create a new disk pool

6. The New Disk Pool wizard appears as shown in Figure B-11. Click Next.

Figure B-11 New disk pool - welcome
7. On the New Disk Pool dialog shown in Figure B-12, select Primary from the pull-down for the Type of disk pool, give the new disk pool a name, and leave Database to default to Generated by the system. Ensure the disk protection method matches the type of logical volume you are adding. If you leave it unchecked, you will see all available disks. Select OK to continue.

Figure B-12 Defining a new disk pool

8.
Figure B-14 Add disks to Disk Pool

10. A list of non-configured units similar to that shown in Figure B-15 will appear. Highlight the disks you want to add to the disk pool and click Add.

Figure B-15 Choose the disks to add to the Disk Pool

11. A confirmation screen appears as shown in Figure B-16 on page 340. Click Next to continue.
Figure B-16 Confirm disks to be added to Disk Pool

12. A summary of the Disk Pool configuration similar to Figure B-17 appears. Click Finish to add the disks to the Disk Pool.

Figure B-17 New Disk Pool Summary

13. Take note of and respond to any message dialogs which appear. After taking action on any messages, the New Disk Pool Status panel shown in Figure B-18 on page 341 will appear showing progress. This step may take some time, depending on the number and size of the logical units being added.
Figure B-18 New Disk Pool Status

14. When complete, click OK on the information panel shown in Figure B-19.

Figure B-19 Disks added successfully to Disk Pool

15. The new Disk Pool can be seen on iSeries Navigator Disk Pools in Figure B-20.

Figure B-20 New Disk Pool shown on iSeries Navigator

16. To see the logical volume, as shown in Figure B-21, expand Configuration and Service, Hardware, Disk Pools and click the disk pool you just created.
Figure B-21 New logical volume shown on iSeries Navigator

Multipath

Multipath support was added for external disks in V5R3 of i5/OS (also known as OS/400 V5R3). Unlike other platforms which have a specific software component, such as Subsystem Device Driver (SDD), multipath is part of the base operating system. At V5R3, up to eight connections can be defined from multiple I/O adapters on an iSeries server to a single logical volume in the DS6000.
Prior to multipath being available, some customers used OS/400 mirroring to two sets of disks, either in the same or different external disk subsystems. This provided implicit dual-path as long as the mirrored copy was connected to a different IOP/IOA, BUS, or I/O tower. However, this also required two copies of data. Since disk level protection is already provided by RAID-5 or RAID-10 in the external disk subsystem, this was sometimes seen as unnecessary.
Figure B-23 Multipath removes single points of failure

Unlike other systems, which may only support two paths (dual-path), OS/400 V5R3 supports up to eight paths to the same logical volumes. As a minimum, you should use two, although some small performance benefits may be experienced with more. However, since OS/400 multipath spreads I/O across all available paths in a round-robin manner, there is no load balancing, only load sharing.
Figure B-24 Example of multipath with iSeries

Figure B-24 shows an example where 48 logical volumes are configured in the DS6000. The first 24 of these are assigned via a host adapter in the top controller card in the DS6000 to a Fibre Channel I/O adapter in the first iSeries I/O tower or rack.
Specify ASPs to Add Units to

Specify the ASP to add each unit to.

Specify   Serial                                    Resource
  ASP     Number        Type   Model   Capacity     Name
   1      21-662C5      4326   050     35165        DD124
          21-54782      4326   050     35165        DD136
          75-1118707    1750   A85     35165        DMP135

F3=Exit   F5=Refresh   F11=Display disk configuration capacity   F12=Cancel

Figure B-25 Adding multipath volumes to an ASP

Note: For multipath volumes, only one path is shown. In order to see the additional paths, see “Managing multipath volumes using iSeries Navigator” on page 349.

5.
When you get to the point where you will select the volumes to be added, you will see a panel similar to that shown in Figure B-27. Multipath volumes appear as DMPxxx. Highlight the disks you want to add to the disk pool and click Add.

Figure B-27 Adding a multipath volume

Note: For multipath volumes, only one path is shown. In order to see the additional paths, see “Managing multipath volumes using iSeries Navigator” on page 349.
When you have completed these steps, the new Disk Pool can be seen on iSeries Navigator Disk Pools in Figure B-28.

Figure B-28 New Disk Pool shown on iSeries Navigator

To see the logical volume, as shown in Figure B-29, expand Configuration and Service, Hardware, Disk Pools and click the disk pool you just created.
Managing multipath volumes using iSeries Navigator

All units are initially created with a prefix of DD. As soon as the system detects that there is more than one path to a specific logical unit, it will automatically assign a unique resource name with a prefix of DMP for both the initial path and any additional paths. When using the standard disk panels in iSeries Navigator, only a single (the initial) path is shown. The following steps show how to see the additional paths.
To see the other connections to a logical unit, right click the unit and select Properties, as shown in Figure B-31 on page 350.
You will then see the General Properties tab for the selected unit, as in Figure B-32. The first path is shown as Device 1 in the box labelled Storage.

Figure B-32 Multipath logical unit properties
To see the other paths to this unit, click the Connections tab, as shown in Figure B-33, where you can see the other seven connections for this logical unit.

Figure B-33 Multipath connections

Multipath rules for multiple iSeries systems or partitions

When you use multipath disk units, you must consider the implications of moving IOPs and multipath connections between nodes.
Disk unit connections might be missing for a variety of reasons, but especially if one of the preceding rules has been violated. If a connection for a multipath disk unit in any disk pool is found to be missing during an IPL or vary on, a message is sent to the QSYSOPR message queue. If a connection is missing, and you confirm that the connection has been removed, you can update Hardware Service Manager (HSM) to remove that resource.
Performance Tools Reports
Workload description
Workload statistics
Workload characteristics
Other requirements: HA, DR, etc.
Number of iSeries Fibre Channel adapters

The most important factor to take into consideration when calculating the number of Fibre Channel adapters in the iSeries is the throughput capacity of the adapter and IOP combination. Since this guideline is based only on iSeries adapters and the Access Density (AD) of the iSeries workload, it doesn't change when using the DS6000. (The same guidelines are valid for the ESS 800.)

Note: Access Density is the average I/O per second divided by the capacity of occupied disk space, that is, I/O per second per GB.
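For example (the workload numbers are invented for illustration), a system driving 1,500 I/O per second against 600 GB of occupied disk space has an access density of 2.5:

# access density = I/O per second divided by occupied capacity in GB
echo "scale=1; 1500 / 600" | bc    # prints 2.5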
When considering the number of ranks, take into account the maximum disk operations per second per rank as shown in Table B-3. These are measured at 100% DDM Utilization with no cache benefit and with the average I/O being 4KB. Larger transfer sizes will reduce the number of operations per second. Based on these values you can calculate how many host I/O per second each rank can handle at the recommended utilization of 40%. This is shown for workload read-write ratios of 70% read and 50% read in Table B-3.
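A simplified sketch of that calculation follows (the per-rank figure of 1,000 disk operations per second is an invented placeholder, and a RAID-5 write penalty of 4 with no cache benefit is assumed; use the measured values in Table B-3 for real planning):

# host I/O per second one rank could sustain at 40% utilization for a 70% read workload
echo "(1000 * 0.40) / (0.70 * 1 + 0.30 * 4)" | bc    # roughly 210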
Based on available measurements and experiences with the ESS 800, we recommend that you plan no more than four iSeries I/O adapters to one host port in the DS6000. For a current list of switches supported under OS/400, refer to the iSeries Storage Web site at:
http://www-1.ibm.com/servers/eserver/iseries/storage/storage_hw.html

Migration

For many iSeries customers, migrating to the DS6000 will be best achieved using traditional Save/Restore techniques.
Figure B-35 Using Metro Mirror to migrate from ESS to the DS6000

The same setup can also be used if the ESS LUNs are in an IASP, although the iSeries would not require a complete shutdown, since varying off the IASP in the ESS, unassigning the ESS LUNs, assigning the DS6000 LUNs, and varying on the IASP would have the same effect. Clearly, you must also take into account the licensing implications for Metro Mirror and Global Copy.
You can then use the OS/400 command STRASPBAL TYPE(*ENDALC) to mark the units to be removed from the configuration, as shown in Figure B-36. This can reduce the down time associated with removing a disk unit. This will keep new allocations away from the marked units.

Start ASP Balance (STRASPBAL)

Type choices, press Enter.

Balance type . . . . . . . . . . > *ENDALC      *CAPACITY, *USAGE, *HSM...
Storage unit . . . . . . . . . .
               + for more values

F3=Exit   F4=Prompt   F5=Refresh   F24=More keys
Copy Services for iSeries

Due to OS/400 having single level storage, it is not possible to copy some disk units without copying them all, unless specific steps are taken.

Attention: You should not assume that Copy Services with iSeries works the same as with other open systems.

FlashCopy

When FlashCopy was first available for use with OS/400, it was necessary to copy the entire storage space, including the Load Source Unit (LSU).
storage) being copied, only the application resides in an IASP and in the event of a disaster, the target copy is attached to the DR server. Additional considerations must be taken into account such as maintaining user profiles on both systems, but this is no different from using other availability functions such as switched disk between two local iSeries Servers on a High Speed Link (HSL) and Cross Site Mirroring (XSM) to a remote iSeries.
For more information on running AIX in an i5 partition, refer to the i5 Information Center at:
http://publib.boulder.ibm.com/infocenter/iseries/v1r2s/en_US/index.htm?info/iphat/iphatlparkickoff.htm

Note: AIX will not run in a partition on earlier 8xx and prior iSeries systems.

Linux on IBM iSeries

Since OS/400 V5R1, it has been possible to run Linux in an iSeries partition. On iSeries models 270 and 8xx, the primary partition must run OS/400 V5R1 or higher and Linux is run in a secondary partition.
Appendix C. Service and support offerings

This appendix provides information about the service offerings which are currently available for the new DS6000 and DS8000 series. It includes a brief description of each offering and where you can find more information on the Web.
IBM Web sites for service offerings

IBM Global Services (IGS) and the IBM Systems Group can offer comprehensive assistance, including planning and design as well as implementation and migration support services. For more information on all of the following service offerings, contact your IBM representative or visit the following Web sites.

The IBM Global Services Web site can be found at:
http://www.ibm.com/services/us/index.wss/home
The IBM Systems Group Web site can be found at:
http://www.ibm.
The IBM Piper hardware assisted migration in the zSeries environment is described in this redbook in “Data migration with Piper for z/OS” on page 255. Additional information about this offering is on the following Web site: http://www.ibm.com/servers/storage/services/featured/hardware_assist.html For more information about IBM Migration Services for eServer zSeries data visit the following Web site: http://www.ibm.com/services/us/index.
It simplifies the disaster recovery implementation and concept. Once eRCMF is configured in the customer environment, it will monitor the PPRC states of all specified LUNs/volumes. Visit the following Web site for the latest information: http://www.ibm.com/services/us/index.
Figure 15-9 Example of the Supported Product List (SPL) from the IBM Support Line
Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook.

IBM Redbooks

For information on ordering these publications, see “How to get IBM Redbooks” on page 371. Note that some of the documents referenced here may be available in softcopy only.
Device Support Facilities: User’s Guide and Reference, GC35-0033
z/OS Advanced Copy Services, SC35-0248

Online resources

These Web sites and URLs are also relevant as further information sources:
Documentation for DS6800:
http://www.ibm.com/servers/storage/support/disk/ds6800/
SDD and Host Attachment scripts:
http://www.ibm.com/support/
IBM Disk Storage Feature Activation (DSFA) Web site:
http://www.ibm.com/storage/dsfa
The PSP information can be found at:
http://www-1.ibm.
CNT:
http://www.cnt.com/ibm/
Nortel:
http://www.nortelnetworks.com/
ADVA:
http://www.advaoptical.com/

How to get IBM Redbooks

You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks

Help from IBM

IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.
Back cover

The IBM TotalStorage DS6000 Series: Concepts and Architecture

Enterprise-class storage functions in a compact and modular design
On demand scalability and multi-platform connectivity
Enhanced configuration flexibility with virtualization

This IBM Redbook describes the IBM TotalStorage DS6000 storage server series, its architecture, its logical design, hardware design and components, advanced functions, performance features, and specific characteristics.