Computer Drive User Manual
Table Of Contents
- Front cover
- Contents
- Notices
- Preface
- Summary of changes
- Part 1 Overview
- Chapter 1. Introduction
- Chapter 2. Copy Services architecture
- Part 2 Interfaces
- Chapter 3. DS Storage Manager
- Chapter 4. DS Command-Line Interface
- Chapter 5. System z interfaces
- Part 3 FlashCopy
- Chapter 6. FlashCopy overview
- Chapter 7. FlashCopy options
- 7.1 Multiple relationship FlashCopy
- 7.2 Consistency Group FlashCopy
- 7.3 FlashCopy target as a Metro Mirror or Global Copy primary
- 7.4 Incremental FlashCopy - refresh target volume
- 7.5 Remote FlashCopy
- 7.6 Persistent FlashCopy
- 7.7 Data set FlashCopy
- 7.8 Reverse restore
- 7.9 Fast reverse restore
- 7.10 Options and interfaces
- Chapter 8. FlashCopy ordering and activation
- Chapter 9. FlashCopy interfaces
- Chapter 10. FlashCopy performance
- Chapter 11. FlashCopy examples
- Part 4 Metro Mirror
- Chapter 12. Metro Mirror overview
- Chapter 13. Metro Mirror options and configuration
- Chapter 14. Metro Mirror interfaces
- 14.1 Metro Mirror interfaces - overview
- 14.2 TSO commands for Metro Mirror management
- 14.3 ICKDSF
- 14.3.1 Metro Mirror management with ICKDSF
- 14.3.2 Display the Fibre Channel Connection Information Table
- 14.3.3 PPRCOPY DELPAIR
- 14.3.4 PPRCOPY DELPATH
- 14.3.5 PPRCOPY ESTPATH
- 14.3.6 PPRCOPY ESTPAIR
- 14.3.7 PPRCOPY FREEZE
- 14.3.8 PPRCOPY QUERY
- 14.3.9 PPRCOPY RECOVER
- 14.3.10 PPRCOPY SUSPEND
- 14.3.11 PPRCOPY RUN
- 14.3.12 Refreshing the VTOC
- 14.4 DS Command-Line Interface
- 14.5 DS CLI command examples
- 14.6 DS Storage Manager GUI
- 14.7 ANTRQST API
- Chapter 15. Metro Mirror performance and scalability
- Chapter 16. Metro Mirror examples
- Part 5 Global Copy
- Chapter 17. Global Copy overview
- Chapter 18. Global Copy options and configuration
- Chapter 19. Global Copy performance and scalability
- Chapter 20. Global Copy interfaces
- Chapter 21. Global Copy examples
- Part 6 Global Mirror
- Chapter 22. Global Mirror overview
- Chapter 23. Global Mirror options and configuration
- 23.1 Terminology used in Global Mirror environments
- 23.2 Create a Global Mirror environment
- 23.3 Modify a Global Mirror session
- 23.4 Remove a Global Mirror environment
- 23.5 Global Mirror with multiple storage disk subsystems
- 23.6 Connectivity between local and remote site
- 23.7 Recovery scenario after primary site failure
- 23.7.1 Normal Global Mirror operation
- 23.7.2 Primary site failure
- 23.7.3 Failover B volumes
- 23.7.4 Check for valid Consistency Group state
- 23.7.5 Set consistent data on B volumes
- 23.7.6 Reestablish the FlashCopy relationship between B and C volumes
- 23.7.7 Restart the application at the remote site
- 23.7.8 Prepare to switch back to the local site
- 23.7.9 Return to local site
- 23.7.10 Conclusions
- Chapter 24. Global Mirror interfaces
- 24.1 Global Mirror interfaces - overview
- 24.2 Different interfaces for the same function
- 24.3 Global Mirror management using TSO commands
- 24.3.1 Establish a Global Mirror environment
- 24.3.2 Define paths
- 24.3.3 Establish Global Copy volume pairs
- 24.3.4 Establish FlashCopy relationships for Global Mirror
- 24.3.5 Define a Global Mirror session
- 24.3.6 Populate a Global Mirror session with volumes
- 24.3.7 Start a Global Mirror session
- 24.3.8 Query a Global Mirror session
- 24.4 DS CLI to manage Global Mirror volumes in z/OS
- 24.5 Global Mirror management using ICKDSF
- 24.5.1 Establish a Global Mirror environment
- 24.5.2 Define paths
- 24.5.3 Establish Global Copy pairs
- 24.5.4 Establish FlashCopy relationships
- 24.5.5 Define a Global Mirror session
- 24.5.6 Add volumes to a session
- 24.5.7 Start Global Mirror
- 24.5.8 Query an active Global Mirror session
- 24.5.9 Remove a Global Mirror environment
- 24.5.10 Stop the Global Mirror session
- 24.5.11 Remove volumes from Global Mirror
- 24.5.12 Un-define the Global Mirror session
- 24.5.13 Withdraw FlashCopy relationships
- 24.5.14 Delete Global Copy pairs
- 24.5.15 Remove all paths
- 24.6 ANTRQST macro
- 24.7 DS Storage Manager GUI
- Chapter 25. Global Mirror performance and scalability
- Chapter 26. Global Mirror examples
- 26.1 Global Mirror examples - configuration
- 26.2 Global Mirror query examples with TSO
- 26.3 Set up the Global Mirror environment using TSO
- 26.4 Primary site failure and recovery management with TSO
- 26.4.1 Primary site failure
- 26.4.2 Stop a Global Mirror session
- 26.4.3 Failover from B to A volumes
- 26.4.4 Check Global Mirror FlashCopy status between B and C volumes
- 26.4.5 Create a data consistent set of B volumes
- 26.4.6 Optionally create a data consistent set of D volumes
- 26.4.7 Create a data consistent set of C volumes
- 26.4.8 Prepare to return to the local site
- 26.4.9 Replicate the changes from B to A
- 26.4.10 Return to the local site and resume Global Mirror
- 26.5 Remove Global Mirror environment using TSO
- 26.6 Planned outage management using ICKDSF
- 26.7 Remove a Global Mirror environment using ICKDSF
- 26.8 Query device information with ICKDSF
- 26.9 Set up a Global Mirror environment using DS SM
- 26.10 Set up a Global Mirror environment using the DS CLI
- 26.11 Control and Query Global Mirror with the DS CLI
- 26.12 Site switch basic operations using the DS CLI
- 26.13 Remove the Global Mirror environment with the DS CLI
- Part 7 Interoperability
- Chapter 27. Combining Copy Service functions
- Chapter 28. Interoperability between DS6000 and DS8000
- 28.1 DS6000 and DS8000 Copy Services interoperability
- 28.2 Preparing the environment
- 28.2.1 Minimum microcode levels
- 28.2.2 Hardware and licensing requirements
- 28.2.3 Network connectivity
- 28.2.4 Creating matching user IDs and passwords
- 28.2.5 Updating the DS CLI profile
- 28.2.6 Adding the Storage Complex
- 28.2.7 Volume size considerations for Remote Mirror Copy
- 28.2.8 Determining DS6000 and DS8000 CKD volume size
- 28.3 RMC: Establishing paths between DS6000 and DS8000
- 28.4 Managing Metro Mirror or Global Copy pairs
- 28.5 Managing DS6000 to DS8000 Global Mirror
- 28.6 Managing DS6000 and DS8000 FlashCopy
- 28.7 z/OS Global Mirror
- Chapter 29. Interoperability between DS6000 and ESS 800
- 29.1 DS6000 and ESS 800 Copy Services interoperability
- 29.2 Preparing the environment
- 29.2.1 Minimum microcode levels
- 29.2.2 Hardware and licensing requirements
- 29.2.3 Network connectivity
- 29.2.4 Creating matching user IDs and passwords
- 29.2.5 Updating the DS CLI profile
- 29.2.6 Adding the Copy Services domain
- 29.2.7 Volume size considerations for RMC (PPRC)
- 29.2.8 Volume address considerations on the ESS 800
- 29.3 RMC: Establishing paths between DS6000 and ESS 800
- 29.4 Managing Metro Mirror or Global Copy pairs
- 29.5 Managing ESS 800 Global Mirror
- 29.6 Managing ESS 800 FlashCopy
- Part 8 Solutions
- Chapter 30. IBM TotalStorage Rapid Data Recovery
- Chapter 31. IBM TotalStorage Productivity Center for Replication
- 31.1 IBM TotalStorage Productivity Center
- 31.2 Where we are coming from
- 31.3 What TPC for Replication provides
- 31.4 Copy Services terminology
- 31.5 TPC for Replication terminology
- 31.6 TPC for Replication session types
- 31.7 TPC for Replication session states
- 31.8 Volumes in a copy set
- 31.9 TPC for Replication and scalability
- 31.10 TPC for Replication system and connectivity overview
- 31.11 TPC for Replication monitoring and freeze capability
- 31.12 TPC for Replication heartbeat
- 31.13 Supported platforms
- 31.14 Hardware requirements for TPC for Replication servers
- 31.15 TPC for Replication GUI
- 31.16 Command Line Interface to TPC for Replication
- Chapter 32. GDPS overview
- Appendix A. Concurrent Copy
- Appendix B. SNMP notifications
- Appendix C. Licensing
- Appendix D. CLI migration
- Related publications
- Index
- Back cover

Chapter 10. FlashCopy performance

Table 10-1 FlashCopy source and target volume location

|                                  | Server      | Device Adapter           | Rank            |
|----------------------------------|-------------|--------------------------|-----------------|
| FlashCopy establish performance  | Same server | Don’t care               | Different ranks |
| Background copy performance      | Same server | Different device adapter | Different ranks |
| FlashCopy impact to applications | Same server | Don’t care               | Different ranks |
10.1.2 LSS/LCU versus rank considerations
On the DS6000, it is more meaningful to discuss volume location in terms of ranks than in terms of logical subsystems (LSSs) or logical control units (LCUs). On the ESS 800 and earlier IBM disk subsystems, the physical location of a volume was described in terms of its LSS/LCU. If an LSS/LCU contained more than one rank, each rank held a range of volumes from that specific LSS/LCU.
The LSSs/LCUs in a DS6000 disk subsystem are logical constructs that are no longer tied to predetermined ranks. Within the DS6000, an LSS/LCU can be configured to span one or more ranks and is not limited to specific ranks. An individual rank can contain volumes from more than one LSS/LCU, which was not possible before the introduction of the DS6000.
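As an illustration, here is a minimal DS CLI script sketch; the storage image ID (IBM.1750-1300247) and the volume ID range are hypothetical values for this example:

    # List all volumes with IDs 1000-10FF (that is, LSS 10). The extpool column
    # shows which Extent Pool, and therefore which set of ranks, backs each volume.
    lsfbvol -dev IBM.1750-1300247 1000-10FF

    # List every rank in detail. Matching the extpool column here against the
    # Extent Pools reported by lsfbvol shows how one LSS can spread over several
    # ranks, and how one rank can hold volumes from several LSSs.
    lsrank -dev IBM.1750-1300247 -l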
10.1.3 Rank geometry
Finally, you can achieve a small performance improvement by using identical rank
geometries for both the source and target volumes. In other words, if the source volumes are
located on a rank with a 7+p/RAID-5 configuration, then the target volumes should also be
located on a rank configured as 7+p/RAID-5.
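To check the geometry of the ranks involved, you can list the arrays behind them; this sketch assumes the same hypothetical storage image ID as above:

    # Display each array in detail, including its RAID type and geometry (for
    # example, 5 (7+P)) and the rank built on it. The ranks holding the source
    # and target volumes should report the same geometry.
    lsarray -dev IBM.1750-1300247 -l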
10.1.4 Incremental FlashCopy
This chapter focuses on FlashCopy performance best practices, but there are many other
business requirements that must be weighed when using FlashCopy. The designer should
carefully consider all aspects for the implementation of each specific solution and should
definitely evaluate the use of
incremental FlashCopy for all FlashCopy applications.
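A minimal DS CLI sketch of an incremental relationship follows; the storage image ID and the source:target volume pair 1000:1100 are hypothetical:

    # Establish the initial FlashCopy relationship. -persist keeps the
    # relationship after the background copy completes, and -record tracks
    # which source tracks change afterward.
    mkflash -dev IBM.1750-1300247 -record -persist 1000:1100

    # Later, refresh the target: only tracks changed since the previous
    # establish or refresh are copied.
    resyncflash -dev IBM.1750-1300247 -record -persist 1000:1100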
Tip: If the FlashCopy target volume is on the same rank as the FlashCopy source volume,
you run the risk of a rank failure causing the loss of both the source and the target volume.
Tip: To find the relative location of your volumes, you can use the following procedure (a worked example follows this list):
1. Use the lsfbvol command to find out which Extent Pool contains the relevant volumes.
2. Use the lsrank command to display both the device adapter and the rank for each Extent Pool.
3. To determine which server contains your volumes, look at the Extent Pool name. Even-numbered Extent Pools are always from Server 0, while odd-numbered Extent Pools are always from Server 1.
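As a worked example of this procedure (the storage image ID, volume ID, and Extent Pool value are hypothetical):

    # Step 1: find the Extent Pool of volume 1000 (read the extpool column).
    lsfbvol -dev IBM.1750-1300247 1000

    # Step 2: list rank details and match the extpool column against the pool
    # found in step 1 (say, P4) to identify its rank and device adapter.
    lsrank -dev IBM.1750-1300247 -l

    # Step 3: P4 is even numbered, so the volume belongs to Server 0.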