Computer Drive User Manual
Table of Contents
- Front cover
- Contents
- Notices
- Preface
- Summary of changes
- Part 1 Overview
- Chapter 1. Introduction
- Chapter 2. Copy Services architecture
- Part 2 Interfaces
- Chapter 3. DS Storage Manager
- Chapter 4. DS Command-Line Interface
- Chapter 5. System z interfaces
- Part 3 FlashCopy
- Chapter 6. FlashCopy overview
- Chapter 7. FlashCopy options
- 7.1 Multiple relationship FlashCopy
- 7.2 Consistency Group FlashCopy
- 7.3 FlashCopy target as a Metro Mirror or Global Copy primary
- 7.4 Incremental FlashCopy - refresh target volume
- 7.5 Remote FlashCopy
- 7.6 Persistent FlashCopy
- 7.7 Data set FlashCopy
- 7.8 Reverse restore
- 7.9 Fast reverse restore
- 7.10 Options and interfaces
- Chapter 8. FlashCopy ordering and activation
- Chapter 9. FlashCopy interfaces
- Chapter 10. FlashCopy performance
- Chapter 11. FlashCopy examples
- Part 4 Metro Mirror
- Chapter 12. Metro Mirror overview
- Chapter 13. Metro Mirror options and configuration
- Chapter 14. Metro Mirror interfaces
- 14.1 Metro Mirror interfaces - overview
- 14.2 TSO commands for Metro Mirror management
- 14.3 ICKDSF
- 14.3.1 Metro Mirror management with ICKDSF
- 14.3.2 Display the Fibre Channel Connection Information Table
- 14.3.3 PPRCOPY DELPAIR
- 14.3.4 PPRCOPY DELPATH
- 14.3.5 PPRCOPY ESTPATH
- 14.3.6 PPRCOPY ESTPAIR
- 14.3.7 PPRCOPY FREEZE
- 14.3.8 PPRCOPY QUERY
- 14.3.9 PPRCOPY RECOVER
- 14.3.10 PPRCOPY SUSPEND
- 14.3.11 PPRCOPY RUN
- 14.3.12 Refreshing the VTOC
- 14.4 DS Command-Line Interface
- 14.5 DS CLI command examples
- 14.6 DS Storage Manager GUI
- 14.7 ANTRQST API
- Chapter 15. Metro Mirror performance and scalability
- Chapter 16. Metro Mirror examples
- Part 5 Global Copy
- Chapter 17. Global Copy overview
- Chapter 18. Global Copy options and configuration
- Chapter 19. Global Copy performance and scalability
- Chapter 20. Global Copy interfaces
- Chapter 21. Global Copy examples
- Part 6 Global Mirror
- Chapter 22. Global Mirror overview
- Chapter 23. Global Mirror options and configuration
- 23.1 Terminology used in Global Mirror environments
- 23.2 Create a Global Mirror environment
- 23.3 Modify a Global Mirror session
- 23.4 Remove a Global Mirror environment
- 23.5 Global Mirror with multiple storage disk subsystems
- 23.6 Connectivity between local and remote site
- 23.7 Recovery scenario after primary site failure
- 23.7.1 Normal Global Mirror operation
- 23.7.2 Primary site failure
- 23.7.3 Failover B volumes
- 23.7.4 Check for valid Consistency Group state
- 23.7.5 Set consistent data on B volumes
- 23.7.6 Reestablish the FlashCopy relationship between B and C volumes
- 23.7.7 Restart the application at the remote site
- 23.7.8 Prepare to switch back to the local site
- 23.7.9 Return to local site
- 23.7.10 Conclusions
- Chapter 24. Global Mirror interfaces
- 24.1 Global Mirror interfaces - overview
- 24.2 Different interfaces for the same function
- 24.3 Global Mirror management using TSO commands
- 24.3.1 Establish a Global Mirror environment
- 24.3.2 Define paths
- 24.3.3 Establish Global Copy volume pairs
- 24.3.4 Establish FlashCopy relationships for Global Mirror
- 24.3.5 Define a Global Mirror session
- 24.3.6 Populate a Global Mirror session with volumes
- 24.3.7 Start a Global Mirror session
- 24.3.8 Query a Global Mirror session
- 24.4 DS CLI to manage Global Mirror volumes in z/OS
- 24.5 Global Mirror management using ICKDSF
- 24.5.1 Establish a Global Mirror environment
- 24.5.2 Define paths
- 24.5.3 Establish Global Copy pairs
- 24.5.4 Establish FlashCopy relationships
- 24.5.5 Define a Global Mirror session
- 24.5.6 Add volumes to a session
- 24.5.7 Start Global Mirror
- 24.5.8 Query an active Global Mirror session
- 24.5.9 Remove a Global Mirror environment
- 24.5.10 Stop the Global Mirror session
- 24.5.11 Remove volumes from Global Mirror
- 24.5.12 Undefine the Global Mirror session
- 24.5.13 Withdraw FlashCopy relationships
- 24.5.14 Delete Global Copy pairs
- 24.5.15 Remove all paths
- 24.6 ANTRQST macro
- 24.7 DS Storage Manager GUI
- Chapter 25. Global Mirror performance and scalability
- Chapter 26. Global Mirror examples
- 26.1 Global Mirror examples - configuration
- 26.2 Global Mirror query examples with TSO
- 26.3 Set up the Global Mirror environment using TSO
- 26.4 Primary site failure and recovery management with TSO
- 26.4.1 Primary site failure
- 26.4.2 Stop a Global Mirror session
- 26.4.3 Failover from B to A volumes
- 26.4.4 Check Global Mirror FlashCopy status between B and C volumes
- 26.4.5 Create a data consistent set of B volumes
- 26.4.6 Optionally create a data consistent set of D volumes
- 26.4.7 Create a data consistent set of C volumes
- 26.4.8 Prepare to return to the local site
- 26.4.9 Replicate the changes from B to A
- 26.4.10 Return to the local site and resume Global Mirror
- 26.5 Remove Global Mirror environment using TSO
- 26.6 Planned outage management using ICKDSF
- 26.7 Remove a Global Mirror environment using ICKDSF
- 26.8 Query device information with ICKDSF
- 26.9 Set up a Global Mirror environment using DS SM
- 26.10 Set up a Global Mirror environment using the DS CLI
- 26.11 Control and Query Global Mirror with the DS CLI
- 26.12 Site switch basic operations using the DS CLI
- 26.13 Remove the Global Mirror environment with the DS CLI
- Part 7 Interoperability
- Chapter 27. Combining Copy Service functions
- Chapter 28. Interoperability between DS6000 and DS8000
- 28.1 DS6000 and DS8000 Copy Services interoperability
- 28.2 Preparing the environment
- 28.2.1 Minimum microcode levels
- 28.2.2 Hardware and licensing requirements
- 28.2.3 Network connectivity
- 28.2.4 Creating matching user IDs and passwords
- 28.2.5 Updating the DS CLI profile
- 28.2.6 Adding the Storage Complex
- 28.2.7 Volume size considerations for Remote Mirror Copy
- 28.2.8 Determining DS6000 and DS8000 CKD volume size
- 28.3 RMC: Establishing paths between DS6000 and DS8000
- 28.4 Managing Metro Mirror or Global Copy pairs
- 28.5 Managing DS6000 to DS8000 Global Mirror
- 28.6 Managing DS6000 and DS8000 FlashCopy
- 28.7 z/OS Global Mirror
- Chapter 29. Interoperability between DS6000 and ESS 800
- 29.1 DS6000 and ESS 800 Copy Services interoperability
- 29.2 Preparing the environment
- 29.2.1 Minimum microcode levels
- 29.2.2 Hardware and licensing requirements
- 29.2.3 Network connectivity
- 29.2.4 Creating matching user IDs and passwords
- 29.2.5 Updating the DS CLI profile
- 29.2.6 Adding the Copy Services domain
- 29.2.7 Volume size considerations for RMC (PPRC)
- 29.2.8 Volume address considerations on the ESS 800
- 29.3 RMC: Establishing paths between DS6000 and ESS 800
- 29.4 Managing Metro Mirror or Global Copy pairs
- 29.5 Managing ESS 800 Global Mirror
- 29.6 Managing ESS 800 FlashCopy
- Part 8 Solutions
- Chapter 30. IBM TotalStorage Rapid Data Recovery
- Chapter 31. IBM TotalStorage Productivity Center for Replication
- 31.1 IBM TotalStorage Productivity Center
- 31.2 Where we are coming from
- 31.3 What TPC for Replication provides
- 31.4 Copy Services terminology
- 31.5 TPC for Replication terminology
- 31.6 TPC for Replication session types
- 31.7 TPC for Replication session states
- 31.8 Volumes in a copy set
- 31.9 TPC for Replication and scalability
- 31.10 TPC for Replication system and connectivity overview
- 31.11 TPC for Replication monitoring and freeze capability
- 31.12 TPC for Replication heartbeat
- 31.13 Supported platforms
- 31.14 Hardware requirements for TPC for Replication servers
- 31.15 TPC for Replication GUI
- 31.16 Command Line Interface to TPC for Replication
- Chapter 32. GDPS overview
- Appendix A. Concurrent Copy
- Appendix B. SNMP notifications
- Appendix C. Licensing
- Appendix D. CLI migration
- Related publications
- Index
- Back cover
13.1 High availability solutions
Because the disk subsystem attached to the server is mirrored with Metro Mirror, several
opportunities open up for improved high availability solutions.
13.1.1 GDPS HyperSwap Manager
The GDPS services offering includes the GDPS HyperSwap™ Manager function. This
function can mask certain primary Metro Mirror disk subsystem problems, as well as
planned maintenance activities, by allowing the primary DASD to be swapped transparently
from one site to the other without requiring an application or system outage. For more
information, refer to Part 8, “Solutions” on page 431.
13.1.2 Open systems - Clustering
For open systems environments, IBM offers several solutions in this area, including GDS for
Windows environments and HACMP™ for AIX. For more information, refer to Part 8,
“Solutions” on page 431.
13.2 Failover and failback
The Metro Mirror Failover and Failback modes are designed to help reduce the time required
to synchronize Metro Mirror volumes after switching between the production and the recovery
sites.
In a typical Metro Mirror environment, processing temporarily switches over to the Metro
Mirror secondary site upon an outage at the primary site. When the primary site can resume
production, processing switches back from the secondary site to the primary site.
At the recovery site, the Metro Mirror Failover function combines into a single task the three
steps involved in a planned or unplanned switch-over to the remote site: terminate the
original Metro Mirror relationship, then establish and suspend a new relationship at the
remote site. Note that the state of the original source volume at the normal production site is
preserved, while the original target volume at the recovery site becomes a suspended
source. This design takes into account that the original source LSS may no longer be
reachable.
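For illustration, this failover could be driven with the DS CLI failoverpprc command issued
against the recovery site storage unit. In the following sketch, the storage image IDs and the
volume pair are hypothetical placeholders, not values from any configuration in this book; the
volume at the recovery site is named first because it becomes the new, suspended source:

dscli> failoverpprc -dev IBM.1750-1300819 -remotedev IBM.1750-1300247 -type mmir 0100:0100

The -type mmir parameter identifies the pair as Metro Mirror. Because the command acts only
on the recovery site storage unit, it can complete even when the production site LSS is not
reachable.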
To initiate the switch back to the production site, the Metro Mirror Failback function at the
recovery site checks the preserved state of the original source volume at the production site
to determine how much data to copy back. Either all tracks or only the out-of-sync tracks are
then copied, and the original source volume becomes a full-duplex target. In more detail, this
is how Metro Mirror Failback operates:
- If a volume at the production site is in the simplex state, all of the data for that volume is
copied back from the recovery site to the production site.
- If a volume at the production site is in the full-duplex or suspended state with no changed
tracks, only the data modified on the volume at the recovery site is copied back to the
volume at the production site.
- If a volume at the production site is in a suspended state and has tracks that have been
updated, then both the tracks changed at the production site and the tracks marked at the
recovery site are copied back.
Finally, the volume at the production site becomes a write-inhibited target volume. This
action is performed on an individual volume basis.
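As a sketch of the corresponding DS CLI step, the failbackpprc command, also issued at the
recovery site, starts the copy back to the production site. It reuses the same hypothetical
storage image IDs and volume pair as the failover example above:

dscli> failbackpprc -dev IBM.1750-1300819 -remotedev IBM.1750-1300247 -type mmir 0100:0100

Once the pairs reach the full-duplex state, a second failover and failback issued in the
opposite direction can be used to move the source role, and production, back to the original
site.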