Storage Management with DB2 for OS/390
Paolo Bruni, Hans Duerr, Daniel Leplaideur, Steve Wintle
International Technical Support Organization
www.redbooks.ibm.com
International Technical Support Organization
Storage Management with DB2 for OS/390
September 1999
SG24-5462-00
Take Note! Before using this information and the product it supports, be sure to read the general information in Appendix F, "Special Notices" on page 239.
First Edition (September 1999)
This edition applies to Version 5 of DB2 for OS/390, Program Number 5655-DB2, and Version 1 Release 4 of DFSMS/MVS, Program Number 5695-DF1, unless otherwise stated.
Comments may be addressed to: IBM Corporation, International Technical Support Organization Dept.
Preface This redbook will help you tailor and configure DFSMS constructs to be used in a DB2 for OS/390 environment. In addition, this redbook provides a broad understanding of new disk architectures and their impact in DB2 data set management for large installations. This book addresses both the storage administrator and the DB2 administrator. The DB2 administrator will find information on how to use DFSMS for managing DB2’s data sets.
Thanks to the following people for their invaluable contributions to this project:

Mary Lovelace, Markus Muetschard, Hans-Peter Nagel, Alison Pate, Toru Yamazaki
International Technical Support Organization, San Jose Center

Ted Blank, John Campbell, Paramesh Desai, Ching Lee, Rick Levin, Roger Miller, Akira Shibamiya, Jim Teng, Horacio Terrizzano, Jeff Todd, Steve Turnbaugh, Jay Yothers
IBM Development, Santa Teresa

Bob Kern, Lee La Frese
IBM Development, Tucson

Jeffrey Berger, Bruce Mc Nutt, Paulus Usong
IBM Development,
Martin Packer
IBM UK

John Burg, Nghi Eakin
IBM Gaithersburg

David Petersen
IBM Washington

Thanks to Elsa Martinez for administration support, Maggie Cutler and Yvonne Lyon for technical editing, and Emma Jacobs for the graphics.

Comments Welcome
Your comments are important to us! We want our redbooks to be as helpful as possible.
Part 1. Introduction and Summary
Chapter 1. Introduction Auxiliary storage management in the DB2 environment for the MVS platform has, so far, been mainly the responsibility of the database administrators. In the first few years of its usage, DB2’s implicit definition of page sets through its Storage Groups (STOGROUP) often replaced the more traditional method of explicitly allocating VSAM data sets because of DB2’s simplicity and ease of use.
Chapter 2. Summary of Considerations This book describes the exploitation of storage by DB2 for OS/390 (DB2). Two major areas are analyzed: 1. DB2 and storage management 2. DB2 and storage servers This chapter summarizes the major conclusions of this project. 2.1 DB2 and Storage Management A detailed analysis of the different types of DB2 data sets shows that DFSMS can automatically manage all of the data sets DB2 uses and requires.
• DB2 data sharing performance improvement for open/close of data sets (especially beneficial during DB2 start-up) with Enhanced Catalog Sharing (ECS); ECS reduces the path length and supports the ICF shared catalog on the coupling facility. You can check the Appendix section G.4, “Web Sites” on page 242 for sites on DB2 and DFSMS reporting the most current information on the supported functions. 2.1.
Installations having these devices could use sequential caching as an installation option. Installations with a mixture of devices with large, small, or no cache can benefit from the bypass cache option.
2.2.3 Log Structured File
Devices using the log structured file technique (like the RVA) do not maintain data location during data updates. For these devices there exists a concept of logical location of data, independent from the physical location.
volumes, closing it on volumes to be removed), and finally removing the volumes you want to exclude from the Storage Group. All those functions can be accomplished while the data is online. Data sets that were unmovable, never-closed, or never reallocated could be moved using remote copy techniques; then, after a short outage, the critical application can be switched onto the new volumes.
Part 2. DB2 and System Managed Storage
Chapter 3. DB2 Storage Objects
This chapter is an introduction to DB2 for OS/390 (DB2 throughout this redbook) for storage administrators interested in understanding the different types of data related objects used in a DB2 environment. Special emphasis is placed on the data sets managed directly by DB2.
3.1 DB2 Overview
DB2 is a database management system based on the relational data model.
3.2.1 TABLE All data managed by DB2 is associated to a table. The data within the table is organized in columns and rows, and this represents the minimum unit of data that can be identified by the user. The table is the main object used by DB2 applications. The SQL DML used by application programs and end users directly references data in tables. 3.2.2 TABLESPACE A table space is used to store one or more tables. A table space is physically implemented with one or more data sets.
DB2 V6 has increased the maximum size of a partitioned table space to almost 16 TB, increasing the maximum data set size to 64 GB. This is supported only if they are defined and managed with DFSMS 1.5. 3.2.3 INDEX A table can have zero or more indexes. An index contains keys. Each key may point to one or more data rows. The purpose of indexes is to establish a way to get a direct and faster access to the data in a table.
DB2 defined and managed spaces should be the choice by default. This is the easier of the two solutions and is adequate for most table and index spaces in the majority of situations. User defined table spaces provide more control over data set placement, and all VSAM definition options are available.
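As an illustration of the default approach, the following is a minimal sketch of a DB2 defined table space; the STOGROUP and table space names are illustrative, while the database name and catalog alias follow the examples used in this chapter:

   CREATE STOGROUP SGAPPL1
          VOLUMES ('*')
          VCAT DB2V610Z;

   CREATE TABLESPACE TSAPPL1 IN DSN8D61A
          USING STOGROUP SGAPPL1
          PRIQTY 720
          SECQTY 720
          BUFFERPOOL BP0
          CLOSE NO;

With VOLUMES('*'), volume selection is left to SMS, and DB2 itself creates and deletes the underlying VSAM data sets.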
database name, and the table space name in Figure 2 on page 14 must match the definitions in Figure 3 on page 15.

   CREATE TABLESPACE PAOLOR1 IN DSN8D61A
          BUFFERPOOL BP0
          CLOSE NO
          USING VCAT DB2V610Z;

Figure 3. User Defined Table Space: Step 2—Define the Table Space

3.4 DB2 System Table Spaces
DB2 uses four internally defined databases to control and manage itself and the application data.
Work database table spaces. When required, 8K and 16K pages are placed in a 32K table space of the Work database. Figure 4 on page 16 shows how the size and number of table spaces in the Work database is defined. Parameters 13 through 16 on this figure show the default number of table spaces (one for each page size), and default sizes for the 4K table space (16 MB) and the 32K table space (4 MB).
log RBA (or LRSN) of updates to a table space. The record is opened when a first update is detected, and closed after an interval of read only activity. The interval is defined with two read-only switch parameters on the DB2 installation panel DSNTIPN. 3.5 DB2 Application Table Spaces All application data in DB2 is organized in the objects described in 3.2, “DB2 Data Objects” on page 11. Application table spaces and index spaces are created as shown in 3.
2. The failing BSDS must be redefined, or alternatively, an existing spare BSDS copy must be renamed. 3. The BSDS is rebuilt from the good copy with an IDCAMS REPRO. 3.6.2 Active Logs The active log data sets are used for data recovery and to ensure data integrity in case of software or hardware errors. DB2 uses active log data sets to record all updates to user and system data. The active log data sets are open as long as DB2 is active.
• The amount of time to cover with all logs (Time 1 up to Current Time)
• The amount of time to cover with active logs (Time 2 up to Current Time)

Figure 5. DB2 Log and Its Data Sets

3.6.3 Archive Logs
Archive log data sets are DB2 managed backups of the active log data sets. Archive log data sets are created automatically by DB2 whenever an active log is filled.
3.6.4 Image Copies
Image copies are the backup of user and system data. DB2 V6 introduces the possibility of taking image copies of indexes. For a well-managed backup and recovery policy, it is recommended that image copy data sets cover at least three generations of image copies, in order to guarantee recoverability. This means that a large number of image copy data sets is required and needs to be managed in DB2 installations.
Figure 7. Image Copy SHRLEVEL CHANGE

Another option for image copies is the use of the concurrent copy feature, with or without SnapShot. Concurrent copy and SnapShot are described in 9.5.2, "Concurrent Copy" on page 94. These features allow DB2 to create full image copies with only a short time interval of data unavailability. This is illustrated in Figure 26 on page 94. The DB2 RECOVER utility is able to handle these copies.
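As a sketch of how this option is requested, the following COPY utility control statement takes a concurrent copy; the table space name and the SYSCOPY output DD name are illustrative and would be adapted to the installation:

   COPY TABLESPACE DSN8D61A.DSN8S61E
        COPYDDN(SYSCOPY)
        CONCURRENT
        SHRLEVEL REFERENCE

The CONCURRENT keyword causes DB2 to invoke DFSMSdss concurrent copy and to record the result in SYSIBM.SYSCOPY, so that the RECOVER utility can use it like any other image copy.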
for an archive log that is older than the oldest image copy unless other types of backups, not just image copies, are also used for recovery. Image copies and archive logs are recorded in DB2 and optionally cataloged in an ICF Catalog. Physical deletion of the data sets removes them from the ICF catalog. This physical deletion must be coordinated with a DB2 cleanup procedure to remove obsolete information in SYSIBM.SYSCOPY. This cleanup is performed with the MODIFY utility.
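As a sketch, a cleanup of this kind can be run with a MODIFY utility statement such as the following; the table space name and the 90-day retention period are illustrative:

   MODIFY RECOVERY TABLESPACE DSN8D61A.DSN8S61E
          DELETE AGE(90)

This removes SYSIBM.SYSCOPY rows (and related SYSLGRNX entries) older than 90 days; the corresponding image copy data sets still have to be deleted or expired through the storage management policy.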
3.8.1 Table Space and Index Space Names
The names for DB2 table spaces and index spaces have the following structure:

Table 3. Table Space and Index Space Names

   hlq.DSNDBx.dbname.spname.ynnnn.Ammm

The elements of this name are:
   hlq      VSAM catalog high level qualifier
   DSNDB    Standard part of the name
   x        Identifies a VSAM cluster or data component:
            C   Cluster
            D   Data
   dbname   Database name
   spname   Space name. Either a table space name or an index name.
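As an illustration, assuming a catalog high level qualifier of DB2V610Z, database DSN8D61A, and table space DSN8S61E (sample names), and assuming the usual instance qualifier I0001 and first data set number A001, the cluster and data component names would be:

   DB2V610Z.DSNDBC.DSN8D61A.DSN8S61E.I0001.A001
   DB2V610Z.DSNDBD.DSN8D61A.DSN8S61E.I0001.A001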
   LOGCOPY   Standard part of the name
   n         Active log copy, 1 or 2
   mm        Active log number, 01 to 31

3.8.4 Archive Log and BSDS Backup Names
The default names for archive log and BSDS backup data sets have the following optional structure:

Table 6. Archive Log and BSDS Backup Names

   hlq.ARCHLOGn.Dyyddd.Thhmmsst.
Chapter 4. System Managed Storage Concepts and Components
This chapter is designed to familiarize the DB2 database administrator (DBA) with the concepts and components of system managed storage (SMS). The following topics are discussed:
• Background
• Evolution
• DFSMS/MVS components
4.1 Background
The continued growth of data creates the need for a more effective and efficient way of managing both data and the storage on which it resides.
4.3 DFSMS/MVS Components
DFSMS/MVS provides and enhances functions formerly provided by MVS/DFP, Data Facility Data Set Services (DFDSS), and the Data Facility Hierarchical Storage Manager (DFHSM). The product is now easier to install and order than the combination of the earlier offerings. This chapter describes the main components of the DFSMS/MVS family:
• DFSMSdfp
• DFSMSdss
• DFSMShsm
• DFSMSrmm
• DFSMS Optimizer
4.3.1.2 ISMF for the Storage Administrator
Figure 9 on page 27 shows the ISMF primary option menu displayed for a storage administrator. Options allow lists to be built from volume selection criteria (Storage Group), as well as from the Management Class, Data Class, and Storage Class facilities, allowing the storage administrator to define, alter, copy, and delete SMS classes, volumes, and data sets. Again, these lists are built from VTOC or catalog information, and tailored in the same way.
Logical processing, which is data set oriented. This means it performs against data sets independently of the physical device format. Physical processing, which can perform against data sets, volumes, and tracks, but moves data at the track-image level. PRINT Used to print both VSAM and non-VSAM data sets, track ranges, or a VTOC. RELEASE Releases allocated but unused space from all eligible sequential, partitioned, and extended format VSAM data sets.
by issuing specific commands when manual operations are appropriate. Volumes can be:
• Managed by SMS. In this case the Storage Group definitions control HSM initiated automatic functions, depending upon the appropriate Management Class of the data set.
• Managed by HSM. These volumes are commonly known as primary or level 0. Here each volume is defined to HSM individually by the ADDVOL parameter and governed accordingly.
• Owned by HSM.
determined by either the Management Class for SMS managed data sets, or set by the ADDVOL parameter for HSM managed data sets. It can also be controlled in combination with volume thresholds set by the storage administrator. Data sets may be migrated to ML1 (normally disk) after a period of inactivity, and then onto ML2 (tape) following a further period of non-usage. It is feasible, and maybe more appropriate in certain cases, to migrate directly to ML2.
• Recovery
Recovery can be performed either at the individual data set level or by physically restoring a full volume. This is applicable for both SMS and non-SMS managed data sets. Note that full exploitation of this component requires the use of the DSS and Optimizer components of DFSMS.
4.3.4 DFSMSrmm
The Removable Media Manager (RMM) component provides facilities for tape and cartridge formats, including library, shelf, volume and data set level management.
• Simulate potential policy changes and understand the costs of those changes. • Produce presentation quality charts. For more information on the DFSMS/MVS Optimizer Feature, see the following publications: • DFSMS Optimizer V1R2 User's Guide and Reference, SC26-7047-04 • DFSMS Optimizer: The New HSM Monitor/Tuner, SG24-5248 4.3.6 SMF Records 42(6) DFSMS statistics and configuration records are recorded in SMF record type 42.
• Automated disk space management
SMS has the facility to automatically reclaim space which is allocated to old and unused data sets. Policies can be defined that determine how long an unused data set is allowed to reside on level 0 volumes (active data). Redundant or expired data can be removed by the process of migration to other volumes (disk or tape), or the data can be deleted. Allocated but unused space can be automatically released, which is then available for new allocations and active data sets.
Chapter 5. Storage Management with DFSMS This chapter is designed to familiarize the DB2 administrator with the functionality of SMS.
Communications Data Set The communications data set (COMMDS) holds the name of the ACDS and provides communication between SMS systems in a multisystem environment. The COMMDS also contains statistics on the SMS, and MVS status for each SMS volume, including space. 5.1.2 Class and Storage Group Definitions The storage management policies are defined to the system by use of classes. Data sets have classes assigned to them.
1. Data Class—data definition parameters.
2. Storage Class—performance and accessibility requirements.
3. Management Class—migration, backup and retention attributes.
4. Storage Group—candidate allocation volumes.
Because data set allocations, whether dynamic or through JCL, are processed through ACS routines, installation standards can be enforced on those allocations on both SMS and non-SMS managed volumes.
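A minimal sketch of such a JCL allocation is shown below; the data set name and the class names are illustrative (the class names anticipate examples used later in this book), and the ACS routines may override or supply any of them:

   //NEWIC    DD  DSN=DB2V610Z.IMAGCOPY.DSN8S61E.D990901,
   //             DISP=(NEW,CATLG,DELETE),
   //             DATACLAS=DCDB2,STORCLAS=SCDBIC,MGMTCLAS=MCDB20,
   //             SPACE=(CYL,(100,100),RLSE)

If the Storage Class assigned by the ACS routines is not null, the data set becomes SMS managed and is placed in a Storage Group chosen by SMS.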
5.3 SMS Classes The storage administrator uses ISMF to create an ACS routine for each of the three types of classes and one to assign the Storage Groups. These routines, used together with the Data Class, Storage Class, Management Class, Storage Group definitions, and the base configuration, define an installation’s SMS configuration. Figure 13 on page 38 shows the relationship of each of the four constructs which make up the SMS ACS routine environment.
User defined allocations take precedence over default Data Classes. For example, if a Data Class specifies an LRECL of 80 bytes, and the JCL allocation specifies an LRECL of 100 bytes, then 100 bytes are allocated. If the Data Class is altered by the storage administrator, attributes previously allocated by the Class remain unchanged. Alterations are only honored for new allocations.
was available. SMS allows the separation of performance and service level of data sets by use of the Storage Class. A Storage Class construct details the intended performance characteristics required for a data set assigned to a given class. The response times set for each Storage Class are target response times for the disk controller to achieve when processing an I/O request. The Storage Class also determines whether the volume is chosen by the user or by SMS.
ATTRIBUTE                        COMMENT
Availability objectives
  - Accessibility
  - Availability
  - Guaranteed space
  - Guaranteed synchronous write
Caching
  - Weighting

5.3.3 Management Class
5.3.3.1 Description
Prior to SMS, DFHSM managed the data sets at volume level, applying standard management criteria for all data sets on a given volume. Although this is still applicable for non-SMS managed data sets, with the introduction of SMS, the control is carried out at data set level by use of the Management Class.
DFSMShsm's automatic backup services, supported by concurrent copy, to help with point of consistency backups. It is not advisable to use HSM to manage most production databases. Therefore, use a NOMIGRATE Management Class for this type of data. This prevents HSM space and availability management from operating.
ATTRIBUTE   COMMENT
Backup      - Who can back up the data (the storage administrator or the user, or both).
            - Whether automatic backup should be taken for a data set.
            - Backup frequency (number of days between backups).
            - Number of backup versions (data set exists).
            - Number of backup versions (data set deleted).
            - Retention of backups once a data set has been deleted.
            - Backup copy type (incremental, full volume dump).

5.3.4 Storage Group
• The number of volumes in the Storage Group satisfies the volume count • Accessibility requested • Availability (dual copy or RAMAC) requested • The volume was explicitly requested and guaranteed space is YES • Sufficient free space to perform the allocation without exceeding the high threshold • Volumes fall within a pre-determined range of millisecond response times based on the request • The volume supports extended format if EXT=PREFERRED or REQUIRED is requested in the data class Candidates for second
Electing DB2 to select the volume requires assigning a Storage Class with guaranteed space. However, guaranteed space reduces the benefits of SMS allocation, so this approach is not recommended. If you do choose to use specific volume assignments, additional manual space management must be performed. Unlike non-SMS, SMS does not retry the allocation by skipping a volume that cannot satisfy the requested space.
hardware capabilities. For a few specific and exceptional cases, the storage class GUARANTEED SPACE option can be used. As the Storage Group definition exists only in SMS tables, its logical mapping onto volumes can be redistributed when a hardware change occurs, without any DB2 application outage, provided that DB2 and storage administrators act in concert (in particular for allocating new DB2 objects). Notice that redefining a Storage Group does not require application outage. 5.
Chapter 6. Managing DB2 Databases with SMS This chapter describes DB2 databases from the point of view of their attributes for SMS management, and provides examples for these databases. Due to their stricter availability requirements, the DB2 system databases are analyzed separately. This chapter includes examples of SMS Data, Storage, and Management Classes for DB2 table spaces.
DATA CLASS APPLICATION SELECTION
Command ===>

To perform Data Class Operations, Specify:
  CDS Name  . . . . . . 'ACTIVE'
                        (1 to 44 character data set name or 'Active')
  Data Class Name . . . DCDB2      (For Data Class List, fully or partially
                                    specified or * for all)

Select one of the following options:
  2   1. List    2. Display    3. Define    4.
SCDBFAST This Storage Class is intended for table spaces belonging to applications requiring performance. It provides high performance and good availability. SCDBCRIT This Storage Class is intended for table spaces belonging to critical applications. It provides high performance and continuous availability. SMS attempts to place these table spaces on disks with dual copy or on RAID. SCDBTEST This Storage Class is intended for environments with lower requirements.
MCDB22
This Management Class is intended for table spaces that are allowed to migrate and require less availability than that defined in the MCDB21 Management Class. This Management Class causes migration after one week of inactivity, and the table space migrates directly to level 2. For example, this Management Class can be used for Test or Development table spaces.
SGDBTEST Storage Group for DB2 table spaces and index spaces with low performance and availability requirements. This Storage Group allows migration. SGDBXXXX Other Storage Groups intended for specific partitioned DB2 table spaces and index spaces, or for other critical table spaces, where strict placement is considered essential. XXXX is any four characters. The attributes of these Storage Groups are similar to one of the other Storage Groups.
Panel  Utilities  Help
------------------------------------------------------------------------------
                         POOL STORAGE GROUP ALTER
Command ===>

SCDS Name . . . . . : SMS.SCDS1.SCDS
Storage Group Name  : SCDB20

To ALTER Storage Group, Specify:
  Description ==> STANDARD STORAGE GROUP FOR DB2 TABLE AND INDEX SPACES
              ==>
  Auto Migrate . . N  (Y, N, I or P)    Migrate Sys/Sys Group Name . .
  Auto Backup  . . N  (Y or N)          Backup Sys/Sys Group Name  . .
  Auto Dump  . . . N  (Y or N)          Dump Sys/Sys Group Name  . . .
  Dump Class . . .
Panel  Utilities  Help
------------------------------------------------------------------------------
                       STORAGE GROUP VOLUME SELECTION
Command ===>

CDS Name  . . . . . : SMS.SCDS1.SCDS
Storage Group Name  : SCDB20
Storage Group Type  : POOL

Select One of the following Options:
  1   1. Display - Display SMS Volume Statuses (Pool only)
      2. Define  - Add Volumes to Volume Serial Number List
      3. Alter   - Alter Volume Statuses (Pool only)
      4.
combinations of these names. This provides the installation with great flexibility in implementation alternatives, such as: High Level Qualifier Filter The ACS routines contain a list of high level qualifiers. These qualifiers are used to assign the specific SMS classes. The high level qualifiers can provide a meaningful distinction between data of different DB2 subsystems. This method is recommended as a starting point, because of its simplicity.
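A minimal sketch of such a filter in a Storage Class ACS routine is shown below; the subsystem high level qualifiers and the class name are illustrative:

   FILTLIST &DB2HLQ  INCLUDE('DB2P','DB2T')
   /* Catalog aliases (high level qualifiers) of the DB2 subsystems */

   WHEN (&HLQ = &DB2HLQ)
      SET &STORCLAS = 'SCDBMED'

Similar tests in the Management Class and Storage Group routines complete the assignment for these subsystems.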
/*********************************************************************/
/* PARTITION FILTER                                                   */
/*********************************************************************/
FILTLIST &PTSP INCLUDE('LINEITEM','ORDER','PART','PARTSUPP',
                       'SUPPLIER','NATION','REGION')
         /* Supply a list of the partitioned tablespaces */
FILTLIST &PNDX INCLUDE('PXL@OK','PXO@OK','PXP@PK','PXPS@SK',
                       'PXS@SK','PXN@NK','PXR@RK')
         /* Supply a list of the partitioned indexes */
WHEN ( (&DSN(4) = &PTSP OR &DSN(4) = &PNDX)
   AND (&LLQ EQ 'A001' OR &
6.1.8 Table Space and Index Space Names for SMS The recommendation in this book for finely tuned SMS installations is to imbed SMS codes into the names of DB2 table and index spaces. This is shown in Table 16 on page 56. The data set names have the structure shown in Table 3 on page 23, with a change in the space name itself. As explained in 6.1.7, “Assigning SMS Classes to DB2 Table Spaces and Index Spaces” on page 53, this name contains codes for the ACS routines.
may not leave enough volumes with adequate free space; this could cause a REORG to fail due to lack of space. The following methods address this issue. Use One SMS Storage Group for Each Partition A one-volume SMS Storage Group can be defined for each partition. The ACS routine assigns to each partition its corresponding Storage Group. This method is similar to creating a DB2 defined partitioned table space, using one STOGROUP for each partition. One SMS Storage Group is defined for each DB2 STOGROUP.
6.2.1 Online Production Databases The databases used in production normally contain important data and have special availability and security requirements. Performance may become a critical issue, if the databases are used in an online environment. Because online production databases contain important data, the DB2 Database Administrators typically monitor them very frequently. These databases are sometimes placed on specific disks to manage the data individually.
6.2.3.1 Storage Classes The following example Storage Classes can be used for Data Warehouse table spaces: • SCDBMED • SCDBTEST • SCDBFAST 6.2.3.2 Management Classes The following example Management Classes can be used for Data Warehouse table spaces: • MCDB20 • MCDB21 • MCDB22 6.2.4 Development and Test Databases Development and test databases are not essential for immediate business needs. Their performance and availability requirements should not have priority over production databases.
6.3.1.1 Storage Classes
The following example Storage Classes can be used for online production table spaces:
• SCDBCRIT
6.3.1.2 Management Classes
The following example Management Class can be used for online production table spaces:
• MCDB20
6.3.2 Work Database
All DB2 subsystems use table spaces in a Work database. For example, the Work database stores an intermediate result of a query, or is the work area for an internal sort of a result table.
Chapter 7. Managing DB2 Recovery Data Sets with SMS
Some DB2 data sets are standard sequential files or partitioned data sets. Many installations already manage these data sets with SMS and already have SMS classes defined for them. Therefore, this chapter only analyzes DB2 recovery related data sets. This chapter describes attributes for SMS management of the DB2 recovery data sets and provides example SMS constructs for these data sets. DB2 recovery data sets are described in 3.
Table 19. SMS Storage Classes for Recovery Data Sets

Attribute                      SCDBIC     SCDBICH     SCDBARCH    SCDBACTL
Direct response (MSEC)         10         5           10          5
Direct bias
Sequential response (MSEC)     10         5           10          1
Sequential bias
Sustained data rate (MB/sec)   10         20          20          40
Availability                   Standard   Continuous  Continuous  Continuous
Accessibility                  Standard   Standard    Standard    Standard
Guaranteed space               No         No          No          Yes
Guaranteed synchronous write   No         No          No          No
Cache set name
CF direct weight
CF sequential weight
SGDBACTL Storage Group intended for BSDSs and active logs for all non-production DB2 subsystems. Because the corresponding Storage Class has guaranteed space defined as yes, the DB2 administrator can direct the allocation of the data sets on volumes which are dedicated to a specific DB2 subsystem. SGDB2PLG Storage Group intended for BSDSs and active logs for the production DB2P subsystem. The Storage Class contains the volumes for the DB2P subsystem.
7.2 BSDS The bootstrap data set (BSDS) contains the information required by DB2 to start the subsystem in normal circumstances. It also handles the restart and recovery in any abnormal circumstance. For example, all log data sets (active and archive) are automatically registered within the BSDS. Data Organization The BSDS is a VSAM KSDS. The data control interval is 4 KB; the index control interval is 1 KB. Figure 19 on page 67 shows an example VSAM definition of a BSDS.
7.2.3 Storage Group
Because the Storage Class has guaranteed space, the BSDS data sets are allocated on the disk volumes requested by the DB2 administrator. The volumes must belong to the assigned Storage Group (such as SGDBACTL), and the disk volume must be eligible for SMS; for example, a volume can be converted to SMS management with the DFSMSdss CONVERTV command.
• SGDBACTL for several DB2 subsystems
• SGDB2PLG for the DB2P subsystem
   DEFINE CLUSTER -
          ( NAME (DB2V610Z.LOGCOPY1.DS01) -
            VOLUMES(SBOX09) -
            REUSE -
            RECORDS(8640) -
            LINEAR ) -
          DATA -
          ( NAME (DB2V610Z.LOGCOPY1.DS01.DATA) )

Figure 20. Example VSAM Definition of One Active Log

7.3.1 Storage Class
A Storage Class with guaranteed space set to yes enables the DB2 administrator to decide the location of the active logs.
• SCDBACTL
7.3.2 Management Class
The following Management Class has been defined for active logs; no SMS management is required.
• MCDBACTL
can define two separate device types for the primary and secondary archive log. This can be seen on lines 5 and 6 of Figure 21.

DSNTIPA                INSTALL DB2 - ARCHIVE LOG DATA SET PARAMETERS
===>
Enter data below:
 1  ALLOCATION UNITS
 2  PRIMARY QUANTITY
 3  SECONDARY QTY.
directly to tape. Backup The archive logs are a backup of the active logs. DB2 can create dual archive logs. There is no need for an additional backup of the archive logs. 7.4.1 Storage Class Storage Class SCDBARCH is an example of a Storage Class for archive logs. This Storage Class has high availability and good performance. • SCDBARCH 7.4.2 Management Class Two different Management Classes are used for the archive logs. One is used for the primary copy and the other for the secondary copy.
means that a large number of image copy data sets are required and need to be managed. Data Organization Image Copy data sets are physical sequential data sets. Record size is 4096 (for any size of page) and the block size is typically 28672 bytes. Sample statements to execute an image copy are shown in Figure 137 on page 198 in Appendix B, section B.3, “Image Copies” on page 194.
7.5.3 Storage Group For this example, three Storage Groups are defined. These provide different levels of performance and availability. SGDBARCH serves to separate secondary copies from the primary copies. • SGDBIC • SGDBICH • SGDBARCH 7.6 Summary Table 23.
Chapter 8. Converting DB2 to Systems Managed Storage This chapter describes the techniques for converting DB2 data to SMS. However, each customer has unique data sets and facilities to support their online environment. These differences have an impact on recommended storage management procedures.
are not duplexed by the database management system. The use of fast write and cache facilities will provide increased performance for databases and recovery data sets. DFSMS/MVS enhances the backup and recovery utilities provided by the DB2 system as follows: • DFSMSdss uses concurrent copy capability to create point-of-consistency backups. • DFSMShsm backs up system data sets and end-user database data that is less critical than production database data.
8.4 Positioning for Implementation For the DBA, there are a number of items to be considered as prerequisites for the process. 8.4.
DB2 Naming Conventions Certain parts of tablespace names are generated by DB2. This does not leave the DBA with much scope for a flexible naming convention. For further information on this subject see 6.1.7, “Assigning SMS Classes to DB2 Table Spaces and Index Spaces” on page 53 and 6.1.8, “Table Space and Index Space Names for SMS” on page 56. Ensure that the storage administrator is fully aware of any restrictions so ACS routines can be coded accordingly.
• Active logs and BSDS. 8.5.2 Methodology 8.5.2.1 Conversion Window Decide when each type of data is available for conversion. During a normal processing cycle, some data sets will be deleted and reallocated, providing the opportunity for SMS management. Online data must be converted when those services are unavailable (down time). This is the most difficult to schedule, and requires precise planning. 8.5.2.2 Data Movement Each disk device is either SMS managed or not.
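For volumes that are emptied or dedicated during the conversion window, a DFSMSdss CONVERTV job of the following form can bring them under SMS management; this is a minimal sketch, and the volume serial is illustrative:

   //CONVSMS  EXEC PGM=ADRDSSU
   //SYSPRINT DD  SYSOUT=*
   //SYSIN    DD  *
     CONVERTV -
       DYNAM(SBOX09) -
       SMS
   /*

Running the same statement with TEST instead of SMS reports what would prevent the conversion, without changing anything.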
8.5.2.4 Contingency Time Frame Limit the amount of data converted at a particular time, so if problems are experienced, the situation can be recovered or backed out. 8.5.3 SMS Implementation The storage administrator performs the implementation of SMS, using ISMF to update the ACS routines. However, it is normally the DB2 administrator who is closely involved with the planning and positioning of data.
• Translating and validating the ACS routines • Generating test cases, to ensure updates to the ACS routines have the desired effect • Activating the new SMS configuration 8.5.
• Perform storage reporting, through ISMF and with DCOLLECT and Volume Mount Analyzer (VMA) data. • Print ISMF lists. • Run ISMF functions in batch mode, using the REXX EXECs provided. For more information on this feature, see DFSMS/MVS V1R3 NaviQuest User's Guide, SC26-7194.
Part 3. DB2 and Storage Servers
Chapter 9. Disk Environment Overview This chapter considers the disk architecture from a DB2 point of view. It focuses on concepts and recommendations for their practical implementation, rather than on technical details. In order to facilitate the mutual understanding of some storage terms between DB2 administrators and storage administrators, we highlight them in italics.
• RAID 1: mirroring • RAID 2: synchronized access with separate error correction disks • RAID 3: synchronized access with fixed parity disk • RAID 4: independent access with fixed parity disk • RAID 5: independent access with rotating parity • RAID 6: dual redundancy with rotating parity Note that we still closely associate the terms volume and device because the mapping is fixed.
• Only one set of write operations to disk in continuous physical sequence (instead of a set of random writes), which is the most optimized write mode for RAID technology
Figure 23 on page 87 and Figure 24 on page 87 illustrate the LSF concept.

Figure 23. LSF Concept 1

The main challenge of an LSF architecture is managing the free space.
Figure 25. Volume SnapShot: Functional Device Table and Functional Track Definition
volumes to physical disks. A functional volume is a logical volume still defined by track size, capacity, and address. This mapping structure is contained in a series of tables stored in the control unit. These tables are updated at each write on functional volume, and have to be maintained when previously used space is released. Data from all functional volumes could reside on one array device or many array devices.
processing of any other failing cluster. Let us briefly review the storage server subcomponent relationships: • Host adapters attach channel links and allow them to communicate with either cluster-processor complex. Practically, statistics at this level deal with what is called upper interface busy percentage. • Device adapters provide storage device interfaces. Statistics captured at this level, very often indirectly measured, are called lower interface busy percentage.
In a relational database environment, the physical separation of logically related data results in little locality of reference. Data in memory techniques also minimize the re-referencing of data on disk, as this is ideally accomplished in processor memory. Write caching requires that data integrity be preserved. Applications assume that an update written to disk is safe.
9.3.3 Write Record Caching (Quickwrite) Write record caching, called quickwrite, extends the benefits of DFW to data that does not have a read-before-update access pattern (which is currently required to have a DFW hit) and for data with a poor cache hit rate. Data with a predictable record format, such as VSAM records, can benefit from this algorithm. 9.3.
allows them a longer stay in cache. Other data sets should be set in may cache Storage Classes, defined with intermediate response time values. 9.4 Paths and Bandwidth Evolution A path is a logical concept that lies on the physical web of links (cables) existing between hosts and storage servers. This topology can be highly diversified and complex.
one-to-one redundancy. Its purpose is I/O automatic switching to secondary when unattended outage occurs on primary. Virtual volume, RAID 5, and RAID 6 have made the concept of dual copy practically obsolete. 9.5.2 Concurrent Copy Concurrent copy is a function the disk storage server controls in conjunction with DFSMS software. Concurrent copy enables taking backup copies of data while minimally impacting the application data access.
DB2 fully integrates concurrent copy into DB2 recovery. The CONCURRENT option on the DB2 COPY utility reduces disruption and automatically manages the copies used for recovery, to ensure consistent data. This option invokes the concurrent copy function of DFSMSdss, and records the resulting image copies in the SYSCOPY table of the DB2 catalog. “Image Copy Options” on page 20 has more information about DB2 use of concurrent copy. Concurrent copy is called through the DFSMSdss standard API.
traditional disaster recovery, is that each software subsystem (CICS, IMS, DB2, VSAM, and others) has its own recovery technique. Because an application is typically made up of multiple software subsystems, it is impossible to get a time-consistent backup across all subsystems unless the application is stopped, which impacts availability. Please note that backups are still required in a remote copy environment.
database would be corrupted and would have to be recovered from image copies and log data. In all cases, the miss must be made known at the secondary site. When that happens for hundreds of volumes, without a clear notification of the status of the impacted secondary volumes, recovery can be extremely long. For more information on this topic, please refer to RAMAC Virtual Array: Implementing Peer-to-Peer Remote Copy, SG24-5338.
This replaces a manual site switch process that could require more than 20 people to be present to perform their specialized tasks. Figure 30 on page 98 shows the global GDPS architecture.

Figure 30. Global GDPS Architecture
During normal operations GDPS continuously monitors all systems and specifically looks for messages indicating that PPRC volume pairs are being suspended. At the occurrence of a suspend, GDPS immediately freezes the image of the secondary disk configuration, to ensure restartability of the applications in the backup location. The next step is to analyze the reason for the suspend, because each cause can have different levels of effect.
site. XRC externalizes a timestamp of the recovered system so that manual recovery is possible from a specified time. The time lag between the primary and the secondary sites can be minimized by performance tuning actions.
1. Write data to cache and NVS on primary
2. 3990 sidefile entry created
3. Device End - write complete
4. SDM reads sidefile using a utility address
5. SDM forms Consistency Group - SDM optimizes secondary update process
6.
9.5.6 Sequential Data Striping Sequential data striping provides the opportunity for significant improvement in sequential processing performance by allowing data to be spread across multiple devices that are accessed concurrently transparently to the applications. With sequential data striping, the data transfer rate may be substantially higher than the individual device is capable of sustaining.
Chapter 10. DB2 I/O Operations The information shown here is extracted and modified from different sections of the DB2 UDB for OS/390 V6 Administration Guide, SC26-8957. This information is provided to give storage administrators an understanding of the I/O operations performed by DB2. The two most important I/O operations performed by DB2 are the data read I/O and the log write I/O. The data read I/O has direct impact on the response time of any SQL query.
Figure 32. Storage Hierarchy

10.2 Data Read Operations
DB2 uses four read mechanisms to get data pages from disk into the virtual buffer pool:
• Normal read (or synchronous read)
• Sequential prefetch
• Dynamic prefetch
• List sequential prefetch
10.2.1 Normal Read
Normal read is used when just one or a few consecutive pages are retrieved. The unit of transfer for a normal read is one page.
instances that do not have total overlap, in which wait times will still appear in the accounting records. Sequential prefetch can be used to read data pages, by table space scans or index scans with clustered data reference. It can also be used to read index pages in an index scan. Sequential prefetch allows CP and I/O operations to be overlapped.
Table 24 shows the prefetch quantity as a function of the page size and the buffer pool size. For certain utilities (REORG, RECOVER), the prefetch quantity can be twice as much.
Table 24.
10.2.7 Sequential Prefetch Threshold The sequential prefetch threshold (SPTH) is set by DB2 at 90% of each virtual buffer pool. This threshold is checked at two different times: • Before scheduling a prefetch operation. If the threshold has been exceeded, the prefetch is not scheduled. • During buffer allocation for an already-scheduled prefetch operation. If the threshold has been exceeded, the prefetch is canceled.
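To see whether prefetch is being throttled by a buffer pool that is too small, the buffer pool statistics can be checked online; a minimal sketch of the command follows (BP4 is an illustrative buffer pool name):

   -DISPLAY BUFFERPOOL(BP4) DETAIL

The DETAIL output includes counters for prefetch requests and for prefetch disabled because of a shortage of buffers or read engines, which indicate that the buffer pool size or the prefetch activity should be reviewed.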
10.3.2 Synchronous Writes Synchronous writes occur exceptionally, when: • The virtual buffer pool is too small and the immediate write threshold (IWTH, see 10.3.3, “Immediate Write Threshold” on page 108) is exceeded. • More than two DB2 checkpoints have been taken during the execution of a unit of work, and an updated page has not been written out to disk. When the conditions for synchronous write occur, the updated page is written to disk as soon as the update completes.
Table spaces containing pages which are frequently reread and updated should have a high threshold, placing them in a virtual buffer pool with a high DWQT, or high VDWQT. This ensures that pages are reused in storage. The reference value J in Figure 35 on page 110 shows the rate of updates per each write. The higher this rate, the better the page reuse for write is in this virtual buffer pool.
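The thresholds can be changed dynamically with the ALTER BUFFERPOOL command; the following is a sketch with illustrative values:

   -ALTER BUFFERPOOL(BP4) DWQT(70) VDWQT(15)

Raising DWQT and VDWQT keeps updated pages in the pool longer and favors page reuse; lowering them spreads the deferred writes more evenly over time.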
BP4  READ OPERATIONS              QUANTITY   /MINUTE
---------------------------       --------   --------
BPOOL HIT RATIO (%)                  55.12
GETPAGE REQUEST                     221.8K    6534.43
GETPAGE REQUEST-SEQUENTIAL        18427.00     542.99
GETPAGE REQUEST-RANDOM              203.3K    5991.43
SYNCHRONOUS READS                   613.00
SYNCHRON. READS-SEQUENTIAL           64.00
SYNCHRON. READS-RANDOM              549.00
GETPAGE PER SYN.READ-RANDOM         370.36
SEQUENTIAL PREFETCH REQUEST         577.00
DSNB450I =DB2Z TABLESPACE = DSNDB06.SYSCOPY, USE COUNT = 0, GBP-DEP = N
DSNB452I =DB2Z STATISTICS FOR DATASET 1
DSNB453I =DB2Z VP CACHED PAGES -
               CURRENT = 64  MAX = 64  CHANGED = 0  MAX = 0
DSNB455I =DB2Z SYNCHRONOUS I/O DELAYS -
               AVERAGE DELAY = 9  MAXIMUM DELAY = 22  TOTAL PAGES = 3
DSNB456I =DB2Z ASYNCHRONOUS I/O DELAYS -
               AVERAGE DELAY = 1  MAXIMUM DELAY = 1
               TOTAL PAGES = 61  TOTAL I/O COUNT = 2

Figure 36. Display Buffer Pool Data Set Statistics
10.4.1 Asynchronous Writes DB2 writes the log records (the control intervals) from the output buffer to the active log data set when the number of log buffers used reaches the value the installation set for the WRITE THRESHOLD field of installation panel DSNTIPL; see Figure 40 on page 115. The application is not aware of these writes. 10.4.2 Synchronous Writes Synchronous writes usually occur at commit time when an application has updated data.
Figure 37. Log Record Path to Disk

Figure 38. Forced Log Writes at Commit (End of Phase 1, Beginning of Phase 2)
10.4.5 Improving Log Write Performance In this section we present some considerations on choices to improve log write performance. LOG OUTPUT BUFFER Size The OUTPUT BUFFER field of installation panel DSNTIPL lets the system administrator specify the size of the output buffer used for writing active log data sets. This field is shown in Figure 40 on page 115. With DB2 V6, the maximum size of this buffer (OUTBUFF) is 400000 KB.
Figure 39. Minimum Active Log Data Set Distribution

Preformat New Active Log Data Sets
The system administrator, when allocating new active log data sets, can preformat them using the DSNJLOGF utility described in Section 3 of DB2 for OS/390 Utility Guide and Reference, SC26-8967. This avoids the overhead of preformatting the log, which normally occurs at unpredictable times.
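A minimal sketch of a DSNJLOGF job follows; the data set name of the new active log and the load library are illustrative:

   //* Preformat a newly allocated active log data set (illustrative names)
   //PREFORM  EXEC PGM=DSNJLOGF
   //STEPLIB  DD  DSN=DSN610.SDSNLOAD,DISP=SHR
   //SYSPRINT DD  SYSOUT=*
   //SYSUT1   DD  DSN=DB2V610Z.LOGCOPY1.DS04,DISP=SHR

The new data set must then be added to the BSDS before DB2 can use it as an active log.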
records into the input buffer used by the reading process (such as a recovery job or a rollback). From a performance point of view, it is always best for DB2 to obtain the log records from the output buffer. These accesses are reported by DB2 PM; see F in Figure 41 on page 118. The next fastest access for DB2 is the active log; see G in Figure 41. Access to the archive log is not desirable; it can be delayed for a considerable length of time.
If you allow DB2 to create the archive log data sets on RVA disks, you can take advantage of the compression capability offered by the device. Depending on the type of application data DB2 is processing and storing in the log data sets, you could obtain a very good reduction in DASD occupancy with RVA and achieve good recoverability at a reasonable price. This is explained in more detail in DB2 for OS/390 and Data Compression, SG24-5261.
needs values from the statistics report shown in Figure 41: the NOWAIT counter C, and the number of control intervals created in the active log, counter D. Use the following formula:

   avg size of log record in bytes = D * 4096 / C

Using this value to estimate logging needs, plus considering the available device sizes, the DB2 system administrator can update the output of the installation CLIST to modify the calculated values for active log data set sizes.
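As a worked example under assumed counter values: if the statistics interval shows C = 1,000,000 NOWAIT log records and D = 50,000 control intervals created, then:

   avg size of log record = 50000 * 4096 / 1000000 = approximately 205 bytes

At that rate, a workload writing 1,000,000 log records per interval fills roughly 200 MB of active log (50,000 control intervals of 4 KB), which can be compared with the candidate active log data set sizes.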
Chapter 11. I/O Performance and Monitoring Tools This chapter addresses I/O performance reporting and monitoring tools in relation to storage management in a DB2 environment. The following tools are described: • DB2 Performance Monitor (DB2 PM) • Resource Monitoring Facility (RMF) • IBM Extended Facilities Product (IXFP) for RVA monitoring Figure 42 on page 119 illustrates the scope of these tools.
Statistics and accounting traces are collected in most installations. A performance trace is collected when a specific problem has to be investigated. Activating the performance trace has a significant impact on DB2 subsystem performance. The user can cause more or less information to be collected by these traces, by specifying trace classes to be activated. An accounting trace provides information at an identifier level. Examples of identifiers are plans, packages, users, or connection types.
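As a sketch, these traces are typically started with commands like the following; the classes shown are common choices, and the destination here is SMF:

   -START TRACE(STAT)  CLASS(1) DEST(SMF)
   -START TRACE(ACCTG) CLASS(1,2,3) DEST(SMF)

Accounting classes 2 and 3 add in-DB2 elapsed and CPU times and suspension (wait) times, which is the information needed to relate application response time to I/O activity.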
BP4                        TOTAL
---------------------   --------
BPOOL HIT RATIO (%)            50
GETPAGES                   300350
BUFFER UPDATES                  0
SYNCHRONOUS WRITE               0
SYNCHRONOUS READ              754
SEQ. PREFETCH REQS            702
LIST PREFETCH REQS              0
DYN. PREFETCH REQS           3944
PAGES READ ASYNCHR.        148634
HPOOL WRITES                    0
HPOOL WRITES-FAILED             0
PAGES READ ASYN-HPOOL           0
HPOOL READS                     0
HPOOL READS-FAILED              0

Figure 44. DB2 PM Accounting, Buffer Pool Section
the accounting report. Some additional information is calculated in this report; for example, Figure 45 shows the average number of pages for each type of prefetch read.

BP4  READ OPERATIONS              QUANTITY   /MINUTE
---------------------------       --------   --------
BPOOL HIT RATIO (%)                  55.12
GETPAGE REQUEST                     221.8K    6534.43
GETPAGE REQUEST-SEQUENTIAL        18427.00     542.99
GETPAGE REQUEST-RANDOM              203.3K    5991.43
SYNCHRONOUS READS                   613.00
SYNCHRON. READS-SEQUENTIAL           64.00
SYNCHRON. READS-RANDOM              549.00
Line C in Figure 46 shows BSDS accesses. Just like the active log accesses, these accesses are mainly writes. The block of lines starting with D shows the volume of records created in the active log and offloaded by the archiving process. The block of lines starting with E shows archive volume mounting information.

LOG ACTIVITY                     QUANTITY  /MINUTE  /THREAD  /COMMIT
---------------------------      --------  -------  -------  -------
READS SATISFIED-OUTPUT BUFF  A       0.00     0.00      N/C     0.00
READS SATISFIED-OUTP.
Table 26. Trace Requirements for the I/O Activity Reports

I/O Activity Report              DB2 Trace Class   IFCID
Buffer Pool Performance          4                 6, 7, 8, 9, 10, 105, 107
EDM Pool Performance             4                 29, 30, 105, 107
Active Log Performance           5                 34, 35, 36, 37, 38, 39
Archive Log/BSDS Performance     5                 34, 35, 36, 37, 40, 41, 114, 115, 116, 119, 120
Cross Invalidation Performance   21                105, 107, 255

BUFFER POOL                        TOTALS       AET
----------------------------     --------  --------
TOTAL I/O REQUESTS                      51        0.
Most of the RMF reports are issued either at the central processor complex (CPC) level for a global view, or at the logical partition (LPAR) level for each MVS image view. The following two processes can be used to extract relevant data from the various RMF reports: 1. Determine which fields, from which reports, are useful for a DB2 performance analyst. Most of this data is also required by IBM storage specialists for disk evaluation. 2.
There are three Cache Subsystem Activity reports: • Cache Subsystem Status This report gives the amount of cache storage and nonvolatile storage (NVS) installed, as well as the current status of the cache. • Cache Subsystem Overview This report gives the number of I/O requests sent to the control unit and their resolution in the cache (hits). • Cache Subsystem Device Overview This report gives, for all online volumes attached to the subsystem, the specific utilization of the cache.
C A C H E   S U B S Y S T E M   A C T I V I T Y
OS/390 REL. 02.05.00    SYSTEM ID IPO4    RPT VERSION 2.4.0
SUBSYSTEM 3990-06   TYPE-MODEL 3990-006   CU-ID 0395   SSID 0080
START 11/19/1998-10.30.00   END 11/19/1998-11.30.00   INTERVAL 001.00.00
CDATE 11/19/1998   CTIME 10.30.01   CINT 00.59.
Cache Subsystem Device Overview This report lists the devices known by the subsystem at the beginning of the interval. Each line displays statistics for a specific (functional) volume. The I/O rate, divided into two groups (CACHE HIT and DASD I/O), shows the different types of I/O activity in each group. The *ALL line consolidates values at the subsystem level. The fields to review are, in decreasing order of importance: • I/O RATE, number of I/O requests per second.
• Disconnect time, which covers all internal LCU delays primarily due to a prerequisite process in the storage server; for instance, staging activity for cache misses, or PPRC propagation of the I/O to the secondary site • Connect time, which covers effective data transfer activity Queuing time is called I/O supervisor queuing (IOSQ) time in RMF reports, and covers delays involved by aggregate application interactions on the same volume for that LPAR.
• AVG CONN TIME, connect time mainly for data transfer. To estimate path percentage utilization demand, calculate:

   (AVG CONN TIME * DEVICE ACTIVITY RATE / 1000) * 100

As an example, an average connect of 4.5 ms with 1200 I/O/sec gives 540%, which means a minimum of six paths is required for this workload level. Checking the channel path activity reports for the different LPARS sharing this LCU enables you to determine the balance of the activity demand (540%) over the current defined path configuration.
I/O OS/390 Q U E U I N G A C T I V I T Y SYSTEM ID IPO4 START 11/19/1998-10.30.00 INTERVAL 001.00.00 REL. 02.05.00 SECONDS TOTAL SAMPLES = 3600 IOP 16.56.05 ACT: POR 00 RPT VERSION 2.4.0 ACTIVITY RATE AVG Q LNGTH 1615.962 END 11/19/1998-11.30.00 CYCLE 1.000 IODF = 40 CR-DATE: 11/05/98 CR-TIME: 0.07 LCU CONTENTION RATE 0026 0.534 DELAY Q LNGTH 0.39 % ALL CH PATH CONTROL UNITS BUSY 0.21 0026 0027 0031 0.617 0.57 2.06 002E 002F 0063 0.106 0.02 0.01 0034 0035 0071 2.206 0.02 0.
C H A N N E L P A T H A C T I V I T Y PAGE OS/390 REL. 02.05.00 SYSTEM ID IPO4 RPT VERSION 2.4.0 IODF = 40 CR-DATE: 11/05/98 CR-TIME: 16.56.05 ACT: POR CHANNEL PATH UTILIZATION(%) CHANNEL PATH ID TYPE SHR 08 0C 0D 14 15 16 17 18 91 92 94 95 96 98 99 9A B3 B4 B5 B6 B7 B8 B9 BA E6 E7 E8 E9 EA EB OS IS IS CN CN CN CN CN BL BL BL BL BL CN CN CN CN CN CN CN CN CN CN CN CN CN CN CN CN CN Y D D D D D D D D D D D D D D D D D D Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y Y D Y D Y D Y PARTITION 0.
same view as the cache reports for each LCU; and second, between all LCUs, to get the view of the whole storage server. Some tools, such as IXFP, offer consolidated data. In the case study there is only one active LPAR, so only LCU-level consolidation is done. 11.2.2.2 RMF Reporting at Storage Group Level The RMF DEVICE report, when edited at the Storage Group level, shows the Storage Group's overall performance, from which it is easy to deduce the parallelism required.
2. Starting RMF2SC on the PC, using appropriate options to select the reports to be converted. 3. Using your spreadsheet program to manipulate the spreadsheet data. Details of how to do this depend on which program you are using, but in all cases, the cells and ranges that you can reference are as described in the OS/390 RMF Report Analysis, SC28-1950. RMF2SC is installed on the host along with the rest of the MVS components of RMF.
Figure 53. DB2 I/O (diagram: the DB2 application issues GETPAGE and BUFFER UPDATE requests for rows against the virtual buffer pool; pages are read from and written to the disk storage server cache; tracks are staged and destaged between the cache and the disks)

11.3 IXFP Monitoring
The IBM Extended Facilities Product (IXFP) is host software that helps to manage the RVA. IXFP provides an additional level of cache control for the RVA beyond that provided by the local operator panel.
own report writer and graphics display tools. Refer to Chapter 10 of IXFP Subsystem Reporting, SC26-7184, as the reference for the RVA monitoring and reporting facilities. Standard IXFP reports require the use of a SAS statistical environment from SAS Institute, Incorporated. IBM Storage Division specialists can also provide REXX programs with some basic reporting facilities.
For storage specialists, we recommend monitoring the FREE SPACE COLLECTION LOAD, which represents the amount of back-end physical space collected for free space consolidation that did not yield available free space. This is the average percent full of the collected storage areas.
• WRITE PER SEC is the average number of write operations per second for the subsystem.
• I/O PER SEC is the average number of I/O operations per second for the subsystem. This field may not be equal to the sum of READ PER SEC and WRITE PER SEC, either because it includes other I/O operations, such as sense commands, or because there may be more than one read or write operation per channel program. The accounting of I/Os per second is based on the number of Locate Record CCWs found in the channel programs.
IBM ITSO POKEEPSIE XSA/REPORTER 16:47 Wednesday, February 17, 1999 17FEB1999 16:47:05 SIBSPUT V2 R1 L1 SPACE UTILIZATION SUMMARY REPORT (NUMBER OF FUNCTIONAL DEVICES: 256) SUBSYSTEM 20395 0 FUNCTIONAL CAPACITY FDID DEV VOLSER ADDR ---- ---- -----0000 2B00 RV2B00 0001 2B01 RV2B01 00FC 2BFC RV2BFC 00FD 2BFD RV2BFD 00FE 2BFE RV2BFE 00FF 2BFF RV2BFF (MB)-- % FUNCTIONAL CAPACITY T/P DEVICE FUNCT NOT TYPE CAP (MB) ALLOC STORED STORED --- ------ -------- -------- -------- -------P 33903 2838.0 1708.3 1568.
Chapter 12. Case Study The case study applies all previously described monitoring facilities in a common project from the DB2 and storage perspectives. This approach introduces some redundancy. The good news is that redundancy allows cross-checking information among various sources. The environment is a very large DB2 query on partitioned table spaces over two RVA storage servers. Activity is exclusively read oriented. Only one DB2 LPAR accesses the data; there is no data sharing.
TIMES/EVENTS     APPL (CLASS 1)    DB2 (CLASS 2)
ELAPSED TIME     37:41.001054      37:40.386069    A
CPU TIME         1:06:21.580844    1:06:21.549125  B
 TCB             14:02.087183      14:02.055513    C
 TCB-STPROC      0.000000          0.000000
 PAR.TASKS       52:19.493661      52:19.493612    D
SUSPEND TIME     N/A               30:55.990291    E
 TCB             N/A               3:48.273531     F
 PAR.TASKS       N/A               27:07.716760    G
NOT ACCOUNT.     N/A               19:50.057025    H
DB2 ENT/EXIT     N/A               217
EN/EX-STPROC     N/A               0
DCAPT.DESCR.     N/A               N/A
LOG EXTRACT.     N/A               N/A
Figure 57.
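The not-accounted time at H can be cross-checked from the other values in the figure: it is the class 2 elapsed time that is covered neither by class 2 TCB CPU time nor by class 3 suspensions of the originating task. A minimal sketch of that check (Python, illustrative only; the time-conversion helper is an assumption, not a DB2 PM facility):

def hhmmss_to_sec(t):
    # Convert a DB2 PM time such as "37:40.386069" or "1:06:21.549125" to seconds.
    parts = [float(p) for p in t.split(":")]
    sec = 0.0
    for p in parts:
        sec = sec * 60 + p
    return sec

class2_elapsed = hhmmss_to_sec("37:40.386069")   # A, DB2 (class 2) elapsed
class2_tcb_cpu = hhmmss_to_sec("14:02.055513")   # C, class 2 TCB CPU time
class3_tcb_susp = hhmmss_to_sec("3:48.273531")   # F, class 3 TCB suspend time
not_accounted = class2_elapsed - class2_tcb_cpu - class3_tcb_susp
print(round(not_accounted, 6))                   # 1190.057025 sec = 19:50.057025, matching H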
The explanation can be found in the driver program used to run the query. This program does a CONNECT RESET automatically after each query.

SQL DCL      TOTAL
LOCK TABLE   0
GRANT        0
REVOKE       0
SET SQLID    0
SET H.VAR.   0
SET DEGREE   1   L
SET RULES    0
CONNECT 1    0
CONNECT 2    1   M
SET CONNEC   0
RELEASE      0
CALL         0
ASSOC LOC.   0
ALLOC CUR.   0
DCL-ALL      2
Figure 59. DB2 PM Accounting, SQL DCL Section

QUERY PARALLEL.
GLOBAL DDF ACTIVITY QUANTITY /MINUTE /THREAD /COMMIT --------------------------- -------- ------- ------- ------DBAT QUEUED-MAXIMUM ACTIVE N/P N/P N/P N/A CONV.DEALLOC-MAX.
BP4                   TOTAL
BPOOL HIT RATIO (%)   50
GETPAGES              300350   F
BUFFER UPDATES        0
SYNCHRONOUS WRITE     0
SYNCHRONOUS READ      754
SEQ. PREFETCH REQS    702
LIST PREFETCH REQS    0
DYN. PREFETCH REQS    3944
PAGES READ ASYNCHR.   148634
Figure 63. DB2 PM Accounting, BP4 Section

12.1.3 Suspend Times
The class 3 section of the accounting report shows suspend (wait) times. This is shown in Figure 64 on page 145. These values are only shown if the accounting trace is started with class 3.
HIGHLIGHTS
THREAD TYPE   : ALLIED
TERM.CONDITION: NORMAL
INVOKE REASON : DEALLOC
COMMITS       : 2
ROLLBACK      : 0
INCREM.BINDS  : 0
UPDATE/COMMIT : 0.00
SYNCH I/O AVG.: 0.002903   D
PROGRAMS      : 0
PARALLELISM   : CP
Figure 65. DB2 PM Accounting Highlights

12.1.3.2 Asynchronous Read I/O
B in Figure 64 on page 145 shows 143669 asynchronous reads. This corresponds to the sum of all prefetch operations (sequential, dynamic, and list prefetch). The suspend time for these reads is 29 minutes and 59.
TOT4K                   TOTAL
BPOOL HIT RATIO (%)     2         E
GETPAGES                6135875
BUFFER UPDATES          48
SYNCHRONOUS WRITE       0
SYNCHRONOUS READ        19559     F
SEQ. PREFETCH REQS      164649    G
LIST PREFETCH REQS      0         H
DYN. PREFETCH REQS      26065     I
PAGES READ ASYNCHR.     5943947   J
HPOOL WRITES            0
HPOOL WRITES-FAILED     0
PAGES READ ASYN-HPOOL   0
HPOOL READS             0
HPOOL READS-FAILED      0
Figure 66. DB2 PM Accounting Buffer Pool Summary

12.1.4 Conclusions
This example shows a very complex and heavy read-only query.
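The hit ratios at E and in Figure 63 can be reproduced from the getpage and page-read counters. A minimal sketch of that arithmetic (Python, purely illustrative), assuming the commonly documented formula of getpages minus synchronously and asynchronously read pages, divided by getpages:

def bpool_hit_ratio(getpages, sync_reads, async_pages_read):
    # Percentage of getpages satisfied without a physical page read.
    return 100.0 * (getpages - sync_reads - async_pages_read) / getpages

tot4k = bpool_hit_ratio(6135875, 19559, 5943947)  # Figure 66: GETPAGES, F, J
bp4 = bpool_hit_ratio(300350, 754, 148634)        # Figure 63: BP4 counters
print(f"{tot4k:.1f}% {bp4:.1f}%")                 # about 2.8% and 50.3%; DB2 PM prints 2 and 50

For the 4 KB pools overall, nearly every getpage required a physical read, which is consistent with the read-only, scan-intensive nature of the query.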
discarding "foreign overhead" from the target study. However, values related to this foreign overhead must be preserved because interactions exist on resource access inside the same computing perimeter. The first step is an overall analysis. Moreover, this first overview allows locating some missing data and finding information from other sources. In this case, some input lacking from the RMF cache analysis reports was compensated by IXFP.
12.2.1.1 Device Activity Report Analysis The RMF report analysis is based on the LCU level and Storage Group level device activity reports. Figure 68 on page 149 shows the extracted summary lines. The Storage Group level line is the average combination of LCUs 46-48 and 4A-4D activities, as the detail volume level display shows (refer to Figure 49 on page 127 as an example).
1. For LCUs 0046-0049: 07, 08, 12, 8B, 91, 1E, C8, D0
2. For LCUs 004A-004D: 0A, 8C, 95, C3, 15, 8F, C1, D2
This results in a total of 16 different paths. Look at their busy percentages in the channel path activity report.
I/O Q U E U I N G A C T I V I T Y ( RMF Report extract for first RVA) 0046 0.000 0.00 0.00 2B00 2B01 07 08 12 8B 91 1E C8 D0 1.988 1.980 1.985 1.969 1.961 1.967 1.966 1.964 0.07 0.11 0.04 0.09 0.13 0.04 0.07 0.07 0.15 0.11 0.02 0.16 0.24 0.31 0.16 0.24 0.00 0.10 0.09 0.09 0.19 0.
12.2.1.4 Cache Subsystem Activity Reports Analysis
These reports address two questions:
1. Is the cache overloaded?
2. How efficient are the staging and destaging processes?
All %READ fields in the reports show 100%, so the I/O activity is read-only. Focus the analysis on LCU 0046, associated with CU-ID 2B00; refer to Figure 71 on page 151. In the cache subsystem device overview report, the I/O RATE from the host is 18.2, of which 18.0 are reads, and the READ H/R is 0.991, which is good.
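To see why such a hit ratio keeps the back-end disks nearly idle, the miss rate can be worked out directly from these two figures. A small illustrative calculation (Python), assuming all 18.0 I/Os per second are reads and the read hit ratio is the reported 0.991:

read_rate = 18.0        # read I/Os per second reported for LCU 0046
read_hit_ratio = 0.991  # READ H/R from the cache subsystem device overview

misses_per_sec = read_rate * (1.0 - read_hit_ratio)
print(round(misses_per_sec, 2))  # about 0.16 read misses per second have to be staged from disk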
12.2.2 IXFP View
IXFP builds its RVA statistics reports at the level of the storage server: it has a hardware standpoint, and so consolidates the activities of all sharing LPARs. This standpoint also gives IXFP good knowledge of the data handled by each channel program. Information on the mapping of functional volumes to physical disk space is also available. For the case study, where a single query was the only activity against both RVA Storage Groups, the IXFP information complements the RMF view. 12.2.2.
12.2.2.2 Cache Effectiveness Overall Summary
Figure 73 on page 153 shows an extract of these reports for both RVAs. It contains information similar to RMF, but with more detail on the caching algorithms, and it also explains the origin of the observed disconnect times. The I/O demand that the host submits to the RVAs and the staging effectiveness are analyzed:
• Analysis of host I/Os. Both RVAs receive a similar I/O activity demand, which is read-only.
XSA/REPORTER SUBSYSTEM 20395 DISK ARRAY CAPACITY (MB) ------------117880.2 NET CAPACITY LOAD(%) TEST PROD OVERALL ----- ----- ------0.0 56.4 56.4 -------- DISK ARRAY --------- PHYSICAL CAP USED (MB) -- COMP SHARED UNIQUE TOTAL RATIO -------- -------- -------- ----0.0 65964.1 65964.1 3.1 0.0 65964.1 65964.1 3.1 COLL FREE SPACE (%) TEST PROD OVERALL ----- ----- ------0.0 42.4 42.4 UNCOLL FREE SPACE(%) TEST PROD OVERALL ----- ----- ------0.0 1.3 1.
DB2 PM I/O SUMMARY                  Elapsed (sec): 2260

BUFFER POOL Tot4K      requests   per sec   ACCOUNTING CLASS 3   wait (elapsed sec)   ms per read
getpages                6135875   2714.52
synchronous reads         19559      8.65   Synchronous I/O                   56.77   2.9
sequential prefetch      164649     72.84
dynamic prefetch          26065     11.53
total prefetch           190714     84.37   Other Read I/O                  1799.16   9.4
Total read I/O           210273     93.03                                             8.8
Figure 75.
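The summary values are simple derivations from the accounting counters shown earlier: the per-second rates are the event counts divided by the class 2 elapsed time, and the milliseconds per read are the class 3 wait times divided by the corresponding number of read requests. A minimal Python sketch that reproduces Figure 75 (illustrative only; the class 2 elapsed time of 2260.386 seconds is taken from Figure 57):

elapsed = 2260.386            # class 2 elapsed time, seconds
sync_reads = 19559
seq_prefetch = 164649
dyn_prefetch = 26065
total_prefetch = seq_prefetch + dyn_prefetch   # 190714
total_read_io = total_prefetch + sync_reads    # 210273

sync_wait = 56.77             # class 3 synchronous I/O wait, seconds
other_read_wait = 1799.16     # class 3 other read I/O wait, seconds

print(round(6135875 / elapsed, 2))          # 2714.53 getpages/sec (2714.52 in Figure 75, rounding)
print(round(total_read_io / elapsed, 2))    # 93.03 read I/Os per second
print(round(1000 * sync_wait / sync_reads, 1))                      # 2.9 ms per synchronous read
print(round(1000 * other_read_wait / total_prefetch, 1))            # 9.4 ms per prefetch request
print(round(1000 * (sync_wait + other_read_wait) / total_read_io, 1))  # 8.8 ms per read overall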
field unit RVA_1 1st LCU 2nd LCU 3rd LCU 4th LCU tot RVA_2 1st LCU 2nd LCU 3rd LCU 4th LCU tot RVA CACHE ACTIVITY normal sequent normal sequent destage read read staging staging async io/s io/s trk/s trk/s io/s rate io/s icl io/s sequent read hit ratio 18.2 10.1 15.9 m 44.2 0.4 0.1 2.5 m 3.0 17.8 10.0 13.5 m 41.3 0.2 0 0.1 m 0.3 41.1 23 30.3 m 94.4 0 0 0 m 0,0 0 0 0 m 0.0 0.999 1 1 m 4.2 15.9 16.8 m 36.9 0.1 4.2 0.0 m 4.3 4.2 11.8 16.7 m 32.7 0 0.1 0 m 0.1 9.6 26.8 38.6 m 75.0 0 0 0 m 0.
Figure 79. DB2 I/O rates observed in the case study (diagram: DB2 application, virtual buffer pool, storage server cache, and disk, as in Figure 53, annotated with GETPAGE 2715 rows/sec, READ 93 requests/sec, and STAGE 216 tracks/sec)
Part 4. Appendixes
Appendix A. Test Cases for DB2 Table Space Data Sets
This appendix shows the different test cases generated during the writing of this publication, related to the allocation of table spaces (and indexes). The following tests are documented:
TEST CASE 1  Appendix A.2, “Partitioned Table Space, DB2 Defined, Without SMS” on page 162
TEST CASE 2  Appendix A.3, “Partitioned Table Space, User Defined, Without SMS” on page 164
TEST CASE 3  Appendix A.
A.2 Partitioned Table Space, DB2 Defined, Without SMS Test case 1 illustrates how to allocate a DB2 partitioned table space using DB2 defined data sets, without SMS. The purpose is to distribute the different data sets across multiple volumes and access paths, in order to obtain maximum benefit from DB2 parallelism. This example allocates a partitioned table space with 16 partitions on eight volumes. Two partitions are placed on each volume. A.2.
A.2.3 Create the Table Space
The CREATE TABLESPACE statement is shown in Figure 83 on page 163. In this statement, each partition is directed to a specific STOGROUP.

CREATE TABLESPACE PART1 IN BPAOLOR1
  USING STOGROUP SGRV1CU0
    PRIQTY 20 SECQTY 20
    ERASE NO
  NUMPARTS 16
  (PART 1 USING STOGROUP SGRV1CU0 PRIQTY 720 SECQTY 720,
   PART 2 USING STOGROUP SGRV1CU1 PRIQTY 720 SECQTY 720,
   .... ..... ...
A.3 Partitioned Table Space, User Defined, Without SMS Test case 2 illustrates how to allocate a DB2 partitioned table space using user defined data sets, without SMS. The objective and the table space are the same as for Appendix A, section A.2, “Partitioned Table Space, DB2 Defined, Without SMS” on page 162. A.3.
CREATE TABLESPACE PART2 IN BPAOLOR1
  USING STOGROUP SGRV1CU0
  NUMPARTS 16
  (PART 1 USING VCAT DB2V610Z,
   PART 2 USING VCAT DB2V610Z,
   PART 3 USING VCAT DB2V610Z,
   PART 4 USING VCAT DB2V610Z,
   PART 5 USING VCAT DB2V610Z,
   .... .....
Figure 87. Test Case 2 - CREATE TABLESPACE

A.3.5 Display a Volume
Volumes can be displayed, to ensure that the allocation was done correctly. As an example, Figure 88 on page 165 shows the contents of volume RV2CU1.
databases, BCUSTOMR, BSERVICE, BTRANS are allocated on SGDBFAST. • All table spaces in the development system will be allocated on SGDBTEST. • The development databases are subject to migration by HSM. Those databases with a name starting with B will get preferential treatment. A.4.1 Storage Classes Using ISMF, option 5.
/*************************************************/ /* STORAGE CLASS */ /* FILTLIST DEFINITIONS */ /*************************************************/ FILTLIST SCDBMED INCLUDE(DB2P.DSNDB%.**) EXCLUDE(DB2P.DSNDB%.BCUSTOMR.**, DB2P.DSNDB%.BSERVICE.**, DB2P.DSNDB%.BTRANS.**, DB2P.DSNDB%.BACCTS.**) FILTLIST SCDBCRIT INCLUDE(DB2P.DSNDB%.BACCTS.**) FILTLIST SCDBFAST INCLUDE(DB2P.DSNDB%.BCUSTOMR.**, DB2P.DSNDB%.BSERVICE.**, DB2P.DSNDB%.BTRANS.**) FILTLIST SCDBTEST INCLUDE(DB2D.DSNDB%.
MANAGEMENT CLASS DEFINE Page 2 of 5 Command ===> CDS Name . . . . . . . . . : SMS.SCDS1.SCDS Management Class Name . . . : MCDB21 Partial Release . . . . . . : CONDITIONAL Migration Primary Level 1 Command Attributes Days Non-usage . : 7 Days Date/Days . : 14 or Auto Migrate . : BOTH GDG Management Attributes # GDG Elements on Primary : Rolled-off GDS Action . . : Figure 91. Test Case 3 - ISMF Management Class Definition Next, using ISMF, option 7.
A.4.3 Storage Group Four Storage Groups, SGDBFAST, SGDB20, SGDBCRIT, and SGDBTEST were defined using ISMF, option 6.2. Figure 93 on page 169 shows the associated panel used by the storage administrator for the definition of POOL Storage Groups: POOL STORAGE GROUP DEFINE Command ===> SCDS Name . . . . . : SMS.SCDS1.SCDS Storage Group Name : SGDBTEST To DEFINE Storage Group, Specify: Description ==> STORAGE GROUP FOR DB2 TEST TABLE SPACES ==> Auto Migrate . . Y (Y, N, I or P) Migrate Sys/Sys Group Name . .
example of the ISMF panel option 6.4 used by the storage administrator to define volumes.

Table 28. Test Case 3 - Storage Group Volumes
  SMS STORAGE GROUP   VOLUMES
  SGDBTEST            RV1CU3 RV2CU3 RV3CU3
  SGDB20              RV1CU1
  SGDBFAST            RV1CU0
  SGDBCRIT            RV2CU0

STORAGE GROUP VOLUME SELECTION Command ===> CDS Name . . . . . : SMS.SCDS1.SCDS Storage Group Name : SGDBTEST Storage Group Type : POOL Select One of the following Options: 2 1. Display - Display SMS Volume Statuses (Pool only) 2.
Figure 97 on page 171 shows the output from the executed batch job: PAGE 0001 5695-DF175 DFSMSDSS V1R5.0 DATA SET SERVICES 1999.047 19:40 CONVERTV 00060000 DDNAME(DASD1,DASD2,DASD3) 00070001 SMS 00080001 ADR101I (R/I)-RI01 (01), TASKID 001 HAS BEEN ASSIGNED TO COMMAND 'CONVERTV' ADR109I (R/I)-RI01 (01), 1999.047 19:40:33 INITIAL SCAN OF USER CONTROL STATEMENT ADR016I (001)-PRIME(01), RACF LOGGING OPTION IN EFFECT FOR THIS TASK ADR006I (001)-STEND(01), 1999.
Figure 99 on page 172 shows the test case results for the same pattern table space name against the updated SCDS. This time, with the relevant source code in place, the SMS qualification is successful, and Management Class and Storage Group attributes are also assigned: ACS TESTING RESULTS CDS NAME : SMS.SCDS1.SCDS ACS ROUTINE TYPES: DC SC MC SG ACS TEST LIBRARY : PAOLOR3.JCL.CNTL ACS TEST MEMBER EXIT CODE RESULTS ------------------ -----------------------------------DESCRIPTION: DB2D.DSNDBC.BTRANS.TEST1.
CREATE DATABASE BTRANS STOGROUP SGRV1CU0 ... CREATE DATABASE TESTDB STOGROUP SGRV1CU0 ... Figure 101. Test Case 3 - CREATE DATABASE Extract Figure 102 on page 173 shows an extract of the CREATE TABLESPACE statements used for the purposes of this exercise: CREATE TABLESPACE CHECKS IN BTRANS USING STOGROUP SGRV1CU0 PRIQTY 1024000 SECQTY 512000 .... CREATE TABLESPACE TEST3 IN BTRANS USING STOGROUP SGRV1CU0 PRIQTY 1024000 SECQTY 512000 ....
Using the IDCAMS LISTCAT command, it can be seen from this extract in Figure 104 on page 174, that the catalog retains the SMS attributes assigned at data set allocation time, and shows the space attributes that were allocated in cylinders: CLUSTER ------- DB2D.DSNDBC.BTRANS.CHECKS.I0001.A001 IN-CAT --- UCAT.VSBOX09 HISTORY DATASET-OWNER------HAIMO CREATION--------1999.048 RELEASE----------------2 EXPIRATION------0000.
The FILTLIST definitions for SCDBFAST and SCDBCRIT were coded to ensure that table spaces assigned with these attributes would not be subject to HSM migration. Any attempt to deviate from this naming pattern would result in a null Storage Class being assigned. A.5.2 Management Class The Management Classes from the previous exercise were used, but the ACS code was amended to reflect the appropriate naming patterns. Figure 106 on page 176 shows an extract of the FILTLIST definitions: A.5.
/*************************************************/ /* STORAGE CLASS */ /* FILTLIST DEFINITIONS */ /*************************************************/ FILTLIST SCDBMED INCLUDE(DB2P.DSNDB%.*.M*.**) FILTLIST SCDBCRIT INCLUDE(DB2P.DSNDB%.*.C0*.**) FILTLIST SCDBFAST INCLUDE(DB2P.DSNDB%.*.F0*.**) FILTLIST SCDBTEST INCLUDE(DB2D.DSNDB%.*.T*.**, DB2T.DSNDB%.*.T*.
A.5.4 DB2 Definitions
1. Three STOGROUPs were defined on system DB2P:
   • SGBTRANS with VOLUMES(*)
   • SGCUSTMR with VOLUMES(*)
   • SGBRANCH with VOLUMES(*)
2. One STOGROUP was defined on system DB2D:
   • SGTEST with VOLUMES(*)
3. Three databases were defined on system DB2P:
   • BTRANS, using STOGROUP SGBTRANS
   • CUSTOMER, using STOGROUP SGCUSTMR
   • BRANCH, using STOGROUP SGBRANCH
4. One database was defined on system DB2D:
   • TESTRANS, using STOGROUP SGTEST
5.
CLUSTER ---------- DB2P.DSNDBC.BRANCH.C0BRCH01.I0001.A001 IN-CAT --- UCAT.VSBOX09 HISTORY SMSDATA STORAGECLASS ---SCDBCRIT MANAGEMENTCLASS---MCDB20 DATACLASS --------(NULL) LBACKUP ---0000.000.0000 VOLUME VOLSER------------RV3CU3 CLUSTER ------- DB2P.DSNDBC.BTRANS.M1CHECK.I0001.A001 IN-CAT --- UCAT.VSBOX09 HISTORY SMSDATA STORAGECLASS ----SCDBMED MANAGEMENTCLASS---MCDB21 DATACLASS --------(NULL) LBACKUP ---0000.000.0000 VOLUME VOLSER------------RV3CU1 CLUSTER ------- DB2P.DSNDBC.CUSTOMER.F0CUST01.I0001.
A.6.1 Define Volumes to SMS Storage Group
The SMS Storage Group SGDBTEST was expanded to contain eight disk volumes. Figure 108 on page 179 shows the ISMF panel, option 6.1, using the LISTVOL line command to display the assigned volumes. Table 30.
CREATE TABLESPACE PART1 IN BPAOLOR1 USING STOGROUP SGRV1CU0 PRIQTY 20 SECQTY 20 ERASE NO NUMPARTS 8 (PART 1 USING STOGROUP SGRV1CU0 PRIQTY 1200000 SECQTY 1200000, PART 2 USING STOGROUP SGRV1CU0 PRIQTY 1200000 SECQTY 1200000, ......................... PART 8 USING STOGROUP SGRV1CU0 PRIQTY 1200000 SECQTY 1200000) ......, Figure 110. Test Case 5 - CREATE TABLESPACE Extract A.6.4 Data Set Allocation Results The following results were displayed once the table space was allocated.
VOLUME LIST Command ===> Scroll ===> HALF Entries 1-8 of 8 Data Columns 3-8 of 40 Enter Line Operators below: LINE VOLUME FREE % ALLOC OPERATOR SERIAL SPACE FREE SPACE ---(1)---- -(2)-- --(3)-- (4)- --(5)-RV1CU1 1380300 50 1391200 RV1CU2 1380300 50 1391200 RV1CU3 1380300 50 1391200 RV2CU2 1380300 50 1391200 RV2CU3 1380300 50 1391200 RV3CU1 1380300 50 1391200 RV3CU2 1380300 50 1391200 RV3CU3 1380300 50 1391200 ---------- ------ ----------- BOTTOM OF FRAG INDEX -(6)0 0 0 0 0 0 0 0 DATA LARGEST FREE EXTENT
on page 182 shows the volume distribution. Figure 114 on page 182 shows the ISMF panel Storage Group list displaying the eight Storage Groups. Table 31. Test Case 6 - Storage Group Volumes SMS STORAGE GROUP VOLUMES SGDBA001 RV1CU0 SGDBA002 RV1CU2 SGDBA003 RV2CU0 SGDBA004 RV2CU1 SGDBB001 RV2CU3 SGDBB002 RV3CU0 SGDBB003 RV3CU2 SGDBB004 RV3CU3 STORAGE GROUP LIST Command ===> Scroll ===> HALF Entries 1-8 of 8 Data Columns 3-7 of 40 CDS Name : SMS.SCDS1.
/*******************************************/ /* STORAGE GROUP SELECTION ROUTINE */ /*******************************************/ SELECT WHEN ((&DSN(4) = 'ALPHA') AND (&LLQ EQ 'A001')) SET &STORGRP = 'SGDBA001' WHEN ((&DSN(4) = 'ALPHA') AND (&LLQ EQ 'A002')) SET &STORGRP = 'SGDBA002' WHEN ((&DSN(4) = 'ALPHA') AND (&LLQ EQ 'A003')) SET &STORGRP = 'SGDBA003' WHEN ((&DSN(4) = 'ALPHA') AND (&LLQ EQ 'A004')) SET &STORGRP = 'SGDBA004' WHEN ((&DSN(4) = 'BETA') AND (&LLQ EQ 'A001')) SET &STORGRP = 'SGDBB001' WHEN (
CREATE TABLESPACE ALPHA IN BPAOLOR1 USING STOGROUP SGDBA000 .... NUMPARTS 4 (PART .... 1 USING STOGROUP SGDBA000 CREATE TABLESPACE BETA IN BPAOLOR1 USING STOGROUP SGDBB000 .... NUMPARTS 4 (PART .... 1 USING STOGROUP SGDBB000 Figure 116. Test Case 6 - CREATE TABLESPACE Extract DATA SET LIST Command ===> Scroll ===> CSR Entries 1-7 of 8 Data Columns 3-5 (and 17) Enter Line Operators below: LINE OPERATOR DATA SET NAME ---(1)---- ------------(2)-----------DB2P.DSNDBD.BPAOLOR1.ALPHA. I0001.A001 DB2P.
Appendix B. Test Cases for DB2 Recovery Data Sets This appendix shows a selection of test cases generated during the writing of this publication for the definition of DB2 recovery data sets. The following tests are documented: 1. Allocation of BSDS and active logs using SMS. 2. Allocation of archive logs using SMS. 3. Allocation of image copies using SMS. All tests were performed under laboratory conditions, and are presented as examples of the options and methodology used to achieve the end results. B.
STORAGE CLASS DEFINE Page 2 of 2 Command ===> SCDS Name . . . . . : SMS.SCDS1.SCDS Storage Class Name : SCDBACTL To DEFINE Storage Class, Specify: Guaranteed Space . . . Guaranteed Synchronous CF Cache Set Name . . CF Direct Weight . . . CF Sequential Weight . . . . Write . . . . . . . . . . . . . . . . . . . . Y . N . . .
/*********************************************/ /* MANAGEMENT CLASS */ /* FILTLIST DEFINITIONS */ /*********************************************/ FILTLIST ACTLOG INCLUDE(DB2*.BSDS*.**, DB2*.LOGCOPY*.DS*) /*********************************************/ /* SELECTION ROUTINE FOR BSDS & ACTIVE LOGS */ /*********************************************/ IF &DSN EQ &ACTLOG THEN DO SET &MGMTCLAS = 'MCDBACTL' EXIT END Figure 120. Management Class Routine Extract for BSDS and Active Logs B.1.
Figure 121 on page 188 shows an extract of the Storage Group ACS routine.
ACS TESTING RESULTS CDS NAME : SMS.SCDS1.SCDS ACS ROUTINE TYPES: DC SC MC SG ACS TEST LIBRARY : PAOLOR3.JCL.CNTL ACS TEST MEMBER EXIT CODE RESULTS ------------------ -----------------------------------DESCRIPTION: DB2P.BSDS01.DATA SMSTEST1 0 DC = NULL VALUE ASSIGNED 0 SC = SCDBACTL 0 MC = MCDBACTL 0 SG = SGDB2PLG ACS TESTING RC: 00 Figure 123. ISMF Test Result for BSDS (2) B.1.5 Data Set Allocation Results Following the activation of the new SMS configuration, a number of data sets were defined.
Menu Options View Utilities Compilers Help ------------------------------------------------------------------------------DSLIST - Data Sets Matching DB2P.BSDS* Row 1 of 6 Command ===> Scroll ===> CSR Command - Enter "/" to select action Message Volume ------------------------------------------------------------------------------DB2P.BSDS01 *VSAM* DB2P.BSDS01.DATA RV2CU3 DB2P.BSDS01.INDEX RV2CU3 DB2P.BSDS02 *VSAM* DB2P.BSDS02.DATA RV3CU3 DB2P.BSDS02.
Menu Options View Utilities Compilers Help ------------------------------------------------------------------------------DSLIST - Data Sets Matching DB2P.LOG* Row 1 of 12 Command ===> Scroll ===> CSR Command - Enter "/" to select action Message Volume ------------------------------------------------------------------------------DB2P.LOGCOPY1.DS01 *VSAM* DB2P.LOGCOPY1.DS01.DATA RV1CU1 DB2P.LOGCOPY1.DS02 *VSAM* DB2P.LOGCOPY1.DS02.DATA RV2CU1 DB2P.LOGCOPY1.DS03 *VSAM* DB2P.LOGCOPY1.DS03.DATA RV3CU1 DB2P.
/************************************************/ /* STORAGE CLASS */ /* FILTLIST DEFINITIONS */ /************************************************/ FILTLIST DBSYS INCLUDE(DB2*.BSDS*.**, DB2*.LOGCOPY*.DS*) FILTLIST DBARCH INCLUDE(DB2*.ARCHLOG*.
B.2.3 Storage Group One Storage Group, SGDBARCH, was added for all archive log data sets of the three DB2 subsystems. Three disk volumes, RV1CU0, RV2CU0, and RV3CU0, were defined to the Storage Group as shown in Table 33 on page 193. Figure 130 on page 193 shows an extract of the extended Storage Group ACS routine used in the previous test case, now incorporating the archive logs. Table 33.
The four data sets created were allocated successfully on disk volumes assigned to Storage Group SGDBARCH. Figure 132 on page 194 shows an ISPF data set list to display all data sets with a pattern name of DB2P.ARCH*: Menu Options View Utilities Compilers Help -----------------------------------------------------------------------------DSLIST - Data Sets Matching DB2P.
• The naming standard for the data sets is shown in 3.8.5, “Image Copy Names” on page 24, using the subsystem identifier followed by IC as the high-level qualifier.
B.3.
B.3.3 Storage Group Two additional storage groups are defined to cater for image copy data sets, SGDBIC and SGDICH. Table 34 on page 197 shows the distribution of volumes across all the storage groups used in all three examples of this appendix. Table 34.
//IMAGCOPY EXEC DSNUPROC,PARM='DB2P,DSN8',COND=(4,LT)
//DSNTRACE DD SYSOUT=*
//SYSCOPY1 DD DSN=DB2PIC.PHD99060.T130000.DSN8S61P.A001,
//            UNIT=3390,DISP=(,CATLG,DELETE),SPACE=(4000,(20,20))
//SYSCOPY2 DD DSN=DB2PIC.SSD99060.T130000.DSN8S61P.A001,
//            UNIT=3390,DISP=(,CATLG,DELETE),SPACE=(4000,(20,20))
//SYSCOPY3 DD DSN=DB2PIC.PSW99060.T130000.DSN8S61R.A001,
//            UNIT=3390,DISP=(,CATLG,DELETE),SPACE=(4000,(20,20))
//SYSCOPY4 DD DSN=DB2PIC.SSW99060.T130000.DSN8S61R.
DSLIST - Data Sets Matching DB2PIC.* Command ===> Row 1 of 6 Scroll ===> CSR Command - Enter "/" to select action Message Volume ------------------------------------------------------------------------------DB2PIC.PSW99060.T130000.DSN8S61R.A001 RV1CU2 DB2PIC.SSW99060.T130000.DSN8S61R.A001 RV1CU0 DB2PIC.PHD99060.T130000.DSN8S61P.A001 RV3CU2 DB2PIC.SSD99060.T130000.DSN8S61P.A001 RV3CU0 DB2PIC.PSM99060.T130000.DSN8S61S.A001 RV1CU2 DB2PIC.SSM99060.T130000.DSN8S61S.
Appendix C. DB2 PM Accounting Trace Report 1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 1-1 ACCOUNTING TRACE - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:43:47.00 SUBSYSTEM: NB22 ACTUAL FROM: 02/12/99 15:43:45.28 DB2 VERSION: V5 ---- IDENTIFICATION -------------------------------------------------------------------------------------------------------------ACCT TSTAMP: 02/12/99 15:43:45.28 PLANNAME: POCDRIVE BEGIN TIME : 02/12/99 15:06:04.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 1-2 ACCOUNTING TRACE - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:43:47.00 SUBSYSTEM: NB22 ACTUAL FROM: 02/12/99 15:43:45.28 DB2 VERSION: V5 ---- IDENTIFICATION -------------------------------------------------------------------------------------------------------------ACCT TSTAMP: 02/12/99 15:43:45.28 PLANNAME: POCDRIVE BEGIN TIME : 02/12/99 15:06:04.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 1-3 ACCOUNTING TRACE - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:43:47.00 SUBSYSTEM: NB22 ACTUAL FROM: 02/12/99 15:43:45.28 DB2 VERSION: V5 ---- IDENTIFICATION -------------------------------------------------------------------------------------------------------------ACCT TSTAMP: 02/12/99 15:43:45.28 PLANNAME: POCDRIVE BEGIN TIME : 02/12/99 15:06:04.
Appendix D. DB2 PM Statistics Report 1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-1 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-2 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-3 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-4 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-5 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-6 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-7 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-8 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-9 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-10 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-11 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-12 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-13 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-14 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-15 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-16 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-17 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
1 LOCATION: USIBMT6BOAPLX DB2 PERFORMANCE MONITOR (V5) GROUP: BOAG PAGE: 2-18 STATISTICS REPORT - LONG REQUESTED FROM: 02/12/99 15:06:03.00 MEMBER: NB22 TO: 02/12/99 15:44:05.00 SUBSYSTEM: NB22 INTERVAL FROM: 02/12/99 15:09:49.64 DB2 VERSION: V5 SCOPE: MEMBER TO: 02/12/99 15:43:45.80 ---- HIGHLIGHTS ---------------------------------------------------------------------------------------------------INTERVAL START : 02/12/99 15:09:49.64 SAMPLING START: 02/12/99 15:09:49.64 TOTAL THREADS : 0.
Appendix E. Disk Storage Server Reports RMF EXTRACT REPORTS C H A N N E L P A T H A C T I V I T Y OS/390 SYSTEM ID QP02 START 02/12/1999-15.05.39 INTERVAL 000.38.01 REL. 02.06.00 RPT VERSION 2.6.0 END CYCLE 1.000 SECONDS IODF = 29 NO CREATION INFORMATION AVAILABLE CHANNEL PATH ID TYPE UTILIZATION(%) SHR PARTITION TOTAL ACT: POR CHANNEL PATH ID TYPE 02/12/1999-15.43.
I/O Q U E U I N G A C T I V I T Y OS/390 SYSTEM ID QP02 START 02/12/1999-15.05.39 INTERVAL 000.38.01 REL. 02.06.00 RPT VERSION 2.6.0 END CYCLE 1.000 SECONDS TOTAL SAMPLES = LCU 0046 2281 IOP ACTIVITY RATE 00 111.094 0.00 01 17.684 0.00 DELAY % ALL CONTENTION Q CH PATH RATE LNGTH 0.000 0.00 CONTROL UNITS BUSY 0.00 2B00 2B01 0047 0.000 0.00 0.00 2B40 2B41 0048 0.000 0.00 0.00 2B80 2B81 0049 0.000 0.00 0.00 2BC0 2BC1 004A 0.000 0.00 0.
004C 0.000 0.00 0.00 2C80 2C81 004D 0.000 0.00 0.00 2CC0 2CC1 0A 1.801 0.05 0.29 8C 1.832 0.02 0.33 95 1.811 0.12 0.10 C3 1.811 0.05 0.31 15 1.826 0.17 0.12 8F 1.822 0.12 0.26 C1 1.819 0.07 0.17 D2 1.816 0.05 0.38 0A 1.464 0.00 0.27 8C 1.452 0.06 0.36 95 1.463 0.00 0.24 C3 1.459 0.03 0.15 15 1.465 0.00 0.12 8F 1.452 0.09 0.36 C1 1.456 0.00 0.24 D2 1.459 0.09 0.30 C2 0.082 0.00 0.
RVA 1 C A C H E 1 S U B S Y S T E M A C T I V I T Y PAGE SUBSYSTEM OS/390 SYSTEM ID QP02 START 02/12/1999-15.05.39 REL. 02.06.00 RPT VERSION 2.6.0 END 3990-03 CU-ID 2B00 CDATE SSID 0088 1 INTERVAL 000.38.01 02/12/1999-15.43.41 02/12/1999 CTIME 15.05.40 CINT 00.37.
* 1 C A C H E S U B S Y S T E M A C T I V I T Y PAGE OS/390 SYSTEM ID QP02 REL. 02.06.00 0SUBSYSTEM 3990-03 START 02/12/1999-15.05.39 RPT VERSION 2.6.0 CU-ID SSID 0089 2B40 END CDATE 1 INTERVAL 000.38.01 02/12/1999-15.43.41 02/12/1999 CTIME 15.05.40 CINT 00.37.
1 C A C H E S U B S Y S T E M A C T I V I T Y PAGE OS/390 SYSTEM ID QP02 REL. 02.06.00 0SUBSYSTEM 3990-03 START 02/12/1999-15.05.39 RPT VERSION 2.6.0 CU-ID 2B80 SSID 008A END CDATE 1 INTERVAL 000.38.01 02/12/1999-15.43.41 02/12/1999 CTIME 15.05.41 CINT 00.37.
1 C A C H E S U B S Y S T E M A C T I V I T Y PAGE OS/390 SYSTEM ID QP02 REL. 02.06.00 0SUBSYSTEM 3990-03 CU-ID START 02/12/1999-15.05.39 RPT VERSION 2.6.0 2BC0 SSID 008B END CDATE 1 INTERVAL 000.38.01 02/12/1999-15.43.41 02/12/1999 CTIME 15.05.41 CINT 00.37.
RVA 2 1 C A C H E S U B S Y S T E M A C T I V I T Y PAGE OS/390 SYSTEM ID QP02 REL. 02.06.00 0SUBSYSTEM 3990-03 START 02/12/1999-15.05.39 RPT VERSION 2.6.0 CU-ID 2C00 SSID 2007 END CDATE 1 INTERVAL 000.38.01 02/12/1999-15.43.41 02/12/1999 CTIME 15.05.40 CINT 00.37.
1 C A C H E S U B S Y S T E M A C T I V I T Y PAGE OS/390 SYSTEM ID QP02 REL. 02.06.00 0SUBSYSTEM 3990-03 START 02/12/1999-15.05.39 RPT VERSION 2.6.0 CU-ID 2C40 SSID 2008 END CDATE 1 INTERVAL 000.38.01 02/12/1999-15.43.41 02/12/1999 CTIME 15.05.41 CINT 00.37.
1 C A C H E S U B S Y S T E M A C T I V I T Y PAGE OS/390 SYSTEM ID QP02 REL. 02.06.00 0SUBSYSTEM 3990-03 START 02/12/1999-15.05.39 RPT VERSION 2.6.0 CU-ID 2CA0 SSID 2009 END CDATE 1 INTERVAL 000.38.01 02/12/1999-15.43.41 02/12/1999 CTIME 15.05.40 CINT 00.37.
1 C A C H E S U B S Y S T E M A C T I V I T Y PAGE OS/390 SYSTEM ID QP02 REL. 02.06.00 0SUBSYSTEM 3990-03 CU-ID START 02/12/1999-15.05.39 RPT VERSION 2.6.0 2CC0 SSID 200A END CDATE 1 INTERVAL 000.38.01 02/12/1999-15.43.41 02/12/1999 CTIME 15.05.41 CINT 00.37.
SYSTEM 1 C A C H E S U B S Y S T E M A C T I V I T Y PAGE OS/390 SYSTEM ID QP02 REL. 02.06.00 0SUBSYSTEM 3990-03 START 02/12/1999-15.05.39 RPT VERSION 2.6.0 CU-ID 71C0 SSID 603C END CDATE 1 INTERVAL 000.38.01 02/12/1999-15.43.41 02/12/1999 CTIME 15.05.41 CINT 00.37.
D I R E C T 1 A C C E S S D E V I C E A C T I V I T Y OS/390 SYSTEM ID QP02 START 02/12/1999-15.05.39 INTERVAL 000.38.01 REL. 02.06.00 RPT VERSION 2.6.0 END CYCLE 1.000 SECONDS 02/12/1999-15.43.
IXFP EXTRACT REPORTS DEVICE PERFORMANCE OVERALL SUMMARY XSA/REPORTER SUBSYSTEM 20395 SUBSYSTEM % DEV I/O KBYTES ACCESS -I/O SERVICE TIME (MS)- % DEV SUMMARY AVAIL PER SEC PER SEC DENSITY TOTAL DISC CONNECT UTIL DISC CONN ----- ------- ------- ------- ----- ------ ------- ----- ----- ----- PROD PARTITION 100.0 45.7 5481.4 0.1 30.5 7.3 23.3 0.5 0.1 0.4 OVERALL TOTALS 100.0 45.7 5481.4 0.1 30.5 7.3 23.3 0.5 0.1 0.
CACHE EFFECTIVENESS OVERALL SUMMARY XSA/REPORTER SUBSYSTEM NAME: 20395 SUBSYSTEM SUMMARY (CACHE SIZE: 1024 MB I/O NVS SIZE: 18FEB1999 8 MB) READ WRITE READ READ PER SEC PER SEC PER SEC RATIO HIT % HIT % WRITE HIT % I/O CONSTR DFW PER SEC STAGE HITS/ STGE REF CT LOW OCCUP ------ ------- ------- ------- ----- ----- ----- ----- ------ ------- ----- ------ PROD PARTITION 53.8 0.0 45.7 61329 99.3 100.0 99.3 0.0 113.2 0.5 73.7 OVERALL TOTALS 53.8 0.0 45.
SPACE UTILIZATION SUMMARY REPORT XSA/REPORTER SUBSYSTEM 20395 17FEB1999 16:47:05 (NUMBER OF FUNCTIONAL DEVICES: 256) SELECTED DEVICES SUMMARY FUNCTIONAL CAPACITY (MB) SELECTED TOTAL FUNCTIONAL DEVICES CAPACITY (MB) -------- ------------- % FUNCT CAPACITY NOT STORED STORED --------- --------- NOT -------- DISK ARRAY --------- PHYSICAL CAP USED (MB) -UNIQUE TOTAL COMP STORED STORED SHARED ------ ------ -------- -------- -------- RATIO ----- PRODUCTION PARTITION: 256 726532.
Appendix F. Special Notices
This publication is intended to help managers and professionals understand and evaluate the applicability of DFSMS/MVS functions to DB2 for OS/390. It also provides disk architecture background information in order to make management and control of DB2 data sets easier. The information in this publication is not intended as the specification of any programming interfaces that are provided by DB2 for OS/390 Version 5.
Any performance data contained in this document was determined in a controlled environment, and therefore, the results that may be obtained in other operating environments may vary significantly. Users of this document should verify the applicable data for their specific environment. This document contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples contain the names of individuals, companies, brands, and products.
Appendix G. Related Publications The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this redbook. G.1 International Technical Support Organization Publications For information on ordering these ITSO publications see “How to Get ITSO Redbooks” on page 243.
G.
How to Get ITSO Redbooks This section explains how both customers and IBM employees can find out about ITSO redbooks, redpieces, and CD-ROMs. A form for ordering books and CD-ROMs by fax or e-mail is also provided. • Redbooks Web Site http://www.redbooks.ibm.com/ Search for, view, download, or order hardcopy/CD-ROM redbooks from the redbooks Web site. Also read redpieces and download additional materials (code samples or diskette/CD-ROM images) from this redbooks site.
List of Abbreviations ABARS aggregate backup and recovery support ECSA extended common storage area APAR authorized program analysis report EDM environment descriptor management ARM automatic restart manager ERP enterprise resource planning BLOB binary large objects ESA BSDS boot strap data set Enterprise Systems Architecture CCW channel command word FBA fixed block architecture CEC central electronics complex GBP group buffer pool CF coupling facility GB CFRM coupling facility
KSDS key-sequenced data set LCU logical control unit LDS linear data set LLQ low level qualifier LPAR logically partitioned mode LRSN log record sequence number LRU least recently used MB megabyte (1,048,576 bytes) MVS Multiple Virtual Storage NVS non-volatile storage ODBC Open Data Base Connectivity OPT optimizer OS/390 Operating System/390 PDF Program Development Facility (component of ISPF) PDS partitioned data set PPRC peer-to-peer remote copy QMF Query Management Facili
Index Numerics 3380 85 3390 85 A ABARS 30 abbreviations 245 accounting report 141 ACDS 35 acronyms 245 ACS 36 ACS routines 80 active control data set 35 active log 18 sizing 18 active log data sets default names 23 active log size 117 ADDVOL 30 ARCHIVE 19 ARCHIVE command 69, 193 archive installation panel 69 archive log 19 archive log and SMS 69 archive log data set 116, 117 archive log data sets default names 24 archive logs 191 archive to disk or tape 116 array 85 asynchronous copy 99 asynchronous write
DUMP 94 DFSMShsm 28 DFSMSopt 31 DFSMSrmm 31 DFW 91 directory 15 disk architecture 85 DMTH 106 DSN1COPY 22 DSNDB01 15 DSNDB01.
OPT 31 OUTPUT BUFFER 114 P partitioned data sets 22 path 93 peer to peer remote copy 93 peer-to-peer remote copy 96 policy 80 PPRC 93, 96 prefetch quantity 106 Q queuing time 128 quickwrite 92 R RAID 85 read operations 104 read record caching 91 recall 30 RECOVER 21, 95 recovery data sets 11, 17 recovery strategy 18 remote copy 95 REORG 79 response time 128 RMF 119 cache reports 125 CHAN report 125 device report 125 IOQ report 125 report analysis 149 report consolidation 132 storage server reports 223 RM
timestamp 100 track size 85 two-phase commit 112 U user defined table space 14 V VDWQT 107, 108 vertical deferred write threshold 107 virtual concurrent copy 95 virtual volumes 88 VTOC 26 W work database 15 work table spaces page sizes 15 write frequency 108 write operations 107 write quantity 108 write record caching 92 WRITE THRESHOLD 114 X XRC 99 250 Storage Management with DB2 for OS/390
Printed in the U.S.A.