IBM Informix OnLine Database Server Administrator’s Guide Version 5.x December 2001 Part No.
Note: Before using this information and the product it supports, read the information in the appendix entitled “Notices.” This document contains proprietary information of IBM. It is provided under a license agreement and is protected by copyright law. The information contained in this publication does not include any product warranties, and any statements provided in this manual should not be interpreted as such.
Table of Contents Table of Contents Introduction In This Introduction . . . . . . . . . . . . . . About This Manual . . . . . . . . . . . . . . . Organization of This Manual . . . . . . . . . . Demonstration Database . . . . . . . . . . . IBM Informix OnLine . . . . . . . . . . . . . . Product Overview. . . . . . . . . . . . . . IBM Informix OnLine and Other IBM Informix Products Documentation Conventions . . . . . . . . . . . Typographical Conventions . . . . . . . . . . Icon Conventions . . . . . . . .
Chapter 1 Installation and Initial Configuration In This Chapter . . . . . . . . . . . . . . . Define Your Starting Point . . . . . . . . . . . Upgrade an Earlier Version of OnLine . . . . . Overview of OnLine Installation Steps . . . . . Overview of OnLine Initial Configuration Tasks . . . OnLine Configuration Files . . . . . . . . . Contents of tbconfig.std . . . . . . . . . . Set Up Your Initial Configuration . . . . . . . . Root Dbspace Configuration Guidelines . . . . .
Chapter 2 System Architecture In This Chapter . . . . . . . . . . . . . . . Initialization . . . . . . . . . . . . . . . . Initialization Commands . . . . . . . . . . What Happens During Shared-Memory Initialization What Happens During Disk-Space Initialization . . UNIX Kernel and Semaphore-Allocation Parameters . OnLine User Processes . . . . . . . . . . . . . How User Processes Attach to Shared Memory . . . User Processes and Critical Sections. . . . . . . OnLine User Process Status and States . . . . .
Structure of a Blobspace or Dbspace Mirror Chunk OnLine Limits for Chunks . . . . . . . . Reserved Pages . . . . . . . . . . . . Chunk Free-List Page . . . . . . . . . . tblspace Tblspace. . . . . . . . . . . . Database Tblspace . . . . . . . . . . . Create a Database: What Happens on Disk . . . OnLine Limits for Databases . . . . . . . . Create a Table: What Happens on Disk . . . . Create a Temporary Table: What Happens on Disk Structure of an Extent . . . . . . . . . . Next Extent Allocation . . . . . . .
Chapter 3 Operating OnLine In This Chapter . . . . . . . . . . . . . Changing Modes . . . . . . . . . . . . . Types of OnLine Modes . . . . . . . . . From Offline to Quiescent . . . . . . . . From Offline to Online . . . . . . . . . From Quiescent to Online . . . . . . . . Gracefully from Online to Quiescent . . . . Immediately from Online to Quiescent. . . . From Any Mode Immediately to Offline . . . Logical Log Administration . . . . . . . . . Examine Your Logical Log Configuration . . .
If the Logical Log Files Fill During an Archive. If an Archive Terminates Prematurely . . . Monitor OnLine Activity . . . . . . . . . Monitor Archive History . . . . . . . . Monitor Blobs in a Blobspace . . . . . . Monitor Blobs in a Dbspace . . . . . . . Monitor Buffers . . . . . . . . . . . Monitor Buffer-Pool Activity. . . . . . . Monitor Checkpoints . . . . . . . . . Monitor Chunks . . . . . . . . . . . Monitor Configuration Information . . . . Monitor Databases . . . . . . . . . . Monitor Dbspaces . .
Change the Maximum Number of Dbspaces .  .  .  .  .  .  .  3-111
Change the Maximum Number of Locks  .  .  .  .  .  .  .  .  3-112
Change the Maximum Number of Tblspaces .  .  .  .  .  .  .  3-113
Change the Maximum Number of Users  .  .  .  .  .  .  .  .  3-114
Change the Number of Page Cleaners  .  .  .  .  .  .  .  .  3-115
Things to Avoid  .  .  .  .  .  .  .  .  .  .  .  .  .  .  3-116
Chapter 4
Monitor and Archive Physical Log Pages . . Write a Trailer Page . . . . . . . . . . Update the Reserved Pages . . . . . . . Fast Recovery . . . . . . . . . . . . . How Does OnLine Initiate Fast Recovery? . . Fast Recovery and Logging . . . . . . . Step 1: Checkpoint Condition . . . . . . Step 2: Find Checkpoint Record in Logical Log Step 3: Roll Forward Log Records . . . . . Step 4: Roll Back Incomplete Transactions . . Data Restore: When Should You Do It? . . . . Steps That Occur During a Data Restore . .
Chapter 5 How to Improve Performance In This Chapter . . . . . . . . . . . . . Disk Layout . . . . . . . . . . . . . . Optimize Blobspace Blobpage Size . . . . . . tbcheck -pB and tbcheck -pe Utility Commands. Blobpage Average Fullness . . . . . . . . Apply Effective Criteria . . . . . . . . . Eliminate User-Created Resource Bottlenecks . . . When Is Tuning Needed? . . . . . . . . . . % Cached Fields . . . . . . . . . . . ovtbls, ovlock, ovuser, and ovbuff Fields . . . Bufsize Pages/IO Fields . . . . . . .
Chapter 6   DB-Monitor Screens
    In This Chapter
    Main Menu
    Status Menu
    Parameters Menu
    Dbspaces Menu
    Mode Menu
    Force-Ckpt Option
    Archive Menu
    Logical-Logs Menu
Chapter 7
tbload: Create a Database or Table . . . . . . Syntax . . . . . . . . . . . . . . Specify Tape Parameters . . . . . . . . Create Options . . . . . . . . . . . tblog: Display Logical Log Contents . . . . . Syntax . . . . . . . . . . . . . . Log-Record Read Filters . . . . . . . . Log-Record Display Filters . . . . . . . Interpreting tblog Output . . . . . . . tbmode: Mode and Shared-Memory Changes . . Syntax . . . . . . . . . . . . . . Change OnLine Mode . . . . . . . . Force a Checkpoint . . . . . . . . .
tbunload: Transfer Binary Data in Page Units .  .  .  .  .  7-107
    Syntax .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  .  7-108
    Specify Tape Parameters  .  .  .  .  .  .  .  .  .  .  7-109
Chapter 8   OnLine Message Log
    In This Chapter
    OnLine Message Log
    Alphabetized Messages
Chapter 9   Product Environment
    In This Chapter
    The OnLine Environment
    OnLine Features
Introduction Introduction In This Introduction . . . . . . . . . . . . . . . . . . 3 About This Manual . . . . . . Organization of This Manual . Demonstration Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 4 5 IBM Informix OnLine . . . . . . . . . . . . . . . Product Overview . . . . . . . . . . . . . . . IBM Informix OnLine and Other IBM Informix Products . . . . . . . . . . 7 7 7 Documentation Conventions . . . . . . .
In This Introduction This introduction provides an overview of the information in this manual and describes the conventions it uses. About This Manual The IBM Informix OnLine Administrator’s Guide describes the powerful Informix online transaction processing (OLTP) database server. You do not need database management experience or familiarity with relational database concepts to use this manual. However, a knowledge of SQL (Structured Query Language) would be useful.
Organization of This Manual Organization of This Manual This manual includes the following chapters: 4 ■ Chapter 1, “Installation and Initial Configuration,” provides a stepby-step explanation of OnLine database server installation and setup. The chapter includes a worksheet to assist you in planning your system and in documenting your configuration.
A Notices appendix contains information about IBM products, services, and features. An index directs you to areas of particular interest.

Demonstration Database

Your IBM Informix OnLine software includes a demonstration database called stores5 that contains information about a fictitious wholesale sporting-goods distributor. The sample command files that make up a demonstration application are included as well.
Demonstration Database You can run the dbaccessdemo5 script again whenever you want a fresh demonstration database to work with. The script prompts you when the creation of the database is complete and asks if you would like to copy the sample command files to the current directory. Answer “N” to the prompt if you have made changes to the sample files and do not want them replaced with the original versions. Answer “Y” to the prompt if you want to copy over the sample command files.
IBM Informix OnLine IBM Informix OnLine Product Overview The IBM Informix OnLine database server combines high-availability, online transaction-processing (OLTP) performance with multimedia capabilities. By managing its own shared-memory resources and disk I/O, OnLine delivers process concurrency while maintaining transaction isolation. Table data can span multiple disks, freeing administrators from constraints imposed by data storage limitations.
Documentation Conventions Documentation Conventions This section describes the conventions that this manual uses. These conventions make it easier to gather information from this and other volumes in the documentation set.
Icon Conventions Icon Conventions Comment icons identify three types of information, as the following table describes. This information always appears in italics.
Command-Line Conventions Elements That Can Appear on the Path You might encounter one or more of the following elements on a path. Element Description command This required element is usually the product name or other short word used to invoke the product or call the compiler or preprocessor script for a compiled Informix product. It may appear alone or precede one or more options. You must spell a command exactly as shown and must use lowercase letters.
Command-Line Conventions Element Description A branch below the main line indicates an optional path. - The vertical line is a terminator and indicates that the statement is complete. Commands enclosed in a pair of arrows indicate that this is a subdiagram. A gate ( 1 ) in an option indicates that you can only use that option once, even though it is within a larger loop.
Sample Code Conventions To construct a similar command, start at the top left with the command tbunload. Then follow the diagram to the right, including the elements that you want. Figure 1 illustrates the following steps. 1. Type tbunload. 2. Optionally, change the parameters of the tape device that is to receive the data. If you wish to do this, turn to page 7-109 for further syntax information. Otherwise, tbunload uses the current archive tape device. 3.
Sample Code Conventions To use this SQL code for a specific product, you must apply the syntax rules for that product. For example, if you are using DB-Access or IBM Informix SQL, you must delineate the statements with semicolons. If you are using an embedded language, you must use EXEC SQL and a semicolon (or other appropriate delimiters) at the start and end of each statement, respectively.
Additional Documentation Additional Documentation For additional information, refer to the following types of documentation: ■ Printed manuals ■ Error message files ■ Documentation notes, release notes, and machine notes ■ Related reading Printed Manuals You might want to refer to a number of related Informix product documents that complement this manual.
Error Message Files ■ The DB-Access User’s Manual describes how to invoke the utility to access, modify, and retrieve information from OnLine relational databases. ■ When errors occur, you can look them up by number and find their cause and solution in the IBM Informix Error Messages manual. If you prefer, you can look up the error messages in the online message file described in “Error Message Files” on page 15.
Error Message Files The messages numbered -1 to -100 can be platform-dependent. If the message text for a message in this range does not apply to your platform, check the operating system documentation for the precise meaning of the message number.

The finderr Script

Use the finderr script to display one or more error messages, and their corrective actions, on the terminal screen. The finderr script has the following syntax:

   finderr msg_num ...

msg_num is the number of the error message to display. You can specify more than one message number on the same command line.
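For example, to look up error -327 and save its text and corrective action in a file for later reference, a command along these lines would work (the message number and output filename are illustrative):

   finderr 327 > err327.out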
Error Message Files The rofferr Script Use the rofferr script to format one error message or a range of error messages for printing. By default, rofferr displays output on the screen. You need to send the output to nroff to interpret the formatting commands and then to a printer, or to a file where the nroff output is stored until you are ready to print. You can then print the file. For information on using nroff and on printing files, see your UNIX documentation.
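A minimal sketch of that pipeline follows; the message number is illustrative, and lpr stands in for whatever print command your system uses:

   rofferr 327 | nroff | lpr

To keep a copy instead of printing immediately, redirect the nroff output to a file and print the file later.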
Documentation Notes, Release Notes, Machine Notes Using the PostScript Error Message Files Use the IBM Informix Messages and Corrections product to print the error messages and their corrective actions on a PostScript printer. The PostScript error messages are distributed in a number of files of the format errmsg1.ps, errmsg2.ps, and so on. These files are located in the $INFORMIXDIR/msg directory. Each file contains approximately 50 printed pages of error messages.
Related Reading If you have had no prior experience with database management, you may want to refer to an introductory text like C. J. Date’s An Introduction to Database Systems: Seventh Edition (Addison-Wesley Publishing, 1999). If you want more technical information on database management, consider consulting the following texts: ■ Database Systems: A Practical Approach to Design, Implementation, and Management, 3rd Edition, by C. Begg and T.
IBM Welcomes Your Comments IBM Welcomes Your Comments To help us with future versions of our manuals, we want to know about any corrections or clarifications that you would find useful. Include the following information: ■ The name and version of your manual ■ Any comments that you have about the manual ■ Your name, address, and phone number Send electronic mail to us at the following address: doc@informix.com This address is reserved for reporting errors and omissions in our documentation.
Chapter Installation and Initial Configuration In This Chapter . . . . . . . . . . . . . . . . 1-5 Define Your Starting Point . . . . . . . . . . . . Upgrade an Earlier Version of OnLine . . . . . . . Compare Your Current Configuration to OnLine 5.x Create a Level-0 Archive . . . . . . . . . . Load the Software and Execute the install Script . . Initialize Shared Memory . . . . . . . . . . Run tbcheck . . . . . . . . . . . . . . Create a New Level-0 Archive . . . . . . . .
Message File Guidelines . . . . . . . . . . . MSGPATH . . . . . . . . . . . . . . CONSOLE . . . . . . . . . . . . . . Archive Tape Device Guidelines . . . . . . . . TAPEDEV . . . . . . . . . . . . . . TAPEBLK . . . . . . . . . . . . . . TAPESIZE . . . . . . . . . . . . . . Logical Log Tape Device Guidelines. . . . . . . LTAPEDEV . . . . . . . . . . . . . . LTAPEBLK . . . . . . . . . . . . . . LTAPESIZE . . . . . . . . . . . . . . Identification Parameter Guidelines . . . . . . . SERVERNUM . . . . . . . . .
OnLine Disk Space Allocation . . . . . . Allocate Raw Disk Space or Cooked Files?. How Much Disk Space Do You Need? . . How Should You Apportion Disk Space? . How to Allocate Disk Space . . . . . . Evaluate UNIX Kernel Parameters . . . Configuration Checklist . . . . . . . . . Enter Your Configuration and Initialize OnLine Setting Shared Memory Parameters . . . Initialize OnLine . . . . . . . . . . Set Your Environment Variables. . . . . SQLEXEC . . . . . . . . . . . TBCONFIG . . . . . . . . . .
In This Chapter This chapter describes how to get started administering your IBM Informix OnLine environment. You need the following items to install your OnLine database server: ■ UNIX Products Installation Guide ■ IBM Informix OnLine electronic media ■ IBM Informix OnLine serial number keycard The specific steps that you should follow as part of your installation depend on your environment. To find the starting point that is right for you, refer to page 1-6.
Define Your Starting Point Define Your Starting Point This section directs you to the starting point for your specific installation and configuration. If you are installing an IBM Informix OnLine 5.x product for the first time, follow all the steps illustrated on page 1-10. After you complete the software load and installation, turn to page 1-10 of this manual for instructions on completing your initial configuration for OnLine 5.x.
Upgrade an Earlier Version of OnLine If you are installing OnLine 5.x and plan to run it on the same host machine where you are running an earlier version of OnLine, you must load the OnLine 5.x software into a different $INFORMIXDIR directory than the one that contains your earlier server software. To install OnLine 5.x, follow all the steps illustrated on page 1-10. Be sure that you define the OnLine 5.x INFORMIXDIR and PATH environment variables correctly for user informix.
Upgrade an Earlier Version of OnLine Create a Level-0 Archive Ask all users to exit their applications before you begin the upgrade procedure. (Perform a graceful shutdown by executing tbmode -s from the command line.) Create a level-0 archive of your current OnLine system. Keep a copy of your current configuration file for reference. Load the Software and Execute the install Script Take OnLine to offline mode. (Execute tbmode -ky.) Verify that you are logged in as user root.
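Taken together, the steps up to this point can be summarized in the following command-line sketch. The archive is requested here with tbtape -s, although you can use the DB-Monitor Archive menu instead, and the location and name of the saved configuration file are only examples:

   tbmode -s        # gracefully take OnLine to quiescent mode after users exit
   tbtape -s        # create the level-0 archive of your current OnLine system
   cp $INFORMIXDIR/etc/tbconfig $INFORMIXDIR/etc/tbconfig.old
   tbmode -ky       # take OnLine to offline mode before loading the new software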
Upgrade an Earlier Version of OnLine Run tbcheck Verify the integrity of the upgraded 5.x databases before you continue. To do this, execute the following commands from the system prompt: tbcheck -ci dbname Checks and verifies the integrity of the database indexes. tbcheck -cD dbname Checks and verifies the integrity of database data. tbcheck -cc dbname Checks and verifies the integrity of the OnLine 5.x system catalog tables. tbcheck -cr Checks and verifies the integrity of the OnLine 5.
Overview of OnLine Installation Steps

Installing OnLine 5.x involves three major steps, which are summarized here. For detailed information, see the UNIX Products Installation Guide.

Important: For each step, you must be logged in as root.

1. Create UNIX environment:
   ■ Create user informix.
   ■ Set INFORMIXDIR.
   ■ Set PATH.
   ■ Change your directory to $INFORMIXDIR.

2. Load OnLine 5.x software:
   ■ Copy Informix files into the Informix installation directory.

3. Execute the install script.
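A hedged sketch of these steps, run as root from a Bourne shell, appears below; the installation directory /usr/informix is a placeholder, and the exact media-loading and install-script commands are given in the UNIX Products Installation Guide:

   # 1. Create the UNIX environment (user and group informix must already exist)
   INFORMIXDIR=/usr/informix; export INFORMIXDIR
   PATH=$INFORMIXDIR/bin:$PATH; export PATH
   cd $INFORMIXDIR

   # 2. Load the OnLine 5.x software from the distribution media into $INFORMIXDIR
   #    (the cpio or tar command to use is media- and platform-specific)

   # 3. Execute the install script from $INFORMIXDIR, as described in the
   #    UNIX Products Installation Guide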
OnLine Configuration Files OnLine Configuration Files You are not limited to just one configuration file. You can create and manage multiple OnLine configuration files, and each file can contain a unique set of configuration parameter values. This section explains how multiple configuration files are created and managed. As part of OnLine 5.x installation, the product software is loaded into the Informix product directory, specified as the environment variable INFORMIXDIR.
OnLine Configuration Files If you modify the configuration file while OnLine is online, you might want to compare the current configuration values with the new values stored in the file specified as TBCONFIG. To obtain a copy of your current, effective OnLine configuration through DB-Monitor, choose the Status menu, Configuration option. You are asked to supply a filename for the output file. If you supply a filename (without a directory location), a copy of the current configuration is stored in filename.
Contents of tbconfig.std Contents of tbconfig.std The tbconfig.std file contains all OnLine configuration parameters. The paragraphs that follow name each parameter and provide a brief definition. The parameters are listed in alphabetic order, not in the order in which they appear in tbconfig.std. Figure 1-1 displays a copy of the tbconfig.std file. (If you are unfamiliar with the terms used by Informix to describe units of disk space, refer to the IBM Informix Guide to SQL: Tutorial.
Contents of tbconfig.std DBSPACES specifies the maximum number of dbspaces supported by this OnLine configuration. Like CHUNKS, the number of dbspaces can be system-dependent. Refer to page 1-35 for information about setting the value of this parameter. DEADLOCK_TIMEOUT specifies the maximum number of seconds that an OnLine user process can wait to acquire a lock in a client/server environment. The parameter is used only if this OnLine configuration uses the distributed capabilities of IBM Informix STAR.
Contents of tbconfig.std LRUS specifies the number of LRU (least-recently used) queues. The LRU queues manage the shared-memory buffer pool. Refer to page 1-37 for information about setting the value of this parameter. LRU_MAX_DIRTY specifies the percentage of modified pages in the LRU queues that, when reached, flags the queue to be cleaned. Refer to page 1-37 for information about setting the value of this parameter.
Contents of tbconfig.std MIRROR specifies whether OnLine blobspace and dbspace mirroring is enabled. Refer to page 1-24 for information about setting the value of this parameter. MIRROROFFSET specifies in kilobytes the offset into the disk partition or into the device to reach the beginning of the mirror chunk. Refer to page 1-25 for information about setting the value of this parameter. MIRRORPATH specifies the pathname of the mirror chunk where the mirrored root dbspace resides.
Contents of tbconfig.std ROOTPATH specifies the pathname of the chunk where the root dbspace resides. Informix recommends that this value be a link that points to the root dbspace chunk device. Refer to page 1-22 for information about setting the value of this parameter. ROOTSIZE specifies the size of the root dbspace in kilobytes. Refer to page 1-23 for information about setting the value of this parameter.
Contents of tbconfig.std USERS specifies the maximum number of OnLine user processes that can attach to shared memory concurrently. A user process is broadly defined as a process that is, or will be, attached to shared memory. User processes include database server processes, daemon processes, and utility processes. (In this manual, no reference is made to application tool processes.) Refer to page 1-33 for information about setting the value of this parameter. Figure 1-1 The Contents of tbconfig.
Contents of tbconfig.std TAPEDEV TAPEBLK TAPESIZE /dev/tapedev # Archive tape device pathname 16 # Archive tape block size (Kbytes) 10240 # Max.
Set Up Your Initial Configuration STAGEBLOB (multiprocessor-machine default is 300) # INFORMIX-OnLine/Optical staging area # System Page Size BUFFSIZE machine-specific # Page size (do not change!) Set Up Your Initial Configuration This chapter uses a workbook approach to help you define your initial configuration. The configuration worksheet lists each parameter needed for initialization. The default value for the parameter is displayed in bold type next to the parameter name.
Root Dbspace Configuration Guidelines Root Dbspace Configuration Guidelines The root dbspace, like all dbspaces, consists of at least one chunk. You can add other chunks to the root dbspace after OnLine is initialized. All disk configuration parameters refer to the first (initial) chunk of the root dbspace. The root dbspace contains information that is critical for OnLine operation. Specific control and tracking information needed for OnLine operation is stored in the root dbspace reserved pages.
Root Dbspace Configuration Guidelines ROOTPATH The ROOTPATH parameter specifies the pathname of the initial chunk of the root dbspace. ROOTPATH is stored in the OnLine reserved pages as a chunk name. Informix recommends that, instead of entering the actual device name for the initial chunk, you define ROOTPATH as a pathname that is a link to the root dbspace initial chunk. The link enables you to quickly replace the disk where the chunk is located.
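For instance, if the initial chunk of the root dbspace will occupy the character-special device /dev/rsd1f, you might create a link and name the link in the configuration file (both names here are illustrative):

   ln /dev/rsd1f /dev/online_root

Set ROOTPATH to /dev/online_root. If the disk that holds the chunk must later be replaced, you can recreate the link to point at the new device without editing the configuration file.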
Root Dbspace Configuration Guidelines ROOTSIZE ROOTSIZE specifies the size of the initial chunk of the root dbspace, expressed in kilobytes. The size that you select depends on your immediate plans for OnLine. The ROOTSIZE default value is 20,000 KB (about 19.5 MB). If you are configuring OnLine for a learning environment, plan to make the root dbspace 20 to 60 MB. If you plan to add test databases to this system, choose the larger size. Enter this value in two places on the configuration worksheet.
Mirroring Configuration Guidelines Mirroring Configuration Guidelines Mirroring is not required, but it is strongly recommended. Refer to page 4-14 for a complete discussion of mirroring and mirroring administration. Mirroring is a strategy that pairs primary chunks of one defined blobspace or dbspace with equal-sized mirror chunks. Writes to the primary chunk are duplicated asynchronously on the mirror chunk.
Physical Log Configuration Guidelines MIRROROFFSET The MIRROROFFSET parameter specifies the offset into the disk partition or into the device to reach the chunk that serves as the mirror for the root dbspace initial chunk. Leave this worksheet field blank until you allocate OnLine disk space. Physical Log Configuration Guidelines This section describes how to assign values to the physical log parameters.
Logical Log Configuration Guidelines Refer to page 1-42 for guidelines on deciding how much disk space should be dedicated to OnLine dbspaces. Refer to page 1-26 for information about sizing the logical log files. The default value of PHYSSIZE is 1,000 KB. The default values included in the tbconfig.std file adhere to both of the guidelines just described. The size of the physical log is 1,000 KB. The default value of LOGSIZE is 500 KB. The default value of LOGFILES is 6.
Logical Log Configuration Guidelines LOGFILES LOGFILES specifies the number of logical log files managed by OnLine. The minimum number required for OnLine operation is three log files. The maximum number is determined by the number of logical log descriptors that can fit on a page. For a 2-KB page, the maximum number is about 60 log files. The default value of LOGFILES is 6.
Message File Guidelines Message File Guidelines The console receives messages that deserve your immediate attention–for example, alerting you that your logical logs are full. The OnLine message log contains a more complete set of messages that record OnLine activity but rarely require immediate action. MSGPATH MSGPATH specifies the UNIX pathname of the OnLine message file. OnLine writes status messages and diagnostic messages to this message file during operation.
Logical Log Tape Device Guidelines The default value of TAPEDEV is /dev/tapedev. You can set the value of TAPEDEV to /dev/null if you are testing or prototyping an application, or if you are using OnLine in a learning environment. During OnLine operation, some tasks require that you create an archive. If you set TAPEDEV to /dev/null, you can create an archive instantly, without overhead. However, you are not archiving your OnLine data. You cannot perform a restore.
Logical Log Tape Device Guidelines OnLine supports a logical log backup option called Continuous-Logging, which backs up each logical log as soon as it becomes full. The Continuous-Logging option is recommended for all OnLine configurations, but it requires a dedicated tape device while the option is active. Informix strongly recommends that your OnLine environment include two tape devices, one for continuous backup of the logical logs and one for archiving.
Identification Parameter Guidelines LTAPESIZE LTAPESIZE specifies the maximum amount of data that should be written to each tape, expressed in kilobytes. If the pathname of the tape device is /dev/null, the tape size is ignored. The default value of LTAPESIZE is 10,240KB. Identification Parameter Guidelines This section describes how to assign values to the OnLine identification parameters.
Shared-Memory Parameter Guidelines DBSERVERNAME DBSERVERNAME specifies a unique name associated with this specific occur- rence of OnLine. The identifier distinguishes this OnLine server from all other database servers in the $INFORMIXDIR directory and the network, if one exists. The value of DBSERVERNAME cannot exceed 18 characters. Valid characters are restricted to digits, letters, and the underscore. The default value of DBSERVERNAME is ONLINE.
Shared-Memory Parameter Guidelines USERS USERS specifies the maximum number of user processes that can concurrently attach to shared memory. The value can have a large effect on the size of shared memory because it determines the minimum values for four other shared-memory parameters (LOCKS, TBLSPACES, BUFFERS, and TRANSACTIONS). To arrive at a value for USERS, specify the highest likely value for the number of user processes active at any one time plus the value of CLEANERS, plus 4.
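For example, if you expect no more than 30 user processes to be active at any one time and CLEANERS is set to 1, set USERS to at least 30 + 1 + 4 = 35.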
Shared-Memory Parameter Guidelines BUFFERS BUFFERS specifies the maximum number of shared-memory buffers available to OnLine user processes during processing. The minimum value for BUFFERS is 4 per user process. The maximum value is 32,000. The default value is 200. As a general guideline, buffer space should range from 20 to 25 percent of physical memory.
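As a rough illustration, on a machine with 32 MB of physical memory and a 2-KB page size, 20 to 25 percent of memory corresponds to roughly 3,300 to 4,100 buffers; treat such a figure only as a starting point for tuning.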
Shared-Memory Parameter Guidelines CHUNKS CHUNKS specifies the maximum number of chunks supported by OnLine. The value specified should be as close as possible to the maximum number permitted, which is operating-system dependent. The maximum number of chunks is the lesser of two values: ■ The number of chunk entries (pathnames) that can fit on an OnLine page ■ The maximum number of open files per process allowed by the operating system, minus 6 The default value for CHUNKS is 8.
Shared-Memory Parameter Guidelines The recommended value for LOGBUFF is 16 pages, or 16 multiplied by BUFFSIZE. (BUFFSIZE is the machine-specific page size and the last parameter listed in tbconfig.std.) The default value is 32KB. LOGSMAX LOGSMAX specifies the maximum number of logical log files that OnLine supports. OnLine requires at least three logical log files for operation. In general, you can set the value of LOGSMAX equal to the value of LOGFILES.
Shared-Memory Parameter Guidelines CKPTINTVL CKPTINTVL specifies the maximum interval, expressed in seconds, that can elapse before OnLine checks to determine if a checkpoint is needed. The default value for CKPTINTVL is 300 seconds, or five minutes. DB-Monitor does not prompt for this value during initialization. You can tune this parameter to affect performance. Refer to page 5-20. LRUS LRUS specifies the number of LRU (least recently used) queues in the shared-memory buffer pool.
Shared-Memory Parameter Guidelines DB-Monitor does not prompt for this value during initialization. You can tune this parameter to affect performance. Refer to page 5-19. LTXHWM LTXHWM specifies the “long transaction high-water mark.” In the logical log, LTXHWM is the percentage of available logical log space that, when filled, triggers the tbinit daemon to check for long transactions.
Machine- and Product-Specific Parameter Guidelines Machine- and Product-Specific Parameter Guidelines Because your machine or product environment might not support these parameters, they do not appear on the configuration worksheet. DB-Monitor does not prompt for any of these values during initialization. DYNSHMSZ The DYNSHMSZ parameter affects your OnLine configuration only if you plan to use OnLine with the IBM Informix TP/XA library product. The default value for DYNSHMSZ is 0.
OnLine Disk Space Allocation SPINCNT The SPINCNT parameter affects only multiprocessor machines that use spinand-retry latch acquisition. SPINCNT specifies the number of times that a process attempts to acquire a latch in shared memory before it enters a wait mode. The default value of SPINCNT on a uniprocessor machine is 0. The default value of SPINCNT on a multiprocessor machine is 300. Refer to page 5-24 for information about tuning the value of this parameter.
OnLine Disk Space Allocation Cooked files are unreliable because I/O on a cooked file is managed by the UNIX operating system. A write to a cooked file can result in data being written to a memory buffer in the UNIX file manager instead of being written immediately to disk. As a consequence, UNIX cannot guarantee that the committed data has actually reached the disk. This is the problem. OnLine recovery depends on the guarantee that data written to disk is actually on disk.
OnLine Disk Space Allocation Calculate Root dbspace Size Analyze your application to estimate the amount of disk space that OnLine might require for implicit temporary tables, which are tables OnLine creates as part of processing. Implicit temporary tables are stored in the root dbspace and deleted when the database server process ends.
OnLine Disk Space Allocation Project Total Space Requirements The amount of additional disk space needed for OnLine data storage depends on your production environment. Every application environment is different. The following list suggests some of the steps you might take to help you calculate the amount of disk space to allocate (beyond the root dbspace): 1. Decide how many databases and tables you need to store. Calculate the amount of space required for each one. 2.
OnLine Disk Space Allocation After you decide how you plan to define the chunks, decide on the number of chunks you plan to create and a size for each.
OnLine Disk Space Allocation Figure 1-2 illustrates four guidelines for planning the physical layout of your OnLine data. Each guideline is described in detail in the text that follows.

Figure 1-2  Guidelines for planning your disk layout

Consider mirroring: locate a primary chunk and its mirror chunk on different disks.

Isolate high-use tables: locate high-use tables on their own device at the center of the disk, or spread them across multiple devices.
OnLine Disk Space Allocation Critical tables and databases should be mirrored. The root dbspace should be mirrored. Mirroring is specified by chunk. Locate the primary and the mirrored chunk on different disks. Ideally, different controllers should handle the disks. You can place a table with high I/O activity on a disk device that is dedicated to its use. When disk drives have different performance levels, you can put the tables with the most use on the fastest drives.
OnLine Disk Space Allocation The logs contain critical information and should be located in mirrored dbspaces, despite the fact that their high level of activity makes it costly (in terms of performance) to do so. Compare the total amount of dbspace disk space (exclude blobspaces) that you intend to allocate to OnLine with the amount of space dedicated to OnLine logging (physical log size plus total space allocated for the logical log files).
OnLine Disk Space Allocation Raw File Space Consult your UNIX system manuals for instructions on how to create and install a raw device. In general, you can either repartition your disks or unmount an existing file system. In either case, take proper precautions to back up any files before you unmount the device. Change the group and owner of the character-special devices to informix. (The filename of the character-special device usually begins with the letter r; for example, /dev/rsd1f.)
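As an illustration, for a character-special device named /dev/rsd1f, the ownership and permission changes might look like this when run as root; the mode shown gives read and write access to user and group informix only, so confirm the exact requirements for your platform:

   chown informix /dev/rsd1f
   chgrp informix /dev/rsd1f
   chmod 660 /dev/rsd1f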
OnLine Disk Space Allocation Evaluate UNIX Kernel Parameters Your OnLine product arrives with a machine-specific file, $INFORMIXDIR/release/ONLINE_5.x, that contains recommended values for UNIX kernel parameters. Compare the values in this file with your current UNIX configuration. If the recommended values for OnLine differ significantly from your current environment, consider modifying your UNIX kernel settings.
Configuration Checklist Configuration Checklist Figure 1-3 is a checklist to help you verify that you have correctly completed the initialization preparation tasks. ✔ Create user and group informix The only member of the group informix is user informix. Perform OnLine administrative actions as user informix. ✔ Create raw devices Do not mount raw devices. Raw devices should not include file systems. ✔ Set device permissions Each raw device should display informix as its group and owner.
Enter Your Configuration and Initialize OnLine Enter Your Configuration and Initialize OnLine When you configure OnLine for the first time, you specify two sets of parameter values through DB-Monitor. The first set of parameters is the disk parameters; the second set is the shared memory parameters. Each set of values is specified on its own DB-Monitor screen. After you complete both screens, OnLine prompts you to begin the initialization.
Enter Your Configuration and Initialize OnLine The disk parameters initialization screen appears. Some fields contain default values. The following screen representation replaces the default values in each field with the name of the OnLine configuration parameter associated with that field: INITIALIZATION: Make desired changes and press ESC to record changes. Press Interrupt to abort changes. Press F2 or CTRL-F for field-level help. Page Size [BUFFSIZE DISK PARAMETERS ] Kbytes Mirror [MIRROR Sys.
Setting Shared Memory Parameters Setting Shared Memory Parameters Like the disk parameters screen, the shared-memory parameters screen appears with some default values in some fields. In the following representation of the shared-memory parameters screen, each default value has been replaced with the name of the OnLine configuration parameter associated with that field: SHARED MEMORY: Make desired changes and press ESC to record changes. Press Interrupt to abort changes.
Initialize OnLine Initialize OnLine OnLine prompts you for confirmation that you wish to initialize immediately using these current values: Do you really want to continue? (y/n) WARNING: The root dbspace will be initialized. All previous data will be destroyed. When you initialize OnLine starting from the DB-Monitor Initialize option (disk parameters screen), you are initializing both disk space and shared memory.
Set Your Environment Variables SQLEXEC The value of SQLEXEC directs the front-end processes to a specific database server within the $INFORMIXDIR directory. The default value for SQLEXEC is $INFORMIXDIR/lib/sqlturbo, the OnLine database server. If OnLine is the only database server in your $INFORMIXDIR directory, you do not need to define SQLEXEC. If you worked with an IBM Informix SE database server on this machine in the past, you might have an SQLEXEC environment variable already set for use with SE.
Modify UNIX Startup and Shutdown Scripts If your environment contains a single OnLine database server, you do not need to explicitly set TBCONFIG. If the tbinit process cannot find the file specified by TBCONFIG, it creates a copy of tbconfig.std, places the copy in the file specified by TBCONFIG, and uses the values in that file for initialization.
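For example, to run OnLine from a second configuration file, you might copy the standard file and point TBCONFIG at the copy. The filename tbconfig.acct is illustrative, and the files are assumed to reside in $INFORMIXDIR/etc:

   cp $INFORMIXDIR/etc/tbconfig.std $INFORMIXDIR/etc/tbconfig.acct
   TBCONFIG=tbconfig.acct; export TBCONFIG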
Modify UNIX Startup and Shutdown Scripts Startup Add UNIX and OnLine utility commands to the UNIX startup script that perform the following steps: 1. Set the INFORMIXDIR environment variable to the full pathname of the directory in which OnLine is installed. (If multiple versions of OnLine are running on your UNIX system, you must reset INFORMIXDIR for each OnLine system that you initialize.) 2. Set the PATH environment variable to include the $INFORMIXDIR/bin directory. 3.
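Assembled into a script, a minimal startup fragment might read as follows; the installation directory is illustrative, and the last line assumes the server is brought online with tbinit:

   INFORMIXDIR=/usr/informix
   PATH=$INFORMIXDIR/bin:$PATH
   export INFORMIXDIR PATH
   tbinit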
Modify UNIX Startup and Shutdown Scripts Shutdown Add UNIX and OnLine utility commands to the UNIX shutdown script that perform the following steps: 1. Set the INFORMIXDIR environment variable to the full pathname of the directory in which OnLine is installed. (If multiple versions of OnLine are running on your UNIX system, you must reset INFORMIXDIR for each OnLine system that you shut down.) 2. Set the PATH environment variable to include the $INFORMIXDIR/bin directory. 3.
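A matching shutdown fragment, again assuming an installation in /usr/informix, might read:

   INFORMIXDIR=/usr/informix
   PATH=$INFORMIXDIR/bin:$PATH
   export INFORMIXDIR PATH
   tbmode -ky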
Create Blobspaces and Dbspaces Create Blobspaces and Dbspaces After OnLine is initialized, you can create blobspaces and dbspaces as desired. If you plan to use blobspaces, Informix recommends that you create one or two blobspaces before you create a dbspace. The reason for this is the way that OnLine archives data. During an archive, OnLine temporarily blocks blobpage allocation in a chunk until the chunk is archived.
OnLine Error Message Format OnLine Error Message Format The OnLine error message format is straightforward: -nn Explanatory statement of error condition OnLine messages begin with a number that identifies the category of error. Explanatory text follows. Use the error message number as a key into the Informix Error Messages manual. The manual lists all Informix error messages (not just OnLine messages), along with information about the cause of the error and corrective actions available to you.
Chapter System Architecture In This Chapter . . . . . . . . . . . . . 2 . . . . . . . 2-7 Initialization . . . . . . . . . . . . . . . . . . . Initialization Commands . . . . . . . . . . . . . Shared Memory Commands . . . . . . . . . . . Disk Space Commands . . . . . . . . . . . . . What Happens During Shared-Memory Initialization . . . Shared-Memory Initialization Procedure. . . . . . . Step 1: Calculate Configuration Values . . . . . . . Step 2: Create Shared Memory . . . . . . . .
OnLine User Processes. . . . . . . . . . . . . . How User Processes Attach to Shared Memory . . . . Step 1: Obtain SERVERNUM . . . . . . . . . Step 2: Calculate Shared-Memory Key Value . . . Steps 3 and 4: Request Shared-Memory Segment and Attach to SHMBASE . . . . . . . . . Step 5: Attach Additional Segments . . . . . . User Processes and Critical Sections . . . . . . . . OnLine User Process Status and States . . . . . . . OnLine Database Server Process . . . . . . . . .
LRU Queues and Buffer Pool Management . . . LRU_MAX_DIRTY . . . . . . . . . . LRU_MIN_DIRTY . . . . . . . . . . How a User Process Acquires a Buffer . . . . Step 1: Identify the Data . . . . . . . . Step 2: Determine Lock-Access Level . . . Step 3: Locate the Page in Memory . . . . Step 4: Read the Page in from Disk . . . . Steps 5-7: Lock Buffer, Release Lock, and Wake Waiting Processes . . . . . . . Physical Log Buffer . . . . . . . . . . . Double Buffering . . . . . . . . . . Causes of Flushing . . . . .
Dbspace and Blobspace . . . . . . . . . Database . . . . . . . . . . . . . . Tblspace . . . . . . . . . . . . . . Extent . . . . . . . . . . . . . . . Physical Log . . . . . . . . . . . . Logical Log . . . . . . . . . . . . . Structure of the Root Dbspace . . . . . . . . Structure of a Regular Dbspace . . . . . . . Structure of an Additional Dbspace Chunk . . . Structure of a Blobspace . . . . . . . . . . Structure of a Blobspace or Dbspace Mirror Chunk OnLine Limits for Chunks . . . . . . . . .
Structure of an Extent . . . . . . Extent Size. . . . . . . . . Page Types . . . . . . . . Next Extent Allocation . . . . . . Structure of a Dbspace Page . . . . Page Header . . . . . . . . Timestamp . . . . . . . . Slot Table . . . . . . . . . Data Row Format and Rowid . . . Data Pages and Data Row Storage . . Single-Page Storage . . . . . Multipage Storage . . . . . . Storage of Modified Rows . . . Page Compression . . . . . . Structure of an Index Page . . . . The Root Node Page . . . . .
In This Chapter In this guide, system architecture is interpreted broadly to include OnLine database server processes as well as OnLine shared memory and disk data structures. This chapter provides optional reference material about OnLine 5.x operation that is intended to deepen your understanding. Topics in other chapters contain cross-references to specific topics in this chapter if additional information could prove helpful for understanding.
Initialization Commands Shared-memory initialization establishes the contents of shared memory (OnLine internal tables and buffers) according to the parameter values contained in the configuration file. The tbinit process reads the configuration file and detects and implements any changes in the size or location of any OnLine shared-memory structure since the last initialization. A record of the changes is written to the OnLine message log as well.
Initialization Commands As tbinit executes, it reads the configuration file named by the environment variable TBCONFIG. Refer to page 1-11 for further information about OnLine configuration files and TBCONFIG.
What Happens During Shared-Memory Initialization Disk Space Commands You can direct OnLine to initialize disk space (and automatically initialize shared memory) in any one of three ways: ■ tbinit -i (UNIX command line) ■ tbinit -i -s (UNIX command line) ■ Parameters menu, Initialize option (DB-Monitor) When you initialize disk space, all existing data on the disk you are initializing is destroyed. If you use only the -i option, OnLine is left in online mode after initialization.
What Happens During Shared-Memory Initialization Shared-Memory Initialization Procedure 1. The tbinit process calculates configuration values. 2. The tbinit daemon creates OnLine shared memory. 3. The tbinit daemon attaches to shared memory. 4. The tbinit daemon initializes shared-memory structures. 5. The tbinit daemon wakes parent tbinit process. 6. The tbinit daemon initiates fast recovery. 7. The tbinit daemon initiates the first checkpoint. 8.
What Happens During Shared-Memory Initialization The tbinit process uses the configuration values to calculate the required size of OnLine shared memory. Step 2: Create Shared Memory After tbinit finishes computing the configuration values, it forks a child process, which becomes the tbinit daemon. From this point on, the child (daemon) process performs the initialization tasks. The parent process sleeps until the child wakes it.
What Happens During Shared-Memory Initialization During shared-memory initialization, disk structures and disk layout are not affected. Essential address information (such as the locations of the logical and physical logs) is read from disk. These addresses are used to update pointers in shared memory.
What Happens During Disk-Space Initialization Step 9: Document Configuration Changes The tbinit daemon compares the values stored in the configuration file with the values formerly stored in the root dbspace reserved page PAGE_CONFIG. Where differences exist, tbinit notes both values (old and new) in a message written to the OnLine message log. This action is for documentation only; tbinit has already written the new values from the configuration file into the root dbspace reserved page.
What Happens During Disk-Space Initialization Important: Do not initialize disk space without careful consideration. As part of the procedure, initialization destroys all data on the portion of the disk where the new root dbspace (and its mirror) will be located. Here are the main tasks that are completed during disk-space initialization: 1. The tbinit process calculates configuration values. 2. The tbinit daemon creates OnLine shared memory. 3. The tbinit daemon attaches to shared memory. 4.
What Happens During Disk-Space Initialization After tbinit finishes computing the configuration values, it forks a child process, which becomes the tbinit daemon. From this point on, the child (daemon) process performs the initialization tasks. The parent process sleeps until the child wakes it. Step 2: Create OnLine Shared Memory The tbinit daemon creates shared memory by acquiring the shared-memory space from UNIX.
What Happens During Disk-Space Initialization Next, tbinit reserves space in the initial chunk of the root dbspace for the physical and logical logs. As part of the same step, tbinit updates the pointers in shared memory with the new disk addresses. The daemon repeats this process for each disk structure. In each pass, tbinit reads configuration values from its private data space, creates a structure and then updates any associated structures in shared memory with required address information.
UNIX Kernel and Semaphore-Allocation Parameters Step 8: Change to Quiescent Mode After the checkpoint completes, the tbinit daemon takes OnLine to quiescent mode. All configuration information in the tbinit private data space is written to the second reserved page in the initial chunk of the root dbspace, PAGE_CONFIG. If tbinit was executed with the -s option, OnLine remains in quiescent mode. Otherwise, tbinit takes OnLine to online mode.
UNIX Kernel and Semaphore-Allocation Parameters ■ The shared-memory lower-boundary address ■ The maximum number of attached shared-memory segments per process ■ The maximum amount of shared memory system-wide The remaining three parameters are semaphore-allocation parameters: ■ The maximum number of semaphore identifiers ■ The maximum number of semaphores ■ The maximum number of semaphores per identifier When tbinit creates the required shared-memory segments, it attempts to acquire as large a
UNIX Kernel and Semaphore-Allocation Parameters If this relationship does not hold, either one of two undesirable situations could develop: ■ If the total amount of shared memory is less than the total addressable size, you are able to address more shared memory for the operating system than that which is available. ■ If the total amount of shared memory is greater than the total addressable size, you can never address some amount of shared memory that is available.
UNIX Kernel and Semaphore-Allocation Parameters Figure 2-1 Shared memory must be attached to a user process so that it looks like contiguous memory. Operating system memory User process Shared-memory segment SHMBASE Gap Shared-memory segment Next segment of shared memory should attach here. Next segment attaches here when lower boundary is too large.
OnLine User Processes OnLine User Processes An OnLine user process is any process that eventually needs to attach to OnLine shared memory. User processes include three types: ■ OnLine database server processes named $INFORMIXDIR/lib/sqlturbo ■ OnLine daemon processes, such as tbinit, tbundo, and tbpgcl ■ OnLine utility processes, such as tbmode, tbmonitor, and tbtape An application development tool, such as an IBM Informix 4GL program, is not a user process by this definition.
OnLine User Processes Figure 2-2 illustrates the virtual address space of a user process after the user process has attached to shared memory. For a detailed discussion of how a user process attaches to shared memory, refer to page 2-24. Figure 2-2 An example of a user process virtual address space. Depending on the machine, shared memory is allocated “above” or “below” the value of SHMBASE.
How User Processes Attach to Shared Memory How User Processes Attach to Shared Memory OnLine requires a technique to ensure that all OnLine user processes find and gain access to the same shared-memory segments. If two or more OnLine database servers exist on a single UNIX host machine, the shared-memory segments associated with each OnLine server exist at different locations in physical memory. The shared-memory segments for each OnLine server must be uniquely identifiable to the database user processes.
How User Processes Attach to Shared Memory Step 2: Calculate Shared-Memory Key Value When a user process is ready to attach to shared memory, it calculates a value that serves as the shared-memory key to identify to UNIX the first shared-memory segment. To ensure that all user processes within a single OnLine system attach to the same shared-memory segments, the key value must be shared among all OnLine user processes.
How User Processes Attach to Shared Memory The first shared-memory segment is attached to the virtual address space of each process at the same virtual address defined as SHMBASE. SHMBASE identifies the specific virtual address where the database server processes attach the first, or base, shared-memory segment. (Refer to Figure 2-2 on page 2-23 for an illustration of the virtual address space of a database server process.
How User Processes Attach to Shared Memory The user process lays out the first shared-memory segment, which includes the shared-memory header. Sixteen bytes into the header, the user process obtains the following data: ■ The total size of shared memory for this OnLine server ■ The size of each shared-memory segment The user process then calculates how much shared memory it has and how much is still required. (Each user process must acquire the total amount of shared memory.
User Processes and Critical Sections User Processes and Critical Sections A critical section is a section of OnLine code that comprises a set of disk modifications that must be performed as a single unit; either all of the modifications must occur or none can occur. OnLine designates critical sections to maintain physical consistency in a way that is analogous to the way that transactions maintain logical consistency.
OnLine User Process Status and States OnLine User Process Status and States The tbstat -u command prints a profile of user process activity. To interpret why a user process might be waiting for a latch or waiting for a checkpoint to complete, refer to the information contained on the pages indicated. OnLine user process status flags occupy the first flag position of each entry in the tbstat -u display, users section. These flags describe a process that is waiting.
OnLine Database Server Process The user process type flags occupy the fourth position of each entry in the tbstat -u display, users section. These flags provide the following information: C The user process is dead and waiting for proper clean-up. (Refer to page 2-33.) D The user process is either tbinit or tbundo. (Refer to page 2-33.) F The user process is a page-cleaner daemon, tbpgcl. (Refer to page 2-33.) M The user process is a DB-Monitor process, tbmonitor.
Orphaned Database Server Processes The application development tool process and the database server process communicate with each other through unnamed UNIX pipes. Each process reads from one pipe and writes to the other. This interaction is illustrated in Figure 2-3. Write Application process Read Read Write Database server process Figure 2-3 The application process and the database server process communicate through unnamed UNIX pipes.
Orphaned Database Server Processes Never kill an OnLine database server process with the UNIX kill -9 command. If you execute the kill -9 command and generate a SIGKILL signal while that server process is holding a latch or is in a critical section of a transaction, OnLine aborts with an immediate shutdown to preserve data consistency. Never kill an application tool process with a SIGKILL signal.
OnLine Daemon Processes OnLine Daemon Processes Daemon processes are OnLine user processes that perform system-wide tasks. Unlike database server processes, a daemon process is not associated with a corresponding application development tool process. The OnLine system employs three different types of daemons, which are listed below.
tbundo Daemon tbundo Daemon The tbundo daemon is called by the tbinit daemon to perform cleanup for database server processes that die abnormally. As part of cleanup, tbundo rolls back incomplete transactions and releases shared-memory resources (such as locks and buffers) held by the server process when it died. The tbundo daemon also removes the server process from any buffer wait lists.
tbpgcl Daemon When OnLine is initialized, all page-cleaner daemons are started and placed in idle mode. As master daemon, tbinit directs each tbpgcl to an LRU queue of shared-memory buffers. (Refer to page 2-57 for more information about the LRU queues.) Periodically, tbpgcl wakes and searches through the LRU queues, looking for buffers that need to be flushed. (Refer to page 2-58 for information about the LRU queues and buffer flushing.
Shared Memory and Process Communication Shared Memory and Process Communication Shared memory refers to the use of the same memory segments by more than one OnLine user process, enabling interprocess communication. Figure 2-4 on page 2-37 shows how multiple user processes can communicate by way of shared memory.
Shared Memory and Process Communication User process A virtual address space User process B virtual address space Unallocated space Unallocated space Private data Private data Shared database server code (C program text) Shared database server code (C program text) Figure 2-4 Multiple user processes can communicate by way of shared memory.
Shared Memory and Buffer Locks
A primary benefit of shared memory is the ability of multiple OnLine user processes to share access to disk pages stored in the shared-memory buffer pool. OnLine maintains process isolation while achieving this increased concurrency through a strategy of buffer locking.
Managing Shared-Memory Resources Buffer Exclusive Lock A buffer is in exclusive mode, or has an exclusive lock, if a user process demands exclusive access to the buffer. All other user processes requesting to lock the buffer are placed on the user wait list for this buffer. When the executing process is ready to release the exclusive lock, it wakes the next process in the wait-list queue.
Figure 2-5 Components of shared memory. The shared-memory header contains pointers to all other shared-memory structures. (The figure shows the shared-memory header together with the user table, lock table, dbspace table, transaction table, chunk table, latch table, page-cleaner table, tblspace table, buffer pool, buffer header table, and their associated hash tables.)
Consider what happens when two OnLine server processes attempt to attach to shared memory simultaneously.
Buffer locks ensure process isolation while user processes contend for the same shared-memory resources. (Refer to page 2-38 for further information about buffer locks.) Latches ensure that only one OnLine user process at a time can gain access to any one shared-memory resource. (Refer to page 2-41 for further information about latches.) Timestamps provide a method of coordinating sequential activity. (Refer to page 2-44 for further information about timestamps.)
If an OnLine user process requires a specific latch, how does it determine if the latch is available? The user process has two options:
■ Test for the latch; if unavailable, do not wait (that is, do not block).
■ Test for the latch; if unavailable, wait (that is, block).
Test-and-Set Instructions
Most machines use a single test-and-set instruction as part of the test that each user process performs in its attempt to acquire a shared-memory latch.
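The two options can be sketched in C. This is a minimal illustration only: the type and function names (latch_t, try_latch, get_latch) are assumptions made for the example, not OnLine internals, and OnLine wakes a blocked process through a UNIX semaphore rather than spinning as the second function does.

    /* Illustrative latch acquisition built on a test-and-set primitive.
     * All names are invented for this sketch; initialize the flag with
     * ATOMIC_FLAG_INIT before use. */
    #include <stdatomic.h>

    typedef struct { atomic_flag busy; } latch_t;

    /* Option 1: test for the latch; if unavailable, do not block. */
    int try_latch(latch_t *l)
    {
        return !atomic_flag_test_and_set(&l->busy);   /* 1 = acquired */
    }

    /* Option 2: test for the latch; if unavailable, wait. Shown here as
     * a spin; OnLine instead sleeps on a UNIX semaphore. */
    void get_latch(latch_t *l)
    {
        while (atomic_flag_test_and_set(&l->busy))
            ;                        /* latch held by another process */
    }

    void release_latch(latch_t *l)
    {
        atomic_flag_clear(&l->busy);
    }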
Semaphores
When an OnLine user process attempts to acquire a latch and finds that the latch is unavailable, the user process can block until the latch is available. The mechanism that signals the process to wake when the latch becomes available is a UNIX semaphore. The semaphore mechanism works like this. Every OnLine user process is associated with a semaphore.
OnLine resolves the dilemma by forcing an abort. When OnLine comes back online, fast recovery occurs automatically. Fast recovery returns OnLine to a consistent state through the last completed transaction. (Refer to page 2-32 for instructions on the proper way to kill a database server process. Refer to page 4-39 for further information about fast recovery.)
Managing Shared-Memory Resources The blob timestamp on the disk page changes each time the blob data on the page is overwritten. The blob timestamp stored with the forward pointer changes each time a new blob replaces the old blob. For example, when a blob in a data row is updated, the new blob is stored on disk, and the forward pointer stored with the blob descriptor is revised to point to the new location.
Managing Shared-Memory Resources If a program is using Committed Read isolation, the problem just described cannot occur since the database server does not see a row that has been marked for deletion. However, under Committed Read, no lock is placed on an undeleted row when it is read. BYTE or TEXT data is read in a second step, after the row has been fetched.
OnLine compares the item key with the key value it is searching for. If the values match, the item is located. If not, each item in the linked list is examined in succession until the item is found or the search is ended. Figure 2-6 illustrates a hashing technique that uses an algorithm that looks at the first letter of the key value.
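A minimal C sketch of this first-letter hashing scheme follows; the structure, array size, and function names are illustrative assumptions, not the layout OnLine actually uses.

    #include <string.h>

    struct item {
        char         key[32];     /* for example, a customer last name */
        struct item *next;        /* next item in this bucket's linked list */
    };

    /* One bucket per starting letter; assumes an uppercase first letter. */
    struct item *bucket[26];

    struct item *lookup(const char *key)
    {
        struct item *p = bucket[key[0] - 'A'];   /* hash on the first letter */

        /* Examine each item in the linked list until it matches or the
         * list ends. */
        while (p != NULL && strcmp(p->key, key) != 0)
            p = p->next;
        return p;        /* NULL means the search ended without a match */
    }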
Shared-Memory Internal Tables The header also contains the OnLine “magic number,” which is used to synchronize user processes. Each OnLine release is assigned a magic number. In addition, the same magic number is contained within the user process code. Whenever a user process attempts to attach to shared memory, these magic numbers are compared. If they are not compatible, an error is returned. The magic-number check ensures that the database server processes are compatible.
Shared-Memory Internal Tables ■ User processes waiting for the buffer Each buffer header maintains a list of the user processes that are waiting for the buffer and the lock-access level that each waiting process requires. Each OnLine buffer is represented as one entry in the buffer header table. Each entry in the buffer header table occupies 54 bytes. The number of entries in the buffer header hash table is based on the number of allocated buffers.
■ The number of chunk entries (pathnames) that can fit on an OnLine page
■ The operating-system value of maximum number of open files per process, minus 6
Refer to page 2-93 for instructions on calculating the number of chunk entries per OnLine page.
OnLine Dbspace Table
The dbspace table tracks both dbspaces and blobspaces in the OnLine system.
Shared-Memory Internal Tables The minimum value of DBSPACES is 1, representing the root dbspace. OnLine Latch Table The latch table is not a table in the same sense as other tables, but it functions in a similar manner to track all latches in the OnLine system. Refer to page 2-41 for a detailed discussion of how OnLine uses latches to control and manage modifications to OnLine shared memory. The number of latch entries is equal to the number of shared-memory resources configured for the OnLine system.
Shared-Memory Internal Tables A byte lock is only generated if you are using VARCHAR data types. The byte lock exists solely for rollforward and rollback execution, so you must be working in a database that uses logging. Byte locks only appear in tbstat -k output if you are using row-level locking; otherwise, they are merged with the page lock. The upper limit for the maximum number of locks (specified as LOCKS in the configuration file) is 256,000.
Shared-Memory Internal Tables Each tblspace table entry includes header information about the tblspace, the tblspace name, and pointers to the tblspace tblspace in the root dbspace on disk. (Do not confuse the shared-memory active tblspace table with the tblspace tblspace, which is described on page 2-104.) The number of entries in the tblspace table is equal to the maximum number of open tblspaces permitted in this OnLine system, specified as TBLSPACES in the configuration file.
Shared-Memory Internal Tables OnLine Transaction Table The transaction table tracks all transactions in the OnLine system. The transaction table specifically supports the X/Open environment, in which the concept of a global transaction replaces the traditional UNIX architecture of one application tool process associated with one database server process. In a global transaction, more than one database server process can be enlisted to perform work on behalf of a single application tool process.
Shared-Memory Buffer Pool The last available entry slot in the user table is always reserved for a tbmonitor process, regardless of whether any other tbmonitor processes are currently running. The number of entries in the user table is equal to the maximum number of users permitted on this OnLine system, specified as USERS in the configuration file. If an additional user process attempts to attach to shared memory, an error is returned. Each entry in the user table occupies 102 bytes.
Shared-Memory Buffer Pool Regular Buffers The regular buffers store dbspace pages read from disk. The status of the regular buffers is tracked through the buffer header table. Within shared memory, regular buffers are organized into LRU buffer queues. (Refer to page 2-57 for further information about the LRU queues.) Buffer allocation is managed through the use of latches and lock access information.
OnLine LRU Queues
Each regular buffer is tracked through several linked lists of pointers to the buffer header table. These linked lists are the least-recently used (LRU) queues. When OnLine is initialized, the configuration parameter LRUS specifies the number of LRU queues or lists to create. The minimum number of LRU queues is three. The maximum number of LRU queues is the smaller of two values: (USERS divided by 2) or 8.
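The documented bounds can be restated as a small C helper. This only rephrases the rule above; the function name is an assumption for the example.

    /* Return the documented lower and upper bounds for LRUS. */
    void lrus_bounds(int users, int *min_lrus, int *max_lrus)
    {
        *min_lrus = 3;                 /* minimum number of LRU queues  */
        *max_lrus = users / 2;         /* maximum is USERS / 2 ...      */
        if (*max_lrus > 8)
            *max_lrus = 8;             /* ... but never more than 8     */
    }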
LRU Queues and Buffer Pool Management
Before processing begins, all page buffers are empty. Every page buffer is associated with a buffer header in the buffer table. Every buffer header is represented by an entry in one of the FLRU queues. The buffer headers are evenly distributed among the FLRU queues. (Refer to page 2-57 for more information about the FLRU and MLRU queues.)
LRU_MAX_DIRTY
Periodically, the modified buffers in the MLRU queue are written (flushed) to disk by the page-cleaner daemons. You can specify the point at which cleaning begins. The LRU_MAX_DIRTY configuration parameter limits the number of page buffers that can be appended to the MLRU queues. The default value of LRU_MAX_DIRTY is 60, meaning that page cleaning begins when 60 percent of the total number of buffers are modified.
The following example shows how the value of LRU_MIN_DIRTY is applied to the buffer pool to arrive at the number of page buffers in an MLRU queue that, when reached, can signal a suspension of page cleaning:
BUFFERS specified as 8000
LRUS specified as 8
LRU_MIN_DIRTY specified as 50
The number of buffers in an MLRU queue when cleaning can be suspended is equal to (Total buffers/Number of LRU queues) multiplied by the percentage specified by LRU_MIN_DIRTY. In this example, that is (8000/8) * 0.50 = 500 buffers.
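The same arithmetic, written as a small C program (the parameter values are the ones used in the example above; the helper name is illustrative):

    #include <stdio.h>

    /* Buffers per MLRU queue at which page cleaning can be suspended. */
    long min_dirty_threshold(long buffers, long lrus, long lru_min_dirty)
    {
        return (buffers / lrus) * lru_min_dirty / 100;
    }

    int main(void)
    {
        /* BUFFERS 8000, LRUS 8, LRU_MIN_DIRTY 50 -> (8000/8) * 0.50 = 500 */
        printf("%ld\n", min_dirty_threshold(8000, 8, 50));
        return 0;
    }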
How a User Process Acquires a Buffer
Step 1: Identify the Data
OnLine user processes request a specific data row by rowid. (Refer to page 2-123 for a definition of rowid.) OnLine translates the logical rowid into a physical page location. The user process searches for this page.
Step 2: Determine Lock-Access Level
Next, OnLine determines the level of lock access required by the requesting user process: share, update, or exclusive. (Refer to page 2-38 for further information about buffer locks.)
Step 4: Read the Page in from Disk
If the requested page must be read from disk, the user process first locates a usable buffer in the FLRU queues. (Refer to page 2-57.) OnLine selects an FLRU queue at random and tries to acquire the latch associated with the queue. If the latch can be acquired, the buffer at the “least-recently used” end of the queue is used.
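A C sketch of this search for a usable buffer follows. The data structure and the fall-through to another queue when a latch is busy are assumptions made for the illustration; the text above states only that a queue is chosen at random and its latch is tried.

    #include <stdlib.h>

    #define NQUEUES 8                 /* illustrative value for LRUS */

    struct flru_queue {
        int   latch_busy;             /* stand-in for the queue's latch */
        void *lru_end;                /* buffer at the least-recently used end */
    };

    struct flru_queue flru[NQUEUES];

    void *acquire_free_buffer(void)
    {
        int start = rand() % NQUEUES;          /* pick a queue at random */

        for (int i = 0; i < NQUEUES; i++) {
            struct flru_queue *q = &flru[(start + i) % NQUEUES];
            if (!q->latch_busy) {              /* latch acquired */
                q->latch_busy = 1;
                return q->lru_end;             /* use the LRU-end buffer */
            }
        }
        return NULL;                           /* caller retries or waits */
    }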
Physical Log Buffer If the user process intends to modify the buffer, it acquires a latch on the buffer and changes the buffer lock-access type to exclusive. A copy of the “before-image” of the page is needed for data consistency. The user process determines if a “before-image” of this page was written to either the physical log buffer or the physical log since the last checkpoint. If not, a copy of the page is written to the physical log buffer.
Refer to page 2-151 for further information about the physical log.
Double Buffering
The physical log buffer is actually two buffers. The size of each buffer is specified (in kilobytes) by the configuration file parameter PHYSBUFF. Double buffering permits user processes to write to the active physical log buffer while the other buffer is being flushed to the physical log on disk. A pointer in shared memory indicates the current buffer to which processes write their “before-images.”
The contents of the physical log buffer must always be flushed to disk before any data buffers. This rule is required for fast recovery. (Refer to page 4-39 for a definition of fast recovery. Refer to page 2-74 for a description of physical log buffer flushing when it is prompted by the need to flush the shared-memory buffer pool. Refer to page 2-72 for a description of the checkpoint procedure.)
First, the page is written to the current physical log buffer, filling it. Next, the user process latches the other physical log buffer. The user process switches the shared-memory current-buffer pointer, making the newly latched buffer the current buffer. The latch on the physical log on disk and the latch on this new, current buffer are released, which permits other user processes to begin writing to the new current buffer.
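The pointer switch can be pictured with a short C sketch; the data structure and names are assumptions for illustration, not OnLine's shared-memory layout.

    struct plog_buffer {
        char *pages;        /* before-images waiting to be flushed */
        int   flushing;     /* nonzero while being written to disk */
    };

    struct plog_buffer plog[2];
    int current = 0;        /* shared-memory pointer to the active buffer */

    /* Called when the active buffer fills: writers move to the other
     * buffer while the full one is flushed to the physical log. */
    int switch_physical_log_buffer(void)
    {
        int full = current;

        current = 1 - current;      /* new current buffer for writers */
        plog[full].flushing = 1;    /* flush this one to disk */
        return full;
    }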
Figure 2-9 The logical log buffer and its relation to the logical log files on disk. (The figure shows the current logical log buffer filling with writes performed by OnLine user processes, a second logical log buffer ready to accept data, and a third buffer being flushed by the page-cleaner daemons to the current logical log file; two additional logical log files are free.)
Buffer Contents
Logical log records are written continuously during OnLine operation. Even if a database is not created with transaction logging, administrative changes (such as adding a dbspace or a chunk) and data definition statements, such as CREATE TABLE or DROP TABLE, are logged. (SELECT statements are never logged.)
Logical Log Buffer Note, however, that this cost of unbuffered logging is minor compared to the benefits of ensured data consistency. (Refer to page 3-34 for further information about the benefits of unbuffered logging compared to buffered logging.) Flushing a Full Buffer When a user process needs to write a record to the logical log buffer, it acquires the latch associated with the logical log buffer and the latch associated with the current logical log on disk.
OnLine Checkpoints
The term checkpoint refers to the point in OnLine operation when the pages on disk are synchronized with the pages in the shared-memory buffer pool. When a checkpoint completes, all physical operations are complete and OnLine is said to be physically consistent. Outlined below are the main events that occur during a checkpoint. Refer to page 2-72 for a detailed description of what happens during a checkpoint.
One reason an administrator might want to initiate a checkpoint would be to force a new checkpoint record in the logical log. Forcing a checkpoint would be a step in freeing a logical log file with status U-L. (Refer to page 3-41.)
Fast Recovery
A checkpoint is critical to the operation of the fast-recovery process. (Refer to page 4-39.) As fast recovery begins, OnLine data is brought to physical consistency as of the last checkpoint by restoring the contents of the physical log.
What Happens During a Checkpoint
The checkpoint procedure is prompted by any one of four conditions (refer to page 2-70). This description begins when the checkpoint-requested flag is set by an OnLine user process after one of the four conditions is found to exist. The checkpoint-requested flag wakes the tbinit daemon if it is not already awake. Once this flag is set, OnLine user processes are prevented from entering portions of code that are considered critical sections.
When the Daemons Flush the Buffer Pool
Buffer flushing is managed by the tbinit master daemon and performed by tbinit or by one or more tbpgcl (page-cleaner) daemons. (If no tbpgcl daemons have been configured for your OnLine server, the tbinit daemon performs page-cleaner functions.)
How OnLine Synchronizes Buffer Flushing
Buffer flushing occurs within the context of OnLine activity. When OnLine is first initialized, all buffers are empty. As processing occurs, data pages are read from disk into the buffers and user processes begin to modify these pages. (Refer to page 2-73 for an explanation of the “before-images first” rule, which is the reason that synchronization is necessary.)
If the timestamp on the page in the buffer pool is equal to or more recent than the timestamp for the physical log buffer flush, the “before-image” of this page conceivably could be contained in the physical log buffer. If this is the case, the physical log buffer must be flushed before the shared-memory buffer pages are flushed.
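The timestamp comparison reads naturally as a small C predicate; the type and names are illustrative assumptions.

    typedef unsigned long tstamp_t;

    /* Nonzero if the physical log buffer must be flushed before this
     * buffer-pool page can be written to disk. */
    int flush_physical_log_first(tstamp_t page_stamp,
                                 tstamp_t last_plog_flush_stamp)
    {
        /* A page modified at or after the last physical log flush may
         * still have its before-image sitting in the log buffer. */
        return page_stamp >= last_plog_flush_stamp;
    }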
Write Types Describe Flushing Activity Data is always written to the primary chunk first. If a mirror chunk is associated with the primary chunk, the write is repeated on the mirror chunk. The write to the mirror chunk is also included in these counts. Refer to page 5-17 for a discussion of tuning OnLine performance by monitoring write-type statistics. Refer to page 7-87 for information about monitoring write types (and buffer flushing) using tbstat -F.
Write Types Describe Flushing Activity If OnLine is configured for more than one page-cleaner daemon process, the LRU queues are divided among the page-cleaner daemons for more efficient flushing. Foreground Write If a database server process searches through the FLRU queues and cannot locate an empty or unmodified buffer, the server process itself marks a page for flushing. If the server process must perform buffer flushing just to acquire a shared-memory buffer, performance can suffer.
In addition, since database server processes must wait for the checkpoint to complete, the page-cleaner daemons are not competing with a large number of processes for CPU time. As a result, the page cleaners can finish their work with less context switching.
Big-Buffer Write
Each OnLine big buffer is the size of eight regular buffers, or eight times BUFFSIZE.
Writing Data to a Blobspace
At the time that the blob data is being transferred, the row itself may not yet exist. During an insert, for example, the blob is transferred before the rest of the row data. After the blob is stored, the data row is created with a 56-byte descriptor that points to the location of the blob. (Refer to page 2-143 for further information on blob storage and the blob descriptor that is stored in the data row.)
The blobspace buffers remain until the OnLine server process that opened the blob is finished. When the application tool process terminates the server process, the buffers are also terminated. Figure 2-10 illustrates the process of creating a blobspace blob.
Figure 2-10 Data is written to a blobspace without passing through shared memory.
Disk Data Structures
OnLine achieves its high performance by managing its own I/O. Storage, search, and retrieval are all managed by OnLine. As OnLine stores data, it creates the structures it needs to search and retrieve the data later. OnLine disk structures also store and track control information needed to manage logging and archiving. OnLine structures must contain all information needed to ensure data consistency, both physical and logical.
OnLine Disk Space Terms and Definitions
OnLine maintains three additional disk space structures to ensure physical and logical consistency of data:
■ A logical log
■ A physical log
■ Reserved pages
Figure 2-11 on page 2-83 illustrates the relationships among these physical and logical units of disk space. A basic definition of each unit is provided in the paragraphs that follow.
Chunk
The chunk is the largest unit of physical disk that is dedicated to OnLine data storage.
Figure 2-11 The logical units of OnLine disk space can be envisioned as overlaying the physical units. (Labels in the figure include the disk unit, a UNIX file, chunks, a dbspace, a blobspace, a database, a tblspace with its initial and additional extents, blob, data, and index pages, and blobpages.)
Several rules of OnLine space allocation are illustrated here. The chunks that compose a dbspace (or blobspace) need not be contiguous.
Blobpage
A blobpage is the unit of disk space allocation used to store BYTE and TEXT data within a blobspace. The size of a blobpage is selected by the user who creates the blobspace; blobpage size can vary from blobspace to blobspace. Blobpage size is specified as a multiple of BUFFSIZE, the page size defined in the configuration file. For more information, refer to page 2-147.
Dbspace
A dbspace contains databases and tables.
Database
A database resides in the dbspace named in the SQL statement CREATE DATABASE. If no dbspace is specified, the database resides in the root dbspace. When a database is located in a dbspace, it means two things:
■ The database system catalog tables are stored in that dbspace.
■ That dbspace is the default location of tables that are not explicitly created in other dbspaces.
Users create a table by executing the SQL statement CREATE TABLE.
Extents for a single table can be located within different chunks of the same dbspace. However, extents must be located wholly in one chunk or another; extents cannot span chunk boundaries. All data within an extent pertains to a single tblspace. The initial extent of a table and all subsequent “next” extents may differ in size. The sizes of the table extents are specified as part of the SQL statement CREATE TABLE.
Structure of the Root Dbspace
The OnLine configuration file contains the location of the initial chunk of the root dbspace. If the root dbspace is mirrored, the mirror chunk location is also specified in the configuration file.
Figure 2-12 Structures within the initial chunk of the root dbspace following disk space initialization. (The figure shows the reserved pages, the chunk free-list page, the tblspace tblspace, the physical log, the logical log files, the database tblspace, and unused pages.)
Structure of a Regular Dbspace
After disk space initialization, you can add new dbspaces. When you create a dbspace, you assign at least one chunk (either raw or cooked disk space) to the dbspace. This is the initial, primary chunk. Figure 2-13 illustrates the structure of the initial chunk of a regular (nonroot) dbspace.
Structure of an Additional Dbspace Chunk
You can create a dbspace that comprises more than one chunk. The initial chunk in a dbspace contains the tblspace tblspace for the dbspace. Additional chunks do not.
Structure of a Blobspace
After disk initialization, you can create blobspaces. When you create a blobspace, you can specify the effective size of the blob-holding pages, called blobpages. The blobpage size for the blobspace is specified when the blobspace is created as a multiple of BUFFSIZE (the page size). All blobpages within a blobspace are the same size, but the size of the blobpage can vary between blobspaces.
Figure 2-15 illustrates the blobspace chunk structure as it appears immediately after the blobspace is created.
Figure 2-15 Structures within a blobspace, after the blobspace is created. Blobpage size must be a multiple of page size. (The chunk contains free-map pages, a bit map that tracks the free-map pages, and unused space initialized as blobpages.)
Structure of a Blobspace or Dbspace Mirror Chunk
Each mirror chunk must be the same size as its primary chunk.
If the primary chunk goes down and the mirror chunk becomes the primary chunk, disk space allocation reports will accurately describe the fullness of the new primary chunk. Figure 2-16 illustrates the mirror chunk structure as it appears after the chunk is created.
Figure 2-16 A generic mirror chunk. The number and type of control pages varies, depending on chunk type.
OnLine Limits for Chunks The size of each chunk entry on the chunk-tracking page is the length of the chunk pathname plus 29 bytes. The available space on the tracking page is BUFFSIZE minus 28 bytes. As you calculate the number of possible chunk entries, remember that each chunk entry might require up to three additional bytes to accommodate 4-byte alignment.
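The arithmetic can be collected into a small C helper (the function name and the sample pathname length are assumptions made for the example; the constants are the ones stated above):

    /* Maximum chunk entries that fit on one chunk-tracking page. */
    int max_chunk_entries(int buffsize, int pathname_len)
    {
        int entry = pathname_len + 29;       /* entry size in bytes */

        if (entry % 4)
            entry += 4 - entry % 4;          /* up to 3 bytes of 4-byte alignment */
        return (buffsize - 28) / entry;      /* usable space / entry size */
    }

    /* Example: a 2048-byte page and 20-character pathnames give
     * (2048 - 28) / 52 = 38 entries. */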
Reserved Pages
The first 12 pages of the initial chunk of the root dbspace are reserved pages. Copies of the first two reserved pages are also found on every other OnLine chunk. Each reserved page contains specific control and tracking information used by tbinit. The functions of the 12 reserved pages are listed below. Each reserved page is described, by field, in the pages that follow in this section.
Reserved Pages The reserved page checkpoint information is stored in a two-page pair, PAGE_1CKPT and PAGE_2CKPT. This information changes for each checkpoint. During each checkpoint, tbinit writes the latest checkpoint information to one of the pages in the pair. During the next checkpoint, tbinit writes the information to the other page in the pair.
Reserved Pages PAGE_PZERO The first reserved page in the root dbspace is PAGE_PZERO. Below are listed the PAGE_PZERO fields and definitions. To obtain a listing of the reserved page, execute the command tbcheck -pr.
Reserved Pages Below are listed the checkpoint and logical log file tracking fields and definitions. To obtain a listing of the reserved page, execute the command tbcheck -pr.
Date/time file filled: Date and time that this log filled
Unique ID: ID number of this logical log file
Physical location: Address of this logical log file on disk
Log size: Number of pages in this logical log file
Number pages used: Number of pages used in this logical log file
PAGE_DBSP
The fifth reserved page in the root dbspace is PAGE_1DBSP. The sixth reserved page, PAGE_2DBSP, is the second page in the pair.
Date/time created: Date and time the dbspace was created
Dbspace name: Dbspace name
Dbspace owner: Dbspace owner
PAGE_PCHUNK
The seventh reserved page in the root dbspace is PAGE_1PCHUNK. The eighth reserved page, PAGE_2PCHUNK, is the second page in the pair. The tbinit daemon uses the primary chunk page to describe each chunk, its pathname, its relation to the dbspace, and its current status. Below are listed the primary chunk fields and definitions.
Chunk flags:
  0x01   Raw device
  0x02   Block device
  0x04   UNIX file
  0x08   Needs sync() to operating system
  0x20   Chunk is offline
  0x40   Chunk is online
  0x80   Chunk is in recovery
  0x100  Chunk is newly mirrored
  0x200  Chunk is part of a blobspace
  0x400  Chunk is being dropped
Chunk name length: Length of the chunk pathname
Chunk path: Operating system path for chunk
PAGE_MCHUNK
The ninth reserved page in the root dbspace is PAGE_1MCHUNK.
Overhead: Free-map page address (blobspace only)
Chunk flags:
  0x01   Raw device
  0x02   Block device
  0x04   UNIX file
  0x08   Needs sync() to operating system
  0x10   Chunk is a mirror chunk
  0x20   Chunk is offline
  0x40   Chunk is online
  0x80   Chunk is in recovery
  0x100  Chunk is newly mirrored
  0x200  Chunk is part of a blobspace
  0x400  Chunk is being dropped
Chunk name length: Length of the chunk pathname
Chunk path: Operating-system path for chunk
PAGE_ARCH
The eleventh reserved page in the root dbspace is PAGE_1ARCH.
Timestamp archive: Timestamp for this archive (decimal)
Logical log unique ID: ID number of the logical log file containing the record of this archive
Logical log position: Physical location of this checkpoint record in the logical log file
Chunk Free-List Page
In every chunk, the page that follows the last reserved page is the first of one or more chunk free-list pages that track available space in the chunk.
If newly freed space is contiguous with existing free space, only the length of the existing entry is changed; otherwise, a new entry is created. Illustrated here is a sample listing from a chunk free-list page:
Chunk Offset    Number of Free Pages
14              28
123             36
208             52
If an additional chunk free-list page is needed to accommodate new entries, a new chunk free-list page is created in one of the free pages in the chunk. The chunk free-list pages are chained in a linked list.
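A C sketch of the merge rule just described follows; the structure and function are illustrative assumptions, not the on-disk format of a free-list entry.

    struct free_entry {
        long offset;      /* starting page offset within the chunk */
        long nfree;       /* number of free pages in this entry */
    };

    /* Extend an existing entry when the freed pages are contiguous with
     * it; otherwise report that a new entry must be created. */
    int merge_free_space(struct free_entry *e, long offset, long npages)
    {
        if (offset == e->offset + e->nfree) {
            e->nfree += npages;     /* only the length changes */
            return 1;
        }
        return 0;                   /* caller creates a new entry */
    }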
tblspace Tblspace
tblspace Tblspace Entries
Each data page in the tblspace tblspace describes one tblspace in the dbspace and is considered one entry. Entries in the tblspace tblspace are added when a new table is created. The first page in every tblspace tblspace is a bit map of the pages in the tblspace tblspace. The second page is the first tblspace entry, and it describes itself. The third page describes the first user-created table in this dbspace.
tblspace Tblspace The hexadecimal representation of partnum is actually a composite of two numbers. The most-significant 8 bits indicate the dbspace number where the tblspace resides. The least-significant 24 bits indicate the logical page number where the tblspace is described. Figure 2-19 on page 2-106 illustrates the elements of a tblspace number. Logical page numbers are relative to the tblspace. That is, the first page in a tblspace is logical page 0.
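The split described above can be illustrated with a few lines of C; the example value passed to the function is invented for the demonstration.

    #include <stdio.h>

    /* Decompose a tblspace number into its two parts. */
    void decode_partnum(unsigned long partnum)
    {
        unsigned long dbspace = (partnum >> 24) & 0xFF;   /* high 8 bits */
        unsigned long logpage = partnum & 0xFFFFFF;       /* low 24 bits */

        printf("partnum 0x%08lx: dbspace %lu, logical page %lu\n",
               partnum, dbspace, logpage);
    }

    int main(void)
    {
        decode_partnum(0x01000002UL);   /* dbspace 1, logical page 2 */
        return 0;
    }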
tblspace Tblspace Bit-Map Page
The first page of the tblspace tblspace, like the first page of any initial extent, is a bit map that describes the page fullness of the following pages. Each page that follows has an entry on the bit-map page. If needed, additional bit-map pages are located throughout the contiguous space allocated for the tblspace, arranged so that each bit map describes only the pages that follow it, until the next bit map or the end of the dbspace.
Each database tblspace entry includes four components:
■ Database name
■ Database owner
■ Date and time the database was created
■ The tblspace number of the systables system catalog table for this database
The database tblspace includes a unique index on the database name to ensure that every database is uniquely named. For any database, the systables table describes each permanent table in the database.
Create a Database: What Happens on Disk
Allocate Disk Space
OnLine searches the linked list of chunk free-list maps in the dbspace, looking for free space in which to create the system catalog tables. For each table in turn, OnLine allocates eight contiguous pages, the size of the initial extent of each system catalog table. The tables are created individually and do not necessarily reside next to each other in the dbspace. They might be located in different chunks.
OnLine Limits for Databases
The size limits that apply to databases are related to their location in a dbspace. You can specify the dbspace where a database resides, but you cannot control the placement of database tables within the dbspace chunks. If you want to be certain that all tables in a database are created on a specific physical device, assign only one chunk to the device and create a dbspace that contains only that chunk. Place your database in that dbspace.
Create a Table: What Happens on Disk
The paragraphs that follow describe the major events that occur when OnLine creates a table and allocates the initial extent of disk space.
Allocate Disk Space
OnLine searches the linked list of chunk free-list maps in the dbspace for contiguous free space equal to the initial extent size for the table. When adequate space is found, the pages are allocated and the associated chunk free-list page is updated. If space for the extent cannot be found, an error is returned.
Figure 2-22 illustrates the pointers within the disk data structures that track and monitor the disk space allocated to a table.
Figure 2-22 A table can be located in a dbspace that is different from the dbspace that contains the database. (The figure shows two dbspaces; the tblspace tblspace in one dbspace points to tblspaces and their initial and next extents in both dbspaces.)
Create a Temporary Table: What Happens on Disk
After the root dbspace exists, users with the necessary SQL privileges can create an explicit temporary table by executing the SQL statement CREATE TABLE with the TEMP keyword. During processing, OnLine user processes may create implicit temporary tables as part of SQL statement processing. If a user creates a temporary table by executing a SELECT ...
Cleanup
Explicit temporary tables are dropped when the OnLine user process exits. Implicit temporary tables may be dropped at any time during processing. If the OnLine database server shuts down without adequate time to clean up temporary tables, the tbinit daemon performs the cleanup as part of the next OnLine initialization. (To request shared-memory initialization without temporary table cleanup, execute tbinit with the -p option.)
Page Types
Within the extent, individual pages contain different types of data. Extent pages can be separated into five categories:
■ Data pages
■ Index pages (root, branch, and leaf pages)
■ Bit-map pages (a 4-bit bit map if the table contains a VARCHAR, BYTE, or TEXT data type or if the length of one row is greater than BUFFSIZE; otherwise, a 2-bit bit map)
■ Blob pages
■ Free pages
Refer to page 2-120 for further information about the structure of a dbspace data page.
Figure 2-23 The initial extent size for this table is 8 pages; the next extent size is 16 pages. Dbspace pages remain free until they are needed for data storage.
Next Extent Allocation
When an extent fills, OnLine attempts to allocate another extent of contiguous disk space. Extent information is tracked as one component of the tblspace tblspace information for a table. The maximum number of extents allocated for any tblspace is application- and machine-dependent since it varies with the amount of space available on the tblspace tblspace entry. (Refer to page 2-104.)
Next Extent Allocation If the disk space allocated for a next extent is physically contiguous with disk space already allocated to the same table, OnLine allocates the disk space but does not consider the new allocation as a separate extent. Instead, OnLine extends the size of the existing contiguous extent. Thereafter, all OnLine disk space reports reflect the allocation as an extension of the existing extent.
Next Extent Allocation Strategies
Extent sizes double every 64 extents. If the dbspace is too full to accommodate the next extent size, OnLine allocates the largest available contiguous block of disk space.
Figure 2-24 When one extent fills, another is automatically allocated. (The figure shows a chunk in which the extent size is doubled at the 65th extent, with an extent belonging to another tblspace lying between the 64th and 65th extents.)
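One way to read the doubling rule is sketched below in C. The reading (doubling at the 64th, 128th, and so on, extent) is an assumption drawn from the statement above, not the exact allocation code, and it ignores the fallback to the largest available contiguous block.

    /* Next-extent size after a given number of extents have been
     * allocated, applying the doubling rule stated above. */
    long next_extent_pages(long next_size, int extents_so_far)
    {
        long pages = next_size;

        for (int n = 64; n <= extents_so_far; n += 64)
            pages *= 2;            /* doubled every 64 extents */
        return pages;
    }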
Structure of a Dbspace Page
The basic unit of OnLine I/O is a page. Page size might vary among machines. The page size for your machine is specified as BUFFSIZE in the configuration file. You cannot modify the page size. Pages in a dbspace are allocated in a group called an extent. Pages can be categorized according to the type of information they contain.
Page Header
The page header includes six components:
■ Page identification number (address of the page on disk)
■ Number of slot table entries used on the page (used to calculate where to locate the next slot table entry)
■ Number of free bytes left on the page
■ Pointer to the contiguous free space on the page that lies between the last data entry and the first slot table entry
■ Timestamp that changes each time the page contents are modified
■ Two index-related pointers
Structure of a Dbspace Page For example, in a data page, the slot table entry would describe the page offset where the data row (or portion of a data row) starts and the length of the row (or portion of a row). (Refer to the discussion of data row storage, which begins on page 2-125, for more details about the function of the slot table.) The number of the slot table entry is stored as part of the data row rowid. The data row rowid is a unique identifier for each data row.
Data Row Format and Rowid
OnLine can store rows that are longer than a page. OnLine also supports the VARCHAR data type, which results in rows of varying length. As a result, rows do not conform to a single format. The following facts about rows must be considered when OnLine stores data rows in a page:
■ Rows within a table are not necessarily the same length.
■ The length of a row may change when it is modified.
■ The length of a row can be greater than a page.
Data Row Format and Rowid The rowid structure permits the length of the row and its location on a page to change without affecting the contents of the rowid. Either change—a change in length caused by an insert or a delete, or a change in location on the page caused by OnLine page compression—is reflected in the entry stored in the slot table. If the page where the data row is stored changes, a forward pointer is left on the home page. In all cases, the rowid remains accurate.
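The relationship between the rowid, the logical page number, and the slot table entry can be sketched in C. The particular layout shown (page number in the upper bytes, slot number in the low byte) is assumed here for illustration; the text above states only that the slot table entry number is stored as part of the rowid.

    /* Compose and decompose an illustrative rowid. */
    unsigned long make_rowid(unsigned long logical_page, unsigned slot)
    {
        return (logical_page << 8) | (slot & 0xFFu);
    }

    unsigned long rowid_page(unsigned long rowid) { return rowid >> 8; }
    unsigned      rowid_slot(unsigned long rowid) { return (unsigned)(rowid & 0xFFu); }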
Eventually, a row may outgrow its original storage location. If this occurs, a forward pointer to the new location of the data row is left at the position defined by the rowid. The forward pointer is itself a rowid that defines the page and the location on the page where the data row is now stored. (Refer to page 2-127 for further information about the role of the forward pointer in row storage.)
Data Pages and Data Row Storage Single-Page Storage To minimize retrieval time, rows are not broken across page boundaries unnecessarily. Rows that are shorter than a page are always stored as whole rows. A page is considered full when the count of free bytes is less than the number of bytes needed to store a row of maximum size. Figure 2-27 illustrates data storage when rows are less than a page.
Multipage Storage
When OnLine receives a row that is longer than a page, the row is stored in as many whole pages as possible. The trailing portion is less than a full page. The page that contains the first byte of the row is the row home page. The number of the home page becomes the logical page number contained in the rowid. Each full page that follows the home page is referred to as a big-remainder page.
Figure 2-29 Rows that are longer than two pages are stored in home pages, big-remainder pages, and remainder pages.
Data Pages and Data Row Storage Storage of Modified Rows When a row is modified, OnLine attempts to return the modified row to its current location. If the row size is unchanged, no changes are needed in the slot table. If the row is smaller than before, OnLine changes the slot table entry for this row to reflect the new row length. If the row no longer fits, OnLine attempts to store the row in another location on the same page.
Figure 2-30 Updated rows that no longer fit in their original pages but are shorter than a full page receive a forward pointer and are stored on a different page. (The figure shows three stages: data storage before the data is modified, the modification that produces a new, longer row, and the row after processing.)
Data Pages and Data Row Storage If the leading segment cannot fit into the current location of the row on the home page, OnLine divides the page into whole-page segments again, this time beginning with the leading end of the row. OnLine stores only a forwarding pointer in the current page location. The rest of the data row is stored in whole-page segments on one or more big-remainder pages. Forward pointers are added to each page. The trailing portion of the row is stored on a remainder page.
(A figure at this point illustrates three stages: data storage before the data is modified, a modification that makes the row longer than a page, and the resulting storage.)
Structure of an Index Page Page Compression Over time, the free space on a page can become fragmented. When OnLine attempts to store data, it first checks row length against the number of free bytes on a page to determine if the row fits. If there is adequate space, OnLine checks to see if the page contains adequate contiguous free space to hold the row (or row portion). If the free space is not contiguous, OnLine calls for page compression.
Structure of an Index Page Figure 2-33 through Figure 2-36 illustrate the progressive creation of a complete index. A complete index is represented by Figure 2-36, which displays a root page, four branch pages, and an unspecified number of leaf pages. The rules governing index creation, page splitting, and page merging are far more complicated than the treatment provided in this manual.
Structure of an Index Page The byte part of the index key value is as long as needed to contain the value of the index key. If the indexed data is a VARCHAR data type, the calculated length of the index key is the maximum length plus 1. The additional byte is a required-length byte, which precedes the VARCHAR data when it is stored in the database. Therefore, the maximum VARCHAR that can be indexed is 254 bytes. As an example of an index key value, consider row 101 in the table stores5:customer.
Figure 2-33 represents this initial phase of index storage. (Refer to page 2-121 for a general explanation of the function of the slot table. Refer to page 2-121 for more information about the page-header and page-ending timestamp pair.)
Figure 2-33 The first index page fills with index key values and slot table entries until there is no room for an additional index entry on the page. (The page contains a page header, index key values, slot table entries, and a timestamp.)
Structure of an Index Page Following is an example of some sample data from the stores5:customer table that is included in Figure 2-34 on page 2-137 and Figure 2-35 on page 2-139.
Figure 2-34 illustrates the root node page and the two leaf node pages that result from a split after the root node fills.
Figure 2-34 After the root node page fills, it splits into two leaf nodes. (The root node page contains a page header, an entry for Albertson pointing to one leaf node address, an entry for O'Brien pointing to the other, and the infinity slot; each leaf node has its own header, the leaf nodes are joined by a horizontal link, and the first holds key values from Albertson to Miller, arranged in byte:rowid entries.)
Structure of an Index Page When the byte value is followed by a rowid and two addresses, the index key entry indicates that more than one data row shares the same byte value. The two addresses are a range of pages. The first address specifies the node page where the specified rowid (the first rowid with this key value) appears. The second address points to the last node page that contains a rowid for this same byte value. One index key entry on the root node page contains only an address.
(A figure at this point shows a root node index page and branch node index pages built from sample data in which the last name Smith is shared by many customer numbers, such as 148, 193, 232, 256, 385, 459, 475, 513, 542, and others, while names such as Tsonga and Watson each have a single customer number.)
Structure of an Index Page Branch Node Pages The first index branch node is created after the root node and at least two leaf nodes exist. Regardless of which page fills first, either the root node or one of the leaf nodes, the result is the creation of a branch node. If the root node becomes full, it splits and creates two branch nodes, each with half of the root node entries. The root node retains only three entries: one pointer to each of the branch nodes and one to the infinity slot.
(A figure at this point shows the general structure of a complete index: a root node whose key: node address entries point to branch nodes, branch nodes whose entries point to leaf nodes, and leaf nodes containing key and rowid entries, with each node page carrying its own header, slot table, and timestamps.)
Structure of a Dbspace Bit-Map Page
Extents contain one or more bit-map pages that track free pages in the extent. Each bit-map entry describes the fullness of one page in the extent.
Structure of a Dbspace Bit-Map Page Two bits are all that are needed to describe page fullness for these limited conditions, as illustrated here.
Blob Storage and the Blob Descriptor Bit Values Description of Page Fullness 0110 Blob page, room for partial-page segments 1010 Blob page, room for small segments 1110 Blob page, no room for even small segments (2 of 2) Blob Storage and the Blob Descriptor Data rows that include blob data do not include the blob data in the row itself. Instead, the data row contains a 56-byte blob descriptor that includes a forward pointer (rowid) to the location where the first segment of blob data is stored.
Structure of a Dbspace Blob Page Blobs are never modified: only inserted or deleted. When blob data is updated, a new blob is created and the data row is updated with the new blob descriptor. The old image of the row contains the descriptor that points to the obsolete blob value. The obsolete blob is deleted after the update is committed. Blobs are automatically deleted if the rows containing their blob descriptors are deleted.
For more information about the role of the blob timestamps in maintaining the consistency of the blob data, refer to page 2-44. Figure 2-37 illustrates blob data storage in a dbspace.
Figure 2-37 Extra information is stored with the blob data. This extra information includes a forward pointer if the blob is larger than a page. (The page shown contains a page header, blob timestamp and forward-pointer information, the first part of the blob data segment, a blob timestamp, and a slot table entry.)
Blobspace Page Types
Every blobspace chunk contains three types of pages:
■ Blobspace free-map page
■ Bit-map page (which tracks the blobspace free-map pages)
■ Blobpage
Blobspace Free-Map Page
The blobspace free-map page locates unused blobpages and allocates them as part of blob creation. When a blobpage is allocated, the free-map entry for that page is updated. All entries for a single blob are linked.
Blobpage
The blobpage contains the blob data. Blobpage size is specified by the OnLine administrator who creates the blobspace. Blobpage size is specified as a multiple of the page size: for example, four times BUFFSIZE or 20 times BUFFSIZE. (Refer to page 5-5 for further information about selecting blobpage size. Refer to page 2-148 for further information about the structure of a blobspace blobpage.)
Structure of a Blobspace Blobpage
Figure 2-38 illustrates the structure of a blobpage.
Figure 2-38 General structure of a blobpage. The size of a blobpage must be a multiple of the page size.
Figure 2-39 illustrates the different locations of the two pairs of timestamps that appear on the blobspace blobpage.
Figure 2-39 Blob timestamps register the most-recent point in time when this blobpage was allocated. Page-header and page-ending timestamps validate page consistency and confirm that the page write was successful. (The figure shows the blobpage before the blob is overwritten, with the blob descriptor, the blob timestamp pair, the page header, the blob data segment, and free space.)
Physical Log
The function of the physical log is to maintain a set of “before-images” of dbspace pages that represent a time at which all data is both physically and logically consistent. The physical log “before-images” can be combined with the logical log records of transactions to recover all transactions that occurred since the most-recent point of known consistency. The point of known physical consistency in an OnLine database server system is called a checkpoint.
Blobspace blobpages do not appear in the physical log because blobs are logged differently than all other data types. (For further information about blobspace logging, refer to page 4-22.) The first time following a checkpoint that a page is modified, the “before-image” of the page is written to the physical log buffer in shared memory.
Logical Log Files This same unlikely scenario could occur during the rollback of a long transaction even after the LTXEHWM is reached. (Refer to page 2-158 for more details about the long transaction exclusive high-water mark.) After the LTXEHWM is reached, and after all processes have exited critical sections, only the database server process that is performing the rollback has access to the physical and logical logs.
For further information about what happens during a logical log backup that makes this possible, refer to page 4-26. For further information about what happens during an OnLine restore with archive and logical log backup tapes, refer to page 4-45.
File Rotation
OnLine safeguards the logical log records by requiring that a full logical log file is marked with a status of used until it is backed up to tape and it is no longer needed for fast recovery.
File Contents
The logical log files contain five types of records:
■ SQL data definition statements for all databases
■ Record of a checkpoint
■ Record of a change to the configuration
■ SQL data manipulation statements for databases that were created with logging
■ Record of a change to the logging status of a database
The logical log files receive the first three types of records during processing even if no databases are created with transaction logging.
Number and Size You cannot change the size of the logical log files after OnLine disk space is initialized. If a logical log file is dropped, the disk space formerly occupied by the file is freed and added to the chunk free-list page. For further information about logical log management and administration, refer to page 3-13. As OnLine administrator, you determine the size of each logical log file and the total disk space allocated for the log.
The duration of a transaction is a key variable that might be beyond your control. An application that does not require much space for logical log records might generate long transaction errors if the users permit transactions to remain open for long periods of time. The more logical log space is available, the longer a transaction may be permitted to remain open before a long-transaction error condition develops. (Refer to page 2-158 for further information about long transactions.)
To free a logical log, the log must be backed up to tape and all records within the logical log must be part of closed transactions. If any record in the log is part of an open transaction, the log file cannot be freed. The backup strategy for OnLine requires that the statement that creates a blobspace and the statements that insert blobs into that blobspace must appear in separate logical log files.
Long Transactions The second long-transaction high-water mark, LTXEHWM, indicates that the logical log has filled to a critical level. Most user processes are denied access to the logical log. Only user processes currently rolling back transactions (including the long transaction) and database server processes currently writing COMMIT records are allowed access to the logical log.
Long Transactions If the transactions cannot be rolled back before the logical log fills, OnLine shuts down. If this occurs, you must perform a data restore. During the data restore, you must not roll forward the last logical log file. Doing so re-creates the problem by filling the logical log again.
Chapter 3
Operating OnLine
In This Chapter
Logical Log Administration
Logical Log File Status
Logical Log File ID Numbers
Add a Logical Log File
Drop a Logical Log File
Move a Logical Log File to Another Dbspace
Change the Logging Status of a Database
Adding Logging to a Database
Ending or Modifying Logging from DB-Monitor
ANSI Compliance
Back Up a Logical Log File
Start Continuous Logical Log Backup
Change Pathname of Archive Tape Device
Change Block Size of Archive Tape Device
Change Tape Size of Archive Tape Device
Create an Archive, Any Type
If the Logical Log Files Fill During an Archive
Two Tape Drives
One Tape Drive
If an Archive Terminates Prematurely
Monitor OnLine Activity
Monitor Archive History
Monitor Blobs in a Blobspace
Monitor Blobs in a Dbspace
Monitor Buffers
Modify OnLine Configuration
Create a Blobspace
Drop a Blobspace
Change the Number of Buffers in the Pool
Change the Size of Either Log Buffer
Add a Chunk
Change the Maximum Number of Chunks
Create a Dbspace
Drop a Dbspace
Enforce/Turn Off Residency for This Session
Enforce/Turn Off Residency
Change the Status of a Mirrored Chunk
In This Chapter Occasionally, administrators conceive of a shortcut that seems like a good idea. Because of the complexity of OnLine, an idea that appears to be an efficient time-saver can create problems elsewhere during operation. The last section in this chapter, “Things to Avoid,” attempts to safeguard you from bad ideas that sound good. You start up and shut down OnLine by changing the mode. The first section, “Changing Modes,” describes each OnLine mode and how to move OnLine from one mode to another.
In the final section, “Modify OnLine Configuration,” configuration-changing actions are divided into eight categories, according to the area of OnLine that is affected:
■ Blobspaces (creating or dropping)
■ Buffers (changing the size of the logical or physical log buffer, or changing the number of buffers in the shared-memory buffer pool)
■ Chunks (adding a chunk or changing its status)
■ Dbspaces (creating or dropping)
■ Forced residency (on or off, temporarily or for this session)
Changing Modes
Types of OnLine Modes
You can determine the current OnLine mode by executing tbstat. The mode is displayed in the header. The mode also appears in the status line displayed in DB-Monitor.
Offline Mode
When OnLine is in offline mode, it is not running. OnLine must be offline when you initiate a data restore.
Quiescent Mode
When OnLine is in quiescent mode, no user can start a database server process. Only user informix can access the administrative options of DB-Monitor.
From Offline to Quiescent
When OnLine changes from offline to quiescent mode, the tbinit daemon process reinitializes shared memory. When OnLine is in quiescent mode, no user can start a database server process. If you are user informix, you can take OnLine from offline to quiescent mode from within DB-Monitor or from the command line. If you are root, you can only use the command-line option.
Execute tbinit from the command line to take OnLine from offline to online mode. To verify that OnLine is running, execute tbstat from the command line. The header on the tbstat output gives the current operating mode. For further information about the tbinit utility, refer to page 7-45.
From Quiescent to Online
When you take OnLine from quiescent to online mode, all users gain access.
Gracefully from Online to Quiescent
Take OnLine gracefully from online to quiescent mode when you want to restrict access to OnLine without interrupting current processing. If you are user informix, you can take OnLine gracefully from online to quiescent mode from within DB-Monitor or from the command line. If you are root, you can only use the command-line option.
Immediately from Online to Quiescent

Take OnLine immediately from online to quiescent mode when you want to restrict access to OnLine as soon as possible. Work in progress can be lost. If you are user informix, you can take OnLine immediately from online to quiescent mode from within DB-Monitor or from the command line. If you are root, you can only use the command-line option. A prompt asks for confirmation of the immediate shutdown.
From Any Mode Immediately to Offline

This is the proper action to take if you receive a message that the OnLine daemon is no longer running. After you take OnLine to offline mode, reinitialize shared memory by taking OnLine to quiescent or online mode. If you are user informix, you can take OnLine from any mode to offline (bypassing quiescent mode) from within DB-Monitor or from the command line. If you are root, you can only use the command-line options.
Logical Log Administration

This section discusses configuration and backup of logical log files. For an overview discussion of the function of the logical log, refer to page 4-18. For background information about the role of the logical log in OnLine fast recovery, refer to page 4-39. For background information about what happens when OnLine backs up a logical log file, refer to page 4-26.
Examine Your Logical Log Configuration

Complete the tasks outlined here to examine your logical log configuration and to verify that it is appropriate for your OnLine environment.

Your Configuration File

To examine your specified configuration, you need a copy of your OnLine configuration file, $INFORMIXDIR/etc/$TBCONFIG. Execute tbstat -c while OnLine is running.
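For example, to review only the logging-related entries of the current configuration, you can filter the tbstat -c output; this is simply a convenience, and the parameter names shown are those discussed in this section:

tbstat -c | grep -E "LOG|LTAPE"

The output includes values such as LOGSMAX, LTAPEDEV, LTAPEBLK, and LTAPESIZE.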
Freeing the Logical Log Files

The operator should monitor backed-up logical log files to ensure that they are being freed (released for reuse) in a timely manner. Even a backed-up log file cannot be freed (its status remains unreleased) if it contains records belonging to an open transaction. (Refer to page 3-26 for more information about log file status.)
If you modify the initial configuration values, you might be able to improve performance. Weigh these three considerations:

■ Size the logical log large enough to prevent a long transaction condition. (Refer to page 3-39 for a definition of a long transaction.)
■ Create enough logical log files so that you can switch log files if needed without running out of free logical logs.
If the LTAPEDEV and TAPEDEV values are the same, you must plan your logical log file backups to leave the maximum amount of free space available before the archive begins. If the logical log files fill while the archive is under way, normal OnLine processing stops. If this happens, your options are limited. You can either abort the archive to free the tape device and back up the logical logs or leave normal processing suspended until the archive completes.
Location of Logical Log Files

When OnLine disk space is initialized, the logical log files are located in the root dbspace. You cannot control this. After OnLine is initialized, you can improve performance by moving the logical log files out of the root dbspace and onto one or more disks that are not shared by active tables. This can reduce disk contention. If you do not know where your logical log files currently reside, select the Status menu, Logs option.
Change Pathname of Logical Log Tape Device If you are logged in as user informix, you can change the value of LTAPEDEV from within DB-Monitor or from the command line. If you are logged in as root, you must use the command-line option. Preliminary Consideration Tape devices must rewind before opening and on closing to be compatible with OnLine operation. The reason for this is a series of checks that OnLine performs before writing to a tape.
The host machine where the tape device is attached must permit user informix to run a UNIX shell from your machine without requiring a password. If your machine does not appear in the hosts.equiv file of the other host machine, then it must appear in the .rhosts file in the home directory of the informix login. If you are backing up logical log files as root, the machine name must appear in the .rhosts file for root on the other host machine.
From the Command Line

To change the value of LTAPEDEV from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG. Change the value of LTAPEDEV (and LTAPEBLK and LTAPESIZE, if appropriate).

Change Block Size of Logical Log Tape Device

The block size of the logical log tape device is specified as LTAPEBLK in the configuration file. The block size is expressed in kilobytes.
From the Command Line

1. Use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of LTAPEBLK to the new block size, expressed in kilobytes.

Change Tape Size of Logical Log Tape Device

The tape size of the logical log tape device is specified as LTAPESIZE in the configuration file. Tape size refers to the maximum amount of data that should be written to this tape, expressed in kilobytes.
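Both the block size and the tape size are single-line entries in the file named by $INFORMIXDIR/etc/$TBCONFIG. The excerpt below is only an illustration; the device pathname and the numeric values are examples, not recommendations:

LTAPEDEV   /dev/rst0
LTAPEBLK   16
LTAPESIZE  102400

You can confirm the edit by executing tbstat -c, which displays the configuration file.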
From the Command Line

1. To change the value of LTAPESIZE, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of LTAPESIZE to the new tape size, expressed in kilobytes.

Change Maximum Number of Logical Log Files

The maximum number of logical log files is specified as LOGSMAX in the configuration file. Do not confuse the maximum number of logical log files with the actual number of log files.
From DB-Monitor

1. From within DB-Monitor, select the Parameters menu, Shared-Memory option to change the value of LOGSMAX. DB-Monitor displays the current value.
2. Enter the new value for LOGSMAX in the Max # of Logical Logs field.
3. Reinitialize shared memory (take OnLine offline and then to quiescent mode) for the change to take effect.

From the Command Line

1.
Change Size of Logical Log Files From DB-Monitor 1. From within DB-Monitor, first unload all OnLine data. Refer to page 4-53. 2. Select the Parameters menu, Initialize option to reinitialize disk space. Change the value in the field labelled Log.Log Size. Proceed with OnLine disk space initialization. For more information about the disk space initialization procedure, refer to page 1-52. 3. After OnLine disk space is initialized, re-create all databases and tables. Then reload all OnLine data.
Logical Log File Status

You can display the status of the logical logs through the Status menu, Logs option, or by executing tbstat -l (lowercase L). The status portion of the logical log display contains two columns, Number and Flags. The numbers are the ID numbers of the individual logical log files. (Refer to page 3-27 for more information about the ID numbers.) The flags are the logical log status flags.
Position  Flag  Description
3         B     The logical log file is backed up.
5         C     The logical log file is the current log.
7         L     The logical log file contains the most recent checkpoint record in the logical log (all log files). You cannot free this file until a new checkpoint record is written to the logical log. (Refer to page 3-39.)

Logical Log File ID Numbers

OnLine tracks the logical log files by assigning each free log file a unique number.
Logical log file  1st rotation ID number  2nd rotation ID number  3rd rotation ID number  4th rotation ID number
4                 4                       10                      16                      22
5                 5                       11                      17                      23
6                 6                       12                      18                      24

In general, log files become free after the file has been backed up to tape and all logical log records within the file are associated with closed transactions. (Refer to page 3-39.) It is possible for one logical log file to become free earlier than another logical log file with a lower ID number.
Add a Logical Log File From DB-Monitor 1. From within DB-Monitor, select the Parameters menu, Add-Log option to add a logical log file. 2. Enter the name of the dbspace where the new logical log file will reside in the field labelled Dbspace Name. Because you cannot change the size of the logical log file unless you reinitialize OnLine disk space, the size of the log file automatically appears in the Logical Log Size field. 3.
Drop a Logical Log File

You can drop a log to increase the amount of disk space available within a dbspace. If you are logged in as user informix, you can drop a logical log file from within DB-Monitor or from the command line. If you are logged in as root, you must use the command-line option. When dropping a log file, consider the following requirements:

■ OnLine requires a minimum of three logical log files at all times.
From the Command Line

1. From the command line, execute the tbparams utility with the -d and -l (lowercase L) options to drop a logical log file. The -d option indicates you are dropping a log file. When the -d option is followed by the -l option, the -l introduces the ID number of the logical log file you are dropping. The following example drops the logical log file with ID number 121:

tbparams -d -l 121

2.
Move a Logical Log File to Another Dbspace

The following example illustrates moving six logical log files from the root dbspace to another dbspace, dbspace_1. For more information about completing a specific step, turn to the page indicated.

1. Free all log files except the current log file. A log file is free if it has been backed up and all records within the log file are part of closed transactions. (Refer to page 3-39.)
2.
Change the Logging Status of a Database

You can make any one of the following changes to a database:

■ Add logging (buffered or unbuffered) to a database
■ End logging for a database
■ Change logging from buffered to unbuffered
■ Change logging from unbuffered to buffered
■ Make a database ANSI-compliant

Add logging to a database to take advantage of transaction logging or other OnLine features that require logging, such as deferred checking.
Adding Logging to a Database

Unbuffered logging is the best choice for most databases. In the event of a failure, only the single alteration in progress at the time of the failure is lost. If you use buffered logging and a failure occurs, you could lose more than just the current transaction. In return for this risk, performance during alterations is slightly improved.
Add logging to a database by executing one of these two commands:

tbtape -s -B database    (buffered logging)
tbtape -s -U database    (unbuffered logging)

You can change logging status for any number of databases with the same command. For further information about the tbtape utility, refer to page 7-102. If you create an online archive, and you do so from DB-Monitor, users are able to access all databases during the archive.
To change buffered logging to unbuffered logging, or vice versa, execute one of the following commands:

tbtape -B database    (buffered logging)
tbtape -U database    (unbuffered logging)

You can change the logging status for any number of databases with the same command. For further information about the tbtape utility, refer to page 7-102.
Back Up a Logical Log File

If you are logged in as user informix, you can back up a log file from within DB-Monitor or from the command line. If you are logged in as root, you must use the command-line option. You can back up a log file while OnLine is in online mode.

From DB-Monitor

From within DB-Monitor, select the Logical-Logs menu, Auto-Backup option to explicitly request the backup of all full logical log files.
When you initiate Continuous-Backup, OnLine backs up any full logical log files immediately. Continuous-Backup does not back up the current log (status C). If you press the Interrupt key while a backup is under way, OnLine finishes the backup and then returns control to you. If OnLine is waiting for a log file to fill, the option is ended immediately.
You must explicitly request logical log backups (using the DB-Monitor Auto-Backup option or its command-line equivalent, tbtape -a) until you restart continuous logging. If you press the Interrupt key while a backup is under way, OnLine finishes the backup and then returns control to you. If you press the Interrupt key while OnLine is waiting for a log file to fill, the option is ended immediately.
Free a Logical Log File Refer to page 3-15 for a discussion about the importance of freeing the logical log files in a timely manner. Refer to page 3-26 for information about the logical log file status flags. Refer to page 3-27 for more information about how the logical log files rotate through ID numbers. OnLine user processes attempt to free logical log files under the following conditions: ■ The first OnLine user process that writes to a new logical log file attempts to free the previous log.
Status A

If a log file is newly added (status A), create a level-0 archive to activate the log file and make it available for use.

Status U

If a log file contains records but is not yet backed up (status U), execute tbtape -a to back up the log from the command line, or (if you are logged in as user informix) select the Logical-Logs menu, Auto-Backup option. (Refer to page 3-36.) If backing up the log file does not free it, its status is either U-B or U-B-L.
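A minimal command-line sequence for the status U case, using utilities already described in this chapter:

tbtape -a   (backs up each full logical log file)
tbstat -l   (check the Flags column: a freed file shows status F; a file that still holds records of an open transaction remains U-B or U-B-L)

The same check applies after an Auto-Backup request from DB-Monitor.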
OnLine cannot free this log. To do so would permit new records to overwrite the most recent record in the logical log that designates a time when both physical and logical consistency was ensured. If this happened, OnLine would be unable to execute fast recovery. (Refer to page 4-39.) Therefore, this log file maintains a backed-up status until a new checkpoint record is written to the current logical log file.
This process continues with each log file on Tape A. When the restore process encounters the partial log at the end of Tape A, it reads what records it can. Because OnLine has not reached the end of the log file, it prompts the operator for the next tape. The operator mounts Tape B. OnLine reads the logical log header indicating that this is the beginning of the same log file that has been partially read.
Archive Administration

Archive Types

An online archive is an archive that is created while OnLine is online and database server processes are modifying data. Allocation of some disk pages in dbspaces and blobspaces might be temporarily frozen during an online archive. (Refer to page 4-30 for an explanation of how OnLine manages an online archive.) A quiescent archive is an archive that is created while OnLine is in quiescent mode. No database activity occurs while the archive is being made.
Level-1 Archive

A level-1 archive contains a copy of every disk page that has changed since the last level-0 archive. All disk page copies on the tape reflect the contents of the disk pages at the time the level-1 archive began.

Level-2 Archive

A level-2 archive contains a copy of every disk page that has changed since the last level-0 or level-1 archive, whichever is most recent. All disk page copies on the tape reflect the contents of the disk pages at the time the level-2 archive began.
How Long Will an Archive Take?

The number of variables that you must consider in estimating the time for an archive makes the task more of an art than a science.
Plan the Archive Schedule

Each of the following considerations affects the archive schedule you create for your environment:

■ If you want to minimize the time for a restore
■ If you want to minimize the time to create an archive
■ If you want to create online archives
■ If you must use the same tape drive to create archives and to back up the logical logs
■ If the operator is periodically unavailable

Try to avoid unscheduled archives.
■ Size and number of archives. The minimum number of archives is just the single level-0 archive. The maximum number is three, one archive of each level.
■ Size and number of logical log files since the last archive. The size of the level-0 archive is fixed because it is the sum of all in-use data. The size of the level-1 archive is a function of the time between your level-0 archives. The more often you create level-0 archives, the smaller each level-1 archive will be.
Minimize Archive Time

You can reduce the number of disk pages that must be copied during an archive by storing explicitly created temporary tables in a dedicated dbspace and then dropping that dbspace before you archive. For example, suppose an application in your environment uses temporary tables to load data. If you load 250,000 rows into a temporary table and then later delete that temporary table, the pages that were allocated to the table are still archived, even though the table no longer exists.
If the logical log files fill in spite of these precautions, you can either leave normal processing suspended until the archive completes or cancel the archive. (It is possible that even the archive will hang eventually, the result of a deadlock-like situation. The archive is synchronized with OnLine checkpoints. It can happen that the archive procedure must wait for a checkpoint to synchronize activity.)
Examine Your Archive Configuration For further information about the relationship of the current configuration to the values in the configuration file ($INFORMIXDIR/etc/$TBCONFIG), refer to page 1-11. The Archives In the event of a system failure, OnLine can restore all data that has been archived. Without archives, you cannot perform a restore. Do not use the OnLine data-migration utilities to unload data as a substitute for a complete archive.
If the TAPEDEV and LTAPEDEV values are the same, you must ensure that the logical logs do not fill before the archive completes. If the logical log files fill while the archive is under way, normal OnLine processing stops. Refer to page 3-47 for guidelines on planning an archive schedule. Refer to page 3-59 for an explanation of what happens if the logical log files become full during an archive.
Change Pathname of Archive Tape Device If you change the pathname to /dev/null, the change proceeds more smoothly if you make the change while OnLine is offline. If TAPEDEV is set to /dev/null, you cannot perform a restore. The tape device specified by the pathname must perform a rewind before opening and on closing. If you are logged in as user informix, you can change the value of TAPEDEV from within DB-Monitor or from the command line. If you are logged in as root, you must use the command-line option.
To specify an archive tape device on another host machine, use the following syntax:

host_machine_name:tape_device_pathname

The following example specifies an archive tape device on the host machine kyoto:

kyoto:/dev/rmt01

The host machine where the tape device is attached must permit user informix to run a UNIX shell from your machine without requiring a password. If your machine does not appear in the hosts.equiv file of the other host machine, then it must appear in the .rhosts file in the home directory of the informix login.
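Before relying on a remote tape device, you can confirm that the trusted-host arrangement is in place by running a harmless remote command as user informix. This is only a quick check, not part of the OnLine procedure; the host name kyoto is carried over from the example above:

rsh -l informix kyoto echo ok

If the command prompts for a password or fails, correct the hosts.equiv or .rhosts entries before attempting a backup or archive to that device.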
From the Command Line

1. Use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of TAPEDEV.
3. Change the values for the archive device block size (TAPEBLK) and tape size (TAPESIZE) at the same time, if appropriate.

Change Block Size of Archive Tape Device

The block size of the archive tape device is specified as TAPEBLK in the configuration file. The block size is expressed in kilobytes.
From the Command Line

1. To change the value of TAPEBLK from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of TAPEBLK to the new block size, expressed in kilobytes.

Change Tape Size of Archive Tape Device

The tape size of the archive tape device is specified as TAPESIZE in the configuration file.
From the Command Line

1. To change the value of TAPESIZE from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of TAPESIZE to the new tape size, expressed in kilobytes.

Create an Archive, Any Type

An archive can require multiple tapes. After a tape fills, OnLine rewinds the tape, prompts the operator with the tape number for labelling, and prompts the operator again to mount the next tape, if one is needed.
A good practice to follow is to label archive tapes with the archive level, date, time, and tape number provided by OnLine, as shown in the following example:

Level 1: Wed Nov 27, 1991 20:45 Tape # 3 of xx

Each archive begins with its first tape reel numbered 1 and each additional tape reel incremented by 1. A five-volume archive is numbered 1 through 5. (You must supply the value for xx after the archive completes.)

From DB-Monitor

1.
3. Place a write-enabled tape on the tape drive device (defined by TAPEDEV). Put the device online with the appropriate operating-system command. The prompts from tbtape ask you to specify the archive level.
4. Follow the prompts for labelling and mounting new tapes. A message informs you when the archive is complete.

For further information about the tbtape utility, refer to page 7-102.
One Tape Drive

If only one tape device is available to OnLine, you can take either one of two actions:

■ Leave OnLine processing suspended while the archive completes.
■ Cancel the archive and try again later.

The archive messages display an estimate of how much of the archive is complete, expressed as a percentage. If the archive is nearly complete, you might be willing to hold up your users until the archive finishes.
Monitor OnLine Activity

Many OnLine structures can be monitored for more than one kind of information. For example, you can monitor chunks for either fragmentation or page usage. Other OnLine entities are too general to be researched directly. For example, table information is too broad a category.
Monitor Archive History From the Command Line From the command line, execute tbcheck -pr to display root dbspace reserved page information.
Monitor Blobs in a Blobspace

DB-Monitor measures blobspace availability by providing the number of used disk pages in the chunk. The tbstat -d display attempts to describe the number of blobpages used. The tbcheck -pB display provides the most accurate assessment of current blobspace storage efficiency.

From DB-Monitor

From DB-Monitor, select the Status menu, Spaces option.
Flag  Description
P     Primary chunk.
R     Chunk is currently being recovered.
X     New mirror chunk that contains logical log files; a level-0 archive is needed before the mirror can become active.

From the Command Line

From the command line, execute tbstat -d to obtain information that is similar to that displayed by the Dbspaces menu, Info option.
The number of free blobpages is derived from the information stored in the shared-memory version of the chunk’s free-map page, not the disk version. These statistics are the most current possible and might conflict with the output of tbstat -d, which is derived from the disk version of the free-map page. Refer to page 5-5 for further information about how to interpret the display and modify your blobspace blobpage size to improve performance.
Monitor Buffers

Use the tbstat -b, -X, and -B options to identify a specific user process that is holding a resource that might be needed by another user process. The tbstat -p option can help you assess performance as measured by the percentage of cached reads and writes. For monitoring information that describes the efficiency of the buffer pool, refer to page 3-68 (buffer-pool activity).
tbstat -B

Execute tbstat -B to obtain the following buffer statistics:

■ Address of every regular shared-memory buffer
■ Page numbers for all pages remaining in shared memory
■ Address of the first user process waiting for each buffer

Refer to page 7-84 for further information about the tbstat -B output.

tbstat -p

Execute tbstat -p to obtain statistics about cached reads and writes. The caching statistics appear in four fields on the top row of the output display.
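The cached-read and cached-write percentages can also be computed by hand from the tbstat -p counts. The field names below are assumptions (they are not listed in this section); the arithmetic is the standard buffer-cache calculation:

%cached reads  = 100 * (bufreads - dskreads) / bufreads
%cached writes = 100 * (bufwrits - dskwrits) / bufwrits

A falling percentage over time is one signal that the BUFFERS value may be too small for the workload.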
Monitor Buffer-Pool Activity

Monitor buffer-pool activity to determine the general availability of buffers and to isolate the activity that is triggering buffer-pool flushing. (Refer also to page 3-66 for general buffer statistics.) The tbstat -p output contains the following two statistics that describe buffer availability:

ovbuff lists the number of times that OnLine attempted to exceed the maximum number of shared buffers, BUFFERS.
tbstat -R

Execute tbstat -R to obtain information about the number of buffers in each LRU queue and the number and percentage of the buffers that are modified. Details about the function of the LRU queues are described on page 2-57. Refer to page 7-95 for further information about the tbstat -R output.

tbstat -D

Execute tbstat -D to obtain, by chunk, the number of pages read and the number of pages written. Refer to page 7-86 for further information about the tbstat -D output.
A checkpoint check is performed if the time specified by CKPTINTVL has elapsed since the last checkpoint. If nothing is in the buffers waiting to be written to disk at the time of the checkpoint check, no checkpoint is needed. The time in the Last Checkpoint Done field is not changed until a checkpoint occurs.

From the Command Line

From the command line, execute tbstat -m to view the last 20 entries of the OnLine message log.
Monitor Chunks

The second screen display lists the following chunk information for each dbspace:

■ Chunk ID
■ Chunk pathname and offset
■ Mirror status flags
■ Pages in the chunk
■ Number of used disk pages in the chunk

If a chunk is mirrored, both the primary chunk and the mirror chunk share the same chunk ID number. The chunk status flags are defined as follows:

Flag  Description
-     Chunk belongs to a dbspace.
B     Chunk belongs to a blobspace.
D     Chunk is down; no reads or writes can occur.
Execute tbcheck -pr to obtain the chunk information that is stored in the root dbspace reserved page. Refer to page 2-100 for a detailed description of each field in the chunk reserved page, PCHUNK. Execute tbcheck -pe to obtain the physical layout of information in the chunk. The chunk layout is sequential, and the number of pages dedicated to each table is shown.
A second solution to chunk fragmentation is to unload and reload the tables in the chunk. To prevent the problem from recurring, consider increasing the size of the tblspace extents.

Monitor Configuration Information

Configuration information is needed for documentation during OnLine administration.

From DB-Monitor

From DB-Monitor, select the Status menu, Configuration option.
Refer to page 2-97 for a detailed description of each field in the configuration reserved page.

Monitor Databases

This section describes information that is available to you about OnLine databases. Monitor databases to track disk space usage by database.

From DB-Monitor

From DB-Monitor, select the Status menu, Databases option. This option lists all OnLine databases (up to a limit of 100).
Because tbcheck -pc derives its information directly from the tblspace, you do not need to run the UPDATE STATISTICS statement to ensure that the output is current. Refer to page 7-42 for further information about the tbcheck -pc display.

Monitor Dbspaces

Use these options to track available disk space. The Dbspaces menu, Info option describes the mirror status of each dbspace chunk.

From DB-Monitor

From DB-Monitor, select the Status menu, Spaces option.
The chunk status flags are defined as follows:

Flag  Description
-     Chunk belongs to a dbspace.
B     Chunk belongs to a blobspace.
D     Chunk is down.
M     Chunk is a mirror chunk.
P     Chunk is a primary chunk.
R     Chunk is in recovery mode.
X     Mirroring has been requested but the chunk contains a logical log file; a level-0 archive is needed before this mirror can begin functioning.
Monitor Disk Pages

Use these options to obtain the specific data row rowid or to view a specific page in ASCII (and hexadecimal). Use the rowid to specify a disk page. From the command line, execute tbcheck -pD with either a database name or a table name as the parameter to obtain a listing of every row requested (either in the database or in the table). If you specify a table name, you can optionally specify a logical page number.
If you prefer to use a tblspace number and page number, the tblspace number is stored as a decimal in the partnum column of the systables table. Use the HEX function to obtain the value as hexadecimal:

SELECT HEX(partnum) FROM systables WHERE tabname = tablename

You can calculate the page number from the hexadecimal value of the rowid as follows:

■ The two right-most digits of the hexadecimal rowid refer to the slot-table entry number for this row.
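A worked example of this calculation, assuming that the digits to the left of the slot-table entry give the logical page number (the remainder of the original list is not reproduced here): for a rowid whose hexadecimal value is 0x00010305, the two right-most digits, 05, identify the slot-table entry, and the remaining digits, 0x000103, give logical page number 259 (decimal).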
One solution to chunk fragmentation is to cluster the index of all tables in the chunk using the ALTER TABLE statement. Another solution is to unload and load the tables in the chunk. (For further information about what to do if fragmentation exists, refer to page 3-72.) Refer to page 7-43 for further information about the tbcheck -pe output. Execute tbstat -t to obtain general information about the limited set of active tblspaces.
Execute the -pk or -pK options of tbcheck to obtain all page information. (The mnemonic -k option refers to “keep all” information.) Execute tbcheck -pk with a database name or a table name parameter to obtain a listing of the index key values and the corresponding rowids for each index leaf page. Also listed is the node page to which each leaf page is joined. Execute the -pl or -pL options of tbcheck to obtain leaf node information.
Monitor Logging Activity Processing stops if OnLine attempts to switch to the next logical log file and finds that the log file status is not free (F). From DB-Monitor From DB-Monitor, select the Status menu, Logs option. The display contains three sections: physical log, logical log (general), and the individual logical log files. The first two sections contain buffer information. This information refers to the current physical and logical log buffers.
Execute tbcheck -pr to obtain detailed logical log file information that is stored in the root dbspace reserved page dedicated to checkpoint information. Refer to page 2-97 for a description of each of the fields related to logical logs (PAGE_CKPT).

Monitor the Message Log

Monitor the message log periodically to verify that OnLine operations are proceeding normally and that events are being logged as expected. Use a UNIX editor to read the complete message log.
Monitor OnLine Profile

Monitor the OnLine profile to analyze performance over time. The Profile screen maintains cumulative statistics. Use the tbstat -z option whenever you wish to reset all statistics to 0.

From DB-Monitor

From DB-Monitor, select the Status menu, Profile option. The screen display contains 28 separate statistics, as well as the current OnLine operating mode, the boot time, and the current time.
Monitor Shared Memory and Latches

Monitor shared memory (tbstat -o) to capture a static snapshot of OnLine shared memory that you can use for analysis and comparison. Monitor latches to determine if a user process is holding a latch or if contention of shared-memory resources exists. From the command line, execute tbstat -o to save a copy of the shared-memory segment. You can execute tbstat -o with a filename parameter to specify the file to contain the output.
Monitor Tblspaces

Monitor tblspaces to determine current space availability and allocation by table. For further information about monitoring tblspace extents, refer to page 3-78. From the command line, execute tbstat -t to obtain general information about the limited set of active (or open) tblspaces. The tbstat -t output includes the tblspace number and the following four fields:

npages are the pages allocated to the tblspace.
nused are the pages used from this allocated pool.
Monitor Users and Transactions

Monitor users’ database server processes to determine the number and type of server processes accessing OnLine, and the status of each one.

From DB-Monitor

From DB-Monitor, select the Status menu, User option. The display provides an overview of database server process activity. The first field, PID, is the user process identification number. The second field, User, is the login ID of the user that created this database server process.
If you execute tbstat -u while OnLine is performing fast recovery, several database server processes might appear in the display. During fast recovery, each transaction that is rolled forward or rolled back appears in the display.

Modify OnLine Configuration

Your OnLine configuration includes configuration parameters as well as the number and status of OnLine structures such as blobspaces, dbspaces, and chunks. You can configure the areas of OnLine listed next.
■ Physical log
  ❑ “Change Physical Log Location or Size” on page 3-107
■ Shared-memory parameters
  ❑ “Change the Checkpoint Interval” on page 3-109
  ❑ “Change the Destination of Console Messages” on page 3-110
  ❑ “Change the Maximum Number of Dbspaces” on page 3-111
  ❑ “Change the Maximum Number of Locks” on page 3-112
  ❑ “Change the Maximum Number of Tblspaces” on page 3-113
  ❑ “Change the Maximum Number of Users” on page 3-114
  ❑ “Change the Number of Page Cleaners” on page 3-115
Create a Blobspace

Preliminary Considerations

Verify that the DBSPACES value in the configuration file will not be exceeded. DBSPACES refers to the total number of blobspaces plus dbspaces. When you create a blobspace, you specify the blobpage size as a multiple of the machine-page size. You specify an explicit pathname for the blobspace. Informix recommends that you use a linked pathname. (Refer to page 1-22 for further information about the benefits of using linked pathnames.)
To continue the example, assume your OnLine page size is 2 KB. If you decide on a 12-kilobyte blobpage size, specify the blobpage size parameter as 6. If your OnLine page size is 4 KB, specify the blobpage size parameter as 3. (That is, the size of the blob rounded up to the nearest kilobyte, divided by the page size, is equal to the blobpage size parameter.)
From the Command Line

From the command line, execute the tbspaces utility with the following options and parameters:

-c             creates a new blobspace.
-b blobspace   specifies a blobspace name.
-g page_unit   specifies the blobpage-size parameter (number of OnLine pages).
-p pathname    specifies the explicit pathname of a primary chunk: either a raw device or a UNIX file.
-o offset      specifies the raw device offset in kilobytes, if appropriate.
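Putting these options together, a blobspace-creation command might take the following general form. The blobspace name, device, and offset are illustrative only, and your command may require additional options (such as an initial chunk size) that do not appear in the list above:

tbspaces -c -b blobsp3 -g 6 -p /dev/rsd0d -o 500

The -g value of 6 corresponds to the 12-kilobyte blobpage example given earlier for a 2-kilobyte OnLine page size.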
If you are logged in as user informix, you can drop a blobspace from within DB-Monitor or from the command line. If you are logged in as root, you must use the command-line option. You can drop a blobspace while OnLine is in online mode.

From DB-Monitor

1. From within DB-Monitor, select the Dbspaces menu, Drop option to drop a blobspace.
2. Use the RETURN key or Arrow keys to scroll to the blobspace you want to drop and press CTRL-B or F3.
From DB-Monitor

1. From within DB-Monitor, select the Parameters menu, Shared-Memory option to change the number of page buffers in the shared-memory pool. Change the value in the field Max # of Buffers.
2. Reinitialize shared memory (take OnLine offline and then to quiescent mode) for the change to take effect.

From the Command Line

1. To change the value of BUFFERS from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2.
From DB-Monitor

1. From DB-Monitor, select the Parameters menu, Shared-Memory option to change the size of either the logical log buffer or the physical log buffer. Change the value in the appropriate field, either Logical Log Buffer Size or Physical Log Buffer Size.
2. Reinitialize shared memory (take OnLine offline and then to quiescent mode) for the change to take effect.

From the Command Line

1.
Add a Chunk If you are allocating a raw disk device, you might need to specify an offset to preserve track 0 information used by your UNIX operating system. Refer to page 1-49 for further information about how to determine if you need an offset. If you are allocating cooked disk space, the pathname is a file in a UNIX file system. If you are logged in as user informix, you can make this change from within DB-Monitor or from the command line.
From the Command Line

From the command line, execute the tbspaces utility with the following options and parameters:

-a space_name   specifies a chunk is to be added. The -a option is followed by either a blobspace name or a dbspace name, indicating the space to which the chunk is added.
-p pathname     specifies the explicit pathname of a chunk, either a raw device or a UNIX file.
-o offset       specifies the raw device offset in kilobytes, if appropriate.
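For example, a command of the following general form adds a chunk to an existing blobspace or dbspace. The names and values are illustrative, and a chunk-size option (assumed here to be -s, which the list above does not show) is normally required as well; verify the exact option letters against the tbspaces reference material before running the command:

tbspaces -a dbspce5 -p /dev/rsd2b -o 0 -s 10000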
From DB-Monitor

1. From within DB-Monitor, select the Parameters menu, Shared-Memory option to change the maximum number of chunks. Change the value in the field Max # of Chunks.
2. Reinitialize shared memory (take OnLine offline and then to quiescent mode) for the change to take effect.

From the Command Line

1. To change the value of CHUNKS from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of CHUNKS.
Create a Dbspace You can create a dbspace while OnLine is in online mode. The newly added dbspace (and its associated mirror, if there is one) is available immediately. From DB-Monitor 1. From within DB-Monitor, select the Dbspaces menu, Create option to create a dbspace. 2. Enter the name of the new dbspace in the field Dbspace Name. 3. If you want to create a mirror for the initial dbspace chunk, enter a Y in the Mirror field. Otherwise, enter N. 4.
All options and parameters except -o and -m are required. The following example creates a mirrored dbspace dbspce5. An offset of 5,000 KB is specified.

tbspaces -c -d dbspce5 -p /dev/rsd1f -o 5000 -m /dev/rsd2a 5000

Drop a Dbspace

The dbspace you intend to drop must be unused. (It is not sufficient for the dbspace to be empty of data.) Execute tbcheck -pe to verify that no tables or logs are residing in the dbspace.

Preliminary Considerations

You cannot drop the root dbspace.
From the Command Line

From the command line, execute the tbspaces utility with the following option and parameter:

-d dbspace   specifies the dbspace to be dropped

The following example drops a dbspace dbspce5 and its mirrors:

tbspaces -d dbspce5

Enforce/Turn Off Residency for This Session

This change takes effect immediately, but it does not change the values in the configuration file. If you want to change the configuration file, refer to page 3-100.
Enforce/Turn Off Residency

If you are logged in as user informix, you can make this change from within DB-Monitor or from the command line. If you are logged in as root, you must use the command-line option.

From DB-Monitor

1. From within DB-Monitor, select the Parameters menu, Shared-Memory option to change the shared-memory residency setting. A value of Y enforces shared-memory residency, and a value of N permits all or part of shared memory to be swapped to disk.
Change the Status of a Mirrored Chunk When you initiate recovery for a “down” mirrored chunk, a daemon process begins copying the contents of the primary chunk to the chunk being recovered. OnLine brings the chunk online if the recovery is successful. The recovery status (R) is transitional. If you are logged in as user informix, you can change the status of a mirrored chunk from within DB-Monitor or from the command line. If you are logged in as root, you must use the command-line options.
From DB-Monitor

1. From within DB-Monitor, select the Dbspaces menu, Status option to change the status of a mirrored chunk.
2. Use the RETURN key or the arrow keys to select the dbspace or blobspace that contains the chunk whose status you wish to change. Press CTRL-B or F3. The chunks screen displays all chunks assigned to the selected blobspace or dbspace.
3. Use the RETURN key or the arrow keys to select the chunk whose status you wish to change.
Enable Mirroring

Mirroring activity does not begin until you define mirror chunks for a dbspace or a blobspace and explicitly start mirroring. Do not enable mirroring until you are ready to define the mirror chunks. Mirroring is enabled when the value of the MIRROR shared-memory configuration parameter is set to 1.
From the Command Line

1. To change the value of MIRROR from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of MIRROR to 1.
3. Reinitialize shared memory (take OnLine offline and then to quiescent mode) for the change to take effect.
Start/End Mirroring in a Blobspace or Dbspace Create the mirror chunk on a separate device from the primary chunk. Ideally, the mirror chunk device should be managed by a different controller than the controller that manages the primary chunk. Mirroring begins immediately unless the chunk contains a logical log file. If this is the case, mirroring begins in this chunk as soon as you create a level-0 archive.
To start or end mirroring

1. Select the Dbspaces menu, Mirror option. Every OnLine blobspace and dbspace is listed. You can start mirroring for spaces with a status of N. You can end mirroring for spaces with a status of Y.
2. Use the RETURN key or the arrow keys to select a blobspace or dbspace. Press CTRL-B or F3. If the current status is Y, DB-Monitor ends mirroring and releases the mirror chunks.
Change Physical Log Location or Size Create a level-0 archive immediately after you reinitialize shared memory. This archive is critical for OnLine recovery. You can change the value of PHYSFILE or PHYSDBS while OnLine is in online mode, but the changes do not take effect until you reinitialize shared memory (take OnLine offline and then to quiescent or online mode). If you use the command-line option, you reinitialize shared memory in the same step.
From the Command Line

From the command line, execute the tbparams utility with the following options and parameters:

-p           changes the physical log.
-s size      specifies the size of the physical log in kilobytes.
-d dbspace   specifies the dbspace where the physical log resides.
-y           initializes shared memory immediately.

Only the -p option is required.
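For example, the following command (the size and dbspace name are illustrative) moves a 2,000-kilobyte physical log into the dbspace dbspace1 and reinitializes shared memory immediately:

tbparams -p -s 2000 -d dbspace1 -y

Because -y reinitializes shared memory in the same step, schedule the command for a time when taking OnLine through a mode change is acceptable.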
To change the value of CKPTINTVL from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG. Change the value of CKPTINTVL. Reinitialize shared memory (take OnLine offline and then to quiescent mode) for the change to take effect.

Change the Destination of Console Messages

The destination pathname is specified as CONSOLE in the configuration file.
4. When a second prompt appears to confirm that you want to continue (to initialize OnLine disk space and destroy all existing data), respond N (no).
5. Reinitialize shared memory (take OnLine offline and then to quiescent mode) for the change to take effect.

From the Command Line

1. To change the value of CONSOLE from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of CONSOLE.
From the Command Line

1. To change the value of DBSPACES from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of DBSPACES. Reinitialize shared memory (take OnLine offline and then to quiescent mode) for the change to take effect.

Change the Maximum Number of Locks

The maximum number of available locks is specified as LOCKS in the configuration file.
Change the Maximum Number of Tblspaces

The maximum number of active tblspaces permitted by OnLine is specified as TBLSPACES in the configuration file. You can make this change while OnLine is in online mode but the change does not take effect until you reinitialize shared memory (take OnLine offline and then to quiescent or online mode). The maximum number of tblspaces is 32,000. The minimum is 10 per user process (USERS).
Change the Maximum Number of Users

Users, in this context, refers to the maximum number of processes that attach to shared memory. User processes include database server processes, daemon processes, and utility processes. This value is specified as USERS in the configuration file. You can make this change while OnLine is in online mode but the change does not take effect until you reinitialize shared memory (take OnLine offline and then to quiescent or online mode).
From the Command Line

1. To change the value of USERS from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of USERS. Change the value of the LOCKS, BUFFERS, or TBLSPACES parameter to match the new number of users, if required.
3. Reinitialize shared memory (take OnLine offline and then to quiescent mode) for the change to take effect.
From the Command Line

1. To change the value of CLEANERS from the command line, use an editor to edit the file specified by $INFORMIXDIR/etc/$TBCONFIG.
2. Change the value of CLEANERS. Reinitialize shared memory (take OnLine offline and then to quiescent mode) for the change to take effect.

Things to Avoid

Here are some ideas that might sound good in theory, but have unexpected consequences that could adversely affect your OnLine performance.
Chapter 4 Data Consistency, Recovery, and Migration
In This Chapter Several OnLine tasks that are critical to long-term operation are performed automatically. As administrator, you might find it interesting and helpful to understand what these tasks are and why they are important. Consistency-checking code has been implemented throughout the OnLine product to help alert administrators to occurrences of data inconsistency. The function of the code is, at minimum, to write messages to the OnLine message log if inconsistencies are detected.
Consistency Checking

OnLine 5 contains a page-level layer of checks that can detect data inconsistencies that might be caused by hardware or operating system errors or by unknown problems associated with OnLine operation. Associated with this consistency checking are four environment variables that, if set, collect diagnostic information that can be useful for IBM Informix technical support staff. (Descriptions of the four environment variables start on page 4-9.)
Using the tbcheck Commands tbcheck -cr Execute tbcheck -cr to validate the OnLine reserved pages that reside at the beginning of the initial chunk of the root dbspace. These pages contain the primary OnLine control information. If this command detects errors (not warnings), perform a data restore from archive. (Refer to page 2-95 for more details about the reserved pages.) tbcheck -cc Execute tbcheck -cc to validate the system catalog for each of the databases that OnLine manages.
tbcheck -cI

Execute tbcheck -cI for each database to validate indexes on each of the tables in the database. This command locks each table as the table is checked, denying access to all other database server processes. If this command detects errors, drop and re-create the affected index.

tbcheck -cD

Execute tbcheck -cD to validate the pages for each of the tables in the database.
The problem text briefly describes the type of consistency error. The process# identifies the OnLine database server process identification number (pid) that encountered the error. The user# is the user identification number as defined in the UNIX /etc/passwd file. The address is the address of the database server process in shared memory. Locate the user login and try to discover the operation being performed at the time of the error.
Setting Consistency-Checking Variables (A core dump is an image of the database server process in memory at the time that the inconsistency was detected. Some core dumps include a copy of shared memory. To determine the size of your OnLine shared memory, refer to the kilobyte information listed in any tbstat header.) Administrators with disk space constraints might prefer to write a script that detects the presence of diagnostic output in a specified directory and sends the output to tape.
DUMPCORE

Set the DUMPCORE environment variable as an alternative to GCORE for systems that do not support the gcore utility. DUMPCORE directs each OnLine database server process to dump core when it detects an inconsistency or initiates an abort sequence. To accomplish this, the server process sends itself a segmentation violation signal.
Set the DUMPSHMEM environment variable at the system prompt or in your .login or .cshrc (C shell) or your .profile (Bourne shell) file as follows:

C shell:
setenv DUMPSHMEM

Bourne shell:
DUMPSHMEM=
export DUMPSHMEM

DUMPDIR

The DUMPDIR environment variable directs the OnLine database server process to save the diagnostic output generated by GCORE or DUMPSHMEM to the specified directory instead of to the current directory.
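DUMPDIR is set in the same way as the other variables; the directory shown here is only an example:

C shell: setenv DUMPDIR /usr/informix/dumps
Bourne shell: DUMPDIR=/usr/informix/dumps; export DUMPDIR

Choose a directory with enough free space to hold the diagnostic output, since a shared-memory dump can be roughly as large as the OnLine shared-memory segment itself.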
Recovering from Corruption

OnLine alerts the user and administrator to possible corruption through the following means:

■ Error messages reported to the application state that pages, tables, or databases cannot be found. The following message is always returned to the application if an operation has failed because of an inconsistency in the underlying data or control information:

-105 ISAM error: bad isam file format

■ Consistency-checking messages are written to the OnLine message log.
If the down chunk is mirrored, OnLine continues to operate using the mirror chunk. Use UNIX utilities to determine what is wrong with the down chunk and then to correct the problem and bring the chunk back to online mode. Restore mirrored chunk data by following the procedure described on page 3-101. If the down chunk is not mirrored and contains logical log files, the physical log, or the root dbspace, OnLine immediately initiates an abort.
Recover a mirrored chunk through the DB-Monitor Dbspaces menu, Status option or through the tbspaces utility (change the status of the failed chunk from down, D, to online, O). When you initiate recovery, OnLine puts the down chunk in recovery mode and copies the information from the online chunk to the recovery chunk. When the recovery is complete, the chunk automatically receives online status.
The reason for the delay is to ensure a proper restore. The level-0 archive copies the updated OnLine configuration information (the newly created mirror chunk) from the root dbspace reserved pages to the first block of the archive tape. In the event of a data restore, the updated configuration information at the beginning of the archive tape can direct OnLine to look for the mirrored copies of the logical log files if the primary chunk becomes unavailable.
Important: If OnLine detects an I/O error on a chunk that is not mirrored, OnLine marks the chunk as down. If the down chunk contains logical log files, the physical log, or the root dbspace, OnLine immediately initiates an abort. Otherwise, OnLine can continue to operate but processes cannot write to the down chunk. The only method for restoring an unmirrored chunk is to perform a data restore from archive.
OnLine Logging Overview

The logical log files are at the center of all OnLine data-recovery mechanisms.
Dbspace Logging

OnLine logs dbspace data operations in six steps, illustrated in Figure 4-1 on page 4-20. Following is an overview of steps in logging dbspace data:

1. Read the data page from disk to shared-memory page buffer.
2. Copy the unchanged page to the physical log buffer.
3. Write the new data into the page buffer; create a logical log record of the transaction, if needed.
4. Flush physical log buffer to the physical log on disk.
5.
Figure 4-1 OnLine logs dbspace operations in six steps.
In general, an insert or an update begins when a database server process requests a row. OnLine identifies the page on which the row resides and attempts to locate the page in the OnLine shared-memory buffer pool. If the page is not already in shared memory, it is read into shared memory from disk. Before a dbspace data page is first modified, a copy of the unchanged page is stored in the physical log buffer.
Blobspace Logging

OnLine logs blobspace data in three steps, illustrated in Figure 4-2 on page 4-23. Blobspace data does not pass through shared memory or the logical log files on disk. Following is an overview of steps in logging blobspace data:

1. Blobspace data flows from the pipe, through temporary buffers in the database server process memory space, and is written directly to disk. If the blob requires more than one blobpage, links and pointers are created as needed.
2.
Figure 4-2 OnLine logs blobspace operations in three steps.
Operations Logging

OnLine does not copy blobspace data to the logical log files. This is the important difference between the way that OnLine logs blobspace data and dbspace data. A record of the blobspace operation is written to the logical log buffer, but the logical log records do not include a copy of the blobspace data. The logical log records only include images of the blobspace overhead pages: the free-map pages and the bit-map pages.
OnLine allocates and deallocates blobpages via the blobspace free-map pages. (Refer to page 2-148.) If a blobspace blob is deleted during a transaction, the entries in the free-map page for the blobpages storing the blob are marked for deletion. The blob data is unchanged. If the transaction is eventually rolled back, OnLine simply reverses the change to the blobpage status in the free-map page entry.
For further information about what happens during a data restore, refer to page 4-45.

What Happens During Logical Log Backup

Logical log file backup can be initiated implicitly as part of continuous backup or explicitly by the OnLine administrator or operator, either through DB-Monitor or by executing tbtape. The backup is performed by the tbtape process (even if requested through DB-Monitor). The logical log backup achieves two objectives:

1. It copies the records in the logical log file (and any blobpages needed to restore blobspace blobs) to tape, so that they can be rolled forward during a data restore.
2. It frees the backed-up logical log file so that OnLine can reuse it.
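For example, an explicit backup of all full logical log files can be requested from the command line rather than through DB-Monitor. A minimal sketch, assuming the logical log tape device (LTAPEDEV) is already configured:

tbtape -a    # back up each full logical log file; tbtape then prompts about the current log file

Continuous backups can likewise be started with tbtape (see the tbtape section in Chapter 7) or with the DB-Monitor Logical-Logs menu, Continuous-Backup option.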
Ready LTAPEDEV

When you request a logical log backup, you are prompted to mount a tape on the tape device specified as LTAPEDEV in the configuration file. The tbtape process prompts you to verify that the tape device is ready. If the tape is new, the tbtape utility process writes a tape header page to the device.
Each blobpage that was allocated during the time that this log file was current is copied to the tape device LTAPEDEV. This is required as part of blobspace logging to ensure that blobspace blobs can be restored after a DELETE statement, if required. (Refer to page 4-22 for a detailed explanation of blobspace logging.)
Write Log Records to Tape

The tbtape process begins copying each page in the logical log file to tape. When the last page in the log file is copied, the backup is complete. Figure 4-3 illustrates the order of information on the logical log backup tape.

Figure 4-3: Each logical log backup begins with blobpages, if any, then the header page, and then the log records.
If the backup was initiated explicitly through tbtape -a or Auto-Backup, tbtape looks for another log file with status U. If another log file requires backup, the procedure is repeated. If tbtape cannot find another candidate for backup, it prompts the operator to indicate whether the current log file should be backed up. If so, the log files are switched and the backup procedure is repeated for the formerly current log.
5. tbtape forces a checkpoint.
6. tbtape synchronizes activity with the tbinit process.
7. tbtape writes a tape header page.
8. tbtape archives reserved pages and logical log files that contain open transactions.
9. tbtape defines the archive criteria.
10. tbtape searches by chunk for disk pages that meet the archive criteria and archives those pages.
11. tbtape monitors pages written to the physical log and archives all pages that meet the archive criteria.
12.
Verify the Archive Level

As part of the archive request, you specify an archive level. The tbtape process compares the specified archive level with the information that was obtained from the PAGE_ARCH reserved page. If tbtape cannot find a record of a previous archive on the reserved page, the only valid archive level is a level-0 archive. Otherwise, any archive level is valid. (A level-0 archive to /dev/null registers as a valid archive.)
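An archive can also be created from the command line rather than through the DB-Monitor Archive menu. A minimal sketch, assuming the archive tape device (TAPEDEV) is already configured; tbtape prompts for the archive level and for the tape to be mounted:

tbtape -s    # create an archive; respond to the prompt with level 0, 1, or 2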
(It is likely that some transactions are ongoing during an online archive procedure. The restore procedure describes how transactions that span the archive tape and the logical log file are rolled back during a data restore, if necessary. Refer to page 4-45.)

Purpose of Checkpoint Timestamp

The checkpoint timestamp becomes the standard against which disk pages are compared for archiving purposes. (Timestamps are not based on system time.)
The answer is that tbtape and tbinit synchronize their activities at the beginning of the archive and continue to work in concert until the end of the archive. The following paragraphs describe the consequences of this cooperation.

Archive Disk Pages

The first task is to prevent any specific disk page from being modified until tbtape has had a chance to archive that page in its archive-begin state.
Archive Blobpages

The second task facing tbtape is to prevent database server processes from overwriting blobspace blobpages before they have been archived. Since blobpages do not pass through shared memory, the strategy of archiving from the physical log (described in the preceding section) is insufficient in itself. In addition, tbtape must postpone all changes to blobpages until after the blobpage is archived.
With this action, the root dbspace reserved pages receive acknowledgment that an archive has occurred. This event enables OnLine to make use of newly added or changed resources. (A level-0 archive to /dev/null registers as a valid archive. OnLine permits you to create a level-1 archive on a tape device even if your only level-0 archive was created when the archive device was /dev/null. Because of the problems this could create if a data restore were needed, avoid this situation.)
Determine Archive Criteria

As tbtape reads each disk page, it applies a set of criteria that determines which disk pages should be archived. The tbtape process only archives pages that meet these criteria:

■ The page has been allocated.
■ The page is not part of a logical log file or the physical log.
■ The page is needed for this archive level.
A level-1 archive directs tbtape to consider a narrower range of used pages. The archive criteria become all disk pages containing a timestamp that is less than the begin-archive checkpoint timestamp but greater than the timestamp associated with the most recent level-0 archive. The tbtape process reads the value of the most-recent level-0 archive timestamp from the active PAGE_ARCH reserved page.
Fast Recovery

Fast recovery is an automatic fault-tolerance feature that OnLine executes any time the operating mode changes from offline to quiescent mode. The aim of fast recovery is to return OnLine to a state of physical and logical consistency with minimal loss of work in the event of a system failure. Fast recovery attains two goals:

■ The physical log is used to return OnLine to the most-recent point of known physical consistency, the most-recent checkpoint.
■ The logical log files are used to roll forward transactions committed after that checkpoint and to roll back transactions left uncommitted.
The aim of fast recovery is to return OnLine to a consistent state as part of shared-memory initialization. The actions that OnLine takes as it implements fast recovery can be summarized in four steps:

1. Return all disk pages to their condition at the time of the most-recent checkpoint, using the data in the physical log. (See Figure 4-4 on page 4-41.)
2. Locate the most-recent checkpoint record in the logical log files. (See Figure 4-5 on page 4-42.)
3. Roll forward the logical log records that were written after the most-recent checkpoint record.
4. Roll back all transactions that were never committed.
Step 1: Checkpoint Condition

The first step, returning all disk pages to their condition at the time of the most-recent checkpoint, is accomplished by writing the “before-images” stored in the physical log back to disk. Each “before-image” in the physical log contains the address of a page that was updated after the checkpoint. By writing each “before-image” page in the physical log back to disk, changes to OnLine data since the time of the most-recent checkpoint are undone.
Step 2: Find Checkpoint Record in Logical Log

The physical address of the most-recent checkpoint record is stored in the root dbspace reserved page, PAGE_CKPT. Once this information is read, it also identifies the location of all logical log records written after the most-recent checkpoint. Figure 4-5 illustrates this step.

Figure 4-5: Fast Recovery, Step 2. The checkpoint record is located in the logical log.
Step 3: Roll Forward Log Records

The third step in fast recovery is to roll forward the logical log records that were written after the most-recent checkpoint record. This action reproduces all changes to the databases since the time of the last checkpoint, up to the point where the uncontrolled shutdown occurred. Figure 4-6 illustrates this step.
Step 4: Roll Back Incomplete Transactions

The final step in fast recovery is to roll back all logical log records that are associated with transactions that were not committed (or were rolled back). This rollback procedure ensures that all databases are left in a consistent state.
Data Restore: When Should You Do It?

Three types of situations could occur in an OnLine environment that would require you, as OnLine administrator, to perform a data restore:

■ You want to replace one or more disks.
■ Your disk experiences a media failure.
■ Your OnLine data experiences extreme corruption.
7. Mount the first level-0 archive tape on TAPEDEV.
8. The tbtape process reads reserved page information from the tape and verifies that the current configuration and the tape are compatible.
9. Back up any logical log files remaining on the disk, if prompted by tbtape. Mount a tape on LTAPEDEV.
10. The tbtape process reads each page of data from the archive tape(s) and writes the page to the address contained in the page header.
11.
Gather All Tapes Needed for Restore

To restore OnLine, you need all archive tapes (level-0, and possibly level-1 and level-2) and the tapes containing the logical log backups since the last archive. The tapes you need are listed for you when you select the DB-Monitor Status menu, Archive option. Refer to Figure 4-8 if you are uncertain about how to determine which archive tapes are needed for a data restore.
Logical log files that remain on disk and that have not yet been backed up can still be included in the restore. As part of the restore procedure, OnLine prompts you to back up those logical log files to tape so that they can be rolled forward after the archive tapes have been restored. (This might not be true if the disks containing the logs failed.)

Verify OnLine Configuration

During the restore, you cannot reinitialize shared memory, add chunks, or change tape devices.
Verify that all raw devices that have been in use since the level-0 archive are available. For example, if you dropped a dbspace or mirroring for a dbspace since your level-0 archive, you must ensure that the dbspace or mirror chunk device is available to OnLine when you begin the restore. If the tbtape process cannot find the chunk when it attempts to write pages to it as it reads the level-0 archive, the restore does not complete.
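Once the devices are verified and OnLine is offline, the restore itself can be initiated either through the DB-Monitor Archive menu, Restore option, or from the command line. A minimal sketch; tbtape then prompts for the archive and logical log backup tapes as described in the following sections:

tbtape -r    # perform a full data restore from the archive and logical log backup tapes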
Verify Current Configuration

As its first task, tbtape reads the root dbspace reserved pages from the first block of the tape. These pages contain both the configuration file values at the time of the level-0 archive and a complete listing of all dbspaces and chunks that were defined at the time. The tbtape process verifies that the current configuration is compatible with the configuration information contained on the tape.
Write Each Archive Page to Disk

It might be that your configuration defines both TAPEDEV and LTAPEDEV as the same device. Since this is possible, tbtape prompts you to mount the level-0 archive tape and to press RETURN to continue the restore. If your tape is already mounted on the archive device, simply press RETURN. As the restore begins, tbtape reads each page of data from the archive tape(s) and writes the page to the address contained in the page header.
Mount the correct tape on LTAPEDEV. Verify that the tape drive is online and press RETURN. If you do not mount the correct tape, tbtape notifies you of the error and prompts again for the correct tape. The tbinit process rolls forward the records contained in the logical log backup tapes. The tbtape process prompts for new tapes as needed until all the log files are processed.
Database and Table Migration

The correct method for you depends on your processing environment and what you want to move (a database, selected tables, or selected columns from selected tables). The table displayed in Figure 4-9 compares the advantages and different characteristics of each migration method. The sections that follow describe and compare each migration method in detail. (Refer to Chapter 7, “Utilities,” for a complete discussion of each OnLine utility.)
Description of Migration Methods

This section provides an overview of how each data migration method works. Figure 4-10 on page 4-56 illustrates the four migration methods. This section does not attempt to compare the methods. Comparisons and contrasts begin on page 4-57. Refer to Chapter 7, “Utilities,” for a complete discussion of each OnLine utility. The LOAD and UNLOAD statements are documented in the DB-Access User’s Manual.
dbexport/dbimport

The dbexport and dbimport utility pair operates only on databases, not on tables. The dbexport utility creates, on disk or on tape, a directory that contains an ASCII file of data for each table in the specified database. Additionally, dbexport creates on disk or on tape an ASCII file of SQL data definition language (DDL) statements and accounting information necessary to recreate the database on another Informix database server.
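A minimal sketch of the pair in use, assuming the stores5 demonstration database and an illustrative directory name; dbexport writes the ASCII data files and the schema file under the named directory, and dbimport re-creates the database from them (typically on the target database server):

dbexport -o /work/exports stores5
dbimport -i /work/exports stores5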
Figure 4-10: Description of the migration methods.
Which Migration Method Is Best for You?

Each of the migration methods imposes constraints of one form or another on the user. The decision trees shown in Figure 4-11 through Figure 4-14 summarize the choices among the migration methods. After you determine which method best suits your needs, refer to Chapter 7, “Utilities,” for detailed instructions for using each utility. Refer to the DB-Access User’s Manual for information about the LOAD and UNLOAD statements.
If you do not want to move data in database units, decide whether you want to move the data in table units. If not, use either LOAD or dbload (see Figure 4-14).
Figure 4-13: Third decision tree summarizing the choices among OnLine migration methods. If you want to modify the current database schema and write the data directly to tape, use dbexport/dbimport; otherwise, use dbexport/dbimport, LOAD, or dbload (to choose between LOAD and dbload, see Figure 4-14).

In the choice between LOAD and dbload, the trade-off is ease of use and speed versus flexibility. The advantage of the dbload utility is flexibility.
Figure 4-14: Fourth decision tree summarizing the choices among OnLine migration methods; how to choose between LOAD and dbload. If you need to commit inserts during the load, use UNLOAD/dbschema/dbload. Otherwise, if the ASCII input file format is acceptable to LOAD, or if you can use sed, awk, or an editor to easily reformat the input files, use UNLOAD/dbschema/LOAD; if not, use UNLOAD/dbschema/dbload.

Using UNLOAD with LOAD or dbload

This section describes the steps you take when you migrate data using the UNLOAD statement with either the LOAD statement or the dbload utility.
Create and Edit the Schema File First

Use dbschema to create a schema file for the database or table that will receive the data, if it does not yet exist. After the schema file is created, you can edit the file with a system editor. By editing the schema file, you can change access privileges, object (table, index, or view) ownership, lock mode, or initial and next extent sizes. Otherwise, all privileges and ownership remain unchanged.
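For example, to produce an editable schema file for the demonstration database (the output filename is illustrative; see the dbschema section in Chapter 7 for the complete syntax), you might run:

dbschema -d stores5 stores5.sql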
Use LOAD or dbload to Populate the Tables

If you plan to use the LOAD statement to load data from an ASCII file into a table, load the data now. If you plan to use the dbload utility to load data, refer to page 7-21 for explicit instructions for creating the command file and loading the data.

Using dbexport and dbimport

This section describes the steps you take when you migrate data using the dbexport and dbimport utilities. Refer to page 4-53 for an overview of this method.
The .sql file does not contain all table information available from the existing database. You can modify the .sql schema file before you run dbimport to adjust the database that dbimport creates.
Using tbunload and tbload

tbunload

The tbunload utility can unload data more quickly than either dbexport or the UNLOAD command because it copies the data in binary and in page-sized units. However, this places some constraints on its use:

■ tbunload writes data to tape only.
■ You must load the tape written by tbunload onto a machine with the same page size as the original machine.
■ You must load the data on the tbunload tape into a database or table managed by OnLine.
Migrating Data from OnLine to SE ■ tbload creates a database without logging; you must initiate logging after the database is loaded, either through DB-Monitor or with the tbtape utility. ■ When you use tbload to load a table into a logged database, you must turn logging off for the database during the operation. Migrating Data from OnLine to SE This section describes the information you need if you are migrating an OnLine database to an IBM Informix SE database server.
For more information about the differences between the two database servers and their interpretation of SQL, refer to IBM Informix Guide to SQL: Reference.

Migrating Data from SE to OnLine

This section describes the information you need if you are migrating an IBM Informix SE database to an OnLine database server. Use the IBM Informix SE dbexport utility to prepare the data and the OnLine dbimport utility to load the data.
For more information about the differences between the two database servers and their interpretation of SQL, refer to IBM Informix Guide to SQL: Reference.
Chapter 5 How to Improve Performance

In This Chapter . . . 5-3
Disk Layout . . . 5-4
Optimize Blobspace Blobpage Size . . . 5-5
    tbcheck -pB and tbcheck -pe Utility Commands . . . 5-5
    Blobpage Average Fullness . . . 5-7
    Apply Effective Criteria . . . 5-8
Eliminate User-Created Resource Bottlenecks
Checkpoint Frequency . . . 5-20
    Performance Tradeoffs . . . 5-20
    How Is Tuning Done? . . . 5-21
Psort Parallel-Process Sorting Package . . . 5-22
    How Psort Works . . . 5-22
    Tuning Psort . . . 5-23
    Psort and Shared Memory . . . 5-24
SPINCNT Configuration Parameter
In This Chapter

With each OnLine release, Informix engineers incorporate new code that increases processing efficiency, reduces overhead, and improves performance. This effort to tune the OnLine code has two consequences for you:

■ The OnLine database server runs faster.
■ OnLine administration requires less performance tuning.

The information in this chapter assumes that your application has been written as efficiently as possible. (Refer to IBM Informix Guide to SQL: Tutorial.)
Performance-tuning issues are addressed as six topics in this chapter:

■ When is tuning needed? (page 5-10)
■ Shared-memory buffers (page 5-13)
■ Shared-memory resources (page 5-14)
■ Log buffer size (page 5-15)
■ Page-cleaner parameters (page 5-17)
■ Checkpoint frequency (page 5-20)

If you are running OnLine on a multiprocessor machine, two multiprocessor-specific features are available:

■ Psort parallel-process sorting package (page 5-22)
■ SPINCNT configuration parameter
Optimize Blobspace Blobpage Size

Familiarize yourself with the OnLine approach to blobspace blob storage before you begin this section. Refer to page 2-148 and page 2-149 for background information. When you are evaluating blobspace storage strategy, you can measure efficiency by two criteria:

■ Blobpage fullness
■ Blobpages required per blob

Blobpage fullness refers to the amount of data within each blobpage. Blobs stored in a blobspace cannot share blobpages.
tbcheck -pB and tbcheck -pe Utility Commands

Refer to page 7-38 for tbcheck -pB and tbcheck -pe syntax information. The tbcheck -pB command displays statistics that describe the average fullness of blobpages. These statistics provide a measure of storage efficiency for individual blobs in a database or table. If you find that the statistics for a significant number of blobs show a low percentage of fullness, OnLine might benefit from resizing the blobpage in the blobspace.
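For example, the command below (taken from the tbcheck discussion in Chapter 7) displays blobpage-fullness statistics for the blobs in a single table of the demonstration database:

tbcheck -pB stores5:catalog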
Blobpage Average Fullness

The example output indicates that four blobs are stored as part of the table sriram.catalog. Two blobs are stored in the blobspace blobPIC in 6144-byte blobpages. Two more blobs are stored in the blobspace bspc1 in 2048-byte blobpages. The summary information that appears at the top of the display, Total pages used by table, is a simple total of the blobpages needed to store blobs.
Apply Effective Criteria Apply Effective Criteria Looking at the efficiency information for blobspace bspc1 in Figure 5-1, an administrator might decide that a better blob-storage strategy would be to double the blobpage size from 2048 bytes to 4096 bytes. (Recall that blobpage size is always a multiple of the OnLine page size.
Eliminate User-Created Resource Bottlenecks

■ Do not perform mass updates on frequently accessed tables. During an update, the row must be locked. Mass updates to a table are best performed with table-level locking to reduce locking overhead. However, requesting an update with table-level locking denies access to the table to all users except those using Dirty Read isolation. Users should balance their desire to perform a large transaction against the effect their work has on concurrency.
When Is Tuning Needed?

As administrator, attempt to follow as closely as possible the guidelines for disk layout, blobpage sizing, and user education. As part of your daily routine, monitor OnLine activity to become familiar with what can be considered normal operation. (Refer to page 3-83.) In the course of your monitoring, pay particular attention to several fields that could indicate a need for tuning.
Refer to page 5-17 for more details about using these fields to modify the values of the page-cleaner parameters.

ovtbls, ovlock, ovuser, and ovbuff Fields

Use the fields that report unmet database server requests for shared-memory resources to indicate a possible need for tuning.
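These counts appear in the profile statistics; a quick way to sample them from the command line, rather than through the DB-Monitor Status menu, Profile option, is the tbstat profile output:

tbstat -p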
Bufsize and Pages/IO Fields

Buffer size and the amount of I/O per write are reported by the DB-Monitor Status menu, Logs option, or as part of the tbstat -l or tbstat -p output:

Bufsize is the size of the physical or logical log buffer. The Bufsize field appears twice in the display: once for the physical log buffer and once for the logical log buffer.

Pages/IO is the number of pages, on average, that are written to disk with each I/O operation.
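For example, to examine the log buffer sizes and the average number of pages written per I/O from the command line:

tbstat -l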
Shared-Memory Buffers

In general, you want to allocate shared-memory buffers to OnLine until you no longer see an improvement in performance. However, shared memory is rarely an unlimited resource. You must always weigh the positive effect of increasing OnLine shared memory against negative effects that might be experienced by other applications running on your host machine.

When Is Tuning Necessary?

Look at the cached-read and cached-write percentages for OnLine.
Important: Low caching percentages might reflect improperly tuned page-cleaning parameters. If increasing the value of BUFFERS does not increase the caching percentages, refer to page 5-17. Refer to page 3-92 for more details about how to change the value of BUFFERS.

Shared-Memory Resources

If database server processes are waiting for a limited number of shared-memory resources, you can improve performance by allocating more of the needed resource.
How Is Tuning Done? How Is Tuning Done? You can increase the number of shared-memory resources by increasing the value of TBLSPACES, LOCKS, or USERS in the configuration file. If you increase the value of USERS, you might need to also increase the value of TBLSPACES, LOCKS, and BUFFERS, since the minimum values for all three of these parameters are based on the value of USERS.
How Is Tuning Done?

If the value of Pages/IO is 75 percent or more of Bufsize, each write to the disk is, on average, deferred until the buffer is almost full. (Since the value of Pages/IO is an average, some writes might be closer to 100 percent of the buffer. The logging status affects this value; see the preceding paragraph.) In this case, you can try to further improve buffer efficiency by increasing the size of the buffer to accommodate more data before each write.
Page-Cleaner Parameters

In page-cleaner tuning, more than in any other area, your hardware configuration and your application determine the values that are best for your environment. Familiarize yourself with the OnLine approach to page cleaning before you begin this section. Refer to page 2-57 and page 2-58 for background information about the LRU queues and their role in the page-cleaning process.
Efficient Page Cleaning

You should still avoid foreground writes and LRU writes, displayed as Fg Writes and LRU Writes in the tbstat -F output. (Refer to page 7-87.) However, the introduction of OnLine LRU queue pairs (composed of FLRU and MLRU queues) significantly reduces the likelihood of these write types. (Refer to page 2-57 for an explanation of the FLRU and MLRU queues.) Monitoring tbstat -F might not alert you that you can affect performance by tuning the page-cleaning parameters.
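For example, to check the counts of each write type, including foreground and LRU writes, from the command line:

tbstat -F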
How Is Tuning Done? How Is Tuning Done? If the cached-read percentage is lower than 95 percent, you might be able to improve performance by lowering the values of LRU_MAX_DIRTY and LRU_MIN_DIRTY to increase the number of free and/or unmodified pages that are available in the shared-memory LRU queues. If the cached-write percentage is lower than 82 percent, you might be able to improve performance by increasing the LRU_MAX_DIRTY and LRU_MIN_DIRTY values.
Checkpoint Frequency

Familiarize yourself with the definition of a checkpoint, and with the events that happen during a checkpoint, before you begin this section. Refer to page 2-70 and page 2-72 for background information.

Performance Tradeoffs

The frequency of checkpoints and their duration affect OnLine performance.
How Is Tuning Done? How Is Tuning Done? The first step in tuning is to determine the cause of frequent checkpoints. Are the checkpoints occurring because the physical log is becoming full too rapidly or as a result of some other event? To answer this question, examine the value of the Numpage field in the physical log portion of the DB-Monitor Status menu, Logs option, or in the tbstat -l (lowercase -L) output.
Psort Parallel-Process Sorting Package

Psort is a sorting package that improves performance by taking advantage of multiprocessors to start and synchronize multiple sort processes. This parallel-process sorting package is transparent to end users. (If you are working on a uniprocessor machine and you set any of the parallel-sort parameters, OnLine ignores the parameters and proceeds with nonparallel sorting.)
Tuning Psort

If PSORT_NPROCS is set to 0, Psort uses three as the default number of processes for the sort. When PSORT_NPROCS is set to some number greater than zero, the value is the maximum number of processes available. OnLine calculates the number of sort processes to use given that constraint. You maximize the effectiveness of Psort if you set PSORT_NPROCS to the number of available processors on the system. The maximum value for PSORT_NPROCS is 10.
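PSORT_NPROCS is typically set as an environment variable; a sketch, assuming a Bourne-style shell and a machine with four available processors:

PSORT_NPROCS=4
export PSORT_NPROCS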
Psort and Shared Memory

Each parallel sort uses one UNIX shared-memory segment. A front-end process is able to open an unlimited number of SELECT cursors that contain an ORDER BY clause. However, the number of sorts that can be executed in parallel is limited by the number of shared-memory segments that each OnLine database server process can attach to.
SPINCNT Configuration Parameter

The number of times that a user process spins and tests is specified by the configuration parameter SPINCNT. The default value is 300. If you increase the value of SPINCNT, you increase the period that the user process remains in the CPU. If you decrease the value, you reduce the period. You cannot directly affect the amount of time that the process spins between tries.
Chapter 6 DB-Monitor Screens

In This Chapter
Main Menu
Status Menu
Parameters Menu
Dbspaces Menu
Mode Menu
Force-Ckpt Option
Archive Menu
Logical-Logs Menu
In This Chapter

This chapter serves as a reference for the DB-Monitor screens. You can use it to quickly determine the purpose and use of a specific screen or option. To start the monitor, execute tbmonitor from the command line. If you are logged in as user informix, the main menu appears. All users other than informix have access only to the Status menu. All menus and screens function in the same way.
Main Menu Main Menu Menu Item Status Parameters Dbspaces Mode Options Profile Initialize Create Users SharedMemory Spaces Archive Logical-Logs Startup Create AutoBackup BLOBSpace On-Line Restore ContinuousBackup Add-Log Mirror GracefulShutdown TapeParameters Databases Databases Drop-Log Drop ImmediateShutdown Exit TapeParameters Logs PhysicalLog Info TakeOffline Archive Exit Add_chunk Exit Output Status Configuration Exit Exit 6-4 ForceCkpt IBM Informix OnLine
Status Menu

Profile (page 3-83): Use the Profile option to display OnLine performance statistics.
Users (page 3-86): Use the Users option to display the status of active OnLine database server processes.
Spaces (pages 3-70, 3-75): Use the Spaces option to display status information about OnLine dbspaces, blobspaces, or each chunk that is part of a dbspace or blobspace.
Databases: Use the Databases option to display the name, owner, and logging status of the first 100 databases.
Parameters Menu

Initialize (page 1-52): Use the Initialize option to initialize OnLine disk space or to modify OnLine disk space parameters.
Shared-Memory (page 1-54): Use the Shared-Memory option to initialize OnLine shared memory or to modify OnLine shared-memory parameters.
Add-Log (page 3-28): Use the Add-Log option to add a logical log file to an OnLine dbspace.
Drop-Log: Use the Drop-Log option to drop a logical log file from an OnLine dbspace.
Dbspaces Menu Dbspaces Menu Option Description See page... Create Use the Create option to create a dbspace. 3-97 BLOBSpace Use the BLOBSpace option to create a blobspace. 3-88 Mirror Use the Mirror option to add mirroring to an existing blobspace or dbspace or to end mirroring for a blobspace or dbspace. 3-105 Drop Use the Drop option to drop a blobspace or a dbspace from the OnLine configuration.
Mode Menu Mode Menu Option Description See page... Startup Use the Startup option to initialize shared memory and take OnLine to quiescent mode. 3-8 On-Line Use the On-Line option to take OnLine from quiescent to online mode. 3-9 GracefulShutdown Use the Graceful-Shutdown option to take OnLine from on-line to quiescent mode. Users can complete their work. 3-10 ImmediateShutdown Use the Immediate-Shutdown option to take OnLine from online to quiescent mode in 10 seconds.
Force-Ckpt Option Force-Ckpt Option Description See page... Use the Force-Ckpt option to see the time of the most recent checkpoint or to force OnLine to execute a checkpoint.
Archive Menu Archive Menu Option Description See page... Create Use the Create option to create a level-0, level-1, or level-2 archive. 3-57 Restore Use the Restore option to perform an OnLine data restore. 4-45 TapeParameters Use the Tape-Parameters option to modify the parameters of the archive tape device.
Logical-Logs Menu Logical-Logs Menu Option Description See page... Auto-Backup Use the Auto-Backup option to direct OnLine to back up all full logical log files and/or the current log file. 3-36 ContinuousBackup Use the Continuous-Backup option to back up each logical log file as it becomes full. 3-37 Databases Use the Databases option to modify the logging status of an OnLine database.
Chapter 7 Utilities

In This Chapter . . . 7-5
dbexport: Unload a Database and Schema File . . . 7-5
    Syntax . . . 7-6
    Destination Options . . . 7-7
    Contents of the Schema File . . . 7-9
dbimport: Create a Database
    Syntax
    Input File Location Options
    Create Options
tbcheck: Check, Repair, or Display
    Syntax
    Option Descriptions
        No Options
        -cc Option
        -cd and -cD Options
        -ce Option
        -ci and -cI Options
        -cr Option
        -n Option
        -pB Option
        -pc Option
        -pd and -pD Options
        -pe Option
        -pk and -pK, -pl and -pL Options
        -pp and -pP Options
        -pr Option
Interpreting tblog Output . . . 7-55
    Record Types . . . 7-56
    Record Contents . . . 7-57
tbmode: Mode and Shared-Memory Changes
    Syntax
    Change OnLine Mode
        -k Option
        -m Option
        -s Option
        -u Option
    Force a Checkpoint
    Change Shared-Memory Residency
    Switch the Logical Log File
    -F Option
    -k Option
    -l Option
    -m Option
    -o Option
    -p Option
    -P Option
    -r Option
    -R Option
    -s Option
    -t Option
    -u Option
    -X Option
    -z Option
tbtape: Logging, Archives, and Restore
    Syntax
    Request a Logical Log Backup
    Start Continuous Backups
    Create an Archive
In This Chapter

This chapter describes the OnLine utilities that allow you to execute administrative tasks directly from the shell prompt.

dbexport: Unload a Database and Schema File

The dbexport utility unloads a database into ASCII files. The dbexport utility creates an ASCII schema file that dbimport uses to re-create the database schema in another Informix environment. You can edit the schema file to modify the database that dbimport creates.
The SQL statements contained in the dbexport schema file do not contain all available information. The following information is omitted:

■ Initial and next extent values
■ Lock mode
■ Dbspace where the table should reside
■ Blobspace where a blob column should reside
■ Logging mode of the database, if there is one

For this reason, you might want to unload the file to disk where you can edit it before you import the database.
Destination Options You can use the OnLine syntax database@dbservername to specify the database. Specifying a database server name allows you to choose a database on another server as your current database if you have installed IBM Informix STAR. During the export, the database is locked in exclusive mode. If dbexport cannot obtain an exclusive lock, the program ends with a diagnostic message.
Destination Options

-o directory   names the directory on disk where you want the ASCII data files and the schema file stored.
-s tapesize    specifies in kilobytes the amount of data that can be stored on the tape.
-t device      names the pathname of the tape device where you want the ASCII data files and, possibly, the schema file stored.

If you do not specify a destination for the data and schema files, the directory database.exp is placed in the current working directory.
Contents of the Schema File Contents of the Schema File The .sql file contains the SQL statements needed to re-create the exported database, as well as some additional ownership and privilege information. The schema file does not retain all the information that might have been included in the original statements used to create the database and tables.
The statements in the ASCII schema file that create tables, views, and indexes and grant privileges do so using the name of the person who originally created the database. In this way, the original owner retains DBA privileges for the database and is the owner of all the tables, indexes, and views. In addition, whoever executes the dbimport command also has DBA privileges for the database.

dbimport: Create a Database
If you are loading a database from IBM Informix SE into an OnLine environment, check that you set the SQLEXEC environment variable to $INFORMIXDIR/lib/sqlturbo (to specify the OnLine database server). You can display the software version number by executing dbimport -V.
Input File Location Options You can cancel dbimport at any time by pressing the Interrupt key. The dbimport program asks for confirmation before terminating. Input File Location Options Input File Location -i directory -t device -b blocksize -s tapesize -f pathname -b blocksize specifies in kilobytes the block size of the tape device. -f pathname specifies the pathname on disk where dbimport will find the schema file to use as input to create the database (data files are read from tape).
Input File Location Options You cannot use the -f option unless it was used when the schema file was exported with the dbexport program. If you use -f, you typically use the same command filename that you specified in the dbexport command. If you specify only a filename, dbimport looks for the file in the .exp subdirectory of either your current directory or the directory you specify with the -i option.
Create Options Create Options Create Options -d dbspace -l buffered -ansi -ansi creates an ANSI-compliant database. -d dbspace names the OnLine dbspace where the database will be created. -l establishes unbuffered transaction logging for the imported database. -l buffered establishes buffered transaction logging for the imported database. A database that is ANSI-compliant uses unbuffered logging. In addition, the ANSI rules for transaction logging are enabled.
The next command imports the stores5 database from tape into the auckland dbspace. The database is created with unbuffered logging. The command suppresses the echo of the SQL statements and continues processing unless fatal errors occur.

dbimport -cq -d auckland -l -t /dev/rmt0 -b 16 -s 24000 stores5

dbload: Load Data from a Command File

The dbload utility transfers data from one or more ASCII files into one or more existing tables.
Syntax The dbload syntax and use information begins in the next section. The dbload command file structure and instructions for creating a command file begin on page 7-21. After you create the command file, you can use the -s option (see page 7-18) to check the syntax of the statements within the command file. You can display the software version number by executing dbload -V. Syntax dbload -d database -c command file -l error log file -r Starting Line No. p. 7-18 Batch Size p.
The error log file specified by the -l flag stores any input file rows that dbload cannot insert into the database, as well as diagnostic information. Tables specified in the command file are locked during loading, preventing other users from modifying data in the table, unless you specify the -r flag. Table locking reduces the number of locks needed during the load, but at the price of reduced concurrency.
Command-File Syntax Check

-s              instructs dbload to check the syntax of the statements in the command file without inserting data.
> output file   specifies the name of the file where the output from the syntax check is stored.

The -s option performs a syntax check on the FILE and INSERT statements in the specified dbload command file. The screen displays the command file with any errors marked where they are found.
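For example, to check a command file for syntax errors without inserting any data (the filenames shown are illustrative):

dbload -d stores5 -c stock.cmd -l stock.err -s > check.out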
The -i option instructs dbload to read and ignore the specified number of NEWLINE characters in the input file before it begins processing. (This option assumes that a NEWLINE character indicates the end of an individual data row and header information.) This option is useful if your most recent dbload session ended prematurely. If, for example, dbload ended after inserting 240 lines of input, you can resume loading at line 241 by setting number rows ignore equal to 240.
Bad-Row Limits

-e number errors read   specifies the number of bad rows that dbload reads before terminating.
-p                      prompts for instructions if the number of bad rows exceeds the limit.

If you set -e number errors read to an integer, dbload terminates when it reads (number errors read + 1) bad rows. If you set the value of number errors read to 0, dbload terminates when it reads the first bad row. There is no default value for number errors read.
How to Create a Command File

Before you use dbload, you first create an ASCII command file that names the input data files and the tables that receive the data. The command file maps fields from one or more input files into columns of one or more tables within your database. The command file contains only FILE and INSERT statements. Each FILE statement names an input data file.
You can combine both forms of the FILE statement in a single command file.
Two consecutive delimiters define a null field. As a precaution, you might wish to place a delimiter immediately before the NEWLINE character that marks the end of each data row. If you omit this delimiter, an error results whenever the last field of a data row is empty. If you are certain that none of the input data rows ends with an empty field, you can omit this step. The following example command file illustrates a simple delimiter form of the FILE and INSERT statements.
The restrictions dbload imposes on the VALUES clause value list affect only data types DATE, DATETIME, and INTERVAL. Values of type DATE must be in mm/dd/yyyy format. (This is the case if the DBDATE environment variable is set to its default value, MDY4/.) Data for DATETIME and INTERVAL columns must be in character form, showing only field digits and delimiters (no type or qualifiers). Inserted data types correspond to the explicit or default column list.
The FILE statement describes the stock.unl data rows as composed of six fields, each separated by a vertical bar (| = ASCII 124) as the delimiter. Compare the FILE statement with the following data rows, which appear in the input file stock.unl. (Since the last field is not followed by a delimiter, an error results if any data row ends with an empty field.)

1|SMT|baseball gloves|450.00|case|10 gloves/case
2|HRO|baseball|126.00|case|24/case
3|SHK|baseball bat|240.
In this example, the VALUES clause uses the field names automatically assigned by dbload. You must reference the automatically assigned field names with the letter f followed by a two-digit number: f01, f02, f10, and so on. All other formats are incorrect. The user has changed the column names, the order of the data, and the meaning of col6 in the new stock table.
Fields in the input file are padded with blanks (represented by + in the following example) to create data rows in which the locations of data fields and the number of characters are the same across all rows. The definitions for these fields are CHAR(15), CHAR(2), CHAR(5), and CHAR(12), respectively.
- end         is a hyphen followed by an integer that indicates the character position within a data row that ends a range of character positions.
fieldn        is a name that you assign to a data field you are defining with the range of character positions.
filename      is the name of the input file.
null string   is a quoted string that specifies the data value for which dbload should substitute a null.
The restrictions dbload imposes on the VALUES clause value list affect only data types DATE, DATETIME, and INTERVAL. Values of type DATE must be in mm/dd/yyyy format. (This is the case if the DBDATE environment variable is set to its default value, MDY4/.) Data for DATETIME and INTERVAL columns must be in character form, showing only field digits and delimiters (no type or qualifiers). Inserted data types correspond to the explicit or default column list.
The FILE statement defines six data fields from the cust_loc_data table data rows. The statement names the fields and defines the length of each field using character positions.
The following data rows would be inserted into the cust_address table:

Column   Values from Row 1   Values from Row 2
col1     Sunnyvale++++++     Tempe++++++++++
col2     null                null
col3     CA                  AZ
col4     94086               85253

Since the second column in cust_address (col2) is not named, the new data row contains a null (assuming that the column permits nulls).
dbschema: Output SQL Statements dbschema: Output SQL Statements Use the dbschema utility to display the SQL statements required to replicate a database or a specific table, view, or procedure. Options enable you to perform the following activities: ■ Display CREATE SYNONYM statements, by owner, for a database or for a specific table. ■ Display all GRANT privilege statements that affect a specified user or that affect all users for a database or for a specific table. ■ Save the output to a file.
All SERIAL fields included in CREATE TABLE statements displayed by dbschema have a starting value of 1, regardless of their original starting value. The dbschema utility uses the owner.object convention when it generates any CREATE TABLE, CREATE INDEX, CREATE SYNONYM, CREATE VIEW, CREATE PROCEDURE, or GRANT statements, and when it reproduces any unique or referential constraints.
Include Privileges

-p user   directs dbschema to output the GRANT statements that grant privileges to user. If you specify all for user, dbschema outputs GRANT statements for all users for the database, table, or view specified.

In the dbschema output, the AS keyword indicates the grantor of a GRANT statement. The following example output indicates that norma issued the GRANT statement:

GRANT ALL ON "tom".
Specify a Table, View, or Procedure Specify a Table, View, or Procedure Tables, Views, or Procedures -f -t table name -t view name procedure -f procedure specifies the name of the procedure for which you want dbschema to output CREATE PROCEDURE statements. -t view name directs dbschema to limit the SQL statement output to only those statements needed to replicate the specified view.
tbcheck: Check, Repair, or Display

Depending on the options you choose, tbcheck can do the following things:

■ Check specified structures for inconsistencies
■ Repair index structures found to contain inconsistencies
■ Display information about the structures

The only structures that tbcheck can repair are indexes. If tbcheck detects inconsistencies in other structures, messages alert you to these inconsistencies but tbcheck cannot resolve the problem.
Check, repair, and display options by structure (2 of 2):

Index (keys plus rowids): check with -cI; repair with -cI -y or -pK -y; display with -pK
Index (leaf key values): repair with -pl -y; display with -pl
Index (leaf keys plus rowids): repair with -pL -y; display with -pL
Pages (by table): display with -pp
Pages (by chunk): display with -pP
Root reserved pages: check with -cr; display with -pr
Space usage (by table): display with -pt
Space usage (by table, with indexes): display with -pT
System catalog tables: check with -cc; display with -pc
Syntax Syntax tbcheck -ce -n -cr -pe -pr -cc -y database -pc -cd -cD -ci -c -pB -pk -pK -pl -pL -pt -pT -pd database table name database table name rowid -pD database table name logical page num -pp table name tblspace num -pP 7-38 chunk num IBM Informix OnLine Database Server Administrator’s Guide rowid logical page num logical page num -q
Option Descriptions

chunk num          is a decimal value that specifies a chunk. Execute the -pe option to learn which chunk numbers are associated with specific dbspaces or blobspaces.
database           is the name of the database. The database name cannot include a database server name because tbcheck does not support a client/server environment.
logical page num   is a decimal value that specifies a page in the tblspace. The logical page number is contained in the most-significant three bytes of the rowid.
To check the tables, tbcheck compares each system catalog table to its corresponding entry in the tblspace tblspace. The data in the tables is also checked for consistency. Refer to page 2-104 for more details about the tblspace tblspace. (The -pc option performs the same checks and also displays the system catalog information as it checks it, including extent use for each table.)
If inconsistencies are detected and OnLine is in quiescent mode, you are prompted for confirmation to repair the problem index. If you specify the -y (yes) option, indexes are automatically repaired. If you specify the -n (no) option, only the problem is reported. No prompting occurs. Index rebuilding can be time-consuming if you use tbcheck. Processing is usually faster if you use the DROP INDEX and CREATE INDEX SQL statements to drop the index and re-create it.
-pB Option

The -pB option displays statistics that describe the average fullness of blobspace blobpages in a specified table. These statistics provide a measure of storage efficiency for individual blobs in a database or table. If a table is not specified, statistics are displayed for the database. Refer to page 5-5 for more details about interpreting tbcheck -pB output.

tbcheck -pB stores5:catalog

-pc Option

The -pc option performs the same checks as the -cc option.
-pe Option

The -pe option performs the same checks as the -ce option. In addition, -pe displays the chunk and tblspace extent information as it checks it. Refer to the -ce discussion on page 7-40.

tbcheck -pe

-pk and -pK, -pl and -pL Options

The -pk option performs the same checks as the -ci option. In addition, -pk displays the key values for all indexes on the specified table as it checks them. The -pK option performs the same checks as the -cI option.
Use the -pD option to obtain the rowid. Refer to page 2-104 for more details about the tblspace number. Refer to page 2-124 for more details about the logical page number. Refer to page 2-123 for more details about the rowid. Refer to page 2-121 for more details about the slot table. The -pP option provides the same information as the -pp option but requires a chunk number and logical page number as input.
Output for both -pt and -pT contains listings for “Number of pages used” and “Number of data pages.” The values provided are never decremented; that is, they always represent the maximum value that is valid for the tblspace. For an accurate count of the number of pages currently used, refer to the detailed information on usage (organized by page type) that the -pT option provides.

tbinit: Initialize OnLine
Syntax Syntax tbinit -p -s -y -i -i specifies disk-space initialization. If you initialize disk space, all existing data on the disk you are initializing is destroyed. -p directs tbinit not to search for and delete temporary tables during shared-memory initialization. -s directs tbinit to leave OnLine in quiescent mode following initialization. This option is equivalent to the DB-Monitor Mode menu, Startup option. -y automatically responds “yes” to all prompts.
-p Option

The -p option directs the tbinit daemon not to search for (and delete) temporary tables left by database server processes that died without performing cleanup. If you use this option, OnLine returns to online mode more rapidly, but space used by temporary tables left on disk is not reclaimed.

tbinit -p

-s Option

The -s option initializes shared memory and leaves OnLine in quiescent mode.

tbload: Create a Database or Table
If you are loading a table that contains blobs stored in a blobspace, a prompt asks you if you want to move the blobs to another blobspace. If you respond “yes,” the next prompt displays the blobspace name where the blobs were stored when the tape was created. You are asked to enter the name of the blobspace where you want the blobs stored. If the name you enter is valid, all blob columns in the table are moved to the new blobspace during the load.
Specify Tape Parameters

If you do not specify any tape parameter options, tbload uses the archive tape parameters by default. The tape device to which data is sent is assumed to be the device specified as TAPEDEV. The block size and tape size are assumed to be the values specified as TAPEBLK and TAPESIZE, respectively. To specify other parameter values, see the next section.
The host machine where the tape device is attached must permit user informix to run a UNIX shell from your machine without requiring a password. If your machine does not appear in the hosts.equiv file of the other host machine, it must appear in the .rhosts file in the home directory of the informix login. If you are executing tbload as root, the machine name must appear in the .rhosts file for root on the other host machine.
tblog: Display Logical Log Contents tblog: Display Logical Log Contents The tblog utility displays the contents of an OnLine logical log file. The tblog output is most useful in debugging situations, when you want to be able to track a specific transaction or to see what changes have been made to a specific tblspace. Syntax tblog Log-Record Read Filters p. 7-52 Log-Record Display Filters p. 7-54 -q directs tblog to suppress the one-line header that appears every 18 records by default.
Log-Record Read Filters In addition to the header, you can direct tblog to display the following information: ■ Copies of blobpages from blobspaces (copied from the logical log backup tape only, not available from disk) ■ Logical log record header and data (including copies of blobs stored in a dbspace) If tblog detects an error in the log file, such as an unrecognizable log type, it displays the entire log page in hexadecimal format and terminates.
Log-Record Read Filters

-d Option

If you do not use the -d option, tblog reads the logical log files stored on disk, starting with the logical log file with the lowest logid number. The tblog utility uses the pathnames stored in the root dbspace reserved pages to locate the logical log files. If OnLine is in offline mode when you execute tblog, only the files on disk are read.
Log-Record Display Filters

-l | -t tblspace num | -u username | -x transaction num (each filter can appear at most once)

-l                 directs tblog to display the long listing of the logical log record, both heading and associated data.
-t tblspace num    directs tblog to display only records associated with the specified tblspace. The tblspace number can be specified as either a decimal or hexadecimal value. (If you do not use an 0x prefix, the value is interpreted as decimal.)
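For example, to display the long listing of only those logical log records associated with a particular user (the user name is illustrative):

tblog -l -u informix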
Interpreting tblog Output
The tblog utility displays the header of each logical log record. Depending on the record type, additional columns of information also appear in the output. Displayed below is a sample tblog output that illustrates the header columns that display for every logical log record.
Interpreting tblog Output Record Types In addition to the six columns that display for every record, some record types display additional columns of information. The information that is displayed varies, depending on record type. A complete listing of record types, alphabetized by type, is contained in the table on page 7-57. The following paragraphs contain additional notes about specific record types or processing conditions.
Interpreting tblog Output ■ Remainder page update ■ Before update record ■ After update record Record Contents The table below lists tblog record types as they appear in the log. The contents of the log record indicate the type of operation that generated the log entry. The additional column and format information describes what is displayed for each record type if you request the -l option (long listing).
Interpreting tblog Output Type Contents Additional Columns Format BFRMAP blob free map change tblspace id hexadecimal bpageno hexadecimal status USED/FREE log id decimal prevpage hexadecimal tblspace id hexadecimal fextsize decimal nextsize decimal row size decimal ncolumns decimal BLDCL build tblspace BSPADD add blobspace blobspace name ASCII BTCPYBCK copyback child key to parent tblspace id hexadecimal parent logical page decimal child logical page decimal slot de
Interpreting tblog Output Type Contents Additional Columns Format BTSHUFFL shuffle B+ tree nodes tblspace id hexadecimal parent logical page decimal left logical page decimal right logical page decimal left slot decimal left rowoff decimal key number decimal flags hexadecimal tblspace id hexadecimal rowid hexadecimal parent logical page decimal left logical page decimal right logical page decimal infinity logical page decimal rootleft logical page decimal midsplit decim
Interpreting tblog Output Type Contents Additional Columns Format CKPOINT checkpoint max users decimal number of active transactions decimal CLR compensation log record; part of a roll back none CLUSIDX create clustered index tblspace id hexadecimal COLREPAI adjustment of BYTE, TEXT, or VARCHAR column tblspace id hexadecimal no.
Interpreting tblog Output Type Contents Additional Columns Format ERASE drop tblspace tblspace id hexadecimal HDELETE home row delete tblspace id hexadecimal rowid hexadecimal slotlen decimal HEURTX heuristic decision to roll back transaction flag hexadecimal HINSERT home row insert tblspace id hexadecimal rowid hexadecimal slotlen decimal tblspace id hexadecimal rowid hexadecimal slotlen decimal tblspace id hexadecimal rowid hexadecimal slotlen decimal tblspace id
Interpreting tblog Output Type Contents Additional Columns Format PBINSERT tblspace blob page insert bpageno hexadecimal tblspace id hexadecimal rowid hexadecimal slotlen decimal pbrowid hexadecimal PDINDEX pre-drop index tblspace id hexadecimal PERASE pre-erase old file tblspace id hexadecimal PNSIZES set tblspace extent sizes tblspace id hexadecimal fextsize decimal nextsize decimal PREPARE participant in twophase commit can commit coordinator’s DBSERVERNAME ASCII PTEX
Interpreting tblog Output Type Contents Additional Columns Format RUPBEF remainder page update, before image tblspace id hexadecimal rowid hexadecimal slotlen decimal list of locked tblspaces held by transaction no.
tbmode: Mode and Shared-Memory Changes
The flags that accompany tbmode determine which of the following operations is performed:
■ Change OnLine operating mode
■ Force a checkpoint
■ Immediately change residency of OnLine shared memory for this session
■ Switch the logical log file
■ Kill an OnLine database server process
■ Kill an OnLine transaction
You must be logged in as root or user informix to execute tbmode.
Syntax

tbmode [-y] [Change OnLine Mode, page 7-66] [Force a Checkpoint, page 7-67] [Change Shared-Memory Residency, page 7-68] [Switch the Logical Log File, page 7-68] [Kill an OnLine Server Process, page 7-69] [Kill an OnLine Transaction, page 7-69]

-y  automatically responds “yes” to all prompts.
Change OnLine Mode

tbmode { -k | -m | -s | -u }

-k  removes OnLine shared memory and takes OnLine to offline mode.
-m  takes OnLine from quiescent to online mode.
-s  restricts new access to OnLine, but current processing can finish. When all processing is finished, -s takes OnLine to quiescent mode.
-u  ends current processing and takes OnLine to quiescent mode. The -u option leaves shared memory intact.

-k Option
The -k option is equivalent to the DB-Monitor Take-Offline option.
-s Option
The -s option is equivalent to the DB-Monitor Graceful-Shutdown option. Once you execute the -s option, you cannot cancel the request. A prompt asks for confirmation. If you want to eliminate this prompt, execute the -y option with the -s option. Refer to page 3-10 for more details about what happens when you execute this command.

-u Option
The -u option is equivalent to the DB-Monitor Immediate-Shutdown option. A prompt asks for confirmation.
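A minimal sketch of common mode changes follows; the -y option suppresses the confirmation prompts described above:

    # Quiescent to online:
    tbmode -m

    # Online to quiescent, gracefully (current processing finishes):
    tbmode -s -y

    # Any mode immediately to offline, removing shared memory:
    tbmode -k -y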
Change Shared-Memory Residency

tbmode { -n | -r }

-n  immediately ends forced residency of OnLine shared memory for this session without affecting the value of RESIDENT, the forced-memory parameter in the configuration file.
-r  immediately begins forced residency of OnLine shared memory for this session without affecting the value of RESIDENT, the forced-memory parameter in the configuration file.
Kill an OnLine Server Process

tbmode -z pid

-z pid  kills an OnLine database server process associated with the process identification number pid.

Warning: Do not kill an OnLine database server process that is in a critical section of code or is holding a latch. If you do, OnLine initiates an abort. Refer to page 2-32 for instructions on how to determine if an OnLine database server process is in a critical section of code or is holding a latch.
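For example (the process ID 1234 is hypothetical), after you have confirmed that the target database server process is not in a critical section and holds no latch:

    tbmode -z 1234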
tbparams: Modify Log Configuration Parameters
Use tbparams to add or drop a logical log or to change the size or location of the physical log. You must be logged in as root or user informix to execute tbparams.

Syntax

tbparams [-y] [Add a Logical Log File, p. 7-70] [Drop a Logical Log File, p. 7-71] [Change Physical Log Parameters, p. 7-72]

-y  automatically responds “yes” to all prompts.

Any tbparams command will fail if an OnLine archive is in progress.
You cannot add a log file during an archive (quiescent or online). The newly added log file or files retain a status of A and do not become available until you create a level-0 archive. The tbparams command to add a logical log file is but one step in a larger procedure. Refer to page 3-28 for more details about the complete procedure for adding a logical log file.

Drop a Logical Log File

tbparams -d -l logid

-d  indicates that a logical log file is to be dropped.
Change Physical Log Parameters

tbparams -p [-s size] [-d dbspace]

-p  indicates a change to the physical log.
-d dbspace  names the dbspace where the physical log will reside.
-s size  indicates the size of the physical log, in kilobytes.

The space allocated for the physical log must be contiguous.
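The following sketches use hypothetical dbspace names, logid values, and sizes; remember that no tbparams command succeeds while an archive is in progress:

    # Drop the logical log file whose logid is 5:
    tbparams -d -l 5 -y

    # Move the physical log to dbspace logdbs and make it 2000 KB:
    tbparams -p -d logdbs -s 2000 -y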
tbspaces: Modify Blobspaces or Dbspaces
Use tbspaces to perform the following modifications:
■ Create a blobspace or dbspace
■ Drop a blobspace or dbspace
■ Add a chunk
■ Change chunk status
You must be logged in as root or user informix to execute tbspaces.

Syntax

tbspaces [-y] [Create a Blobspace or Dbspace, p. 7-74] [Drop a Blobspace or Dbspace, p. 7-75] [Add a Chunk, p. 7-76] [Change Chunk Status, p. 7-77]

-y  automatically responds “yes” to all prompts.
Create a Blobspace or Dbspace

tbspaces -c { -b blobspace [-g pageunit] | -d dbspace } -p pathname -o offset -s size [-m pathname offset]

-b blobspace  names the blobspace to be created.
-c  indicates that a blobspace or dbspace is to be created.
-d dbspace  names the dbspace to be created.
-g page_unit  specifies the blobspace blobpage size in terms of page_unit, the number of disk pages per blobpage.
Drop a Blobspace or Dbspace

tbspaces -d { blobspace | dbspace }

-d  indicates that either a blobspace or a dbspace is to be dropped.
blobspace  names the blobspace to be dropped.
dbspace  names the dbspace to be dropped.

The blobspace or dbspace you intend to drop must be unused. It is not sufficient for the blobspace or dbspace to be empty. Execute tbcheck -pe to verify that no table is currently storing data in the blobspace or dbspace.
Add a Chunk

tbspaces -a { blobspace | dbspace } -p pathname -o offset -s size [-m pathname offset]

-a  indicates that a chunk is to be added.
blobspace  names the blobspace that will receive the new chunk.
dbspace  names the dbspace that will receive the new chunk.
-g page_unit  specifies the blobspace blobpage size in terms of page_unit, the number of disk pages per blobpage.
-m pathname offset  is an optional pathname and offset to the chunk that will mirror the new chunk.
Change Chunk Status

tbspaces -s { blobspace | dbspace } -p pathname -o offset { -D | -O }

blobspace  indicates that the chunk belongs to a blobspace.
-D  takes the chunk down.
dbspace  indicates that the chunk belongs to a dbspace.
-o offset  indicates, in kilobytes, the offset into the disk partition or into the device to reach the chunk.
-O  restores the chunk and brings it online.
-p pathname  indicates the disk partition or device of the chunk.
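A few illustrative invocations follow; the space names, device pathnames, offsets, and sizes (in kilobytes) are hypothetical:

    # Create an unmirrored dbspace:
    tbspaces -c -d dbspace3 -p /dev/rdsk1 -o 0 -s 60000

    # Add a chunk to an existing blobspace:
    tbspaces -a blobsp1 -p /dev/rdsk2 -o 0 -s 20000

    # Drop an unused dbspace (run tbcheck -pe first to verify it is unused):
    tbspaces -d dbspace3 -y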
tbstat: Monitor OnLine Operation tbstat: Monitor OnLine Operation The tbstat utility reads shared-memory structures and provides statistics that are accurate at the instant that the command executes. The contents of shared memory might change as the tbstat output displays. The tbstat utility does not place any locks on shared memory so running the utility does not affect performance.
Topic or Function: Option Flag
Shared-memory segment (save it to a file): -o
Summary of user-oriented (lowercase) options: -a
Tblspaces, active: -t
User processes and transactions: -u
Write type statistics (gathered when pages are flushed from buffers): -F
Zero out all statistic counts: -z
Syntax

tbstat [-a] [-b] [-B] [-c] [-d] [-D] [-F] [-k] [-l] [-m] [-p] [-P] [-r seconds] [-R] [-s] [-t] [-u] [-X] [-z] [-o filename_dest] [filename_source] [--]
filename_dest  is the destination file that will contain the copy of the shared-memory segment.
filename_source  is the file that tbstat will read as source for the requested information. This file must include a previously stored shared-memory segment.
seconds  is the number of seconds between each execution of this tbstat command.
Use the filename_source parameter with other option flags to derive the requested tbstat statistics from the shared-memory segment contained in filename_source.
Option Descriptions An example header follows: RSAM Version 5.0--On-Line--Up 15:11:41--368 KBytes Option Descriptions You can combine multiple tbstat option flags in a single command. No Options If you invoke tbstat without any options, the command is interpreted as tbstat -pu. For an explanation of the -p and -u options, refer to the paragraphs that follow. -- Option The -- option displays a listing of all tbstat options and their functions.
0x01  Modified data
0x02  Data
0x04  LRU
0x08  Error
pagenum  is the physical page number on the disk.
memaddr  is the buffer memory address.
nslots  is the number of slot-table entries in the page. This indicates the number of rows (or portions of a row) that are stored on the page.
Option Descriptions -B Option Use the -B option to obtain information about all OnLine buffers, not just buffers currently in use. The -B output display fields are the same as the fields that appear in the -b output. See the -b option. -c Option Use the -c option to display the OnLine configuration file. OnLine first checks to see if you have assigned a value to the system variable TBCONFIG. If so, OnLine displays the contents of $INFORMIXDIR/etc/$TBCONFIG.
Position 1:  M – Mirror, N – Not Mirrored
Position 2:  X – Newly mirrored
Position 3:  B – Blobspace
owner  is the owner of the dbspace.
name  is the name of the dbspace.
The number of active blobspaces and dbspaces and the total number of existing blobspaces and dbspaces are listed. The maximum number of blobspaces and dbspaces is specified as DBSPACES in the OnLine configuration file.
Option Descriptions The number of active chunks and the number of existing chunks are listed. The maximum number of chunks is specified as CHUNKS in the OnLine configuration file. Occasionally, the timing of the tbstat -d command can affect the utility output. Timing becomes a factor in two cases. The first case occurs immediately after blobspace blobpages are allocated. The tbstat -d routine looks directly at the disk to obtain blobpage statistics from the blobspace free-map page.
-F Option
Use the -F option to display a count of each type of write performed when a page was flushed to disk. You can interpret output from this option as follows:
Fg Writes  is the number of times a foreground write occurred. Refer to page 2-77.
LRU Writes  is the number of times an LRU write occurred. Refer to page 2-77.
Idle Writes  is the number of times an idle write occurred. Refer to page 2-76.
Chunk Writes  is the number of times a write occurred at a checkpoint.
state  indicates the current page-cleaner activity using the following codes:
C  chunk write
E  exit
I  cleaner is idle
L  LRU queue
The “exit” code indicates either that OnLine is in the process of performing a shutdown or that a page cleaner did not return from its write in a specific amount of time. This is also known as a time-out condition. The tbinit daemon does not know what happened to the cleaner, so it is marked as “exit.” In either case, the cleaner process eventually exits.
type  indicates the type of lock using the following codes:
HDR  header
B  bytes lock
S  shared
X  exclusive
I  intent
U  update
IX  intent-exclusive
IS  intent-shared
SIX  shared, intent-exclusive
tblsnum  is the tblspace number of the locked resource.
rowid  is the row identification number. The rowid provides the following lock information:
■ If the rowid equals 0, the lock is a table lock.
■ If the rowid ends in two 0s, the lock is a page lock.
size
Option Descriptions numpages is the number of pages written to the logical log. numwrits is the number of writes to disk. pages/io is calculated as (numpages)/(numwrits). This value indicates how effectively physical log writes are being buffered. phybegin is the physical page number of the beginning of the log. physize is the size of the physical log in pages. phypos is the current position in the log where the next log record write will occur. phyused is the number of pages used in the log.
The following fields are repeated for each logical log file:
address  is the address of the log file descriptor.
number  is the logical log file logid number.
flags  gives the status of each log as follows:
A  newly added
B  backed up
C  current logical log file
F  free, available for use
L  contains the most-recent checkpoint record
U  unreleased
uniqid  is the unique ID number of the log.
begin  is the beginning page of the log file.
size  is the size of the log in pages.
Option Descriptions -p Option Use the -p option to display profile counts. The first portion of the display describes reads and writes. Reads and writes are tabulated in three categories: from disk, from buffers, and number of pages (read or written). The first %cached field is a measure of the number of reads from buffers compared to reads from disk. The second %cached field is a measure of the number of writes to buffers compared to writes to disk.
Option Descriptions The next portion of the -p display tabulates the number of times different ISAM calls were executed. The calls occur at the lowest level of operation and do not necessarily correspond one-to-one with SQL statement execution. A single query might generate multiple ISAM calls. These statistics are gathered across the OnLine system and cannot be used to monitor activity on a single database unless only one database is active or only one database exists.
Option Descriptions The third portion of the -p display tracks the number of times a resource was requested when none was available. For example, if the value of TBLSPACES is set to 200 in your configuration file and a user process attempted to open the 201st table, the ovtbls field would be incremented by 1. ovtbls is the number of times that OnLine attempted to exceed the maximum number of available tblspaces (specified as TBLSPACES in the configuration file).
Option Descriptions dltouts increments each time the distributed deadlock time-out value is exceeded while a user process is waiting for a lock. lchwaits increments when a process waits to gain access to a sharedmemory resource. ckpwaits is the number of checkpoint waits. compress increments each time a data page is compressed. (Refer to page 2-133 for more details about how OnLine implements page compression.
Summary information follows the individual LRU queue information. You can interpret the summary information as follows:
dirty  is the total number of buffers that have been modified in all LRU queues.
queued  is the total number of buffers in LRU queues.
total  is the total number of buffers.
hash buckets  is the number of hash buckets.
buffer size  is the size of each buffer.
start clean  is the value of LRU_MAX_DIRTY. Refer to page 5-17 for more details.
-s Option
Use the -s option to display general latch information. (Refer to page 2-41 for more details about latches.)
-t Option
Use the -t option to display tblspace information for active tblspaces. You can interpret output from this option as follows:
n  is a counter of open tblspaces.
address  is the address of the tblspace in the shared-memory tblspace table.
flgs  describes the flag using the following flag bits:
0x01  Busy
0x02  Dirty
ucnt  is the usage count, which indicates the number of user processes currently accessing the tblspace.
Option Descriptions -u Option Use the -u option to print a profile of user activity. The output described in this paragraph is provided for each user process. At a minimum, two user processes always appear in -u output: tbinit always occupies the first entry and tbundo always occupies the second entry. If you have configured OnLine for additional page cleaners (specified as CLEANERS in the configuration file), each page-cleaner daemon, tbpgcl, is listed as well.
Option Descriptions The second section of the -u output describes transactions. This information is required only for an X/Open environment or in some situations in which OnLine is participating in queries managed by IBM Informix STAR. Refer to page 9-58 for a detailed description of the transactions section.
Option Descriptions wait if the user process is waiting for a specific latch or lock, this field displays the address of the resource. Use this address to map to information provided in the -s (latch) or -k (lock) output. tout is the lock time-out counter. nlocks is the number of locks that the user process is holding. (The -k output should include a listing for each lock held.) nreads is the number of read calls that the user process has executed.
-z Option
Use the -z option to set the profile counts to 0. If you use the -z option to reset and monitor the count of some fields, be aware that profile counts are incremented for all activity that occurs in any database that the OnLine database server manages. Any user can reset the profile counts and thus interfere with monitoring that another user is conducting.

tbtape: Logging, Archives, and Restore
Syntax

tbtape [Request a Logical Log Backup, p. 7-104] [Start Continuous Backups, p. 7-104] [Create an Archive, p. 7-105] [Perform a Data Restore, p. 7-105] [Change Database Logging Status, p. 7-106]

If more than one tape is needed during the logical log backups or during an archive, tbtape prompts for each additional tape. Do not run tbtape in background mode (using the UNIX & operator) since you might need to provide input from the terminal or window.
Request a Logical Log Backup

tbtape -a

-a  directs tbtape to back up all full logical log files. The -a option is the equivalent of the DB-Monitor Auto-Backup option from the Logical-Logs menu.

OnLine backs up all full logical log files and prompts you with an option to switch the log files and back up the formerly current log. Refer to page 3-36 for more details to consider when you execute this command.
Create an Archive

tbtape -s

-s  directs tbtape to create an archive. The -s option is the equivalent of the DB-Monitor Create option from the Archive menu.

You are prompted to supply the archive level (0, 1, or 2) that you wish to create. You can create an archive while OnLine is in quiescent or online mode. Refer to page 3-57 for more details to consider when you execute this command.
Change Database Logging Status

tbtape -s { -B database | -N database | -U database }

-B  directs tbtape to change the status of the specified database to buffered logging.
database  is the name of the database. The database name cannot include a database server name.
-N  directs tbtape to end logging for the specified database.
-s  initiates an archive.
-U  directs tbtape to change the status of the specified database to unbuffered logging.
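Illustrative invocations follow (run tbtape from a terminal, not in the background; the database name stores and the exact flag ordering shown for the logging change are assumptions):

    # Back up all full logical log files:
    tbtape -a

    # Create an archive; you are prompted for the archive level:
    tbtape -s

    # Change the stores database to unbuffered logging:
    tbtape -s -U stores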
tbunload: Transfer Binary Data in Page Units
The tbunload utility writes a database or table to tape. The program unloads the data in binary in disk-page units, making it more efficient than dbexport. The tape created by tbunload is read using the tbload utility. The machine receiving the data and the machine used to create the tape must share the same page size (specified as BUFFSIZE in the configuration file).
Syntax

tbunload [Specify Tape Parameters, p. 7-109] { database | table name }

database  is the name of the database. The database name cannot include a database server name because tbunload does not support a client/server environment.
table name  is the name of the table. The table name cannot include a database server name because tbunload does not support a client/server environment.

If you do not specify any tape parameter options, tbunload uses the archive tape parameters by default.
Specify Tape Parameters

[-l] [-b blocksize] [-s tapesize] [-t device]

-b blocksize  specifies in kilobytes the block size of the tape device.
-l  directs tbunload to read the values for tape device, block size, and tape size from the logical log backup device parameters (LTAPEDEV, LTAPEBLK, and LTAPESIZE, respectively).
-s tapesize  specifies in kilobytes the amount of data that can be stored on the tape.
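For example (the device name, block size, tape size, and database name are hypothetical; omit the tape options to use the archive tape defaults):

    # Unload the stores database to a non-default tape device:
    tbunload -t /dev/rmt0 -b 16 -s 102400 stores

    # Unload using the logical log backup device parameters instead:
    tbunload -l stores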
Chapter 8  OnLine Message Log

In This Chapter . . . . . . . . . . . . . . . . . . 8-3
OnLine Message Log . . . . . . . . . . . . . . . . 8-3
Alphabetized Messages . . . . . . . . . . . . . . .
In This Chapter
This chapter introduces the OnLine Message Log, a UNIX file specified by the MSGPATH configuration file parameter.

OnLine Message Log
The messages contained in the OnLine message log rarely require immediate action. (Situations that require your immediate attention are reported through error messages or status messages sent to the system console.)
OnLine Message Log Informix recommends that you monitor the message log once or twice a day to ensure that processing is proceeding normally. To that end, Informix has documented the messages with the intent of providing you with as much information about OnLine processing as possible. Of the approximately 150 messages included here, six might require you to contact Informix technical support staff. These six messages are rarely, if ever, seen at customer locations.
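Because the message log is an ordinary UNIX file whose pathname is given by MSGPATH (the pathname below is only a hypothetical example), you can review it with standard UNIX tools as part of a daily check:

    # Watch new messages as OnLine writes them:
    tail -f /usr/informix/online.log

    # Review the most recent entries:
    tail -100 /usr/informix/online.log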
Alphabetized Messages

Aborting Long Transaction: tx 0xn
The logical log has filled beyond the long transaction high-water mark (LTXHWM) and the offending long transaction is in the process of rolling back. No additional action is needed. The address of the transaction structure in shared memory is displayed as a hexadecimal value.
Alphabetized Messages Attempt to write pages nn to a nn page buffer. OnLine detected a page buffer overflow. No action is needed. OnLine handles the situation internally. Backup Mode This message appears during a data restore. OnLine shared memory is in an internal backup mode while the blobspaces and dbspaces are being restored, before OnLine changes to online mode.
Alphabetized Messages The messages can be interpreted as follows: ■ The first message indicates the inconsistency. ■ The second message identifies the database server process affected. ■ The third message is the header for the hexadecimal/ASCII dump of the buffer header structure that follows. ■ The fourth message is the headers for the hexadecimal/ASCII dump of the page header structure that follows. call to ptmap() from ptbld() failed OnLine detected an error in an attempt to build a table.
Alphabetized Messages Cannot execute gtrid_aaaa -- No TP monitor available. (This message is only received by users of the IBM Informix TP/XA product.) An OnLine database server process is making an XA function call within an X/Open environment, but cannot detect the presence of a TP monitor. Refer to the IBM Informix TP/XA User Manual. Cannot Open Dbspace nnn OnLine is unable to access the specified dbspace.
Alphabetized Messages Cannot Perform Checkpoint A recovery process that is attempting to restore a mirror chunk has requested a checkpoint, but the checkpoint cannot be performed because the tbinit daemon process has died. Cannot Read Logical Log The logical log or the chunk containing the logical log has become corrupted. OnLine cannot initialize. Perform a data restore from archive.
Alphabetized Messages Cannot Rollforward from Checkpoint OnLine is unable to recover the logical log and thus is unable to complete fast recovery. If OnLine does not come online, perform a data restore from archive. Can’t fork to create tbundo errno=nn The tbinit process cannot start a tbundo daemon that is needed to perform cleanup on a transaction on behalf of an aborted database server process. OnLine initiates a shutdown. The errno nn refers to the UNIX error that indicates the cause of the problem.
Alphabetized Messages Chunk number nn pathname -- Online The indicated chunk in a mirrored pair has been recovered and is online (marked with status “O”). The chunk number and chunk device pathname are displayed. Chunk number nn pathname -- Recovery Aborted Due to Signal The indicated chunk could not be recovered from the other chunk in the mirrored pair because of a signal that was received. The chunk number and chunk device pathname are displayed.
Alphabetized Messages Chunk number nn pathname -- Recovery Failed - can’t fork The indicated chunk cannot be mirrored or recovered from the other chunk in the mirrored pair because the process that is needed to perform the recovery cannot be spawned. The chunk number and chunk device pathname are displayed. Refer to your operating system documentation for more information about fork().
Alphabetized Messages Dbspace dbspacename -- Recovery Begins (pid) Dbspace recovery occurs when mirroring is turned on for a dbspace. An attempt to recover this dbspace has begun. The process ID of the recovery process is displayed. This message is followed by messages for each chunk in the dbspace indicating when chunk recovery begins and completes (or fails). Dbspace dbspacename -- Recovery Complete (pid) Mirroring has been successfully turned on for this dbspace.
Alphabetized Messages Empty B-tree node 0xn; Unable to do CopyBack OnLine detected B+ tree corruption. Drop and re-create the index using the SQL statements DROP INDEX and CREATE INDEX. An ISAM error number is returned to the user. Node 0xn is the index node where the corruption was detected, expressed as a hexadecimal value. To determine which index must be re-created, run tbcheck -cI on the table. ERROR - bfput (BF_MODIFY) not in Critical Section.
Alphabetized Messages type is the log record type (refer to tblog); len is the length of the log record in bytes. ERROR: logread()- loguniq nn logpos 0xn OnLine detected an error while attempting to read from the logical log. The loguniq value is the logical log logid value; the logpos value is the logical log position, expressed as a hexadecimal. ERROR: logundo(n)- iserrno nn us 0xn pid nn tx 0xn logpos 0xn loguniq nn OnLine detected an error while attempting to roll back a logical log record.
Alphabetized Messages ERROR: page cleaner nn has timed out. A page-cleaner process could not complete its task within the allowed time (two minutes). The cleaner process is disabled. Page cleaning continues with the remaining page-cleaner processes and the tbinit daemon. The disabled page cleaner is identified by its process ID, nn.
Alphabetized Messages Failed to stat chunk chunk_pathname, errno = nn OnLine detected an error during an operating system stat() call. The UNIX error number is returned. Refer to your operating system documentation. fatal pgcompress error: pid = nn, uid = nn OnLine detected an error in an attempt to compress a page. The process ID and the user ID of the database server process that generated the error are displayed.
Alphabetized Messages INFORMIX-OnLine Must ABORT; Log error nn; us 0xn pid 0xn us_flags 0xn This error can occur when OnLine detects logical log corruption of some kind. Reinitialize OnLine shared memory. If this does not bring OnLine online, perform a data restore from archive. The log error value refers to an area in the code where the corruption was detected.
Alphabetized Messages Insufficient resources for index change rollback (partnum = nn, keynum = nn) Not enough disk space is available to complete the rollback of the index changes made. OnLine marks this index as unusable. The index must be dropped and re-created by the user using the SQL statements DROP INDEX and CREATE INDEX. (partnum refers to the tblspace number of where the index is located, expressed as an integer. keynum refers to an internal tblspace description page.
Alphabetized Messages I/O error, Primary Chunk pathname -- Offline OnLine detected an I/O error on a primary chunk in a mirrored pair. In response, the chunk was taken offline. I/O error ‘read’: expect nn actual nn addr 0xn errno nn retries nn The operating system read() call encountered an error. The operation is retried three times to ensure that the error is not spurious. The values reported are as follows: ■ expect is the expected byte count. ■ actual is the actual byte count.
Alphabetized Messages Level n Archive Completed The archive (of the indicated level) has ended. Level n Archive Started An archive (of the indicated level) is under way. Lock table overflow - user id nn process id nn A database server process attempted to acquire a lock when no locks were available. The user ID and process ID of the requesting database server process are displayed.
Alphabetized Messages ■ The number of transactions that were committed ■ The number of transactions that were rolled back ■ The number of transactions that were left open or unresolved (applicable for users of IBM Informix STAR or IBM Informix TP/XA) ■ The number of transactions that were unable to acquire locks needed to complete the transaction (These transactions are considered open.
Alphabetized Messages Not enough Logical Logfiles, Increase “LOGFILES” During a data restore, the value of the LOGFILES configuration file parameter must always be greater than or equal to the total number of logical log files. At some point during the restore, the number of logical log file exceeded the value of LOGFILES. Increase the value of LOGFILES. Not enough main memory OnLine detected an error in an attempt to acquire more memory space from the operating system.
Alphabetized Messages Physical Recovery Complete: nn Pages Restored Physical recovery is the first phase in OnLine fast recovery, during which the pages from the physical log are written back to the disk. This phase is complete and the second phase, logical recovery, is ready to begin. Whenever OnLine is shut down in a controlled manner, the physical log contains no pages and zero pages are restored.
Alphabetized Messages Process Aborted Abnormally (latch): pid=nn user=nn flags=0xn A database server process was terminated while it was holding a latch. OnLine initiates a shutdown to preserve database integrity and consistency. The database server process ID, its user ID, and the value of the user structure flags (expressed as hexadecimal values) are displayed.
Alphabetized Messages read_record: deleted rowid = 0xn, partnum = 0xn OnLine detected an error in an attempt to read a portion of a row. The page and slot value indicated by the rowid is empty and appears to have been deleted. The rowid should be deleted by OnLine. The rowid of the portion of the row is displayed as a hexadecimal value. partnum refers to the tblspace number, expressed as a hexadecimal value.
Alphabetized Messages Remote tape using alternate shell command: This message indicates that a remote tape device has been specified using the environment variable DBREMOTECMD. DBREMOTECMD overrides the default remote shell for your hardware platform. Set it using either a simple command or the full pathname. When you use the full pathname, the database server searches your PATH for the specified command. The value of DBREMOTECMD displays as rcmd.
Alphabetized Messages Some dirty buffers not written. diskcnt=nn writes=nn notflsh=nn OnLine detected that some modified buffers destined for a specific chunk were not written during this cleaning. These buffers will be written during the next flushing of the buffer pool. The value of diskcnt is the number of buffers that user processes attempted to write; writes is the number of buffers that were written; and notflsh is the difference. All values are integers.
Alphabetized Messages Too Many Active Transactions, Increase “TRANSACTIONS” During a data restore, the value of the TRANSACTIONS configuration file parameter must always be greater than or equal to the total number of active transactions. At some point during the restore, the number of active transactions exceeded the value of TRANSACTIONS. Increase the value of TRANSACTIONS.
Alphabetized Messages Transaction table overflow - user id nn, process id nn A database server process attempted to gain access to the transaction table when no entries in the shared-memory table were available. The user ID and process ID of the requesting database server process are displayed. tx_offwtlist() - userp 0xn not on wait list - txp 0xn OnLine detected an error in the wait-list management for a transaction. No action is needed. OnLine handles the situation internally.
Alphabetized Messages User table overflow - user id nn process id nn A database server process attempted to attach to shared memory when there were no available entries in the shared-memory user table. The user ID and process ID of the requesting database server process are displayed. Warning: Unable to open tblspace nn, iserrno = nn OnLine cannot open the specified tbspace. (The value nn is the hexadecimal representation of the tblspace number.
Chapter 9  Product Environment

In This Chapter . . . . . . . . . . . . . . . . . . 9-3
The OnLine Environment
OnLine Features
High Performance
Fault Tolerance and High Availability
Multimedia Support
Distributed Data Queries
Features Beyond the Scope of OnLine
Bad-Sector Mapping
Blob Scanning or Compression
Two-Phase Commit and Automatic Recovery
Coordinator Recovery
Participant Recovery
Presumed-Abort Optimization
Independent Action and Manual Recovery
Heuristic Decisions: What and Why
Heuristic Rollback
Condition 1: Logical Log Fills to a High-Water Mark
Condition 2: You Execute tbmode -z
When a Heuristic Rollback Occurs
In This Chapter This chapter describes three possible OnLine environments: ■ A single OnLine database server operating on a host machine ■ Two or more OnLine database servers operating independently on the same host machine (referred to as multiple residency) ■ Two or more OnLine database servers operating on a network configured with the IBM Informix STAR client/server product Read only the sections of this chapter that are of interest to you.
OnLine Features High Performance OnLine achieves high performance through two mechanisms: ■ Raw disk management ■ Shared-memory management OnLine performs its own disk management using raw devices. By storing tables on one or more raw devices instead of in a standard UNIX file system, OnLine can manage the physical organization of data and minimize disk I/O.
OnLine Features Fault Tolerance and High Availability OnLine uses several logging and recovery mechanisms to protect data integrity and consistency in the event of an operating-system or media failure. The data needed to support these logging and recovery mechanisms are stored in the OnLine physical and logical logs, on archive tape, and on logical log backup tape. OnLine fault-tolerant features are enhanced by associated high-availability features.
Distributed Data Queries
The IBM Informix STAR product enables an OnLine user to query and update more than one database across multiple OnLine database servers within a single transaction. The OnLine database servers can reside within a single host machine or on the same network. Support is provided for TCP/IP networks. For more information about IBM Informix STAR administration, refer to page 9-15.

Features Beyond the Scope of OnLine
Blob Scanning or Compression
OnLine receives blob data into an existing table in any one of four ways:
■ From the DB-Access LOAD statement
■ From the OnLine dbload utility
■ From INFORMIX-ESQL/C locator variables
■ From INFORMIX-ESQL/C FILE host data types
OnLine does not contain any mechanisms for scanning blobs and inserting the data into a file, or for blob compression, after the blob has been scanned.
What Is Multiple Residency? ■ Create multiple configuration files and OnLine database server environments that are precision-tuned for a specific use. Because multiple OnLine database servers each have their own shared-memory configurations and locations, you can create a configuration file for each database server that reflects special tuning requirements. Each application could run in an OnLine environment designed especially for it. ■ Separately administer sensitive databases.
What Is Multiple Residency? Figure 9-1 Conceptual illustration of multiple residency Each OnLine accesses a unique area of the disk disk disk tbconfig1 tbconfig2 shared-memory segments database server process database server process Application: Application: TBCONFIG set to tbconfig1 TBCONFIG set to tbconfig2 Product Environment 9-9
How Multiple Residency Works
Multiple residency is possible because separate shared-memory segments can be maintained for each instance of OnLine. The link between each OnLine instance and its associated shared-memory segments is the value of SERVERNUM. Under the UNIX operating system, each OnLine user process that wishes to attach to shared memory passes to the operating system a shared-memory key value.
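As a sketch of how an instance is selected (the file name, device, and values shown are hypothetical; the full procedure appears in the next section), each OnLine instance has its own configuration file with a distinct SERVERNUM, and the utilities choose an instance through the TBCONFIG environment variable:

    # In $INFORMIXDIR/etc/tbconfig2 (excerpt):
    #   SERVERNUM   2            # unique on this machine (0-255)
    #   ROOTPATH    /dev/rvol2   # this instance's own root dbspace device

    # Bourne shell: point the OnLine utilities at the second instance
    TBCONFIG=tbconfig2
    export TBCONFIG
    tbstat -c    # displays the configuration file now in effect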
How to Set Up Multiple Residency
This section describes the six-step installation procedure for multiple OnLine database servers on the same host machine. Before you perform this procedure, you already should have installed one OnLine database server as described in Chapter 1, “Installation and Initial Configuration.” Chapter 1 contains background information that is useful for understanding this section.

To install multiple residency
1. Log in as user informix.
How to Set Up Multiple Residency Step 2: Edit the Configuration File for the New OnLine Use an operating-system editor to edit the parameters in the new configuration file. Do not use DB-Monitor for this task. If you use DB-Monitor, you will edit the values of the existing OnLine configuration file, and not the new one as you intend. (DB-Monitor accesses the OnLine configuration indicated by TBCONFIG at the time.
How to Set Up Multiple Residency SERVERNUM The SERVERNUM parameter specifies the unique value associated with this OnLine configuration. (Values need to be unique within a machine. OnLine database servers within a network can use the same SERVERNUM value.) The name of the configuration file should reflect the value you select for SERVERNUM. The default value for SERVERNUM is zero. The value cannot exceed 255.
How to Set Up Multiple Residency Step 4: Complete Tuning OnLine Configuration (Optional) Access DB-Monitor or use an operating-system editor to modify other tuning modifications to the configuration parameters if you wish. (If you execute tbmonitor to use DB-Monitor, the configuration values that appear in DB-Monitor are read from the configuration file specified by the TBCONFIG environment variable.
OnLine Administration with IBM Informix STAR
The IBM Informix STAR product enables a user to query and update more than one database across multiple OnLine database servers within a single transaction. The OnLine database servers can reside within a single host machine or on the same network.
Sharing Data by Using IBM Informix STAR
Figure 9-2 illustrates a situation in which data is shared over a connection created between two different OnLine database servers.
Figure 9-2  IBM Informix STAR enables OnLine to share data across a network. (The current OnLine server in the figure is italy.)
Sharing Data by Using IBM Informix STAR Updating Multiple OnLine Servers IBM Informix STAR ensures that transactions that span more than one OnLine database server meet all the requirements of transaction processing (atomicity, consistency, isolation, and durability).
Sharing Data by Using IBM Informix STAR In the transaction shown in Figure 9-2, IBM Informix STAR must ensure that all three OnLine database servers implement the same action, either to commit the transaction or to roll back the transaction. This requirement of atomicity must be enforced even if a failure occurs at any host machine.
IBM Informix STAR and Two-Phase Commit Protocol Within a client/server network, the identity of the coordinator OnLine is dynamic. In any transaction that includes multiserver modifications, the role of coordinator is assumed by the OnLine database server that is managing the current database. Thus, if the first statement in the Figure 9-2 example were changed to specify the current database as stores5@france, the twophase commit protocol would assign the role of coordinator to france.
IBM Informix STAR and Two-Phase Commit Protocol During the postdecision phase the coordinator directs each database server to either commit the changes or roll back the transaction. If the coordinator directs the participants to commit the transaction, it waits to receive acknowledgments from each before ending the global transaction. If the coordinator directs the participants to roll back the transaction, no acknowledgments are expected from the participants.
IBM Informix STAR and Two-Phase Commit Protocol Example Transaction Commit Figure 9-4 is a simple representation of a two-phase commit protocol that results in a transaction being committed. Notice that the coordinator’s decision to commit or roll back the transaction is stored on stable storage before the decision is propagated to the participants. This is done to facilitate recovery.
IBM Informix STAR and Two-Phase Commit Protocol Example Transaction Rollback Figure 9-5 is a simple representation of a two-phase commit protocol that results in a transaction being rolled back. Notice that the participants do not send a confirmation to the coordinator when the piece of work is rolled back. The coordinator does not keep a record that a transaction completed.
Two-Phase Commit and Automatic Recovery
The two-phase commit protocol includes automatic recovery in the event of an uncontrolled shutdown (such as an operating-system failure) experienced by either the coordinator or a participant OnLine database server. Two types of recovery mechanics are included in the two-phase commit protocol:
Coordinator recovery  The coordinating database server process is terminated prematurely.
Two-Phase Commit and Automatic Recovery Coordinator recovery is not needed if the following occurs: ■ The coordinator has not recorded a decision. ■ The coordinator’s decision is to roll back the transaction. If either one of these two conditions is true, the situation is handled by participant recovery. Coordinator failure can occur as a result of either a system failure or the isolated failure of the coordinator database server process.
Two-Phase Commit and Automatic Recovery If the coordinator is unable to contact one or more of the participants, or if execution errors are detected, messages are recorded in the OnLine message log. The coordinator continues its attempts to contact all participants and to complete the protocol. Figure 9-6 Simple illustration of automatic coordinator recovery. See text on page 9-24. Start Protocol Coordinator: Sends a “prepare” message to participants.
Two-Phase Commit and Automatic Recovery Participant Recovery Participant recovery occurs whenever a database server process acting as participant precommits a piece of work but is terminated before the twophase commit protocol can be completed. The goal of participant recovery is to complete the two-phase commit protocol according to the decision reached by the coordinating OnLine. During participant recovery, the piece of work waits for direction that the work should be committed.
Two-Phase Commit and Automatic Recovery If the participant does not receive contact from the coordinator by the time specified as TXTIMEOUT (refer to page 9-57), the tbinit process at the participant OnLine reads the value of SQLEXEC to fork a new database server process. (If SQLEXEC is not set, tbinit uses the pathname $INFORMIXDIR/lib/sqlturbo. The database server process is owned by the user who started tbinit, either informix or root.) This server process acts as the new participant.
Two-Phase Commit and Automatic Recovery Start Protocol Coordinator: Sends a “prepare” message to participants. C P1 P2 P3 P1 Participant : Returns message: “cannot commit.” P2 and P3 Participants: Return message: “can commit.” P3 Participant fails. Coordinator: Receives “cannot commit” response. Records decision to roll back. C Start Participant Recovery After OnLine returns online, transaction waits for message from coordinator until TXTIMEOUT elapses tbinit forks database server process.
Presumed-Abort Optimization
If the coordinator database server process fails before it makes a decision or after it decides to roll back a transaction, it is up to each participant OnLine to initiate automatic recovery. This responsibility is part of the presumed-abort optimization. Optimization is realized because the coordinator is not required to flush the logical log record (BEGPREP) that indicates a two-phase commit protocol has begun.

Independent Action and Manual Recovery
Independent action in and of itself does not create the need for manual recovery. For example, if a piece of work at a participant database server is rolled back because it developed into a long transaction and the coordinator issues a decision to roll back the global transaction, the complete database remains consistent and there is no problem. However, within the context of two-phase commit protocol, some independent actions can develop into heuristic decisions.
Heuristic Decisions: What and Why The word heuristic refers to the independent nature of the action; that is, it occurred at the participant OnLine independent of the two-phase commit protocol, without the coordinator’s instruction or knowledge. The only event that would cause tbinit to initiate a heuristic rollback is a long-transaction condition. (Refer to page 2-159 for more details about a long-transaction condition.
Heuristic Decisions: What and Why Once a heuristic rollback or end-transaction occurs, you have started down the road toward manual recovery, a complex and time-consuming process. It is important for you to fully understand heuristic decisions, as that is the best means for avoiding them. You should always be wary of executing tbmode -z or tbmode -Z within the context of two-phase commit.
Heuristic Decisions: What and Why Whenever a heuristic rollback occurs, the possibility exists that manual recovery might be required. If manual recovery is required, you are advised by messages in the OnLine message log. This possibility is described completely in the discussion that begins on page 9-36. Figure 9-8 Characteristics of Heuristic Rollback When it occurs: A heuristic rollback occurs after the participant OnLine sends a “can commit” message for a piece of work.
Heuristic Decisions: What and Why In the first case, when you execute tbmode -Z to end a global transaction at the coordinator OnLine database server, you interfere with participant recovery in the following way. If a participating OnLine database server was down at the time that the coordinator issued its decision to commit, participant recovery eventually queries the coordinator for information.
Heuristic Decisions: What and Why Figure 9-9 Characteristics of Heuristic “End-Transaction” When it occurs: A heuristic end-transaction occurs after the participant OnLine sends a “can commit” message for a piece of work. Why it occurs: Administrator executed. tbmode -Z address, terminating the piece of work being performed on behalf of the global transaction. What occurs: This piece of work is rolled back and ended at this OnLine database server, independent of coordinator instructions.
Heuristic Rollback Heuristic Rollback Two conditions might initiate a heuristic rollback: ■ The logical log fills to the point defined by one of the long-transaction high-water marks (configuration file parameters LTXHWM or LTXEHWM). The source of the long-transaction condition is a piece of work being performed on behalf of a global transaction.
Heuristic Rollback It is important to recognize that this rollback does not close the transaction as it would in a single OnLine environment. Under two-phase commit, the logical log files that contain records associated with the piece of work are considered open until an ENDTRANS logical log record is written. (Refer to page 9-44 for more details about the logical log records during the two-phase commit protocol.
Heuristic Rollback The next event in the scenario happens at the coordinator OnLine. The coordinator issues second-phase instructions either to roll back the transaction or to commit the transaction. If the coordinator decision is to roll back the global transaction, a database server process at the participant OnLine writes an ENDTRANS record in the logical log and the transaction associated with this piece of work is closed.
Heuristic Rollback However, if the coordinator gathers all responses from participants, if at least one participant reports a heuristic rollback, and at least one reports an acknowledgment of a commit, the result is referred to as a mixed transaction result. The following events occur as a consequence: 1. The coordinator writes the following message to its own OnLine message log. (Refer to page 8-22 for a complete description of the message.) Mixed transaction result.
Heuristic End-Transaction Heuristic End-Transaction There is only one, rare situation in which it is reasonable for you to decide to execute the tbmode -Z option to initiate a heuristic end-transaction: a piece of work that has been heuristically rolled back remains open. This open transaction prevents your logical log files from becoming free. As a result, the logical log is dangerously close to full.
Heuristic End-Transaction As a consequence, the transaction remains open. The open transaction prevents you from backing up logical log files and freeing space in the logical log. As the logical log continues to fill, it might reach the point specified by the exclusive-access, long-transaction high-water mark, LTXEHWM. If this occurs, normal processing is suspended. At some point after the LTXEHWM high-water mark is reached, you must decide if the open transaction is endangering your logical log.
Heuristic End-Transaction In the first case, when you execute tbmode -Z to end a global transaction at the coordinator OnLine database server, you interfere with participant recovery in the following way. If a participating OnLine database server was down at the time that the coordinator issued its decision to commit, participant recovery eventually queries the coordinator for information.
Two-Phase Commit Protocol Errors Two-Phase Commit Protocol Errors Three two-phase commit protocol errors require special attention from the administrator: ■ -698 Inconsistent transaction ... ■ -699 Transaction heuristically rolled back ■ -716 Possible inconsistent transaction If you receive either error -698 or -699, a heuristic rollback has occurred.
Two-Phase Commit and Logical Log Records
Support of two-phase commit requires five additional logical log records. These records can be used to detect heuristic decisions and, if required, to help you perform a manual recovery in the event of an inconsistent database system. The five records and their contents are described in this section. (For more details about how to display and interpret the contents of the logical log, refer to page 7-55.)
Two-Phase Commit and Logical Log Records TABLOCKS The TABLOCKS logical log record can be written by either a coordinator or a participant OnLine database server. It is associated with either a BEGPREP or a PREPARE record and contains a list of the locked tblspaces (by tblspace number) held by the transaction. (Within the IBM Informix STAR environment, transactions, not database server processes, are shown as the owners of locks.
Two-Phase Commit and Logical Log Records The output from tblog contains only the type field, as follows: ENDTRANS Transaction Commit Records Figure 9-10 illustrates the writing sequence of the logical log records during a successful two-phase commit protocol that results in a committed transaction. (Refer to page 9-22 for a more general discussion of this same illustration.) Some of the logical log records must be flushed immediately; for others, flushing is not critical.
Two-Phase Commit and Logical Log Records The coordinator’s commit work record (COMMIT record) contains all information needed to initiate the two-phase commit protocol and it serves as the starting point for automatic recovery in the event of a failure on the coordinator’s host machine. Because this record is critical to recovery, it is not allowed to remain in the logical log buffer. The coordinator must immediately flush the COMMIT logical log record.
Two-Phase Commit and Logical Log Records Heuristic Rollback Records Figure 9-11 illustrates the writing sequence of the logical log records during a heuristic rollback. (Refer to page 9-36 for a description of a heuristic rollback.) Since a heuristic rollback only occurs after the participant sends a message that it can commit, and the coordinator sends a message to commit, the first phase of this protocol is the same as that shown in Figure 9-10.
Figure 9-11 shows the following sequence of events among the coordinator (C) and the participants (P1, P2, P3):
1. Coordinator: writes log record BEGPREP; sends the message “precommit.”
2. All participants: write log record TABLOCKS; write and flush log record PREPARE; send the message “commit ok.”
3. Within the P1 participant’s environment: tbinit detects a long-transaction condition and a rollback starts; the participant writes log record HEURTX, then log record ROLLBACK; a message is written in the OnLine message log.
4. Coordinator: writes and flushes log record COMMIT.
Heuristic End-Transaction Records
Figure 9-12 illustrates the writing sequence of the logical log records during a heuristic end-transaction. The event is always the result of an administrator executing tbmode -Z at a participant OnLine database server after the participant has sent a message: “can commit.” (Refer to page 9-40 for a description of a heuristic end-transaction.)
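A heuristic end-transaction is always initiated manually. A minimal command-line sketch at the participant follows; it assumes that tbmode -Z takes the address of the transaction as reported in the transactions section of tbstat -u output, so verify the exact syntax in the tbmode reference material before you rely on it.

   tbstat -u               # note the address of the offending transaction in the transactions section
   tbmode -Z <address>     # heuristically end that transaction at this participant

After the command completes, an ENDTRANS record should appear in the participant's logical log, as described earlier in this section.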
Determining Database Consistency
This section describes the administrative procedure that might be required if a database server process or transaction is killed after it has entered the second phase of the two-phase commit protocol.
Step 1: Determine Where a Heuristic Decision Occurred
There are a number of ways for you to determine the specific OnLine participants affected by a heuristic decision to either roll back or end a transaction:
■ Examine the return code from the COMMIT WORK statement.
■ Examine the messages in the OnLine message log file for each participant.
■ Examine the tblog output for each participant.
Each of these options is addressed in the paragraphs that follow.
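For the second and third options, a quick command-line check at each participant might look like the following sketch. The message-log pathname is set by the MSGPATH configuration parameter, so the path shown here is only an example, and the search strings are illustrative.

   grep -i heuristic /usr/informix/online.log     # look for heuristic-decision messages in the OnLine message log
   tblog -l | egrep 'HEURTX|ENDTRANS'             # look for heuristic rollback or end-transaction log records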
Step 2: Determine If the Networked Database Contains Inconsistent Data
Suppose that you have determined that a transaction was inconsistently implemented and that one or more participants committed the transaction while one or more participants rolled it back.
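One way to check is to query the same table at each participant and compare the results. The following sketch reuses the databases and table from the example later in this section; substitute your own database names, tables, and selection criteria.

   DATABASE tmp;
   SELECT * FROM t;             -- at the current database server
   SELECT * FROM tmp@apex:t;    -- the same table at the participant apex, reached through IBM Informix STAR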
Steps 3 and 4: Decide If Correction Is Needed
If an inconsistent transaction creates an inconsistent database, three options are available to you:
■ Leave the networked database in its inconsistent state.
■ Remove the effects of the transaction wherever it was committed, thereby rolling back the entire transaction.
■ Reapply the effects of the transaction wherever it was rolled back, thereby committing the transaction.
You can also use the information in the HEURTX log record to locate all log records (at all participating OnLine database servers) associated with the global transaction of which this local transaction is just a piece. The steps involved are as follows:
1. Obtain the local xid from the HEURTX log record at a participant OnLine database server where the transaction was rolled back.
2.
DATABASE tmp;
BEGIN WORK;
INSERT INTO t VALUES (2);
INSERT INTO tmp@apex:t VALUES (2);
COMMIT WORK;    ### return code -698

The following excerpt is taken from the logical log at the current database server:
..... 17018 18018 1802c 1804c begin 1 19018 19060 1a018 ....
1. Find all records that were updated.
2. Identify their type (insert, delete, update) using the table on page 7-57.
3. Use the tblog -l output for each record to obtain the local xid, the tblspace number, and the rowid. (Refer to page 7-55 for a description of the tblog -l header.)
4. Map the tblspace number to a table name by comparing the tblspace number to the value in the partnum column of the systables system catalog table.
5.
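Step 4, and the follow-up examination of the affected rows, can be carried out with ordinary SQL once you have the values from tblog -l. The following is a minimal sketch; the tblspace number (1048611), the rowid (257), and the table name customer are all hypothetical, and you might need to convert the tblspace number from hexadecimal to decimal before comparing it with partnum.

   SELECT tabname FROM systables WHERE partnum = 1048611;   -- find the table that owns this tblspace
   SELECT * FROM customer WHERE rowid = 257;                -- examine the affected row in the table returned above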
IBM Informix STAR Configuration Parameters
DEADLOCK_TIMEOUT
The default value of DEADLOCK_TIMEOUT is 60 seconds. Adjust this value carefully. If you set it too low, individual OnLine database servers abort transactions that are not deadlocks. If you set it too high, multiserver deadlocks could reduce concurrency.
TXTIMEOUT
The TXTIMEOUT configuration parameter is specific to the IBM Informix STAR two-phase commit protocol.
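Both parameters are set in your OnLine configuration file. The entries below are a sketch with illustrative values and comments only; consult the parameter descriptions elsewhere in this manual for the defaults and valid ranges before changing either value.

   DEADLOCK_TIMEOUT 60     # seconds to wait for a remote resource before assuming a multiserver deadlock
   TXTIMEOUT 300           # two-phase commit timeout for IBM Informix STAR (value shown is arbitrary)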
Track a Transaction with tbstat Output
Important: In tbstat -k output, the owner field still displays the address of the database server process that owns the transaction. This address corresponds to the address displayed in the user field of the transactions section of tbstat -u output and not to the transaction address field.

Tracking a Global Transaction
When a global transaction starts, it receives a unique identification number called a GTRID (global transaction identification).
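In practice, you can correlate the two displays as in the following sketch; the comments restate the relationship described in the note above, and the exact output layouts are documented in the tbstat reference material.

   tbstat -u     # in the transactions section, note both the transaction address and the user field
   tbstat -k     # the owner field of each lock matches the user field above, not the transaction address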
Transaction flag Field
The transaction flag field describes the state of the transaction. Flags can appear in the first, middle, and last position of the field, as described in the following table.

Position  Flag  Description
1         A     Attached. This flag appears in the first position when a transaction is owned by (attached to) a database server process. If this flag does not appear, the transaction is orphaned.
1         C     Clean.
Position  Flag  Description
5         C     Coordinator. This transaction is an IBM Informix STAR coordinator.
5         G     Global. If this global transaction is owned by a database server process, a second transaction should also appear in the table, owned by the same database server process. The second transaction would be the piece of work that is performed by this OnLine database server.
5         S     Participant. This transaction is an IBM Informix STAR participant.
Transaction isolation Field
The isolation field describes the transaction isolation level. Four isolation levels are valid under IBM Informix STAR:
■ COMMIT (Committed Read)
■ CURSOR (Cursor Stability)
■ DIRTY (Dirty Read)
■ REPEAT (Repeatable Read)
A fifth value, NO TRANS, might appear in this field.
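These display values correspond to the isolation levels that an application selects with the SQL SET ISOLATION statement; the mapping sketched below is inferred from the level names, and NO TRANS has no corresponding statement.

   SET ISOLATION TO COMMITTED READ;     -- displayed as COMMIT
   SET ISOLATION TO CURSOR STABILITY;   -- displayed as CURSOR
   SET ISOLATION TO DIRTY READ;         -- displayed as DIRTY
   SET ISOLATION TO REPEATABLE READ;    -- displayed as REPEAT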
Appendix
Notices
IBM may not offer the products, services, or features discussed in this document in all countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used.
The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
Any performance data contained herein was determined in a controlled environment. Therefore, the results obtained in other operating environments may vary significantly. Some measurements may have been made on development-level systems and there is no guarantee that these measurements will be the same on generally available systems. Furthermore, some measurements may have been estimated through extrapolation. Actual results may vary.
Each copy or any portion of these sample programs, or any derivative work, must include a copyright notice as follows: © (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. © Copyright IBM Corp. (enter the year or years). All rights reserved. If you are viewing this information in softcopy, the photographs and color illustrations may not appear.