HPSS Management Guide
High Performance Storage System
Release 7.3
November 2009 (Revision 1.0)
© Copyright (C) 1992, 2009 International Business Machines Corporation, The Regents of the University of California, Los Alamos National Security, LLC, Lawrence Livermore National Security, LLC, Sandia Corporation, and UT-Battelle. All rights reserved. Portions of this work were produced by Lawrence Livermore National Security, LLC, Lawrence Livermore National Laboratory (LLNL) under Contract No. DE-AC52-07NA27344 with the U.S.
Table of Contents

Chapter 1. HPSS 7.1 Configuration Overview
1.1. Introduction
1.2. Starting the SSM GUI for the First Time
1.3. HPSS Configuration Roadmap (New HPSS Sites)
3.3.3.1. login.conf
3.3.3.2. krb5.conf (For Use with Kerberos Authentication Only)
3.3.4. SSM Help Files (Optional)
3.3.5. SSM Desktop Client Packaging
4.2.4. Modifying a Storage Subsystem
4.2.5. Deleting a Storage Subsystem
Chapter 5. HPSS Servers
5.1. Server List
5.1.1.2. Migration/Purge Server Information Window
5.1.1.3. Mover Information Window
5.1.1.1. Physical Volume Library (PVL) Information Window
5.1.1.2. Physical Volume Repository (PVR) Information Windows
6.2.4. Deleting a Storage Hierarchy Definition
6.3. Classes of Service
6.3.1. Classes of Service Window
6.3.2. Class of Service Configuration Window
8.1. Adding Storage Space
8.1.1. Importing Volumes into HPSS
8.1.1.1. Import Tape Volumes Window
8.1.1.2. Selecting Import Type for Tape Cartridges
9.1. Logging Overview
9.2. Log Policies
9.2.1. Creating a Log Policy
9.2.2. Logging Policies Window
13.1. Managing HPSS Users
13.1.1. Adding HPSS Users
13.1.1.1. Add All User ID Types
13.1.1.2. Add a UNIX User ID
15.1.2. Overview of the DB2 Backup Process
15.1.2.1. Configuring DB2 for Online Backup
15.1.3. Overview of the DB2 Recovery Process
15.2. HPSS System Environmental Backup

List of Tables

Table 1. SSM General Options
Table 2. HPSSGUI Specific Options
Table 3. HPSSADM Specific Options
Table 4. Mover TCP Pathname Options
Preface Who Should Read This Book The HPSS Management Guide is intended as a resource for HPSS administrators. For those performing the initial configuration for a new HPSS system, Chapter 1 provides a configuration roadmap. For both new systems and those upgraded from a previous release, Chapter 1 provides a configuration, operational, and performance checklist which should be consulted before bringing the system into production.
Chapter 1. HPSS 7.1 Configuration Overview 1.1. Introduction This chapter defines the high-level steps necessary to configure, start, and verify correct operation of a new 7.1 HPSS system, whether that system is created from scratch or created by conversion from a 6.2 HPSS system. To create or modify the HPSS configuration, we recommend that the administrator first be familiar with the information described in the HPSS Installation Guide, Chapter 2: HPSS Basics and Chapter 3: HPSS Planning.
listed. Each step is required unless otherwise indicated. Each step is discussed in more detail in the referenced section.

1. Configure storage subsystems (Section 4.2.2: Creating a New Storage Subsystem on page 76). Subsystems can be configured only partially at this time. The Gatekeeper, Default COS, and Allowed COS fields will be updated in a later step.

2. Configure HPSS storage policies:
· Accounting Policy (Section 13.2.1: on page 330)
· Log Policies (Section 9.
B. Create storage resources (Section 8.1.2: Creating Storage Resources on page 234)

4. Create additional HPSS Users (Section 13.1.1: Adding HPSS Users on page 325)

5. Create Filesets and Junctions (Section 10.1: Filesets & Junctions List on page 308 and Section 10.5: Creating a Junction on page 315)

6. Create HPSS /log Directory. If log archiving is enabled, create the /log directory in HPSS using an HPSS namespace tool such as scrub or ftp.
• Verify that a Core Server and Migration/Purge Server have been configured for each storage subsystem.
• Verify that each storage subsystem is accessible by using lsjunctions and ensuring that there is at least one junction to the Root fileset of each subsystem. (The root fileset for a given subsystem can be found in the specific configuration for the subsystem's Core Server.)

Servers
• Verify that all required HPSS servers are configured and running.
chosen if specified by their COS ID.
• Verify that classes of service with multiple copies have the Retry Stage Failures from Secondary Copy flag enabled.

File Families, Filesets, and Junctions
• Verify that file families and filesets are created according to the site's requirements.
• Verify that each fileset is associated with the appropriate file family and/or COS.
• Verify that each fileset has an associated junction.
• Monitor free space from the top level storage class in each hierarchy to verify that the migration and purge policies are maintaining adequate free space.

1.6.3. Performance Checklist

Measure data transfer rates in each COS for:
• Client writes to disk
• Migration from disk to tape
• Staging from tape to disk
• Client reads from disk

Transfer rates should be close to the speed of the underlying hardware.
Chapter 2. Security and System Access 2.1. Security Services As of release 6.2, HPSS no longer uses DCE security services. The new approach to security divides services into two APIs, known as mechanisms, each of which has multiple implementations. Configuration files control which implementation of each mechanism is used in the security realm (analogous to a DCE cell) for an HPSS system. Security mechanisms are implemented in shared object libraries and are described to HPSS by a configuration file.
This can be "unix" or "ldap".
· a string used by the authorization mechanism to locate the security data for this realm. This should be "unix" for UNIX authorization; for LDAP, it should be an LDAP URL used to locate the entry for the security realm in an LDAP directory.

2.1.2. Security Mechanisms

HPSS 7.1 supports UNIX and Kerberos mechanisms for authentication. It supports LDAP and UNIX mechanisms for authorization.

2.1.2.1.
2.1.2.3. LDAP LDAP authorization is not supported by IBM Service Agreements. The following information is provided for sites planning to use LDAP authorization with HPSS 7.1 as a site supported feature. An option for the authorization mechanism is to store HPSS security information in an LDAP directory. LDAP (Lightweight Directory Access Protocol) is a standard for providing directory services over a TCP/IP network.
To create a new group, use the following command at the hpss_ldap_admin prompt:

group create -gid <gid> -name <group name> [-uuid <uuid>]

If no UUID is supplied, one will be generated.

• Deleting a group

To delete a group, use the following command at the hpss_ldap_admin prompt:

group delete [-gid <gid>] [-name <group name>] [-uuid <uuid>]

You may supply any of the arguments listed. This command will delete any group entries in the LDAP information that have the indicated attributes.
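As an illustration of the group commands above, an hpss_ldap_admin session might look like the following (the group name and GID shown are hypothetical):

```
% hpss_ldap_admin
hpss_ldap_admin> group create -gid 2001 -name hpss_ops
hpss_ldap_admin> group delete -name hpss_ops
```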
obtained from the foreign site's administrator. An example would be: "ldap://theirldapserver.foreign.com/cn=FOREIGNREALM.FOREIGN.COM"

• Deleting a trusted foreign realm

To delete an entry for a trusted foreign realm, use the following hpss_ldap_admin command:

trealm delete [-id <realm id>] [-name <realm name>]

Any of the arguments listed can be supplied to select the trusted realm entry that will be deleted.

2.2. HPSS Server Security ACLs

Beginning with release 6.
rw---dt user ${HPSS_PRINCIPAL_PVR}
rw-c-dt user ${HPSS_PRINCIPAL_SSM}
------t any_other

PVR:
rw---dt user ${HPSS_PRINCIPAL_PVL}
rw-c--t user ${HPSS_PRINCIPAL_SSM}
------t any_other

SSM:
rwxcidt user ${HPSS_PRINCIPAL_ADM_USER}
------t any_other

All other types:
rw-c-dt user ${HPSS_PRINCIPAL_SSM}
------t any_other

In most cases, the ACLs created by default for new servers should be adequate. In normal operation, the only ACL that has to be altered is the one for the SSM client interface.
2.4.1. Configuring/Updating a Location Policy The Location Policy can be created and updated using the Location Policy window. If the Location Policy does not exist, the fields will be displayed with default values for a new policy. Otherwise, the configured policy will be displayed. Once a Location Policy is created or updated, it will not be in effect until all local Location Servers are restarted or reinitialized.
Maximum Request Threads. The maximum number of concurrent client requests allowed.

Advice - If the Location Server is reporting heavy loads, increase this number. If this number is above 300, consider replicating the Location Server on a different machine. Note that if this value is changed, the general configuration thread value (Thread Pool Size) should be adjusted so that its value is always larger than the Maximum Request Threads. See Section 5.1.1.2: Interface Controls on page 92.
1. Add the HPSS_RESTRICTED_USER_FILE environment variable to /var/hpss/etc/env.conf. Set the value of this variable to the name of the file that will contain the list of restricted users. For example:

HPSS_RESTRICTED_USER_FILE=/var/hpss/etc/restricted_users

2. Edit the file and add the name of the user to the file. The name should be in the form:

name@realm

The realm is not required if the user is local to the HPSS realm. For example:

dsmith@lanl.gov

3.
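The first two steps can be sketched as shell commands. The sketch writes to a scratch directory purely so it is harmless to run; on a real system the files live under /var/hpss/etc:

```shell
# Scratch stand-in for /var/hpss/etc (illustration only)
ETC=$(mktemp -d)

# Step 1: name the restricted-user list in env.conf
echo "HPSS_RESTRICTED_USER_FILE=$ETC/restricted_users" >> "$ETC/env.conf"

# Step 2: one restricted user per line, in name@realm form
echo "dsmith@lanl.gov" >> "$ETC/restricted_users"

cat "$ETC/restricted_users"
```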
Field Descriptions

Restricted Users list. This is the main portion of the window, which displays information about each restricted user.

User Name. The name of the user that is restricted from HPSS access.
Realm Name. The name of the HPSS realm that encompasses the restricted user.
User ID. The identifier number of the restricted user.
Realm ID. The identifier number which identifies the realm which encompasses the restricted user.

Buttons

Reload List.
Chapter 3. Using SSM

3.1. The SSM System Manager

3.1.1. Starting the SSM System Manager

Before starting the SSM System Manager (SM), review the SM key environment variables described in the HPSS Installation Guide, Section 3.7.10: Storage System Management. If the default values are not desired, override them using the hpss_set_env utility. See the hpss_set_env man page for more information. To start the SM, invoke the rc.hpss script as follows:

% su
% /opt/hpss/bin/rc.hpss -m start

3.1.2.
To help mitigate this, when the thread pool is full, the System Manager notifies all the threads in the thread pool that are waiting on list updates to return to the client as if they just timed out as normal. This could be as many as 15 threads per client that are awakened and told to return, which makes those threads free to do other work.
port hpssgui and hpssadm clients must access to reach the System Manager. This task can be made a bit easier if the System Manager RPC program number is labeled in the portmapper. To do this, add a line for the System Manager in the /etc/rpc file specifying the program number and a convenient rpc service name such as “hpss_ssm” (note that names may not contain embedded spaces). Then this service name will show up in the rpcinfo output. The format of the /etc/rpc file differs slightly across platforms.
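For instance, the entry might be added as shown below. The program number is purely illustrative (use the System Manager's actual RPC program number), and the sketch writes to a scratch copy rather than the real /etc/rpc:

```shell
# Scratch copy standing in for /etc/rpc (illustration only)
RPCFILE=$(mktemp)

# Typical /etc/rpc format: <service name> <program number> [aliases]
# 536870913 is a made-up program number for the System Manager.
echo "hpss_ssm        536870913" >> "$RPCFILE"

grep hpss_ssm "$RPCFILE"
```

Once the real /etc/rpc carries such a line, the service name hpss_ssm appears in rpcinfo output next to the System Manager's program number.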
· When you have decided on the hpssgui command line that is best for your installation, it will probably be useful to put the command in a shell script for the convenience of all SSM Administrators and Operators. For example, create a file called "gui" and put the following in it:

/opt/hpss/bin/hpssgui.pl \
-m /my_directory/my_ssm.conf \
-d \
-S /tmp/hpssguiSessionLog.$(whoami)

Please refer to the hpssgui man page for an extensive list of command line options.
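One way to carry out that suggestion is sketched below; the my_ssm.conf path and session-log location are the example values from the text, not fixed names:

```shell
# Create the "gui" wrapper script described above and make it executable.
cat > gui <<'EOF'
#!/bin/sh
/opt/hpss/bin/hpssgui.pl \
    -m /my_directory/my_ssm.conf \
    -d \
    -S /tmp/hpssguiSessionLog.$(whoami)
EOF
chmod +x gui

ls -l gui
```

Administrators and Operators can then simply run ./gui, or place the script somewhere on their $PATH.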
• The proper authorization entries for the user are created in the AUTHZACL table.

3. The proper SSM configuration files are created and installed. See Section 3.3.1: Configuring the System Manager Authentication for SSM Clients, Section 3.3.2: Creating the SSM User Accounts, and Section 3.3.3: SSM Configuration File for the procedures for these tasks. See Section 3.3.4: SSM Help Files (Optional) on page 42 for instructions on installing the SSM help package. See Section 3.3.
% /opt/hpss/bin/hpssuser -add john -ssm
[ adding ssm user ]
1) admin
2) operator
Choose SSM security level (type a number or RETURN to cancel):
> 1
[ ssm user added : admin ]

After SSM users are added, removed, or modified, the System Manager will automatically discover the change when the user attempts to login. See the hpssuser man page for details. Removing an SSM user or modifying an SSM user's security level won't take effect until that user attempts to start a new session.
Access to the hpss_server_acl program, hpssuser program, to the HPSS DB2 database, and to all HPSS utility programs should be closely guarded. If an operator had permission to run these tools, he could modify the type of authority granted to anyone by SSM. Note that access to the database by many of these tools is controlled by the permissions on the /var/hpss/etc/mm.keytab file.
Keytabs are created for the user by the hpssuser utility when the krb5keytab or unixkeytab authentication type is specified. Keytabs may also be created manually with the hpss_krb5_keytab or hpss_unix_keytab utility, as described below. 3.3.2.3.1. Keytabs for Kerberos Authentication: hpss_krb5_keytab The hpss_krb5_keytab utility may be used to generate a keytab with Kerberos authentication in the form usable by the hpssadm program. See the hpss_krb5_keytab man page for details.
3.3.3. SSM Configuration File

The hpssgui and hpssadm scripts use the SSM configuration file, ssm.conf, for configuration. The mkhpss utility will create the SSM configuration file for the security mechanism supported by SSM. The mkhpss utility will store the generated ssm.conf at $HPSS_PATH_SSM; the default location is /var/hpss/ssm. The configuration file will contain host and site specific variables that the hpssgui and hpssadm scripts will read.
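For illustration, such a file might contain entries along the following lines. All values here are placeholders, and the exact variable set and syntax come from the ssm.conf that mkhpss generates at your site:

```ini
# Illustrative ssm.conf fragment -- all values are examples only
HPSS_SSM_SM_HOST_NAME=sm-host.example.com
JAVA_BIN=/usr/java/bin
KRB5_CONFIG=/etc/krb5.conf
HPSS_AUTHEN_TYPE=krb5
```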
File Option                Command Line Option    Functionality
HPSS_SSM_SM_HOST_NAME      -h                     System Manager hostname
HPSS_SSM_USER_PREF_PATH    -i                     Path to ssm preferences
JAVA_BIN                   -j                     Path to java bin directory
KRB5_CONFIG                -k                     Full path to krb5.conf
HPSS_AUTHEN_TYPE           -t                     Authenticator type

Information on tuning client polling rates for optimal performance is available in the hpssadm and hpssgui man pages. Options are specified, in precedence order, by 1) the command line, 2) the user's environment (see the man pages for environment variable names), 3) the SSM configuration file, or 4) internal default values.

3.3.3.1. login.conf

The login.
Note that having encryption types other than "des-cbc-crc" first on the "default_tkt_enctypes" and "default_tgs_enctypes" lines can cause authentication failures. Specifically, keytab files generated by the HPSS utility programs will use the first encryption type and only "des-cbc-crc" is known to work in all cases. Other encryption types are known to fail for some OSs and Java implementations.
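For instance, a krb5.conf whose [libdefaults] section keeps "des-cbc-crc" first would avoid this failure (the realm name below is a placeholder):

```ini
[libdefaults]
    default_realm = EXAMPLE.COM
    # des-cbc-crc listed first: the HPSS utilities generate keytabs with
    # the first encryption type, and only des-cbc-crc works in all cases
    default_tkt_enctypes = des-cbc-crc
    default_tgs_enctypes = des-cbc-crc
```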
3.3.5.1. Automatic SSM Client Packaging and Installation The hpssuser utility provides a mechanism for packaging all the necessary client files required to execute the hpssgui program on the user's desktop host. Refer to the hpssuser man page for more information on generating an SSM Client Package. These files may also be copied manually; see Section 3.3.5.2: Manual SSM Client Packaging and Installation, for a list of the required files. This example creates an SSM Client Package named “ssmclient.
These files may be installed in any location on the SSM client machines. The user must have at least read access to the files. The SSM startup scripts hpssgui.pl, hpssgui.vbs, hpssadm.pl, and hpssadm.vbs provide the user with a command line mechanism for starting the SSM client. The hpssgui.pl script is a Perl script for starting the SSM Graphical User Interface and the hpssadm.pl script is a Perl script for starting the SSM Command Line User Interface.
3.3.6.2. Solutions for Operating Through a Firewall

SSM can operate through a firewall in three different ways:
• The hpssgui and hpssadm can use ports exempted by the network administrator as firewall exceptions. See the -n option described in the hpssgui and hpssadm man pages.
• The hpssgui and hpssadm can contact the System Manager across a Virtual Private Network (VPN) connection. See the -p and -h options described in the hpssgui and hpssadm man pages.
• Verify that the proper version of Java is installed. Add the Java bin directory to the user's $PATH, or use the -j switch in the hpssgui script, or set JAVA_BIN in the user's ssm.conf file. Java can be downloaded from http://www.java.com.
• Obtain files from the server machine:
• Obtain the preferred hpssgui script for the client system from /opt/hpss/bin on the server machine and place it in the directory created on the client machine (see Section 3.3.5: SSM Desktop Client Packaging on page 42).
If access through the firewall is needed for other ports (e.g., the Kerberos KDC), set up a separate tunnel for each port the firewall does not allow through.
• On the client machine, run the GUI:
• For Kerberos authentication:
% hpssgui.pl -S hpssgui.sessionlog -k krb5.conf -n 49999 -h localhost
• For UNIX authentication:
% hpssgui.pl -S hpssgui.sessionlog -s unix -u example.com -n 49999 -h localhost

The HPSS Login window should open on the client machine for the user to log in.
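A typical way to build such a tunnel is an SSH local port forward; the hostnames below are placeholders, and port 49999 simply matches the -n option used in the invocations above:

```
% ssh -N -L 49999:sm-host.example.com:49999 user@gateway.example.com
```

While the tunnel is up, connections to port 49999 on the client's localhost are forwarded through the gateway to the System Manager host.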
applicable to the platform on which the graphical user interface is running. Custom Look and Feels are also available at http://www.javootoo.com

· -b "background color"
The only Look and Feel that supports color settings and themes is the metal Look and Feel. The color may be set by using the color name or hexadecimal Red/Green/Blue value.
include “static” text painted on the window background or labels on things like buttons. Text fields may appear as single or multiple lines and they may be “enterable” (the displayed data can be altered) or “non-enterable” (the displayed data cannot be changed directly). • Non-enterable text fields have gray backgrounds.
• Select/cut/copy/paste operations can be performed on enterable text fields; on non-enterable fields, only select and copy operations can be performed. • In some cases, modifying a field value or pressing a button causes the action to be performed immediately. A confirmation window will pop up to inform the user that all changes made to the data window will be processed if the user wishes to continue.
all of the current configuration’s field values. • Freeze - A checkbox that, while checked, suspends the automatic updates made to an SSM window. This allows reviewing information at the frozen point in time. Unchecking the checkbox will reactivate normal update behavior. • Refresh button - Requests an immediate update of the displayed information. This can be useful if the user does not wish to wait for an automatic update to occur.
menu item is available on all SSM windows. • Edit menu - The Edit Menu is located on all SSM data windows. From each Edit Menu, the user can access Cut, Copy and Paste functions which enable the user to remove data from text fields or transfer data among them. Editable text fields can be updated. Non-editable text fields can be copied, but not changed. Field labels cannot be copied. Most windowing systems provide keyboard shortcuts for the Cut, Copy, and Paste commands.
variables may be overridden. 3.8. Monitor, Operations and Configure Menus Overview The Monitor, Operations and Configure menus are used by the System Manager to monitor, control and configure HPSS. They are available only from the HPSS Health and Status window. This section provides a brief description on each submenu option listed under the Monitor, Operations and Configure menus. See related sections for more detailed information on the window that gets opened after selecting the menu option. 3.8.1.
Accounting Status. Opens the Subsystem list window where the Accounting Status and Start Accounting buttons can be found.
Log Files Information. Opens the Log Files Information window to display information for the HPSS log files, such as the log file's size and state.
Lookup HPSS Objects. This submenu lists the types of objects which can be looked up by specifying the object's identifying information.
• Cartridges & Volumes.
volume labels can be entered and a request to add the disks to a storage class can be submitted.
• Create Tape Resources. Opens the Create Tape Resources window where a list of tape volume labels can be entered and a request to add the tapes to a storage class can be submitted.
• Delete Resources. Opens the Delete Resources window allowing deletion of existing tape or disk storage resources.
• Export Volumes.
of the accounting policy. Only one accounting policy is allowed.
• Location. Opens the Location Policy window allowing configuration and management of the location policy. Only one location policy is allowed.
• Logging. Opens the Logging Policies list window allowing configuration and management of the logging policies.
• Migration. Opens the Migration Policies list window allowing configuration and management of the migration policies.
• Purge.
The HPSS Login window appears after starting the hpssgui script. The user must supply a valid HPSS user name and password in order to access SSM and monitor HPSS. If a login attempt is unsuccessful, review the user session log for an indication of the problem. See the hpssadm or hpssgui man pages for more information about the user session log.

Field Descriptions

User ID. Enter a valid user ID here.
Password. Enter the password for the user ID.
OK.
mismatched versions may cause compatibility problems. 3.9.2. About HPSS The About HPSS window displays version information and a portion of the HPSS copyright statement. The About HPSS window is accessible by selecting the Help menu's “About HPSS” submenu from any of the hpssgui windows. The HPSS System Name and System Manager Version are not displayed when the About HPSS window is requested from the HPSS Login window.
When a user successfully connects to the System Manager through the Login window, the HPSS Health and Status window replaces the Login window on the screen. The HPSS Health and Status window will remain on the screen until the user exits or logs out. It provides the main menu and displays information about the overall status of HPSS. The HPSS Health and Status window is composed of several high-level components, each of which is discussed in its own section below.

3.9.3.1.
icon is red, the client's connection to the System Manager is lost; when it is green, the connection is active.

3.9.3.2. HPSS Status

On the upper section of the HPSS Health and Status window are four status fields that represent the aggregate status of the HPSS system. These fields are:

Servers. Displays the most severe status reported to SSM by any HPSS server.
Devices and Drives. Displays the most severe status as reported to SSM for any configured Mover device or PVL drive.
Storage Class Thresholds.
In addition to the text which describes the status, these fields are displayed with colored icons. The icon color depicts that status as follows: • Red - Major and Critical problems • Magenta – Minor problems • Yellow - Unknown, Stale, Suspect, and Warning problems • Green - Normal, no problem Click on the button to the right of the status icon to get more details. For Servers, Devices and Drives, and Storage Class Thresholds the button will open the corresponding SSM list window in sick list mode.
HPSS Statistics fields show general trends in HPSS operations; the numbers are not all-inclusive. Some values may fluctuate up and down as servers are started or shut down. Some values, such as Bytes Moved, can be reset to zero in individual Movers and by SSM users.

Bytes Moved. Total bytes moved as reported by all running Movers.
Bytes Used. Total bytes stored on all disk and tape volumes as reported by all running Core Servers.
Data Transfers. Total data transfers as reported by all running Movers.
user the ability to hide or display elements of the HPSS Health and Status window in order to optimize the viewable area. Under the View Menu there is a menu item and checkbox for each window element that can be hidden. If the box contains a check mark then the corresponding section of the HPSS Health and Status window that displays this element will be visible. If the checkbox is empty, then the element is hidden from the window view.
Field Descriptions

Start Time. The time the System Manager was started.
Uptime. The elapsed clock time since the System Manager was started.
CPU Time. The amount of CPU time that the System Manager has consumed.
Memory Usage. The amount of memory that the System Manager is currently occupying.
Process ID. The process ID of the System Manager.
Hostname. The name of the host where the System Manager is running.
RPC Calls to Servers. The number of RPCs the System Manager has made to other HPSS servers.
RPC Interface Information. Information about the server and client RPC interfaces.
Thread Pool Size and/or Request Queue Size to help with the System Manager performance. However, increasing these two parameters could cause the System Manager to require more memory.
• Data Change Notifications. The number of data change notifications received from servers. (Server RPC Interface only)
• Unsolicited Notifications. The number of notifications which the System Manager received from other HPSS servers but which it did not request from them. (Server RPC Interface only)
• Log Messages.
• Hostname. The name of the host where the client is running.
• Connections. The number of RPC connections this client has to the System Manager.
• Start Time. The time that the client connected to the System Manager.
• Connect Time. The elapsed time since the client connected to the System Manager.
• Idle Time. The elapsed time since the System Manager received an RPC from the client.
• Cred Refreshes.
Field Descriptions

User Login Name. The name that the user entered as the User ID when logging into HPSS.
User Authority. The authority level that the user has in order to perform operations in SSM. This can be admin or operator.
Login Time. The time that the user logged into SSM.
Total Session Time. The amount of time that the user has had the graphical user interface running.
Total Time Connected to System Manager. The amount of time that the user has been connected to the System Manager.
Percent Memory Free. The ratio of free memory to total memory in the hpssgui process. Total Windows Opened During This Session. The number of windows created during the current user session. 3.10. SSM List Preferences When a user logs into SSM, the System Manager reads the saved preferences file and loads them into the SSM Session, if they exist. Each SSM list type has a Default preferences record. The Default preferences configuration is set so that the more commonly used columns are displayed.
HPSSGUI_USER_CFG_PATH or the configuration file entry HPSS_SSM_USER_PREF_PATH. If this option is not specified, the default value is :/hpss-ssm-prefs. The user must have permissions to create the preferences file in the directory. Preferences windows contain filters for controlling the data displayed in the list window. Columns in the list window can be rearranged and resized by dragging columns and column borders on the list window itself.
Save. Save the current preference settings to the preferences file using the same preference name. Delete. Delete the currently displayed preference settings from the preferences file. Sick and Default preference settings cannot be deleted. Reload. Reread the preference settings from the preferences file. Apply. Apply the current preference configuration to the parent SSM list window. Pressing Apply does not save the changes to the preferences file.
Chapter 4. Global & Subsystem Configuration This chapter discusses two levels of system configuration: global and storage subsystem. The global configuration applies to the entire HPSS installation while subsystem configurations apply only to servers and resources allocated to storage subsystems. For new HPSS systems, it is recommended that the first step of configuration be the partial definition of subsystems.
This window allows you to manage the HPSS global configuration record. Only one such record is permitted per HPSS installation. To open the window, on the Health and Status window select the Configure menu, and from there the Global menu item.

Field Descriptions

System Name. An ASCII text string representing the name for this HPSS system.
Root Core Server. The name of the Core Server which manages the root fileset ("/") in the HPSS namespace.

Advice - The root Core Server should be selected with care.
Storage Subsystem configuration. Root User ID. The UID of the user who has root access privileges to the HPSS namespace. This only applies if the Root Is Superuser flag is set. COS Change Stream Count. The number of background threads that run in the Core Server to process Class of Service change requests. This field may be overridden on the Storage Subsystem configuration. Global Flags: Root Is Superuser. If checked, root privileges are enabled for the UID specified in the Root User ID field.
This window lists all the subsystems in the HPSS system and provides the ability to manage these subsystems. To open the window, from the Health and Status window select the Configure menu, and from there select the Subsystems menu item. To create a new subsystem, click on the Create New button. To configure an existing subsystem, select it from the list and click on the Configure button. When creating or configuring a subsystem, the Storage Subsystem Configuration window will appear.
subsystem is selected in the list. Delete - Delete the selected subsystem(s). This button is disabled unless a subsystem is selected in the list. Always contact HPSS customer support before deleting a subsystem definition. An improperly deleted subsystem can cause serious problems for an HPSS system. Refer to Section 4.2.5 Deleting a Storage Subsystem on Page 81. Related Information HPSS Installation Guide, Section 2.2.7: Storage Subsystems and Section 2.3.3: HPSS Storage Subsystems Section 13.2.2.
This window allows an administrator to manage the configuration of a storage subsystem. The Add button is only displayed during the creation of a new configuration. The Update button is displayed when an existing configuration is being modified. To open this window for creation of a new subsystem, click the Create New button on the Subsystems window. To open this window for an existing subsystem, select the subsystem from the Subsystems window and click the Configure button.
user does not specify a COS or any hints with the creation request. The global configuration specifies a default COS for an entire HPSS installation. Selecting a COS on the storage subsystem configuration window allows the global value to be overridden for a particular subsystem. If the field is blank, the global default COS will be used. If no Classes of Service are configured, this value can be updated after the Classes of Service are in place. Subsystem Name.
DB Log Monitor Interval. The Core Server will check consistency of Database Logs and Backup Logs at the indicated interval, specified in seconds. The logs are consistent if both primary and backup log directories exist and contain log files with the same names. The minimum value for this field is 300 seconds (5 minutes). A value of 0 will turn off DB Log monitoring. This field may be overridden on the Storage Subsystem configuration. COS Change Stream Count.
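The log-consistency test described for the DB Log Monitor Interval can be sketched as a directory comparison. This is an illustrative sketch of the stated rule, not the Core Server's actual implementation; the function name and directory paths are hypothetical.

```python
import os

def db_logs_consistent(primary_dir, backup_dir):
    """Logs are consistent when both the primary and backup log
    directories exist and contain log files with the same names."""
    if not (os.path.isdir(primary_dir) and os.path.isdir(backup_dir)):
        return False
    return sorted(os.listdir(primary_dir)) == sorted(os.listdir(backup_dir))
```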
Gatekeeper is already configured, simply add it to your storage subsystem's configuration. However, if it is not yet configured, it will be necessary to wait until Section 4.2.3.4: Assign a Gatekeeper if Required on page 80 to add the Gatekeeper. 6. Set the metadata space thresholds and the update interval. Typical values are 75 for warning, 90 for critical and 300 to have the metadata space usage checked every 300 seconds. 7. Set the DB Log Monitor Interval.
4.2.3.7. Migration and Purge Policy Overrides The migration and purge policies contain two elements, the basic policy and the storage subsystem specific policies. This can be seen on the Migration Policy and Purge Policy windows. If a given migration or purge policy does not contain any subsystem specific policies, then the basic policy applies across all storage subsystems and no other configuration is needed.
E. Issue the following SQL command: db2> select count(*) from nsobject The result of the command should indicate 2 rows in this table. 3. If any of these checks gives an unexpected result, do not delete the subsystem. Contact HPSS customer support. When deleting an existing storage subsystem, it is critical that all of the different configuration metadata entries described in section 4.2.3: Storage Subsystem Configuration Window on page 76 for the storage subsystem be deleted.
Chapter 5. HPSS Servers Most HPSS Server administration is performed from the SSM graphical user interface Servers list window. Each HPSS server has an entry in this list. 5.1. Server List This window facilitates management of the configured HPSS servers. From this window, an HPSS server can be started, shut down, halted, reinitialized, and notified of repair. Once a server is up and running, SSM monitors and reports the server state and status.
The server’s configuration should be carefully reviewed to ensure that it is correct and complete. Check the Alarms and Events window and the HPSS log file to view SSM alarm messages related to configuration problems. This situation can be caused by: • A DB2 record required by the server is missing or inaccessible. • The principal name configured for the server does not match the HPSS_PRINCIPAL_* environment variable for the server's type. • Not Executable - The server is configured as non-executable.
rate (see the hpssgui/hpssadm man pages for more details). If a server is configured as executable but is not running, SSM will treat it as an error. Therefore, if a server is not intended to run for an extended period, its Executable flag should be unchecked. SSM will stop monitoring the server and will not report the server-not-running condition as a critical error. This will also help reduce the workload for SSM. Type. Indicates the type of the server in acronym form.
Execute Host. The Execute Hostname field from the server's basic configuration record. This field is intended to specify the hostname on which the server is supposed to run; however, no checking is done to verify if the server is actually running on the specified host. This field is only used by the SSM to locate the Startup Daemon that manages this server.
server hangs up or otherwise won't respond to the Shutdown command. Force Connect - Request the System Manager to immediately attempt to connect to the selected servers. The System Manager routinely attempts to connect to any unconnected servers; using this button will simply cause the next attempt to occur right away, instead of after the normal retry delay. Information Buttons. These buttons allow you to open information windows for servers.
• Storage System Manager • Startup Daemon (on each host where an HPSS server will be executing) The fields of the Server Configuration window are divided into the following sections. The Basic Controls section is at the top of the window and the other sections are on individual tabs: • Basic Controls. Server identification and type information. • Execution Controls. Information required to properly control the server's execution. • Interface Controls.
● Section 5.1.3: Mover Specific Configuration on page 102 ● Section 5.1.1: Physical Volume Repository (PVR) Specific Configuration on page 109 ● Details about all of the other sections on this window, which apply to all server types, are described in Section 5.1.1: Common Server Configuration. To view the Server Configuration window for an existing server, bring up the Servers list window and select the desired server.
Field Descriptions Server Name. A unique descriptive name given to the server. Ensure that the Server Name is unique. A server’s descriptive name should be meaningful to local site administrators and operators, in contrast to the server’s corresponding UUID, which has meaning for HPSS. For HPSS systems with multiple subsystems it is very helpful to append the subsystem ID to the Server Name of subsystem-specific servers. For instance, “Core Server 1” for the Core Server in subsystem 1.
Execute Hostname. This is the hostname of the node on which the server will execute. It must match the Execute Hostname of the Startup Daemon that is to manage this server. For most servers, setting this field is straightforward, but for remote Movers, this indicates the node on which the Mover administrative interface process runs (not the node where the remote Mover process runs). Note that if the Execute Hostname is changed, it is likely that the RPC program number will change as well.
5.1.1.2. Interface Controls The Interface Controls section of the Server Configuration window is common to all servers. In the example window above, the server displayed is a Core Server. Field Descriptions Maximum Connections. The maximum number of clients that this server can service at one time. This value should be set based on the anticipated number of concurrent clients. Too large a value may slow down the system. Too small a value will mean that some clients are not able to connect.
The Security Controls section of the Server Configuration window is common to all servers. In the example window above, the server displayed is a Core Server. Field Descriptions Principal Name. The name of the principal the server will use to authenticate. Protection Level. The level of protection that will be provided for communication with peer applications. The higher the level of protection, the more encryption and overhead required in communications with peers.
Authenticator. The argument passed to the authentication mechanism indicated by the Authenticator Type configuration variable and used to validate communications. If it is a keytab, the server must have read access to the keytab file. Other access permissions should not be set on this file or security can be breached. For the Not Configured or None values of the Authenticator Type, this field can be left blank. 5.1.1.1.
• UTIME. Core Server bitfile time modified events. • ACL_SET. Core Server access control list modification events. • CHBFID. Core Server change bitfile identifier events. • BFSETATTRS. Core Server set bitfile attribute events. 5.1.1.1. Log Policy The server Log Policy may also be accessed from the Logging Policies window. It is not necessary to define a log policy for every server. If no server-specific log policy is defined for the server, the server will use the System Default Logging policy.
• TRACE. If selected, Trace messages generated by the server are sent to the log. It is recommended that this be OFF for all servers except the Mover. These messages give detailed information about program flow and are generally of interest only to the server developer. In normal operation, logging Trace messages can flood the log with very low level information. In particular, it is important to avoid TRACE for the SSM System Manager Log Policy. • STATUS.
COS Change Retry Limit, Tape Dismount Delay, Tape Handoff Delay, PVL Max Connection Wait, Fragment Trim Limit and Fragment Smallest Block can be changed in the Core Server while the server is running by changing the value on this screen, updating the metadata, then re-initializing the appropriate Core Server. The Core Server re-reads the metadata and changes its internal settings. The changes take effect the next time the settings are used by the server. See Section 5.2.
The server uses built-in default values for these settings, but if the environment variables can be found in the server's environment, the server uses those values. The following is a list of the names of the variables and the aspects of the server's operation they control. Since the environment is common to all subsystems, all Core Servers in an HPSS installation are subject to these values.
for both gatekeeping and account validation. If multiple Gatekeepers are configured, then any Gatekeeper may be contacted for account validation requests. Note: If a Gatekeeper is configured, then it will either need to be running or marked non-executable for HPSS Client API requests to succeed in the Core Server (even if neither Gatekeeping nor Account Validation is occurring); this is due to the HPSS Client API performing internal accounting initialization.
5.1.4. Log Client Specific Configuration This window controls the local log settings that will be in effect for the node on which this Log Client runs. Field Descriptions Client Port. The port number for communication between the Log Client and the HPSS Servers. The default value is 8101. Ensure that the specified port is not being used by other applications. The port number must be different from the Daemon Port used by the Log Daemon. Maximum Local Log Size.
Related Information Section 9.5: Managing Local Logging on page 301. 5.1.1. Log Daemon Specific Configuration This window controls configuration of the log daemon and the central HPSS log. Field Descriptions Log File Maximum Size. The maximum size in bytes of each central log file. The default value is 5,242,880 (5 MB). Once the maximum size is reached, logging will switch to a second log file. The log file that filled up will then be archived to an HPSS file if the Archive Flag is on.
Field Descriptions Storage Class Update Interval (seconds). The interval that indicates how often the MPS will query the Core Server in its subsystem to get the latest storage class statistics. This is also the interval the MPS uses to check whether it needs to initiate a purge operation on the storage class based on the associated purge policy. The valid range for this field is 10-600. Maximum Volumes for Whole File Migration.
and usage. The trade-off for this value is that large buffer sizes will use more system memory and may be inefficient for small transfers (e.g., if the Mover buffer size is 4MB, but client requests are 512KB, the Mover will not achieve any double buffering benefit because the entire amount of the transfer fits in one Mover buffer). A smaller buffer size will cause device and network I/O to be interrupted more often, usually resulting in reduced throughput rates for all but the smallest transfers.
Range End. If this field is zero, Port Range End field must also be zero. Port Range End. Used in conjunction with Port Range Start (see above). Valid values are zero or any TCP port number to which the Mover may bind (that is greater than or equal to the value of Port Range Start). The default value is 0. If non-zero, this field must be equal to or greater than Port Range Start. If this field is zero, Port Range Start field must also be zero. 5.1.3.1.
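The constraints on Port Range Start and Port Range End reduce to a small check: either both fields are zero, or both are set with End greater than or equal to Start. The following is an illustrative sketch of the stated rules, not HPSS code.

```python
def valid_port_range(start, end):
    """Either both fields are zero (no port restriction), or both
    are non-zero legal TCP port numbers with end >= start."""
    if start == 0 and end == 0:
        return True
    return 0 < start <= end <= 65535
```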
This will cause inetd to run the executable /opt/hpss/bin/hpss_mvr_tcp under the root user ID when a connection is detected on port 5002. The Mover process uses the /var/hpss/etc/mvr_ek file to read the encryption key that will be used to authenticate all connections made to this Mover. After modifying the /etc/inetd.
/etc/xinetd.d file. For example, if the encryption key in the Mover’s type specific configuration is 1234567890ABCDEF, then the encryption key file (/var/hpss/etc/ek.mvr1) should contain: 0x12345678 0x90ABCDEF 5.1.3.1.3. /var/hpss/etc Files Required for Remote Mover The Mover process on the remote machine requires access to the following files in /var/hpss/etc: • auth.conf • authz.conf • env.conf • ep.conf • HPSS.conf • ieee_802_addr • site.
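The encryption key file format shown above splits the 16-hex-digit key into two 0x-prefixed words. That split can be expressed as follows; this is an illustrative sketch and the function name is hypothetical.

```python
def format_encryption_key(key_hex):
    """Render a 16-hex-digit Mover encryption key as the two
    0x-prefixed words stored in the key file (e.g. ek.mvr1)."""
    if len(key_hex) != 16:
        raise ValueError("expected 16 hex digits")
    return "0x{0} 0x{1}".format(key_hex[:8], key_hex[8:])
```

For the key 1234567890ABCDEF from the example above, this yields the line "0x12345678 0x90ABCDEF".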
Table 1. IRIX System Parameters

Parameter Name   Minimum Value   Parameter Description
semmsl           512             Maximum number of semaphores per set
maxdmasz         513             Maximum DMA size (required for Ampex DST support)

Solaris
Solaris system parameters which affect the remote Mover can be modified by editing the /etc/system configuration file and rebooting the system. The following table defines the parameter names and minimum required values.
Table 3. Linux System Parameters

Parameter Name   Minimum Value   Parameter Description
SEMMSL           512             Maximum number of semaphores per ID
SHMMAX           0x2000000       Maximum shared memory segment size (bytes)

5.1.3.1.1. Setting Up Remote Movers with mkhpss The mkhpss utility may be used to copy the files needed for a remote Mover from the root subsystem machine, to create the files which may not be copied, to install the required files on the remote Mover machine, and to configure the inetd to run the remote Mover.
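The Linux minimums in Table 3 can be checked against a running kernel: SEMMSL is the first field of kernel.sem (/proc/sys/kernel/sem) and SHMMAX is kernel.shmmax (/proc/sys/kernel/shmmax). The sketch below assumes this standard /proc layout and takes the file contents as strings so it can be used on captured output as well.

```python
def check_linux_minimums(sem_line, shmmax_value):
    """sem_line: contents of /proc/sys/kernel/sem (four whitespace-
    separated fields, SEMMSL first).  shmmax_value: contents of
    /proc/sys/kernel/shmmax.  Returns pass/fail per parameter."""
    semmsl = int(sem_line.split()[0])
    shmmax = int(shmmax_value)
    return {"SEMMSL": semmsl >= 512, "SHMMAX": shmmax >= 0x2000000}
```

For example, a stock kernel.sem of "250 32000 32 128" fails the SEMMSL minimum and would need to be raised before running a remote Mover on that node.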
to all nodes or unexpected results may occur. • The Mover must be built with the LFT option. This is the default option for all Movers. If not all Movers have been built with this option, clients must explicitly specify a class of service which is valid for a Mover supporting the local file transfer option. • The Mover must be running as root. If there are other Movers running on the same node, they must also run as root to take advantage of the Mover-to-Mover shared memory data transfer.
Specific Configuration Window on page 120, Section 5.1.1.1: 3494 PVR Specific Configuration on page 111, Section 5.1.1.2: AML PVR Specific Configuration on page 113, and Section 5.1.1.2: SCSI PVR Specific Configuration Window on page 118). These sections provide additional vendor-specific advice on PVR/robot configuration. The AML PVR is supported by special bid only. 5.1.1.1. Operator PVR Specific Configuration Window Field Descriptions Cartridge Capacity.
5.1.1.1. 3494 PVR Specific Configuration 5.1.1.1.1. 3494 PVR Specific Configuration Window Field Descriptions Cartridge Capacity. The total number of cartridge slots in the library dedicated to this HPSS PVR. This may or may not be the total cartridge capacity of the library; a site might use part of the library for some other HPSS PVR or for some non-HPSS application.
only if the Support Shelf Tape checkbox is selected. The alarm value must be 2 or greater. Dismount Delay. When Defer Dismounts is checked, this value is used by the PVL to determine the number of minutes that dismounts are delayed after the last data access. Retry Mount Time Limit. The default value for this field is -1. When the default value (-1) is used, if an error is encountered during a PVR mount operation, the mount will pend and be retried every 5 minutes.
named /dev/lmcp0 and /dev/lmcp1 respectively. Control connections must be made prior to configuration of the /dev/lmcpX devices or undefined errors may result. For Linux systems, the symbolic library name defined in /etc/ibmatl.conf (e.g., 3494a) should be used. For RS-232 and Ethernet connected robots, the device special files support both command and async capabilities.
Score = Weight 1 * Cartridges from this job mounted on this drive’s controller + Weight 2 * Cartridges from other jobs mounted on this drive’s controller + Weight 3 * Units of distance from the cartridge to the drive This method has the effect of distributing a striped tape mount across as many controllers as possible for the best performance. It also will try to pick controllers that are currently driving a minimum number of tapes. So, in an environment with many tape drives per controller, the best performance will be achieved by minimizing the load on any one controller.
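The drive-selection method above can be sketched as computing the weighted score for each candidate drive and taking the minimum, breaking ties at random. This is an illustrative sketch under the stated scoring rule, not the PVR's actual code; all names are hypothetical.

```python
import random

def drive_score(weights, same_job_carts, other_job_carts, distance):
    """Score = W1 * cartridges from this job on the drive's controller
             + W2 * cartridges from other jobs on that controller
             + W3 * units of distance from cartridge to drive."""
    w1, w2, w3 = weights
    return w1 * same_job_carts + w2 * other_job_carts + w3 * distance

def select_drive(weights, candidates):
    """candidates: list of (drive_name, same_job, other_job, distance).
    The drive with the lowest score wins; ties are broken at random."""
    scored = [(drive_score(weights, s, o, d), name)
              for name, s, o, d in candidates]
    best = min(score for score, _ in scored)
    return random.choice([name for score, name in scored if score == best])
```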
requesting client to the AML’s storage positions, drives, and Insert/Eject units. Access configurations for clients are set in the configuration file C:\DAS\ETC\CONFIG on the OS/2 PC. The client name can be up to 64 alphanumeric characters in length and is case sensitive. Server Name. TCP/IP host name or IP address of the AML OS/2-PC DAS server. This value must be defined in the network domain server and must be resolvable during DAS start. The server name is set in the configuration file C:\CONFIG.
1. Make sure the AMU archive management software is running and the hostname is resolved.
2. Select an OS/2 window from the Desktop and change the directory to C:\DAS:
   C:> cd \das
3. At the prompt, type tcpstart and make sure that TCP/IP gets configured and that the port mapper program is started:
   C:\das> tcpstart
4.
Score = Weight 1 * Cartridges from this job mounted on this drive’s controller + Weight 2 * Cartridges from other jobs mounted on this drive’s controller + Weight 3 * Units of distance from the cartridge to the drive This method has the effect of distributing a striped tape mount across as many controllers as possible for the best performance. It also will try to pick controllers that are currently driving a minimum number of tapes.
• Support Shelf Tape. If ON, the PVR and the PVL will support the removal of cartridges from the tape library using the shelf_tape utility. Command Device. The name of the device that the PVR can use to send commands to the robot. For AIX systems, this is generally /dev/smc0. For Linux systems, use the symbolic library name defined in /etc/ibmatl.conf. 5.1.1.1.1.
Score = Weight 1 * Cartridges from this job mounted on this drive’s controller + Weight 2 * Cartridges from other jobs mounted on this drive’s controller + Weight 3 * Units of distance from the cartridge to the drive This method has the effect of distributing a striped tape mount across as many controllers as possible for the best performance. It also will try to pick controllers that are currently driving a minimum number of tapes.
• Enforce Home Location. If ON, the SCSI PVR will always try to dismount a mounted cart back to its home location. Otherwise, it will just use the first free slot. The scsi_home utility can be used to view and manipulate the home location values. Serial Number. The serial number of the robot, obtained from device_scan. This serial number will allow the SCSI PVR to automatically look up all available control paths upon startup. 5.1.1.1.
requested shelf tape has been checked-in. The PVR will continue checking at this interval until the tape is checked-in. This field applies only if the Support Shelf Tape checkbox is selected. The retry value must be 30 or greater. Shelf Tape Check-In Alarm. The PVR will periodically log alarm messages when a requested shelf tape has not been checked-in. This field specifies the number of minutes between alarms. This field applies only if the Support Shelf Tape checkbox is selected.
HPSS will use any Cartridge Access Port (CAP) in the STK Robot that has a priority greater than zero. When it needs a CAP, HPSS will pick the highest priority CAP that is currently available. At least one CAP must be assigned a non-zero priority. See the STK Automated Cartridge System Library Software (ACSLS) System Administrator’s Guide for procedures to set CAP priority. ACSLS Packet Version. See the STK Automated Cartridge System Library Software (ACSLS) System Administrator’s Guide for details.
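The CAP selection rule above reduces to: consider only CAPs with priority greater than zero, and of those currently available, pick the one with the highest priority. The following is an illustrative sketch of that rule, not HPSS code.

```python
def choose_cap(caps):
    """caps: list of (cap_id, priority, available).  Returns the id of
    the highest-priority available CAP with priority > 0, or None if
    no CAP is eligible."""
    eligible = [(prio, cap_id) for cap_id, prio, avail in caps
                if prio > 0 and avail]
    return max(eligible)[1] if eligible else None
```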
Would you like Multi-Host or Single-Host testing? Enter one of the following followed by ENTER: M Multi-host testing S Single-host testing X eXit this script Enter choice: m Would you like to define the server side or client side for Multi-Host testing? Enter one of the following followed by ENTER: S Server side C Client side Enter choice: c The Remote Host Name is the name of the server which has the ACSLS software (or simulator) running on it.
server are removed from the HPSS configuration. The steps described in this section are general guidelines. Specific procedures should be worked out with the aid of HPSS technical support so that the details of the system's configuration can be considered. A server’s configuration should be removed only when it is no longer needed. To modify a server’s configuration, update the existing configuration instead of deleting the configuration and creating a new one.
pressing the Delete button. 5.1. Monitoring Server Information A server that is running and connected to SSM will allow the SSM user to view and update its information. This section describes the server execution statuses and configuration information. A typical HPSS server allows the SSM users to control its execution and monitor its server related data through the Basic Server Information window and the server specific information windows. These windows are described in the following subsections. 5.1.1.
• Busy - The server is busy performing its function. Most servers do not update Usage State dynamically, so it is unlikely you will see this value reported. • Unknown - The server has not reported a recognized Usage State. Administrative State. The Administrative State of the server. The possible states are: • Shut Down - The server shut itself down in an orderly way. • Force Halt - The server accepted a Force Halt command and terminated immediately.
Communication Status: Normal However, when the server is experiencing errors or encountering abnormal conditions, it will change the appropriate states and statuses to error values, notify SSM of the changes, and issue an alarm to SSM. Refer to Section 9.6.2: Alarm/Event Information on page 303 for more information. The Startup Daemon and the System Manager do not have their own Basic Server Information windows. 5.1.1.
Global Database Name. The name of the global database for the HPSS system. Subsystem Database Name. The name of the database which contains the subsystem tables used by this Core Server. Schema Name. The name of the database schema. Root Fileset Name. The name of the root fileset used by the Core Server. Root Fileset ID. The fileset id for the root fileset used by the Core Server. Maximum Open Bitfiles. The maximum number of bitfiles that can be open simultaneously. Maximum Active I/O Reqs.
• File Deletes. The number of bitfile delete requests processed in the Core Server since startup or last reset of the statistics. • Last Reset Time. The last time the subsystem statistics were reset. If this value is 0, the statistics have not been reset since server startup. Name Space Statistics: • Files. The number of files managed by this Core Server. • Directories. The number of directories managed by this Core Server. • Symbolic Links. The number of symbolic links managed by this Core Server.
• Free Tape Bytes. This is an estimate based on the sum of the estimated sizes of the partially written and unwritten tape volumes. It is not, and cannot be, an accurate value as the amount of data that can be written on tapes varies with individual tape volumes and data compression levels. Options: • Can change UID to self if has Control Perm. If this flag is ON, any user having Control permission to an object can change the UID of that object to their own (but not any other) UID.
must be greater than zero and is used if the Gatekeeping Site Interface returns a wait time of zero for the create, open, or stage request being retried. Changing the value of this field will cause the Gatekeeper to use the new value until the next restart at which point it will then go back to using the value defined in the Gatekeeper Configuration window. Refer to Section 5.1.2 Gatekeeper Specific Configuration on page 98. Site Policy Pathname (UNIX).
Server when the Gatekeeper is monitoring Requests and a client disconnects. • Get Monitor Types. Statistics from the gk_GetMonitorTypes API. This API is called by the Core Server to determine what types of Requests are being monitored by the Gatekeeper. • Pass Thrus. Statistics from the gk_PassThru API. • Queries. Statistics from the gk_Query API. • Read Site Policies. Statistics from the gk_ReadSitePolicy API. • Last Reset Time. The time stamp when the Statistics were last (re)set to zero.
Minimum Location Map Update Time. The shortest time, in seconds, needed for a location map update. Average Location Map Update Time. The average time, in seconds, needed for a location map update. Maximum Location Map Update Time. The longest time, in seconds, needed for a location map update. Associated Button Descriptions Reset. Reset the Statistics to 0. Related Information HPSS Error Manual: Chapter 1, Section 1.2.10: Location Server Problems. 5.1.1.2.
changes made to fields on this window are sent directly to the Mover after the appropriate button is pressed and are effective immediately. Field Descriptions Server Name. The descriptive name of the Mover. Number of Request Tasks. The number of Mover request-processing tasks that currently exist. Number of Active Requests. The number of active requests that are being handled by the Mover. Time of Last Statistics Reset. The time and date when the Mover statistics were last reset. Buffer Size.
This window allows you to view the type-specific information associated with a PVL. Field Descriptions Server Name. The descriptive name of the PVL. Total Volumes. The total number of volumes that have been imported into the PVL. Total Repositories. The total number of PVRs in the Servers list window. Total Drives. The total number of drives controlled by this PVL. 5.1.1.2.
Characteristics. Flags for the PVR: • Defer Dismounts. If ON, the PVL will delay the dismounting of a tape cartridge until the drive is required by another job or until the Dismount Delay time limit is exceeded. • Support Shelf Tape. If ON, the PVR and the PVL will support the removal of cartridges from the tape library using the shelf_tape utility. 5.1.1.2.1. 3494 PVR Information Window Field Descriptions Server Name. The descriptive name of the PVR. Total Cartridges.
heavily used controller, then a more distant drive will be selected. Retry Mount Time Limit. The default value for this field is -1. When the default value (-1) is used, if an error is encountered during a PVR mount operation, the mount will pend and be retried every 5 minutes. Setting a value in this field will change the mount behavior to periodically retry the mount until the specified time limit is exceeded. Once exceeded, an error is generated and the mount request is canceled.
5.1.1.2.1. AML PVR Information Window Field Descriptions Server Name. The descriptive name of the PVR. Total Cartridges. The number of cartridges currently being managed by the PVR. Cartridge Capacity. The total number of cartridge slots in the library dedicated to this HPSS PVR. This may or may not be the total cartridge capacity of the library; a site might use part of the library for some other HPSS PVR or for some non-HPSS application.
If the number of consecutive mount errors which occur on any drive in this PVR equals or exceeds this value, the drive is automatically locked by the PVL. The only mount errors that apply are those set through the Retry Mount Time Limit mechanism. The Drive Error Count field in the PVL Drive Information records the number of consecutive errors on a drive-by-drive basis. To turn off the automatic drive disable feature, set the Drive Error Limit to 0 or -1.
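The automatic drive-disable rule reduces to a simple comparison. This is an illustrative sketch of the behavior described above; the function and parameter names are hypothetical.

```python
def should_lock_drive(consecutive_mount_errors, drive_error_limit):
    """Lock the drive once consecutive mount errors (those counted via
    the Retry Mount Time Limit mechanism) reach the limit.  A limit of
    0 or -1 turns the automatic-disable feature off."""
    if drive_error_limit <= 0:
        return False
    return consecutive_mount_errors >= drive_error_limit
```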
alarm. Same Job on Controller, Other Job on Controller, & Distance To Drive. These values are used by the PVR when selecting a drive for a tape mount operation. The three values are essentially weights that are used to compute an overall score for each possible drive. After the score has been calculated, the drive with the lowest score is selected for the mount. If two or more drives tie for the lowest score, one drive is selected at random.
Shelf Tape Check-In Retry. The number of seconds the PVR will wait before asking the robot if a requested shelf tape has been checked-in. The PVR will continue checking at this interval until the tape is checked-in. This field applies only if the Support Shelf Tape checkbox is selected. The retry value must be 30 or greater. Shelf Tape Check-In Alarm. The PVR will periodically log alarm messages when a requested shelf tape has not been checked-in. This field specifies the number of minutes between alarms.
the best performance. It also will try to pick controllers that are currently driving a minimum number of tapes. So, in an environment with many tape drives per controller, the above algorithm will minimize the load on any one controller. The Distance To Drive helps minimize mount times by mounting the tape in a physically close drive. All other things being equal, the tape will be mounted in the closest drive.
5.1.1.2.1. STK PVR Information Window Field Descriptions Server Name. The descriptive name of the PVR. Total Cartridges. The number of cartridges currently being managed by the PVR. Cartridge Capacity. The total number of cartridge slots in the library dedicated to this HPSS PVR. This may or may not be the total cartridge capacity of the library; a site might use part of the library for some other HPSS PVR or for some non-HPSS application.
Server to set the VV Condition of the associated tape volume to DOWN. Once in DOWN state, the volume will no longer be available for read or write operations. For further information about the Core Server VV Condition, see Section 4.5.4.2: Core Server Tape Volume Information Window on page 271. Drive Error Limit. This field is used in conjunction with the PVR Server Retry Mount Time Limit.
HPSS administrators and operators may use SSM to view the active RTM requests. The RTM Summary window lists a summary of the current RTM requests. The RTM Detail window displays detailed information for selected RTM requests. 5.1.1. RTM Summary List Field Descriptions RTM Summary List. This is the main portion of the window which displays various information about each RTM request summary. ID. The RTM request identifier. Action. The action or operation that this request is currently executing.
5.1.2. RTM Detail The RTM Detail window displays a snapshot of the details of the selected RTM requests from the RTM Summary List window. This may contain information from multiple servers, Gatekeeper, Core and Mover. The actual data displayed will be different for each server type and is displayed in a tree structure. Each node of the tree can be expanded/collapsed by clicking the mouse on the tree node indicator. A new snapshot can be taken and added to the display by pressing the Snapshot button.
ReqId. The RTM request identifier. ReqCode. The action or operation that this request is currently executing. Examples include "Mover write", "PVL verify", "write tm" (tape mark), etc. ReqState. The state of the requested operation. Examples include "in progress", "suspended", "blocked", etc. ServerDescName. The descriptive name of the server that holds this RTM request record. StartTimeDelta. The age of this request since it entered the server.
PVLJobId. The ID of the PVL job associated with this request. MvrId. The ID of the Mover this request is currently waiting on. DeviceId. The Device this request is currently waiting on to complete a data move operation. Segment. The ID of the storage segment being operated on by this request. VV. The ID of the virtual volume associated with this request. PVName. The name of the Physical Volume associated with this request. CurrentRelPosition.
GroupId. The Group ID of the user associated with this request. HostAddr. The address of the originating host. RequestType. The type of this request (Open, Create or Stage). Oflag. The Open flags associated with this file open request. StageFlags. Flags associated with this file stage operation. StageLength. The number of bytes to stage. StageOffset. The offset of the file where the stage is to begin. StageStorageLevel. The Storage Class level that the file will be staged to. UserId.
To start the Startup Daemon, use the “-d” option to rc.hpss:

% su
% /opt/hpss/bin/rc.hpss -d [start]

5.2.2.2. Starting SSM The SSM System Manager configuration metadata should have already been created by mkhpss as part of the infrastructure configuration. After SSM is started, this configuration metadata may be modified if necessary. Refer to the HPSS Installation Guide, Section 2.3.4: HPSS Infrastructure for more information. The SSM server startup script, rc.
To start a server, select the desired server(s) from the Servers window and click on the Start button. Verify the result of the request in the message area on the Servers window. In addition, monitor the Alarms and Events window for the “Server Initialized” event. Reference Section 5.2.2.3: on page 151. The Startup Daemon allows only one instance of a configured server to be brought up at a time. If a server is already running, the subsequent startup request for that server will fail.
window for the “Server Terminated” event. The HPSS Startup Daemon(s) and the SSM System Manager cannot be shut down from the Servers window. Select System Manager from the Shutdown submenu of the Operations menu of the Health and Status window to shut down the System Manager. Use the rc.hpss script stop option to shut down either the System Manager or the Startup Daemon. Servers may not terminate immediately since they may wait for pending operations to complete before terminating.
command line sessions, will detect that the System Manager has exited. Choosing this option will pop up a confirmation window which allows the shutdown request to be approved or canceled. As the System Manager exits, a notification window will pop up on each logged on SSM graphical user interface session informing the user that the GUI has lost connection to the System Manager.
button. Verify the result of the request in the status bar on the Servers window. In addition, monitor the Alarms and Events window for the “Server Repaired” event. Repairing a server does not correct the underlying problem that caused the server's reported state to change. Rather, it is a means for the administrator to notify the server that the underlying problem has been corrected or dismissed. It is an “alarm reset”. 5.2.2.
Core Server Resets the COS Copy to Disk, COS Change Retry Limit, Tape Dismount Delay, Tape Handoff Delay, PVL Max Connection Wait, Fragment Trim Limit and Fragment Smallest Block values to the values in the specific configuration metadata record. Reloads cached Class of Service information for those COSs that were already cached in the server's memory. Does not add new COSs to, or remove deleted COSs from the cache. Does not update COS cache information if the Hierarchy ID in the COS changes.
Servers that do not support reinitialization, or those that do not support reinitializing the settings in question, must be restarted in order for configuration modifications to take effect. Some groups of servers depend on consistent configuration information to run properly. For example, the Core Server and Migration/Purge Server must agree on Class of Service, Hierarchy and Storage Class configurations.
Chapter 6. Storage Configuration This chapter describes the procedures for creating, modifying, and deleting storage classes, hierarchies, classes of service, migration policies, purge policies, and file families. 6.1. Storage Classes This section describes the process of configuring Storage Classes. 6.1.1. Configured Storage Classes Window A storage class can be created and managed using the Configured Storage Classes window.
Information Buttons. Migration Policy. Opens the configuration window for the migration policy that is configured for the selected storage class. This button will be disabled if no storage classes are selected in the Storage Classes list or the selected storage class does not have a migration policy. Purge Policy. Opens the configuration window for the purge policy that is configured for the selected storage class.
This window is used to manage disk storage class configurations. Field Descriptions Storage Class ID. The numeric identifier assigned to the storage class. Storage Class Name. The descriptive name of the storage class. Storage Class Type. The type of the storage class (Disk). Migration Policy. The migration policy associated with this storage class, or None if no migration is desired.
Advice - Do not configure a migration policy for a storage class at the lowest level in a hierarchy. If a migration policy is added to a storage class after files are created in the storage class, those files may never be migrated. Use the mkmprec utility to correct this problem. See the mkmprec man page for more information. Purge Policy. The purge policy associated with this storage class, or None if no purge is desired.
VVs, fragmentation of the volumes may make it difficult to find space for a new segment. Setting Average Number of Storage Segments to a larger value will increase the number of segments occupied by files, and decrease the segment size. Fragmentation of the volumes will be reduced, but the amount of metadata required to describe the files will be increased.
Min Storage Segment Size (MINSEG). The lower bound for storage segment sizes created on volumes in this storage class. This value is the product of the Stripe Length (SL) and the Min Multiplier (MINMULT). Max Multiplier (MAXMULT). The Max Storage Segment Size (MAXSEG) must be a power of 2 multiple of the Stripe Length (SL). This selection list contains the valid power of 2 values from 1 (2^0) to 16,777,216 (2^24). Select the appropriate multiplier from the selection list.
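The relationship between the stripe length, the multipliers, and the resulting segment size bounds can be sketched as follows. This is an illustrative calculation based on the field definitions above; the function name is hypothetical.

```python
# Sketch of the disk segment-size bounds described above: MINSEG and MAXSEG
# are the Stripe Length multiplied by the Min and Max Multipliers, where each
# multiplier must be a power of 2 between 2**0 and 2**24.

def segment_bounds(stripe_length, min_mult, max_mult):
    for m in (min_mult, max_mult):
        # power-of-2 check via the usual bit trick, within the valid range
        assert m & (m - 1) == 0 and 1 <= m <= 2**24, "invalid multiplier"
    return stripe_length * min_mult, stripe_length * max_mult

# e.g. a 4 MB stripe length with multipliers 1 and 64
minseg, maxseg = segment_bounds(4 * 1024**2, 1, 64)
print(minseg, maxseg)  # 4194304 268435456
```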
This window is used to manage tape storage class configurations. Field Descriptions Storage Class ID. The numeric identifier assigned to the storage class. Storage Class Name. The descriptive name of the storage class. Storage Class Type. The type of the storage class (Tape). Migration Policy. The migration policy associated with this storage class, or None if no migration is desired. Advice - Do not configure a migration policy for a storage class at the lowest level in a hierarchy.
If a migration policy is added to a storage class after files are created in the storage class, those files may never be migrated. Use the mkmprec utility to correct this problem. See the mkmprec man page for more information. Warning Threshold. A threshold for space used in this storage class expressed as a number of empty tape volumes.
movement protocol overhead and helps to keep the data streams flowing smoothly. VV Block Size must meet the following constraining requirements: • It must be an integer multiple of the Media Block Size. • The PV Section Length (Media Block Size (MBS) * Blocks Between Tape Marks (BBTM)) divided by the VV Block Size (VVBS) must be a whole number. For example, if the Media Block Size (MBS) is 64 KB, and the Blocks Between Tape Marks (BBTM) is 512, the physical volume section length is 32 MB.
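The two constraints above can be checked mechanically; a small sketch using the example figures from the text (the function name is illustrative):

```python
# Validate a candidate VV Block Size against the two constraints described
# above: it must be a multiple of the Media Block Size, and it must divide
# the PV Section Length (MBS * BBTM) evenly.

def vv_block_size_ok(vvbs, mbs, bbtm):
    section_length = mbs * bbtm
    return vvbs % mbs == 0 and section_length % vvbs == 0

MBS = 64 * 1024          # Media Block Size: 64 KB
BBTM = 512               # Blocks Between Tape Marks
print(MBS * BBTM)                                  # 33554432 (the 32 MB section)
print(vv_block_size_ok(1024 * 1024, MBS, BBTM))    # True: 1 MB satisfies both
print(vv_block_size_ok(96 * 1024, MBS, BBTM))      # False: not a multiple of MBS
```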
If the tape media supports "fast locate", and that feature is enabled for the tape devices, choose larger values of Seconds Between Tape Marks (SBTM). When reading from the middle of a file on tape, the fast locate feature is used by HPSS to locate the data block in which a given portion of a file is located, rather than skipping tape marks and data blocks. When fast locate is enabled, there is no advantage to using smaller values of Seconds Between Tape Marks (SBTM) for locating positions within files.
This window is used to define Warning and Critical thresholds unique to a particular storage subsystem, overriding the values defined in the disk storage class. The user may modify either the Warning percent, Critical percent or both for one or more of the listed subsystems. Select a subsystem from the list and then modify the values on the lower portion of the window. When the new values have been entered, select Update to commit the changes.
Subsys Name. The name of the selected storage subsystem. Warning. The current warning threshold value for the selected subsystem. If the storage class defaults are to be used, the text “default” will be displayed. Critical. The current critical threshold value for the selected subsystem. If the storage class defaults are to be used, the text “default” will be displayed. Buttons Set To Defaults. A button to remove the threshold override values from the selected storage subsystem.
Warning volumes, Critical volumes or both for one or more of the listed subsystems. Select a subsystem from the list and then modify the values on the lower portion of the window. When the new values have been entered, select Update to commit the changes. To remove the customized threshold values, select the desired subsystem and click the Set To Defaults button. The changes will be committed and displayed in the Subsystem Thresholds table.
the administrator to reflect desired behavior. If files have been stored in a storage class without a migration policy, and a migration policy is subsequently configured for it, the files created before the addition of the policy will not be migrated. Use the mkmprec utility to create migration records for these files so that they will migrate properly. See the mkmprec man page for more information.
accessed from the HPSS Health and Status window's Configure menu, submenu Storage Space, item Hierarchies. Refer to Section 3.9.3: HPSS Health and Status on page 58. The following rules for creating storage hierarchies are enforced by the Hierarchies window: • A storage class may be used only once per hierarchy. • Disk migration may migrate to one or more target levels in the hierarchy. To create multiple copies of data, select multiple migration targets.
Configuration Buttons Create New. Open a Storage Hierarchy window with default values. Configure. Open the selected storage hierarchy configuration for editing. One hierarchy from the list must be selected before this button is active. Delete. Delete the selected storage hierarchy(s). 6.2.2. Storage Hierarchy Configuration Window This window allows an administrator to manage a storage hierarchy. A maximum of 5 storage classes can be configured into a hierarchy.
Field Descriptions Hierarchy ID. The ID associated with this hierarchy. Any unique, positive 32-bit integer value. The default value is the last configured ID plus 1. Hierarchy Name. The descriptive name associated with this hierarchy. The default value is “Hierarchy ”. Top Storage Class. The storage class at the highest level of the hierarchy. After a storage class is selected, a new storage class drop-down list will appear in the Migrate To field.
do this will render the files unreadable. It is recommended that HPSS Customer Support be called to assist with this operation. 6.3. Classes of Service This section describes how to configure classes of service. 6.3.1. Classes of Service Window A COS can be created and managed using the Classes of Service window. This window lists the classes of service that are currently configured. It also allows an administrator to update and delete existing classes of service and to add a new class of service.
Create New. Open a Class of Service window containing default values for a new class of service. Configure. Open the selected class(es) of service configuration(s) for editing. Delete. Delete the selected class(es) of service. 6.3.2. Class of Service Configuration Window This window allows an administrator to manage a class of service. Field Descriptions Class ID. The unique integer ID for the COS. Any positive 32-bit integer value. Class Name. The descriptive name of the COS.
is On Open. For all subsequently created COSes, the default value is the same as the most recent COS configured. Advice – Changing the Stage Code should be done with care. See Section 6.3.3: Changing a Class of Service Definition on page 178 for detailed information on each of the choices for Stage Code. Minimum File Size. The size, in bytes, of the smallest bitfile supported by this COS. Valid values are any positive 64-bit integer value. Maximum File Size.
• Auto Stage Retry. When this flag is turned on, and a valid secondary copy of the data exists, and a stage from the primary copy fails, HPSS will automatically retry the stage using the secondary copy. • Auto Read Retry. When this flag is turned on, and a valid secondary copy of the data exists, and an attempt to read the first copy fails, HPSS will automatically retry the read using the secondary copy.
users do frequent appends, or if those who do can be relied upon to turn truncation off for their own files, or if the system administrator can easily identify files which are frequently appended and can turn off truncation on them individually, then the site might want to take advantage of the space savings for the remaining files and leave Truncate Final Segment on in the COS definition. One additional consideration is that truncating the final segment incurs a small performance penalty.
have a significant impact. Turning the flag on constrains files that are already larger than the Maximum File Size to their current size. Existing smaller files will be constrained to the Maximum File Size. Changing Minimum File Size can have an impact on COS selection. Currently, the PFTP and FTP interfaces use the Minimum File Size to select an appropriate COS based on file size.
6.3.5. Changing a File's Class of Service The Core Server provides a means to change the class of service of a file. The Core Server moves the body of the file as appropriate to media in the destination Class of Service, then allows the usual migration and purge algorithms for the new Class of Service to apply. The file body is removed from the media in the old Class of Service.
Both basic and subsystem specific migration policies are created and managed using the Migration Policies window. The basic policy must be created before creating any subsystem specific policies. The fields in the basic policy are displayed with default values. Change any fields to desired values as needed. Click on the Add button to write the new basic policy to the HPSS metadata. To configure a subsystem specific policy, select an existing basic policy and click on Configure.
Other Migration Policy List columns. The remaining columns provide the same information that can be found in Section 6.4.2.1: Disk Migration Policy Configuration on page 182 and Section 6.4.2.2: Tape Migration Policy Configuration on page 185 windows. Configuration Buttons. Create Disk. Opens a Disk Migration Policy window with default values. Create Tape. Opens a Tape Migration Policy window with default values. Configure. Opens the selected migration policy for editing. Delete.
This window allows an administrator to manage disk migration policies and their subsystem-specific overrides. Subsystem-specific policies define migration rules to be applied on a subsystem basis instead of using the default migration policy.
Last Update Interval. The number of minutes that must pass since a file was last updated before it can become a candidate for migration. Number of Migration Streams Per File Family. The number of migration streams to be allocated to each file family. This value effectively determines how many file families can be migrated simultaneously.
storage class. Triggered Migration Options. There are four choices for managing migration behavior when a storage class is running out of space and the next migration isn’t yet scheduled to occur: Migrate At Warning Threshold. A migration run should be started immediately when the storage class warning threshold is exceeded. Migrate At Critical Threshold. A migration run should be started immediately when the storage class critical threshold is exceeded. Migrate At Warning and Critical Thresholds.
This window allows an administrator to manage tape migration policies and their subsystem-specific overrides. Subsystem-specific policies define migration rules to be applied on a subsystem basis instead of using the default (basic) migration policy.
migration criteria. This goal may not be attainable if the total size of all files not eligible for migration is large. Total Migration Streams. This value determines the degree of parallelism in the file migration process. This applies to policies using the Migrate Files and Migrate Files and Purge options only (see File and Volume Options below). File and Volume Options. There are four options available for determining how tape migration handles files and volumes.
based on individual files rather than tape volumes, and is able to make second copies of files stored on tape. In this algorithm, individual files are selected for migration based on their last write time and the settings in the Migration Policy. The selected files are migrated downwards to the next level in the hierarchy. The order in which files are migrated is based approximately on the order in which they are written to tape.
To delete a migration policy, select the policy from the Migration Policies list window and press the Delete button. If a basic policy is selected, and the policy has subsystem-specific policies associated with it, a prompt will appear asking whether the basic policy and the related subsystem-specific policies should all be deleted, since the subsystem-specific policies must be deleted before the basic policy can be.
Policy button after the window refreshes, enter the specific purge policy parameters, and press the Update button. This process can be repeated for each sub-system. When a purge policy is added to an existing storage class, the Migration Purge Servers must be restarted in order for the policy to take effect. Field Descriptions Purge Policy List columns. The columns provide the same information that can be found on the Purge Policies window in the following section. Configuration Buttons. Create New.
This window allows you to manage a Purge Policy. Purge policies are assigned to storage classes to tell the Migration Purge Server and Core Server how to free disk space occupied by files which have been migrated. Purge policies apply to disk storage classes only. The window always includes the Basic tab and may include one or more Subsystem tabs. These will be referred to as "basic" and "subsystem" below. Each purge policy consists of a single basic policy and zero or more subsystem policies.
unaccessed (for read or write) for the length of time specified by this field. Start purge when space used exceeds. Purge will begin for a storage class when the amount of its space used exceeds this threshold. Used space includes any file in the storage class, whether it has been migrated or not. Stop purge when space used falls to. Purging will stop for a storage class when the amount of its space used drops to this threshold.
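The start/stop threshold pair above forms a simple hysteresis: purging begins when usage exceeds the start threshold and continues until usage falls to the stop threshold. A hypothetical sketch (the function and threshold values are illustrative, not HPSS defaults):

```python
# Sketch of the purge start/stop hysteresis described above. Once purging
# has started (above the start threshold), it continues until space used
# drops to the stop threshold, so it does not oscillate around one value.

def purge_running(pct_used, start_pct, stop_pct, currently_purging):
    if currently_purging:
        return pct_used > stop_pct      # keep purging until usage falls to stop
    return pct_used > start_pct         # start only above the start threshold

print(purge_running(91, start_pct=90, stop_pct=70, currently_purging=False))  # True
print(purge_running(80, start_pct=90, stop_pct=70, currently_purging=True))   # True
print(purge_running(70, start_pct=90, stop_pct=70, currently_purging=True))   # False
```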
policy are applied to all storage classes which reference the policy. If the policy is reread, the changes are only applied to the storage class and storage subsystem for which the policy is reread. Core Servers are not able to reread purge policies. If the Purge By field is changed on either a basic or subsystem specific purge policy, the relevant Core Servers must be restarted. 6.5.4.
Field Descriptions The fields of the columns of this window are those of the File Family Configuration described in Section 6.6.1: File Family Configuration. Configuration Buttons Create New. Open a File Family Configuration window with default values. Configure. Open the selected file family configuration(s) for editing. Delete. Delete the selected file family configuration(s). 6.6.1. File Family Configuration This window allows you to manage a file family configuration. Field Descriptions Family ID.
Chapter 7. Device and Drive Management Every disk and tape drive that is used by HPSS is controlled by two servers. The PVL controls mounts and dismounts (for disk devices these are logical operations only), and the Mover controls I/O. In support of these two views, the terms “PVL drive” and “Mover device” are used to refer to the configuration and information maintained about the drive by the PVL and Mover, respectively. The configuration information is all managed through a single SSM list window.
may result in a large performance degradation. For SAN disk devices, the hpss_san3p_part utility with the -i option and device name must be run to assign a UUID to the disk device. This UUID should be used in the Device Name field when configuring the disk device. Currently the number of drives which may be configured per PVR is limited to 256. The current maximum number of PVRs is 64. The maximum number of devices per mover is also 64.
These windows allow you to manage a tape or disk device/drive configuration. Modifying/Updating the device/drive configuration via the Tape Device Configuration or Disk Device Configuration windows is not permitted while the PVL, the associated Mover(s), or the associated PVR(s) (for tape) are running. Updates to these configuration windows can only be attempted after these servers have been shutdown.
For IRIX systems, SCSI attached tape drives are typically referred to by pathnames of the form /dev/rmt/ tpsXdYns, where X is the SCSI controller number, and Y is the SCSI ID of the drive. Note that for Ampex DST drives, the tpsXdYnrns name should be used (indicating that the driver should not attempt to rewind the drive upon close). For other drives on IRIX, the tpsXdYnsvc name should be used (indicating that the driver allows compression and variable block sizes).
Advice - This option is supported for 3590, 3590E, 3580, 3592, 9840, 9940, DST-312, DST-314, T10000 and GY-8240 devices. • NO-DELAY Support (tape only). An indication of whether the device supports opening the device with no delay flag set, while allowing tape I/O operation after the open. Advice - On some tape devices, this will allow for a quicker polling operation when no tape is presently loaded in the device. This field is meaningful for tape devices only. • Write TM(0) to Sync (tape only).
device. Without the reservation, it is possible for other hosts to interleave SCSI commands to the drive with those issued by HPSS. This effect could potentially lead to corruption of data. Table 2.
must be a valid ID. The valid IDs can be found in the Affinity list for any cartridge in the robot. Use the command “mtlib -l -qV -V” to obtain the Affinity list for a cartridge. Polling Interval (tape only). The number of seconds to wait between polling requests performed by the PVL to determine if any media is present in the drive. Use -1 to disable polling. Values of 0 to 14 are not valid.
This window allows you to view the list of configured Mover devices and PVL drives. It also provides a number of function buttons, which allow certain operations to be performed on devices or drives. The Device and Drive List Preferences window may be used to select the device/drives which will be displayed. Select the columns to be displayed in the list from the Column View menu. Most of the function buttons to the right of this list require that one or more device/drive entries be selected from the list.
• Disabled - The device is locked, which makes it unavailable for use by HPSS. • Unknown - The state of the device is not known to SSM; this is usually caused by the controlling Mover being down or disconnected from SSM. Device Admin State. The current administrative state of the device, as reported by its controlling Mover. The possible states are: • Locked - The device is locked and unavailable for use by HPSS. • Unlocked - The device is unlocked and available for use by HPSS.
• Unknown - The state of the drive is not known to SSM; this is usually caused by the PVL being down or disconnected from SSM. Comment. This field provides a 128 character buffer in the PVL drive metadata which gives the administrator the opportunity to associate miscellaneous text with a device/drive. For example, a site may want to place a comment in this field that the drive is out of service or being used by another system, etc. PVR.
Drive Administration Buttons This group of buttons affects selected drives. All the buttons are disabled unless one or more drives are selected (see figure above). Lock. Lock the selected drives, making them unavailable to HPSS. When a drive is locked, the PVL will no longer schedule the PVL drive. When locking a tape drive due to a cartridge problem (e.g. stuck tape), it is beneficial to also cancel the PVL Job associated with the cartridge.
window in Add mode, allowing you to create a new disk device and drive. Create Tape. This button is always active. Clicking on it opens the Tape Device Configuration window in Add mode, allowing you to create a new tape device and drive. Configure. This button is active if one or more devices/drives are selected. Clicking the button opens the Disk/Tape Device Configuration window for the selected devices and drives, allowing you to view, delete, clone, or update the configuration(s). Delete.
Some PVL drive configuration attributes can be updated dynamically using the PVL Drive Information window (Section 7.2.2: PVL Drive Information Window on page 214). The settable fields in this window are updated dynamically (i.e. saved to metadata and used by the PVL upon successful Update). It is also possible to make temporary updates to some Mover device attributes parameters by changing the settable fields in the Mover Device Information window (Section 7.2.
There are a number of situations in which the PVL won't allow the device/drive to be deleted: • If the device/drive is still attempting to notify the Mover/PVR about being added • If the device/drive is in the process of aborting Mover/PVR notification • If the drive is in use by the PVL • If the drive is not locked • For disk: If storage resources haven't been deleted from the disk device/drive • For disk: If the physical volume hasn't been exported from the disk device/drive If it is only nece
The Mover Device Information window reports the current statistics for the device, such as the workload history of the device since the startup of the controlling Mover. The Mover Device Information window can also be used to lock and unlock a mover device (note: locking the Mover device generally is not helpful; see Section 7.1.1: Devices and Drives Window on page 202). Additionally, it can be used to control the I/O aspects of the device.
means that the corresponding flag is set. • Read Enabled. An indication of whether the device is available for reading. • Write Enabled. An indication of whether the device is available for writing. • Locate Support (tape only). An indication of whether the device supports a high speed (absolute) positioning operation. Advice - This option is supported for IBM 3590, 3590E, 3580, 3592, StorageTek 9840, 9940, DST-312, DST-314, T10000 and GY-8240 devices. • NO-DELAY Support (tape only).
• Multiple Mover Tasks (disk only). If ON, the Mover will allow multiple Mover tasks to access the disk device. • Reserve/Release (tape only). An indication of whether a SCSI reservation is taken on the device when it's opened. Advice - This is useful on fibre attached tape devices to ensure that HPSS has sole control on the device. Without the reservation, it is possible for other hosts to interleave SCSI commands to the drive with those issued by HPSS.
7.2.2. PVL Drive Information Window
This window allows you to view/update the information associated with an HPSS drive. The PVL Drive Information window is typically used to lock and unlock drives since newly configured drives are locked by default and must be unlocked to be used. It may also be used to determine which volume is mounted on the drive when the drive reports a mount error condition. Any changes made to fields on this window are saved in metadata and sent directly to the PVL and thus are effective immediately.
PVR (tape only). The descriptive name of the PVR used to control this drive. This field is only meaningful for tape drives. Administrative State. This field allows you to modify the state of the drive. The options are: • Locked - Makes the drive unavailable for HPSS requests. • Unlocked - Makes the drive available for HPSS requests. • Mark Repaired - Tells the PVL to clear any error status for the drive. This can be useful if you think a problem has been fixed, but the PVL is unaware of it.
• PVR/Mover Notify Pending - The PVL needs to notify the associated PVR and Mover that the drive has been created or deleted. • PVR Notify Pending - The PVL needs to notify the associated PVR that the drive has been created or deleted. • Mover Notify Pending - The PVL needs to notify the associated Mover that the drive has been created or deleted. • Abort PVR/Mover Notify – The PVL is aborting a pending notification. Controller ID.
and thus availability. To do this, the HPSS administrator will need to: • Associate tape drives to a specific drive pool by configuring the HPSS tape drives with a non-zero positive integer Drive Pool ID. • Modify the end client to dictate that their read request be serviced by tape drives from this particular Drive Pool. 7.3.1.
7.4. Changing Device and Drive State The administrative state of a device or drive can be set to Unlocked or Locked. This controls whether HPSS can access the drive. Changing the state of a device or drive can be accomplished via the Devices and Drives list window. Notice that there are two sets of Lock, Unlock and Mark Repaired button groups on the Devices and Drives window. The first group is titled Device Administration and the second group is titled Drive Administration.
Locking a disk drive has little effect since disks are logically mounted when the PVL initializes and are not usually unmounted; however, a disk drive must be in the locked state to be deleted. 7.4.3. Repairing the State of a Device or Drive A drive can enter an error or suspect state as reported by the PVL, Mover, or both. After a drive has entered one of these abnormal states, it can be repaired to return it to a normal state. From the Devices and Drives window (Section 7.1.
Chapter 8. Volume and Storage Management This chapter describes the procedures for adding, removing, monitoring, and managing storage space in the HPSS system. The basic unit of storage which can be added to the HPSS system is the volume. Before volumes can be added to HPSS, the underlying configuration structures must be created to support them. These structures include: • Storage classes, hierarchies, classes of service, migration policies, purge policies, and, optionally, file families.
/var/hpss/etc/AML_EjectPort.conf and /var/hpss/etc/AML_InsertPort.conf. The AML robot can have multiple insert and eject ports, which have the capability to handle different media types. These two configuration files in conjunction with the AMU AMS configuration files specify which Insert/Eject areas or bins the tapes should be placed in for insertion into the archive and the area to which they are ejected when the HPSS export command is used. The AML PVR is supported by special bid only.
information contained in the internal label will not match the side information passed to PVL in the Import request. If the start of the OwnerID field is not “HPSS”, then the volume will be imported as a Foreign Label volume and the side will be set to zero. When importing a non-removable disk volume into HPSS, the raw disk must already be defined in the host system. The import labels the volumes with HPSS labels. The volumes can be imported into HPSS using the Import Disk Volumes window (Section 8.1.1.
This window allows the user to import tape volumes into the HPSS system, making them known to the PVL server. To make them known to the Core Server so they can be used by HPSS, storage resources must then be created for the volumes via the Create Tape Resources window. When the Max Number of Drives is set to 1, the volumes are processed one at a time in sequence. If an error occurs, processing stops.
a successful import and goes on to the next volume in the list. This makes it easy to restart a partially completed import (after fixing the cause of the error which terminated the first request) by clicking the Import button again. There is no need to remove from the list the volumes which were imported successfully. The list of the volumes to be imported may be constructed in any of three ways.
“AB5329” “AB7329” The filling will not occur and an error will be displayed if the specified values would generate an invalid volume label (e.g., one greater than zzz999). To specify a list of volumes from a file, create a file containing the name of each volume to be imported on a separate line. Volume names must be six alphanumeric characters. No other characters are allowed on the line. The file must be accessible from the host on which the hpssgui is executing.
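A volume-list file of the kind described above can be produced with a short script. This is a minimal illustrative sketch, not an HPSS utility; the labels and output path are hypothetical, and only the format rules (one six-character alphanumeric label per line, nothing else on the line) come from the text.

```python
import os
import re
import tempfile

# Format rules from the text: one volume label per line, each label
# exactly six alphanumeric characters, no other characters on the line.
LABEL_RE = re.compile(r"^[A-Za-z0-9]{6}$")

def write_volume_list(path, labels):
    """Write a volume-list file, rejecting malformed labels up front."""
    bad = [v for v in labels if not LABEL_RE.match(v)]
    if bad:
        raise ValueError("invalid volume labels: %s" % bad)
    with open(path, "w") as f:
        for label in labels:
            f.write(label + "\n")

# Hypothetical labels, written to a temporary file for illustration.
demo_path = os.path.join(tempfile.gettempdir(), "volume_list.txt")
write_volume_list(demo_path, ["AA0070", "AA0080", "AA0090"])
```

Remember that the resulting file must be accessible from the host on which the hpssgui is executing.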
the end of the Volume list. If Fill Count is greater than 1, multiple labels are generated using the entered label as a starting point. Maximum Volumes Allowed. The maximum number of volume labels that will fit in the Volume List. The value is 10,000 and is set by SSM. This field is non-editable. Total Count. The total number of tapes to be imported. This is an informational field reflecting the number of volume names generated in the Volume List and is not directly editable. Volume List.
• An ANSI (non-UniTree) or HPSS label with a correct Volume ID (the Volume ID on the label is as expected by HPSS): Tape Imported / Label Written, Tape Imported / Tape Imported
• An ANSI or HPSS label with an incorrect Volume ID (the Volume ID on the label is different from the Volume ID expected by HPSS): Tape Not Imported / Tape Not Imported / Tape Not Imported
• Random data (e.g.
This window allows the user to import disk volumes into the HPSS system, making them known to the PVL and PVR servers. To make them known to the Core Server so they can be used, they must be created via the Core Server's Create Disk Resources window. The SSM System Manager processes the volumes one at a time in sequence. If it encounters an error, it stops and returns. The window completion message will report the number of successful imports, from which the volume causing the failure can be found.
List. To automatically generate a list of volume names in the Volume List, set the Fill Count to the desired number of volumes. Set the Fill Increment to the number by which each automatically generated label should differ from the previous one. Then type the starting volume name into the Volume Label field. The specified number of volume names will be added to the Volume List, each one larger than the previous entry by the specified Fill Increment.
Based on the Import Type, the import request will be processed depending on how the media is currently labeled. See Section 8.1.1.4: Selecting Import Type for Disk Volumes on page 234 for more information on selecting the appropriate Import Type. File Containing Volume List. The name of an external file containing a list of volume labels to be added to the end of the Volume List. Fill Count. The number of labels to be added to the Volume List when a value is typed into the Volume Label field.
again. You may dismiss the window before completion; however, completion messages will be displayed in a pop-up window. At this point you can begin entering data for another import, or you can dismiss the window. 8.1.1.4. Selecting Import Type for Disk Volumes The following table lists information for selecting disk import types. Table 4.
are copied to the volumes' metadata and become a permanent part of the definition of the volumes. The Core Server creates the necessary metadata structures for each of the new virtual volumes. Each new virtual volume is immediately available for use. Note that new tape resources are not assigned to a file family. Tapes are assigned to file families from the unused tapes in the storage class as they are needed to satisfy requests to write to tape in a given family.
List. To automatically generate a list of volume names in the Volume List, set the Fill Count to the desired number of volumes. Set the Fill Increment to the number by which each automatically generated label should differ from the previous one. Then type the starting volume name into the Volume Label field. The specified number of volume names will be added to the Volume List, each one larger than the previous entry by the specified Fill Increment.
Server. VVs To Create. The number of virtual volumes to be created. This value determines the number of rows in the Volume List table at the bottom of the window. Optional or Informational Fields PVs in Each VV. The Stripe Width of the selected Storage Class. This field may not be changed. PV Size. The size, in bytes, of each physical volume used. The default value is the value from the storage class. For tapes, this is an estimate and is informational only.
This window is used to create disk storage resources, disk virtual volumes, in a Core Server. The disks must first be imported to the appropriate PVL. The names of the volumes may be entered into the window one at a time, or a list of volume names may be automatically generated from a single entry. Volume names are not entered directly into the Volume List at the bottom of the window but are typed into the Volume Label field.
"AA0080" "AA0090" "AA0100" "AA0110" "AA0120" When an addition produces overflow in a column, numerical columns are carried over properly to alphabetic columns and vice versa. Example: Fill Count = 6 Fill Increment = 2000 Volume Label = "AA7329" Labels automatically inserted into Volume List table: "AA7329" "AA9329" "AB1329" "AB3329" "AB5329" "AB7329" Once the Core Server completes the Create request, SSM reports the number of physical volumes and virtual volumes created in the window status field.
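The carry behavior in the example above can be modeled as mixed-radix arithmetic: each column keeps its character class, digit columns counting in base 10 and letter columns in base 26, with carries crossing between them (so "AA9329" + 2000 becomes "AB1329"). The sketch below is not HPSS code, only an illustration of that rule, including the overflow error for labels past the last representable value.

```python
def generate_labels(start, fill_count, fill_increment):
    """Model of the Fill Count / Fill Increment label generation."""
    # Each column's radix follows its character class in the start label.
    radices = [10 if c.isdigit() else 26 for c in start]

    def to_int(label):
        value = 0
        for ch, radix in zip(label, radices):
            value = value * radix + (int(ch) if ch.isdigit()
                                     else ord(ch.upper()) - ord("A"))
        return value

    def to_label(value):
        chars = []
        for radix in reversed(radices):
            value, digit = divmod(value, radix)
            chars.append(str(digit) if radix == 10 else chr(ord("A") + digit))
        if value:  # carried past the leftmost column, e.g. beyond ZZZ999
            raise ValueError("fill would generate an invalid volume label")
        return "".join(reversed(chars))

    base = to_int(start)
    return [to_label(base + i * fill_increment) for i in range(fill_count)]
```

With the document's example inputs, generate_labels("AA7329", 6, 2000) reproduces the six labels listed above, and a fill that would run past the last label raises an error, matching the described behavior.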
available. The default value is 1. If the list fills before the Fill Count is exhausted, filling stops and a message is displayed. Fill Increment. This field determines how each new volume label is generated. The Fill Increment is the number by which each automatically generated label will differ from the previous one when a value is in the Volume Label field and the Fill Count is greater than 1. Valid values are positive integers up to the number of available drives. The default value is 1.
reuse the volumes, create new storage resources on them using the Create Resources window. To remove them entirely from the HPSS system, export them from the PVL using the Export Volumes window. 8.2.1.1. Rules for Deleting Resources Volumes on which resources are to be deleted must be empty. There must be no storage segments associated with the volumes. The Core Server Disk Volume or Tape Volume window must show a zero in the Number of Active Segments field, and the VV Condition must be EMPTY.
Volume List at the bottom of the window. Any of the three entry methods may be repeated multiple times on the same window to add additional volumes to the list. All three entry methods or any combination of them may be used in succession on the same window. To add a single volume name to the Volume List, set the Fill Count and the Fill Increment each to 1 and type the volume name into the Volume Label field. The volume name will be added to the Volume List.
the file in the File Containing Volume List field. The volume names from the file will be added to the Volume List. Field Descriptions File Containing Volume List. The name of an external file containing a list of volume labels to be added to the end of the Volume List. Fill Count. The number of volume labels to be added to the end of the Volume List when the Volume Label field is next modified. Fill Increment.
8.2.2.1. Rules for Exporting Volumes Tape cartridges may be physically exported from any managing robotic library. To export a tape cartridge from HPSS, the administrator must be familiar with the operation of the PVR from which the cartridge will be removed because the physical cartridge ejection process differs among the PVRs supported by HPSS: • Operator - No cartridge ejection step is necessary (or possible).
This window allows you to export tape and disk volumes from the HPSS system. Exporting a volume is equivalent to telling HPSS that the volume no longer exists. Before volumes can be exported, the Core Server storage resources that describe the volumes must be deleted using the procedure described in Section 8.2.1: Deleting Storage Resources on page 240. The list of the volumes to be exported may be constructed in any of three ways.
Fill Count = 6 Fill Increment = 10 Volume Label = "AA0070" Labels automatically inserted into Volume List: "AA0070" "AA0080" "AA0090" "AA0100" "AA0110" "AA0120" When an addition produces overflow in a column, numerical columns are carried over properly to alphabetic columns and vice versa.
Field Descriptions Eject Tapes After Exporting. If this checkbox is selected, the exported tape volumes will also be ejected from the PVR. File Containing Volume List. The name of an external file containing a list of volume labels to be added to the end of the Volume List. Fill Count. The number of volume labels to be added to the end of the list when the Volume Label field is next modified. This number may be one or greater.
available to the storage class. The migration and purge policies may need to be modified to free up more space or to free up the space more frequently. In addition, the total storage space for the storage class may need to be reviewed to determine whether it is sufficient to accommodate the actual usage of the storage class. Storage space in HPSS is monitored from the Active Storage Classes window. 8.3.1.
To select a row, click on it with the mouse; the selection will be highlighted. Note that when you select a row, you are selecting a storage class within a particular storage subsystem. See also the related window Configured Storage Classes, described in Section 6.1.1 on page 157. The Configured Storage Classes window lists all configured storage classes in the system, whether or not any storage resources have been assigned to them.
appropriate when the MPS is recycled. Possible values are: • Waiting - Migration is not taking place at this time. The start of the next migration is waiting until criteria specified in the migration policy are met. • Running - A migration is in progress. • None - The storage class, as configured, is not a candidate for migration. • Suspended - Migration has been suspended. Migr Policy. The migration policy assigned to the storage class. Purge State. The purge state for the storage class.
• Suspend - If the purge state is Waiting or Running, this puts the purge into the Suspended state. • Resume - If the purge state is Suspended, this returns it to Waiting and allows MPS to again begin scheduling purge runs. • Reread policy - Tells the MPS to refresh its purge policy information by rereading the policy. Repack Volumes. Opens the Repack Virtual Volumes window, allowing you to start the repack utility program on the selected storage class. See Section 8.4.3.
This window allows you to view and update the information associated with an active disk storage class. It reports the storage space data as well as any exceeded thresholds. The window also reports detailed information on the migration and purge status. In addition, the window allows the SSM user to control the migration and purge process to override the associated migration and purge policies. There are three differences between the disk and tape storage class information windows.
Field Descriptions Storage Class Name. The name assigned to this storage class. Storage Class ID. The numeric ID of this storage class. Storage Class Type. The class of media assigned to this storage class (tape or disk). Subsystem Name. The name of the storage subsystem for which the storage class information is being displayed. Total Space. For disk storage classes this reports the total capacity of the storage class in bytes. Free Space.
alarm to SSM. Critical Threshold. For disk storage classes, this value is a percentage of total storage space. When the used space in this storage class exceeds this percent of the total space, the MPS will send a critical alarm to SSM. Note that the disk threshold reaches the warning level when the percent of used space rises above the threshold value, while the tape threshold reaches the warning level when the free volume count drops below the threshold value.
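The opposite senses of the disk and tape thresholds can be captured in a couple of lines. This is a minimal sketch of the comparison logic described above; the function names and inputs are illustrative, not HPSS interfaces.

```python
def disk_threshold_exceeded(used_bytes, total_bytes, threshold_percent):
    # Disk thresholds are a percentage of total space: the alarm fires
    # when the percent of used space rises above the threshold value.
    return 100.0 * used_bytes / total_bytes > threshold_percent

def tape_threshold_exceeded(free_volume_count, threshold_volumes):
    # Tape thresholds are free-volume counts: the alarm fires when the
    # free volume count drops below the threshold value.
    return free_volume_count < threshold_volumes
```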
Suspended state. • Resume - If the migration state is Suspended, this returns it to Waiting and allows MPS to again begin scheduling migration runs. • Reread policy - Tells the MPS to refresh its migration policy information by rereading the policy. Pending Operations. When the MPS cannot respond immediately to a Control command, a command may be saved as pending. Any such pending operations are displayed here. Purge Attributes Tab. This panel contains the current purge status for this storage class.
8.3.3. MPS Tape Storage Class Information This window allows you to view and update the information associated with an active tape storage class. It reports the storage space data as well as any exceeded thresholds. The window also reports detailed information on the migration status. In addition, the window allows the SSM user to control the migration process to override the associated migration policies. There are three differences between the disk and tape storage class information windows.
Field Descriptions Storage Class Name. The name assigned to this storage class. Storage Class ID. The numeric ID of this storage class. Storage Class Type. The class of media assigned to this storage class (tape or disk). Subsystem Name. The name of the storage subsystem for which the storage class information is being displayed. Total Space. For tape storage classes this reports the total capacity of the storage class in virtual volumes (VVs). Free Space.
administrator. Start Time. The date and time when the most recent migration run started. It may still be running. End Time. The date and time when the last migration run completed. Total Units Processed. The amount of space in the storage class which has been migrated during the current or most recent migration run. For tape storage classes running the tape volume migration algorithm, this is a number of virtual volumes (VVs). Control.
Core Server metadata that describes the volumes. See Section 8.4.3, Repacking and Reclaiming Volumes. 8.4.1. Forcing Migration The Migration Purge Server runs migration periodically in the time interval specified in the migration policy. However, between these automatic migration runs, an administrator can use the Active Storage Classes window to force a migration to take place. When a migration run is forced, the run timer is reset.
Repack selects tape volumes in one of two ways. The administrator can provide a list of tape volumes to repack, or repack can select volumes based on a number of selection criteria. If repack is provided with a list of tape volumes to process, those volumes must be in RW, RO, EOM or EMPTY Condition. Volumes in RWC or DOWN Condition cannot be selected. Repacking RW and EMPTY tape volumes is permitted because the administrator has explicitly selected the tape.
This window provides an interface for repacking tape virtual volumes. Field Descriptions Storage Class Name. The name of the storage class that will be repacked. Storage Class ID. The ID of the storage class that will be repacked. Subsystem ID. The ID of the subsystem which contains the volumes to be repacked. Core Server. The name of the Core Server that manages the volumes to be repacked. File Family Criteria. Indication of whether to use File Family criteria.
which a large percentage of the files have been deleted, may be selected by repack. If VV Space is 100, the comparison is not performed and selection of tape volumes to repack is made using the remaining criteria. Repack Options. • Select Only Retired VVs. If selected, repack selects only retired volumes in the indicated storage class. Retired volumes may be in RO, EMPTY or EOM condition. DOWN volumes are never selected.
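The selection rules above can be summarized in a small filter. This is an illustrative sketch only; the function and dictionary keys are hypothetical, not repack's actual interface, but the condition sets and the VV Space comparison come from the text.

```python
# Conditions an explicitly listed tape may be in; RWC and DOWN volumes
# are never selected.
OPERATOR_LIST_OK = {"RW", "RO", "EOM", "EMPTY"}
# Conditions a retired volume may be in.
RETIRED_OK = {"RO", "EMPTY", "EOM"}

def repack_candidates(volumes, vv_space=100, retired_only=False):
    """Filter a list of tape VV records down to repack candidates."""
    selected = []
    for vv in volumes:
        if vv["condition"] not in OPERATOR_LIST_OK:
            continue
        if retired_only:
            if not vv["retired"] or vv["condition"] not in RETIRED_OK:
                continue
        # If VV Space is 100, the comparison is not performed; otherwise
        # pick volumes whose active-data percentage is below the cutoff.
        if vv_space < 100 and vv["active_pct"] >= vv_space:
            continue
        selected.append(vv["name"])
    return selected
```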
Buttons Reclaim. Press this button to start the reclaim utility program on the indicated storage class. Tape volumes that are described as EMPTY, and are not retired, will be reclaimed. Status messages are displayed on the status bar at the bottom of the window at the start and end of the reclaim. SSM invokes the reclaim utility program and passes it the storage class and number of tape volumes to be reclaimed, as entered on the Reclaim Virtual Volumes window.
imported. CS Volume. Once you have filled in both fields, clicking on this button will open the Core Server Disk Volume or Core Server Tape Volume window for the specified volume. This metadata is created when the disk/tape storage resources are successfully created. 8.5.2. PVL Volume Information Window The PVL Volume Information window allows the SSM user to view the data for imported volumes. Before using the window, the user should know the 6-character label of the PVL volume.
• Unallocated. The volume has been successfully imported into the HPSS system and labeled. However, no storage resources have been created on it and it has not been allocated to a Core Server. It is therefore not available to the system for I/O. • Allocated - On Shelf. Storage resources have been created on the volume, it is assigned to a Core Server, and it is available to the system.
This window allows you to view and update the information associated with an HPSS tape cartridge. Note that the Location Type fields are represented differently for certain types of robots, for which Port, Drive and Slot (Unit, Panel, Row, and Column) may each be displayed as 0. If it is necessary to locate a cartridge in one of these robots, the robot’s operator interface must be used. This window contains three bookkeeping fields: Maintenance Date, Mounts Since Maintenance, and Mount Status.
note that this can also mean that side 0 of the cartridge is mounted. PVR Server. The descriptive name of the PVR which manages the cartridge. Cartridge Type. The HPSS media type corresponding to this cartridge. This controls the type of drive in which the cartridge can be mounted. Manufacturer. The Manufacturer string specified when the cartridge was imported. Lot Number. The Lot Number string specified when the cartridge was imported. Service Start Date.
• Port - The location is a port number. [This option is currently not used.] • Drive - The location is a drive ID number. • Slot - The location is a slot specification. The following fields are filled in (non-zero) based on the Location Type and whether or not the PVR has the information: Port. The port number where the cartridge is located. This option is currently not used and thus will always be zero. Drive ID. The drive ID number where the cartridge is located.
This window displays information about a disk volume as represented by the Core Server. Field Descriptions Name. The ASCII name of the first physical volume that is a part of the disk virtual volume. The entire virtual volume can be referred to by this name. VV Condition. This is the administrative control for the disk virtual volume. It will have one of five values: RWC, RW, RO, EMPTY or DOWN. • In RWC condition, the volume can be read and written. This is the normal operational state.
volume. • In DOWN condition, the volume cannot be read, written or mounted. This condition can be used to make a disk unavailable to the system. Change the VV Condition of a disk virtual volume by selecting the desired condition from the drop down menu and then pressing the Update button. Changes Pending. If there are any VV Condition changes for this volume pending in the Core server, Changes Pending will be "Yes" with a red bullet. Otherwise, Changes Pending will be None. Retired.
Less Commonly Used Data Tab Actual Length. The length of disk virtual volume in bytes. This length includes all of the space set aside for system use. See Usable Length. PV Size. The length in bytes of a physical volume in this disk virtual volume. All physical volumes in a disk virtual volume must be the same length. Cluster Length. The size of the allocation unit, in bytes. When disk storage segments are created on the volume, they are created at a length that will be a multiple of this value.
Physical Volumes This is a table of physical volume attributes for the physical volumes that make up this disk virtual volume. Vol Name. The ASCII name of the physical volume. Type. The media type. Dev ID. The ID of the device the physical volume is mounted on. Mvr IP Addr. The IP address of the Mover that operates this physical volume. Mvr. The descriptive name of the Mover that operates this physical volume. Mvr Host. The name of the host on which the Mover runs. 8.5.4.2.
This window displays information about a tape volume as represented by the Core Server. Field Descriptions Name. The ASCII name of the first physical volume that is a part of the tape virtual volume. The entire virtual volume can be referred to by this name. VV Condition. This is the administrative control for the tape virtual volume. It will have one of six values: RWC, RW, RO, EOM, EMPTY or DOWN.
• In RWC condition, the volume can be read and written. This is the normal operational state. • In RW condition, the volume can be read and written, but new tape storage segments may not be created on the volume. • In RO condition, the volume can be read but not written. New storage segments cannot be created on the volume. • In EOM condition, the volume can be read but not written. One or more of the tapes has been written to its end and the tape virtual volume is now full.
its data. When data is being written to a volume, it will be shown in Allocated state. When the End Of Media marker is reached during a tape write operation, the volume will enter EOM state. If the number of segments on the volume drops to zero after reaching EOM, the state will change to Empty. The volume may be placed in Deny state by the server depending on the setting of the VV Condition. Storage Class. The storage class to which the tape virtual volume is assigned. Map Flags.
of time may pass between updates. Next Write Address. The next address that will be written on the tape volume, expressed as an HPSS Relative Stripe Address. Less Commonly Used Data Tab File Family. The family to which the volume is assigned, if any. If it is not assigned to a family, it is assigned to the default family, family zero.
VV Block Size. The virtual volume block size. This is the number of bytes written from a data stream to an element of the striped volume before the stream switches to the next element of the stripe. PV Block Size. The size, in bytes, of the media data block. Max Blocks Between Tape Marks. The maximum number of media blocks that will be written to this tape before a tape mark is written. The product of this value and the Media Block Size defines the Physical Volume Section Length. Stripe Width.
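The relationship between these fields can be checked with simple arithmetic. The values below are hypothetical, chosen only to illustrate that the section length is the product of the two fields.

```python
pv_block_size = 262144               # hypothetical 256 KB media data block
max_blocks_between_tape_marks = 512  # hypothetical

# Physical Volume Section Length =
#     Max Blocks Between Tape Marks x media block size
pv_section_length = max_blocks_between_tape_marks * pv_block_size
print(pv_section_length)  # 134217728 bytes, i.e. 128 MB per section
```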
VV Condition controls the availability of the volume for the following actions: • Creation of new storage segments • Reading of existing storage segments • Writing of existing storage segments • Mounting of tape media Tape volumes have six possible settings for VV Condition: • RWC - Read, Write, Create • RW - Read/Write • RO - Read Only • EOM - End of Media reached • EMPTY – The volume reached EOM, and all data has been removed.
can be read, but not written. Unlike RO condition, tapes in EOM condition can only be changed to DOWN. EOM volumes cannot enter either RWC or RO condition. Volumes in DOWN condition cannot be read, written, created on or mounted. This setting effectively removes the volume from the system while maintaining the records of the storage segments on it and is useful for dealing with failed disk or tape volumes. Disks in DOWN condition can be changed to RWC, RW or RO condition.
This window allows you to change which PVR owns a set of cartridges. Before initiating the request from the SSM window, the cartridges must already be physically placed into a tape library managed by the new PVR. The list of the volumes to be moved may be constructed in any of three ways. Each volume name may be typed in one at a time, or a list of volume names may be automatically generated from a single entry, or a list of volume names may be specified from an input file.
Fill Increment = 10 Volume Label = "AA0070" Labels automatically inserted into Volume List: "AA0070" "AA0080" "AA0090" "AA0100" "AA0110" "AA0120" When an addition produces overflow in a column, numerical columns are carried over properly to alphabetic columns and vice versa.
Fill Count. The number of cartridge labels to be added to the end of the list when the Volume Label field is next modified. This number may be one or greater. If the list fills up before the Fill Count is exhausted, filling stops, and a message box is displayed (see Maximum Volumes Allowed below). Fill Increment. In a multiple-cartridge fill, this field determines how each new cartridge label is generated from the previous one. A cartridge label is six alphanumeric characters.
8.6.1. PVL Job Queue Window This window shows all outstanding jobs in the PVL. From this window, the user can issue a request to view more information for a particular PVL job or to cancel it. Each PVL job represents a volume mount (or series of mounts for a striped disk or tape volume). Field Descriptions Job List. This is the main part of the window, consisting of a table of job information, a title line containing labels for each column, and vertical and horizontal scrollbars.
• Relabel - A cartridge being relabeled. • Sync Mount - A synchronous mount. • Tape Check-In - A cartridge being added to the library. • Tape Check-Out - A cartridge being removed from the library to be placed on the shelf or vault. Status. The current status of the job. Possible values are: • Uncommitted – The mount job is a multi-part job that has been started, but the last volumes have not been added, and the mount operation has not been committed.
8.6.2. PVL Request Information Window This window is displayed when the Job Info button is pressed on the PVL Job Queue window. It allows you to view the information associated with a PVL job/request. Field Descriptions Job ID. The unique number assigned to the job being viewed. Request Type. The type of job/request. Possible types are: • Async Mount - An asynchronous mount. • Default Import - A media import of type default. • Scratch Import - A media import of type scratch.
• Aborting - The job is being aborted. • Cartridge Wait - The job is waiting for another job to release a cartridge that it needs. • Completed - The job is completed. Once a job is completed, it no longer exists in the PVL job queue, and this window will no longer receive any updates. • Deferred Dismount - Dismount for cartridges will be delayed. • Dismount Pending - Volumes are in the process of being dismounted. • Drive Wait - The job is waiting for a drive to become available.
• Tape Check-In
• Tape Check-Out
• Uncommitted
• Unload Pending
Drive Pool ID. If non-zero, the drive pool ID restricts this drive's scheduling to tape requests specifying this value. This field is not applicable for disks. Drive Type. The HPSS drive type assigned to the drive. Mover. The name of the mover that owns the device where the volume is mounted. This field will be blank if the volume is not currently mounted. 8.6.3.
will begin logging alarms indicating that the appropriate tape has yet to be checked-in. The frequency of these alarms is controlled by the Shelf Tape Check-In Alarm field of the PVR-specific configuration window. The requests are displayed in chronological order. Each time such a request is received by SSM, it is added to the list, but duplicate requests for the same tape are not displayed.
The Tape Mount Requests window displays tapes which need to be mounted in a drive. All HPSS tape mount requests, including both robotic and operator tape mounts, will be displayed in the window. For operator PVRs, such mount requests mean that a tape must be mounted by hand. When mount requests for robotic PVRs do not disappear from the window in a timely manner, it can be an indication of a hardware or other problem in the robot.
Clear List. Clears the list of mount requests. Note that this does not cancel any mount requests, but just removes them from the list. Pending mounts will reappear in the window as the PVR periodically retries the mounts. This can be useful for removing stale mount requests that, for some reason, never issued a completion message to SSM. When this button is clicked from any SSM session, the Tape Mount Requests windows on all SSM sessions will be cleared. 8.6.6.
refresh their caches. 3. Use the retire utility program to retire the old technology volumes. retire can accept a list of PVs to retire, or can retire all volumes in a storage class that are in a specified Condition. dump_sspvs can be used to create a list of PVs for retire to process. Creating a list of PVs for this step and the following steps is the recommended procedure.
HPSS Management Guide Release 7.3 (Revision 1.0)
Chapter 9. Logging and Status

9.1. Logging Overview

The purpose of logging is to record events of interest that occur in HPSS, in the sequence they occur, to support diagnostic research. HPSS provides eight log message types:

• Alarm
• Event
• Status
• Debug
• Request
• Security
• Accounting
• Trace

The purpose of each of these log message types is described later in this chapter. Log messages are deposited in four places.
5. The SSM Alarms and Events window (Section 5.2.2.3: on page 151) A standard configuration for logging services is usually set by the administrator during the HPSS system configuration. Specialized configurations can be set up and used to temporarily (or permanently) provide more or less logging for site-specific or shift-specific operational situations.
• If no server-specific logging policy is configured for a server and no default logging policy is configured, only Alarm and Event messages will be logged. 9.2.2. Logging Policies Window This window is used to manage all the log policies in the HPSS system. To create a new log policy, click on the Create New button. To configure an existing log policy, select the policy from the list and click on the Configure button. When creating or configuring a log policy, the Logging Policy window will appear.
Delete. Deletes the selected log policies. 9.2.2.1. Logging Policy Configuration Window The Logging Policy window is used to manage a log policy. When creating a new log policy, the Descriptive Name field will be blank, and a set of default options will be selected. If there is a Default Logging Policy defined, the default log options will match those in the global configuration's Default Logging Policy.
recommended that this always be selected. • EVENT. An informational message (e.g., subsystem initializing, subsystem terminating) about a significant occurrence in the system that is usually not an error. It is recommended that this always be selected. • REQUEST. A message which reports the beginning and ending of processing of a client request in a server. It is recommended that this type not be selected except for short periods as an aid to isolating a problem. • SECURITY.
A server’s log policy can be modified to control the volume of messages to the chosen logging destinations. Typically, during normal operations, the level of logging may be decreased to only Alarm, Event, and Security to reduce overhead. However, while tracking an HPSS problem, it may be desirable to include more log message types such as Debug, Request and Trace to obtain more information.
9.3.2. Viewing the Central Log (Delogging) HPSS provides the ability to retrieve and examine HPSS log records as a means of analyzing the activity and behavior of HPSS. The retrieval and log record conversion process is referred to as “delogging.” Delogging is the process of retrieving specific records from the HPSS central log files (which is in binary format), converting the records to a readable text format, and sending the resulting text to a local UNIX file.
The information is acquired from the HPSS Log Daemon. The Log Files Information window provides information about the HPSS central log files. Log file information includes the state of each log file, the current size of each log file in bytes, the time at which each log file was marked in use, the time at which each log file was last active, and the log file names. Field Descriptions For each of the two central log files, the following fields are displayed: Log File Name. The name of the log file.
9.5.1. Configuring Local Logging Options The “Log Messages To:” field in the Logging Client specific configuration window (Section 5.1.4: Log Client Specific Configuration on page 100) can be modified to control the destinations for the messages logged by the HPSS servers running in a node. This parameter consists of a set of options that apply to the local logging. These options are: • Log Daemon - Send log messages to the central log. • Local Logfile - Send log messages to the local log file.
This window displays a number of the most recent alarm and event messages which have been received by SSM. It also allows you to view individual messages in greater detail by selecting the message and pressing the Alarm/Event Info button to bring up the Alarm/Event Information window. Field Descriptions This list displays a column for each field shown on the Alarm/Event Information window. See Section 9.6.2: Alarm/Event Information for the field descriptions. 9.6.2.
This window displays all the details of the alarm or event selected from the Alarms and Events window. Field Descriptions ID. A sequence number assigned to the log message by SSM. This ID is not used outside of SSM. Log Type. General class of the message. May be either “Alarm” or “Event”. Event Time. The date and time the message was generated. Server Name. Descriptive name of the HPSS server or utility that logged the message. Routine. Name of the function that was executing when the message was logged.
• Critical
• Indeterminate
• Cleared

These may be accompanied by a color status indicator:

• (Red) - Critical or Major alarm
• (Yellow) - Minor or Warning alarm
• None - Events and other alarm types

Error Code. The error code associated with the problem underlying the message.

MsgID. The 8-character ID code for this message, consisting of a 4-character mnemonic identifying the type of server or subsystem which issued the message, followed by a 4-digit message number.
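As a sketch of the MsgID structure described above, the following splits a message ID into its mnemonic and number parts. The value "CORE0023" is an invented example for illustration, not a real HPSS message ID.

```shell
# Split an 8-character message ID into the 4-character server mnemonic
# and the 4-digit message number. "CORE0023" is a hypothetical example.
msgid="CORE0023"
mnemonic=$(printf '%s' "$msgid" | cut -c1-4)
number=$(printf '%s' "$msgid" | cut -c5-8)
echo "$mnemonic $number"
```

This prints "CORE 0023".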
environment variable HPSS_SSM_ALARMS with the desired name of the cache file. The default for HPSS_SSM_ALARMS is defined in hpss_env_defs.h as NULL. SSM will revert to the internal memory cache if it cannot access the specified cache file for any reason. The site may set the HPSS_SSM_ALARMS environment variable to any UNIX file that has read/write access for user root on the machine where the SM is to be run (since the SM runs as user root).
could be used, for example, to reduce the size of each data request on a slow network. The internal cached Alarm and Event list is displayed by the hpssadm program by means of its "alarm list" command. This command has a "-c" option to specify how many of the most recent log messages in the internal copy to display. If more messages are requested than exist in the internal list, the full internal list is displayed. See the hpssadm man page for details.
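For example, to display only the 50 most recent cached messages from within an hpssadm session (the count of 50 is illustrative):

```
hpssadm> alarm list -c 50
```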
Chapter 10. Filesets and Junctions A fileset is a logical collection of files that can be managed as a single administrative unit, or more simply, a disjoint directory tree. A fileset has two identifiers: a human readable name, and a numeric fileset ID. Both identifiers are unique to a given HPSS realm. Filesets are often used for HPSS name space administration. For example, they can be used if a subtree of the HPSS name space needs to be assigned to a particular file family or class of service.
File Family. The name of the file family to which the fileset is assigned. If this field contains "Not in a family", the fileset has not been assigned to a family. Fileset ID. The ID number which identifies the fileset. A fileset ID is displayed as two double-comma-separated unsigned integer numbers. Fileset Name. The unique name which has been assigned to the fileset. Read. If checked, the fileset is available for reading. Write. If checked, the fileset is available for writing. Destroyed.
10.2. Creating an HPSS Fileset This section provides information on how to create HPSS filesets. Only the HPSS root user and SSM principal are allowed to create filesets. In order to successfully perform fileset administration, the DB2 Helper Program must be bound. See the HPSS Installation Guide, Section 5.8.1.3 Generate and Bind the DB2 Helper Program for more information. An HPSS fileset can be created by using the Create Fileset SSM window, or by using the create_fset utility.
Field Descriptions Fileset Name. The name to be assigned to the fileset. This name must be unique to the realm in which HPSS resides. Fileset State. The state of the fileset. If Read is ON, the fileset will be available for reading. If Write is ON, the fileset will be available for writing. File Family. The name of the File Family assigned to this fileset. If the File Family is to be other than the default, the File Family must have been previously created. Class of Service.
10.3. Managing Existing Filesets This section describes how to look up information on, modify, or delete filesets. 10.3.1. Core Server Fileset Information Window This window allows an administrator to view, update and/or delete the Core Server information associated with a fileset. This information is acquired from the Core Server. While this window remains open, it will automatically update whenever change notifications about any field except UID, GID and permissions are received from the Core Server.
the data or metadata. Changing the state to Destroyed will prevent both reading and writing. Field Descriptions Fileset ID. The ID number which identifies the fileset. A fileset ID is displayed as two double-comma-separated unsigned integer numbers. A new Fileset ID can be entered as two double-comma-separated unsigned integer numbers, as two single-period-separated unsigned integer numbers, as a 64-bit hexadecimal number that begins with '0x', or as an unsigned 64-bit number. Fileset Name.
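Assuming the two double-comma-separated numbers are the high and low 32-bit halves of the 64-bit fileset ID (an assumption based on the equivalent 64-bit hexadecimal form), the formats can be converted with shell arithmetic:

```shell
# Convert a fileset ID given as two 32-bit halves (assumed high,,low)
# into its 64-bit decimal and hexadecimal forms. Values are illustrative.
high=12
low=34
id=$(( (high << 32) | low ))
printf '%u,,%u -> %u (0x%016x)\n' "$high" "$low" "$id" "$id"
```

This prints "12,,34 -> 51539607586 (0x0000000c00000022)".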
UID. The User ID identifying the user owning the root node of the fileset. GID. The Group ID identifying the principal group owning the root node of the fileset. Permissions. The UNIX-style permissions assigned to the root node of the fileset. There are nine checkboxes arranged in a matrix with the columns specifying "r" (read), "w" (write) and "x" (execute) permissions, and the rows specifying the three classes of users to which the permissions apply (User, Group, and Other).
10.5. Creating a Junction Only the HPSS root user and SSM principal are allowed to create junctions. A junction is a name space object that points to a fileset and is similar to a persistent UNIX mount point. The fileset pointed to may reside in another subsystem. Junctions can be created using SSM or by using the utility routine crtjunction. For more information, refer to the crtjunction man page. 10.5.1.
10.6. Deleting a Junction Junctions can be deleted using SSM or by using the utility routine deljunction. For more information, refer to the deljunction man page. To delete a junction using SSM, select the junction(s) to be deleted from the Filesets & Junctions List and press the Delete Junction button. You will be asked to confirm the deletion with the following dialog. To continue with the junction deletion, press “Yes”.
Chapter 11. Files, Directories and Objects by SOID This chapter describes two ways to display basic information about files and directories stored in HPSS. Starting with a fully qualified path name, you can look up a file or directory in the system and display information about it. In other cases, your starting point may be a SOID, a Storage Object ID, which is the internal computer generated name for files, directories, and virtual volumes.
11.1.1. File/Directory Information Window This window shows details about a file or directory. If a file is displayed in the File/Directory Information window, a button labeled Show Bitfile ID will appear at the bottom of the window. Pressing this button will cause the Storage Object ID window to appear. Field Descriptions Path Name. The pathname to either the file or the directory.
Object Type. The type of object being displayed, either File or Directory. Class of Service. The name of the Class of Service in which the file is stored. If the displayed object is a directory the value of this field will be NONE. File Family. The name of the file family to which the file has been assigned. If the file has not been assigned to a family, the value of this field will be “Not in a family”. Subsystem Name. The name of the HPSS subsystem which contains the file or directory. Realm ID.
Extended ACL. If any ACL entry other than the default ACL entries exists, then the file or directory is said to contain extended ACLs. There are three types of ACLs that could have extended ACL entries: • Object ACL – HPSS Name Space Object ACL • IC ACL – HPSS Name Space Initial Container ACL • IO ACL – HPSS Name Space Initial Object ACL A check mark will be put into each of the ACLs containing extended ACL entries for this file or directory.
Attribute           File   Directory
Comment              x        x
CompositePerms       x        x
COSId                x
DataLength           x        x
EntryCount                    x
ExtendedACLs         x
FamilyId             x        x
FilesetHandle
FilesetId            x        x
FilesetRootId        x        x
FilesetStateFlags    x        x
FilesetType          x        x
GID                  x        x
GroupPerms           x        x
LinkCount            x
ModePerms            x        x
Name                 x        x
OpenCount            x
OptionFlags          x
OtherPerms           x
ReadCount            x
SubSystemId          x        x
TimeCreated          x        x
TimeLastRead         x        x
TimeLastWritten      x        x
TimeModified         x        x
Type                 x        x
UID                  x        x
UserPerms            x        x
WriteCount           x
11.2. Objects by SOID Window To display this window, select Monitor from the Health and Status window, then select Lookup HPSS Objects, and then "Objects by SOID". This window allows you to open an information window for an HPSS object which you specify by the object's HPSS Storage Object ID (SOID). The object types supported by this screen are Bitfiles and Virtual Volumes (Disk and Tape). Field Descriptions Object UUID. The UUID of the object. This must be entered in the indicated format.
Chapter 12. Tape Aggregation This chapter discusses the following operations: • Overview of Tape Aggregation • Tape Aggregation Performance Considerations • Configuring Tape Aggregation 12.1. Overview of Tape Aggregation Tape aggregation reduces file processing time when migrating relatively small files from disk to tape. In certain situations it may improve very small file disk-to-tape migration performance by two orders of magnitude over normal migration.
Migration Policy screen. Edit any other tape aggregation related fields on that screen as needed. If MPS is running, you must also tell it to reread the Disk Migration Policy.
Chapter 13. User Accounts and Accounting 13.1. Managing HPSS Users After the HPSS system is up and running, the administrator must create the necessary accounts for the HPSS users. For a new HPSS user, a Kerberos, LDAP, or UNIX ID (depending on the authentication type configured) and an FTP ID must exist before the user can access HPSS via FTP. In addition, if the HPSS user needs to use SSM, an SSM ID must also be created before the user can use SSM.
[ added unix user ]
[ KADMIN_PRINC unset; using kadmin.local for Kerberos ops ]
13.1.1.3. Add a Kerberos User ID

The hpssuser utility invokes the kadmin utility to create the KRB principal and account. The principal can be created with either a password or a keytab; to use a keytab, specify the -krbkeytab option. Invoke the hpssuser utility as follows to add a KRB User ID:

hpssuser -add <user> -krb [-krbkeytab <keytab path>]

The utility will prompt the user for the required data.
# hpssuser -add user1 -ftp -nohome User ID#: 300 Enter password for user1: ****** Re-enter password to verify: ****** Full name: Test User HPSS/LDAP home directory: /home/user1 Login shell: /bin/ksh Primary group ID#: 210 [ adding ftp user ] [ ftp user added ] If the -nohome option is not specified when adding an FTP user, you must authenticate (using kinit) as that user before running hpssuser. Ensure that the Core Server is up and running before adding the FTP User ID.
[ SSM user deleted ]
[ deleting ldap principal ]
[ deleted ldap principal ]
[ deleting ftp user ]
[ ftp user deleted ]
[ deleting kerberos principal ]
[ KADMIN_PRINC unset; using kadmin.local for Kerberos ops ]
[ deleted kerberos principal ]
[ deleting unix user ]

13.1.3. Listing HPSS Users The hpssuser utility can be used by the administrator to list all existing HPSS User IDs. The utility can be invoked to list all HPSS User IDs or a particular type of User ID.
# hpssuser -ssmclientpkg /tmp/ssmclientpkg.tar [ packaging ssm client ] [ creating /tmp/ssmclientpkg.tar ] ssm.conf krb5.conf hpssgui.pl hpssgui.vbs hpss.jar [ packaged ssm client in /tmp/ssmclientpkg.tar ] 13.2. Accounting HPSS maintains accounting information on the usage of the system whether the site charges for usage or not. Sites are encouraged to use the accounting information, even if they do not charge users, to gain a better understanding of the usage patterns of their system.
An accounting policy is required whether the site actually charges users for HPSS usage or not. Field Descriptions Accounting Style. The style of accounting that is used by the entire HPSS system. Valid values are SITE or UNIX. The default value is UNIX. Under UNIX style accounting, resource usage is reported by user ID (UID). Each user is allowed to use only one account ID, which has the same numerical value as his user ID. Under SITE style accounting, each user may use multiple account IDs.
manipulation operations. Account Inheritance. A flag that indicates whether or not newly created files and directories should automatically inherit the account index used by their parent directory. The default value is OFF. It is only used if Account Validation has been enabled and Site-style accounting has been selected. If this flag is disabled, new files and directories have the user’s current session account index applied to them. 13.2.2. Accounting Reports and Status 13.2.2.1.
This window allows an administrator to view the accounting status and start accounting. Field Descriptions Subsystem. The name of the storage subsystem containing this accounting status data. Run Status. Current status of accounting run. Possible values are: • Never run • Running • Failed • Completed • Report generated Last Run Time. If accounting is currently running, this is the time the run started. Otherwise it is the time the last run completed. Number of Accounts.
The first type, denoted by a zero (0) in the fourth column, gives the following summary information about the storage used by a particular HPSS Account Index (AcctId) in a particular Class Of Service (COS): • The total number of file accesses (#Accesses) to files owned by the Account Index in the Class Of Service. In general, file accesses are counted against the account of the user accessing the file, not the owner of the file itself.
In the above example, line 2 shows that a user using account 634 made a total of 89 accesses to COS 1 and has 125 files stored in COS 1 which together total 4168147 storage units. The storage units reported by the report utility may be configured in the Accounting Policy to represent bytes, kilobytes, megabytes, or gigabytes. Line 3 shows that 87 of the accesses to COS 1 made by account 634 were in storage class 1, and line 4 shows that 2 of them were in storage class 2.
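The roll-up between the storage-class records and the COS summary record can be illustrated with an invented report excerpt. The field order used here (account, COS, storage class, record type, accesses, files, units) is an assumption for this sketch; the only ordering stated by the guide is that the record type appears in the fourth column.

```shell
# Invented excerpt: account 634, COS 1. The type-0 row is the COS summary;
# the type-1 rows break the 89 accesses down by storage class (87 + 2).
cat > /tmp/acct.rpt <<'EOF'
634 1 0 0 89 125 4168147
634 1 1 1 87 100 4000000
634 1 2 1 2 25 168147
EOF
# Cross-check: storage-class rows should sum to the COS summary row.
awk '$4 == 0 { tot = $5 } $4 == 1 { sum += $5 } END { print tot, sum }' /tmp/acct.rpt
```

This prints "89 89", confirming the two levels agree.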
● The site will need to create a local site Account Map to maintain a list of the account IDs for which each user is authorized. This is a locally designed and written table, not supported by HPSS. See section 13.2.3.1: Site Defined Accounting Configuration Files and Procedures on page 336 for suggestions for designing and maintaining an Account Map. ● Administrators will need to manage user default accounts by updating the user's LDAP registry information (i.e.
13.2.3.1.2. Site Defined Account Apportionment Table In UNIX-style accounting, the UID (as Account Index) maps only to a specific user. Some sites may wish to apportion different percentages of the charges for a single UID among different project charge codes, but without using site style accounting. HPSS does not provide a means to do this, but a site could implement its own table and utilities to do so.
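A minimal sketch of such a site-written apportionment, assuming a three-column table of UID, charge code, and percentage. The table file name, UIDs, and charge codes are all invented; the usage figure reuses the 4168147 storage units from the report example above.

```shell
# Hypothetical site-defined apportionment table: UID, charge code, percent.
cat > /tmp/apportion.tbl <<'EOF'
1012 PROJ_A 60
1012 PROJ_B 40
1013 PROJ_C 100
EOF
# Split a usage figure reported against UID 1012 across its charge codes.
usage=4168147
awk -v uid=1012 -v usage="$usage" \
    '$1 == uid { printf "%s %d\n", $2, usage * $3 / 100 }' /tmp/apportion.tbl
```

This prints "PROJ_A 2500888" and "PROJ_B 1667258" (fractions truncated).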
13.2.3.2. Accounting Intervals and Charges The time between accounting runs and the charging policy for space usage should be developed after consulting the accounting requirements. The following are some guidelines to consider: • Accounting should be run at regular intervals, such as once per month. • An accounting run may take several minutes, and the storage system will probably be active during the run.
Chapter 14. User Interfaces This chapter provides configuration information for the user interfaces provided with HPSS for transferring files: • Client Application Programming Interface (API) • Parallel File Transfer Protocol (FTP) or PFTP • HPSS Virtual File System (VFS) Interface 14.1. Client API Configuration The following environment variables can be used to define the Client API configuration. The defaults for these variables are defined in the hpss_env_defs.h file.
write operations between cache invalidates. The default value is 20. HPSS_API_HOSTNAME specifies the hostname to be used for TCP/IP listen ports created by the Client API. The default value is HPSS_HOST. This value can have a significant impact on data transfer performance for data transfers that are handled by the Client API (i.e., those that use the hpss_Read and hpss_Write interfaces).
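For example, to direct Client API data connections onto a particular interface before starting a client application (the hostname is an invented example):

```shell
# Direct Client API TCP/IP listen ports to a specific interface.
# The hostname below is illustrative, not a real HPSS host.
HPSS_API_HOSTNAME=hpss-data.example.com
export HPSS_API_HOSTNAME
echo "$HPSS_API_HOSTNAME"
```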
etc/hpss.keytab. HPSS_UNIX_KEYTAB_FILE specifies the name of the file containing the security keys necessary for successfully initializing the Client API for UNIX authentication. The default is auth_keytab:/var/hpss/etc/hpss.unix.keytab. 14.2. FTP/PFTP Daemon Configuration The pftp_client binary is an enhanced ftp client providing parallel data transfer facilities, HPSS specific features, and transfer of files greater than 4GB.
non-HPSS Parallel FTP Daemon (DIS2COM PFTP Daemon). This file should be customized as needed. Refer to the HPSS.conf man page or the HPSS Installation Guide, Appendix D for details. NOTE: it may be necessary for a site to merge older copies of the HPSS.conf file with the template if modifications have been made since the prior release. There is no conversion job to perform this task. Step 2. Configuring the FTP/PFTP Daemon Syslog The PFTP Daemon attempts to write to the system log (using syslog()).
# HPSS file systems rather than the local file system of the # host executing the FTP Daemon. This is highly recommended! banner # Control for logging (sent to syslog()). log [ commands ] [ anonymous ] [ { inbound } ] [ transfers ] [ guest ] [ { outbound } ] [ real ] [debug] # Set the maximum number of login attempts before closing the # connection: loginfails # Determine the appropriate behavior when needed data must be staged.
• Message/readme/banner/shutdown (message lines) are text files, with the following keywords (all one character in length) recognized, identified by a preceding %: Table 5.
days of the week. Step 4. Creating FTP Users In order for an HPSS user to use FTP, a UNIX and/or Kerberos userid and password must be created. Refer to Section 3.3.2.1: The hpssuser Utility on page 35 for information on how to use the hpssuser utility to create the userid and password and set up the necessary configuration for the user to use FTP. Note that this step should not be done until the Core Server is running so that the hpssuser utility can create the home directory for the FTP user.
character device (/dev/hpssfs0) that is used to communicate with the HPSS VFS Daemon (hpssfsd). All POSIX I/O system calls like open, read, write, close, etc. are first handled by the VFS abstraction layer after which they are passed down to appropriate functions in the Kernel Module. The Kernel Module translates the POSIX I/O requests into HPSS requests and then forwards these requests (via /dev/hpssfs0) to the HPSS VFS Daemon (hpssfsd), which in turn sends requests to the Core Server.
% make BUILD_ROOT=/tmp/vfs_client build-clnt build-fs

Create a tar file that can be used to build the client code on the VFS machine:

% cd /tmp/vfs_client
% tar -cvf ../vfs_client.tar *

Copy (i.e. scp) the new tar file to the client machine. On the client machine, untar and build the client tree.

% cd /opt/hpss
% tar -xvf vfs_client.tar

Update Makefile.macros if needed. If compiling in 64-bit mode, make sure that:

BIT64_SUPPORT = on
USE_PTHREAD_25 = on

14.3.3.2.
% rmmod hpssfs # this ensures that there isn't a pre-existing module loaded
% modprobe hpssfs
% make config
% MAKEDEV hpssfs
% /sbin/chkconfig --add hpssfs

Build and install the application daemon by following the instructions given by the build-help target of the makefile in the directory above this one.

% cd /opt/hpss/src/fs

Follow the instructions reported by issuing:

% make build-help

Build the HPSS VFS Daemon.

% make

Install the HPSS VFS Daemon.
% mkdir /var/hpss/cred % mkdir /var/hpss/tmp On the Core Server machine, use mkhpss to create the client config bundle: % mkhpss Select "Create Config Bundle" to create a client config bundle that contains config. files from the Core Server machine: [ Adding HPSS Local passwd/group/shadow files to Bundle] [ Verifying that all files to be packaged exist ] [ generating client config bundle in /tmp/hpss_cfg.tar ] env.conf ep.conf site.conf auth.conf authz.conf HPSS.
• /opt/hpss/lib/libhpssunixauth.so
• /opt/hpss/lib/libhpssldapauthz.so

If the code is installed in a non-standard location (not /opt/hpss), update the paths in auth.conf and authz.conf to use the correct location. If using Kerberos authentication, modify the /etc/krb5.conf file on the client machine to list the Core Server in both the 'realms' and 'domain_realm' sections. Example:

default = FILE:/var/hpss/log/krb5libs.log
kdc = FILE:/var/hpss/log/krb5kdc.log
admin_server = FILE:/var/hpss/log/kadmind.
14.4. Mounting VFS Filesystems An HPSS fileset or directory is made available for user access by mounting it using the mount(8) command. The mount command accepts the mount input options directly from the command line or from the corresponding entry defined in the /etc/fstab file. By defining the mount options in the /etc/fstab file, the mount command can be issued in a much simpler fashion. 14.4.1.
none                     /dev/shm                tmpfs   defaults          0 0
none                     /proc                   proc    defaults          0 0
none                     /sys                    sysfs   defaults          0 0
/dev/VolGroup00/LogVol01 swap                    swap    defaults          0 0
jupiter:/auto/home       /home                   nfs     defaults          0 0
LOCAL:/                  /tmnt                   hpssfs  noauto,cos=2      0 0
LOCAL:/home/user1        /home/hpss/user1        hpssfs  noauto,maxsegsz   0 0
remote-site:/home/user2  /home/remote-hpss/user2 hpssfs  noauto,ip=eth0    0 0

The last three entries are VFS Interface mount entries (indicated by the file system type “hpssfs”).
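With noauto entries such as those above in place, a filesystem can be mounted by mount point alone, or the same options can be supplied directly on the command line (the paths and options here mirror the example entries):

```
% mount /tmnt
% mount -t hpssfs -o cos=2 LOCAL:/ /tmnt
```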
acdirtimeo
    Seconds to cache directory attributes. The shorter this value, the more frequently communication with HPSS is required. Synchronization between separate VFS nodes or mount points may benefit from a shorter cache period, at the expense of longer latencies.

flushd
    The flush daemons look for data pages that need to be written to HPSS, either because it is time to write them or because the data pages are being requested for other files.
maxiowidth

princ
    Override for principal.

auth
    Override for authorization type.

key
    Override for keytab type.

keytab
    Override for keytab file.

stage / nostage
    Default is stage. This overrides the COS setting. An application can override this by specifying O_NONBLOCK on the open() system call. An application cannot override the nostage setting.
where can be obtained using the mount utility. The trace level can be changed while the hpssfsd is running.
Chapter 15. Backup and Recovery This chapter discusses the following operations: • Backup and recover HPSS metadata • Backup HPSS environment • Recover HPSS user data • Handling DB2 space shortage 15.1. HPSS Metadata Backup and Recovery Each HPSS site is responsible for implementing and executing the HPSS metadata backup and recovery process. The HPSS administrator must ensure that HPSS metadata is backed up on a regular basis.
made to the database. These logs allow DB2 to recover all changes made to the database since the time of the last database backup, forward to the time when DB2 was stopped by a crash, hardware failure, power outage, or similar event. It is vital that the DB2 log files be hosted on a highly reliable disk system. Furthermore, DB2 log mirroring must be used to protect this information on two separate disk systems. The disk systems must also be protected using RAID 1 or RAID 5 configurations.
“full”, “incremental”, or “delta”. Full backups record the full contents of the database at a point in time. Incremental backups record all the changed data since the last full backup. Delta backups record only the data that has changed since the most recent backup (of any type). Obviously, a full backup is necessary as a starting point in the recovery of a database. Incremental and delta backups help reduce the number of transaction logs that must be retrieved and applied during a restore operation.
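As an illustration, a full/incremental/delta sequence might be taken with DB2 commands like the following. The database name and backup path are assumptions for this sketch; note also that DB2 incremental backups require the database's TRACKMOD configuration parameter to be enabled.

```
% db2 backup db subsys1 online to /hpss_backup/subsys1
% db2 backup db subsys1 online incremental to /hpss_backup/subsys1
% db2 backup db subsys1 online incremental delta to /hpss_backup/subsys1
```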
15.1.3. Overview of the DB2 Recovery Process Upon discovering a damaged DB2 container, you must first determine the level of hardware recovery and problem determination available to you. For example, whether or not you were utilizing some level of RAID and can recover a failed disk from a good disk. The second step in recovering damaged or inaccessible DB2 containers is considering any software recovery tools available.
• {HPSS secondary metadata backup path}/subsys1
• /
• Other site-specific filesystems

15.2.2. Operating System Backup

It is also necessary to perform appropriate OS backups on a regular basis. However, these activities are outside the scope of this document. Refer to the relevant OS documentation for the appropriate backup procedures.

15.2.3. Kerberos Backup

If Kerberos authentication is configured, periodic backups of the Kerberos infrastructure will need to be made.
prepare for and to perform the recovery process: 1. Determine the name of the potentially damaged volume. Attempts to read the damaged volume will result in Mover alarm messages being issued to SSM. The alarm messages will contain the name of the physical volume for which the error occurred. Record the volume name. 2. Determine if the volume is actually damaged. Typically, an alarm message from the Mover when it encounters a damaged volume will indicate the source of the problem.
can be used to clean up storage resources and report what data has been lost. The recover utility has several options. These options are used depending on the severity of the damage. Refer to the following sections for more information on how and when to use these options. The recover utility can only recover data from damaged volumes that are part of a copy set, and not from just any hierarchy level.
storage level = 1 on VV = VOL00100 path ( Fileset24: /home/bill/file2) ========= Trying to recover bitfile ========= 0786ab2c-156b-1047-8c81-02608c2f971f 00336b52 4631 10cf 00 00 00 02 storage level = 1 on VV = VOL00100 path ( Fileset24: /home/bill/file3) ========= Trying to recover bitfile ========= . . .
storage level = 1 on VV = VOL00100 path ( Fileset24: /home/bill/file2) lost segments from this storage level offset = 0 , length = 32768 offset = 32768, length = 32768 offset = 65536, length = 32768 At the end of the recovery, no segments or volumes associated with the damaged segments are purged or deleted. About the only thing accomplished by running recover against a damaged storage class that is not part of a copy set is a listing of the damaged segments.
00226b52 4631 10cf 00 00 00 17 At the end of the cleanup, all the virtual and physical volumes associated with the targeted volumes will be deleted. All the physical volumes contained in the virtual volume will be exported from the PVL. The media can then be imported back into HPSS to be reused. At this point, any of the files once residing on the damaged disk volume might be staged from tape and accessed.
15.5.1. DMS Table Spaces Capacity for a DMS table space is the total size of containers allocated to the table space. When a DMS table space reaches capacity (depending on the usage of the table space, 90% is a possible threshold), you should add more space to it. The database manager will automatically rebalance the tables in the DMS table space across all available containers. During rebalancing, data in the table space remains accessible.
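For example, a container can be added to a DMS table space with an ALTER TABLESPACE statement similar to the following (the table space name, container path, and size are illustrative; DB2 rebalances the table space across the containers afterward):

```
% db2 "alter tablespace usersp1 add (file '/db2data/cont2' 640 M)"
```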
HPSS Management Guide Release 7.3 (Revision 1.0)
Chapter 16. Management Tools
16.1. Utility Overview
HPSS provides a variety of command-line utility programs to aid in the use and management of the system. The programs can be grouped into the following major categories:
16.1.1. Fileset and Junction Management
These programs manage HPSS filesets and junctions.
• crtjunction - Creates an HPSS junction.
• deljunction - Deletes an HPSS junction.
• lsfilesets - Lists all HPSS filesets.
• lsjunctions - Lists all HPSS junctions.
16.1.2.
• dumppv_pvr - Lists the physical volumes managed by a particular PVR.
• lshpss - Lists various views of the system configuration.
• lsrb - Lists bitfiles that have not been accessed since a given date/time.
• mps_reporter - Produces human-readable output from MPS summary files.
• plu - Lists files that have a purge record in HPSS metadata. Can be used to see which bitfiles have been purge-locked (i.e., bitfiles that cannot be purged).
• hpss_unix_user - Manages the HPSS UNIX password file.
• hpss_managetables - Creates, deletes, and modifies databases, database tablespaces, tables, views, and constraints. This program is normally run by mkhpss; it should be run by hand only by extremely knowledgeable administrators, as it can destroy database tables.
• xmladdindex - Creates an XML index.
• xmldeleteindex - Deletes an XML index.
• schema_registrar - Registers an XML schema with the database.
can also be used to reset the number-of-segments counter in disk and tape storage maps.
• showdiskmaps - Sends a command to the Core Server in the selected storage subsystem to dump its in-memory disk space allocation maps, then displays that information on standard output.
Appendix A. Glossary of Terms and Acronyms
ACI - Automatic Media Library Client Interface
ACL - Access Control List
ACSLS - Automated Cartridge System Library Software (Storage Technology Corporation)
ADIC - Advanced Digital Information Corporation
accounting - The process of tracking system usage per user, possibly for the purposes of charging for that usage. Also, a log record message type used to log information to be used by the HPSS Accounting process. This message type is not currently used.
bitfile segment - An internal metadata structure, not normally visible, used by the Core Server to map contiguous pieces of a bitfile to underlying storage.
Bitfile Service - Portion of the HPSS Core Server that provides a logical abstraction of bitfiles to its clients.
BMUX - Block Multiplexer Channel
bytes between tape marks - The number of data bytes that are written to a tape virtual volume before a tape mark is required on the physical media.
DEC - Digital Equipment Corporation
delog - The process of extracting, formatting, and outputting HPSS central log records.
deregistration - The process of disabling notification to SSM for a particular attribute change.
descriptive name - A human-readable name for an HPSS server.
device - A physical piece of hardware, usually associated with a drive, that is capable of reading or writing data.
directory - An HPSS object that can contain files, symbolic links, hard links, and other directories.
fileset ID - A 64-bit number that uniquely identifies a fileset.
fileset name - A name that uniquely identifies a fileset.
file system ID - A 32-bit number that uniquely identifies an aggregate.
FTP - File Transfer Protocol
Gatekeeper - An HPSS server that provides two main services: the ability to schedule the use of HPSS resources, referred to as the Gatekeeping Service, and the ability to validate user accounts, referred to as the Account Validation Service.
IBM - International Business Machines Corporation
ID - Identifier
IEC - International Electrotechnical Commission
IEEE - Institute of Electrical and Electronics Engineers
IETF - Internet Engineering Task Force
Imex - Import/Export
import - An operation in which a cartridge and its associated storage space are made available to the HPSS system. An import requires that the cartridge has been physically introduced into a Physical Volume Repository (injected).
local log - An optional circular log maintained by a Log Client. The local log contains formatted messages from all enabled HPSS servers residing on the same node as the Log Client.
Location Server - An HPSS server that helps clients locate the appropriate Core Server and/or other HPSS server to use for a particular request.
mount - An operation in which a cartridge is either physically or logically made readable and/or writable on a drive. For tape cartridges, a mount is a physical operation; for a fixed disk unit, it is a logical operation.
mount point - A place where a fileset is mounted in the XFS and/or HPSS namespaces.
Mover - An HPSS server that provides control of storage devices and data transfers within HPSS.
physical volume - An HPSS object, managed jointly by the Core Server and the Physical Volume Library, that represents a portion of a virtual volume. A virtual volume may be composed of one or more physical volumes, but a physical volume may contain data from no more than one virtual volume.
Physical Volume Library - An HPSS server that manages mounts and dismounts of HPSS physical volumes.
request - A log record message type used to log some action being performed by an HPSS server on behalf of a client.
RISC - Reduced Instruction Set Computer
RMS - Removable Media Service
RPC - Remote Procedure Call
SCSI - Small Computer Systems Interface
security - A log record message type used to log security-related events (e.g., authorization failures).
stage - To copy file data from a level in the file's hierarchy onto the top level in the hierarchy.
start-up - An HPSS SSM administrative operation that causes a server to begin execution.
status - A log record message type used to log processing results. This message type is being used to report status from the HPSS Accounting process.
STK - Storage Technology Corporation
storage class - An HPSS object used to group storage media together to provide storage for HPSS data with specific characteristics.
System Manager - The Storage System Management (SSM) server. It communicates with all other HPSS components requiring monitoring or control. It also communicates with the SSM graphical user interface (hpssgui) and command line interface (hpssadm).
TB - Terabyte (2^40 bytes)
TCP/IP - Transmission Control Protocol/Internet Protocol
trace - A log record message type used to record entry/exit processing paths through HPSS server software.
Appendix B. References
6. 3580 Ultrium Tape Drive Setup, Operator and Service Guide, GA32-0415-00
7. 3584 UltraScalable Tape Library Planning and Operator Guide, GA32-0408-01
8. 3584 UltraScalable Tape Library SCSI Reference, WB1108-00
9. AIX Performance Tuning Guide
10. Data Storage Management (XDSM) API, ISBN 1-85912-190-X
11. HACMP for AIX, Version 4.4: Concepts and Facilities
12. HACMP for AIX, Version 4.4: Planning Guide
13. HACMP for AIX, Version 4.4: Installation Guide
14. HACMP for AIX, Version 4.
33. J. Steiner, C. Neuman, and J. Schiller, "Kerberos: An Authentication Service for Open Network Systems," USENIX 1988 Winter Conference Proceedings (1988).
34. R.W. Watson and R.A. Coyne, "The Parallel I/O Architecture of the High-Performance Storage System (HPSS)," 1995 IEEE MSS Symposium, courtesy of the IEEE Computer Society Press.
35. T.W. Tyler and D.S.
Appendix C. Developer Acknowledgments HPSS is a product of a government-industry collaboration. The project approach is based on the premise that no single company, government laboratory, or research organization has the ability to confront all of the system-level issues that must be resolved for significant advancement in high-performance storage system technology.