HPSS Installation Guide
High Performance Storage System
Release 4.
HPSS Installation Guide Copyright (C) 1992-2002 International Business Machines Corporation, The Regents of the University of California, Sandia Corporation, and Lockheed Martin Energy Research Corporation. All rights reserved. Portions of this work were produced by the University of California, Lawrence Livermore National Laboratory (LLNL) under Contract No. W-7405-ENG-48 with the U.S.
Table of Contents

List of Figures ... 13
List of Tables ... 15
Preface
HPSS Operational Planning ... 42
HPSS Deployment Planning ... 43
2.2 Requirements and Intended Usages for HPSS ... 43
Storage System Capacity ... 43
Required Throughputs
Metadata Monitor ... 77
NFS Daemons ... 77
Startup Daemon ... 78
Storage System Management ... 79
HDM Considerations
Miscellaneous Rules for Backing Up HPSS Metadata ... 144
Chapter 3 System Preparation ... 145
3.1 General ... 145
3.2 Setup Filesystems ... 146
DCE
Chapter 4 HPSS Installation ... 205
4.1 Overview ... 205
Distribution Media ... 205
Installation Packages ... 205
Installation Roadmap
HPSS Configuration Limits ... 250
Using SSM for HPSS Configuration ... 251
Server Reconfiguration and Reinitialization ... 252
6.2 SSM Configuration and Startup ... 252
SSM Server Configuration and Startup
Recommended Settings for Tape Devices ... 411
Chapter 7 HPSS User Interface Configuration ... 413
7.1 Client API Configuration ... 413
7.2 Non-DCE Client API Configuration ... 415
Configuration Files
Performance ... 469
The Global Fileset File ... 469
Appendix A Glossary of Terms and Acronyms ... 471
Appendix B References ... 485
Appendix C Developer Acknowledgments
Appendix G High Availability ... 535
G.1 Overview ... 535
Architecture ... 536
G.2 Planning
September 2002 HPSS Installation Guide Release 4.
List of Figures

Figures 1-1 through 1-3, 2-1 and 2-2, 6-1 through 6-44, 7-1, and H-1 through H-20 (captions omitted)
List of Tables

Tables 1-1, 2-1 through 2-8, 3-1 through 3-6, 4-1, 6-1 through 6-35, and 7-1 through 7-4 (captions omitted)
Preface

Conventions Used in This Book

Example commands that should be typed at a command line will be preceded by a percent sign ('%') and presented in a boldface courier font:

% sample command

Names of files, variables, and variable values will appear in a boldface courier font:

Sample file, variable, or variable value

Any text preceded by a pound sign ('#') should be considered shell script comment lines:

# This is a comment
Chapter 1 HPSS Basics

1.1 Introduction

The High Performance Storage System (HPSS) is software that provides hierarchical storage management and services for very large storage environments. HPSS may be of interest in situations having present and future scalability requirements that are very demanding in terms of total storage capacity, file sizes, data rates, number of objects stored, and numbers of users.
provide scalability and parallelism. The basis for this architecture is the IEEE Mass Storage System Reference Model, Version 5.

1.2.2 High Data Transfer Rate

HPSS achieves high data transfer rates by eliminating overhead normally associated with data transfer operations. In general, HPSS servers establish transfer sessions but are not involved in the actual transfer of data.
added, new classes of service can be set up. HPSS files reside in a particular class of service which users select based on parameters such as file size and performance. A class of service is implemented by a storage hierarchy which in turn consists of multiple storage classes, as shown in Figure 1-2. Storage classes are used to logically group storage media to provide storage for HPSS files.
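The relationships just described (a file resides in a class of service, each class of service is implemented by a storage hierarchy, and a hierarchy consists of ordered storage classes) can be sketched with a few toy data structures. The names and fields below are illustrative only, not the actual HPSS metadata definitions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StorageClass:
    # Logical grouping of storage media (e.g. disk or tape volumes).
    name: str
    media_type: str

@dataclass
class StorageHierarchy:
    # Ordered list of storage classes; level 0 is the top (fastest) level.
    name: str
    levels: List[StorageClass] = field(default_factory=list)

@dataclass
class ClassOfService:
    # Users select a COS based on parameters such as file size and
    # performance; each COS is implemented by one storage hierarchy.
    name: str
    hierarchy: StorageHierarchy
    max_file_size: int

disk = StorageClass("fast-disk", "disk")
tape = StorageClass("archive-tape", "tape")
hier = StorageHierarchy("disk-to-tape", [disk, tape])
cos = ClassOfService("general", hier, max_file_size=2**40)

def top_level(cos: ClassOfService) -> StorageClass:
    """New file data lands in the top storage class of the COS hierarchy."""
    return cos.hierarchy.levels[0]

print(top_level(cos).name)  # fast-disk
```

New data enters at the top of the hierarchy and migrates to lower levels, which is why the sketch exposes a top_level helper.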
1.3.1 HPSS Files, Filesets, Volumes, Storage Segments and Related Metadata

The components used to define the structure of the HPSS name space are filesets and junctions. The components containing user data include bitfiles, physical and virtual volumes, and storage segments. Components containing metadata describing the attributes and characteristics of files, volumes, and storage segments include storage maps, classes of service, hierarchies, and storage classes.
• Virtual Volumes. A virtual volume is used by the Storage Server to provide a logical abstraction or mapping of physical volumes. A virtual volume may include one or more physical volumes. Striping of storage media is accomplished by the Storage Servers by collecting more than one physical volume into a single virtual volume.
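Striping of this kind can be illustrated with a small round-robin mapping from a virtual-volume offset to a physical volume and an offset within it. This is a sketch of the general technique, not the actual HPSS on-media layout:

```python
def vv_to_pv(offset: int, stripe_width: int, num_pvs: int):
    """Map a virtual-volume byte offset to (PV index, offset within PV)
    for a simple round-robin stripe layout.

    stripe_width: bytes written to one PV before moving to the next.
    """
    stripe_index = offset // stripe_width   # which stripe unit overall
    within = offset % stripe_width          # offset inside that unit
    pv_index = stripe_index % num_pvs       # PV holding this unit
    pv_stripe = stripe_index // num_pvs     # stripe units already on that PV
    return pv_index, pv_stripe * stripe_width + within

# With 4 PVs and 64 KB stripe units, consecutive units rotate across PVs.
print(vv_to_pv(0, 65536, 4))       # (0, 0)
print(vv_to_pv(65536, 65536, 4))   # (1, 0)
```

Because consecutive stripe units land on different physical volumes, large sequential transfers can be serviced by several devices in parallel.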
Figure 1-1 Migrate and Stage Operations
Figure 1-2 Relationship of HPSS Data Structures

1.3.2 HPSS Core Servers

HPSS servers include the Name Server, Bitfile Server, Migration/Purge Server, Storage Server, Gatekeeper Server, Location Server, DMAP Gateway, Physical Volume Library, Physical Volume Repository, Mover, Storage System Manager, and Non-DCE Client Gateway. Figure 1-3 provides a simplified view of the HPSS system.
Figure 1-3 The HPSS System

• Name Server (NS). The NS translates a human-oriented name to an HPSS object identifier. Objects managed by the NS are files, filesets, directories, symbolic links, junctions and hard links. The NS provides access verification to objects and mechanisms for manipulating access to these objects. The NS provides a Portable Operating System Interface (POSIX) view of the name space.
• Migration/Purge Server (MPS). The MPS allows the local site to implement its storage management policies by managing the placement of data on HPSS storage media using site-defined migration and purge policies. By making appropriate calls to the Bitfile and Storage Servers, MPS copies data to lower levels in the hierarchy (migration), removes data from the current level once copies have been made (purge), or moves data between volumes at the same level (lateral move).
parallel I/O to that set of resources, and schedules the mounting and dismounting of removable media through the Physical Volume Library (see below).

• Gatekeeper Server (GK). The Gatekeeper Server provides two main services:

A. It provides sites with the ability to schedule the use of HPSS resources using the Gatekeeping Service.
B. It provides sites with the ability to validate user accounts using the Account Validation Service.
exactly one PVR. Multiple PVRs are supported within an HPSS system. Each PVR is typically configured to manage the cartridges for one robot utilized by HPSS. For information on the types of tape libraries supported by HPSS PVRs, see Section 2.4.2: Tape Robots on page 54. An Operator PVR is provided for cartridges not under control of a robotic library. These cartridges are mounted on a set of drives by operators.

• Mover (MVR).
purge, and storage servers must now exist within a storage subsystem. Each storage subsystem may contain zero or one gatekeepers to perform site specific user level scheduling of HPSS storage requests or account validation. Multiple storage subsystems may share a gatekeeper. All other servers continue to exist outside of storage subsystems. Sites which do not need multiple name and bitfile servers are served by running an HPSS with a single storage subsystem.
Storage Subsystems will effectively be running an HPSS with a single Storage Subsystem. Note that sites are not required to use multiple Storage Subsystems. Since the migration/purge server is contained within the storage subsystem, migration and purge operate independently in each storage subsystem. If multiple storage subsystems exist within an HPSS, then there are several migration/purge servers operating on each storage class.
provides HPSS with an environment in which a job or action that requires the work of multiple servers either completes successfully or is aborted completely within all servers.

• Metadata Management. Each HPSS server component has system state and resource data (metadata) associated with the objects it manages. Each server with non-volatile metadata requires the ability to reliably store its metadata.
whereby a user's access permissions to an HPSS bitfile are specified by the HPSS bitfile authorization agent, the Name Server. These permissions are processed by the bitfile data authorization enforcement agent, the Bitfile Server. The integrity of the access permissions is certified by the inclusion of a checksum that is encrypted using the security context key shared between the HPSS Name Server and Bitfile Server.

• Logging.
and the HPSS Movers. This provides the potential for using multiple client nodes as well as multiple server nodes. PFTP supports transfers via TCP/IP. The FTP client communicates directly with HPSS Movers to transfer data at rates limited only by the underlying communications hardware and software.

• Client Application Program Interface (Client API). The Client API is an HPSS-specific programming interface that mirrors the POSIX.
1.3.6 HPSS Management Interface

HPSS provides a powerful SSM administration and operations GUI through the use of the Sammi product from Kinesix Corporation. Detailed information about Sammi can be found in the Sammi Runtime Reference, Sammi User's Guide, and Sammi System Administrator's Guide. SSM simplifies the management of HPSS by organizing a broad range of technical data into a series of easy-to-read graphic displays.
• Logging Policy. The logging policy controls the types of messages to log. On a per server basis, the message types to write to the HPSS log may be defined. In addition, for each server, options to send Alarm, Event, or Status messages to SSM may be defined.

• Security Policy. Site security policy defines the authorization and access controls to be used for client access to HPSS.
• Gatekeeping Policy. The Gatekeeper Server provides a Gatekeeping Service along with an Account Validation Service. These services provide the mechanism for HPSS to communicate information through a well-defined interface to a policy software module that can be completely written by a site. The site policy code is placed in well-defined shared libraries for the gatekeeping policy and the accounting policy (/opt/hpss/lib/libgksite.[a|so] and /opt/hpss/lib/libacctsite.
The MPI-IO API can be ported to any platform that supports a compatible host MPI and the HPSS Client API (DCE or Non-DCE version). See Section 2.5.6: MPI-IO API on page 60 for determining a compatible host MPI. The XFS HDM is supported on standard Intel Linux platforms as well as the OpenNAS network appliance from Consensys Corp.

1.4.2 Server and Mover Platforms

HPSS currently requires at least one AIX or Solaris machine for the core server components.
Chapter 2 HPSS Planning

2.1 Overview

This chapter provides HPSS planning guidelines and considerations to help the administrator effectively plan, and make key decisions about, an HPSS system.
that are introduced into your HPSS system. For example, if you plan to use HPSS to back up all of the PCs in your organization, it would be best to aggregate the individual files into larger files before moving them into the HPSS name space. The following planning steps must be carefully considered for the HPSS infrastructure configuration and the HPSS configuration phases:

1.
6. Define the HPSS storage characteristics and create the HPSS storage space to satisfy the site's requirements:

◆ Define the HPSS file families. Refer to Section 2.9.4: File Families on page 105 for more information about configuring families.
◆ Define filesets and junctions. Refer to Section 8.7: Creating Filesets and Junctions on page 465 for more information.
◆ Define the HPSS storage classes. Refer to Section 2.9.
If deciding to purchase Sun or SGI servers for storage purposes, note that OS limitations will only allow a static number of raw devices to be configured per logical unit (disk drive or disk array). Solaris currently allows only eight partitions per logical unit (one of which is used by the OS). Irix currently allows only sixteen partitions per logical unit. These numbers can potentially impact the utilization of a disk drive or disk array. Refer to Section 2.
2.1.4 HPSS Deployment Planning

The successful deployment of an HPSS installation is a complicated task which requires reviewing customer/system requirements, integration of numerous products and resources, proper training of users/administrators, and extensive integration testing in the customer environment.
2.2.2 Required Throughputs

Determine the required or expected throughput for the various types of data transfers that the users will perform. Some users want quick access to small amounts of data. Other users have huge amounts of data they want to transfer quickly, but are willing to wait for tape mounts, etc. In all cases, plan for peak loads that can occur during certain time periods.
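When sizing for peak load, a simple worst-case calculation (every class of transfer active at its peak simultaneously) gives a first estimate of the aggregate throughput the system must sustain. The workload mix below is entirely hypothetical:

```python
# Hypothetical workload mix: name -> (concurrent streams, MB/s per stream)
workloads = {
    "interactive small files": (20, 0.5),
    "bulk tape staging":       (4, 15.0),
    "nightly backup":          (2, 30.0),
}

def peak_aggregate_mb_s(workloads):
    """Worst case: every workload hits its peak at the same time."""
    return sum(streams * rate for streams, rate in workloads.values())

print(peak_aggregate_mb_s(workloads))  # 130.0
```

A real plan would refine this with duty cycles and time-of-day overlap, but the worst-case sum is a useful upper bound when choosing network and device bandwidth.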
2.2.7 Security

The process of defining security requirements is called developing a site security policy. It will be necessary to map the security requirements into those supported by HPSS. HPSS authentication, authorization, and audit capabilities can be tailored to a site's needs. Authentication and authorization between HPSS servers is done through use of DCE cell security authentication and authorization services.
The Generic Security Service (GSS) FTP (available from the Massachusetts Institute of Technology, MIT) and Parallel FTP applications may also take advantage of the Cross Cell features for authentication and authorization. Use of Kerberos/DCE Credentials with the HPSS Parallel FTP Daemon requires using the hpss_pftpd_amgr server and an associated authentication manager in place of the standard hpss_pftpd.
For U.S. sites, assuming some level of encryption is desired for secure DCE communication, the DCE Data Encryption Standard (DES) library routines are required. For non-U.S. sites, or sites desiring to use non-DES encryption, the DCE User Data Masking Encryption Facility is required. Note that if either of these products is ordered, install it on all nodes containing any subset of DCE and/or Encina software.
2.3.1.3 XFS

HPSS uses the open source Linux version of SGI's XFS filesystem as a front-end to an HPSS archive. The following nodes must have XFS installed:

• Nodes that run the HPSS/XFS HDM servers

2.3.1.4 Encina

HPSS uses the Encina distributed transaction processing software developed by Transarc Corporation, including the Encina Structured File Server (SFS), to manage all HPSS metadata.
The HPSS Server Sammi License and, optionally, the HPSS Client Sammi License available from the Kinesix Corporation are required. The Sammi software must be installed separately prior to the HPSS installation. In addition, the Sammi license(s) for the above components must be obtained from Kinesix and set up as described in Section 4.5.3: Set Up Sammi License Key (page 213) before running Sammi.
C++ interfaces may be selectively disabled in Makefile.macros if these components of MPI-IO cannot be compiled.

• Sites using the Command Line SSM utility, hpssadm, will require Java 1.3.0 and JSSE (the Java Secure Sockets Extension) 1.0.2. These are required not only for hpssadm itself but also for building the SSM Data Server to support hpssadm.

2.3.2 Prerequisite Summary for AIX

2.3.2.1 HPSS Server/Mover Machine - AIX

1. AIX 5.
2.3.4 Prerequisite Summary for Solaris

2.3.4.1 HPSS Server/Mover Machine

1. Solaris 5.8
2. DCE for Solaris Version 3.2 (patch level 1 or later)
3. DFS for Solaris Version 3.1 (patch level 4 or later) if HPSS HDM is to be run on the machine
4. TXSeries 4.3 for Solaris from WebSphere 3.5 (patch level 4 or later)
5. HPSS Server Sammi License (Part Number 01-0002100-A, version 4.
2.3.5 Prerequisite Summary for Linux

2.3.5.1 Intel HPSS/XFS HDM Machine

1. Linux kernel 2.4.18 or later (available via FTP from ftp://www.kernel.org/pub/linux/kernel/v2.4)
2. Linux XFS 1.1 (available via FTP as a 2.4.18 kernel patch at ftp://oss.sgi.com/projects/xfs/download/Release-1.1/kernel_patches)
3. Userspace packages (available via FTP as RPMs or tars from ftp://oss.sgi.com/projects/xfs/download/Release-1.1/cmd_rpms and ftp://oss.sgi.
2.3.5.2 HPSS Non-DCE Mover Machine

1. Linux kernel 2.4.18
2. HPSS KAIO Patch

It will be necessary to apply the HPSS KAIO kernel patch (kaio-2.4.18-1). This patch adds asynchronous I/O support to the kernel, which is required for the Mover. The procedure for applying this patch is outlined in Section 3.10: Setup Linux Environment for Non-DCE Mover on page 195.

2.3.5.3 HPSS Non-DCE Client API Machine

1. Redhat Linux, version 7.1 or later
2.
transfer method, which provides for intra-machine transfers between either Movers or Movers and HPSS clients directly via a shared memory segment. Along with shared memory, HPSS also supports a Local File Transfer data path, for client transfers that involve HPSS Movers that have access to the client's file system. In this case, the HPSS Mover can be configured to transfer the data directly to or from the client's file.
PVR. Each tape is assigned to exactly one PVR when it is imported into the HPSS system and will only be mounted in drives managed by that PVR. The tape libraries supported by HPSS are:

• IBM 3494/3495
• IBM 3584
• STK Tape Libraries that support ACSLS
• ADIC AML

2.4.2.1 IBM 3494/3495

The 3494/3495 PVR supports BMUX, Ethernet, and RS-232 (TTY) attached robots. If appropriately configured, multiple robots can be accessible from a single machine.
2.4.2.5 Operator Mounted Drives

An Operator PVR is used to manage a homogeneous set of manually mounted drives. Tape mount requests will be displayed on an SSM screen.

2.4.3 Tape Devices

The tape devices/drives supported by HPSS are listed below, along with the supported device host attachment methods for each device.

• IBM 3480, 3490, 3490E, 3590, 3590E and 3590H are supported via SCSI attachment.
• IBM 3580 devices are supported via SCSI attachment.
Table 2-1 Cartridge/Drive Affinity Table

Cartridge Type       Drive Preference List
Single-Length 3590   Single-Length 3590, Double-Length 3590, Single-Length 3590E, Double-Length 3590E, Single-Length 3590H, Double-Length 3590H
Double-Length 3590   Double-Length 3590, Double-Length 3590E, Double-Length 3590H
Single-Length 3590E  Single-Length 3590E, Double-Length 3590E, Double-Length 3590H
Double-Length 3590E  Double-Length 3590E, Double-Length 3590H
Single-Length 3590H  Single-Length 3590H, Double-Length 3590H
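Applying an affinity table of this kind amounts to walking a cartridge's preference list and picking the first drive type that is currently free. A small sketch using a subset of the table (the selection logic is illustrative, not HPSS source code):

```python
# Subset of Table 2-1: cartridge type -> ordered drive preference list.
PREFERENCES = {
    "Single-Length 3590": ["Single-Length 3590", "Double-Length 3590",
                           "Single-Length 3590E", "Double-Length 3590E",
                           "Single-Length 3590H", "Double-Length 3590H"],
    "Double-Length 3590": ["Double-Length 3590", "Double-Length 3590E",
                           "Double-Length 3590H"],
}

def pick_drive(cartridge: str, free_drive_types: set):
    """Return the most-preferred free drive type for a cartridge, or None
    if no compatible drive type is free."""
    for drive in PREFERENCES.get(cartridge, []):
        if drive in free_drive_types:
            return drive
    return None

print(pick_drive("Double-Length 3590", {"Double-Length 3590E"}))
# Double-Length 3590E
```

Note that a double-length cartridge never falls back to a single-length drive, which is exactly what the table's preference lists encode.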
• High Availability HPSS configuration

2.5 HPSS Interface Considerations

This section describes the user interfaces to HPSS and the various considerations that may impact the use and operation of HPSS.

2.5.1 Client API

The HPSS Client API provides a set of routines that allow clients to access the functions offered by HPSS.
2.5.3 FTP

HPSS provides an FTP server that supports standard FTP clients. Extensions are also provided to allow additional features of HPSS to be utilized and queried. Extensions are provided for specifying Class of Service to be used for newly created files, as well as directory listing options to display Class of Service and Accounting Code information. In addition, the chgrp, chmod, and chown commands are supported as quote site options.
a stateless protocol. This allows use of a connectionless networking transport protocol (UDP) that requires much less overhead than the more robust TCP. As a result, client systems must time out requests to servers and retry requests that have timed out before a response is received. Client timeout values and retransmission limits are specified when a remote file system is mounted on the client system.
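The timeout-and-retransmit behavior described above can be sketched as a small retry loop. Doubling the timeout between retries is a common backoff scheme; the parameter names echo the usual NFS mount options (timeo, retrans), but the simulation below is illustrative, not the actual NFS client code:

```python
def send_with_retries(send_once, timeo=1.1, retrans=3):
    """NFS-style client retry over an unreliable, connectionless transport:
    retry up to `retrans` times, doubling the timeout on each attempt.
    `send_once(timeout)` returns a reply, or None on (simulated) timeout."""
    timeout = timeo
    for attempt in range(retrans + 1):
        reply = send_once(timeout)
        if reply is not None:
            return reply
        timeout *= 2  # back off before retransmitting
    raise TimeoutError("server not responding, giving up")

# Simulated server that drops the first two requests, then answers.
drops = {"n": 0}
def flaky(timeout):
    drops["n"] += 1
    return "ok" if drops["n"] > 2 else None

print(send_with_retries(flaky))  # ok
```

Because the server keeps no per-client state, a retried request is safe to service again; the cost of lost packets falls entirely on the client's timeout and retry settings.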
that an implementation is thread-safe provided only one thread makes MPI calls. With HPSS MPI-IO, multiple threads will make MPI calls. HPSS MPI-IO attempts to impose thread-safety on these hosts by utilizing a global lock that must be acquired in order to make an MPI call.
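The global-lock approach can be illustrated with a short threading sketch: every call into the non-thread-safe library is wrapped by a single lock, so at most one thread is ever inside it. This is a sketch of the pattern, not the actual HPSS MPI-IO code:

```python
import threading

_mpi_lock = threading.Lock()  # one global lock guarding all "MPI" calls

counter = {"value": 0, "in_call": 0}

def mpi_call():
    """Stand-in for an MPI routine that is not thread-safe: it would
    misbehave if two threads entered it concurrently."""
    with _mpi_lock:
        counter["in_call"] += 1
        assert counter["in_call"] == 1  # never two callers at once
        counter["value"] += 1
        counter["in_call"] -= 1

threads = [threading.Thread(target=lambda: [mpi_call() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter["value"])  # 4000
```

The trade-off is the one the text implies: correctness is preserved, but MPI calls from different threads are fully serialized, so they gain no concurrency from one another.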
2.5.8 XFS

XFS for Linux is an open source filesystem from SGI, based on SGI's XFS filesystem for IRIX. HPSS has the capability to back-end XFS and transparently archive inactive data. This frees XFS disk to handle data that is being actively utilized, giving users the impression of an infinitely large XFS filesystem that performs at near-native XFS speeds. It is well suited to sites with large numbers of small files or clients who wish to use NFS to access HPSS data.
2.6.2 Bitfile Server

The Bitfile Server (BFS) provides a view of HPSS as a collection of files. It provides access to these files and maps the logical file storage into underlying storage objects in the Storage Servers. When a BFS is configured, it is assigned a server ID. This value should never be changed. It is embedded in the identifier that is used to name bitfiles in the BFS. This value can be used to link the bitfile to the Bitfile Server that manages the bitfile.
2.6.3 Disk Storage Server

Each Disk Storage Server manages random access magnetic disk storage units for HPSS. It maps each disk storage unit onto an HPSS disk Physical Volume (PV) and records configuration data for the PV. Groups of one or more PVs (disk stripe groups) are managed by the server as disk Virtual Volumes (VVs). The server also maintains a storage map for each VV that describes which portions of the VV are in use and which are free.
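A storage map of this kind can be modeled as a list of free extents, with allocation carving space out of the first extent large enough (first fit). The class below is a toy illustration, not the real HPSS storage map format:

```python
class StorageMap:
    """Toy first-fit map of free extents on a virtual volume.
    Extents are (start, length) pairs in VV address units."""

    def __init__(self, vv_size):
        self.free = [(0, vv_size)]  # initially the whole VV is free

    def allocate(self, length):
        """Return the start of a newly allocated extent, or None if no
        single free extent is large enough."""
        for i, (start, avail) in enumerate(self.free):
            if avail >= length:
                if avail == length:
                    del self.free[i]          # extent consumed exactly
                else:
                    self.free[i] = (start + length, avail - length)
                return start
        return None

m = StorageMap(100)
a = m.allocate(30)   # -> 0
b = m.allocate(50)   # -> 30
c = m.allocate(30)   # -> None (only 20 units remain free)
print(a, b, c)
```

A real map would also support freeing and coalescing adjacent extents; the sketch shows only the in-use/free bookkeeping the text describes.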
The Tape Storage Server is designed to scale up its ability to manage tapes as the number of tapes increases. As long as sufficient memory and CPU capacity exist, threads can be added to the server to increase its throughput. Additional Storage Subsystems can also be added to a system, increasing concurrency even further. Note that the number of tape units the server manages has much more to do with the throughput of the server than the number of tapes the server manages.
storage class reaches the threshold configured in the purge policy for that storage class. Remember that simply adding migration and purge policies to a storage class will cause MPS to begin running against the storage class, but it is also critical that the hierarchies to which that storage class belongs be configured with proper migration targets in order for migration and purge to perform as expected.
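Purge driven by policy thresholds is typically implemented with hysteresis: purging starts when used space crosses an upper threshold and continues until usage falls below a lower target. The controller below is a hypothetical sketch of that behavior, not the MPS implementation:

```python
def purge_controller(used_fraction, purging,
                     start_threshold=0.90, stop_threshold=0.70):
    """Decide whether purging should run, with hysteresis: start when
    usage exceeds start_threshold, keep running until usage drops to
    stop_threshold. Thresholds here are illustrative defaults."""
    if not purging:
        return used_fraction >= start_threshold
    return used_fraction > stop_threshold

# Usage crosses 90% -> purging starts; it keeps running at 80% and only
# stops once usage falls below the 70% target.
state = False
for used in (0.85, 0.92, 0.80, 0.69):
    state = purge_controller(used, state)
    print(used, state)
```

The gap between the two thresholds prevents purge from flapping on and off as usage hovers near a single trigger point.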
There are two different tape migration algorithms, tape volume migration and tape file migration. The algorithm which is applied to a tape storage class is selected in the migration policy for that class.
MPS provides the capability of generating migration/purge report files that document the activities of the server. The specification of the UNIX report file name prefix in the MPS server specific configuration enables the server to create these report files. It is suggested that a complete path be provided as part of this file name prefix. Once reporting is enabled, a new report file is started every 24 hours.
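Starting a new report file every 24 hours is commonly done by appending a date stamp to the configured prefix. The suffix format and path below are hypothetical, shown only to illustrate why a complete path is suggested as part of the prefix:

```python
import datetime

def report_file_name(prefix, day):
    """Build one report file name per 24-hour period by appending the
    date to the configured prefix (suffix format is hypothetical)."""
    return "{0}.{1:%Y%m%d}".format(prefix, day)

print(report_file_name("/var/hpss/mps_report", datetime.date(2002, 9, 1)))
# /var/hpss/mps_report.20020901
```

With an absolute path in the prefix, each day's report lands in a predictable directory rather than wherever the server happens to be running.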
are associated with storage subsystems using the Storage Subsystem Configuration screen (see Section 6.4: Storage Subsystems Configuration on page 259). If a storage subsystem has no Gatekeeper, then the Gatekeeper field will be blank. A single Gatekeeper can be associated with every storage subsystem, a group of storage subsystems, or one storage subsystem. A storage subsystem can NOT use more than one Gatekeeper.
requests from a particular host or user. The Site Interfaces will be located in a shared library that is linked into the Gatekeeper Server. It is important that the Site Interfaces return a status in a timely fashion.
might be necessary by making requests to the appropriate Physical Volume Repository (PVR). The PVL communicates directly with HPSS Movers in order to verify media labels. The PVL is not required to be co-resident with any other HPSS servers and is not a CPU-intensive server. With its primary duties being queuing, managing requests, and association of physical volumes with PVRs, the PVL should not add appreciable load to the system.
2.6.9.2 LTO PVR

The LTO PVR manages the IBM 3584 Tape Library and Robot, which mounts, dismounts, and manages LTO tape cartridges and IBM 3580 tape drives. The PVR uses the Atape driver interface to issue SCSI commands to the library. The SCSI control path to the library controller device (/dev/smc*) is shared with the first drive in the library (typically /dev/rmt0).
2.6.10.1 Asynchronous I/O

Asynchronous I/O must be enabled manually on AIX and Linux platforms. There should be no asynchronous I/O setup required for Solaris or IRIX platforms.
6. Now, rebuild the kernel configuration by running the "make config" command and answering "yes" when questioned about AIO support. The default value of 4096 should be sufficient for the number of system-wide AIO requests. At this time, you should also configure the kernel to support your disk or tape devices. If tape device access is required, be sure to also enable the kernel for SCSI tape support.
2.6.10.2.2 Solaris

For Solaris, the method used to enable variable block sizes for a tape device depends on the type of driver used. Supported devices include the Solaris SCSI Tape Driver and the IBM SCSI Tape Driver. For the IBM SCSI Tape Driver, set the block_size parameter in the /opt/IBMtape/IBMtape.conf configuration file to 0 and perform a reboot with the reconfiguration option. The Solaris SCSI Tape Driver has a built-in configuration table for all HPSS supported tape drives.
The Linux raw device driver is used to bind a Linux raw character device to a block device. Any block device may be used. See the Linux manual page for more information on the SCSI Disk Driver, the Raw Device Driver and the fdisk utility. To enable the loading of the Linux native SCSI disk device, uncomment the following lines in the .config file and follow the procedure for rebuilding your Linux kernel.
Chapter 2 • HPSS Planning Mover to Mover data transfers (accomplished for migration, staging, and repack operations) also will impact the planned Mover configuration. For devices that support storage classes for which there will be internal HPSS data transfers, the Movers controlling those devices should be configured such that there is an efficient data path among them.
Chapter 2 HPSS Planning Even if no client NFS access is required, the NFS interface may provide a useful mechanism for HPSS name space object administration. The HPSS NFS Daemon cannot be run on a processor that also runs the native operating system's NFS daemon. Therefore it will not be possible to export both HPSS and native Unix file systems from the same processor. In addition the NFS daemon will require memory and local disk storage to maintain caches for HPSS file data and attributes.
Chapter 2 HPSS Planning use the descriptive name “Startup Daemon (tardis)”. In addition, choose a similar convention for CDS names (for example, /.:/hpss/hpssd_tardis). The Startup Daemon is started by running the script /etc/rc.hpss. This script should be added to the /etc/inittab file during the HPSS infrastructure configuration phase. However, the script should be manually invoked after the HPSS is configured and whenever the Startup Daemon dies.
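As a hypothetical illustration only (the entry name, runlevel, and output redirection are assumptions, not values prescribed by HPSS), the /etc/inittab line that invokes the startup script might look like:

```
rchpss:2:once:/etc/rc.hpss > /dev/console 2>&1
```

Whatever entry name is chosen, the action should be "once" so the script runs a single time at boot rather than being respawned by init.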
requests to the DMAP Gateway. Migration processes (hpss_hdm_mig) migrate data to HPSS, and purge processes (hpss_hdm_pur) purge migrated data from DFS and XFS. A set of processes (hpss_hdm_tcp) accept requests from the DMAP Gateway, and perform the requested operation in DFS. A destroy process (hpss_hdm_dst) takes care of deleting files. Finally, XFS HDMs have a process that watches for stale events (hpss_hdm_stl) and keeps the HDM from getting bogged down by them.
Chapter 2 HPSS Planning must be in use before purging begins, and a lower bound specifying the target percentage of free space to reach before purging is stopped. 2.6.17 Non-DCE Client Gateway The Non-DCE Client Gateway provides HPSS access to applications running without DCE and/or Encina which make calls to the Non-DCE Client API. It does this by calling the appropriate Client APIs itself and returning the results to the client.
Chapter 2 2.8.1 HPSS Planning Migration Policy The migration policy provides the capability for HPSS to copy (migrate) data from one level in a hierarchy to one or more lower levels. The migration policy defines the amount of data and the conditions under which it is migrated, but the number of copies and the location of those copies is determined by the storage hierarchy definition.
Chapter 2 HPSS Planning • The Migrate At Warning Threshold option causes MPS to begin a migration run immediately when the storage class warning threshold is reached regardless of when the Runtime Interval is due to expire. This option allows MPS to begin migration automatically when it senses that a storage space crisis may be approaching. • The Migrate At Critical Threshold option works the same as the Migrate At Warning Threshold option except that this flag applies to the critical threshold.
laterally to another volume in the same storage class. Tape file migration with purge avoids moving read active files at all. If a file is read inactive, all three algorithms migrate it down the hierarchy. The purpose of this field is to avoid removing the higher level copy of a file which is likely to be staged again. • The Last Update Interval is used by all of the tape migration algorithms to determine if a file is actively being written. 2.8.2 Purge Policy
• The Start purge when space used reaches percent parameter allows sites to control the amount of free space that is maintained in a disk storage class. A purge run will be started for this storage class when the total space used in this class exceeds this value. • The Stop purge when space used falls to percent parameter allows sites to control the amount of free space that is maintained in a disk storage class.
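The start/stop pair above forms a simple hysteresis: purging begins when usage exceeds the start threshold and continues until usage falls to the stop threshold. A minimal sketch of that decision logic (the 90%/85% thresholds are illustrative assumptions, not HPSS defaults):

```shell
# Sketch of the purge start/stop hysteresis described above.
# Arguments: current %used, start threshold, stop threshold,
# and whether a purge run is already active ("yes"/"no").
purge_decision() {
    used=$1
    start=$2
    stop=$3
    running=$4
    if [ "$running" = "yes" ]; then
        # An active run continues until usage falls to the stop threshold.
        if [ "$used" -le "$stop" ]; then echo "stop"; else echo "continue"; fi
    else
        # A new run starts only once usage exceeds the start threshold.
        if [ "$used" -gt "$start" ]; then echo "start"; else echo "idle"; fi
    fi
}

purge_decision 92 90 85 no      # prints "start"
purge_decision 88 90 85 yes     # prints "continue"
purge_decision 84 90 85 yes     # prints "stop"
```

Keeping a gap between the two thresholds prevents purge runs from starting and stopping in rapid succession as usage hovers near a single value.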
Chapter 2 HPSS Planning In UNIX-style accounting, each user has one and only one account index, their UID. This, combined with their Cell Id, uniquely identifies how the information may be charged. In Site-style accounting, each user may have more than one account index, and may switch between them at runtime. A site must also decide if it wishes to validate account index usage. Prior to HPSS 4.2, no validation was performed.
If a user has their default account index encoded in a string of the form AA= in their DCE account's gecos field or in their DCE principal's HPSS.gecos extended registry attribute (ERA), then Site-style accounting will be used for them. Otherwise it will be assumed that they are using UNIX-style accounting. For this reason, to keep the accounting information consistent, it is important to set up all users in the DCE registry with the same style of accounting (i.e.
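To make the AA= convention concrete, a hypothetical sketch of pulling a Site-style default account index out of a gecos string (the surrounding gecos layout and the value 1234 are assumptions for illustration):

```shell
# Extract a default account index of the form "AA=<index>" from a
# gecos-style string; fall back to UNIX-style accounting if absent.
gecos="Jane Doe,AA=1234,Bldg 5"
account_index=$(echo "$gecos" | sed -n 's/.*AA=\([0-9][0-9]*\).*/\1/p')
if [ -n "$account_index" ]; then
    echo "Site-style accounting, account index $account_index"
else
    echo "UNIX-style accounting (no AA= found)"
fi
```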
Chapter 2 HPSS Planning 2.8.4.3 FTP/PFTP By default, FTP and Parallel FTP (PFTP) interfaces use a username/password mechanism to authenticate and authorize end users. The end user identity credentials are obtained from the principal and account records in the DCE security registry. However, FTP and PFTP users do not require maintenance of a login password in the DCE registry. The FTP/PFTP interfaces allow sites to use site-supplied algorithms for end user authentication.
Chapter 2 2.8.4.7 HPSS Planning Name Space Enforcement of access to HPSS name space objects is the responsibility of the HPSS Name Server. The access rights granted to a specific user are determined from the information contained in the object's ACL. 2.8.4.8 Security Audit HPSS provides capabilities to record information about authentication, file creation, deletion, access, and authorization events. The security audit policy in each HPSS server determines what audit records a server will generate.
2.8.7 Gatekeeping Every Gatekeeper Server has the ability to supply the Gatekeeping Service. The Gatekeeping Service provides a mechanism for HPSS to communicate information through a well-defined interface to a policy software module written entirely by the site. The site policy code is placed in a well-defined site shared library for the gatekeeping policy (/opt/hpss/lib/libgksite.[a|so]) which is linked to the Gatekeeper Server.
Chapter 2 HPSS Planning Site "Stat" Interface will be called (gk_site_CreateStats, gk_site_OpenStats, gk_site_StageStats) and the Site Interface will not be permitted to return any errors on these requests. Otherwise, if AuthorizedCaller is set to FALSE, then the normal Site Interface will be called (gk_site_Create, gk_site_Open, gk_site_Stage) and the Site Interface will be allowed to return no error or return an error to either retry the request later or deny the request.
Chapter 2 HPSS Planning to determine HPSS hardware requirements and determine how to configure this hardware to provide the desired HPSS system. The process of organizing the available hardware into a desired configuration results in the creation of a number of HPSS metadata objects. The primary objects created are classes of service, storage hierarchies, and storage classes. A Storage Class is used by HPSS to define the basic characteristics of storage media.
Chapter 2 HPSS Planning Figure 2-2 Relationship of Class of Service, Storage Hierarchy, and Storage Class 2.9.1 Storage Class Each virtual volume and its associated physical volumes belong to some storage class in HPSS. The SSM provides the capability to define storage classes and to add and delete virtual volumes to and from the defined storage classes. A storage class is identified by a storage class ID and its associated attributes.
Chapter 2 HPSS Planning Explanation: For example, if a site has ESCON attached tape drives on an RS6000, the driver can handle somewhat less than 64 KB physical blocks on the tape. A good selection here would be 32 KB. See Section 2.9.1.12 for recommended values for tape media supported by HPSS. 2.9.1.2 Virtual Volume Block Size Selection (disk) Guideline: The virtual volume (VV) block size must be a multiple of the underlying media block size.
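The multiple-of-media-block-size guideline can be verified with simple arithmetic. A quick sanity check (the 32 KB and 1 MB figures are example values, not recommendations):

```shell
# Check that a candidate virtual volume (VV) block size is a multiple
# of the underlying media block size, per the guideline above.
media_block=32768        # 32 KB media block size
vv_block=1048576         # 1 MB candidate VV block size
if [ $(( vv_block % media_block )) -eq 0 ]; then
    echo "OK: VV block size is a multiple of the media block size"
else
    echo "BAD: choose a multiple of $media_block bytes"
fi
```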
Chapter 2 HPSS Planning cannot be greater than half the number of drives available. Also, doing multiple copies from disk to two tape storage classes with the same media type will perform very poorly if the stripe width in either class is greater than half the number of drives available. The recover utility also requires a number of drives equivalent to 2 times the stripe width to be available to recover data from a damaged virtual volume if invoked with the repack option.
2.9.1.5 Blocks Between Tape Marks Selection Blocks between tape marks is the number of physical media blocks written before a tape mark is generated. The tape marks are generated for two reasons: (1) to force tape controller buffers to flush so that the Mover can better determine what was actually written to tape, and (2) to speed positioning for partial file accesses. Care must be taken, however, not to set this value too low, as that can have a negative impact on performance.
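The amount of data written between tape marks is simply the media block size times the blocks-between-tape-marks setting. For example, using the IBM 3590 values suggested in Table 2-4 (256 KB blocks, 512 blocks between marks):

```shell
# Data written between tape marks = media block size * blocks between marks.
block_size=$(( 256 * 1024 ))     # 256 KB media block size
blocks_between_marks=512
echo "$(( block_size * blocks_between_marks / (1024 * 1024) )) MB between tape marks"
# prints "128 MB between tape marks"
```

A larger product means fewer tape marks (better streaming throughput) at the cost of coarser positioning granularity for partial file accesses.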
Chapter 2 HPSS Planning Explanation: The Class of Service (COS) mechanism can be used to place files in the appropriate place. Note that although the Bitfile Server provides the ability to use COS selection, current HPSS interfaces only take advantage of this in two cases. First, the pput command in PFTP automatically takes advantage of this by selecting a COS based on the size of the file.
Chapter 2 HPSS Planning 2.9.1.10 PV Estimated Size / PV Size Selection Guideline: For tape, select a value that represents how much space can be expected to be written to a physical volume in this storage class with hardware data compression factored in. Explanation: The Storage Server will fill the tape regardless of the value indicated. Setting this value differently between tapes can result in one tape being favored for allocation over another.
Table 2-3 Suggested Block Sizes for Disk

  Disk Type                Media Block Size   Minimum Access Size   Minimum Virtual Volume Block Size   Notes
  Fibre Channel Attached   4 KB               0                     1 MB                                1

In Table 2-3:

• Disk Type is the specific type of media to which the values in the row apply.

• Media Block Size is the block size to use in the storage class definition. For disk, this value should also be used when configuring the Mover devices that correspond to this media type.
Table 2-4 Suggested Block Sizes for Tape

  Tape Type                  Media Block Size   Blocks Between Tape Marks   Estimated Physical Volume Size
  IBM 3490E                  32 KB              512                         800 MB
  IBM 3580                   256 KB             1024                        100 GB
  IBM 3590                   256 KB             512                         10, 20 GB
  IBM 3590E                  256 KB             512                         20, 40 GB
  IBM 3590H                  256 KB             512                         60, 120 GB
  Sony GY-8240               256 KB             1024                        60, 200 GB
  StorageTek 9840            256 KB             1024                        20 GB
  StorageTek 9840 RAIT 1+0   128 KB             512                         20 GB
  StorageTek 9840 RAIT 1+1   128 KB             512                         20 GB
  StorageTek 9840 RAIT
  StorageTek Redwood         256 KB             512                         50 GB
  StorageTek Timberline      64 KB              1024                        800 MB

The STK RAIT PVR cannot be supported at this time since STK has not yet made RAIT generally available.

In Table 2-4:

• Tape Type is the specific type of media to which the values in the row apply.

2.9.2 Storage Hierarchy

A storage hierarchy is identified by a storage hierarchy ID and its associated attributes. For detailed descriptions of each attribute associated with a storage hierarchy, see Section 6.7.2: Configure the Storage Hierarchies (page 315). The following is a list of rules and guidelines for creating and managing storage hierarchies. Rule 1: All writes initiated by clients are directed to the highest level (level 0) in the hierarchy. Rule 2: The data of a file at a storage class level in a hierarchy is associated with a single Storage Server.
Chapter 2 2.9.3.1 HPSS Planning Selecting Minimum File Size Guideline: This field can be used to indicate the smallest file that should be stored in this COS. Explanation: This limit is not enforced and is advisory in nature. If the COS Hints mechanism is used, minimum file size can be used as a criteria for selecting a COS. Currently, PFTP and FTP clients that support the alloc command will use the size hints when creating a new file.
Guideline 4: Select the Stage on Open Background option if you want the stage to be queued internally in the Bitfile Server and processed by a background BFS thread on a scheduled basis. Explanation: The open request will return with success if the file is already staged. If the file needs to be staged, an internal stage request is placed in a queue and will be selected and processed by the Bitfile Server in the background. A busy error is returned to the caller.
Chapter 2 2.9.3.6 HPSS Planning Selecting Transfer Rate This field can be used via the COS Hints mechanism to affect COS selection. Guideline 1: This field should generally be set to the value of the Transfer Rate field in the storage class that is at the top level in the hierarchy. This should always be the case if the data is being staged on open.
Chapter 2 HPSS Planning 2.10.1 HPSS Storage Space HPSS files are stored on the media that is defined to HPSS via the import and create storage server resources mechanisms provided by the Storage System Manager. You must provide enough physical storage to meet the demands of your user environment. HPSS assists you in determining the amount of space needed by providing SSM screens with information on total space and used space in all of the storage classes that you have defined.
bitfile.#          mpchkpt.#          nsacls.#           nsfilesetattrs.#
nsobjects.#        nstext.#           sspvdisk.#         sspvtape.#
storagemapdisk.#   storagemaptape.#   storagesegdisk.#   storagesegtape.#
vvdisk.#           vvtape.#

The following files are part of an HPSS system, but are not associated with a particular subsystem:

accounting         acctsnap           acctsum            acctvalidate
bfs                cartridge_3494     cartridge_3495     cartridge_aml
cartridge_lto      cartridge_operator cartridge_stk      cartridge_stk_rait

The STK RAIT PVR cannot be supported at this time since STK has not yet made RAIT generally available.

cos                dmg                dmgfileset         filefamily
gkconfig           globalconfig       hierarchy          logclient
logdaemon          logpolicy          lspolicy           migpolicy
mmonitor           mountd             mover              moverdevice
mps                ndcg               nfs                nsconfig
nsglobalfilesets   purgepolicy        pvl                pvlactivity
pvldrive           pvljob             pvlpv              pvr
sclassthreshold    serverconfig       site               ss
storageclass       storsubsysconfig

2.10.2.1 Global Configuration Metadata Global Configuration File (globalconfig) - This file contains configuration data that is global to an HPSS system. Since it only ever contains one small metadata record, it plays a negligible role in determining metadata size. 2.10.2.
Chapter 2 HPSS Planning • Metadata Monitor Configurations (mmonitor) • Migration/Purge Server Configurations (mps) • Mount Daemon Configurations (mountd) • Mover Configurations (mover) • NFS Daemon Configurations (nfs) • Non-DCE Client Gateway Configurations (ndcg) • NS Configurations (nsconfig) • PVL Configurations (pvl) • PVR Configurations (pvr) • Storage Server Configurations (ss) General Server Configurations.
Chapter 2 HPSS Planning Mount Daemon Configurations. Each NFS Mount Daemon must have an entry in this configuration metadata file describing various startup/control arguments. There will be one Mount Daemon entry for each NFS server defined. Mover Configurations. Each Mover (MVR) must have an entry in this configuration metadata file describing various startup/control arguments.
Chapter 2 HPSS Planning Server. However, that would leave no space for any symbolic links, hard links, or growth. To cover these needs, the total number of SFS records might be rounded up to 1,500,000. If more name space is needed, additional space can be obtained by allocating more SFS records, by adding more storage subsystems, and/or by “attaching” to a Name Server in another HPSS. Refer to Section 10.7.
Chapter 2 • Bitfile Tape Segments (bftapesegment.#) • BFS Storage Segment Checkpoint (bfsssegchkpt.#) • BFS Storage Segment Unlinks (bfssunlink.#) • Bitfile COS Changes (bfcoschange.#) • Bitfile Migration Records (bfmigrrec.#) • Bitfile Purge Records (bfpurgerec.#) • Accounting Summary Records (acctsum) • Accounting Logging Records (acctlog.#) HPSS Planning Storage Classes. One record is created in this metadata file for each storage class that is defined.
Chapter 2 HPSS Planning map records is the total number of disk storage segments divided by 2. Another way to put an upper bound on the number of disk map records is as follows: • For each disk storage class defined, determine the total amount of disk space in bytes available in the storage class. • Divide this by the storage segment size for the storage class. This will give the maximum number of storage segments that could be created for this storage class.
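The two-step upper bound described above can be sketched as a simple calculation (the 500 GB class size and 1 MB segment size are illustrative assumptions):

```shell
# Upper bound on disk storage segments for one storage class:
# total disk space in the class divided by the storage segment size.
total_disk_bytes=$(( 500 * 1024 * 1024 * 1024 ))   # 500 GB in the class
segment_size=$(( 1024 * 1024 ))                    # 1 MB storage segments
max_segments=$(( total_disk_bytes / segment_size ))
echo "at most $max_segments storage segments for this class"
```

Summing this result over every disk storage class gives the overall bound on disk storage segment (and hence map) records.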
Chapter 2 HPSS Planning bitfiles and bytes stored by the account index in the given COS, 2) if the storage class is not 0, this is a statistics record and contains the total number of bitfile accesses and bytes transferred associated with the account index, COSid, and referenced storage class. The number of records in this file should be the number of users in the HPSS system multiplied by the average number of levels in a hierarchy plus 1. For most configurations, the average number of levels will be 2.
Chapter 2 HPSS Planning Expect both the disk storage segment metadata file and the disk storage map metadata file to be quite volatile. As files are added to HPSS, disk storage segments will be created, and as files are migrated to tape and purged from disk, they will be deleted. If SFS storage is available on a selection of devices, the disk storage segment metadata file and the storage disk map file would be good candidates for placement on the fastest device of suitable size. SS Disk Physical Volumes.
Chapter 2 HPSS Planning SS Tape Physical Volumes. The tape PV metadata file describes each tape physical volume imported into HPSS. The number of records in this file will therefore equal the total number of tape cartridges that will be managed by this Storage Server. SS Tape Virtual Volumes. The tape VV metadata file describes all tape virtual volumes created by this Storage Server. Each VV is described by a separate record in this file.
Chapter 2 HPSS Planning PVL Activities. This metadata file stores individual PVL activity requests such as individual tape mounts. Depending on how many concurrent I/O requests are allowed to flow through the Storage Server and PVL at any given time, this metadata file should not grow beyond a few hundred records. 2.10.2.
Chapter 2 HPSS Planning The MPS shares the following SFS metadata files with the Bitfile Server (the default SFS file names are shown in parentheses). • Migration Policies (migpolicy) • Purge Policies (purgepolicy) • Migration Records (bfmigrrec.#, where # is the storage subsystem ID) • Purge Records (bfpurgerec.#, where # is the storage subsystem ID) Migration Policies. Each basic migration policy and each storage subsystem specific migration policy requires a record in this file.
Chapter 2 HPSS Planning 2.10.2.15 Storage System Management Metadata The SSM System Manager is the primary user of the following SFS metadata files (the default SFS filenames are shown in parentheses): • File Family (filefamily) In addition, the SSM System Manager requires an entry in the generic server configuration file. The SSM Data Server does not require an entry in any SFS file. File Families.
Chapter 2 HPSS Planning 2.10.2.20 Metadata Constraints The generic configurations for all HPSS servers must be contained in a single SFS file. SSM makes this happen transparently. Also, all server-specific configuration entries for a given server type can be in the same SFS file. For example, all Storage Servers should be defined in the single SFS file named ss.
Chapter 2 HPSS Planning Table 2-5 HPSS Dynamic Variables (Subsystem Independent) Variable Description Total Number of Users Defines the total number of users in the HPSS system. This value is used in conjunction with “Avg. Number of Levels Per Hierarchy” to define the size of accounting records. Avg. Number of Levels Per Hierarchy This value is the average number of levels defined in the hierarchies. For most hierarchies which are defined as disk and tape, this value would be 2.
Chapter 2 HPSS Planning Table 2-6 HPSS Dynamic Variables (Subsystem Specific) Variable Description Max Total Bitfiles The maximum number of bitfiles that will exist in HPSS. The spreadsheet also considers this value to also be the total number of bitfiles on tape, since it is assumed that every HPSS bitfile will eventually migrate to tape. If it is expected for 2 million files to be placed in HPSS, enter 2,000,000. This value significantly impacts the overall metadata sizing estimate.
Chapter 2 HPSS Planning Table 2-6 HPSS Dynamic Variables (Continued)(Subsystem Specific) Variable Description Avg. Text Overflows Per Name Space Object A name-space object record can store a filename that is 23 characters long (the base name, not the full pathname). If a filename is longer than 23 characters, a text overflow record must be generated. Also, if a comment is attached to a name-space object, a text overflow record must be created.
Chapter 2 HPSS Planning Table 2-6 HPSS Dynamic Variables (Continued)(Subsystem Specific) Variable Description Total Disk Physical Volumes The maximum total number of disk physical volumes. Total Disk Virtual Volumes The total number of disk virtual volumes that will be created, which is equal to the number of disk physical volumes divided by the average disk stripe width. Total Tape Physical Volumes This is the total number of Physical Volumes that will be managed by this subsystem.
Chapter 2 HPSS Planning Table 2-7 HPSS Static Configuration Values (Continued) Variable Description Total Log Clients The total number of Log Clients that will be used, which will be equal to the total number of nodes running any type of HPSS server. Total Metadata Monitor Servers The total number of Metadata Monitor servers, which should equal the total number of Encina SFS servers used by the HPSS system. Total Movers The total number of Movers that will be created.
Chapter 2 HPSS Planning Total Records—This column shows the projected number of metadata records for each metadata file, given the assumptions from the assumption worksheet. The formulas for computing the number of records are shown below. Note that the variable names on the left match the metadata file names listed in the Subsystem/Metadata File column while variables on the right of the equals sign (“=”) represent values from the assumptions worksheet (unless otherwise noted).
Chapter 2 HPSS Planning MPS/Server Configs (mps) = Total Migration/Purge Servers NDCG Non-DCE Gateway Configuration = Total Non-DCE Gateways NFS/Mount Daemons (mountd) = Total NFS Mount Daemons NFS/Server Configs (nfs) = Total NFS Servers NS/Global Filesets (nsglobalfilesets) = Global Number of Filesets NS/Server Configs (nsconfig) = Total Name Servers PVL/Activities (pvlactivity) = Max Queued PVL Jobs * Avg Activities Per PVL Job PVL/Drives (pvldrive) = Total Disk Physical Volumes + Total Tape Drives PVL
Chapter 2 HPSS Planning BFS/Disk Bitfile Segments (bfdisksegment.#) = Avg. Storage Segments Per Disk VV * Total Disk Virtual Volumes BFS/Storage Segment Checkpoint (bfsssegchkpt.#) = Max BFS Storage Segment Checkpoints BFS/Tape Bitfile Segments (bftapesegment.#) = Avg Bitfile Segments Per Tape Bitfile * (Max Total Bitfiles + (Max Total Bitfiles * (Avg Copies Per Bitfile - 1) * Percent of Extra Copies Stored on Tape)) BFS/Unlink Records (bfssunlink.
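As a worked example of the BFS/Tape Bitfile Segments formula above, awk handles the fractional "percent of extra copies" term cleanly. All input values here are illustrative assumptions, not recommendations:

```shell
# BFS/Tape Bitfile Segments = Avg Bitfile Segments Per Tape Bitfile *
#   (Max Total Bitfiles +
#    (Max Total Bitfiles * (Avg Copies Per Bitfile - 1) *
#     Percent of Extra Copies Stored on Tape))
awk 'BEGIN {
    avg_seg_per_bitfile = 1      # Avg Bitfile Segments Per Tape Bitfile
    max_bitfiles = 2000000       # Max Total Bitfiles
    avg_copies = 2               # Avg Copies Per Bitfile
    pct_extra_on_tape = 0.5      # Percent of Extra Copies Stored on Tape
    recs = avg_seg_per_bitfile * (max_bitfiles + \
           max_bitfiles * (avg_copies - 1) * pct_extra_on_tape)
    printf "%d tape bitfile segment records\n", recs
}'
# prints "3000000 tape bitfile segment records"
```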
Chapter 2 HPSS Planning volume to use in allocating this disk space. An integer value from 1 to 10 can be entered. For example, if four AIX logical volumes are allocated to SFS, use values from 1 to 4 to assign each metadata file to the appropriate volume. The Space Allocation Per Encina Volume table, located on the same worksheet, will use this allocation to calculate total disk space requirements per SFS volume.
Chapter 2 • HPSS Planning Encina SFS Data Actual HPSS metadata is stored in SFS data volumes. SFS data volumes store the actual SFS record data as well as associated index and B-tree overhead information. SFS must have sufficient disk space allocated for its data volumes in order to store the projected amount of HPSS metadata.
If the file system used to store the MRA files becomes full, SFS will not run. MRA files are written to the directory /opt/encinalocal/encina/sfs/hpss/archives. Before Encina is configured as described in Section 5.5.2: Configure Encina SFS Server (page 243), a separate mirrored file system should be created (e.g. /sfsbackups/mra), and an appropriate link created from the directory mentioned above.
2.10.3.2.1 Disk Space Requirements for Core Files and Encina Trace Buffers /var/hpss/adm/core is the default directory where HPSS creates core and Encina trace files resulting from subsystem error conditions. The actual sizes of the files differ depending on the subsystems involved, but it is recommended that at least 512 MB be reserved for this purpose on the core server node and at least 256 MB on Mover nodes.
Chapter 2 HPSS Planning 2.10.3.2.8 Disk Space Requirements for Running NFS Daemon The HPSS NFS server memory and disk space requirements are largely determined by the configuration of the NFS request processing, attribute cache, and data cache. Data cache memory requirements can be estimated by multiplying the data cache buffer size by the number of memory data cache buffers.
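The data cache estimate above is a straight multiplication. With example configuration values (64 KB buffers and 2048 buffers are assumptions for illustration):

```shell
# NFS data cache memory estimate: data cache buffer size multiplied by
# the number of memory data cache buffers.
buffer_size=$(( 64 * 1024 ))   # 64 KB per data cache buffer
num_buffers=2048               # memory data cache buffers
echo "$(( buffer_size * num_buffers / (1024 * 1024) )) MB of data cache memory"
# prints "128 MB of data cache memory"
```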
Chapter 2 2.10.3.3 HPSS Planning System Memory and Paging Space Requirements Specific memory and disk space requirements for the nodes on which the HPSS servers will execute will be influenced by the configuration of the servers—both as to which nodes the servers will run on and the amount of concurrent access they are set up to handle.
2.11.1 DCE Due to several problems observed by HPSS, it is highly recommended that all DCE client implementations use the RPC_SUPPORTED_PROTSEQS=ncadg_ip_udp environment variable. Frequent timeouts may be observed if this is not done. Each HPSS system should be periodically checked for invalid/obsolete endpoints. Failure to comply may cause miscellaneous failures as well as significantly degraded HPSS performance.
for most HPSS installations. A value of 16000 or 32000 is more reasonable and can be changed by adding -b 16000 to the arguments in /etc/rc.encina. Be sure to update the /etc/security/limits file (and reboot) to allow the system to handle the added process size that both this and the increase in SFS threads will create. In the default section, the value for data should be increased to data = 524288 and the value for rss should be increased to rss = 262144.
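Using the data and rss values given above, the default stanza in /etc/security/limits would then contain (other attributes in the stanza are site-specific and omitted here):

```
default:
        data = 524288
        rss = 262144
```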
Chapter 2 HPSS Planning application, a number of the disks can be grouped together in a striped Storage Class to allow each disk to transfer data in parallel to achieve improved data transfer rates. If after forming the stripe group, the I/O or processor bandwidth of a single machine becomes the limiting factor, the devices can be distributed among a number of machines, alleviating the limitation of a single machine.
Chapter 2 HPSS Planning Parallel transfers move the data between the Mover and the end-client processes bypassing the HPSS FTPD. Customers should be educated to use the parallel functions rather than the nonparallel functions. NOTE: ASCII transfers are not supported by the parallel functions and the nonparallel functions will need to be specified for ASCII transfers. ASCII transfers are NOT typically required; but the end-customer should familiarize themselves with the particulars.
Chapter 2 HPSS Planning of sites. Usually, the only time the policy values need to be altered is when there is unusual HPSS setup. The Location Server itself will give warning when a problem is occurring by posting alarms to SSM. Obtain the information for the Location Server alarms listed in the HPSS Error Manual. To get a better view of an alarm in its context, view the Location Server's statistics screen.
2.11.12 Cross Cell Cross Cell Trust should be established with the minimal reasonable set of cooperating partners (the N-squared problem). Excessive numbers of Cross Cell connections may diminish security and may cause performance problems due to Wide Area Network delays. The communication paths between cooperating cells should be reliable. Cross Cell Trust must exist to take advantage of the HPSS Federated Name Space facilities. 2.11.
Chapter 2 HPSS Planning of time. This makes a ‘migrate early, migrate often’ strategy a feasible way to keep XFS disks clear of inactive data. The only inherent size limitation for XFS is a 2 TB maximum filesystem size, which is a limitation of the Linux kernel. The file size limit is 16-64TB (depending on page size) which is limited in practice by the maximum filesystem size. 2.11.
Chapter 2 HPSS Planning and protect the SFS metadata, it is important that each site review this list of “rules” and check to insure that their site’s backup is consistent with these policies. The main I/O from Encina SFS is in the transaction log, the actual SFS data files, and media log archiving. When deciding on the size and number of disks for metadata, keep in mind the following: 1. At a minimum, the transaction log and SFS data files should be on different disks. 2.
Chapter 2 HPSS Planning • The file system used to store TRB (i.e., data volume backup) files must be mirrored on at least two separate physical disks or be stored on a redundant RAID device. • Separate disks must be used to store the SFS data volumes versus the file system used to store TRB files. • Separate disks must be used to store the SFS log volume versus the SFS data volumes.
System Preparation Chapter 3 This section will cover the steps that must be taken to appropriately prepare your system for installation and configuration of HPSS and its infrastructure. 3.1 General • Each HPSS administrator should request a login id and password to the IBM HPSS web site at http://www4.clearlake.ibm.com/hpss/support.jsp. • Download a copy of the HPSS Installation and Management Guides for the version of HPSS being installed.
Chapter 3 System Preparation • Download and install the HPSS deployment tool package (http:// www4.clearlake.ibm.com/hpss/support/ToolsRepository/deploy.jsp) on each HPSS node in /opt/hpss/tools/deploy. Run and become familiar with the lsnode tool, which will be helpful in other steps. To run lsnode and save the output to /var/hpss/stats/lsnode.out: % mkdir -p /var/hpss/stats % cd /opt/hpss/tools/deploy/bin % lsnode > /var/hpss/stats/lsnode.out 3.2 Setup Filesystems 3.2.
Chapter 3 3.2.2 System Preparation Encina Configure /opt/encinalocal and /opt/encinamirror such that the contents are either mirrored or each of these two directories is stored on separate disks. Make sure these file systems are automatically mounted at system reboot. 3.2.
Chapter 3 3.3.
Chapter 3 System Preparation mkhpss then prompts for whether mirroring is desired for this logical volume. If so, it will prompt for the name of the disk to use as the mirror. 3.4 Setup for HPSS Metadata Backup • Install and configure the automated SFS backup toolset. Keep in mind the following: ◆ Media archiving must always be enabled while HPSS is in production mode. If it is ever temporarily disabled (e.g.
Chapter 3 System Preparation % tapeutil -f inventory To test tape mounts: % tapeutil -f move To test tape dismounts: % tapeutil -f unload % tapeutil -f move Run this before HPSS has started since only one process can have an open smc device file descriptor.
% mtlib -l <lmcpDevice> -qL

where <lmcpDevice> is usually /dev/lmcp0.

To test the ability to use the lmcp daemon to mount a tape:

% mtlib -l /dev/lmcp0 -m -V <volser> -x <deviceNumber>

To test the ability to dismount the tape:

% mtlib -l /dev/lmcp0 -d -V <volser> -x <deviceNumber>

To automatically start the lmcp daemon after a system reboot, add /etc/methods/startatl to the /etc/inittab file. Refer to Section 6.8.13.3: IBM 3494/3495 PVR Information on page 388 for more information.
• If using an AML PVR, configure the Insert/Eject ports using the configuration files /var/hpss/etc/AML_EjectPort.conf and /var/hpss/etc/AML_InsertPort.conf. Refer to Section 6.8.13.5: ADIC Automatic Media Library Storage Systems Information on page 392 for more information.

3.5.5 Tape Drive Verification

Verify that the correct number and type of tape devices are available on each Tape Mover node.

3.5.5.1 AIX
% iocheck -r -t 20 -b 1mb /dev/rmt1.1
% iocheck -r -t 20 -b 1mb /dev/rmt1.1

WARNING: The contents of this tape will be overwritten, so be sure to mount the correct tape cartridge.

To unload a tape:

% tctl -f <tapeDevice> rewoffl

Repeat the above steps for each tape drive.

3.5.5.2 Solaris

On each Tape Mover node, verify that each tape drive has variable-length block size set.
WARNING: The contents of this tape will be overwritten, so be sure to mount the correct tape cartridge.

3.5.5.3 IRIX

On each Tape Mover node, verify that each tape drive has variable-length block size set.
To measure uncompressed write performance (see warning below) on st1 (note that specifying nst1 causes the tape not to rewind):

% iocheck -w -t 20 -b 1mb /dev/nst1

To measure the maximum-compressed write performance on st1 (and then rewind the tape):

% iocheck -w -t 20 -f 0 -b 1mb /dev/nst1

To measure read performance on drive st1 using the previously-written uncompressed and compressed files:

% iocheck -r -t 20 -b 1mb /dev/nst1
% iocheck -r -t 20 -b 1mb /dev/nst1

To e
◆ There are two loops (a and b) per adapter and two ports per loop (a1, a2, b1, b2).
◆ The physical order of the disks is shown from the perspective of each port.
◆ A disk is accessed according to its closest port (e.g., either a1 or a2, b1 or b2).
◆ When planning to configure striped SSA disks in HPSS, it is important to select disks for each striped virtual volume that span ports, loops, and/or adapters.
where <logicalVolume> is a raw logical volume that is sized to provide at least 20 seconds of I/O throughput.

To measure write performance on a single disk (see warning below):

% iocheck -w -t 20 -b 1mb -o 1mb /dev/r<logicalVolume>

where <logicalVolume> is a raw logical volume that is sized to provide at least 20 seconds of I/O throughput.

WARNING: The contents of this logical volume will be overwritten, so be sure to use the correct logical volume name.
3.6.3 Solaris & IRIX

For Solaris and IRIX platforms, specific commands and syntax are not listed. Perform the following steps using the appropriate commands for the OS used:

• Verify that the correct number and type of disk devices are available on each SFS and Disk Mover node.
• Create all necessary raw disk volumes to be used by the HPSS Disk Mover(s).
To test whether an IP address is reachable (a non-zero exit status indicates the ping was not successful):

% ping -c 1 <IP address>

• Determine which networks will be used for control vs. data paths. DCE should not use all available networks on a multi-homed system unless each of those networks is guaranteed to have connectivity to other DCE services. If a particular network is removed (physically, or its routing is changed), that connection remains in DCE's RPC mappings.
gather performance data using a variety of settings to determine the optimal combinations. The primary values that govern performance include the send/receive buffer sizes, the size of reads/writes, and the rfc1323 value for high-performance networks (HIPPI, Gigabit Ethernet). Create a table showing these values.
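One way to keep the table described above is a plain whitespace-separated file, with a small helper to pick the best-performing combination. This is a sketch under assumed column names, not an HPSS tool; the measurements shown are invented placeholders, and real numbers would come from repeated ttcp runs:

```shell
#!/bin/sh
# Sketch: tabulate network-tuning measurements and report the best
# combination. Columns assumed: sndbuf rcvbuf rfc1323 MB/s.
best_combo() {
    sort -k4 -n "$1" | tail -1     # highest-throughput line
}

# Invented sample measurements:
cat > /tmp/net_tuning.txt <<'EOF'
65536 65536 0 42.1
262144 262144 1 88.7
1048576 1048576 1 96.4
EOF

best_combo /tmp/net_tuning.txt
```

Sorting numerically on the throughput column keeps the helper independent of how many buffer-size combinations are measured.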
Note that the ttcp tool is included in the deployment package and is not related to the Unix ToolTalk service. HPSS makes extensive use of a system's networking capabilities. Therefore, the settings of the tunable networking parameters on the systems where the various HPSS servers and clients will run can have a significant impact on overall system performance. Under AIX, the no utility is provided to display and modify a number of networking parameters.
settings for the size of the send and receive pool buffers, which can have an effect on throughput. It is recommended that the available interface-specific documentation be referenced for more detailed information. The anticipated load should also be taken into account when determining the appropriate network option settings. Options that provide optimal performance for one or a small number of transfers may not be the best settings for the final multi-user workload.
• Blank lines are ignored.

NOTE: HPSS and network tuning are highly dependent on the application environment. The values specified herein are NOT expected to be applicable to any installation!

3.7.1.1 PFTP Client Stanza

The Parallel FTP Client configuration options are in two distinct stanzas of the HPSS.conf file (Section 3.7.1.1: PFTP Client Stanza on page 163, and Section 3.7.1.2: PFTP Client Interfaces Stanza on page 165).
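For orientation, a PFTP Client stanza might look like the following sketch. The stanza specifier and option keywords should be checked against Table 3-2; all values here are illustrative only, per the NOTE above:

```
; Options read by the HPSS PFTP Client
PFTP Client = {
   ; Restrict the TCP ports used for data connections
   ncacn_ip_tcp[10100-12100]
   Transfer Buffer Size = 16MB
   Socket Buffer Size = 16MB
}
```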
Table 3-2 PFTP Client Stanza Fields

SubStanza: Transfer Buffer Size =
E.g. Transfer Buffer Size = 16MB
Optional SubStanza specifying the PFTP buffer sizes. May be specified as a decimal number or in "xMB" style notation.

SubStanza: Socket Buffer Size =
E.g. Socket Buffer Size = 16MB
Optional SubStanza specifying the Pdata socket buffer sizes. May be specified as a decimal number or in "xMB" style notation.

SubStanza: MAX Ptran Size =
E.g.
This syntax is identical to DCE's RPC_RESTRICTED_PORTS environment variable. Only the ncacn_ip_tcp[start_port-end_port] (TCP) component is used, so the ncadg_ip_udp (UDP) component may be omitted. Additional options are available for controlling the size of the PFTP transfer buffers (Transfer Buffer Size) and the buffer size for the sockets in the PDATA_ONLY protocol (Socket Buffer Size). The value may be specified as a decimal number (e.g., 1048576) or in the format xMB.
Chapter 3 System Preparation particularly useful if both low speed and high speed interfaces are available to the client host and the PFTP data transfers should use the high speed interfaces. Table 3-3 PFTP Client Interfaces Stanza Fields Configuration Type Abbreviated Description Stanza (Compound) PFTP Client Interfaces = { Reserved Stanza specifier. Must be terminated with a matching “}” SubStanza (Compound) Section (Compound) SubSection = { E.g. my_host my_host.my.
• Destination Host (FTPD Host) may contain one or more hostnames separated by white space (subject to the 128-character limit).
• The Interface Specification must be an IP address in dot notation.
• Interfaces must be able to connect to the destination. Communication failures that are not easily diagnosed will occur if the interface specification is invalid.

PFTP Client Interfaces Stanza Example:

PFTP Client Interfaces = {
; PFTP Client Host Name(s)
water.clearlake.ibm.
Chapter 3 System Preparation Default = { ; Client Host Name water water.clearlake.ibm.com = { 134.253.14.227 } } } 3.7.1.3 Multinode Table Stanza The HPSS PFTP Client normally forks children to provide multiple network paths between the PFTP Client and the Mover(s). In some instances, it may be preferable to have these processes (pseudo children) running on independent nodes.
Chapter 3 System Preparation Multinode Table Stanza Rules: • SubStanza hostnames (local hosts) may contain one or more hostnames separated by white spaces (subject to the 128 character limit.) • Section hostnames (remote hosts) [and/or values] may be specified as either string-based hostnames or Dot Notation IP Addresses. Only one entry per line. Multinode Table Example: ; Options read by the Multinode Daemon Multinode Table = { ; Hostname of the Client water water.clearlake.ibm.
Chapter 3 System Preparation Table 3-5 Realms to DCE Cell Mappings Stanza Fields SubStanza = E.g. Kerberos.Realm = /…/my_dce_cell Contains Kerberos Realms and their associated DCE Cell Names. The Realms to DCE Cell Mappings = { … } stanza contains one or more substanzas providing Kerberos Realm to DCE Cell Mappings. The substanza(s) are used to specify the appropriate mappings. Realms to DCE Cell Mappings specific rules: For security reasons, only the HPSS. |
Chapter 3 System Preparation entry to the specified destination address. A “Default” destination may be specified for all sources/destinations not explicitly specified in the HPSS.conf file. Table 3-6 Network Options Stanza Fields Configuration Type Description Stanza (Compound) Network Options = { Reserved Stanza specifier. Must be terminated with a matching “}” SubStanza Default Write Size = E.g.
Chapter 3 System Preparation Table 3-6 Network Options Stanza Fields SubSection WriteSize = E.g. WriteSize = 1MB Size to be used for each individual write request to the network May be specified as a decimal number or “xMB” style notation SubSection TcpNoDelay = 0 | 1 E.g. TcpNoDelay = 1 Indicates whether the TCP Delay option should be disabled (0) or enabled (any other value) SendSpace & RecvSpace Controls the size of the receive and send buffers for TCP/IP sockets.
Chapter 3 System Preparation • The Source Interface Name SubStanza may specify one or more names [ subject to the 128 character limit (including the “= {“.) ] NOTE: Do not include the quotes when specifying Default. • Destination IP Address must be specified in Decimal Dot Notation. • Multiple Sections may be included in any SubStanza. A “Default” Destination Interface Name Section may be specified. NOTE: Do not include the quotes when specifying Default.
Chapter 3 System Preparation Default = { # Destination IP Address in Dot Notation 200.201.202.203 = { NetMask = 255.255.255.0 RFC1323 = 1 SendSpace = 1048576 RecvSpace = 1MB WriteSize = 2MB TCPNoDelay = 0 } # Default Destination – options to be used for destinations # NOT explicitly specified. Default = { NetMask = 255.255.255.0 RFC1323 = 1 SendSpace = 256KB RecvSpace = 128KB WriteSize = 512KB TCPNoDelay = 0 } } } 3.7.1.
Chapter 3 System Preparation 3.8 Install and Configure Java and hpssadm 3.8.1 Introduction The hpssadm utility and the modifications to the SSM Data Server necessary to support hpssadm require the installation and configuration of Java 1.3.0 and the Java Secure Sockets Extension. The default prebuilt Data Server executable and shared library require Java. If the hpssadm utility is not used, these can be replaced with the no-Java prebuilt versions of these files, which are also shipped with HPSS.
ii. Check the password for the trusted store (cacerts):

% $JAVA_HOME/bin/keytool -keystore cacerts -list

Type "changeit" when prompted for the password.

iii. Change the password for the trusted store:

% $JAVA_HOME/bin/keytool -keystore cacerts -storepasswd \
-new <newPassword>

Type "changeit" when prompted for the password.

iv. Verify the new password for the trusted store:

% $JAVA_HOME/bin/keytool -keystore cacerts -list

Type "<newPassword>" when prompted for the password.

C.
Chapter 3 System Preparation -dname "cn=HPSS Data Server" -alias hpss_ssmds \ -keystore keystore.ds -validity 365 ii. Display the fingerprint for the certificate: % $JAVA_HOME/bin/keytool -keystore keystore.ds -list -v iii. Export the certificate to the temporary file ds.cer: % $JAVA_HOME/bin/keytool -keystore keystore.ds -export \ -alias hpss_ssmds -file ds.cer C. Set up SSMDS for normal or low security mode: i.
Chapter 3 System Preparation % cp /opt/hpss/config/templates/hpssadm.config.template \ hpssadm.config % chmod 640 hpssadm.config Add any additional authorized users to the hpssadm.config file. To add user joe: % vi /var/hpss/ssm/hpssadm.config Add line "HPSS_SSMDS_AUTH_USER=joe" at the end of file. Save and exit. 3. On each machine where hpssadm will be executed: A. Copy SSMDS certificate (/var/hpss/ssm/ds.cer) from SSMDS machine to /var/hpss/ssm on hpssadm machine.
Chapter 3 System Preparation D. Create hpssadm user keytab files % mkdir -p /var/hpss/ssm/keytabs % cd /var/hpss/ssm/keytabs For each hpssadm user on this machine, create DCE keytab file. This example creates a keytab file for user joe: % dce_login cell_admin % rgy_edit rgy_edit> ktadd -f keytab.joe -p joe (Need to enter joe's password twice) rgy_edit> ktadd -f keytab.
To use the hpssadm utility and the Java version of the Data Server, continue following the instructions for the remainder of this section.

3.8.1.3 Prerequisite Software

The required software is:

1. One of the following:
◆ Java 1.3.0 JRE (Java Runtime Environment)
◆ Java 1.3.0 SDK (Software Development Kit)
2. Java 1.0.2 JSSE (Java Secure Sockets Extensions)

This software is available for download for AIX, Solaris, and Windows at no cost. Section 3.8.
the host from which the hpssadm utility is executed and are transmitted to the Data Server, which authenticates them against the DCE registry. This file is discussed in Section 3.8.6: Setting up the hpssadm Keytab File on page 189. Section 3.8.9: Background Information on page 191 provides a high-level discussion of DCE keytab files, the Java Security Policy, X.
Chapter 3 System Preparation Follow the instructions with the downloads for installation. Please observe these notes about the JSSE installation: It is recommended but not required that you download both the JSSE package and the documentation. The JSSE zip file may be unpacked anywhere desired. It is recommended that it be unpacked directly under the ${JAVA_ROOT} directory to make it easier to find.
Chapter 3 System Preparation You will be prompted for the password, WHICH WILL BE ECHOED AS YOU TYPE IT, so make sure you are working from a location where the password cannot be compromised. Type in the default password ("changeit"). The utility should list the certificates in the file. 4. Change the password with the -storepasswd option of the keytool command. In this example, the new password is "XXXXXX".
Chapter 3 System Preparation There should already be at least one security provider listed in this file, probably in a format something like: security.provider.1=sun.security.provider.Sun If there is more than one provider listed, they should be numbered in increasing numerical order: security.provider.2=XXX.security.provider.foox security.provider.3=YYY.security.provider.fooy security.provider.4=ZZZ.security.provider.fooz etc.
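Renumbering the provider list by hand is easy to get wrong; the next free slot can be computed instead. A sketch (the helper name is invented; the provider class name is the one documented for JSSE 1.0.x and should be verified against the JSSE documentation):

```shell
#!/bin/sh
# Append a security provider to a java.security file in the next free
# security.provider.N slot, preserving the increasing numerical order.
add_provider() {
    f="$1"; class="$2"
    n=$(grep -c '^security\.provider\.' "$f")    # count existing entries
    echo "security.provider.$((n + 1))=${class}" >> "$f"
}
```

For example, `add_provider $JAVA_HOME/jre/lib/security/java.security com.sun.net.ssl.internal.ssl.Provider` would append the JSSE provider after the existing entries.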
This command will generate a public key and an associated private key for the Data Server with alias "hpss_ssmds". It will also generate a self-signed certificate for hpss_ssmds which includes its public key. The key will be valid for 365 days. The keys and certificate will be stored in the file "keystore.ds". This is the file the Data Server will read to obtain its key and certificates when it first begins execution.
Chapter 3 System Preparation % cp cacerts cacerts.ORIG % $JAVA_HOME/bin/keytool -keystore cacerts -import \ -file /tmp/ds.cer -alias hpss_ssmds The keytool utility will print out the information about the certificate, including the fingerprints, and will ask whether the certificate should be trusted. Compare the owner, issuer, and fingerprints carefully with those obtained from the original certificate in step 2. If they match, answer "yes".
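Comparing long fingerprints by eye is error-prone. A small sketch (an invented helper, not part of keytool) that normalizes case and separators before comparing the two values:

```shell
#!/bin/sh
# Compare two certificate fingerprints as displayed by keytool, ignoring
# case and colon separators. Succeeds only when the digests are identical.
same_fingerprint() {
    a=$(printf '%s' "$1" | tr -d ':' | tr 'a-f' 'A-F')
    b=$(printf '%s' "$2" | tr -d ':' | tr 'a-f' 'A-F')
    [ "$a" = "$b" ]
}
```

Only answer "yes" to the trust prompt when a check like this succeeds for every fingerprint shown.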
Chapter 3 System Preparation Security Manager, or if none of these policy files exists, the default policy is the original Java sandbox policy, which is rather liberal. Any system access is further limited by whatever protections the local operating system supplies. So, for example, if the policy file allows access to file "foo", but the file system permissions do not permit access to "foo" by the user executing hpssadm, then the user cannot access the file.
Chapter 3 System Preparation 2. The Data Server requires read FilePermission on its user authorization file, whose default location is /var/hpss/ssm/hpssadm.config. The hpssadm utility requires read FilePermission for the user's keyfile file, the default location for which is /var/hpss/ssm/keytab grant { permission java.io.FilePermission "/var/hpss/-", "read"; }; The dash ("-") in the pathname in this example signifies that the permission is to be granted to everything in the /var/hpss tree, recursively.
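Where the recursive grant above is broader than a site wants, the permissions can be narrowed to the individual files. This is a sketch with illustrative paths; the exact set of files each program opens must be confirmed before tightening the policy:

```
grant {
    // Data Server: user authorization file
    permission java.io.FilePermission "/var/hpss/ssm/hpssadm.config", "read";
    // hpssadm: per-user keytab files
    permission java.io.FilePermission "/var/hpss/ssm/keytabs/-", "read";
};
```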
Chapter 3 System Preparation The hpssadm.config file is a flat ASCII file which lists the users who are authorized to use the hpssadm utility. The template for this file is config/templates/hpssadm.config.template. The default name for this file is /var/hpss/ssm/hpssadm.config This pathname can be changed in the hpss_env file by setting the HPSS_SSMDS_JAVA_CONFIG variable as desired.
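Since hpssadm.config is a flat ASCII file, authorized users can be added with a small idempotent helper such as the following sketch (the function name is invented; the variable name matches the file format described above):

```shell
#!/bin/sh
# Append an HPSS_SSMDS_AUTH_USER entry for a user, but only if that user
# is not already listed in the configuration file.
add_auth_user() {
    user="$1"; cfg="$2"
    if grep -q "^HPSS_SSMDS_AUTH_USER=${user}\$" "$cfg" 2>/dev/null; then
        echo "user ${user} already authorized"
    else
        echo "HPSS_SSMDS_AUTH_USER=${user}" >> "$cfg"
    fi
}
```

For example, `add_auth_user joe /var/hpss/ssm/hpssadm.config` adds user joe exactly once, no matter how many times it is run.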
Chapter 3 System Preparation The keytab file must be stored on each host from which the user will execute the hpssadm utility, and must be specified on the hpssadm command line with the -k option: hpssadm -k keytab_file_path_name The keytab file should be owned by the user and protected so that it is readable only by the user. The keytab is interpreted on the host on which the Data Server runs, not that on which the hpssadm client utility runs.
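The protection recommendation above can be checked mechanically. A sketch (invented helper) that accepts a keytab only when group and other have no access:

```shell
#!/bin/sh
# Verify a keytab file is readable only by its owner (e.g. mode 600),
# by inspecting the permission column of ls -l.
keytab_ok() {
    perms=$(ls -l "$1" 2>/dev/null | awk '{print $1}')
    case "$perms" in
        -r??------*) return 0 ;;   # owner-only access
        *)           return 1 ;;
    esac
}
```

A check like this could be run against each keytab_file_path_name before passing it to hpssadm -k.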
Chapter 3 3.8.8 System Preparation Updating Expired SSL Certificates When the Data Server certificate expires, the Data Server itself will be able to start up and execute, but any hpssadm client attempting to connect to it will fail with the error "untrusted server cert chain". A new certificate must be generated for the Data Server and disseminated to all the client machines. To do this, follow these steps: 1. Check the keystore and the cacerts file to be sure the certificate has expired.
Chapter 3 System Preparation call returns silently if it determines the code is allowed the requested access, and otherwise throws an exception, which halts the program. Applet code runs under a security manager (usually) because most browsers implement one. The security manager won't let the applet do anything not allowed by the policy file(s).
Chapter 3 System Preparation you can pay to issue X.509 certificates to you. Certificates can also be created by individuals and self-signed by the party owning the certificate. A program uses a file of these certificates as its "trusted store", the set of certificates of parties it will trust.
Chapter 3 System Preparation hpssadm program, such as new alarms or changes in HPSS server statuses. This session does not pass any private data such as passwords, does not use SSL, and is not encrypted. For security reasons, an application can bind or unbind only to an RMI registry running on the same host. This prevents a client from removing or overwriting any of the entries in a server's remote registry. A lookup, however, can be done from any host. 3.8.9.
Chapter 3 System Preparation 1. Download the patch, xfs-2.4.18-1, from the HPSS website (http://www4.clearlake.ibm.com/hpss/support/patches/xfs-2.4.18-1.tar). 2. Untar the downloaded file. % tar -xvf xfs-2.4.18-1.tar 3. Copy xfs-2.4.18-1 (the patch file) to /usr/src. % cp xfs-2.4.18-1 /usr/src 4. Change directory to /usr/src/linux-2.4.18 (or the root of your 2.4.18 kernel tree). % cd linux-2.4.18 5. Apply the patch % patch -p1 < ../xfs-2.4.18-1 6.
Chapter 3 System Preparation 1. Download the patch, kaio-2.4.18-1.tar, from the HPSS website (http://www4.clearlake.ibm.com/hpss/support/patches/kaio-2.4.181.tar). 2. Untar the downloaded file. % tar -xvf kaio-2.4.18-1.tar 3. Copy kaio-2.4.18-1 (the patch file) to /usr/src. % cp kaio-2.4.18-1 /usr/src 4. Change directory to /usr/src/linux-2.4.18 (or the root of your 2.4.18 kernel tree). % cd linux-2.4.18 5. Apply the patch % patch -p1 < ../kaio-2.4.18-1 6.
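Steps 4-5 of both patch procedures can be wrapped so the patch is applied only when a dry run succeeds; GNU patch's --dry-run option makes the check non-destructive. A sketch with illustrative paths, not a supported HPSS script:

```shell
#!/bin/sh
# Apply a -p1 patch to a source tree only if a dry run succeeds first.
apply_patch() {
    tree="$1"; patchfile="$2"
    ( cd "$tree" || exit 1
      patch --dry-run -p1 < "$patchfile" >/dev/null || exit 1
      patch -p1 < "$patchfile" )
}
```

For example, `apply_patch /usr/src/linux-2.4.18 /usr/src/kaio-2.4.18-1` would refuse to touch the tree if the patch does not apply cleanly.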
Chapter 3 System Preparation • A detailed description of the anticipated production usage of HPSS, including distribution of file sizes, data acquisition and access methods, key projects, user expectations, admin/ operations expectations, availability requirements, and performance and loading requirements. • Detailed HPSS configuration information (Note: The lshpss.ksh script is from the deployment tools package).
❖ Storage Class Sharing - Are any storage classes shared among more than one hierarchy? If so, is that intentional? If so, discuss the ramifications of sharing vs. not sharing (it usually boils down to avoiding "pockets" of unused storage – one SC full while another is nearly empty – vs. not being able to separate one set of users or accesses from another).
❖ Virtual Volume Block Size and Stripe Width - See the discussion for disk Virtual Volume Block Size & Stripe Width.
❖ Thresholds - Again, make sure that these seem reasonable. Do they match whatever repack or tape migration scheme (if any) the site uses?
❖ Max VVs to Write - Does this value make sense given the number of drives available and the migration policy?
❖ Migration Policy & Purge Policy Ids - Should be zero unless the storage classes really do tape migration.
Chapter 3 System Preparation ❖ Control Hostname & Data Hostname - The control hostname should reference the network interface over which the requests are sent from the SS and PVL to the Mover. It is typically an ethernet address (although this is certainly not required), since that is a reasonably low-latency network and this interface does not require very much bandwidth.
Chapter 3 System Preparation ❖ Copy Count & Skip Factor - These control multiple copies. Typically these are set for 1 or 2 copies on tape (with a disk level at the top of the hierarchy). If not one of these, what is the rationale? ❖ Request Count - Verify that the request counts (remember that more than one disk SC can use the same migration policy) do not appear to make unreasonable demands on the available tape drives – e.g.
Chapter 3 System Preparation ❖ Deferred Dismount - Is deferred dismount disabled? If so, what is the rationale? In a future release, lshpss will print the deferred dismount time – is it reasonable? ❖ Device Specific A & B - Verify that these are correct for the specific PVR type (of course, it probably wouldn’t run if they weren’t, but…). ➢ Log Daemon ❖ Logfile Max Size - Many sites are now using a larger value (e.g.
application that creates an internet domain socket. If there is a need for the Mover to use larger socket buffers, a better solution is typically to use the HPSS network options configuration file, which provides better granularity of control and only affects HPSS subsystems.

◆ RPC_UNSUPPORTED_NETIFS - Verify that only the desired network interfaces are being utilized by DCE.
◆ Tape Devices - Verify that the tape devices appear to be correct (e.g.
Chapter 3 System Preparation • ◆ Disaster recovery requirements ◆ Disaster recovery test plan Pre-Production Test Plan, including: ◆ Customer's requirements (users, admin, management, and operations staff) for the production storage system considering all involved hardware and software, which may include detailed requirements in one or more of the following areas depending on customer requirements: ➢ Functionality ➢ Single-transfer performance ➢ Aggregate performance ➢ Maximum number of concurrent r
Chapter 4: HPSS Installation

4.1 Overview

This chapter provides instructions and supporting information for installing the HPSS software from the HPSS distribution media. To install this system, we recommend that the administrator be familiar with UNIX commands and configuration, be familiar with a UNIX text editor, and have some experience with the C language and shell scripts.

Note: For information on upgrading from a previous version of HPSS, please see Chapter 14: Upgrading to HPSS Release 4.
Chapter 4 HPSS Installation • Non-DCE Package - Contains HPSS Non-DCE Client API include files and libraries and the HPSS Non-DCE Mover binaries. • Source Code Package - Contains the HPSS Source Code The HPSS software package names and sizes for the supported platforms are as follows: Table 4-1 Installation Package Sizes and Disk Requirements Platform HPSS Package Name Package Size /opt/hpss Space Requirements Package Description AIX hpss_runtime-4.5.0.0.lpp 410 MB hpss.
4. Verify the HPSS installed files (Section 4.5.1).

4.2 Create Owner Account for HPSS Files

The HPSS software must be installed by a root user. In addition, a UNIX User ID of hpss and Group ID of hpss is required for the HPSS installation process to assign the appropriate ownership to the HPSS files. If the hpss User ID does not exist, the installation process will fail.
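Because the installation fails outright when the owner account is missing, it is worth checking for it up front. A sketch (invented helper; the account name is a parameter so the check itself can be exercised anywhere):

```shell
#!/bin/sh
# Fail early if a required Unix account does not exist.
require_user() {
    if id "$1" >/dev/null 2>&1; then
        echo "user $1 exists"
    else
        echo "user $1 missing - create it before installing" >&2
        return 1
    fi
}
```

Running `require_user hpss` (and similarly checking for the hpss group) before starting the installation avoids a late failure.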
4.4.1.1 Install an HPSS Package Using the installp Command

Log on to the node as root user and issue the installp command as follows to install an HPSS package (e.g., the HPSS core package):

% installp -acgNQqwX -d \
<directory>/hpss_runtime.4.5.0.0.lppimage \
-f hpss.core 2>&1

4.4.1.2 Install an HPSS Package Using the AIX SMIT Utility

Perform the following steps to install an HPSS package (e.g., the HPSS core package) from the HPSS distribution media:

1.
Chapter 4 4.4.3 HPSS Installation IRIX Installation 4.4.3.1 Install the HPSSndapi Package The HPSSndapi package is in a compressed tar file format. Perform the following steps to install this package: 1. Log on to the node as root user 2. Place the compressed tar file in a working directory (e.g., /var/spool/pkg/HPSSndapi-4.5.0.0.tar.Z) 3. Untar the package by issuing the command as follows: % zcat HPSSndapi-4.5.0.0.tar.Z | tar -xvf - 4.
Chapter 4 HPSS Installation /opt/hpss/include/ /opt/hpss/msg/ /opt/hpss/tools/ /opt/hpss/man/ /opt/hpss/config/ /opt/hpss/stk/ /opt/hpss/sammi/hpss_ssm/ /opt/hpss/src/ 2.
1. Construct the HPSS source tree
2. Compile the HPSS binaries

4.5.2.1 Construct the HPSS Source Tree

4.5.2.1.1 Construct the HPSS Base Source Tree

The HPSS base source tree contains the source code for all the HPSS components except the STK PVR proprietary code. To construct the HPSS base source tree, the following steps must be performed:

1. Log on as root.
2. Install the HPSS source code package (code is installed in the /opt/hpss directory).
3.
1. Log on as root.
2. Change directory to the HPSS base source tree (the default location is /opt/hpss).
3. Review the Makefile.macros file.
4. Ensure that the target directory tree (where the source tree will be constructed) is empty.
5. Issue the following command:

% make BUILD_ROOT=<target directory> build-nodce

4.5.2.2
Chapter 4 4.5.3 HPSS Installation 1. Log on as root. 2. Place the constructed HPSS source tree (i.e., HPSS base source tree, HPSS HDM source tree, or HPSS Non-DCE source tree) in the desired build directory (the default location is /opt/hpss). 3. Review the Makefile.macros file which is in the root of the source tree. This file defines the "make" environments and options for the HPSS software compilation.
Chapter 4 214 HPSS Installation September 2002 HPSS Installation Guide Release 4.
Chapter 5: HPSS Infrastructure Configuration

5.1 Overview

This chapter provides instructions and supporting information for the infrastructure configuration of HPSS. Before configuring the HPSS infrastructure, we recommend that the administrator be familiar with UNIX commands and configuration, be familiar with a UNIX text editor, and have some experience with the C language, shell scripts, DCE, and Encina.
Chapter 5 HPSS Infrastructure Configuration • WebSphere (Encina TxSeries) • Sammi • Java • DCE/DFS server/client machine(s) • HPSS software Refer to the DCE Version 3.1 Administration Guide and the Encina Server Administration: System Administrator's Guide and Reference for more information on installing and configuring DCE and Encina. Refer to Section 3.8: Install and Configure Java and hpssadm on page 175 for more information on installing Java 1.3. 5.
Chapter 5 HPSS Infrastructure Configuration #=============================================================================== # # Name: hpss_env - Site defined HPSS global variables/environment shell script # # Synopsis: hpss_env # # Arguments: none # # Outputs: # - HPSS global variable definitions and environment parameters # # Description: This script defines the HPSS global variables and environment # parameters which override the values specified in the # ./include/hpss_env_defs.h file.
Chapter 5 HPSS Infrastructure Configuration # 3.27 11/18/98 4v1 default keytab variable changes # 3.28 11/29/98 Move most of the default var’s to # ./include/hpss_env_defs.h # 3.29 11/30/98 Restore HPSSLOG variable # 3.30 07/01/99 Added conditions for SunOS # 3.31 03/03/00 Added variables for SSM command line # 3.32 03/17/00 Corrections to variables for SSM command line # 3.33 03/24/00 Comments for SSM command line, LD_LIBRARY_PATH # 3.34 06/30/00 Move SCCS line to the second line (1923) # 3.
Chapter 5 HPSS Infrastructure Configuration # D i r e c t o r i e s # # HPSS_PATH Pathname where HPSS top level is # # S A M M I D i r e c t o r i e s # # HPSS_PATH_SAMMI_INSTALLPathname where SAMMI is installed # # S y s t e m / U s e r I n f o r m a t i o n # # HPSS_SYSTEM System platform name # HPSS_SYSTEM_VERSION System version # HPSS_HOST Host name # HPSS_HOST_FULL_NAME Fully qualified Host domain name # # D C E V a r i a b l e s # # HPSS_CELL_ADMIN Principal name for administrating hpss in a cell #
export HPSS_SYSTEM_VERSION=$(uname -r)
export HPSS_HOST_FULL_NAME=$(hostname)
else
export HPSS_SYSTEM_VERSION=$(oslevel)
export HPSS_HOST_FULL_NAME=`host $HPSS_HOST | cut -f1 -d' '`
fi
#
# D C E   V a r i a b l e s . . .
#
export HPSS_CELL_ADMIN="cell_admin"
export HPSS_CDS_PREFIX=/.:/hpss
export HPSS_CDS_HOST=$HPSS_HOST
#
# E n c i n a   V a r i a b l e s . . .
#
export HPSS_SFS_ADMIN="encina_admin"
export HPSS_SFS_SERVER=/.
Chapter 5 5.3.2 HPSS Infrastructure Configuration hpss_env_defs.h The following is a verbatim listing of the hpss_env_defs.h file: /* static char SccsId[] = “ @(#)71 1.44 include/hpss_env_defs.h, gen, 4.5 4/29/02 12:28:46”; */ /*============================================================================== * * Include Name: hpss_env_defs.h * * Description: Contains default definitions for HPSS environment variables.
Chapter 5 HPSS Infrastructure Configuration * 1.33 vyw 09/14/00 SSMDS Java 1.3 and security (1990); * also changed HPSS_ROOT to /opt/hpss * 1.34 guidryg 09/15/00 Use /opt for install. * 1.35 ctnguyen10/09/00 Add ENCINA_LOCAL and ENCINA_MIRROR. * 1.36 ctnguyen11/13/00 Add HPSS_PATH_SSM. * 1.37 shreyas 11/27/00 change HPSS_NDCG_KRB5_SERVICENAME * 1.39 JAD 01/25/01 1971 - Additions to support RAIT. * 1.40 HDJ/WHR 02/16/01 2304: Check in NFS V3 code * 1.41 shreyas 03/02/01 2313 - LTO mods * 1.
Chapter 5 HPSS Infrastructure Configuration *************************************************************************** */ typedef struct env { char *name; char *def; char *value; } env_t; static env_t hpss_env_defs[] = { /* *************************************************************************** * HPSS_ROOT - Root pathname for HPSS Unix top level * HPSS_HOST - Machine host name * HPSS_KEYTAB_FILE_SERVER- Fully qualified DCE HPSS server keytab * HPSS_KEYTAB_FILE_CLIENT - Fully qualified DCE HPSS clie
 * HPSS_PRINCIPAL            - DCE Principal name for HSEC Server
 * HPSS_PRINCIPAL_BFS        - DCE Principal name for Bitfile Server
 * HPSS_PRINCIPAL_CLIENT_API - DCE Principal name for Client API
 * HPSS_PRINCIPAL_DMG        - DCE Principal name for DMAP Gateway
 * HPSS_PRINCIPAL_FTPD       - DCE Principal name for FTP Daemon
 * HPSS_PRINCIPAL_GK         - DCE Principal name for Gatekeeper Server
 * HPSS_PRINCIPAL_HPSSD      - DCE Principal name for Startup Daemon
 * HPSS_PRINCIPAL_LOG        - DCE Principal name for Log
 * HPSS_PRINCIPAL_MPS_UID    - DCE Principal UID for Migration/Purge Server
 * HPSS_PRINCIPAL_NDCG_UID   - DCE Principal UID for Non-DCE Gateway
 * HPSS_PRINCIPAL_MVR_UID    - DCE Principal UID for Mover
 * HPSS_PRINCIPAL_NFSD_UID   - DCE Principal UID for NFS Daemon
 * HPSS_PRINCIPAL_NS_UID     - DCE Principal UID for Name Server
 * HPSS_PRINCIPAL_PFSD_UID   - DCE Principal UID for PFS Daemon
 * HPSS_PRINCIPAL_PVL_UID    - DCE Principal UID for PVL
 * HPSS_PRINCIPAL_PVR_UID    - DCE Principal UID for PVR
 * HPSS_EXEC_PVR_AMPEX       - executable name for PVR Ampex
 * HPSS_EXEC_PVR_OPER        - executable name for PVR Operator
 * HPSS_EXEC_PVR_STK         - executable name for PVR STK
 * HPSS_EXEC_PVR_3494        - executable name for PVR 3494
 * HPSS_EXEC_PVR_3495        - executable name for PVR 3495
 * HPSS_EXEC_PVR_LTO         - executable name for PVR LTO
 * HPSS_EXEC_PVR_AML         - executable name for PVR AML
 * HPSS_EXEC_SSDISK          - executable name for Storage Server - Disk
 * HPSS_EXEC_SSTAPE          - executable name for Storage Server - Tape
{ "HPSS_EXEC_REPACK",        "${HPSS_PATH_BIN}/repack" },
/*
 ***************************************************************************
 * Logging Unix files
 *
 * HPSS_PATH_LOG       - unix path name for logging files
 * HPSS_UNIX_LOCAL_LOG - local log file
 ***************************************************************************
 */
{ "HPSS_PATH_LOG",           "${HPSS_PATH_VAR}/log" },
{ "HPSS_UNIX_LOCAL_LOG",     "${HPSS_PATH_LOG}/local.
 *
 * HPSS_PATH_GK             - unix path name for Gatekeeping files
 * HPSS_UNIX_GK_SITE_POLICY - site policy file
 ***************************************************************************
 */
{ "HPSS_PATH_GK",            "${HPSS_PATH_VAR}/gk" },
{ "HPSS_UNIX_GK_SITE_POLICY","${HPSS_PATH_GK}/gksitepolicy" },
/*
 ***************************************************************************
 * SFS Files
 *
 * HPSS_SFS        - Encina SFS server name without CDS prefix
 * HPSS_SFS_SERVER - Encina SFS se
  "${HPSS_SFS_SERVER}/hierarchy${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONFIG_STORAGECLASS",
  "${HPSS_SFS_SERVER}/storageclass${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONFIG_SCLASS",      "${HPSS_CONFIG_STORAGECLASS}"},
{ "HPSS_CONFIG_SCLASSTHRESHOLD",
  "${HPSS_SFS_SERVER}/sclassthreshold${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONFIG_NSGLOBALFILESETS",
  "${HPSS_SFS_SERVER}/nsglobalfilesets${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONFIG_FILE_FAMILY",
  "${HPSS_SFS_SERVER}/filefamily${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONF
 * HPSS_CONFIG_VVDISK - SS virtual volume - disk
 * HPSS_CONFIG_VVTAPE - SS virtual volume - tape
 ***************************************************************************
 */
{ "HPSS_CONFIG_BFMIGRREC",
  "${HPSS_SFS_SERVER}/bfmigrrec${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONFIG_BFPURGEREC",
  "${HPSS_SFS_SERVER}/bfpurgerec${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONFIG_ACCTLOG",
  "${HPSS_SFS_SERVER}/acctlog${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONFIG_BITFILE",
  "${HPSS_SFS_SERVER}/bitfile${HPSS_SF
{ "HPSS_CONFIG_ACCTVALIDATE",
  "${HPSS_SFS_SERVER}/acctvalidate${HPSS_SFS_SUFFIX}"},
/*
 ***************************************************************************
 * BFS SFS Files
 *
 * HPSS_CONFIG_BFS - BFS type specific
 ***************************************************************************
 */
{ "HPSS_CONFIG_BFS",
  "${HPSS_SFS_SERVER}/bfs${HPSS_SFS_SUFFIX}"},
/*
 ***************************************************************************
 * NameServer SFS Files
{ "HPSS_CONFIG_SITE",
  "${HPSS_SFS_SERVER}/site${HPSS_SFS_SUFFIX}"},
/*
 ***************************************************************************
 * Metadata Monitor SFS Files
 *
 * HPSS_CONFIG_MM - Metadata Monitor type specific
 ***************************************************************************
 */
{ "HPSS_CONFIG_MM",
  "${HPSS_SFS_SERVER}/mmonitor${HPSS_SFS_SUFFIX}"},
/*
 ***************************************************************************
 * Migratio
{ "HPSS_CONFIG_PVLACTIVITY",
  "${HPSS_SFS_SERVER}/pvlactivity${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONFIG_PVLDRIVE",
  "${HPSS_SFS_SERVER}/pvldrive${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONFIG_PVLJOB",
  "${HPSS_SFS_SERVER}/pvljob${HPSS_SFS_SUFFIX}"},
{ "HPSS_CONFIG_PVLPV",
  "${HPSS_SFS_SERVER}/pvlpv${HPSS_SFS_SUFFIX}"},
/*
 ***************************************************************************
 * PVR SFS Files
 *
 * HPSS_CONFIG_PVR        - PVR type specific
 * HPSS_CONFIG_CART_AMPEX - PVR a
 * HPSS_CDS_LOGC   - CDS name - Log Client
 * HPSS_CDS_LOGD   - CDS name - Log Daemon
 * HPSS_CDS_LS     - CDS name - Location Server
 * HPSS_CDS_MM     - CDS name - Metadata Monitor
 * HPSS_CDS_MOUNTD - CDS name - Mount Daemon
 * HPSS_CDS_MPS    - CDS name - Migration/Purge Server
 * HPSS_CDS_MVR    - CDS name - Mover
 * HPSS_CDS_NDCG   - CDS name - Non-DCE Gateway
 * HPSS_CDS_NFSD   - CDS name - NFS Daemon
 * HPSS_CDS_NS     - CDS name - Name Server
 * HPSS_CDS_PFSD   - CDS name - PFSD
 * HPSS_CDS_P
 *
 * HPSS_DESC_BFS   - Descriptive name - Bitfile Server
 * HPSS_DESC_DMG   - Descriptive name - DMAP Gateway
 * HPSS_DESC_FTPD  - Descriptive name - FTP Daemon
 * HPSS_DESC_GK    - Descriptive name - Gatekeeper Server
 * HPSS_DESC_HPSSD - Descriptive name - Startup Daemon
 * HPSS_DESC_LOGC  - Descriptive name - Log Client
 * HPSS_DESC_LOGD  - Descriptive name - Log Daemon
 * HPSS_DESC_LS    - Descriptive name - Location Server
 * HPSS_DESC_MM    - Descriptive name - Metadata Monitor
 *
Chapter 5 HPSS Infrastructure Configuration *************************************************************************** * System Manager Specific * * The SM attempts to throttle the connection attempts to other servers. It * will attempt to reconnect to each server every * HPSS_SM_SRV_CONNECT_INTERVAL_MIN seconds until the number of failures for * that server has reached HPSS_SM_SRV_CONNECT_FAIL_COUNT.
 * HPSS_NOTIFY_Q_LOG_THREADS  - Number of threads to create per client
 *                              to process the queue of notifications
 *                              of alarms, events, and status messages
 * HPSS_NOTIFY_Q_TAPE_THREADS - Number of threads to create per client
 *                              to process the queue of notifications
 *                              of tape mounts and unmounts
 * HPSS_NOTIFY_Q_TAPE_CHECKIN_THREADS - Number of threads to create per
 *                              client to process the queue of
 *                              notifications of tape check-in and
 *                              check-out requests
 **************
 * HPSS_SSMDS_INTERVAL    - Interval at which DS checks idle RMI clients
 * HPSS_SSMDS_RMI_HOST    - Java RMI host for DS
 * HPSS_SSMDS_RMI_NAME    - Java RMI base name for DS
 * HPSS_SSMDS_RMI_PORT    - Port for Java RMI (Remote Method Invocation)
 * HPSS_SSMDS_KEYSTORE    - Data Server keystore file (for SSL)
 * HPSS_SSMDS_KEYSTORE_PW - File holding password to Data Server keystore
 *                          file, or the string "PROMPT" if the sysadm
 *                          wishes to be prompted for the password at
 *                          Data Server startup
{ "HPSS_SSMDS_JAVA_POLICY",  "${HPSS_PATH_VAR}/ssm/java.policy.ds" },
{ "HPSS_HPSSADM_JAVA_POLICY","${HPSS_PATH_VAR}/ssm/java.policy.
 *
 * HPSS_LS_NAME - Location Server rpc group
 ***************************************************************************
 */
{ "HPSS_LS_NAME", "${HPSS_CDS_LS}/group" },
/*
 ***************************************************************************
 * NDAPI Specific
 *
 * HPSS_NDCG_KRB5_SERVICENAME - Non DCE Gateway kerberos servicename
 * HPSS_KRB_TO_DCE_FILE       - File to translate krb5 realm names
 *                              into DCE cellnames
 * HPSS_KCHILD_PATH           - Pathname for the ndcg_kchild
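The hpss_env_defs[] table in the listing above pairs each environment variable with a default whose value may reference other variables (e.g. ${HPSS_PATH_VAR}/log). A minimal Python sketch of how such defaults resolve, assuming the usual precedence (process environment first, then the table default, with recursive ${VAR} expansion), is shown below. The table contents and file name here are abbreviated illustrations, not the full HPSS defaults, and hpss_env_get is a hypothetical name for the lookup:

```python
import os
import re

# Abbreviated stand-in for the hpss_env_defs[] table: (name, default) pairs.
# Defaults may reference other variables with ${VAR} syntax.
HPSS_ENV_DEFS = {
    "HPSS_PATH_VAR": "/var/hpss",
    "HPSS_PATH_LOG": "${HPSS_PATH_VAR}/log",
    "HPSS_UNIX_LOCAL_LOG": "${HPSS_PATH_LOG}/local.log",  # file name assumed
}

def hpss_env_get(name, environ=None):
    """Resolve a variable: the process environment wins over the table
    default, and any ${VAR} references are expanded recursively with the
    same lookup rules."""
    environ = os.environ if environ is None else environ
    raw = environ.get(name, HPSS_ENV_DEFS.get(name))
    if raw is None:
        raise KeyError(name)
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: hpss_env_get(m.group(1), environ), raw)
```

With an empty environment the table defaults apply all the way down, so HPSS_UNIX_LOCAL_LOG resolves to /var/hpss/log/local.log; overriding HPSS_PATH_VAR in the environment shifts every path built on it.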
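The System Manager connection-throttling rule described in the listing above (retry every HPSS_SM_SRV_CONNECT_INTERVAL_MIN seconds until HPSS_SM_SRV_CONNECT_FAIL_COUNT failures accumulate for a server) can be sketched as follows. The numeric values are illustrative, and the fallback to a longer interval after the failure count is reached is an assumption on our part, since the comment is truncated at that point in the listing:

```python
# Illustrative values; the real ones come from the HPSS_SM_SRV_* variables.
CONNECT_INTERVAL_MIN = 20   # seconds between attempts while failures are few
CONNECT_INTERVAL_MAX = 60   # assumed longer interval once the count is hit
CONNECT_FAIL_COUNT = 3

class ServerConnection:
    """Tracks reconnect throttling for one server under the rule above."""

    def __init__(self):
        self.failures = 0
        self.last_attempt = None

    def next_interval(self):
        # Retry at the short interval until CONNECT_FAIL_COUNT failures,
        # then back off to the longer interval (assumed behavior).
        if self.failures < CONNECT_FAIL_COUNT:
            return CONNECT_INTERVAL_MIN
        return CONNECT_INTERVAL_MAX

    def may_attempt(self, now):
        # An attempt is allowed once the current interval has elapsed.
        return (self.last_attempt is None or
                now - self.last_attempt >= self.next_interval())

    def record_failure(self, now):
        self.failures += 1
        self.last_attempt = now

    def record_success(self):
        # A successful connection resets the throttle for this server.
        self.failures = 0
        self.last_attempt = None
```

The point of the design is that a chronically unreachable server stops consuming reconnect attempts at the short interval, while a server that comes back resets immediately.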
Chapter 5 HPSS Infrastructure Configuration 1. Configure HPSS with DCE (only on the first core server node in the cell). 2. Configure Encina SFS (only if Encina SFS will run on this node). 3. Create and Manage SFS Files (only if Encina SFS will run on this node). 4. Set up FTP Daemon (only if FTP Daemon will run on this node). 5. Set up Startup Daemon. 6. Add SSM User (only if SSM will run on this node). 7. Start SSM Servers/Session (only if SSM will run on this node).
[3] Create and Manage SFS Files
[4] Set Up FTP Daemon
[5] Set Up Startup Daemon
[6] Add SSM Administrative User
[7] Start SSM Servers/User Session
[E] Re-run hpss_env()
[U] Un-configure HPSS
[X] Exit

Reply ===> (Select Option [1-7, E, U, X]):

Messages will be provided to indicate the status of the HPSS infrastructure configuration process at each stage.
4. Create keytab files for HPSS servers and clients.
5. Randomize the keys in the created server and client keytabs.
6. Set up the DCE CDS directory for HPSS servers.
7. Set up HPSS-related Extended Registry Attributes, and create the hpss_cross_cell_members group.

5.5.2   Configure Encina SFS Server

This option configures and starts an SFS server. By performing this step (Configure Encina SFS Server) multiple times, multiple SFS servers can be configured.
If the server machine is running Solaris, mkhpss will prompt the administrator to enter a disk name in order to add a mirror to the logical volume. For Solaris server machines, the disk name should start with /dev/rdsk.

5.5.3   Manage SFS Files

The mkhpss script invokes the managesfs utility, a front-end to the sfsadmin commands provided by Encina, to allow the user to create and manage the SFS files created for HPSS.
Chapter 5 HPSS Infrastructure Configuration Section 6.2.2: SSM User Session Configuration and Startup on page 253 for instructions to create an SSM user session at a later time. Even though all SSM User IDs and their prerequisite accounts can now be created, we recommend that only an administrative SSM ID be created at this time.
Prompt ==> Select the sfs server to be unconfigured
1) encina/sfs/hpss
2) encina/sfs/hpss1
[M] Return to the Main Menu
[U] Return to the Unconfigure Menu

Reply ===> (Select Option [1-2, M, U]):

Option 2 on the Unconfiguration Menu will issue the following warning message, and the user will have the option to abort or continue with the unconfiguration:

WARNING => WARNING => WARNING => WAR
• hpss_log
• hpss_ls
• hpss_mm
• hpss_mountd
• hpss_mps
• hpss_mvr
• hpss_ndcg
• hpss_nfs
• hpss_pvl
• hpss_pvr
• hpss_ssm
• hpss_ss

To adhere to the site local password policy, these principals were created with known keys and subsequently changed with randomized keys. The HPSS keytab files (/krb5/hpss.keytabs and /krb5/hpssclient.keytab) hold both versions of the keys; however, the DCE Registry holds only the randomized keys.
   -random \
   -registry

◆ For each <principal> entry in /krb5/hpssclient.keytab do:

% dcecp -c keytab add \
   /.:/hosts/$HPSS_CDS_HOST/config/keytab/hpssclient.keytab \
   -member <principal> \
   -random \
   -registry

where <principal> refers to an entry in the keytab file; e.g., hpss_ssm, and $HPSS_CDS_HOST refers to the CDS machine host name; e.g., hydra.

3. See the discussion immediately following this step! Propagate the resulting keytab files to every HPSS server machine.
Chapter 6 HPSS Configuration 6.1 Overview This chapter provides instructions for creating the configuration data to be used by the HPSS servers. This includes creating the server configuration, defining the storage policies, and defining the storage characteristics. The configuration data can be created, viewed, modified, or deleted using the HPSS SSM GUI windows. Refer to Appendix F: Additional SSM Information (page 525) for more information on how to use SSM.
8. Create a specific configuration entry for each HPSS server (Section 6.8: Specific Server Configuration on page 323)
9. Configure MVR devices and PVL drives (Section 6.9: Configure MVR Devices and PVL Drives on page 401)

6.1.2   HPSS Configuration Limits

The following configuration limits are imposed by SSM and/or the HPSS servers:

6.1.2.1 Server
•

6.1.2.2 Storage Policy
• Total Accounting Policies: 1
• Total Migration Policies: 64
• Total Purge Policies: 64

6.1.
• Delete: 10,000 physical volumes per SSM delete request

6.1.3   Using SSM for HPSS Configuration

The HPSS server and resource configuration data may be created, viewed, updated, or deleted using the SSM windows. The configuration data are kept in Encina SFS files. When you submit a request to configure a new server, SSM displays all fields with the appropriate default data.
Figure 6-1 HPSS Health and Status Window

6.1.4   Server Reconfiguration and Reinitialization

HPSS servers read their respective configuration file entries during startup to set their initial running conditions. Note that modifying a configuration file while a server is running does not immediately change the running condition of the server. Servers will need to perform a reinitialization or restart to read any newly modified configuration data.
Chapter 6 HPSS Configuration script, start_ssm, is provided to bring up the SSM System Manager and the SSM Data Server. Another provided script, start_ssm_session, can be used to bring up an SSM user session. 6.2.1 SSM Server Configuration and Startup The SSM System Manager will automatically create an SSM configuration entry, if one does not already exist, using the environment variables defined in the /opt/hpss/config/hpss_env file.
3. privileged. This security level is normally assigned to a privileged user such as an HPSS system analyst. This SSM user can view most of the SSM windows but cannot perform any SSM control functions.
4. user. This security level is normally assigned to a user who may need to monitor some HPSS functions. This user can view a limited set of the SSM windows but cannot perform any of the SSM control functions.
Figure 6-2 HPSS Logon Window

6.3   Global Configuration

The HPSS Global Configuration metadata record provides important information that is used by all HPSS servers. This is the first configuration that must be done through SSM.

6.3.1   Configure the Global Configuration Information

The global configuration information can be configured using the HPSS Global Configuration window. After the information has been created, it can be updated.
6.3.2   Global Configuration Variables

Figure 6-3 HPSS Global Configuration screen

Table 6-1 lists the Global Configuration variables and provides specific recommendations for the Global Configuration.

Table 6-1 Global Configuration Variables
Display Field Name   Description   Acceptable Values   Default Value
General Section

September 2002 HPSS Installation Guide Release 4.
Chapter 6 HPSS Configuration Table 6-1 Global Configuration Variables Acceptable Values Default Value The UID of the user who has root access privileges to the NS database, if the Root Is Superuser flag is set to ON. Valid root user ID 0 Root Is Superuser A flag that indicates whether root privileges are enabled for the UID specified in the Root User ID field. Root access privileges grant the specified user the same access rights to a name space object as the owner of that object.
Table 6-1 Global Configuration Variables (Continued)
The logging policy that will be used by HPSS servers for which a specific logging policy has not been configured. Acceptable Values: Any Logging policy or none. Default Value: None.
Accounting Policy: Name of the SFS file where the accounting policy is stored. Acceptable Values: Any valid Encina filename. Default Value: /.:/encina/sfs/hpss/accounting
Accounting Summary: Name of the SFS file where accounting summary information is stored.
Chapter 6 HPSS Configuration Table 6-1 Global Configuration Variables Display Field Name Description Acceptable Values Default Value Storage Classes Name of the SFS file where storage class configurations are stored. Any valid Encina filename /.:/encina/sfs/ hpss/ storageclass Storage Hierarchies Name of the SFS file where storage hierarchy configurations are stored. Any valid Encina filename /.
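The interaction between the Root User ID and Root Is Superuser fields in Table 6-1 amounts to a simple check ahead of normal permission evaluation: the configured UID is treated as the owner of every name space object only while the flag is ON. A sketch of that rule (function and parameter names are hypothetical, not the Name Server's actual code):

```python
def effective_as_owner(uid, object_owner_uid, root_user_id, root_is_superuser):
    """Return True when uid gets the object owner's access rights.

    Per the Global Configuration fields: the configured Root User ID is
    granted the same access rights as the owner of a name space object
    only while the Root Is Superuser flag is ON.
    """
    if root_is_superuser and uid == root_user_id:
        return True
    # Otherwise only the actual owner gets owner rights here; everyone
    # else falls through to ordinary group/other permission checks.
    return uid == object_owner_uid
```

Note that turning the flag OFF strips the configured root UID of its special status without affecting objects it actually owns.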
Figure 6-4 Storage Subsystem Configuration window

6.4.1   Storage Subsystem Configuration Variables

Table 6-2 lists the Storage Subsystem Configuration Variables and provides specific recommendations.

Table 6-2 Storage Subsystem Configuration Variables
Subsystem ID: A unique integer ID for the storage subsystem. This field may only be modified at create time for the storage subsystem.
Chapter 6 HPSS Configuration Table 6-2 Storage Subsystem Configuration Variables Display Field Name Description Acceptable Values Default Value Subsystem Name The descriptive name of the storage subsystem. This field may only be modified at create time for the storage subsystem. A unique character string up to 31 bytes in length. "Subsystem #N" where N is the subsystem ID Migrate Records File (SFS) The name of the SFS file where migration records are stored for this storage subsystem.
Chapter 6 HPSS Configuration 6.5 Basic Server Configuration All HPSS servers use a similar metadata structure for the basic server configuration.
6.5.1   Configure the Basic Server Information

A basic server configuration entry can be created using the Basic Server Configuration window. After the configuration entry is created, it can be viewed, updated, or deleted through this window. From the HPSS Health and Status window (shown in Figure 6-1), click on the Admin menu, select the Configure HPSS option and click on the Servers option. The HPSS Servers window will be displayed as shown in Figure 6-5.
Figure 6-5 HPSS Servers Window
Chapter 6 HPSS Configuration Figure 6-6 Basic Server Configuration Window 6.5.1.1 Basic Server Configuration Variables The fields in the Server Configuration window describe information necessary for servers to successfully operate. The fields also contain information that is necessary for successful interaction with the SSM component. In addition, each HPSS basic server configuration includes Security Information and Audit Policy fields that determine the server's security environment.
Chapter 6 HPSS Configuration • Execution Controls fields • DCE Controls fields • Security Controls fields • Audit Policy fields To save window space, the last four categories are presented in “layers”, and each layer has its name displayed on a “tab”. To access a different layer, click on the appropriate tab. Table 6-3 lists the fields on the Basic Server Configuration window in the approximate order that they appear on the window.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Server Subtype The subtype of the selected server. Most servers do not have subtypes. This field is filled in with the server subtype selected by the user as part of the window’s selection. It is not changeable. Server subtype selected by the user (e.g. tape); none for servers that do not have subtypes.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Storage Subsystem Name of the HPSS Storage Subsystem to which this server will be assigned. No more than one BFS, MPS, NS, Disk SS, and Tape SS can be assigned to any one subsystem. Any configured Storage Subsystem name from the pop-up list. This field is required for BFS, MPS, NS, Disk SS, and Tape SS.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Execute Hostname The hostname on which a particular server is to run. SSM uses this field to locate the Startup Daemon that will execute the server. Any legal hostname, such as a name that might be obtained using the UNIX hostname command. Default Value Local hostname. Advice: In order for a server to start, a Startup Daemon must be running.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Auto Restart Count The HPSS Startup Daemon uses this field to control automatic server restarts. If the server shuts down unexpectedly, the Startup Daemon will restart the server, without any intervention by SSM, up to this many times; after that, the Startup Daemon will not restart it again.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Maximum Connections The highest number of connection contexts this server can establish. Any positive 32-bit integer value.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Thread Pool Size The highest number of threads this server can spawn in order to handle concurrent requests. Any positive 32-bit integer value.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Principal Name The DCE principal name defined for the server during the infrastructure configuration phase. The principal name must exist in the DCE registry.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Authorization Service The authorization service to use when passing identity information in communications to HPSS components. None, Name, DCE Default Value DCE Advice: The recommended authorization server is DCE. This ensures that the most complete identity information about the client is sent to the server.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Keytab Pathname The absolute pathname of the UNIX file containing the keytab entry that will be used by the server when setting up its identity. Any legal UNIX file name can be used as long as it is the name of a keytable file. Default Value /krb5/ hpss.keytabs Advice: The server must have read access to this file.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value CHMOD The Security Audit Policy for Name Server Object Permissions events. If set, security audit messages will be sent to the logging subsystem. NONE, FAILURE, ALL FAILURE for Name Server; NONE for other servers CHOWN The Security Audit Policy for Name Server Object Owner events.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value RENAME The Security Audit Policy for Name Server Object Rename events. If set, security audit messages will be sent to the logging subsystem. NONE, FAILURE, ALL FAILURE for Name Server; NONE for other servers Advice: Sites that must audit object deletion should set the RENAME field to ALL for Name Server.
Chapter 6 HPSS Configuration Table 6-3 Basic Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value BFSETATTRS The Security Audit Policy for Bitfile Server Set Bitfile Attribute events. If set, security audit messages will be sent to the logging subsystem. NONE, FAILURE, ALL FAILURE for Bitfile Server; NONE for other servers 6.5.1.
Chapter 6 HPSS Configuration {group:subsys/dce/cds-server:rwdtc} {any_other:r--t-} CDS Security object for the MVR Security object: {user:${HPSS_PRINCIPAL_SSM}:rw-tc} {group:subsys/dce/cds-admin:rwdtc} {group:subsys/dce/cds-server:rwdtc} {any_other:---t-} CDS Security object for the NS Security object: {user:${HPSS_PRINCIPAL_FTPD}:r---c} {user:${HPSS_PRINCIPAL_BFS}:r---c} {user:${HPSS_PRINCIPAL_NDCG}:r---c} {user:${HPSS_PRINCIPAL_NFSD}:r---c} {user:${HPSS_PRINCIPAL_DMG}:rw--c} {user:${HPSS_PRINCIPAL_SSM}:
6.6.1   Configure the Migration Policies

A migration policy is associated with a storage class and defines the criteria by which data is migrated from that storage class to storage classes at lower levels in the storage hierarchies. Note, however, that it is the storage hierarchy definitions, not the migration policy, which determine the number and location of the migration targets.
Chapter 6 HPSS Configuration Before deleting a basic migration policy, make sure that it is not referenced in any storage class configurations. If a storage class configuration references a migration policy which does not exist, the Migration/Purge and Bitfile Servers will not start. When a migration policy is added to or removed from a storage class configuration, the Migration/Purge Servers must be restarted in order for migration to begin or end on this storage class.
Figure 6-7 Migration Policy Configuration Window

6.6.1.1 Migration Policy Configuration Variables

Table 6-4 lists the fields on the Migration Policy window and provides specific recommendations for configuring the Migration Policy for use by HPSS. Note that descriptions of fields which appear both in the Basic Policy and Storage Subsystem-Specific Policy sections of the window apply to both fields.
Chapter 6 HPSS Configuration Table 6-4 Migration Policy Configuration Variables Default Value Display Field Name Description Acceptable Values Policy ID A unique ID associated with the Migration Policy. Any unique, non-zero, positive integer value. Last configured Migration Policy ID plus 1. Policy Name The descriptive name of a Migration Policy. Any character string up to 31 bytes in length.
Chapter 6 HPSS Configuration Table 6-4 Migration Policy Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Runtime Interval An integer, in minutes, that dictates how often the migration process will occur within a storage class. Any positive 32-bit integer value. 120 minutes Note: The value specifies the interval between the completion of one migration run and the beginning of the next. Migrate Volumes Selects the tape volume migration algorithm.
Chapter 6 HPSS Configuration Table 6-4 Migration Policy Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Migrate At Critical Threshold A flag that indicates a migration run should be started immediately when the storage class critical threshold is exceeded. This option applies to disk migration only. ON, OFF OFF Storage Subsystem The descriptive name of the storage subsystem to which a subsystemspecific policy applies.
Chapter 6 HPSS Configuration basic policy. Change the desired fields in the subsystem specific policy and then use the Add Specific button to write the new subsystem specific policy to the metadata file. To update an existing basic purge policy, select the Load Existing button on the Basic Policy portion of the Purge Policy window and select the desired policy from the popup list. The window will be refreshed with the configured basic policy data.
Chapter 6 HPSS Configuration Refer to the window's help file for more information on the individual fields and buttons as well as the supported operations available from the window. Figure 6-8 Purge Policy Window 6.6.2.1 Purge Policy Configuration Variables Table 6-5 lists the fields on the Purge Policy window and provides specific recommendations for configuring the Purge Policy for use by HPSS.
Chapter 6 HPSS Configuration Table 6-5 Purge Policy Configuration Variables Default Value Display Field Name Description Acceptable Values Policy ID A unique ID associated with the Purge Policy. Any unique, non-zero, positive integer value. Last configured Purge Policy ID plus 1. Policy Name The descriptive name of a Purge Policy. Any character string up to 31 bytes in length. Purge Policy ID Advice: A policy’s descriptive name should be meaningful to local site administrators and operators.
Chapter 6 HPSS Configuration Table 6-5 Purge Policy Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Purge Locks expire after Maximum number of minutes that a file may hold a purge lock. Purge locked files are not eligible for purging. Any integer value between 0 and 1000000 (one million). A value of 0 indicates that purge locks expire immediately.
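The "Purge Locks expire after" field in Table 6-5 implies a straightforward eligibility test: a purge-locked file stays off the purge candidate list until its lock has been held longer than the policy value, and a value of 0 makes locks expire immediately. A sketch of that rule (function names are illustrative, not HPSS source):

```python
def purge_lock_active(lock_age_minutes, expire_after_minutes):
    """Whether a file's purge lock still protects it from purging.

    expire_after_minutes mirrors the 'Purge Locks expire after' policy
    field: 0 means purge locks expire immediately.
    """
    return lock_age_minutes < expire_after_minutes

def eligible_for_purge(purge_locked, lock_age_minutes, expire_after_minutes):
    # A purge-locked file is ineligible until its lock expires; all other
    # policy criteria (access time, thresholds) are out of scope here.
    if purge_locked and purge_lock_active(lock_age_minutes,
                                          expire_after_minutes):
        return False
    return True
```

Setting the expiry to 0 effectively disables purge-lock protection, since every lock is treated as already expired.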
Figure 6-9 Accounting Policy Window

6.6.3.1 Accounting Policy Configuration Variables

Table 6-6 lists the fields on the Accounting Policy window and provides specific recommendations for configuring the Accounting Policy for use by HPSS.
Chapter 6 HPSS Configuration Table 6-6 Accounting Policy Configuration Variables Display Field Name Description Policy ID The unique ID associated with the Accounting Policy. Acceptable Values Default Value Always 1. This field is not changeable. 1 Advice: This number is always 1 for this version of HPSS since only one accounting policy is currently allowed. Accounting Style Style of accounting that is used by the entire HPSS system.
Chapter 6 HPSS Configuration Table 6-6 Accounting Policy Configuration Variables (Continued) Display Field Name Description Accounting Validation File The Encina SFS filename of the Accounting Validation metadata Acceptable Values Default Value Any valid Encina SFS filename. /.:/encina/sfs/ hpss/ acctvalidate Advice: You need to create and populate this file with the account validation editor if Account Validation is enabled and Site-style accounting is in use (see Section 12.2.
Chapter 6 HPSS Configuration Table 6-6 Accounting Policy Configuration Variables (Continued) Display Field Name Description Last Run Time The starting timestamp of the current accounting run or the completion time of the last accounting run. Acceptable Values Default Value A date and time value. 0 Advice: This time is set when an accounting run begins, and it is set again when the accounting run terminates. Number of Accounts Total number of accounts in the system.
Chapter 6 HPSS Configuration return a policy marked MOD back to its original settings. Deletions (DEL) can be "undone" by using the Cancel Delete button which becomes visible only after a policy has been marked for deletion. To create a new logging policy, click on the Start New button. A new line will be highlighted and you can fill in the Descriptive Name field. NEW will be displayed in the Mod column after the name is entered.
Chapter 6 HPSS Configuration Figure 6-10 HPSS Logging Policies Window 6.6.4.2 HPSS Logging Policies List Variables Table 6-7 Logging Policies List Configuration Variables Display Field Name Description Acceptable Values Default Logging Policy The descriptive name of the default logging policy. This policy will apply to all servers which do not have their own policy defined. Blank or the Descriptive Name of one of the logging policies in the list.
Chapter 6 HPSS Configuration Default Value Display Field Name Description Acceptable Values Descriptive Name The descriptive name of the HPSS server to which the Logging Policy will apply. This field is filled in with the descriptive names of existing policies. If Start New is selected, a blank entry will be added, and any unique Descriptive Name may be entered. Names of existing entries Record Types to Log Record types that are to be logged for the specified server.
Chapter 6 HPSS Configuration Display Field Name Description Acceptable Values SSM Types Record types that are to be sent to SSM for display. Any combination of the following: Alarm, Event, Status. Default Value Values from the Default Logging Policy entry.
• Event - defines an informational message (e.g., subsystem initializing, subsystem terminating). Typically, the policy would be to send events to both the log and to SSM for displaying in the HPSS Alarms and Events window (Figure 1-5 on page 38 of the HPSS Management Guide).
• Status - defines a status message to be output in a pop-up window or to the log. Typically, the policy would be to not send these messages to the log or to the screen.
Chapter 6 HPSS Configuration Figure 6-11 Logging Policy Window 6.6.4.4 Logging Policy Configuration Variables Table 6-8 lists the fields on the Logging Policy window and provides Logging Policy configuration information. Table 6-8 Logging Policy Configuration Variables Display Field Name Description Acceptable Values Default Value Name of Server to Which Policy Applies The descriptive name of the HPSS server to which the Logging Policy will apply.
Chapter 6 HPSS Configuration Table 6-8 Logging Policy Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Record Types to Log Record types that are to be logged for the specified server. Any combination of the following: Alarm, Event, Request, Security, Accounting, Debug, Trace, Status.
Once a Location Policy is created or updated, it will not be in effect until all local Location Servers are started or reinitialized. The Reinitialize button on the HPSS Servers window (Figure 1-1 on page 20 of the HPSS Management Guide) can be used to reinitialize a running Location Server.

Figure 6-12 Location Policy Window

6.6.5.
Chapter 6 HPSS Configuration Table 6-9 Location Policy Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Location Map Update Interval Interval in seconds that the LS rereads general server configuration metadata. Any positive integer value. 300 Advice: If this value is set too low, a load will be put on SFS while reading configuration metadata and the LS will be unable to contact all remote LSs within the timeout period.
Table 6-9 Location Policy Configuration Variables (Continued)

RPC Group Name: The CDS pathname where the DCE RPC group containing local LS path information should be stored. Acceptable values: any valid CDS pathname. Default value: /.:/hpss/ls/group

Advice: All clients will need to know this group name since it is used by them when initializing to contact the Location Server.
Chapter 6 HPSS Configuration Figure 6-13 Remote HPSS Site Configuration Window To add a new Remote Site, enter the information about the remote HPSS system and click on the Add button. To update an existing Remote Site, modify the desired fields and click on the Update button to write the changes to the SFS file. To delete an existing Remote Site, click on the Delete button to delete the policy. To load an existing policy click on the Load Existing button.
Table 6-10 Remote HPSS Site Configuration Fields

Site Name: The descriptive text identifier for this site. Acceptable values: any text string; this name should be unique among all site records. Default value: none.

RPC Group Name: The name of the DCE rpcgroup of the Location Servers at the remote site. You must obtain this information from the remote site’s administrator. Default value: none.

6.
Chapter 6 HPSS Configuration Before deleting a storage class configuration, be sure that all of the storage subsystem specific warning and critical thresholds are set to default. If this is not done, one or more threshold records will remain in metadata and will become orphaned when the storage class configuration is deleted. Refer to the window’s help file for more information on the individual fields and buttons as well as the supported operations available from the window.
Figure 6-15 Tape Storage Class Configuration Window

6.7.1.1 Storage Class Configuration Variables

Table 6-11 lists the fields on the Storage Class Configuration window and provides specific recommendations for configuring the storage class for use by HPSS. SSM enforces certain relationships between the SC fields and will not allow fields to be set to inappropriate values.

HPSS Installation Guide Release 4.
Chapter 6 HPSS Configuration Table 6-11 Storage Class Configuration Variables Default Value Display Field Name Description Acceptable Values Storage Class ID A unique numeric ID associated with this storage class. Any non-zero, positive 32-bit integer value. Last configured ID plus 1. Storage Class Name A text string used to describe this storage class. Any character string up to 31 bytes in length.
Table 6-11 Storage Class Configuration Variables (Continued)

VV Block Size: The size of the logical data block on the new virtual volume. Acceptable values: a 32-bit integer value. Default value: first multiple of Media Block Size which equals or exceeds 1 MB.

Advice: The VV Block Size chosen will determine the performance characteristics of the virtual volume.
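The default VV Block Size is computed as the first multiple of the Media Block Size that equals or exceeds 1 MB. As a sketch of that arithmetic (the 256 KiB media block size below is an illustrative value, not an HPSS default):

```shell
# Round 1 MiB (1048576 bytes) up to the next multiple of the media block size.
mbs=262144                                # hypothetical 256 KiB media block size
target=1048576                            # 1 MiB
vv=$(( (target + mbs - 1) / mbs * mbs ))  # integer ceiling, then back to bytes
echo "$vv"                                # prints 1048576 for this media size
```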
Chapter 6 HPSS Configuration Table 6-11 Storage Class Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Stripe Transfer Rate The approximate data transfer rate for the entire stripe. This value is calculated by SSM. It is the product of the Device I/O Rate and Stripe Width fields. Same as Device I/O Rate. Blocks Between Tape Marks The maximum number of data blocks that can be written on a tape between consecutive tape marks.
Chapter 6 HPSS Configuration Table 6-11 Storage Class Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Warning Threshold Low threshold for the amount of space used in this storage class. For disk this is the percentage of total space used. For tape this is the number of free VVs remaining.
Chapter 6 HPSS Configuration Table 6-11 Storage Class Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Average Latency The average time (in seconds) that elapses when a data transfer request is scheduled and the time the data transfer begins. This field is only applicable to tape storage classes. Any positive 32-bit integer value. Based on selected Media Type.
Chapter 6 HPSS Configuration across all subsystems. This is the simplest way to configure storage class thresholds. If it is desired to override these default values for one or more subsystems, use the Subsystem-Specific Thresholds button on the Storage Class Configuration window to bring up the Storage Subsystem-Specific Thresholds window, shown in Figure 6-16.
Chapter 6 HPSS Configuration to and Change selected Critical Threshold to and press [Enter]. The override values will be applied in the Threshold Table. Use the Update button at the bottom of the screen to apply these changes to metadata. Changes to the override values are accomplished the same way. To delete the override values, select the desired storage subsystem in the Threshold Table and click on the Set To Defaults button.
Chapter 6 HPSS Configuration Table 6-12 Storage Subsystem-Specific Thresholds Variables Display Field Name Default Values Description Acceptable Values Change selected Warning Threshold to Override value of the Storage Class Warning Threshold for the selected subsystem. For disk, any integer percentage between 1 and 100 (inclusive) which is less than or equal to the Critical Threshold. For tape, any non-negative integer VV count which is greater than or equal to the Critical Threshold.
Chapter 6 HPSS Configuration To delete an existing storage hierarchy, select the Load Existing button on the Storage Hierarchy Configuration window and select the desired storage hierarchy from the popup list. The window will be refreshed with the configured data. Click on the Delete button to delete the storage hierarchy. Refer to Section 3.12.2: Deleting Storage Hierarchy Definition (page 81) in the HPSS Management Guide for more guidelines on deleting a storage hierarchy configuration.
Chapter 6 6.7.2.1 HPSS Configuration Storage Hierarchy Configuration Variables Table 6-13 lists the fields on the Storage Hierarchy Configuration window and provides specific recommendations for configuring the Storage Hierarchy for use by HPSS. Table 6-13 Storage Hierarchy Configuration Variables Default Value Display Field Name Description Acceptable Values Hierarchy ID The unique, numeric ID associated with this hierarchy. Any unique, non-zero, positive 32bit integer value.
Chapter 6 6.7.3 HPSS Configuration Configure the Classes of Service Class of Service (COS) information must be created for each class of service that is to be supported by the HPSS system. A COS can be created using the HPSS Class of Service window. After the configuration entry is created, it can be viewed, updated, or deleted through the same window.
Figure 6-18 Class of Service Configuration Window

6.7.3.1 Class of Service Configuration Variables

Table 6-14 lists the fields on the HPSS Class of Service window and provides Class of Service configuration information.

Table 6-14 Class of Service Configuration Variables

Class ID: A unique integer ID for the COS. Acceptable values: any non-zero, positive 32-bit integer value.
Chapter 6 HPSS Configuration Table 6-14 Class of Service Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Class Name The descriptive name of the COS. A character string up to 31 bytes in length. [Only modifiable at create time] None Advice: Select a name that describes the COS in some functional way. A good example would be High Speed Disk Over Tape. Storage Hierarchy The name of the storage hierarchy associated with this COS.
Chapter 6 HPSS Configuration Table 6-14 Class of Service Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Retry Stage Failures from Secondary Copy When this flag is turned on, HPSS will automatically retry a failed stage from the primary copy if a valid secondary copy exists. For this to work properly, the COS must be set up with at least 2 copies and a valid second copy must have been created during HPSS migration processing.
Chapter 6 6.7.4 HPSS Configuration File Family Configuration File family information must be created for each file family that is to be supported by the HPSS system. A file family can be created using the File Family Configuration window. After the configuration entry is created, it can be viewed, updated, or deleted through the same window. From the HPSS Health and Status window (shown in Figure 6-1), click on the Admin menu, select the Configure HPSS option and click on the File Families option.
Chapter 6 6.7.4.1 HPSS Configuration Configure File Family Variables Table 6-15 describes the file family variables. Table 6-15 Configure File Family Variables Display Field Name Default Value Description Acceptable Values Family ID An unsigned non-zero integer which serves as a unique identifier for this file family. A unique default value is provided, which may be overwritten if desired. However, if an ID which is already in use by another file family is specified, the Add request will fail.
Chapter 6 HPSS Configuration • Mover • Name Server • NFS Daemon • Non-DCE Client Gateway • Physical Volume Library • Physical Volume Repository • Storage Server Sections 6.8.1 through 6.8.14 describe the specific configuration for each of the above servers. The SSM servers, Location Servers, NFS Mount Daemons, and Startup Daemons do not have specific configurations. 6.8.
Figure 6-20 Bitfile Server Configuration Window

6.8.1.1 Bitfile Server Configuration Variables

Table 6-16 lists the fields on the Bitfile Server Configuration window and provides specific recommendations for configuring the BFS for use by HPSS.
Chapter 6 HPSS Configuration Table 6-16 Bitfile Server Configuration Variables Default Value Display Field Name Description Acceptable Values Server Name Descriptive name of the BFS. This name is copied over from the BFS general configuration entry. This field cannot be modified. It is displayed for reference only. Selected BFS descriptive name. Server ID The UUID of the Bitfile Server. This ID is copied over from the BFS general configuration entry. This field cannot be modified.
Chapter 6 HPSS Configuration Table 6-16 Bitfile Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Storage Class Statistics Interval An interval in seconds that indicates how often the BFS needs to contact each SS to get up-to-date statistics on each storage class that the SS manages. This information is used in load balancing across multiple storage classes. Any positive 32-bit integer value.
Chapter 6 HPSS Configuration Table 6-16 Bitfile Server Configuration Variables (Continued) Display Field Name Description Acceptable Values COS Copy To Disk A flag affecting the COS changes for a bitfile. By default, when the COS of a bitfile is changed, the BFS copies the file to the highest level tape storage class in the target hierarchy. If this flag is ON, and if the target hierarchy has a disk storage class as its highest level, the BFS will copy the file to that disk storage class.
Table 6-16 Bitfile Server Configuration Variables (Continued)

SS Unlink Records: The file name of the SFS file where the information representing storage segments to be unlinked is stored. Acceptable values: valid Encina file name. Default value: /.:/encina/sfs/hpss/bfssunlink.#

COS Changes: The file name of the SFS file where the information indicating which bitfiles need to have the COS changed is stored. Acceptable values: valid Encina file name. Default value: /.
Chapter 6 HPSS Configuration Figure 6-21 DMAP Gateway Configuration Window 6.8.2.1 DMAP Gateway Configuration Variables Table 6-17 lists the fields on the HPSS DMAP Gateway Configuration window and provides specific recommendations for configuring the DMAP Gateway for use by HPSS. Table 6-17 DMAP Gateway Configuration Variables Default Value Display Field Name Description Acceptable Values Server Name The descriptive name of the DMAP Gateway.
Table 6-17 DMAP Gateway Configuration Variables (Continued)

Encryption Key: A number used as an encryption key in message passing. A specific value can be typed in, or the Generate New Key button can be clicked to generate a random key value. Acceptable values: any positive 64-bit integer, displayed as hexadecimal. Default value: 0.

Advice: Do not use key 0, which implies no protection.
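A key value can also be produced outside SSM. This is a sketch using standard Unix tools only (od and tr reading /dev/urandom), not an HPSS utility; any non-zero 64-bit value serves equally well:

```shell
# Print 8 random bytes as 16 hexadecimal digits for use as a 64-bit key.
# Re-run in the unlikely event the result is all zeros.
key=$(od -vAn -N8 -tx8 /dev/urandom | tr -d ' \n')
echo "$key"
```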
Figure 6-22 Gatekeeper Server Configuration Window

To use a Gatekeeper Server for Gatekeeping Services, the Gatekeeper Server must also be configured into the Storage Subsystem (see Section 6.4: Storage Subsystems Configuration on page 259). To use the Gatekeeper Server for Account Validation Services, the Account Validation button of the Accounting Policy must be ON (see Section 6.6.3: Configure the Accounting Policy on page 289).

6.8.3.
Chapter 6 HPSS Configuration Table 6-18 Gatekeeper Configuration Fields Display Field Name Default Wait Time Description Acceptable Values Default Value The default number of seconds the client will wait before retrying a request if not determined by the Site Interface. The value must be greater than zero and is only used if the Site Interface returns a wait time of zero for the create, open, or stage request being retried.
Refer to the window’s help file for more information on the individual fields and buttons as well as the supported operations available from the window.

Figure 6-23 Logging Client Configuration Window

6.8.4.1 Log Client Configuration Variables

Table 6-19 lists the fields on the Logging Client Configuration window and provides specific recommendations for configuring a Log Client for use by HPSS.
Chapter 6 HPSS Configuration Table 6-19 Log Client Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Maximum Local Log Size The maximum size in bytes of the local log file. Once this size is reached, the log will be reused in a wraparound fashion. The local log is not automatically archived. A positive integer up to the maximum file size allowed by the operating system.
Table 6-19 Log Client Configuration Variables (Continued)

Log Messages To: A mask of options to apply to local logging. Acceptable values: a combination of the following: Log Daemon, Local Log File. Log Daemon—send log messages to the central log. Local Log File—format and log messages from servers on the same node as the Log Client to a local file.
Chapter 6 HPSS Configuration To delete an existing configuration, select the Log Daemon entry on the HPSS Servers window and click on the Type-specific... button from the Configuration button group. The Logging Daemon Configuration window will be displayed with the configured data. Click on the Delete button to delete the specific configuration entry. Refer to the window’s help file for more information on the individual fields and buttons as well as the supported operations available from the window.
Table 6-20 Log Daemon Configuration Variables

Server Name: The descriptive name of the Log Daemon. This name is copied from the Log Daemon general configuration entry. This field cannot be modified. It is displayed for reference only. Default value: the selected Log Daemon descriptive name.

Server ID: The UUID of the Log Daemon. This ID is copied from the Log Daemon general configuration entry.
Chapter 6 HPSS Configuration Table 6-20 Log Daemon Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Log Directory The name of the directory in which log files are stored. Any valid UNIX path name. The string length (in bytes) is limited to a minimum of the operating system maximum allowed file name size, or 1024. /var/hpss/log Archive Class of Service The COS that will determine where HPSS logs are archived.
Chapter 6 HPSS Configuration To update an existing configuration, select the Metadata Monitor entry on the HPSS Servers window and click on the Type-specific... button from the Configuration button group. The Metadata Monitor Configuration window will be displayed with the configured data. After modifying the configuration, click on the Update button to write the changes to the appropriate SFS file.
Chapter 6 HPSS Configuration Table 6-21 Metadata Monitor Configuration Variables Default Value Display Field Name Description Acceptable Values Server Name The descriptive name of the MMON. This name is copied over from the MMON general configuration entry. This field cannot be modified. It is displayed for reference only. The selected MMON descriptive name. Server ID The UUID of the Metadata Monitor. This ID is copied over from the MMON general configuration entry. The UUID of the MMON.
To add a new specific configuration, select the Migration/Purge Server entry and click on the Type-specific... button from the Configuration button group on the HPSS Servers window. The Migration/Purge Server Configuration window will be displayed as shown in Figure 6-26 with default values. If the default data is not desired, change the fields with the desired values. Click on the Add button to create the configuration entry.
Chapter 6 6.8.7.1 HPSS Configuration Migration/Purge Server Configuration Variables Table 6-22 lists the fields on the HPSS Migration/Purge Server Configuration window and provides specific recommendations for configuring the MPS for use by HPSS. Table 6-22 Migration/Purge Server Configuration Variables Default Value Display Field Name Description Acceptable Values Server Name The descriptive name of the MPS. This name is copied over from the selected MPS general configuration entry.
Chapter 6 HPSS Configuration Table 6-22 Migration/Purge Server Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Report File (Unix) A prefix string used by the MPS to construct a report file name. The full file name will consist of this string with a date string and subsystem Id appended to it. This prefix should include a full UNIX path and file name. If a full path is not specified, the location of the migration report files may be unpredictable.
Chapter 6 6.8.8 HPSS Configuration Mover Specific Configuration The Mover (MVR) specific configuration entry can be created using the Mover Configuration window. After the configuration entry is created, it can be viewed, updated, or deleted through the same window. From the HPSS Health and Status window (shown in Figure 6-1), click on the Admin menu, select the Configure HPSS option and click on the Servers option. The HPSS Servers window will be displayed as shown in Figure 6-5.
Chapter 6 HPSS Configuration Figure 6-27 Mover Configuration window 6.8.8.1 Mover Configuration Variables Table 6-23 lists the fields on the Mover Configuration window and provides specific recommendations for configuring the MVR for use by HPSS. Table 6-23 Mover Configuration Variables Default Value Display Field Name Description Acceptable Values Server Name The descriptive name of the MVR. This name is copied over from the selected MVR general configuration entry.
Chapter 6 HPSS Configuration Table 6-23 Mover Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Server ID The UUID of the MVR. This ID is copied over from the selected MVR general configuration entry. The UUID of the MVR. This field cannot be modified. It is displayed for reference only. Extracted from the MVR general configuration entry. Buffer Size The buffer size (of each buffer) used for double buffering during data transfers.
Chapter 6 HPSS Configuration Table 6-23 Mover Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Port Range Start The beginning of a range of local TCP/IP port numbers to be used by the Mover when connecting to clients (required by some sites for communication across a firewall). If zero, the operating system will select the port number; otherwise the Mover selects a local port number between Port Range Start and Port Range End (inclusive).
Table 6-23 Mover Configuration Variables (Continued)

TCP Path Name: The pathname of the MVR TCP/IP listen executable. Acceptable values: fully qualified file name of the MVR TCP/IP listen executable. Default value: /usr/lpp/hpss/bin/hpss_mvr_tcp

Advice: The TCP MVRs currently supported are listed below.
6.8.8.2.2 /etc/services, /etc/inetd.conf, and /etc/xinetd.d

To invoke the non-DCE/Encina part of the Mover, the remote node’s inetd is used to start the parent process when a connection is made to a port based on the Mover’s type specific configuration (see Section 6.8.8). An entry must be added to the /etc/services file so that inetd will be listening on the port to which the Mover parent process (running on a DCE/Encina node) will connect.
        port        = 5002
        user        = root
        server      = /opt/hpss/bin/hpss_mvr_tcp
        server_args = /var/hpss/etc/mvr_ek
}

The specified port will be one greater than the port listed as the TCP Listen Port in the Mover’s type specific configuration. For example, the port value in the example corresponds to a Mover with a TCP Listen Port value of 5001. The template will cause the executable /opt/hpss/bin/hpss_mvr_tcp to be run under the root user ID when a connection is detected on port 5002.
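For systems using inetd rather than xinetd, the corresponding /etc/services and /etc/inetd.conf entries for the same Mover (TCP Listen Port 5001, so inetd listens on 5002) might look like the following sketch. The service name hpss_mvr1 is an illustrative choice, not an HPSS requirement:

```
# /etc/services entry: inetd listens on TCP Listen Port + 1
hpss_mvr1    5002/tcp

# /etc/inetd.conf entry: start the remote Mover process under root,
# passing the encryption key file as its argument
hpss_mvr1 stream tcp nowait root /opt/hpss/bin/hpss_mvr_tcp hpss_mvr_tcp /var/hpss/etc/mvr_ek
```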
Table 6-24 IRIX System Parameters

maxdmasz: minimum value 513. Maximum DMA size (required for Ampex DST support).

Solaris

When running the Mover or non-DCE Mover process on a Solaris platform, there are a number of system configuration parameters which may need to be modified before the Mover can be successfully run. The values can be modified by editing the /etc/system configuration file and rebooting the system.
Note that the SEMMSL value should be increased if running more than one Mover on the Linux machine (multiply the minimum value by the number of Movers to be run on that machine).

Table 6-26 Linux System Parameters

SEMMSL (include/linux/sem.h): minimum value 512. Maximum number of semaphores per ID.

SHMMAX (include/linux/shm.h): minimum value 0x2000000. Maximum shared memory segment size (bytes).

6.8.8.
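Table 6-26 describes raising these limits by editing the kernel header files and rebuilding. On Linux kernels that also expose the System V IPC limits through sysctl, the same minimums could be applied without a rebuild; a sketch (the kernel.sem format is SEMMSL SEMMNS SEMOPM SEMMNI, and the last three values here are common kernel defaults, not HPSS requirements):

```
# /etc/sysctl.conf fragment (apply with "sysctl -p")
kernel.shmmax = 33554432        # 0x2000000 bytes, the Table 6-26 minimum
kernel.sem = 512 32000 32 128   # SEMMSL raised to the Table 6-26 minimum of 512
```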
Chapter 6 HPSS Configuration # the start of the client path matches any of the paths in this list # then the transfer will proceed, otherwise the Mover will not transfer # the file. # # The format of this file is simply a list of paths, one per line. /gpfs /local/globalfilesystem In the above sample configuration, any file under the path, /gpfs or the path /local/ globalfilesystem can be transferred using the special data protocol subject to the caveats specified above.
Figure 6-28 Name Server Configuration Window

6.8.9.1 Name Server Configuration Variables

Table 6-27 lists the fields on the Name Server Configuration window and provides specific recommendations for configuring the NS for use by HPSS.
Chapter 6 HPSS Configuration Table 6-27 Name Server Configuration Variables Default Value Display Field Name Description Acceptable Values Server Name The descriptive name of the NS. This name is copied over from the NS general configuration entry. This field cannot be modified. It is displayed for reference only. The selected NS descriptive name Server ID The UUID of the NS. This ID is copied over from the NS general configuration entry. This field cannot be modified.
Chapter 6 HPSS Configuration Table 6-27 Name Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Root Fileset Name The name to be assigned to the Name Server’s local root fileset. The name must be unique among all filesets within the DCE cell. A character string of up to 127 bytes in length. Default Value Generated automatically by SSM SFS Filenames. The fields below list the names of the SFS files used by the NS.
Chapter 6 HPSS Configuration Configuration window will be displayed with the configured data. After modifying the configuration, click on the Update button to write the changes to the appropriate SFS file. To delete an existing configuration, select the NFS Daemon entry on the HPSS Servers window and click on the Type-specific... button from the Configuration button group. The NFS Daemon Configuration window will be displayed with the configured data.
Figure 6-29 NFS Daemon Configuration Window (left side)
Figure 6-30 NFS Daemon Configuration Window (right side)
Chapter 6 6.8.10.1 HPSS Configuration NFS Daemon Configuration Variables Table 6-28 lists the fields on the NFS Daemon Configuration window and provides specific recommendations for configuring the NFS Daemon for use by HPSS. Table 6-28 NFS Daemon Configuration Variables Display Field Name Description Acceptable Values Default Value General Parameters. The following fields define general information for the NFS Daemon. Server Name The descriptive name of the NFS Daemon.
Chapter 6 HPSS Configuration Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Exports File (Unix) The name of a UNIX file containing the HPSS directory/file exported for NFS access. Valid and fully qualified UNIX file name. /var/hpss/nfs/ exports Use Privileged Port A flag that indicates whether the NFS clients will use privileged port. If set, the NFS clients must use port numbers less than 1024.
Chapter 6 HPSS Configuration Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Grace Interval An interval, in seconds, to indicate how long after a credential’s last use before it will be expired. Any positive 32-bit integer value. 3600 seconds Dump Interval An interval, in seconds, to determine how often the credentials map cache is checkpointed to a UNIX file. Any positive 32-bit integer value.
Chapter 6 HPSS Configuration Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Acceptable Values Expire Credentials A flag that indicates whether the credentials map entries will be expired and removed based on the grace and purge intervals. ON, OFF Default Value ON Header Cache. The following fields are used to cache names and attributes of the HPSS file objects. Class of Service The name of the COS used when creating the NFS files.
Table 6-28 NFS Daemon Configuration Variables (Continued)

Buffer Size: The size of a single data cache entry. Acceptable values: any positive 32-bit integer value. Default value: 500 KB.

Advice: This value is the amount of data read from and written to HPSS by the data cache layer at a time. It is recommended that this value be a multiple of 8 KB and, if possible, a power of 2.
Chapter 6 HPSS Configuration Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Thread Interval An interval, in seconds, that specifies how often the cleanup threads wake up to look for dirty entries. Any positive 32-bit integer value. 30 seconds Advice: This value should not be too small because extra cleanup threads are spawned as needed when the cache starts to fill up.
Chapter 6 HPSS Configuration Table 6-28 NFS Daemon Configuration Variables (Continued) Display Field Name Description Acceptable Values Recover Cached Data A flag that indicates whether the data cache layer will be forced to look for dirty cache entries in the CheckPoint File. When ON, the data cache layer looks for dirty entries. When OFF, it ignores them and resets the checkpoint file.
Chapter 6 HPSS Configuration Refer to the window’s help file for more information on the individual fields and buttons as well as the supported operations available from the window. Figure 6-31 Non-DCE Client Gateway Configuration Window 6.8.11.1 Non-DCE Client Gateway Configuration Variables Table 6-29 lists the fields on the Non-DCE Client Gateway Configuration window and provides specific recommendations for configuring the Non-DCE Client Gateway for use by HPSS.
Table 6-29 Non-DCE Client Gateway Configuration Variables

Server ID: The UUID of the Non-DCE Client Gateway. This ID is copied over from the Non-DCE Client Gateway general configuration entry. This field cannot be modified. It is displayed for reference only. Acceptable values: the UUID of the Non-DCE Client Gateway. Default value: assigned by SSM when the Non-DCE Client Gateway general configuration entry is created.
Chapter 6 HPSS Configuration Table 6-29 Non-DCE Client Gateway Configuration Variables Display Field Name Description Acceptable Values Default Value Maximum Request Queue Size The maximum number of requests to queue until a request thread becomes available. If this queue fills, no more requests will be processed for this particular Non-DCE client until there is more room in the queue. Any positive 32-bit integer value.
Chapter 6 HPSS Configuration To add a new specific configuration, select the PVL Server entry and click on the Type-specific... button from the Configuration button group on the HPSS Servers window. The PVL Server Configuration window will be displayed as shown in Figure 6-32 with default values. If the default data is not desired, change the fields with the desired values. Click on the Add button to create the configuration entry.
Chapter 6 HPSS Configuration 6.8.12.1 Physical Volume Library Configuration Variables Table 6-30 lists the fields on the PVL Server Configuration window and provides specific recommendations for configuring the PVL for use by HPSS. Table 6-30 Physical Volume Library Configuration Variables Default Value Display Field Name Description Acceptable Values Server Name The descriptive name of the PVL. This name is copied over from the PVL general configuration entry. This field cannot be modified.
Chapter 6 HPSS Configuration 6.8.13 Configure the PVR Specific Information The PVR specific configuration entry can be created using the PVR Server Configuration window. After the configuration entry is created, it can be viewed, updated, or deleted through the same window. If you are configuring a PVR for StorageTek, IBM 3494/3495/3584, or ADIC AML; before proceeding with PVR configuration you should read the PVR-specific section (Section 6.8.13.
Figure 6-33 3494 PVR Server Configuration Window
Figure 6-34 3495 PVR Server Configuration Window
Figure 6-35 3584 LTO PVR Server Configuration Window
Figure 6-36 AML PVR Server Configuration Window
Figure 6-37 STK PVR Server Configuration Window
Figure 6-38 STK RAIT PVR Server Configuration Window
Chapter 6 HPSS Configuration Figure 6-39 Operator PVR Server Configuration Window 6.8.13.1 Physical Volume Repository Configuration Variables Table 6-31 lists the fields on the PVR Server Configuration window and provides specific recommendations for configuring a PVR for use by HPSS. Table 6-31 Physical Volume Repository Configuration Variables Default Value Display Field Name Description Acceptable Values Server Name The descriptive name of the PVR.
Chapter 6 HPSS Configuration Table 6-31 Physical Volume Repository Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Server ID The UUID of this PVR. This ID is copied over from the PVR general configuration entry. The UUID of this PVR. This field cannot be modified. It is displayed for reference only. Extracted from the PVR general server configuration entry.
Chapter 6 HPSS Configuration Table 6-31 Physical Volume Repository Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Same Job On Controller The number of cartridges from this job mounted on this drive’s controller. The larger the number, the harder the PVR will try to avoid mounting two tapes in the same stripe set on drives attached to the same controller. See Advice below for more information. Any positive 32-bit integer value.
Chapter 6 HPSS Configuration Table 6-31 Physical Volume Repository Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Advice: The Same Job on Controller, Other Job on Controller, and Distance to Drive values are used by the PVR when selecting a drive for a tape mount operation. The three values are essentially weights that are used to compute an overall score for each possible drive.
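To make the interaction of these weights concrete, the following sketch computes a weighted score per drive in the way the text describes, with the lowest score winning. The formula, weight values, and drive counts here are illustrative assumptions, not the PVR's actual internal algorithm:

```shell
# Hypothetical scoring sketch: lower score = better drive choice.
# Arguments: same-job weight, other-job weight, distance weight,
# then the same-job count, other-job count, and distance for one drive.
score() {
    echo $(( $1 * $4 + $2 * $5 + $3 * $6 ))
}
# Two candidate drives, with weights Same=10, Other=5, Distance=1:
score 10 5 1  0 1 4   # drive A: one other-job tape on controller, distance 4 -> 9
score 10 5 1  1 0 2   # drive B: one same-job tape on controller, distance 2 -> 12
```

With these hypothetical weights, drive A (score 9) would be preferred over drive B (score 12) even though drive B is closer, because mounting a second cartridge from the same job on one controller carries the heaviest penalty.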
Chapter 6 HPSS Configuration Table 6-31 Physical Volume Repository Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Support Shelf Tape A toggle button. If ON, the PVR will support the removal of cartridges from the tape library. If OFF, the PVR will not support the removal of cartridges from the tape library.
Chapter 6 HPSS Configuration Table 6-31 Physical Volume Repository Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Drive Error Limit This field is intended to be used in conjunction with the PVR Server “Retry Mount Time Limit”. If the number of consecutive mount errors which occur to any drive in this PVR equal or exceed this value, the drive is automatically locked by the PVL.
Chapter 6 HPSS Configuration Table 6-31 Physical Volume Repository Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Command Device (3494/3495 Only) The name of the device that the PVR can use to send commands to the 3494/3495 robot. The environment variable HPSS_3494_COMMAN D_DEVICE will override the value entered in this field. Any device; generally /dev/lmcpX for AIX systems; symbolic library name defined in /etc/ibmatl.
6.8.13.2 LTO PVR Information
6.8.13.2.1 Vendor Software Requirements
HPSS is designed to work with the AIX tape driver (Atape) software to communicate with the IBM 3584 LTO Library over a SCSI channel. Currently, HPSS supports only the AIX version of the Atape driver. Please note that the PVR must run on the same node that has the Atape interface, and this node must have a direct SCSI connection to the library.
6.8.13.2.4 Vendor Information
1. 3584 UltraScalable Tape Library Planning and Operator Guide, GA32-0408-01
2. IBM Ultrium Device Drivers Installation and User's Guide, GA32-0430-00.1
3. IBM Ultrium Device Drivers Programming Reference, WB1304-01
4. 3580 Ultrium Tape Drive Setup, Operator and Service Guide, GA32-0415-00
5. 3584 UltraScalable Tape Library SCSI Reference, WB1108-00
6.8.13.3
Chapter 6 HPSS Configuration Other programs can send commands to the robot at the same time as HPSS through the additional device special file. If the robot is placed in pause mode by an operator, an alarm will appear on the HPSS operator screen. All subsequent robot operations will silently be suspended until the robot is put back in automatic mode. 6.8.13.3.
Chapter 6 HPSS Configuration The STK RAIT PVR cannot be supported at this time since STK has not yet made RAIT generally available. The SSI requires that the system environment variables CSI_HOSTNAME and ACSAPI_PACKET_VERSION be correctly set. Note that due to limitations in the STK Developer's Toolkit, if the SSI is not running when the HPSS PVR is started, or if the SSI crashes while the HPSS PVR is running, the HPSS PVR will lock up and will have to be manually terminated by issuing a kill -9 command.
Chapter 6 HPSS Configuration the Server System Interface (ssi) and the Toolkit event logger. These binaries and associated script files are distributed with the HPSS, but are maintained by the STK Corporation. The binaries and script files for starting the STK client side processes are located in the $HPSS_PATH/stk/bin directory. Documentation files describing the files in the bin directory are located in the $HPSS_PATH/stk/doc directory. Refer to these doc files for additional information. The t_startit.
5.x 4
Enter Remote Host Version (ACSAPI_PACKET_VERSION): 4
Starting /opt/hpss/stk/bin/mini_el...
Attempting startup of /opt/hpss/bin/mini_el ...
Starting /opt/hpss/bin/ssi...
Attempting startup of PARENT for /opt/hpss/bin/ssi...
SIGHUP received
Parent Process ID is: 17290
Attempting startup of /opt/hpss/bin/ssi...
SIGHUP received
Parent Process #17290 EXITING NORMALLY
Initialization Done.
The user must set the Server Name and Client Name, which are case sensitive, in the AML PVR Server Configuration panel to establish connectivity between the HPSS software and the OS/2 system controlling the robot. The Server Name is the name of the controller associated with the TCP/IP address, as defined in the TCP/IP HOST file, and the Client Name is the name of the OS/2 administrator client as defined in the DAS configuration.
1. Make sure the AMU archive management software is running and the hostname is resolved.
2. Select an OS/2 window from the Desktop and change the directory to C:\DAS:
C:> cd \das
3. At the prompt, type tcpstart and make sure that TCP/IP gets configured and that the port mapper program is started:
C:\das> tcpstart
4.
Chapter 6 HPSS Configuration default data is not desired, change the fields with the desired values. Click on the Add button to create the configuration entry. To update an existing configuration, select the Storage Server entry on the HPSS Servers window and click on the Type-specific... button from the Configuration button group. The Storage Server Configuration window will be displayed with the configured data.
Figure 6-40 Disk Storage Server Configuration Window
Chapter 6 HPSS Configuration Figure 6-41 Tape Storage Server Configuration Window 6.8.14.1 Storage Server Configuration Variables Table 6-32 lists the fields on the Storage Server Configuration window and provides specific recommendations for configuring a Storage Server for use by HPSS. Table 6-32 Storage Server Configuration Variables Display Field Name Description Acceptable Values Server Name The descriptive name of the SS. This name is copied from the SS general configuration entry.
Chapter 6 HPSS Configuration Table 6-32 Storage Server Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Server ID The UUID of this SS. This ID is copied from the SS general configuration entry. This field cannot be modified. It is displayed for reference only. Extracted from the SS general server configuration entry. Statistics Fields. The following fields are displayed for Tape Storage Servers only.
Chapter 6 HPSS Configuration Table 6-32 Storage Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Default Value Total Bytes The total number of bytes of used and available storage known to the SS. This field is applicable to the Tape Storage Server only. Any positive 64-bit integer value. 0 Advice: This value should be set to zero when the specific configuration record is first created.
Table 6-32 Storage Server Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Physical Volumes The name of the SFS file that contains the physical volume metadata. Valid SFS file name. The file must not be shared with other Storage Servers. /.:/encina/sfs/hpss/sspvdisk.# for Disk SS /.:/encina/sfs/hpss/sspvtape.# for Tape SS Advice: Use a name that is meaningful to the type of SS and metadata being stored.
Table 6-32 Storage Server Configuration Variables (Continued) Display Field Name Description Acceptable Values Storage Segments The name of the SFS file that contains the storage segment metadata. Valid SFS file name. The file must not be shared with other Storage Servers. Default Value /.:/encina/sfs/hpss/storagesegdisk.# for Disk SS /.:/encina/sfs/hpss/storagesegtape.# for Tape SS Advice: Use a name that is meaningful to the type of SS and metadata being stored.
Chapter 6 HPSS Configuration All locally attached magnetic disk devices (e.g., SCSI, SSA) should be configured using the pathname of the raw device (i.e., character special file). The configuration of the storage devices (and subsequently the Movers that control them) can have a large impact on the performance of the system because of constraints imposed by a number of factors (e.g., device channel bandwidth, network bandwidth, processor power).
Chapter 6 HPSS Configuration Figure 6-42 HPSS Devices and Drives Window To configure a new device and drive, click on the Add New... button on the HPSS Devices and Drives window. The Mover Device and PVL Drive Configuration window will be displayed as shown in Figure 6-44 with default values for a new tape device/drive. If a disk device/drive is desired, click the Disk button to display the default disk data (as shown in Figure 6-43) before modifying any other fields.
Figure 6-43 Disk Mover Device and PVL Drive Configuration Window
Figure 6-44 Tape Mover Device and PVL Drive Configuration Window 6.9.1 Device and Drive Configuration Variables Table 6-33 lists the fields on the Mover Device and PVL Drive Configuration window. Table 6-33 Device/Drive Configuration Variables Display Field Name Description Acceptable Values Device/Drive ID The unique, numeric ID associated with this device/drive. Any non-zero, positive 32-bit integer value.
Chapter 6 HPSS Configuration Table 6-33 Device/Drive Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Device /Drive Type The type of device over which data will move. Any valid type from the pop-up list. Default Tape or Default Disk, based on Tape/Disk toggle buttons. Mover The name of the MVR that controls the device. Any configured MVR name from the pop-up list. First configured MVR name found in the SFS file.
Chapter 6 HPSS Configuration Table 6-33 Device/Drive Configuration Variables (Continued) Display Field Name Description Acceptable Values Starting Offset The offset in bytes from the beginning of the disk logical volume at which the Mover will begin using the volume. The space preceding the offset will not be used by HPSS. Zero, or a positive integer that is a multiple of the Media Block Size. Default Value Zero Advice: This value is used for disk devices only.
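Since the Starting Offset must be zero or a multiple of the Media Block Size, a proposed value can be sanity-checked with a one-line test (the values below are illustrative; the real Media Block Size comes from the device configuration):

```shell
# A Starting Offset is valid only if it is zero or a multiple of the
# Media Block Size. Values here are illustrative.
offset=1048576
block_size=4096
if [ $((offset % block_size)) -eq 0 ]; then
    echo "offset $offset is valid"
else
    echo "offset $offset is NOT a multiple of $block_size"
fi
```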
Table 6-33 Device/Drive Configuration Variables (Continued) Default Value Display Field Name Description Acceptable Values Device Name The name by which the MVR can access the device. Any valid UNIX path name of a device file. None Advice: This name is usually the path name of a device special file such as /dev/rmt0. For locally attached disk devices, the pathname should refer to the raw/character special file (e.g., /dev/rhpss_disk1).
Chapter 6 HPSS Configuration Table 6-33 Device/Drive Configuration Variables (Continued) Display Field Name Description Acceptable Values Drive Address The name/address by which the PVR can access the drive. Valid drive address (see Advice below). Default Value None Advice: For StorageTek robots: Drive Address configuration entries correspond to the ACS,Unit,Panel,Drive Number used by ACSLS to identify drives.
Chapter 6 HPSS Configuration Table 6-33 Device/Drive Configuration Variables (Continued) Display Field Name Description Acceptable Values Locate Support An indication of whether the device supports a high speed (absolute) positioning operation. ON, OFF Default Value ON Advice: This option is supported for 3480, 3490, 3490E, 3590, 3580, Timberline, Redwood, 9840, 9940, DST-312, DST-314, and GY-8240 devices.
Chapter 7 HPSS User Interface Configuration 7.1 Client API Configuration The following environment variables can be used to define the Client API configuration: The HPSS_LS_NAME defines the CDS name of the Location Server RPC Group entry for the HPSS system that the Client API will attempt to contact. The default is /.:/hpss/ls/group. The HPSS_MAX_CONN defines the number of connections that are supported by the Client API within a single client process.
Chapter 7 HPSS User Interface Configuration The HPSS_SERVER_NAME environment variable is used to specify the server name to be used when initializing the HPSS security services. The default value is /.:/hpss/client. This variable is primarily intended for use by HPSS servers that use the Client API. The HPSS_DESC_NAME environment variable is used to control the descriptive name used in HPSS log messages if the logging feature of the Client API is enabled. The default value is “Client Application”.
Chapter 7 HPSS User Interface Configuration The HPSS_REGISTRY_SITE_NAME environment variable is used to specify the name of the security registry used when inserting security information into connection binding handles. This is only needed when the client must support DFS in a cross-cell environment. The default registry is “/.../dce.clearlake.ibm.com”.
Chapter 7 HPSS User Interface Configuration Thus if the key on the NDCG SSM screen is 0123456789ABCDEF then the key in the ndcl.keyconfig file must look like the sample file shown below: 0x01234567 0x89ABCDEF • Make sure you set the appropriate permissions on this file. Only users authorized to use the Non DCE Client API should have access to this file. You can specify an alternate pathname for this file by setting the HPSS_NDCL_KEY_CONFIG_FILE environment variable. 7.2.
Chapter 7 HPSS User Interface Configuration The HPSS_HOSTNAME environment variable is used to specify the hostname to be used for TCP/IP listen ports created by the Client API. The default value is the default hostname of the machine on which the Client API is running. This value can have a significant impact on data transfer performance for data transfers that are handled by the Client API (i.e., those that use the hpss_Read and hpss_Write interfaces).
The HPSS_REUSE_CONNECTIONS environment variable is used to control whether TCP/IP connections are to be left open as long as a file is open or are to be closed after each read or write request. A non-zero value will cause connections to remain open, while a value of zero will cause connections to be closed. The default value is zero.
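As an illustration, a client application doing many reads and writes over a dedicated data network might set the environment variables described above as follows (the hostname value is a placeholder, not a real site value):

```shell
# Illustrative Client API environment setup; values are site-specific.
export HPSS_LS_NAME=/.:/hpss/ls/group   # Location Server RPC group (default shown)
export HPSS_HOSTNAME=data-net-host      # placeholder: host's data-network name
export HPSS_REUSE_CONNECTIONS=1         # non-zero: keep TCP connections open
echo "$HPSS_REUSE_CONNECTIONS"
```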
Chapter 7 HPSS User Interface Configuration Perform the following on the NDCG server: • If your OS supports it and you wish to use DES encryption to encrypt/decrypt your userid/password make sure you have the following line in Makefile.macros before compiling hpss. NDAPI_INTERNATIONAL_SUPPORT = off In case of international sites and for sites that don't have DES support, this flag can be set to on which will then use an alternate hashing mechanism to perform the encryption.
Chapter 7 HPSS User Interface Configuration Important fields in the /etc/krb5.conf: [libdefaults] stanza: The default_realm should map to the kerberos realm name you wish to use, which in most cases will be the same as the dce cell name. The default_keytab_name (which is on the server host) is typically /krb5/v5srvtab. This is the keytab used by the ndcg kerberos service. [realms] stanza: This contains the information for the kerberos realm.
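A minimal /etc/krb5.conf reflecting the stanzas above might look like the following sketch. The realm and KDC host names are illustrative (the realm will typically match the DCE cell name, as noted above):

```
[libdefaults]
    default_realm = dce.clearlake.ibm.com
    default_keytab_name = /krb5/v5srvtab

[realms]
    dce.clearlake.ibm.com = {
        kdc = dopey.clearlake.ibm.com
    }
```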
rgy_edit=> domain account
Domain changed to: account
rgy_edit=> add host/dopey.clearlake.ibm.com
Enter account group [gname]: none
Enter account organization [oname]: none
Enter password: qwerty
Retype password: qwerty
Enter your password:
Enter misc info: () KRB5 host account
[all other prompts can be answered with "Enter"]
...
rgy_edit=> add ndcg/dopey.clearlake.ibm.
Chapter 7 HPSS User Interface Configuration 1. Compile the client library with kerberos mode enabled and link with the kerberos libraries. Use the following flags: -lkrb5 -lcrypto -lcom_err Make sure the linker knows where to find these libraries using the -L flag. 2. Call hpss_SetAuthType in your client program if other authentication modes have been enabled in the client 3. Run kinit to get the initial credentials for the desired principal 4.
possible, use a “TCP Wrapper” application for initiating the HPSS FTP Daemon. This allows on-the-fly changes to the startup of the HPSS FTP Daemon and also provides enhanced security. Several TCP Wrapper applications are available in the public domain. HPSS Parallel FTP Daemon options: The only options which accept additional arguments are the -p, -s, -D, and -F options.
Chapter 7 HPSS User Interface Configuration Table 7-1 Parallel FTP Daemon Options 424 Option Description s string Specify the syslog facility for the HPSS PFTPD. The syntax on the -s option is -slocal7. The default syslog facility is LOG_DAEMON (reference: /usr/include/sys/syslog.h). Alternatives are local0 - local7. Incorrect specification will default back to LOG_DAEMON. To make use of the alternates, modify /etc/syslog.conf to use the alternate facility.
Chapter 7 HPSS User Interface Configuration Table 7-1 Parallel FTP Daemon Options Option Description H Used to disallow login for users whose home directory does not exist or is not properly configured. The default behavior (without the H option) is to put the user in the “/” directory. I Toggle the use of trusted hosts. Default is off. Note: this is not usually recommended.
HPSS.) Most of these applications do not exhibit the line length limitation observed by the inetd superdaemon, and they also allow “on the fly” modification of initialization parameters for network services, e.g., PFTP, telnet, etc., without having to refresh (kill -HUP) the inetd superdaemon.
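For reference, a daemon initialization line in /etc/inetd.conf might look like the following sketch. The service name, daemon path, and binary name here are assumptions for illustration only; the -s option is the one described in Table 7-1:

```
# Hypothetical /etc/inetd.conf entry for the HPSS PFTP daemon,
# selecting the local7 syslog facility with -slocal7
pftp  stream  tcp  nowait  root  /opt/hpss/bin/hpss_pftpd  hpss_pftpd -slocal7
```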
private
# Enable/Disable compression filter
# NOT CURRENTLY SUPPORTED.
compress [ yes | no ] [ ... ]
# Enable/Disable tar filter
# NOT CURRENTLY SUPPORTED.
tar [ yes | no ] [ ... ]
# Control for logging (sent to syslog()).
Chapter 7 HPSS User Interface Configuration The following “hpss_options” are read only if the corresponding flag (See the FTP Daemon Flags above) appears on the inetd.conf initialization line for the HPSS PFTP Daemon and may be left “active” (not commented out with the # symbol) even if the default value is desired.
Table 7-2 Banner Keywords
Keyword  Description
L        Local hostname
U        User name
s        Shutdown time
d        User disconnect time
r        Connection deny time
%        %
The format of the file is: Message lines contain keywords mentioned above.
Chapter 7 HPSS User Interface Configuration Step 4. Creating FTP Users In order for an HPSS user to use FTP, a DCE userid and password must be created. Refer to Section 8.1.1: Adding HPSS Users (page 215) in the HPSS Management Guide for information on how to use the hpssuser utility to create the DCE userid and password and set up the necessary configuration for the user to use FTP.
Chapter 7 HPSS User Interface Configuration 7.4 NFS Daemon Configuration Before the HPSS NFS daemon can be started, any existing AIX or Solaris native NFS daemons must be stopped and prevented from restarting. This is important because the NFS protocol does not provide a way for clients to specify which of two daemons is wanted. When the system is set up correctly, there should be no 'nfsd' or 'mountd' processes running.
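A check like the following sketch can confirm that no native NFS daemons remain. A sample process listing is hard-coded here so the logic is self-contained; in practice the listing would come from `ps -e`:

```shell
# Verify that no native 'nfsd' or 'mountd' processes are running before
# starting the HPSS NFS daemon. Sample listing is hard-coded for illustration.
listing='  101 ?  00:00 inetd
  202 ?  00:03 biod
  303 ?  00:01 syslogd'
if echo "$listing" | grep -Ew 'nfsd|mountd' >/dev/null; then
    echo "native NFS daemons found - stop them before starting HPSS NFS"
else
    echo "no native nfsd/mountd running"
fi
```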
Chapter 7 HPSS User Interface Configuration The alternative approach would have been to mount jupiter's files directly on /users and tardis' files directly on /hpss. But this is the layout of mount points that should be avoided; it can cause the two NFS daemons to interact badly with each other. 7.4.1 The HPSS Exports File The HPSS exports file contains a list of HPSS directories and filesets that can be exported to the NFS clients.
Chapter 7 HPSS User Interface Configuration Table 7-3 Directory Export Options Option Description anon=UID If a request comes from a client user with the HPSS root identity, use the UID value as the effective user ID. The default value for this option is -2. root=HostName[:HostName,...] Gives HPSS root access only to the HPSS root users from the specified host name. The default is for no hosts to be granted root access. Network names are not allowed for this option. access=Client[:Client,...
Chapter 7 HPSS User Interface Configuration /usr/tps -id=20,root=hermes:zip 5. To convert client root users to guest UID=100, enter: /usr/new -id=10,anon=100 6. To export read-only to everyone, enter: /usr/bin -id=15,ro 7. To allow several options on one line, enter: /usr/stuff -id=255,access=zip,anon=-3,ro 8. To export files in a directory named /sources in a fileset named project.fileset: project.fileset:/sources -id=20,access=sandia.gov 7.4.
Chapter 7 HPSS User Interface Configuration • IBM’s Parallel Operating Environment MPI, Version 3 Release 2 • Sun HPC MPI, version 4.1 • ANL MPICH, version 1.2 Other versions of MPI may be compatible with HPSS MPI-IO as well. The mpio_MPI_config.h file is dynamically generated from the host MPI’s mpi.h file, making it possible to tailor the interaction of HPSS MPI-IO with the host MPI.
Chapter 7 HPSS User Interface Configuration of cells to allow data access and authorization between clients and servers in different cells. DFS uses the Episode physical file system, although it can use other native file systems, such as UFS. HPSS provides a similar interface through the Linux version of SGI’s XFS file system. A standard interface is used to couple DFS and XFS (the “managed” file systems) with HPSS.
Chapter 7 HPSS User Interface Configuration updates made through the DFS interface being visible through the HPSS interface and vice versa. Filesets managed with this option are called mirrored filesets. Objects in mirrored filesets have corresponding entries in both DFS and HPSS with identical names and attributes. A user may access data through DFS, at standard DFS rates, or when high performance I/O rates are important, use the HPSS interface. 7.6.2.
Chapter 7 HPSS User Interface Configuration Figure 7-1 DFS/HPSS XDSM Architecture 7.6.2.4 XDSM Implementation for DFS The XDSM implementation supported by Transarc is called the DFS Storage Management Toolkit (DFS SMT). It is fully compliant with the corresponding standard XDSM specification.
Chapter 7 HPSS User Interface Configuration The bulk of DFS SMT is implemented in the DFS file server, but there is also a user space shared library that implements all APIs in the XDSM specification. The kernel component maintains XDSM sessions, XDSM tokens, event queues, and the metadata which describes the events for which various file systems have registered. The kernel component is also responsible for receiving events and dispatching them to the DMAP.
Chapter 7 HPSS User Interface Configuration To support persistent DM-related metadata, XFS utilizes its standard extended attribute facility. DM attributes, event masks, managed regions, and attribute change times (dtime values) are stored as extended attributes. These extended attributes are treated as file metadata. The xfsdump and xfsrestore utilities include extended attributes and migrated regions. Migrated data is not recalled when a dump is taken, producing an abbreviated dump. 7.6.2.
7.6.2.6.2 DMAP Gateway Server
The DMAP Gateway is a conduit and a translator between HDM and HPSS. HPSS servers use DCE/RPCs to communicate; the DMAP Gateway, however, encodes requests using XDR and sends these requests via sockets to HDM. In addition, it translates XDR from the HDM to DCE/TRPC/Encina calls to the appropriate HPSS servers. When a connection between the HDM and Gateway is made, mutual authentication occurs.
Chapter 7 HPSS User Interface Configuration Meetings with Transarc and IBM Austin have taken place to discuss the issue. The above restriction may be fixed in the future. 7.6.2.7.2 Migration and Purge Algorithms Currently, the HDM reads through all the anodes in an aggregate to determine migration and purge candidates. Using empirical data we determined that the HDM reads approximately 70 entries per second (this is disk hardware dependent).
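Using the quoted rate of roughly 70 entries per second, the scan time for a large aggregate can be estimated. The anode count below is hypothetical:

```shell
# Estimate HDM's migration/purge scan time for an aggregate, using the
# ~70 anodes/second rate quoted above (disk hardware dependent).
anodes=1000000
rate=70
secs=$((anodes / rate))
echo "approx. $secs seconds (about $((secs / 3600)) hours)"
```

A one-million-anode aggregate would therefore take on the order of four hours to scan, which is why the scan rate matters when sizing aggregates.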
7.6.2.7.4 Mirrored Fileset Recovery Speed
Mirrored fileset recovery time must be considered when configuring the system. Mirrored fileset recovery can be tedious and slow. The DFS fileset is recovered using HPSS metadata, which requires many SFS accesses. The three time-consuming steps when recovering mirrored filesets are:
• Dumping the HPSS fileset information. The rate for this step is approximately 60 entries per second.
7.6.2.8.1 Migration/Purge Algorithms and the MPQueue
Migration and purge are handled differently in an HPSS/XFS system than in an HPSS/DFS system. In an HPSS/XFS system, a queue of migration and purge candidates is kept in shared memory using the Migration/Purge Queue, or MPQueue. When migration and purge run, they simply step through the migration and purge candidates in the MPQueue.
Chapter 7 HPSS User Interface Configuration For Linux systems, it is assumed that the system on which XFS is running has been configured with the appropriate kernel and XFS versions as given in Section 2.3.5.1: HPSS/XFS HDM Machine on page 52. For DFS HDMs, two additional steps must be performed before continuing to the configuration of the HDM: 1. Configure DFS SMT Kernel Extensions (AIX) 2. Configure DCE DFS The following sections describe these extra steps in more detail.
Here is a sample user_cmd.tcl:
#!/bin/ksh
set pre_start_dfs "/var/hpss/hdm/hdm1/pre_start_dfs"
set pre_start_dfs_fail_on_error $TRUE
set pre_stop_dfs "/var/hpss/hdm/hdm1/pre_stop_dfs"
set post_stop_dfs "/var/hpss/hdm/hdm1/post_stop_dfs"
The pre_start_dfs Korn shell script is run before DFS is started and DFS files are exported. The script ensures that the HDMs are all running.
echo "  could not start hdm$id, status = $status"
exit $status
fi
else
echo "  hdm$id is already running"
fi
done
echo "  all hdm servers are running"
exit 0
An example of the pre_start_dfs for Solaris is as follows:
#!/bin/ksh
# Start the servers (two of them in this example):
for id in 0 1; do
key=`expr 3788 + $id`
var=/var/hpss/hdm/hdm$id
$HPSS_PATH_BIN/hdm_admin -k $key -s $id -v $var ps >/dev/null 2>&1
if [ $? != 0 ]; then
echo "  starting hdm$id"
rm -f $var/hd
$HPSS_PATH_BIN/hdm_admin -k $key -s $id -v $var tcp \
disable >/dev/null 2>&1
done
exit 0
The post_stop_dfs script is executed once the DFS aggregates have been detached and DFS has been shut down. The script stops any HDMs that are still running.
The fifth file (filesys.dat) is automatically updated by HDM as new aggregates and filesets are created. Therefore, this file should not ordinarily be edited by the administrator. HDM cannot be started if this file is missing or does not contain correct information. Before starting HDM for the first time, a special version of filesys.dat must be created so that HDM will recognize that the file is correct.
Chapter 7 HPSS User Interface Configuration The following paragraphs discuss each parameter found in the file. Except as noted, each parameter must be specified. HDM will not start if a mandatory parameter is omitted. The configuration parameters can be specified in any order. The keywords must be spelled correctly, using the specified upper and lower case letters. For example, DescName, not descName or descname. AclLogName specifies the name of the file used for the ACL log.
Chapter 7 HPSS User Interface Configuration are run on one machine, but leads to the possibility that an aggregate will be overlooked and not kept properly synchronized. On the other hand, if "permissiveMount" is not specified, HDM will abort mount events for aggregates it does not manage. While this is safer, it cannot be used on a machine where multiple copies of HDM are running.
In normal operation, only alarm and event messages need to be enabled. Trace and debug messages should be enabled when it is necessary to track down the root cause of a problem. Logging too many different types of messages will impact HDM performance. MainLogName specifies the name of the file used for the main event log. Typically, this will be /var/hpss/hdm/hdm/hdm_main_log.
MaxStages specifies the maximum number of data event processes that can concurrently stage files from HPSS to Episode. When this limit is reached, further transfers from HPSS are deferred until one of the stages completes. This value must be less than NumDataProcesses. A value in the range 1-3 is a good starting point. MaxTcpConnects specifies the maximum number of simultaneous requests to mirrored filesets managed by this HDM allowed through the HPSS interface.
When multiple HDM servers are to be run on the same machine, each HDM must have a unique SharedMemoryKey. HDM servers cannot share memory or logs without serious consequences. ZapLogName specifies the name of the file used for the zap log. Typically, this will be /var/hpss/hdm/hdm/hdm_zap_log. This file contains a record of the archived fileset files that need to be destroyed. The file must exist before HDM is started, but can be empty.
For DFS, this file consists of a number of lines that describe the aggregates and the filesets that reside on that aggregate. Each aggregate is described by a line that begins in the first column. After the line for each aggregate are the lines that describe the filesets that reside on that aggregate. Fileset lines begin with a TAB character. Any line that begins in column one is treated as the definition for an aggregate.
Fsid specifies the file system id for this aggregate. The value is defined by dfstab. Option specifies how the filesets on the aggregate will be managed by HPSS. The parameter may be one of archive/delete, archive/rename, or mirror. If mirror is selected, the name and data space will be mirrored by HPSS, and the end user can access the name and data space from either DFS or HPSS.
Gateway specifies the fully qualified name of the host where the DMAP Gateway that will manage this fileset runs. To keep the example above short, Gateway is shown as tardis, but in practice, the name should be tardis.ca.sandia.gov. If the fileset is partially configured, the host name is represented by a ‘?’. That prevents end users from accessing the DFS fileset. To complete the configuration, an administrator will use SSM to create the HPSS fileset.
Block devices:
 2 fd
 3 ide0
 8 sd
22 ide1
65 sd
66 sd
The block device that matches our major number (3) is ide0.
3. Now put this information together to form the media descriptor. Since the format is name(major,minor), for our example the media descriptor is ide0(3,71).
Option specifies how the filesets on the filesystem will be managed by HPSS. The parameter may be either archive/delete or archive/rename.
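The lookup above can be scripted. The sketch below is illustrative only: the device name and the major and minor numbers are taken from the guide's example (a real site would obtain them from ls -l on the device file and from the "Block devices:" section of /proc/devices). It shows how the pieces combine into a media descriptor of the form name(major,minor):

```shell
# Hypothetical values from the example above. On a real system,
# 'ls -l /dev/hda7' would show "3, 71" (major, minor), and
# /proc/devices maps major number 3 to the name ide0.
NAME=ide0
MAJOR=3
MINOR=71

# Combine them into the media descriptor name(major,minor):
DESCRIPTOR="${NAME}(${MAJOR},${MINOR})"
echo "$DESCRIPTOR"
```

Running the sketch prints ide0(3,71), matching the example descriptor above.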
When multiple HDM Servers and DMAP Gateways are running, they must use different TCP ports.
7.6.3.3.3 gateways.dat File
The gateway configuration file, gateways.dat, is a text file identifying DMAP gateways that will communicate with HDM. The file must be located in the same directory as config.dat, typically /var/hpss/hdm/hdm. The file consists of a number of entries, each containing a host name, port, and encryption key.
The file consists of a number of sections, where each section defines a migration or purge policy. Each section begins with a line that identifies the type of policy being defined (a migration or purge policy) and gives it a name. Comments can appear in the file, starting with a ‘#’ character and continuing to the end of the line.
LastAccessTimeBeforePurge specifies the number of seconds that must elapse after a file is accessed before the file becomes eligible for purging. PurgeDelayTime is the time, in seconds, that the purge process waits between passes in which it looks for files to purge. If this time is set to zero, HDM waits indefinitely; that is, the purge process waits for a signal before looking for files to purge. hdm_admin can be used to send the signal.
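The eligibility rule described above can be sketched as follows. The epoch timestamps and the 3600-second LastAccessTimeBeforePurge value are hypothetical, chosen only to illustrate the test:

```shell
# A file becomes eligible for purging once LastAccessTimeBeforePurge
# seconds have elapsed since its last access (hypothetical values).
LAST_ACCESS=1000000        # last access time, seconds since the epoch
NOW=1004000                # current time, seconds since the epoch
LIMIT=3600                 # LastAccessTimeBeforePurge

# 4000 seconds have elapsed, which exceeds the 3600-second limit.
if [ $(( NOW - LAST_ACCESS )) -ge "$LIMIT" ]; then
  STATUS=eligible
else
  STATUS=not-eligible
fi
echo "$STATUS"
```

With these values the sketch prints "eligible", since 4000 seconds exceed the configured limit.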
KeyTabFile specifies the name of a UNIX file containing a copy of the DCE key for the HDM Security Server component. The file must exist and must contain an entry for the given Principal. ObjectID specifies the DCE object UUID for an HDM. ObjectID is used by the endpoint mapper to distinguish between different instantiations of HDM servers, so a unique value must be used. uuidgen can be used to generate a unique UUID.
Chapter 8 Initial Startup and Verification
8.1 Overview
This chapter provides instructions for starting up the HPSS servers, performing post-startup configuration, and verifying that the system is configured as desired. Briefly, here are the steps involved:
1. Start up the HPSS servers (Section 8.2: Starting the HPSS Servers on page 463)
2. Unlock the PVL drives (Section 8.3: Unlocking the PVL Drives on page 465)
3. Create HPSS storage space (Section 8.
SSM can be used to start up the following types of HPSS server:
• Bitfile Server
• DMAP Gateway
• Gatekeeper Server
• Location Server
• Log Client
• Log Daemon
• Metadata Monitor
• Migration/Purge Server
• Mover
• Name Server
• NFS Daemon
• NFS Mount Daemon
• Non-DCE Client Gateway
• Physical Volume Library
• Physical Volume Repository
• Storage Server
Before starting up the HPSS servers, ensure that all configured HPSS Startup Dae
8.3 Unlocking the PVL Drives
As a default, all newly configured drives are locked. They must be unlocked before the PVL can use them. Refer to Section 5.5.1: Unlocking a Drive on page 102 of the HPSS Management Guide for more information.
8.4 Creating HPSS Storage Space
Adding storage space in HPSS is done in two distinct phases: import and create.
8.8 Creating HPSS directories
If Log Archiving is enabled, use an HPSS namespace tool such as scrub or pftp to create the /log directory in HPSS. This directory must be owned by hpss_log and have permissions rwxr-xr-x.
8.9 Verifying HPSS Configuration
After HPSS is up and running, the administrator should use the following checklist to verify that HPSS was configured correctly:
8.9.1 8.9.2 8.9.3 8.9.
8.9.5 8.9.6
• For tape devices, verify that the “Locate Support” option is enabled (unless there are unusual circumstances why this functionality is not or cannot be supported).
• For tape devices, verify that the “NO-DELAY” option is enabled (unless there are unusual circumstances why this functionality is not or cannot be supported).
• For disk devices, verify that the “Multiple Mover Tasks” flag is enabled.
8.9.8 File Families, Filesets, and Junctions
• Verify that file families and filesets are created according to the site’s requirements.
• Verify that each fileset is associated with the appropriate file family and/or COS.
• Verify that each fileset has an associated junction.
8.9.9 User Interfaces
• Verify that the desired HPSS user interfaces (FTP, NFS, DFS, etc.) are properly configured.
8.9.10 Operational Checklist 8.9.10.
8.9.11 Performance
Measure data transfer rates in each COS for:
• Client writes to disk
• Migration from disk to tape
• Staging from tape to disk
• Client reads from disk
Transfer rates should approach the speed of the underlying hardware. Actual hardware speeds can be obtained from the device specifications and by testing directly from the operating system, for example by using dd to read from and write to each device.
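For example, raw sequential rates can be sampled with dd. The sketch below writes and reads a small scratch file so it is safe to run anywhere; at a real site, point it at each HPSS disk or tape device path, use a much larger transfer size, and time the commands:

```shell
# Scratch file stands in for a real device path (hypothetical target).
f=$(mktemp)

# Sequential write: 8 MB in 1 MB blocks (use 'time dd ...' on real hardware).
dd if=/dev/zero of="$f" bs=1048576 count=8 2>/dev/null

# Sequential read of the same data.
dd if="$f" of=/dev/null bs=1048576 2>/dev/null

# Confirm the amount of data moved.
BYTES=$(wc -c < "$f")
echo "$BYTES bytes transferred"
rm -f "$f"
```

Dividing the byte count by the elapsed time of each dd command gives the sustained write and read rates to compare against the device specifications.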
Chapter 8 470 Initial Startup and Verification September 2002 HPSS Installation Guide Release 4.
Appendix A Glossary of Terms and Acronyms
ACI Automatic Media Library Client Interface
ACL Access Control List
ACSLS Automated Cartridge System Library Software (Storage Technology Corporation)
ADIC Advanced Digital Information Corporation
accounting A log record message type used to log information to be used by the HPSS Accounting process. This message type is not currently used.
aggregate A disk partition that has been modified to provide support for DFS filesets and access control lists.
attribute When referring to a managed object, an attribute is one discrete piece of information, or set of related information, within that object.
attribute change When referring to a managed object, an attribute change is the modification of an object attribute. This event may result in a notification being sent to SSM, if SSM is currently registered for that attribute.
a virtual volume is a cluster.
configuration The process of initializing or modifying various parameters affecting the behavior of an HPSS server or infrastructure service.
configuration file An Encina Structured File Server (SFS) file that stores information defining HPSS server operating parameters, storage characteristics, policies, devices and drives, and other information.
DMG Shorthand for DMAP Gateway.
DMLFS A DCE Local File System that has been modified to support XDSM Data Management APIs.
DNS Domain Name Service
DOE Department of Energy
drive A physical piece of hardware capable of reading and/or writing mounted cartridges. The terms device and drive are often used interchangeably.
DTS Distributed Time Service
Encina A product from Transarc Corporation that serves as the HPSS transaction manager.
file system id A 32-bit number that uniquely identifies an aggregate.
FTP File Transfer Protocol
Gatekeeper Server An HPSS server that provides two main services: the ability to schedule the use of HPSS resources, referred to as the Gatekeeping Service, and the ability to validate user accounts, referred to as the Account Validation Service.
HPSS-only fileset An HPSS fileset that has no counterpart in DFS.
HPSS/DMAP A Data Management Application that monitors DFS or XFS activity in order to keep DFS or XFS synchronized with HPSS. The server relays requests between DFS or XFS and the DMAP Gateway.
LCU Library Control Unit
LFS A DCE Local File System, which is a high performance log-based file system that supports the use of access control lists and multiple filesets within a single aggregate.
LLNL Lawrence Livermore National Laboratory
LMCP Library Manager Control Point
LMU Library Management Unit
local log An optional circular log maintained by a Log Client.
Metadata Manager The subsystem/component within HPSS responsible for the physical storage and management of HPSS metadata as well as the transactional mechanisms for manipulating HPSS metadata. The current Metadata Manager for HPSS is the Encina SFS product, together with a set of HPSS-developed application program interfaces (APIs) that provide a layer of abstraction on top of the Encina SFS data access methods.
verification and provides the Portable Operating System Interface (POSIX).
name space The set of name-object pairs managed by the HPSS Name Server.
NDCG Non-DCE Client Gateway
NDAPI Non-DCE Client Application Program Interface
NERSC National Energy Research Supercomputer Center
Network File System A protocol developed by Sun Microsystems that allows transparent access to files over a network.
physical volume An HPSS object managed jointly by the Storage Server and the Physical Volume Library that represents the portion of a cartridge that can be contiguously accessed when mounted. A single cartridge may contain multiple physical volumes.
Physical Volume Library An HPSS server that manages mounts and dismounts of HPSS physical volumes.
RISC Reduced Instruction Set Computer/Cycles
RMI Remote Method Invocation; the Java form of remote procedure call
RMI registry The service with which Java programs register themselves to run remote methods and by which they find the locations of other Java programs which offer remote methods.
SSM Storage System Management
SSM session The environment in which an SSM user interacts with SSM to monitor and control HPSS through the SSM windows. SSM itself may be running without any sessions active. When an SSM user starts up Sammi and logs in, an SSM session begins and lasts until the user logs off. It is possible to have multiple sessions accessing the same SSM.
between the System Manager and the GUI, and (3) the GUI itself, which includes the Sammi Runtime Environment and the set of SSM windows.
stripe length The number of bytes that must be written to span all the physical storage media (physical volumes) that are grouped together to form the logical storage media (virtual volume). The stripe length equals the virtual volume block size multiplied by the number of physical volumes in the stripe group (i.e.
of a striped virtual volume before switching to the next physical volume.
VV Virtual Volume
XCT Cross Cell Trust
XDSM The Open Group’s Data Storage Management standard. It defines APIs that use events to notify Data Management applications about operations on files.
XFS A file system created by SGI available as open source for the Linux operating system.
Appendix B References
1. 3580 Ultrium Tape Drive Setup, Operator and Service Guide, GA32-0415-00
2. 3584 UltraScalable Tape Library Planning and Operator Guide, GA32-0408-01
3. 3584 UltraScalable Tape Library SCSI Reference, WB1108-00
4. AIX Performance Tuning Guide
5. Data Storage Management (XDSM) API, ISBN 1-85912-190-X
6. DCE for AIX, Version 3.1: Quick Beginnings
7. Encina Administration Guide Volume 1: Introduction and Configuration
8.
20. HPSS User’s Guide, September 2002, Release 4.5
21. IBM 3494 Tape Library Dataserver Operator's Guide, GA32-0280-02
22. IBM 3495 Operator’s Guide, GA32-0235-02
23. IBM AIX Version 4.3 Installation Guide, SC23-4112-01
24. IBM DCE for AIX, Version 3.1: Introduction to DCE
25. IBM DCE Version 3.1 for Solaris: Quick Beginnings
26. IBM DFS Version 3.1 for Solaris: Quick Beginnings
27. IBM DCE Version 3.1 for AIX and Solaris: Administration Guide-Introduction
28. IBM DCE Version 3.
46. Solaris 5.8 11/99 Sun Hardware Platform Guide
47. Solaris System Administration Guide, Volume I
48. Solaris System Administration Guide, Volume II
49. STK Automated Cartridge System Library Software (ACSLS) System Administrator's Guide, PN 16716
50. STK Automated Cartridge System Library Software Programmer’s Guide, PN 16718
51. J. Steiner, C. Neuman, and J. Schiller, "Kerberos: An Authentication Service for Open Network Systems," USENIX 1988 Winter Conference Proceedings (1988).
Appendix C Developer Acknowledgments
HPSS is a product of a government-industry collaboration. The project approach is based on the premise that no single company, government laboratory, or research organization has the ability to confront all of the system-level issues that must be resolved for significant advancement in high-performance storage system technology.
Appendix D Accounting Examples
D.1 Introduction
This appendix describes how to set up the gathering of accounting data at a customer site. The accounting data is used by the customer to calculate charges for the use of HPSS resources. The accounting data represents a blurred snapshot of the system storage usage as it existed during the accounting run.
D.
about the storage used by a particular HPSS Account Index (AcctId) in a particular Class Of Service (COS):
• The total number of file accesses (#Accesses) to files owned by the Account Index in the Class Of Service. In general, file accesses are counted against the account of the user accessing the file, not the owner of the file itself.
• The total number of files (#Files) stored under the Account Index in the Class Of Service.
Sites may wish to write a module that will redirect the accounting data into a local accounting data base. This module would replace the default HPSS module, acct_WriteReport(), which writes out the HPSS accounting data to a flat text file.
Where should the accounting data be stored?
The HPSS accounting file and a copy of the current Account Map should be named with the date and time and stored for future reference.
3152    45(DDI) 25(DND) 30(CBC)
5674   100(DDI)
...
Note: The Account Apportionment Table and Account Maps can be created by the individual sites. They are not created or maintained by HPSS. Some sites may wish to add more information, such as department and text name, or include less information, such as only the UID.
D.
What kind of reports will be needed for your site?
Learning what kind of accounting reports your site will need to generate will help you determine how detailed the collected accounting information should be. A typical Account Map will allow reports to be generated for the following:
• Total file accesses, amount of data transferred, and total space used per account, per class of service, per storage class.
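As a sketch of how apportionment percentages might be applied when generating such reports, the example below splits a total charge across departments. The account number, department codes, and the total of 1000 units are hypothetical, taken loosely from the Account Apportionment Table shown earlier:

```shell
# Split a total charge for account 3152 across departments according
# to its apportionment percentages (hypothetical illustrative data:
# 45% DDI, 25% DND, 30% CBC).
TOTAL=1000
RESULT=""
for entry in "DDI:45" "DND:25" "CBC:30"; do
  dept=${entry%%:*}                       # department code
  pct=${entry##*:}                        # percentage share
  RESULT="$RESULT $dept=$(( TOTAL * pct / 100 ))"
done
echo "$RESULT"
```

With these values, DDI is charged 450 units, DND 250, and CBC 300; the shares sum back to the original total.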
Appendix E Infrastructure Configuration Example E.1 AIX Infrastructure Configuration Example % pwd /opt/hpss/config % mkhpss Verify User ID ============== Status ==> User root; verified; continue... Perform HPSS Infrastructure Configuration: ========================================== Status ==> Platform: AIX Status ==> Host Name: host.clearlake.ibm.
Appendix E Infrastructure Configuration Example [E] Re-run hpss_env() [U] Un-configure HPSS [X] Exit Reply ===> (Select Option [1-7, E, U, X]): 1 Perform DCE Set up ================== Verify DCE is Running ===================== Status ==> DCE is running, continue... Perform DCE Register Status => Running /opt/hpss/config/hpss_dce_register on host.clearlake.ibm.
Appendix E Infrastructure Configuration Example Check the network and DCE on both sides Found: 1 Trusted Cell Members TrustedCells[0].cell_id = b0e840f4-2206-11d5-9453-0004ac498ce4 TrustedCells[0].uid = 101 TrustedCells[0].cell_name = /.../host_cell.clearlake.ibm.com TrustedCells[0].hpss_cell_id = 200090 /.../host_cell.clearlake.ibm.com successfully opened Completed Cross Cell Trust Check Status ==> Configure HPSS with DCE completed, continue...
Appendix E Infrastructure Configuration Example Status ==> Adding principal encina_admin to group encina_admin_group ... Status ==> Adding principal encina/sfs/hpss to group encina_admin_group ... Status ==> Adding principal encina/sfs/hpss to group encina_servers_group ... Status ==> Adding principal encina_admin to organization none ... Status ==> Adding principal encina/sfs/hpss to organization none ...
Appendix E Infrastructure Configuration Example LPs: 16 PPs: 16 STALE PPs: 0 BB POLICY: relocatable INTER-POLICY: maximum RELOCATABLE: yes INTRA-POLICY: middle UPPER BOUND: 1 MOUNT POINT: N/A LABEL: None MIRROR WRITE CONSISTENCY: on EACH LP COPY ON A SEPARATE PV ?: yes Prompt ==> Logical volume exists - use it, redefine it, or quit (u/d/q)(u): Prompt ==> SFS log volume name to use (logVol): Prompt ==> SFS chunk size to use (64): Prompt ==> SFS log file name to use (logF
Appendix E Infrastructure Configuration Example Status ==> Adding {user encina_admin ACQ} ... Status ==> Adding {user hosts/host/self ACQ} ... Status ==> Clearing exclusive authority ... Status ==> Stopping server ... Status ==> Destroying credentials ...
Appendix E Infrastructure Configuration Example ===================== Status ==> DCE is running, continue... Prompt ==> Password for encina_admin: Reading /opt/hpss/config/hpss_env Enter CDS name of SFS to work with [/.:/encina/sfs/hpss] Querying SFS server /.:/encina/sfs/hpss Root SFS server: /.:/encina/sfs/hpss All SFS servers: /.:/encina/sfs/hpss Currently selected SFS server: /.
Appendix E Infrastructure Configuration Example All SFS servers: /.:/encina/sfs/hpss Currently selected SFS server: /.
Appendix E GK Configurations HPSS Global Configuration Storage Hierarchies Non-DCE Gateway Configurations NS Configurations NS Global Filesets Log Client Configurations Log Daemon Configurations Logging Policies Location Server Policies Metadata Monitor Configurations Migration/Purge Configurations Subsystem Storage Class Thresho Infrastructure Configuration Example PVL Physical Volumes PVR Configurations 3494 Cartridges 3495 Cartridges AML Cartridges Operator Cartridges STK Cartridges STK Cartridges Rem
Appendix E Infrastructure Configuration Example dataVol: Account Log Records Migration Records Purge Records Bitfiles Bitfile COS Changes Bitfile Tape Segments Bitfile Disk Segments Bitfile Disk Allocation Maps BFS Storage Segment Checkpoints BFS Storage Segment Unlinks NS ACL Extensions NS Fileset Attrs NS Objects NS Text Extensions Migration/Purge Checkpoints SS Disk Storage Maps SS Tape Storage Maps SS Disk Storage Segments SS Tape Storage Segments SS Disk Physical Volumes SS Tape Physical Volumes SS
Appendix E Infrastructure Configuration Example ================================= Prompt ==> Select Infrastructure Configuration Option: [1] Configure HPSS with DCE [2] Configure SFS Server [3] Create and Manage SFS Files [4] Set Up FTP Daemon [5] Set Up Startup Daemon [6] Add SSM Administrative User [7] Start SSM Servers/User Session [E] Re-run hpss_env() [U] Un-configure HPSS [X] Exit Reply ==
Appendix E Infrastructure Configuration Example Perform HPSS Startup Daemon Setup ================================= Status ==> HPSS Startup Daemon will be invoked at system restart Infrastructure Configuration Menu ================================= Prompt ==> Select Infrastructure Configuration Option: [1] Configure HPSS with DCE [2] Configure SFS Server [3] Create and Manage SFS Files [4] Set Up FTP D
Appendix E Infrastructure Configuration Example Infrastructure Configuration Menu ================================= Prompt ==> Select Infrastructure Configuration Option: [1] Configure HPSS with DCE [2] Configure SFS Server [3] Create and Manage SFS Files [4] Set Up FTP Daemon [5] Set Up Startup Daemon [6] Add SSM Administrative User [7] Start SSM Servers/User Session [E] Re-run hpss_env() [U]
Appendix E Infrastructure Configuration Example [5] Set Up Startup Daemon [6] Add SSM Administrative User [7] Start SSM Servers/User Session [E] Re-run hpss_env() [U] Un-configure HPSS [X] Exit Reply ===> (Select Option [1-7, E, U, X]):U Prompt ==> Select Unconfiguration Option: [1] Unconfigure encina [2] Unconfigure an installation node [M] Return to the Main Menu Reply ===> (Select Optio
Appendix E Infrastructure Configuration Example Status ==> Remove /opt/encinamirror/encina/sfs/hpss? (y/n)(y) Prompt ==> Remove /opt/encinalocal? (y/n)(y) Prompt ==> Remove /opt/encinamirror? (y/n)(y) Perform DCE Deregister Status => Remove /.:/hpss? (y/n)(y) Status => Removing keytab file, /krb5/hpss.keytabs. Status => Removing keytab file, /krb5/hpssclient.keytab.
Appendix E Infrastructure Configuration Example Status ==> DCE is running, continue... Perform DCE Register Status on Tue Aug 28 11:36:22 CDT Prompt Prompt Prompt Status => Running /opt/hpss/config/hpss_dce_register on host by root 2001 => Password for cell_admin: => Remove /krb5/hpss.keytabs? (y/n)(y) => Remove /krb5/hpssclient.
Appendix E Infrastructure Configuration Example If the application stops prior to the completion message, a cell is probably unreachable! Check the network and DCE on both sides Found: 1 Trusted Cell Members TrustedCells[0].cell_id = b499b8ba-98d3-11d5-8d0d-c05e2fc1aa77 TrustedCells[0].uid = 101 TrustedCells[0].cell_name = /.../host_cell.clearlake.ibm.com TrustedCells[0].hpss_cell_id = 200040 /.../host_cell.clearlake.ibm.
Appendix E Infrastructure Configuration Example Status ==> Adding principal encina_admin to group encina_admin_group ... Status ==> Adding principal encina/sfs/hpss to group encina_admin_group ... Status ==> Adding principal encina/sfs/hpss to group encina_servers_group ... Status ==> Adding principal encina_admin to organization none ... Status ==> Adding principal encina/sfs/hpss to organization none ... Status ==> Creating account for encina_admin .
Appendix E Infrastructure Configuration Example Status ==> Init disk ... tkadmin init disk -server /.:/encina/sfs/hpss /dev/rdsk/c0t8d0s1 Initialized disk partition /dev/rdsk/c0t8d0s1 disk size (in pages): 128401 Status ==> Create a physical volume of /dev/rdsk/c0t8d0s1 ... tkadmin create pvol logpvhpss 64 1 /dev/rdsk/c0t8d0s1 0 Status ==> Create a logical volume of logpvhpss ...
Appendix E Infrastructure Configuration Example Status ==> Adding {user encina_admin ACQ} ... Status ==> Adding {user hosts/host.clearlake.ibm.com/self ACQ} ... Status ==> Clearing exclusive authority ... Status ==> Stopping server ... Status ==> Destroying credentials ...
Appendix E Infrastructure Configuration Example Enter CDS name of SFS to work with [/.:/encina/sfs/hpss] Querying SFS server /.:/encina/sfs/hpss Root SFS server: /.:/encina/sfs/hpss All SFS servers: /.:/encina/sfs/hpss Currently selected SFS server: /.
Appendix E Infrastructure Configuration Example Creating “serverconfig” New file serverconfig created Creating “accounting” New file accounting created ... Root SFS server: /.:/encina/sfs/hpss All SFS servers: /.:/encina/sfs/hpss Currently selected SFS server: /.
Appendix E Infrastructure Configuration Example New file bfmigrrec.1 created ... oot SFS server: /.:/encina/sfs/hpss All SFS servers: /.:/encina/sfs/hpss Currently selected SFS server: /.
Appendix E Infrastructure Configuration Example data port = 4020) (y/n)(y) Status ==> FTP Daemon setup completed, continue...
Appendix E Infrastructure Configuration Example Verify DCE is Running ===================== Status ==> DCE is running, continue... Prompt ==> Password for cell_admin: Prompt ==> Enter SSM user id Reply ===> (user id:)hpss DCE: Adding DCE User ‘hpss’ ...
Appendix E Infrastructure Configuration Example Start SSM Session ================= Verify User ID ============== Status ==> User root; verified; continue... Prompt ==> Do you wish to start SSM servers under user id root? Reply ===> (Y) Prompt ==> Do you wish to start SSM session under default user id? (hpss) s) Reply ===> (Y) Prompt ==> Enter SAM2_DISPLAY for displaying SSM session: hpss:0.0 E.
Appendix E etc.,) WARNING WARNING WARNING WARNING WARNING => => => => => Infrastructure Configuration Example ABOUT TO UN-CONFIGURE AN INSTALLATION NODE! ! ! ! ! -ALL- HPSS CONFIGURATION and METADATA WILL BE DESTROYED! -ALL- HPSS CONFIGURATION and METADATA WILL BE DESTROYED! -ALL- HPSS CONFIGURATION and METADATA WILL BE DESTROYED! (i.e., HPSS Server Principals, Keytabs, CDS Entries, SFS ! ! ! ! ! ! ! ! ! ! ! ! Files, ...
Appendix F Additional SSM Information
F.1 Using the SSM Windows
Before using the SSM windows, it is helpful to be aware of some of the conventions used by SSM and by Sammi (on which SSM is based). While the following list does not cover all features of all windows, it does describe the most important points.
• Almost all mouse actions should be performed with the left button. One exception is opening help windows (see Section F.2). Some windows (very few) may use other buttons for special purposes.
• Most non-enterable text fields have gray backgrounds slightly lighter than the window background, and no borders. Some multi-line fields have the same background color, but use borders to help set them off from their surroundings. Some special fields display a fixed set of text strings, and use different background colors for different strings. These are mostly used to mark status conditions with different colors.
of the popup. For potentially longer option lists, a “selection list” is used. This type uses scrollbars, if necessary, to display all the option data, and it has a “Cancel” button at the bottom of the list. You must click the “Cancel” button to dismiss this type of popup.
• Popup selection lists are used in other places besides being part of option lists. This type of popup is gray in color, but they work the same way as the option list variety.
The About Sammi menu option opens a window which displays information about Sammi and about Kinesix, the Sammi developer. Among other items, it shows the current version of the Sammi runtime, the host operating system and operating system version, and the hostname where Sammi is running. Again, this information can sometimes be useful in diagnosing problems.
when it starts, and hpss.def is used to override the settings in s2_defaults.def. Some of the user-preference features which can be set in a defaults file are:
1. Whether or not popup items on windows cause a beep tone.
2. The volume of the beep which is issued when the user tries to type beyond the end of a field.
3. The text entry foreground and background colors.
4. The blink rate for blinking fields.
Again, consult the comments for more information.
• The file .motifbind is read by the Motif window manager when it starts up, and is used to set key bindings to make Sammi work correctly. There are six versions of this file in /usr/lpp//sammi/data:
.motifbind.dec
.motifbind.hp
.motifbind.ibm
.motifbind.sgi
.motifbind.sun_mit
.motifbind.sun_news
In theory, all SSM users need .
preferences may be saved in disk files which SSM can automatically load each time the user logs into an SSM session. Preferences are saved in disk files in each user’s SSM work area, /opt/hpss/sammi/ssm_user/<username>, where <username> is an actual SSM username. While each SSM work area is owned by an SSM user, the SSM Data Server process is the entity which actually writes and reads the preferences files.
◆ SSM work areas for the new user are created. These work areas are /opt/hpss/sammi/ssmuser/<user> (where <user> is the actual ID of the new user), and /opt/hpss/sammi/ssmuser/<user>/mapfiles.
◆ A template Sammi configuration file (ssm_console.dat) is copied to the new user’s work area and modified to be user-specific. This involves editing the console ID number, process hostnames, and the RPC addresses and version numbers for the two vital Sammi processes.
Multiple SSM Sessions. The defaults assume that each user will run only one SSM session at a time. If one user must run multiple SSM sessions, the easiest way to configure this is to create multiple user names for that user with hpssuser. If you choose not to do this, you must create completely separate Sammi execution environments for each concurrent session that a user may want to run.
Appendix G High Availability
G.1 Overview
The High Availability (HA) feature of HPSS allows a properly configured HPSS system to automatically recover from a number of possible failures, with the goal of eliminating all single points of failure in the system. The same functionality can be used to minimize the impact of regularly scheduled maintenance and/or software upgrades. The High Availability feature is only available on AIX platforms.
High availability is not the same as fault tolerance. The failures above are “protected against” from the standpoint that the HA HPSS system will be able to return to an operational state without intervention when any one of the above failures occurs. There certainly may be some down-time, especially when the core server fails (crashes). After a recovery, HPSS will function properly, but it will no longer be in a Highly Available state.
• Each node has two connections to the ethernet network. One is a "standby" adapter that can take over the IP and hardware addresses of the primary adapter in case of failure.
• There is an RS-232 serial cable connecting Node 1 and Node 2 to enable communication even in the event that the main network fails.

G.2 Planning

G.2.1 Before You Begin

Ensure that the appropriate prerequisite software is installed. See Section 2.3.2.
• Cluster Event Worksheet

However, this is a large list of worksheets to go through, so they have been condensed down to cover only what is needed for an HA HPSS system. The following pages contain the condensed HA HPSS Planning Worksheet with suggested values. Please refer to the HACMP for AIX Planning Guide, Appendix A, for help in filling in the blanks below. A sample completed worksheet is given in Section G.2.2.2: HA HPSS Planning Worksheet (example) on page 541.
Network Name _ether1___ _ether1___
Network Attr __________ __________

Service Adapters:
IP Label __________
Function _service___
IP Address __________
Network Name _ether1___
Network Attr __________
HW Address __________

Serial Networks:
Network Name _serial1___
Network Type _RS232____
Node Names _hanode1__, _hanode2__

Serial Network Adapter Worksheet (node A):
Slot Number __________
Interface Name __________
Adapter Label __________
Network Name _serial1___
Adapter Logical Name __________, __________
Adapter Bus ID _6________, _5________
Shared Disk Bus IDs __________

Shared SCSI-2 Differential or Differential Fast/Wide Disks Worksheet (bus2):
Type of Bus __________
Node Name _hanode1_, _hanode2_
Slot Number __________, __________
Adapter Logical Name __________, __________
Adapter Bus ID _6________, _5________
Shared Disk Bus IDs __________

Non-Shared Volume Group Worksheet (Non-Concurrent Access) (hanode1):
Volume Group Name __________
Major Number __________
Log LV Name (if any) __________
Physical Volumes __________, __________, __________, __________, __________, __________, __________, __________, __________, __________, __________, __________, __________, __________, __________

Volume Group Name __________
Major Number __________
Log LV Name (if any) __________
Physical Volumes __________, __________, __________,
G.2.2.2 HA HPSS Planning Worksheet (example)

Cluster Name _HAHPSS_

Network:
Name _ether1____
Type _Ethernet__
Attr _public___
Netmask _255.255.255.0_
Node names _hanode1_, _hanode2_

Boot and Standby Adapters (hanode1):
IP Label _ha1boot__ _ha1stby___
Function _boot_____ _standby__
IP Address _192.94.47.244_ _192.94.48.
Network Name _serial1___
Network Type _RS232____
Node Names _hanode1__, _hanode2__

Serial Network Adapter Worksheet (node A):
Slot Number _sa2______
Interface Name _/dev/tty1_
Adapter Label _ha1tty___
Network Name _serial1___

Serial Network Adapter Worksheet (node B):
Slot Number _sa2______
Interface Name _/dev/tty1_
Adapter Label _ha2tty___
Network Name _serial1___

Shared SCSI-2 Differential or Differential Fast/Wide Disks Worksheet (bus1):
Type of Bus _Ultra SCSI_ No
Non-Shared Volume Group Worksheet (Non-Concurrent Access) (hanode1):
Volume Group Name _rootvg___
Physical Volumes _hdisk0___, _hdisk1___
Logical Volumes _hd1-6, hd8, hd9var_
Mirrored? _all are mirrored_

Non-Shared Volume Group Worksheet (Non-Concurrent Access) (hanode2):
Volume Group Name _rootvg___
Physical Volumes _hdisk0___, _hdisk1___
Logical Volumes _hd1-6, hd8, hd9var_
Mirrored? _all are mirrored_

Shared Volume Groups/Filesystems:
Volume Group Name _dce
HPSS Resource Group:
Resource Group Name _HPSSResourceGroup_
Resource Group Type _Rotating__
Application Server Name _HPSSAppServer_
Start Command _/var/hahpss/hpss_start.ksh_
Stop Command _/var/hahpss/hpss_stop.ksh_

G.3 System Preparation

G.3.1 Physically set up your system and install AIX

The first step in preparing a system for HA HPSS is to perform the physical setup. This includes all the physical cabling for power, networks, disk devices, etc.
% bootlist -m normal hdisk0 hdisk1
% shutdown -Fr

Note that the last step reboots the machine. This is necessary to disable the normal quorum checking for rootvg.

G.3.3 Diagram the Disk Layout

One key aspect of setting up your disks, volume groups, logical volumes, and file systems properly is knowing which disks to use. This is easiest if you draw a diagram to plan out your volume groups.
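For context, the two commands above are the tail end of a root volume group mirroring procedure. A typical AIX sequence looks like the following — an illustrative sketch, not taken from this guide; your disk names may differ:

```shell
% extendvg rootvg hdisk1          # add the second internal disk to rootvg
% mirrorvg rootvg hdisk1          # mirror all rootvg logical volumes
                                  # (mirrorvg also turns off quorum for rootvg)
% bosboot -ad /dev/hdisk1         # make the new mirror bootable
% bootlist -m normal hdisk0 hdisk1
% shutdown -Fr                    # reboot so the quorum change takes effect
```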
• /opt/encinamirror
• /usr/lpp/encina
• /opt/hpss
• /var/hpss
• /usr/java130
• /usr/local/sammi

Note that this lists default file system mount points only. Determine the appropriate file systems and mount points for your site before continuing. The sizing of these file systems is very important. To determine sizing for HPSS-related file systems, see Section 2.10.3: System Memory and Disk Space on page 132.
G.4 Initial Install and Configuration for HACMP

G.4.1 Install HACMP

Install the following file sets from the HACMP 4.4 installation media on each node:

cluster.base
cluster.cspoc
cluster.doc.en_US
cluster.man.en_US
cluster.taskguides
cluster.vsm

G.4.2 Setup the AIX Environment for HACMP

1. Create the /dev/tty<#> devices on each node
2. Setup your SCSI adapters on each node
3. Create the /.
Note that you do not configure your service adapter. This is because HACMP will change your boot adapter into your service adapter when it brings up HPSS. Therefore, without HACMP running, you configure the adapter with its boot name and address.

6. Add Physical Volume IDs (PV IDs) for all shared hdisks (if necessary)

If this is the first time that your shared disks have ever been used, you will need to manually assign PV IDs to them. To check for this, run lspv.
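Disks that still need a PV ID show "none" in the second column of lspv output, so they can be picked out mechanically. The sketch below is illustrative only (the helper name and the sample output are assumptions, not from the HPSS distribution); on AIX, each listed disk would then be given an ID with chdev -l <hdisk> -a pv=yes.

```shell
# list_missing_pvids: read "lspv"-style output on stdin and print the
# hdisks whose PVID column shows "none" (no physical volume ID yet).
list_missing_pvids() {
    awk '$2 == "none" { print $1 }'
}

# Typical use on a cluster node (illustrative):
#   lspv | list_missing_pvids
```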
-> Add a Cluster Definition

Figure H-2 Adding a Cluster Definition

G.4.3.2 Define the two nodes in the cluster

Now tell HACMP the names of the nodes in the cluster. These names do not have to be the same as the hostnames or adapters of the nodes, since the names are used internally by HACMP.

% smitty hacmp
Cluster Configuration
-> Cluster Topology
-> Configure Nodes
-> Add Cluster Nodes
Figure H-3 Adding Cluster Nodes

G.4.3.3 Define the networks

The SMIT path for configuring each network (ethernet and RS232) is the same:

% smitty hacmp
Cluster Configuration
-> Cluster Topology
-> Configure Networks
-> Add a Network

At this point, you will be offered two options, IP-based Network and Non IP-based Network. Choose IP-based Network to configure your ethernet network.
Figure H-4 Adding an IP-based Network

Choose Non IP-based Network to configure your RS232 network:

Figure H-5 Adding a Non IP-based Network
G.4.3.4 Define the network adapters

When defining network adapters, there are some slight differences between the values supplied for service, boot, standby, and serial adapters:

• Boot, standby, and serial adapters are tied to particular nodes, while service adapters are not.
• Service adapters have associated hardware (MAC) addresses so that clients don't have to flush their ARP caches when the service address moves from one physical adapter to another.
Figure H-6 Adding an Ethernet Boot Adapter

Figure H-7 Adding an Ethernet Standby Adapter
Figure H-8 Adding an Ethernet Service Adapter

Figure H-9 Adding a Serial Adapter
G.4.3.5 Synchronize the Cluster Topology

By this point in the configuration, you should have given HACMP all the information it needs about the topology of your networks. However, only one of the nodes has this configuration information, and it needs to be on both nodes. So, it's time to synchronize the cluster topology. See Section G.10.4.1: Synchronize Topology on page 573 for instructions.
-> Cluster Resources
-> Change/Show Resources/Attributes for a Resource Group
[HPSSResourceGroup]

There are only three fields that need to be filled in on the SMIT screen at this time: Service IP Label, Filesystems, and Volume Groups. Special care should be taken to ensure that all shared file systems and volume groups are listed (it's usually easy if you use the F4 pick lists):

Figure H-11 Configuring a Resource Group
G.4.4 Configure DCE, SFS, and HPSS

G.4.4.1 Start the Cluster

To continue configuring DCE, Encina, and HPSS, it will be necessary to start the HACMP cluster. This will cause one of the nodes to acquire the service address, vary on the shared volume groups, and mount the shared file systems. For information on how to start the cluster, see Section G.10.1: Startup the Cluster on page 570.
HACMP is able to control HPSS by using a set of scripts that are included in the HPSS installation under $HPSS_ROOT/tools/ha (by default, /opt/hpss/tools/ha):

hpss_environment
hpss_start.ksh
hpss_stop.ksh
hpss_sync.ksh
hpss_verify.ksh
hpss_snapshot.ksh
hpss_notify.ksh
hpss_aix_error.ksh
hpss_cluster_notify.ksh

These scripts need to be stored locally on each node's internal disks, not on shared storage.
6. Create /var/hahpss (or the corresponding directory) on node 2.
7. Synchronize the scripts using hpss_sync.ksh. If you are set up properly with your /.rhosts files, run the following from the script directory:

% ./hpss_sync.ksh <address>

where <address> is the standby address of node 2. This will copy the HA HPSS scripts from the current node (node 1) to the specified node (node 2).
4. Remove lines that meet any of the following criteria:
➢ Executable name doesn't begin with hpss
➢ Executable name is hpssd, hpss_ssmds, or hpss_ssmsm
➢ The process is a Non-DCE Client Gateway subprocess, hpss_ndcg_*
➢ The process is a Mover subprocess, hpss_mvr_*
5. From the beginning of each line, replace all the text preceding the executable name with "./".
6. On each line, place double quotes around the server names.
7. At the end of each line, put a " &".
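The editing steps above can be sketched as a small filter. This is illustrative only — not the HPSS-supplied tooling — and it assumes plain "ps -ef" output in which the command path is the eighth field (i.e., a single-token start time) and everything after the path is the server name:

```shell
# make_start_list: read saved "ps -ef" lines on stdin and emit
# hpss_start_list-style entries of the form:  ./<executable> "<server name>" &
make_start_list() {
    awk '{
        n = split($8, parts, "/")   # $8 = command; take its basename
        exe = parts[n]
        if (exe !~ /^hpss/) next                                  # criterion 1
        if (exe == "hpssd" || exe == "hpss_ssmds" || exe == "hpss_ssmsm") next
        if (exe ~ /^hpss_ndcg_/ || exe ~ /^hpss_mvr_/) next       # subprocesses
        name = ""
        for (i = 9; i <= NF; i++)   # remaining fields form the server name
            name = (name == "" ? $i : name " " $i)
        printf "./%s \"%s\" &\n", exe, name
    }'
}

# Illustrative use, with ps output captured while HPSS was running:
#   make_start_list < ps_capture.txt > hpss_start_list
```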
Figure H-12 Adding an Application Server

G.5.2.2 Attach the Application Server to the Resource Group

% smitty hacmp
Cluster Configuration
-> Cluster Resources
-> Change/Show Resources/Attributes for a Resource Group
[HPSSResourceGroup]

There are only four fields that need to be filled in on this SMIT screen: "Service IP Label", "Filesystems", "Volume Groups", and "Application Servers".
Figure H-13 Adding an Application Server to a Resource Group

G.5.2.3 Bring Down the Cluster

Now that the cluster is back in sync and knows how to run the hpss_stop.ksh script, it's time to bring down the cluster. However, HACMP does not yet know how to stop HPSS, SFS, and DCE (it won't know that until the cluster is synchronized in the next step), so you'll need to shut HPSS, SFS, and DCE down manually. After HPSS, SFS, and DCE are down, bring down the cluster.
G.5.2.5 Bring Up the Cluster (just to test that it works)

Now bring the cluster back up using the instructions in Section G.10.1: Startup the Cluster on page 570. When the cluster is active again, HPSS will be fully operational. It will be able to service requests immediately, and all an administrator needs to do is start an SSM session to begin administering the system.
G.5.4 Setup Error Notification

Even though an HA HPSS system is designed to recover from failures, the recovered system is often unable to handle subsequent failures. For this reason, it is important that administrators know immediately when a component fails so that it can be replaced or fixed quickly in order to get the HA HPSS system back to a highly available state. This is why the hpss_notify.ksh script is supplied. If you have customized your hpss_notify.ksh
Figure H-15 Configuring AIX Error Notification

G.5.4.2 HACMP Notify Events

When some failures occur, they generate events in HACMP. These events can be configured to cause a notification to be sent before and after the event occurs, using the hpss_cluster_notify.ksh script. To do this:

% smitty hacmp
Cluster Configuration
-> Cluster Resources
-> Cluster Events
-> Change/Show Cluster Events
[Choose the event]
Figure H-16 Configuring Cluster Event Notification

Fill in the Notify Command field using the following syntax (there are no arguments to pass):

hpss_cluster_notify.ksh

The events that you should consider setting up this way include:

fail_standby
join_standby
network_down
network_up
node_down
node_up
node_up_complete
swap_adapter

It may be necessary to set up these event notifications on each node independently.
The answer to this problem will vary from site to site, but one good way to make it work is to have a set of intermediate scripts between the crontab file and the commands it executes. These scripts could test for the existence of any prerequisite files and/or file systems and only execute the associated command if all the prerequisites are met. This is exactly the strategy we recommend for the crontab entries for the sfs_backup_util in Section G.
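One minimal way to write such an intermediate script is sketched below, under the assumption that the prerequisite is simply a directory, such as a shared file system mount point. The name run_if_ready and the crontab paths are invented for illustration:

```shell
# run_if_ready: execute a command only when its prerequisite directory
# (e.g. a shared file system mount point) is present on this node.
# On the standby node the mount point is absent, so cron stays quiet.
run_if_ready() {
    prereq="$1"; shift
    if [ -d "$prereq" ]; then
        "$@"
    fi
}

# A crontab entry could then read, for example (paths are illustrative):
#   0 2 * * * /var/hahpss/run_if_ready.ksh /var/hpss <backup command>
```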
G.7 Metadata Backup Considerations

Special care should be taken when configuring your HA HPSS system to work with the sfs_backup_util: keep the source, configuration, and backup (LA and TRB) files stored on the shared disks. Otherwise a failover could easily make the backup files, or the sfs_backup_util program itself, unreachable. Also, crontab entries related to SFS backup will require special attention.
After setting up ssh and scp for your cluster, follow these steps to configure HA HPSS to use them:

1. Edit the /var/hahpss/hpss_environment file on either of the cluster nodes, and set the following environment variables:

HPSS_REMOTE_SHELL=ssh
HPSS_REMOTE_COPY=scp

2. Now synchronize this change to the other node:

% ./hpss_sync.ksh
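As an illustration of how such an environment variable can drive the copy step, consider the sketch below. This is not the actual hpss_sync.ksh implementation; the function name and file list are assumptions:

```shell
# sync_files: copy the local HA scripts to <destination>, using whatever
# copy command HPSS_REMOTE_COPY selects (rcp by default, scp when the
# hpss_environment file sets HPSS_REMOTE_COPY=scp as shown above).
sync_files() {
    dest="$1"
    copy_cmd="${HPSS_REMOTE_COPY:-rcp}"
    for f in hpss_*.ksh hpss_environment; do
        if [ -f "$f" ]; then
            "$copy_cmd" "$f" "$dest"
        fi
    done
}

# Illustrative use (a real destination would be "node2-stby:/var/hahpss"):
#   sync_files <destination>
```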
Figure H-18 Starting Cluster Services

2. Alternatively, it is possible to use a slightly different SMIT path to start the Cluster Manager on the local node. Of course, this requires logging into each node independently to activate both Cluster Managers.

% smitty hacmp
Cluster Services
-> Start Cluster Services

Take the defaults and press Enter.

G.10.2 Shutdown the Cluster

The procedure for shutting the cluster down is almost exactly the same as starting the cluster.
% smitty hacmp
Cluster Services
-> Stop Cluster Services

G.10.3 Verify the Cluster

Once the HA HPSS verification method has been defined in HACMP (Section G.5.3: Define HA HPSS Verification Method on page 564), go to SMIT to verify your cluster:

% smitty hacmp
Cluster Configuration
-> Cluster Verification
-> Verify Cluster

Figure H-19 Verifying the Cluster

The output from this will include both HACMP's normal checks and the HA HPSS-specific checks.
G.10.4.1 Synchronize Topology

In order to synchronize topology changes to the cluster, go to SMIT:

% smitty hacmp
Cluster Configuration
-> Cluster Topology
-> Synchronize Cluster Topology

This will take you to the "Synchronize Cluster Topology" SMIT window. Accept the defaults by pressing Enter, and your topology should synchronize successfully.
hpss_start_list
hpss_stop.ksh
hpss_sync.ksh
hpss_verify.ksh
Update complete

G.10.5 Move a Resource Group

It is often useful to have HACMP move a resource group to another node in the cluster. This will result in a short period of downtime as HA HPSS is shut down on the active node and brought up on the standby node, but it is a convenient way to free up a node for maintenance.