Managing Systems and Workgroups A Guide for HP-UX System Administrators Edition 9 HP Servers and Workstations Manufacturing Part Number : B2355-90950 E0306 Printed in the USA March 2006 © Copyright 1997-2006 Hewlett-Packard Development Company, L.P.
Legal Notices Proprietary computer software. Valid license from HP required for possession, use or copying. Consistent with FAR 12.211 and 12.212, Commercial Computer Software, Computer Software Documentation, and Technical Data for Commercial Items are licensed to the U.S. Government under vendor’s standard commercial license. Warranty The information contained herein is subject to change without notice.
Publication History The manual publication date and part number indicate its current edition. The publication date will change when a new edition is released. To ensure that you receive the new editions, you should subscribe to the appropriate product support service. See your HP sales representative for details. First Edition October 1997, B2355-90157, HP-UX 9.0 through 11.0 Printed, CD-ROM (Instant Information), and Web (http://www.docs.hp.com) Second Edition May 1998, B2355-90664, HP-UX 9.
Eighth Edition September 2005, B2355-90912, HP-UX 10.0 through 11i v2 (B.11.23) Web (http://www.docs.hp.
Conventions We use the following typographical conventions. audit (5) An HP-UX manpage. audit is the name and 5 is the section in the HP-UX Reference. On the web and on the Instant Information CD, it may be a hot link to the manpage itself. From the HP-UX command line, you can enter “man audit” or “man 5 audit” to view the manpage. See man (1). Book Title The title of a book. On the web and on the Instant Information CD, it may be a hot link to the book itself. KeyCap The name of a keyboard key.
Contents
HP-UX 11i Release Names and Release Identifiers
Changes in System Management Tools at HP-UX 11i Version 2
SAM X-Window-Based Interface
SCM Web-Based Interface
SCR and DMI Replaced by New SIM Tool at 11i v2
Servers for Specific Purposes
File Server
Application Server
A Sample Workgroup / Network
Guidelines
Should You Share Users’ Home and Mail Directories?
Planning your Printer Configuration
LP Spooler
Overview of the LP Spooler
Possible Problems Exchanging Data Between HP-UX and PCs
ASCII End-of-Line Problems
The Endian Difference Problem
What is Endian?
Internet Protocols and IPv6
Security Notes
Disabling Use of cfengine
Logging Options
cfengine Troubleshooting
Introduction to syslog
Adding a User to a System
Automating the Process of Adding a User
Controlling File Access
Defining Group Membership
Setting File Access Permissions
Using “Site Hiding”
Configuring a System to Receive Electronic Mail
Central Mail Hub Topography (Receiving E-mail)
Configuring the hub
Configuring the Clients
Common Behavior for Kernel Configuration Commands
Common Command Line Options
Common Output Formats
Common Exit Status Codes
Common Security Constraints
Making Configuration Changes with System Files
Uses for System Files
Managing Device Bindings
Primary Swap Device
Dump Devices
Third-Party Products
Installation
Exporting a File System from an HP-UX Server
Troubleshooting NFS
Requisite Entries
Removing a Printer from the LP Spooler
Removing a Printer from a Printer Class
Removing a Printer Class
Configuring Printers to Use HPDPS
Implementing HPDPS
Enabling / Disabling Autoboot
Booting from an Alternate Boot Source
Booting from an Alternate Boot Device
Booting from an Alternate Kernel File
Changing the PRI, HAA, and ALT Boot Paths
Overview of the Shutdown Process
Ready . . . Set . . . Go!
Types of Shutdown
Normal (Planned) Shutdown
Power Failure
Kernel Dump Device Definitions
Run Time Dump Device Definitions
Dump Order
What Happens When the System Crashes
Systems Running HP-UX Releases Prior to Release 11.0
Managing Logical Volumes Using HP-UX Commands
Example: Creating a Logical Volume Using HP-UX Commands
Tasks That You Can Perform Only with HP-UX Commands
Extending a Logical Volume to a Specific Disk
Creating Root Volume Group and Root and Boot Logical Volumes
Backing Up and Restoring Volume Group Configuration
Copying a File System Across Devices
Dealing with File System Corruption
Diagnosing a Corrupt File System
Locating and Correcting Corruption Using fsck
Checking an HFS File System
Resizing a JFS File System
To Resize a JFS File System using fsadm
To Resize a Basic JFS File System
Examples and Cookbook Procedures
Managing Large Files
Configuring Primary and Secondary Swap
Configuring Dump
How Much Disk Space Should Be Used for Dump?
Configuring Dump Areas Using HP-UX Commands
Backing Up Data
Restoring Your Data
Determining What Data to Restore
Before Restoring Your Data
Restoring Your Data Using SAM
Restoring Your Data Using HP-UX Commands
Installing Extension Software
Removing Patches
Managing System Performance
Performance Bottlenecks
Guidelines
The /etc/passwd File
Eliminating Pseudo-Accounts and Protecting Key Subsystems
System Access by Modem
Protecting Programs from Illegal Execution
Managing Access to Files and Directories
Guidelines for System Initialization
Guidelines for Trusted Backup and Recovery
Guidelines for Mounting and Unmounting a File System
Guidelines for Handling Security Breaches
Tracking Root
Manipulating the Trusted System Databases
Configuring NFS Diskless Clusters for Trusted Systems
Choice 1: Clusters with Private Password Databases
Converting a Nontrusted Cluster to a Trusted Cluster
Converting a Trusted Standalone System to Trusted Cluster
Using SAM with PAM
System-Wide Configuration
Per-User Configuration
The pam.conf Configuration File
The pam_user.conf Configuration File
Finding Large Files
Examining File System Characteristics
Moving a Directory (within a File System)
Moving a System
Popping the Directory Stack
Configuring a Relay Agent
Setting Up the Cluster Server
A Preview of What You Will Need to Do
Help Information for NFS Diskless Clusters
Setting the Policies for a Cluster
What is AutoRAID?
Pros and Cons of AutoRAID
Recommended Uses of AutoRAID
HP SureStore E Disk Array
Using Hot Spared Disks
Preface HP-UX 11i Release Names and Release Identifiers With HP-UX 11i, HP delivers a highly available, secure, and manageable operating system that meets the demands of end-to-end Internet-critical computing. HP-UX 11i supports enterprise, mission-critical, and technical computing environments. HP-UX 11i is available on both PA-RISC systems and Intel Itanium-based systems. Each HP-UX 11i release has an associated release name and release identifier.
Changes in System Management Tools at HP-UX 11i Version 2 SAM X-Window-Based Interface For HP-UX 11i Version 2, some portions within the X-Window-based System Administration Manager (SAM) interface have been replaced by their equivalent, web-based SCM (Servicecontrol Manager) interface. Sections of this document that show details of the SAM interface have not yet been fully updated to reflect this change. However, for help when using the SCM tool, you can select the online help within SCM.
Additionally, SCM uses the WBEM (Web-Based Enterprise Management) protocol, a replacement for SNMP (Simple Network Management Protocol). As its name implies, SNMP cannot handle complex data and has specific security issues; WBEM resolves these issues. SCM also offers a command-line interface. For detailed information on SCM, including the WBEM protocol, please see the online help within SCM and the HP Servicecontrol Manager User’s Guide, available at http://docs.hp.com.
These utilities help simplify the management of groups of systems and of Serviceguard clusters. Configuration synchronization provides policy-based configuration management for groups of systems and for Serviceguard clusters. With configuration synchronization, you specify a specific server as your configuration master; all your other systems are defined as clients. The configuration master retains copies of all files that you want synchronized across your clients.
Finding HP-UX Information The following table outlines where to find basic system administration information for HP-UX. This table does not include information for specific products. Table 2 Finding HP-UX Information and Documents If you need to. . . Go to . . . Located at . . .
What’s in This Document This document: • Supports HP-UX 11i and 11.x, including 64-bit functionality, as well as HP-UX 10.x. • Covers administration of interdependent workgroups, as well as single systems. It includes the following major topics: • Chapter 1, “Systems and Workgroups,” on page 43 Definition of terms and categories. • Chapter 2, “Planning a Workgroup,” on page 55 Choosing among alternative models for distributing applications, data and other computing resources.
Information on managing the security for an individual workstation or server. • Chapter 9, “Administering a Workgroup,” on page 857 Maintenance involving more than one system; links to useful procedures throughout the document. See: — “How To:” on page 878 — “Troubleshooting” on page 890 • Chapter 10, “Setting Up and Administering an HP-UX NFS Diskless Cluster,” on page 897 Information on NFS Diskless (HP-UX 10.20 only).
1 Systems and Workgroups This document is for administrators of HP-UX systems and workgroups. The introductory topics that follow should help you understand the terms and categories we’ll be using.
Systems and Workgroups Workgroup Focus Workgroup Focus Most system administration manuals, including the HP-UX System Administration Tasks manual in past releases, focus on single-system tasks, telling you how to configure and maintain individual systems. This is essential information, but it is not enough.
Systems and Workgroups How We Are Using the Terms “System” and “Workgroup” How We Are Using the Terms “System” and “Workgroup” System In this document, we use the term system to mean one HP-UX system, a single “box”. A system so defined always has its own CPU (for example, we do not refer to XTerminals as systems) but may or may not have its own root file system. See “Types of System” on page 46 for more information.
Systems and Workgroups Types of System Types of System Single-User versus Multiuser For the purposes of this document, we’ll be distinguishing between two ways for people to use a given system: • as a single-user workstation, usually on someone’s desk and used mainly or exclusively by that person; • as a multiuser system, often kept in a computer room, with which individual users communicate by means of a terminal, or terminal-emulator on a desktop system connected by a LAN or modem.
Systems and Workgroups Types of System Partitioned Systems (The Partitioning Continuum) HP-UX 11i provides many ways to isolate or combine system resources (for example CPUs, memory, and I/O cards). HP refers to the collection of system administration solutions that provides these capabilities as the Partitioning Continuum.
Systems and Workgroups Types of System is capable of supporting its own operating system. The term nPartitions derives from algebra, where the “n” refers to a variable number, indicating that you can group (and regroup) the cell boards in your system in different ways to create varying numbers and sizes of partitions (to best suit your needs).
Systems and Workgroups Types of System PRM Process Resource Manager is a resource management tool used to control the amount of resources that processes use during peak system load (at 100% CPU, 100% Memory, or 100% disk bandwidth utilization). PRM can guarantee a minimum allocation of system resources available to a group of processes through the use of PRM groups.
Systems and Workgroups Types of System Table 1-2 Vpars Manpages: vparboot (1M), vparcreate (1M), vparmodify (1M), vparremove (1M), vparreset (1M), vparstatus (1M), vparutil (1M), vparresources (5), vpartition (5). Not all HP-UX-based machines support virtual partitions. For detailed information on which machines and HP-UX releases support vPars, see Installing and Managing HP-UX Virtual Partitions (vPars). WLM WLM expands on the features of PRM by providing a more dynamic way to allocate resources.
Systems and Workgroups Types of System Operating Systems This document is for administrators of HP-UX systems, and the workgroups we describe are predominantly made up of such systems, with some PCs running Microsoft Windows or Linux operating systems.
Systems and Workgroups Types of Workgroup Types of Workgroup For the purposes of this document, a workgroup is a group of interdependent, predominantly HP-UX systems; it may also include some Windows NT systems. The HP-UX systems may or may not have their own root file systems. See “NFS Diskless” on page 52, “Multiuser” on page 52 and “Client-Server” on page 53. NFS Diskless Refers to workgroups, or portions of workgroups, that get the root of their HP-UX file system from a remote server.
Systems and Workgroups Types of Workgroup • “Configuring a System” on page 137 • “Administering a System: Managing Disks and Files” on page 555 • “Administering a System: Managing Printers, Software, and Performance” on page 701 Client-Server For more information see: • “Client-Server Model” on page 59 • “Configuring a Workgroup” on page 383 • “Administering a Workgroup” on page 857
Planning a Workgroup 2 Planning a Workgroup The topics that follow are primarily intended to help someone who is about to set up a workgroup from scratch, but you may also find them useful if you’re reconfiguring or expanding the workgroup. If you need to know what we mean by workgroup, see “How We Are Using the Terms “System” and “Workgroup”” on page 45.
Planning a Workgroup Choosing a File-Sharing Model Choosing a File-Sharing Model If you are about to set up a new workgroup, or make large changes to an existing one, you must first decide how you will distribute the computing resources among the users. The biggest of these decisions concerns how users will share files and applications.
Planning a Workgroup Choosing a File-Sharing Model • Security: — Easy to protect physically (e.g., in a locked computer room). — Allows you to keep sensitive data (or all data) off the desktop. Disadvantages • Large system required, possibly with multiple processors: — Special power and climate requirements. • Fragile: — If system crashes, or is down for maintenance, no one works. — Failure of any component likely to affect everyone. • Inflexible:
Planning a Workgroup Choosing a File-Sharing Model CAUTION NFS Diskless is a good choice for workgroups, or portions of workgroups, running 10.0 through 10.20, but it is not supported on later releases. Advantages
Planning a Workgroup Choosing a File-Sharing Model Client-Server Model Client-server is an umbrella term we are using to refer to workgroups that share resources other than the root file system; that is, the workstations run HP-UX from their own local disks, but depend on an NFS server for non-“system” files and applications, and may also have common arrangements for printing, backups and user-access.
Planning a Workgroup Choosing a File-Sharing Model — NFS mounts can create complex cross-dependencies between systems; these can become hard to keep track of and pose problems during boot and shutdown. • Performance: — Heavily dependent on LAN and subnet performance. — Running applications locally may alleviate LAN bottlenecks, but at the cost of losing the computing power of a large server.
Planning a Workgroup Distributing Applications and Data Distributing Applications and Data The topics that follow are intended to help you plan the overall configuration of the workgroup, in terms of what pieces of the workflow reside and run on what systems. This section will make better sense if you have already read “Choosing a File-Sharing Model” on page 56; you will notice that the discussion is biased towards the “Client-Server Model” on page 59.
Planning a Workgroup Distributing Applications and Data export a given application from a single subdirectory under /opt, rather than having to export several subdirectories for each application, or even the whole of /usr/local. What To Distribute; What To Keep Local Theory The V.4 file-sharing paradigm divides HP-UX directories into two categories: private and shared (sometimes also referred to as dynamic and static).
Planning a Workgroup Distributing Applications and Data HP-UX release. HP recommends you implement such tightly coupled configurations only under NFS Diskless (currently restricted to 10.x systems).
Planning a Workgroup Distributing Applications and Data For the greatest ease of management (backups and software maintenance) you should: • keep data in one central place where it can be easily backed up • maintain only one version and one copy of each application • if possible, concentrate applications on a single, powerful server Aim for the simplest configuration that is consistent with acceptable performance.
Planning a Workgroup Distributing Applications and Data File Server Users normally do not log in to a file server; they get the data they need from it by means of NFS mounts. The main requirements for a file server are: • plenty of disk space Disk striping, which allows I/O to multiple spindles concurrently, may improve throughput. • plenty of RAM • fast I/O interfaces such as Fast-Wide SCSI.
Planning a Workgroup Distributing Applications and Data • In addition, a powerful processor, and possibly multiple processors, so that it can run large applications, and many applications concurrently. For reasons of application compatibility, an application server may also need more frequent operating-system updates than a file server.
Planning a Workgroup A Sample Workgroup / Network A Sample Workgroup / Network To provide consistency among the case studies and examples throughout Managing Systems and Workgroups: A Guide for HP-UX System Administrators (MSW), we have developed a sample workgroup/network to demonstrate a variety of situations and tasks. While it is impossible to account for every possible combination of equipment and network topography, we have tried to account for many common configurations.
Planning a Workgroup A Sample Workgroup / Network Figure 2-1 Managing Systems and Workgroups Example Network Diagram
Planning a Workgroup A Sample Workgroup / Network The MSW Network (System by System) The MSW network includes a variety of system types: server systems, workstations, personal computers, and thin clients. There are also several network-based printers. For details on the specific systems listed in the preceding table, review the following descriptions until you find the system that interests you.
Planning a Workgroup A Sample Workgroup / Network In addition to its use as a file server, it is also the gateway computer between the two subnets net1 and net2. It has two network cards, one connecting to net1 via thin-lan coaxial cable, and one connecting to net2 via a 10-BaseT network hub. flserver also has a printer directly connected to it.
appserver:
System Name: appserver.net2.corporate
System Type: HP 9000 Enterprise Server Superdome 32-way (single partition)
Network (IP) Address: 15.
Planning a Workgroup A Sample Workgroup / Network flserver.net2) and two IP addresses (one for each network interface card). Workstations There are four workstations in the MSW example network, one on the net1 subnet, the others on net2. Each is a different model, and they run various versions of HP-UX to reflect many installations in the real world where not every computer is running the same HP-UX release. wsj6700: This is the workstation connected to the net1 subnet.
Planning a Workgroup A Sample Workgroup / Network
Features: Computer in the workgroup running an older version of HP-UX operating system.
wszx6:
System Name: wszx6.net2.corporate
System Type: HP Integrity Model zx6000
Network (IP) Address: 15.nn.xx.103
Operating System: HP-UX 11i Version 2
Physical Memory: 6 GB
Disk Space: 128 GB
Features: Software development workstation
wsb2600:
System Name: wsb2600.net2.corporate
System Type: HP 9000 Model b2600
Network (IP) Address: 15.nn.xx.
Planning a Workgroup A Sample Workgroup / Network Personal Computers (PCs) The MSW example network includes two PCs, each running the “Microsoft Windows” operating systems. pc735n: This HP Pavilion 735n desktop PC is located on the net1 subnet. pcs3300nx: This Compaq Presario s3300nx desktop PC is located on the net2 subnet.
pc735n:
System Name: pc735n.net1.corporate
System Type: HP Pavilion 735n desktop PC
Network (IP) Address: 15.nn.yy.
Planning a Workgroup A Sample Workgroup / Network
Network (IP) Address: 15.nn.xx.2
Operating System: Microsoft Windows XP
Physical Memory: 1 GB
Disk Space: 120 GB
Features:
Thin Clients The MSW example network also includes two thin client computers. These devices have no disks of their own and are highly dependent on other computers in the network. thin20: A Compaq EVO T20 Thin Client. thin30: A Compaq EVO T30 Thin Client.
thin20:
System Name: thin20.net2.
Planning a Workgroup A Sample Workgroup / Network
Operating System: Microsoft Windows NT Embedded
Physical Memory: 8 MB
Disk Space:
Features:
thin30:
System Name: thin30.net2.corporate
System Type: Compaq EVO T30 Thin Client
Network (IP) Address: 15.nn.xx.151
Operating System: Microsoft Windows XP Embedded
Physical Memory: 16 MB
Disk Space:
Features:
Network Printers The MSW network also contains several network printers, one on each subnet.
Planning a Workgroup Setting Disk-Management Strategy Setting Disk-Management Strategy This section covers: • “Distributing Disks” on page 76 Which systems should you attach the workgroup’s disks to? • “Capacity Planning” on page 77 How much disk space do you need? • “Disk-Management Tools” on page 79 LVM, mirroring, striping - what are they and what are they for? Distributing Disks Read these guidelines in conjunction with “Distributing Applications and Data” on page 61.
Planning a Workgroup Setting Disk-Management Strategy Capacity Planning As with memory, the simple answer to the question, “How much disk capacity should you buy?” is “As much as you can afford.” You can almost guarantee that however much capacity you buy now, your users and their applications will find a way to exhaust it within a year. All the same, you need to plan.
Planning a Workgroup Setting Disk-Management Strategy “Managing Swap and Dump” on page 662 provides some guidelines for estimating swap needs, but there is often no substitute for running the applications and seeing what happens. Example Here’s what we did to figure out how much swap would be used by the tools used to develop this document. We booted a workstation (an HP9000 715 running HP-UX 10.
Planning a Workgroup Setting Disk-Management Strategy We repeated the experiment on another, much smaller system (32 MB RAM) and got similar results, drawing the conclusion that a workstation running these applications locally would need to have about 30 MB of swap available, for a minimum of 70 MB configured swap.
Planning a Workgroup Setting Disk-Management Strategy LVM divides up the disk in much the same way as the “hard partitions” implemented under earlier versions of HP-UX for systems, but logical volumes are very much easier to reconfigure than partitions, and they can span two or more disks. These two attributes make LVM a much more powerful and flexible tool than hard partitions.
Planning a Workgroup Setting Disk-Management Strategy “Whole Disk” The alternative to LVM is “whole-disk” management, which as the name implies treats the disk as a single unit. Should You Use a Logical Volume Manager or “Whole Disk”? Advantages of a logical volume manager: • Logical volumes can span multiple disks: — File systems (and individual files) can be larger than a single physical disk. — A logical volume can be as small or large as the file system mounted to it requires.
Planning a Workgroup Setting Disk-Management Strategy Disk Mirroring Disk mirroring is available only under LVM. See “Logical Volume Manager (LVM)” on page 79. Disk mirroring allows you to keep a live copy of any logical volume; the data in that volume is in effect being continuously backed up. Strict mirroring ensures that the mirror copy is on a separate disk (in the same volume group).
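With LVM, a mirror copy is added to an existing logical volume with the lvextend command. The following is a minimal sketch only; it assumes the separately purchased MirrorDisk/UX product is installed, and the volume and disk names are hypothetical:
  # add one mirror copy of lvol5, placing it on the disk at c1t2d0
  lvextend -m 1 /dev/vg00/lvol5 /dev/dsk/c1t2d0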
Planning a Workgroup Planning to Manage File Systems Planning to Manage File Systems This section addresses questions you might have when planning to administer file systems.
Planning a Workgroup Planning to Manage File Systems There are a variety of reasons why you might create a new piece of the overall file system, including: • You have just added a new non-LVM disk or logical volume. • You are concerned about the possibility of running out of disk space for your users’ files (or you actually have run out of disk space).
Planning a Workgroup Planning to Manage File Systems
a. Using JFS (default version is 3.3)
b. Using JFS (default version is 3.5); LVM’s limitation is 2TB
c. On a Superdome using 512MB DIMMS
d. On a Superdome using 1GB DIMMS
e. HP-UX supports 1TB; memory capacities vary by machine type
Determining What Type of File System to Use As of HP-UX 11.0, the Journaled File System (JFS) is installed as the default for root and other HP-UX file systems.
Planning a Workgroup Planning to Manage File Systems It is permissible to have a mixture of JFS and other file systems on a single computer system. NOTE Access Control Lists are supported in JFS beginning with JFS 3.3, which is included with HP-UX 11i. You can obtain JFS for HP-UX 11.00 from the HP Software Depot, http://software.hp.com. To see if JFS is installed on an HP-UX 11.00 system, run
  swlist -l fileset JFS
If JFS is installed, the output will include a list of JFS filesets.
Planning a Workgroup Planning to Manage File Systems Basic JFS functionality is included with the HP-UX operating system software. With the installation of a separately orderable product called HP OnLineJFS, JFS also provides online administrative operations, including backup, resizing, and defragmentation. The advantages of JFS are well worth the small amount of learning required to use it.
Planning a Workgroup Planning to Manage File Systems • As of 10.20, HP-UX allowed JFS as a local root file system within a logical volume, although not on a non-partitioned, whole disk. The 10.20 implementation of JFS is VERITAS Version 3, which supports file sizes greater than 2 GB as well as large user identification numbers (UIDs). See vxupgrade (1M) for information to convert a Version 2 file system to Version 3.
Planning a Workgroup Planning to Manage File Systems The optional HP OnLineJFS product eases system maintenance by allowing you to perform tasks such as file-system backup and enlarging or reducing a file system without unmounting it. These capabilities are not available on HFS.
Planning a Workgroup Planning to Manage File Systems JFS allocates space to files in the form of extents, adjacent disk blocks that are treated as a unit. Extents can vary in size from a single block to many megabytes. Organizing file data this way allows JFS to issue large I/O requests, which is more efficient than reading or writing a single block at a time. JFS groups structural changes into transactions, and records these in an intent log on the disk before any changes are actually made.
Planning a Workgroup Planning to Manage File Systems complete) when the system call that initiated it returns to the application; exceptions, however, are found in the JFS mount options that delay transaction logging. However, even if transaction logging is delayed, transactions remain atomic and the file system will still not be left in an intermediate state. Is user data part of a transaction? User data is not usually treated as part of a transaction.
Planning a Workgroup Planning to Manage File Systems NOTE JFS extents are unrelated to LVM physical or logical extents. LVM physical extents are also contiguous blocks of the physical volume (disk), 4MB in size by default, but unlike JFS extents their size is fixed. For information about LVM extents, see “How LVM Works” on page 559.
Planning a Workgroup Planning to Manage File Systems Each JFS file system has its own intent log. Space is reserved for the intent log when the file system is created; its size cannot be changed later. The intent log is not a user-visible file, although you can use the fsdb tool to dump it. Normally, user data is not treated as part of a transaction.
Planning a Workgroup Planning to Manage File Systems No. If the intent log fills up, there is no perceivable impact on users. Blocking on I/O might occur, but this occurs in many situations unrelated to the intent log, and will have no perceivable impact. No errors occur if the intent log fills up. How can I know the size of the intent log? You can use fsdb to view the size of the intent log.
Planning a Workgroup Planning to Manage File Systems
delaylog: Delayed logging. Some system calls return before the intent log is written. This enhances the performance of the system, but some changes are not guaranteed until a short time later when the intent log is written. This mode approximates traditional UNIX guarantees for correctness in case of system failure.
tmplog: Temporary logging. The intent log is almost always delayed.
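A file system is mounted with one of these logging levels through the mount command; this is a minimal sketch, and the volume and mount-point names are hypothetical:
  # mount a JFS (VxFS) file system with delayed intent logging
  mount -F vxfs -o delaylog /dev/vg01/lvol3 /projects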
Planning a Workgroup Planning to Manage File Systems Additionally, the system administrator can control the way writes are handled, with and without O_SYNC. • the mincache mount option determines how ordinary writes are treated.
Planning a Workgroup Planning to Manage File Systems • treats all writes as delayed (even if application explicitly requested synchronous I/O) • log replay not possible — file system might need to be rebuilt after crash mount -o nolog,convosync=delay is useful only for temporary file systems. The convosync=delay option causes JFS to change all O_SYNC writes into delayed writes, canceling any data integrity guarantees normally provided by opening a file with O_SYNC.
Planning a Workgroup Planning to Manage File Systems • The device containing a snapshot only holds blocks that have changed on the primary file system since the snapshot was created. • The remaining blocks, which have not changed, can be found on the device containing the primary file system. Thus, there is no need for a copy. All this is done transparently within the kernel. How does one work with snapshots? A JFS snapshot can be used to perform an online backup of a file system.
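With HP OnLineJFS, a snapshot is created by mounting a spare volume as a snapshot of the primary file system. A minimal sketch (the volume and mount-point names are hypothetical):
  # mount /dev/vg00/lvol6 as a snapshot of the file system on /dev/vg00/lvol4
  mount -F vxfs -o snapof=/dev/vg00/lvol4 /dev/vg00/lvol6 /snapmount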
Planning a Workgroup Planning to Manage File Systems Typically, the system administrator will create a new snapshot after correcting the problem (for example, by using a larger snapshot device, or by choosing a time when the primary file system is less volatile). How does an OnLineJFS backup differ from a standard backup? An OnLineJFS backup involves using a snapshot of the file system, rather than the file system itself.
Planning a Workgroup Planning to Manage File Systems The fscat utility provides an interface to a JFS snapshot file system, similar to that provided by the dd utility invoked on the special file of other JFS file systems. On most JFS file systems, the block or character special file for the file system provides access to a raw image of the file system for such purposes as backing up the file system to tape.
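A typical use is to stream the snapshot’s raw image to tape with fscat; the device names in this sketch are hypothetical:
  # back up the snapshot on /dev/vg00/lvol6 to tape
  fscat /dev/vg00/lvol6 > /dev/rmt/0m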
Planning a Workgroup Planning to Manage File Systems In general, a JFS file system has better performance than an HFS file system, due to its use of big extents, optimized file-system space usage, large read-ahead, and contiguous files. However, the natural result of file-system use is the fragmentation of its blocks. HP OnLineJFS has an efficient means of defragmenting file system space, to restore file-system performance.
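With HP OnLineJFS, defragmentation is performed online with the fsadm command; in this sketch the mount point is hypothetical:
  # report extent fragmentation, then reorganize extents on a mounted JFS file system
  fsadm -F vxfs -E /projects
  fsadm -F vxfs -e /projects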
Planning a Workgroup Managing Users Across Multiple Systems Managing Users Across Multiple Systems If your users regularly log in to more than one system, you need to think about both security and logistics. The following guidelines may be helpful. Guidelines • Maintain unique, “global” user IDs across systems. You need to ensure that each login name has a unique user-ID number (uid) across all the systems on which the user logs in; otherwise one user may be able to read another user’s private files.
Planning a Workgroup Managing Users Across Multiple Systems Some sites have an automated service that assigns uids that are unique site-wide. If your site offers such a service, use it; otherwise, you will have to devise your own method of checking that the uid you assign each new login is unique across all the systems the user will have access to. • Distributing mail directories from a central point allows you to set up a mail hub for the group, simplifying mail maintenance. This is often a good idea.
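A simple way to check for duplicate uids on a given system is to filter its /etc/passwd; a sketch such as the following can be run on each system the user will have access to:
  # print any uid that appears more than once in /etc/passwd
  cut -d: -f3 /etc/passwd | sort -n | uniq -d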
Planning a Workgroup Managing Users Across Multiple Systems • mail configuration and maintenance It often makes sense to configure one system in the workgroup as the group’s mail hub, and in this case some users may want to import /var/mail so they can run their mailer on their local system rather than logging in to the mail server.
Planning a Workgroup Planning your Printer Configuration Planning your Printer Configuration This section contains conceptual information on two approaches to managing printers: • LP Spooler, the traditional UNIX vehicle for print management (see “LP Spooler” on page 105). • HP Distributed Print Service (HPDPS), functionality that allows for centralized administration of dispersed print resources (see “HP Distributed Print Service (HPDPS)” on page 113).
Planning a Workgroup Planning your Printer Configuration Overview of the LP Spooler The Line Printer Spooling System (LP spooler) is a set of programs, shell scripts, and directories that control your printers and the flow of data going to them. NOTE Use the LP spooler if your system has more than one user at any given time. Otherwise, listings sent to the printer while another listing is printing will be intermixed, thus scrambling both listings.
Planning a Workgroup Planning your Printer Configuration If one printer’s “drain gets clogged”, you can reroute a print request from that printer to another by using the lpmove command. Unwanted data can be “flushed” from the spooling system with the cancel command.
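As a sketch (the printer names and request ID are hypothetical, and on HP-UX the LP scheduler must be stopped before requests are moved; see lpmove (1M)):
  lpshut                 # stop the LP scheduler
  lpmove laser1 laser2   # move pending requests from laser1 to laser2
  lpsched                # restart the scheduler
  cancel laser2-101      # flush an unwanted request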
Planning a Workgroup Planning your Printer Configuration Remote Spooling You can also send print requests to a printer configured on a remote system, using remote spooling. When you use remote spooling, a shell script (“pump”) sends data to a remote system via the rlp command. A remote spooling program called rlpdaemon, running on the remote system, receives data and directs it into the remote system’s LP spooler. The rlpdaemon also runs on your local system to receive requests from remote systems.
Planning a Workgroup Planning your Printer Configuration Printer Model Files Printer model files are required in the following procedures: • “Adding a Local Printer to the LP Spooler” on page 434 • “Adding a Remote Printer to the LP Spooler” on page 436 When you configure your printer into the LP spooler, you must identify the printer interface script to be used. The /usr/lib/lp/model directory lists printer interface scripts from which to choose.
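To see which interface scripts are available on a system, list that directory:
  ls /usr/lib/lp/model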
Planning a Workgroup Planning your Printer Configuration Table 2-5 Model Files and Corresponding Printers and Plotters (model file: intended purpose)
PCL2: PCL level 2 model interface; identical files: hp2300-1100L, hp2300-840L, hp2560, hp2563a, hp2564b, hp2565a, hp2566b, hp2567b
PCL3: PCL level 3 model interface; identical files: deskjet, deskjet500, deskjet500C, deskjet550C, deskjet850C, deskjet855C, hp2235a, hp2276a, hp2932a, hp2934a, ruggedwriter
PCL4: PCL level 4 model interface; identical files
Planning a Workgroup Planning your Printer Configuration Printer Types A local printer is physically connected to your system. To configure a local printer, see “Adding a Local Printer to the LP Spooler” on page 434. A remote printer may be physically connected or simply configured to a computer and accessed over a network via rlp (1M). To access the remote printer, your system sends requests through the local area network (LAN) to the other system.
Planning a Workgroup Planning your Printer Configuration To use a printer class, you direct print requests to it, rather than to a specific printer. The print request is spooled to a single print queue and printed by the first available printer in the class. Thus, printer usage can be balanced and reliance on a particular printer can be minimized. To create a printer class, see “Creating a Printer Class” on page 439.
Planning a Workgroup Planning your Printer Configuration Similarly, a priority fence value can be assigned to each printer to set the minimum priority that a print request must have to print on that printer. A printer’s fence priority is used to determine which print requests get printed; only requests with priorities equal to or greater than the printer’s fence priority get printed. See lpadmin (1M) and lpfence (1M) for details.
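A sketch of setting a fence (the printer name and value are hypothetical; as with other spooler configuration changes, stop the scheduler first):
  lpshut            # stop the LP scheduler
  lpfence laser1 4  # only requests with priority 4 or higher print on laser1
  lpsched           # restart the scheduler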
Planning a Workgroup Planning your Printer Configuration • “Why use HPDPS?” on page 115 • “Planning to Implement HPDPS” on page 116 • “Familiarize yourself with the HPDPS Objects” on page 117 • “Sample HPDPS Basic Environment” on page 119 • “Sample HPDPS Extended Environment” on page 120 • “Determining Filesets to Install and Where to Install Them” on page 121 • “Plan your HPDPS Logical and Physical Configurations” on page 117 • “Design Your Physical Configuration” on page 118 • “Familiariz
Planning a Workgroup Planning your Printer Configuration To use the full capabilities of HPDPS requires using the HP9000 Distributed Computing Environment (DCE), a separately purchased product. If your host system is configured as a DCE cell, you can implement the HPDPS Extended Environment, which features a multiplatform client/server infrastructure, single-point administration, client authentication, and object authorization. HPDPS can also be configured without DCE.
Planning a Workgroup Planning your Printer Configuration HPDPS HP-UX client in the DCE cell. You can configure and monitor printers, servers, and queues. You can set defaults for jobs users send to HPDPS-managed printers. • Configure your printing resources to balance workloads effectively. — Give users with common job requirements access to the printers that support their jobs. — Distribute printer workloads, by routing jobs to any of several printers capable of printing the jobs.
Planning a Workgroup Planning your Printer Configuration Table 2-6 Disk Requirements for Installation of HPDPS (components: disk space required)
All (client, supervisor, and spooler): 17MB
Client only: 9MB
Client and spooler: 13MB
Client and supervisor: 13MB
Servers (spooler and supervisor): 13MB
Spooler only: 12MB
Supervisor only: 12MB
Further tables and formulas for calculating memory and disk-space requirements are provided in Chapter 2, “Installing HPDPS,” of the HP Distributed Print Service Administration Guide.
Planning a Workgroup Planning your Printer Configuration Consider your Users To figure out how you want your HPDPS system to manage the printers, ask yourself about the needs of your user population: • What patterns do you observe among your users in the way they access the printers? Do they print continually throughout the day or in spurts? Are they printing from forms or onto letterhead? Is much time expended waiting for printouts at certain times of day or from certain printers but not others? • Can
Planning a Workgroup Planning your Printer Configuration For example, you can configure a Basic Environment, which will have all objects installed on a single host system. You will need to configure one client, one spooler, and one supervisor. Figure 2-3 Sample HPDPS Basic Environment In Figure 2-3 on page 119, fancy is a single host system, on which are installed the HPDPS client, spooler, and supervisor. Attached to fancy is one locally configured printer.
Planning a Workgroup Planning your Printer Configuration A sample HPDPS configuration with an Extended Environment might have one or more clients, one or more spoolers, and one or more supervisors, distributed among several host systems. Figure 2-4 Sample HPDPS Extended Environment In Figure 2-4 on page 120, fancy, tango, and kenya are host computer systems, on which are configured HPDPS objects that are distributed in an Extended Environment.
Planning a Workgroup Planning your Printer Configuration Determining Filesets to Install and Where to Install Them HPDPS software is bundled under the CDE Run-Time Environment (or under Instant Ignition under the Run-Time Environment) in the product DistributedPrint. You can install the entire product or selected filesets, depending on the role your system plays in the distributed print environment. These are the filesets: PD-CLIENT Mandatory. Select this fileset to use the HPDPS commands.
Planning a Workgroup Planning your Printer Configuration Table 2-7 Values stored in the /etc/rc.config.d/pd file (Continued)
PDPRNPATH: Defines the paths where HPDPS finds printer model files. (For information on the contents of a model file directory, see the HP Distributed Print Service Administration Guide.)
PD_CLIENT: Specifies whether the host system starts a client daemon. Set by default to PD_CLIENT=0, meaning the host does not start a client.
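As an illustrative sketch, a host that should start the HPDPS client daemon at boot would carry a line such as this in /etc/rc.config.d/pd:
  # start the HPDPS client daemon on this host (0 means do not start)
  PD_CLIENT=1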
Planning a Workgroup Planning your Printer Configuration • To implement HPDPS Basic Environment, load the 10.x default DCE core services bundled with HP-UX for distributed computing environment functionality. • To implement HPDPS Extended Environment, load the DCE servers, a separately purchased product. Detailed instructions for installing the HPDPS components using swinstall are found in Chapter 2, “Installing HP Distributed Print Service,” of the HP Distributed Print Service Administration Guide.
Planning a Workgroup Distributing Backups Distributing Backups In a workgroup configuration, where large numbers of systems are involved it is frequently most efficient to centralize backup administration. In this way you can control the backup process and ensure that the data important to your organization is always appropriately backed up. Using HP OpenView OmniBack II for Backup If you are backing up large numbers of systems, the HP OmniBack software product can be particularly useful.
Planning a Workgroup Distributing Backups Figure 2-5 Distributing Backups with HP OmniBack II
Planning a Workgroup Services for Data Exchange with Personal Computers Services for Data Exchange with Personal Computers Today’s technology offers many ways to share data between HP-UX systems and personal computers (PC’s).
Planning a Workgroup Services for Data Exchange with Personal Computers Because ftp is supported by HP-UX and available on many PC-based operating systems, it is an ideal tool to use for transferring data between HP-UX systems and your personal computers. On HP-UX systems, the ftp utility can be found in the executable file: /usr/bin/ftp.
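A minimal interactive session might look like this sketch, where the host and file names are hypothetical; the ascii subcommand selects text mode (converting line endings), and get copies a file from the PC to the HP-UX system:
  ftp pc735n.net1.corporate
  ftp> ascii
  ftp> get report.txt
  ftp> bye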
Planning a Workgroup Services for Data Exchange with Personal Computers Examples of terminal emulators include: • telnet - can be used to connect to PC’s (requires the PC to run a telnet server application), and can be used on PC’s (in client mode) to connect to HP-UX systems. • Hyperterminal (found in several versions of Microsoft’s operating systems) - can be used on PC’s to connect to HP-UX systems via a modem.
Planning a Workgroup Services for Data Exchange with Personal Computers Versions of the X Window System for PCs Running applications on a remote computer and displaying the results on your own computer’s screen is as easy as using a terminal emulator (see “Terminal Emulators” on page 127) if you are working only with text.
Planning a Workgroup Services for Data Exchange with Personal Computers Versions of the PC Windows Systems for HP-UX Systems Running applications on a remote computer and displaying the results on your own computer’s screen is as easy as using a terminal emulator (see “Terminal Emulators” on page 127) if you are working only with text.
Planning a Workgroup Services for Data Exchange with Personal Computers Network Operating Systems Network Operating Systems such as Novell NetWare, AppleShare by Apple Computer, Inc., or Microsoft’s LAN Manager are still another way that you can share data between HP-UX systems and your personal computers. With a network operating system (NOS), a portion of the HP-UX directory tree is allocated for use by PC clients.
Planning a Workgroup Possible Problems Exchanging Data Between HP-UX and PCs Possible Problems Exchanging Data Between HP-UX and PCs No matter how you share data between HP-UX systems and PC’s, there are several important things you must consider related to operating system and computer architecture: • Differences in how PC’s, Apple Macintosh computers, and HP-UX systems handle the end-of-line condition in ASCII text files. • “Big Endian” versus “Little Endian” computer architecture.
Planning a Workgroup Possible Problems Exchanging Data Between HP-UX and PCs • Carriage returns with no line feeds (each line of text overwrites the previous line). All lines in the file are printed on the same line on the screen.
Planning a Workgroup Possible Problems Exchanging Data Between HP-UX and PCs NOTE Newer PA-RISC computers can be either big endian or little endian machines, however the HP-UX operating system is a big endian operating system.
Planning a Workgroup Internet Protocols and IPv6 Internet Protocols and IPv6 Internet Protocol version 6 (IPv6) is a new generation of the Internet Protocol that is beginning to be adopted by the Internet community. IPv6 is also referred to as “IPng” (IP next generation). It provides the infrastructure for the next wave of Internet devices, such as personal digital assistants (PDAs), mobile phones, and appliances. It also provides increased connectivity for existing devices such as laptop computers.
Planning a Workgroup Internet Protocols and IPv6 136 Chapter 2
3 Configuring a System This section describes how to set up a single-user or multiuser system.
Configuring a System Starting A Preloaded System Starting A Preloaded System System administrators can either use these directions as a quick reference or just print them out for users about to start up their own systems. IMPORTANT System security is an important part of system configuration. HP-UX provides a wide variety of security features, including basic file and access control, Trusted System configuration, intrusion detection with HP-UX HIDS, and system “lockdown” with Bastille.
Configuring a System Starting A Preloaded System The workstation completes its start-up sequence and displays the desktop login screen. Step 4. Log in to the desktop as root for your first session. See “Using the CDE Desktop” on page 140. Step 5. Set up and configure additional security, as suggested in the “Important” note above. See Chapter 8, “Administering a System: Managing System Security,” on page 741. Step 6. Add users as needed. See “Adding a User to a System” on page 245. Step 7.
Configuring a System Using the CDE Desktop Using the CDE Desktop After you install HP-UX, the desktop Login Manager displays a login screen. The CDE login screen is labeled CDE. When a particular desktop is running, it is the desktop that is run by all users on the system. Refer to the HP CDE 2.1 Getting Started Guide. If you see a console login prompt, then CDE is not running on your system.
Using System Administration Manager (SAM)
NOTE In HP-UX 11i Version 2, the implementation of a number of System Administration Manager (SAM) functions has changed, although SAM continues to provide an interface. For details, see “SAM X-Window-Based Interface” on page 36.
The System Administration Manager (SAM) is an HP-UX tool that provides an easy-to-use user interface for performing setup and other essential tasks.
Configuring a System Using System Administration Manager (SAM) Using SAM versus HP-UX Commands Using SAM reduces the complexity of most administration tasks. SAM minimizes or eliminates the need for detailed knowledge of many administration commands, thus saving valuable time. Use SAM whenever possible, especially when first mastering a task. Some tasks described in this manual cannot be done by SAM, in which case you will need to use the HP-UX commands.
Using SAM with an X Window System
To use SAM with an X Window System, the X11-RUN fileset must be installed and the DISPLAY environment variable must be set to reflect the display on which you want SAM to appear. (The DISPLAY variable will usually be set unless you used rlogin to log in to a remote system.) To view the current settings of the environment variables, enter
env | more
The DISPLAY environment variable is usually set in your shell start-up file, such as .profile or .login.
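For example, a minimal sketch in the POSIX shell, using a hypothetical display name myhost:
export DISPLAY=myhost:0.0
env | grep DISPLAY    # confirm the setting
/usr/sbin/sam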
Configuring a System Using System Administration Manager (SAM) For each user given restricted access, SAM creates a file /etc/sam/custom/login_name.cf that defines the user’s SAM privileges. SAM uses this file to give users access to the indicated areas. When users execute SAM, they will have superuser status in the areas you defined and will only see those SAM areas in the menu. Areas that do not require superuser status (such as SD) will also appear and will execute using the user’s ID.
Using Distributed Systems Administration Utilities
You can use Distributed Systems Administration Utilities (DSAU) tools to send files and commands to designated systems in your cluster or network.
A new tool in this toolkit is Configuration Engine (cfengine). cfengine is a popular open source tool for configuration synchronization. It provides policy-based or goal-based configuration management, letting the administrator define the management actions to be applied to groups of systems so those systems reach a desired state. cfengine is a client/server based tool.
Configuring a System Using Distributed Systems Administration Utilities The administrator can initiate synchronization operations on the managed clients in two ways, using either a push or a pull operation. • Using the cfrun command (see the cfrun (1) manpage for more information) from the master configuration server, the administrator can push changes. cfrun reads the file cfrun.hosts to determine the list of managed clients.
Configuring a System Using Distributed Systems Administration Utilities configuration synchronization operations. The master cfservd is responsible for authenticating remote clients using a public/private key exchange mechanism and optionally encrypting the files that are transferred to the managed clients. — cfservd can optionally run on each managed client in order to process cfrun requests.
Configuring a System Using Distributed Systems Administration Utilities Figure 3-1, “cfengine Overview,” illustrates the relationship of the cfengine commands and daemons, and shows an example of the administrator using cfrun. The dashed lines in the diagram indicate calling sequences (for example, A calls B). Solid lines indicate that data is being read from configuration files. Figure 3-1 cfengine Overview 1.
Configuring a System Using Distributed Systems Administration Utilities If a standalone system is the master server, by default the master copy of update.conf is located in /var/opt/dsau/cfengine_master/inputs/. The master copies of other configuration files such as cfagent.conf, cfservd.conf, and cfrun.hosts are also located here. If the master server is a Serviceguard cluster, the master configuration files are located in the mount point associated with the package.
A possible but somewhat unusual configuration is to have a fixed member of a Serviceguard cluster act as the master server with no package configured, in which case cfservd will not be highly available. This configuration is valid but not recommended.
Configuring cfengine
The following sections provide detailed instructions for setting up a configuration synchronization master server and its clients.
For a detailed description of the cfengine management actions, refer to the cfengine manpage.
This wizard will help you set up this system as a cfengine master or to add or remove a cfengine client, and to perform the required security setup.
Press ‘Return’ to continue...
Configuring a System Using Distributed Systems Administration Utilities ******* WARNING!!!! ******** To protect against possible corruption of sensitive configuration files, control-c has been disabled for the remainder of this configuration. Configuration of the cfengine master server is starting. Verifying the master has an entry in the /etc/hosts file on each client... Keys are being created... Keys have been created, now distributing.... Starting cfengine on the master and any pertinent client machines.
Configuring a System Using Distributed Systems Administration Utilities A file recording the answers for this run of the Configuration Synchronization Wizard is stored here... /var/opt/dsau/cfengine/tmpdir/csync_wizard_input.txt This configuration can be reestablished by issuing the command: /opt/dsau/sbin/csync_wizard \ -f /var/opt/dsau/cfengine/tmpdir/csync_wizard_input.
Configuring a System Using Distributed Systems Administration Utilities It is a client/server based utility. A standalone system or Serviceguard cluster can be configured as the cfengine ‘master’. The master contains the configuration description and configuration files that will be used by all the clients. Clients copy the configuration description from the master and apply it to themselves.
Configuring a System Using Distributed Systems Administration Utilities can run the package is also available. You will need a free IP address for this package and you need to configure storage for the package before proceeding. For details on creating highly available file systems, please refer to ‘Creating a Storage Infrastructure’ chapters of the Managing Serviceguard documentation.
Configuring a System Using Distributed Systems Administration Utilities • The package IP address. This should also be a registered DNS name so the configuration synchronization is easy to configure on client systems. • The package subnet. Use netstat -i to determine the proper subnet. Once the storage infrastructure is configured and the IP address obtained, press return to access the default answer of ‘yes’ and proceed with creating the package.
Configuring a System Using Distributed Systems Administration Utilities Note that additional remote clients can easily be added later using the wizard. It is not necessary to use the wizard to add new clients when additional members are added to the cluster. Refer to the section on Serviceguard Automation features for details. You can optionally specify additional remote clients to manage at this time. If you are running in an HA environment, you do not need to specify the cluster members.
Configuring a System Using Distributed Systems Administration Utilities If the administrator had previously configured cfengine, before overwriting any existing configuration files, the wizard creates backups in the directory: /var/opt/dsau/cfengine/backups The top level files in this directory are the most recent backup files. Any configurations before that are saved in timestamped subdirectories named v_timestamp.
For more information about the cfengine daemons and commands, refer to “cfengine Daemons and Commands” on page 147. The Serviceguard package ensures that cfengine’s cfservd daemon remains highly available. The cfengine configuration files update.conf and cfagent.conf define the master configuration synchronization server to be the registered DNS name for the relocatable IP address of the package.
2. The appropriate cfengine public/private keys are created for the new member and placed in the member’s /var/opt/dsau/cfengine/ppkeys directory. The new keys for this member are also distributed to the /var/opt/dsau/cfengine/ppkeys directories on the other cluster members.
3. The new member’s /var/opt/dsau/cfengine/inputs directory is populated.
4. cfservd is started on the new member.
5.
Configuring a System Using Distributed Systems Administration Utilities The wizard can currently only manage clients when the clients are in the same DNS domain as the master server. For multi-domain configurations, refer to “Manual Configuration” on page 163 for instructions on adding clients manually. Note that if adding a Serviceguard cluster as a managed client, each cluster member must be added individually.
Configuring a System Using Distributed Systems Administration Utilities Keys have been created, now distributing.... The client has been added to the cfengine domain The wizard configures each new client to run cfservd so it can respond to cfrun requests and adds the client to master’s cfrun.hosts file. Manual Configuration The following sections describe the steps required to manually configure cfengine master configuration synchronization servers or managed clients.
# cd /var/opt/dsau/cfengine_master/inputs
# cp /opt/dsau/share/cfengine/templates/cf.main.template cf.main
# cp /opt/dsau/share/cfengine/templates/update.conf.template update.conf
# cp /opt/dsau/share/cfengine/templates/cfagent.conf.template cfagent.conf
# cp /opt/dsau/share/cfengine/templates/cfrun.hosts.template cfrun.hosts
# cp /opt/dsau/share/cfengine/templates/cfservd.conf.template cfservd.conf
3. Next, edit update.conf.
These same domain edits must also be performed in cf.main and cfservd.conf. See the next steps. Use cfagent’s -p (--parse-only) flag to verify the syntax of update.conf.
4. Distribute the master update.conf to each managed client. This step is described in “Configuring a Synchronization Managed Client” on page 176.
5. Create the master server’s security keys.
Configuring a System Using Distributed Systems Administration Utilities “domain = <%DOMAIN_NAMER%>” and replace the token with the DNS domain of the client systems. This restricts all the clients to be members of that single domain. 8. The file /var/opt/dsau/cfengine_master/inputs/cfagent.conf is the master policy file. The default cfagent.conf includes the default template cf.
This example allows all the hosts in the listed domains to access files on the master server. You can also specify lists of specific hosts, IP address ranges, and so on. See the cfengine reference manual for additional information.
10. On the master server, start cfservd:
# /sbin/init.d/cfservd start
This is repeated for each managed client.
11. Test the configuration by performing the following steps:
a.
Configuring a System Using Distributed Systems Administration Utilities 1. Start by obtaining an IP address for the package. This address is typically registered in DNS to simplify management of remote clients. If you are using cfengine for intra-cluster use only, it is sufficient to make sure the address is added to each member’s /etc/hosts file. 2. Next, create the storage infrastructure required for a new package.
Configuring a System Using Distributed Systems Administration Utilities # cp /opt/dsau/share/cfengine/templates/update.conf.template update.conf # cp /opt/dsau/share/cfengine/templates/cfagent.conf.template cfagent.conf # cp /opt/dsau/share/cfengine/templates/cfrun.hosts.template cfrun.hosts # cp /opt/dsau/share/cfengine/templates/cfservd.conf.template cfservd.conf 3. Edit update.conf. This file has a format similar to cfengine’s main configuration file cfagent.conf.
These same domain edits must also be performed in cf.main and cfservd.conf. See below. Use cfagent’s -p (--parse-only) flag to verify the syntax of update.conf.
• List Managed Clients in cfrun.hosts
cfrun requires that all managed clients be listed in the file cfrun.hosts. Since each cluster member is considered a client, make sure each member is listed in /csync/dsau/cfengine_master/inputs/cfrun.hosts.
Configuring a System Using Distributed Systems Administration Utilities • Edit the cfservd.conf File The file /var/opt/dsau/cfengine_master/inputs/cfservd.conf controls which managed clients have access to the files served by cfservd on the master. Perform the following edits to cfservd.conf.
Configuring a System Using Distributed Systems Administration Utilities identical across all cluster members. cfengine’s cfkey generates a public/private key pair for the current system. cfkey creates the files localhost.priv and localhost.pub. cfengine expects keys to be named using the following convention: username-IP address.pub For example: root-10.0.0.3.pub The administrator copies the localhost.pub key to the correct name based on the system’s IP address.
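For example, a sketch of creating and renaming the keys for a system whose IP address is the hypothetical 10.0.0.3:
# cd /var/opt/dsau/cfengine/ppkeys
# cfkey
# cp localhost.pub root-10.0.0.3.pub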
Configuring a System Using Distributed Systems Administration Utilities • Configure and start cfservd 1. Configure the cfservd daemon to start at system startup. Edit /etc/rc.config.d/cfservd and change the line CSYNC_CONFIGURED=0 to CSYNC_CONFIGURED=1. 2. Propagate this change cluster-wide: # ccp /etc/rc.config.d/cfservd /etc/rc.config.d/cfservd 3. On the master server, start cfservd: # /sbin/init.d/cfservd start 4. Repeat for the remaining cluster members.
Configuring a System Using Distributed Systems Administration Utilities 4. Edit the package control script, and substitute appropriate values for the placeholder tokens. Note: The default script template assumes you are using an LVM-based storage configuration. If you are using VxVM and/or CFS, refer to the Managing Serviceguard documentation for more information on configuring packages using those technologies.
g. Find the line “FS_FSCK_OPT[0]=“<%SG_PKG_FS_FSCK_OPT%>”” and replace the token with any filesystem-specific fsck options. As above, the token can be deleted and this option left blank. For example, FS_FSCK_OPT[0]=“”.
h. Find the line “IP[0]=“<%SG_PKG_IP%>”” and replace the token with the IP address of the csync package. For example, IP[0]=192.0.2.3.
i.
Configuring a System Using Distributed Systems Administration Utilities The -v instructs cfrun itself to be more verbose and the --verbose is passed on to the remote cfagent. For additional troubleshooting information, please refer to “cfengine Troubleshooting” on page 182. Configuring a Synchronization Managed Client When manually configuring managed clients, the basic steps are: • Exchanging security keys. This establishes the trust relationship between the managed client and master server.
3. Push the client’s public key to the master server’s ppkeys directory using the following naming convention:
# scp localhost.pub master_server:\
/var/opt/dsau/cfengine/ppkeys/root-client_IP_address.pub
Note that it’s important to use a utility like secure copy (see scp (1)) when transferring the key in order to protect its integrity.
4.
Choosing a Synchronization Invocation Method
As the administrator, you can push changes out to managed clients by using the cfrun command (see cfrun (8)). cfrun contacts the cfservd daemon on each managed client, and cfservd invokes cfagent, which does the actual synchronization work. You can also choose to have cfagent run at intervals on the client. There are two approaches:
• Run cfagent from a cron job, as sketched below.
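A minimal sketch of the cron approach, assuming cfagent is installed as /opt/dsau/sbin/cfagent (the path and the hourly interval are assumptions; adjust for your installation). This root crontab entry runs cfagent at the top of every hour:
0 * * * * /opt/dsau/sbin/cfagent -q
The -q (--no-splay) option makes cfagent start immediately instead of inserting its usual randomized delay.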
• Key exchange
• Network port usage
• Encryption
• Checksum alerts
All the key exchange examples shown thus far have used scp to securely transfer the master server’s public key to the managed client and the managed client’s public key to the master server. A scheme like this provides the highest level of security but can be inconvenient in certain situations.
Network Port Usage
cfservd uses TCP port 5308 by default. You can instruct cfagent to connect to cfservd using a different port by specifying a port in the cfrun.hosts file. For example:
host1.abc.xyz.com          # Use standard port
host2.abc.xyz.com          # Use standard port
host3.abc.xyz.com:4444     # Use port 4444
Also, cfengine will honor a cfengine tcp port defined in /etc/services.
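For example, an /etc/services entry assigning cfengine a hypothetical alternate port might look like:
cfengine    4444/tcp    # hypothetical alternate cfengine port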
Configuring a System Using Distributed Systems Administration Utilities database, change ChecksumUpdates to “off.” At this point, any changes to a checksum of a monitored file causes a security warning. For example: host1: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! host1: SECURITY ALERT: Checksum for /etc/example changed! host1: !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Disabling Use of cfengine The csync_wizard does not have an unconfigure option to stop a system from being a master server.
• cfagent (see cfagent (8)) supports the --inform switch on the command line.
For more information, refer to the cfengine reference manual in /opt/dsau/doc/cfengine.
cfengine Troubleshooting
The following are some troubleshooting hints for working with cfengine.
1. Run cfservd on the master server using the --no-fork (-F) and --verbose (-v) options. This will provide useful information for any troubleshooting efforts.
2.
Configuring a System Using Distributed Systems Administration Utilities Check for white space within the configuration files. As a general rule, using white space can improve readability. One common issue is omitted whitespace within parentheses. For example, functions should have no space between the function name and leading parenthesis but the function itself requires leading and trailing whitespace within the enclosing parentheses.
cfengine:: Couldn’t open a socket
cfengine:: Unable to establish connection with host1 (failover)
host2: Couldn’t open a socket
If the master server’s cfservd is running, this error could indicate that there is a firewall or port issue such that the client cannot reach TCP port 5308 on the master server. When using cfrun, the master server must also be able to reach TCP port 5308 on the remote client.
Table 3-1 syslog Priority Levels
Message        Description
LOG_EMERG      A panic condition, normally broadcast to all users.
LOG_ALERT      A condition that should be corrected immediately, such as a corrupted system database.
LOG_CRIT       A critical condition, such as a hard device error.
LOG_ERR        General errors.
LOG_WARNING    Warning messages.
LOG_NOTICE     Conditions that are not error conditions, but may require special attention.
Table 3-2 syslog Facilities (Continued)
Message        Description
LOG_MAIL       Messages from the mail system.
LOG_DAEMON     Messages from the system daemons, such as inetd and ftpd (see inetd (1M), ftpd (1M)).
LOG_AUTH       Messages from the authorization system, including login, su, and getty (see login (1), su (1), getty (1M)).
LOG_SYSLOG     Messages generated internally by the syslogd daemon.
Configuring a System Using Distributed Systems Administration Utilities • Forwarded to remote systems. For more information, see the “Log Consolidation Overview” on page 187. See the syslogd (1M) manpage for additional information on configuring message filters. Log Consolidation Overview Log forwarding is a feature of the standard UNIX syslogd. In addition to logging messages to the local host's log files, syslogd can forward log messages to one or more remote systems.
Configuring a System Using Distributed Systems Administration Utilities Improved Log Consolidation The Distributed Systems Administration Utilities (DSAU) uses syslog-ng, or syslog “Next Generation,” to address the weaknesses of the traditional syslogd mentioned above. syslog-ng is an open source syslogd replacement. It performs all the functions of the standard syslogd in addition to providing features such as the following: • Improved filtering functionality.
Configuring a System Using Distributed Systems Administration Utilities syslog Co-existence The Distributed Systems Administration Utilities configures syslog-ng to co-exist and work alongside the standard syslogd. syslogd continues to handle all the local logging for the system. syslog-ng is used when forwarding messages to a log consolidation system and is used on the log consolidator to receive and filter messages. The following diagrams illustrate the relationship between syslogd and syslog-ng.
system’s /var/adm/syslog/syslog.log and related files. Applications also frequently have application-specific log files. In this example, Serviceguard maintains a log of package operations in /etc/cmcluster/<package name>/<package name>.log.
2. The clog_tail daemon of DSAU, labeled “Log reader” in the diagram, monitors text-based logs and sends new log lines to syslog-ng for processing.
Configuring a System Using Distributed Systems Administration Utilities Figure 3-3 illustrates the configuration on the log consolidation server. Figure 3-3 syslog-ng Log Consolidator Configuration 1. The syslog-ng server reads the incoming log data from the UDP or TCP connected clients. Note: gray arrows indicate a read operation; black arrows, a write. 2. The grey area is identical to the client configuration in Figure 3-2, “syslog-ng Log-Forwarding Configuration.
Configuring a System Using Distributed Systems Administration Utilities consolidated logs. /clog/syslog/ would contain the consolidated syslog-related file. /clog/packages would contain consolidated package logs for a Serviceguard cluster. Log Consolidation Configuration The following sections describe how to configure log consolidation servers and log forwarding clients. Configuring a consolidation server is a multi-step process. The clog_wizard tool vastly simplifies the configuration process.
Configuring a System Using Distributed Systems Administration Utilities - Client that forwards logs to a remote consolidation server Do you want to configure hostname as a Consolidation Server? (y/n) [y]: Answer yes. The wizard then prompts: Enter the fully qualified directory where the consolidated logs should be stored? []: It is typically best to select a dedicated filesystem for the consolidated logs.
Configuring a System Using Distributed Systems Administration Utilities implies that the administrator trusts the remote system. See the ssh section in the log forwarding client section for establishing stronger security guarantees. The /etc/services file documents the well-known reserved ports. When choosing a reserved port, the wizard will check both /etc/services and use “netstat -an” to check that the port is not in use. Note that syslogd uses UDP port 514. TCP port 514 is reserved for use by remsh.
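For example, before choosing a hypothetical TCP port such as 1776 for log consolidation, you might verify that it is neither registered nor in use:
# grep 1776 /etc/services
# netstat -an | grep 1776
If neither command produces output, the port is free to use.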
Configuring a System Using Distributed Systems Administration Utilities specified earlier. If you choose not to consolidate this system’s syslogs, then choosing a TCP transport earlier will require that all log forwarding clients be configured to use the TCP transport. The wizard displays a summary of all the configuration choices made by the administrator: Summary of Log Consolidation Configuration: You have chosen to configure hostname as a Log Consolidation Server.
Updating the syslog configuration:
Updating the /etc/rc.config.d/syslogd file to add -N to SYSLOGD_OPTS. This stops syslogd from listening on UDP port 514.
Updating the /etc/syslog.conf file for UDP local loopback.
Starting syslogd for the configuration changes to take effect.
Registering the log consolidation ports in the /etc/services file.
Starting syslog-ng.
Successfully configured hostname as a log consolidation server.
Configuring a System Using Distributed Systems Administration Utilities consolidation clients. The system they are sending the entries to is the consolidation server. In addition to syslog data, arbitrary textual log files can also be consolidated. In a Serviceguard cluster, this tool can help you automate package log file consolidation. Log consolidation is especially useful in a Serviceguard cluster, since it allows you to look at a single consolidated file instead of the per-member logs.
The wizard prompts for the following, all of which should have already been configured:
1. LVM volume group name (for example, vgclog)
2. Logical volume in the volume group (for example, /dev/vgclog/lvol1)
3. The filesystem’s mount point (for example, /clog)
4. The filesystem’s mount options (for example, -o rw,largefiles). The mount options are used verbatim in the Serviceguard package control script’s FS_MOUNT_OPT[0] field.
Configuring a System Using Distributed Systems Administration Utilities Log files that reside on this cluster can be consolidated. Would you like to consolidate this cluster's syslogs? (y/n) [y]: Would you like to consolidate this cluster's package logs? (y/n) [y]: In a Serviceguard cluster, you can consolidate all the member-specific syslog files into a single consolidated syslog containing syslog.log, mail.log and syslog-ng.log. Each member-specific package log can also be consolidated.
Subnet: 192.0.2.0
The following logs on this cluster will be consolidated:
Syslog
Serviceguard package logs
Do you want to continue? (y/n) [y]:
Copying files that will be modified by the wizard to /var/opt/dsau/root_tmp/clog on each cluster node. These files will be used to restore the cluster to its current log consolidation configuration, in the event of a failure.
Configuring a System Using Distributed Systems Administration Utilities Starting the "clog" Serviceguard package, this will take a few moments... The "clog" Serviceguard package has been started on cluster-member. Successfully created the "clog" Serviceguard package. Successfully configured clustername as a log consolidation server. Cluster Configuration Notes In a Serviceguard cluster, the adoptive node for the clog package performs the log consolidation functions.
Serviceguard Automation Features
The Distributed Systems Administration Utilities require Serviceguard 11.17 or later. With Serviceguard 11.17 or later, when members are added to or deleted from the cluster, or packages are added or deleted, the DSAU consolidated logging tools will automatically take the appropriate configuration actions.
Configuring a System Using Distributed Systems Administration Utilities — The clog_tail log monitor adds or deletes the package log file from its list of files to monitor. Minimizing Message Loss During Failover When there is a failure on the adoptive node, it takes a finite amount of time for the clog package to fail over to another cluster member. The longer this failover time, the more likely that messages could be lost from the consolidated log.
Configuring a System Using Distributed Systems Administration Utilities You can configure this cluster clustername as either: - Consolidation server - Client that forwards logs to a remote consolidation server Do you want to configure clustername as a Consolidation Server? (y/n) [y]: n Answer “No” here. At this point you are configuring a log forwarding client. The wizard displays the following: You now need to specify what system will be the consolidator.
Configuring a System Using Distributed Systems Administration Utilities clients. Refer to section “Configuring a Log Consolidation Standalone Server with clog_wizard” on page 192 for a discussion of the max-connections() setting. If you answer “yes” to using TCP, the next question asks for the TCP port to forward messages to: You need to find out from the administrator of the consolidation server the TCP port that was configured for log receiving.
Configuring a System Using Distributed Systems Administration Utilities ssh port forwarding requires an additional free TCP port on the local client system: You need to choose a free port on this cluster for ssh port forwarding. The port chosen should be free on all cluster nodes. Enter the ssh port to be used for port forwarding? []: 1175 The same guidelines for choosing a free syslog-ng TCP port apply to this port.
Configuring a System Using Distributed Systems Administration Utilities The following logs will be forwarded for consolidation: Syslog Serviceguard package logs Do you want to continue? (y/n) [y]: Confirm your answers with a “yes” response and the wizard summarizes the configuration steps that it performs: Copying files that will be modified by the wizard to /var/opt/dsau/root_tmp/clog on each cluster node.
Configuring a System Using Distributed Systems Administration Utilities Manually Configuring Log Consolidation If you choose not to use the Consolidated Logging Wizard, use the following sections for the manual steps required to configure a log consolidation server and log forwarding clients. Because there are many steps required to set up clients and servers, HP recommends using the clog_wizard.
mail.debug          @<consolidator>
*.info;mail.none    @<consolidator>
where <consolidator> is the fully qualified domain name of the consolidation server. The name must be fully qualified or syslogd will not forward the messages properly. Note that there must be a tab before each @ sign. If you have customized syslog.conf, make sure to add the forwarding lines for your customizations as well.
Configuring a System Using Distributed Systems Administration Utilities same manner as a remote client. In other words, when the consolidator is a client of itself, it’s configured identically to remote clients. If using the UDP protocol or not consolidating the local syslogs of this server, delete the <%UDP_LOOPBACK_SOURCE%> and <%UDP_LOOPBACK_LOG%> tokens. • Replace the <%TYPE%> tokens with either udp or tcp depending on the desired log transport to support.
Configuring a System Using Distributed Systems Administration Utilities • Replace the <%FS%> token with the filesystem or directory where the consolidated logs will be kept. For example, destination d_syslog { file(“<%FS%>/syslog/syslog.log”); }; becomes: destination d_syslog { file(“/clog/syslog/syslog.log”); }; Make sure that this directory exists or the appropriate filesystem is mounted.
If consolidating the local syslogs, add:
CLOG_SYSLOG=1
otherwise add:
CLOG_SYSLOG=0
For a standalone consolidator, add the following, where <dir> is the directory that holds the consolidated logs:
CLOG_SYSTEM_LOG_CONSOLIDATION_DIR=<dir>
CLOG_SERVICEGUARD_PACKAGE_LOG_CONSOLIDATION_DIR=<dir>
— Add the following two values that are used by the System File Viewer:
CLOG_LAYOUTS_DIR=/var/opt/dsau/layouts
CLOG_ADDITIONAL_LOG_DIRS[0]=/var/adm
Configuring a System Using Distributed Systems Administration Utilities Create the configuration files described below on every cluster member. The simplest approach is to configure one member completely and then copy each configuration file cluster-wide. The cexec and ccp tools can simplify replicating changes cluster-wide. For a cluster configuration, syslog-ng is configured as a package so the log consolidation service is highly available.
mail.debug          @<member>
*.info;mail.none    @<member>
where <member> is the fully qualified domain name of the local cluster member. The name must be fully qualified or syslogd will not forward messages properly. If you have customized syslog.conf, make sure to add the forwarding lines for your customizations as well.
c. Since /etc/rc.config.
Configuring a System Using Distributed Systems Administration Utilities This causes the syslog-ng consolidator to read the local syslogd’s UDP messages and send them to syslog-ng on the local TCP port. Optionally, the destination could be set to be the local consolidation file directly (destination(d_syslog) in this default template), but the above configuration sets the consolidation server client components in the same manner as a remote client.
For UDP:
destination d_syslog_udp { udp(“<%IP%>” port(<%PORT%>)); };
where <%IP%> is replaced by the clog package IP address or hostname and the <%PORT%> token is replaced by 514, the standard syslog UDP port.
e. Replace the <%FS%> token with the filesystem or directory where the consolidated logs will be kept. This filesystem/directory is the one managed by the Serviceguard package.
b. Replace all the <%TYPE%> tokens with either tcp or udp depending on the desired log transport.
c. Find the line: “destination d_syslog_<%TYPE%> { <%TYPE%>(“<%IP%>” port(<%PORT%>)); };”. Replace <%IP%> with the IP address of the clog package. For TCP, replace <%PORT%> with the TCP port used for log forwarding (selected above). For UDP, replace <%PORT%> with 514, the standard UDP port.
Step 4. The syslog-ng startup procedure, /sbin/init.
Configuring a System Using Distributed Systems Administration Utilities Step 5. All the files edited thus far need to be distributed cluster-wide: # ccp /etc/syslog-ng.conf.server /etc/ # ccp /etc/syslog-ng.conf.client /etc/ # ccp /etc/rc.config.d/syslog-ng /etc/rc.config.d/ Step 6. When using TCP, record the port number you chose above in the /etc/services file.
Step 2. Find the line “LV[0]=“<%SG_PKG_LOG_VOL%>”” and replace the token with the full name of the logical volume. For example:
LV[0]=“/dev/vgclog/lvol1”
Step 3. Find the line “FS[0]=“<%SG_PKG_FS%>”” and replace the token with the name of the filesystem created for this package. For example:
FS[0]=“/clog”
All the consolidated logs will reside on this filesystem.
Step 9. Find the line “SUBNET[0]=“<%SG_PKG_SUBNET%>”” and replace the token with the subnet for the package’s IP address. Use netstat -i to help determine the subnet. For example:
SUBNET[0]=192.119.152.0
Now distribute the package files clusterwide by doing the following steps:
Step 1. Distribute the package files clusterwide.
Configuring a System Using Distributed Systems Administration Utilities Step 1. Run /opt/dsau/sbin/syslog-ng with the -s or --syntax-only option to verify the syntax of the /etc/syslog-ng.conf.server and /etc/syslog-ng.conf.client files. For the package’s adoptive node, a symbolic link will be created named /etc/syslog-ng.conf and this symbolic link will point to the .server file. For the remaining cluster members, the symbolic link will point to the .client file.
Step 5. Validate that log forwarding is working properly. If consolidating the cluster’s local syslogs, use logger with a test message and make sure the message appears in the consolidated syslog.log. If you are not consolidating local logs, use the logger command from a log forwarding client. Note that logger messages are first sent to the local syslogd, which forwards them to syslog-ng. By default, syslogd suppresses duplicate messages.
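For example, assuming the consolidated logs live under /clog as in the earlier examples:
# logger "clog forwarding test"
# tail /clog/syslog/syslog.log
The test message should appear at the end of the consolidated log within a few moments.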
b. Edit the system’s /etc/syslog.conf file to forward log messages to port 514 on the local host, where they will be read by syslog-ng. Using the HP-UX default /etc/syslog.conf as the example, add the following lines:
mail.debug          @<localhost>
*.info;mail.none    @<localhost>
where <localhost> is the fully qualified hostname of this system.
b. Replace all the <%TYPE%> tokens with either tcp or udp depending on the desired log transport.
c. Find the line “destination d_syslog_<%TYPE%> { <%TYPE%>(“<%IP%>” port(<%PORT%>)); };”. If using the UDP protocol, replace <%IP%> with the IP address of the log consolidation server and <%PORT%> with 514, the standard UDP port. If using the TCP protocol with ssh port forwarding, replace <%IP%> with 127.0.0.1.
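The 127.0.0.1 address works because an ssh tunnel carries the messages: the client connects to a local port that ssh forwards to the consolidation server. Conceptually, the tunnel is equivalent to the following command (DSAU normally sets this up for you; the ports and hostname here are hypothetical):
# ssh -f -N -L 1175:127.0.0.1:1776 consolidator.example.com
Here local port 1175 is forwarded to TCP port 1776 on the consolidation server.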
If using the TCP protocol with ssh port forwarding, use:
CLOG_SSH=1
CLOG_SSH_PORT=<ssh forwarding port>
otherwise, use:
CLOG_SSH=0
If using the UDP protocol, use:
CLOG_TCP=0
If consolidating the local syslogs, use:
CLOG_SYSLOG=1
otherwise, use:
CLOG_SYSLOG=0
Step 4. When using TCP with ssh port forwarding, record the ssh port number you chose above in the /etc/services file.
Configuring a System Using Distributed Systems Administration Utilities Manually Configuring a Serviceguard Cluster as a Log Forwarding Client Configuring a Serviceguard cluster as a log forwarding client is similar to configuring a single system. All cluster members must be up and accessible before proceeding. You will first configure syslogd, then syslog-ng. Create the configuration files described below on every cluster member.
e. The /etc/syslog.conf file is specific to each member, and the edits described above must be performed on each cluster member.
f. After making the above changes on each cluster member, restart syslogd for the changes to take effect. Use cexec to do this on all members of the cluster:
# cexec “/sbin/init.d/syslogd stop;/sbin/init.d/syslogd start”
Step 2. To configure syslog-ng, start with the same syslog-ng.
The same guidelines for choosing a free syslog-ng TCP port apply to this port. For details, refer to “Configuring a Log Consolidation Standalone Server with clog_wizard” on page 192. (Note that the ssh port chosen should be a free port on all cluster members.) Non-interactive secure shell authentication must be set up between each member of this cluster and the log consolidator (you can use the /opt/dsau/bin/csshetup tool for this configuration).
Configuring a System Using Distributed Systems Administration Utilities If consolidating this cluster’s package logs, add: CLOG_PACKAGE=1 otherwise, add: CLOG_PACKAGE=0 Step 4. All the files edited thus far need to be distributed cluster-wide: # ccp /etc/syslog-ng.conf.client /etc/ # ccp /etc/rc.config.d/syslog-ng /etc/rc.config.d/ Create the following symbolic link on each cluster member: # ln -sf /etc/syslog-ng.conf.client /etc/syslog-ng.conf Step 5.
Configuring a System Using Distributed Systems Administration Utilities Consolidating Package Logs on the Log Consolidation Server To consolidate the package logs forwarded from cluster clients to a Log Consolidation Server, the following needs to be done on the Log Consolidation Server: Step 1. For each package that will be forwarded from a cluster client, add the following destination, filter and log lines to the syslog-ng.conf.server file, after the HP_AUTOMATED_LOG_FILE_CONSOLIDATION section.
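For example, a minimal sketch of such destination, filter, and log lines for a hypothetical package named pkgA; the source name s_tcp and the /clog directory are assumptions, so use the source and directory actually defined in your syslog-ng.conf.server:
destination d_pkgA { file("/clog/packages/pkgA.log"); };
filter f_pkgA { match("pkgA"); };
log { source(s_tcp); filter(f_pkgA); destination(d_pkgA); };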
Configuring a System Using Distributed Systems Administration Utilities Disabling a Standalone Log Consolidation System Perform the following steps to deconfigure log consolidation: Step 1. If the local syslogs were being consolidated, or the UDP protocol was used, edit /etc/rc.config.d/syslogd and change SYSLOGD_OPTS to remove the -N switch. For example, make the following edit: SYSLOGD_OPTS=“-D” Step 2. If the local syslogs were being consolidated, also edit the system’s /etc/syslog.
Step 1. If local syslogs were being consolidated or the UDP protocol was used, edit /etc/rc.config.d/syslogd and change SYSLOGD_OPTS to remove the -N switch. For example:
SYSLOGD_OPTS="-D"
Step 2. Restart syslogd with the following commands:
# /sbin/init.d/syslogd stop
# /sbin/init.d/syslogd start
Step 3. If the local syslogs were being consolidated, edit the system’s /etc/syslog.conf file to remove the following lines:
mail.debug          @<consolidator>
*.info;mail.none    @<consolidator>
Disabling a Standalone Log Forwarding Client
Perform the following steps to disable log forwarding on a standalone client:
Step 1. If syslog messages were being forwarded to the log consolidator, edit /etc/rc.config.d/syslogd and change SYSLOGD_OPTS to remove the -N switch. For example, SYSLOGD_OPTS=“-D”
Step 2. Edit the system’s /etc/syslog.conf file to remove the following lines:
mail.debug          @<consolidator>
*.info;mail.none    @<consolidator>
Disabling a Serviceguard Cluster Log Forwarding Client
Perform the following steps to deconfigure log forwarding. These steps need to be done on each cluster member:
Step 1. If syslog messages were being forwarded to the log consolidator, edit /etc/rc.config.d/syslogd and change SYSLOGD_OPTS to remove the -N switch. For example, SYSLOGD_OPTS=“-D”.
Step 2. Edit the system’s /etc/syslog.conf file to remove the following lines:
mail.debug          @<consolidator>
*.info;mail.none    @<consolidator>
Configuring a System Using Distributed Systems Administration Utilities Log File Protections One level of protection is the permissions on the consolidated log files themselves. This is controlled via the syslog-ng.conf.server file. Each syslog-ng “file” destination can have specific permissions specified. If the log directory for a consolidated file does not exist, syslog-ng can be instructed to create it (create_dirs(yes)) and set the directory’s ownership and permissions on the directory as well.
Configuring a System Using Distributed Systems Administration Utilities using syslog-ng’s global setting “time_reopen()”. See the syslog-ng open source reference manual (/opt/dsau/doc/syslog-ng) for details. ssh Port Forwarding to a Serviceguard Cluster Log Consolidator When using ssh port forwarding with a Serviceguard cluster as the log consolidation server, a special ssh configuration is required.
Pick one of the cluster members and copy these files to the same directory on the other cluster members. Using the “cluster copy” tool, ccp, is a quick way to do this, using the following commands:
# cd /opt/ssh/etc/
# ccp ssh_host_* /opt/ssh/etc/
Then from each log consolidation client, perform a standard ssh key exchange with the relocatable IP address of the clog package.
Starting System Management Homepage
To log in to the System Management Homepage, navigate to:
http://hostname:2301
Enter a username and password. Root logins are enabled by default. For additional information on starting and logging into the System Management Homepage, refer to the HP Systems Management Homepage User Guide. After logging in to System Management Homepage, choose the Logs tab and then “System Log Viewer.”
Parallel Distributed Shell
The Distributed Systems Administration Utilities (DSAU) include the open source tool Parallel Distributed Shell (pdsh). pdsh formalizes the use of remsh and ssh for distributing commands to groups of systems. Unlike remsh/ssh wrappers, pdsh offers the following benefits:
• High performance
Commands are issued in parallel to groups of target systems.
Configuring a System Using Distributed Systems Administration Utilities • Choice of command transports pdsh can use either remote shell rcmd (see rcmd (3)) or ssh as a command transport. Note that the ssh transport offers greatly improved security. See “Security Configuration” on page 242 for details. • Parallel copy command The pdcp command provides a parallelized copy command to copy a local source file to multiple targets.
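For example, assuming three hypothetical target hosts, the following runs a command on all of them in parallel and then uses pdcp to copy a file to the same targets (the -w option names the target list):
# pdsh -w host1,host2,host3 uptime
# pdcp -w host1,host2,host3 /etc/ntp.conf /etc/ntp.conf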
Configuring a System Using Distributed Systems Administration Utilities pdsh Utility Wrappers Administrators can build wrapper commands around pdsh for commands that are frequently used across multiple systems and Serviceguard clusters. Several such wrapper commands are provided with DSAU. These wrappers are Serviceguard cluster-aware and default to fanning out cluster-wide when used in a Serviceguard environment.
Configuring a System Using Distributed Systems Administration Utilities Security Configuration The command fanout tools support both remote shell (rsh or rcmd) and ssh transports. Each requires specific security setup steps in order to authorize the user initiating the command fanout operation to execute a command on the remote target systems. The command fanout tools require that the remote system not prompt for a password.
Configuring a System Using Distributed Systems Administration Utilities Security Notes The remote shell protocol is an inherently insecure protocol. It is the protocol used by the Berkeley “r commands,” rlogin, rcp, remsh, and so on. Many system administrators disable the use of the “r” commands as a matter of policy. For example, the Bastille security hardening tool offers a default option to disable these insecure services.
— pdsh@<local host>: gethostbyname(“<target host>”) failed
Reason: The target hostname is unknown.
— pdsh@<local host>: <target host>: connect: Connection refused
Reason: The target system is unreachable. The r services might be disabled for this system.
— pdsh@<local host>: <target host>: connect: timed out
Reason: The hostname exists (e.g.
Controlling Access to a System
You can control who has access to your system, its files, and its processes. Authorized users gain access to the system by supplying a valid user name (login name) and password. Each user is defined by an entry in the file /etc/passwd. You can use SAM to add, remove, deactivate, reactivate, or modify a user account. For additional information about passwords, refer to passwd (4) and passwd (1).
Configuring a System Controlling Access to a System • Allow user to log into other systems without a password. See “$HOME/.rhosts file” on page 386. • Import remote directories using NFS. See “Sharing Files and Applications via NFS and ftp” on page 394. • Give remote access to a user. See “Allowing Access to Remote Systems” on page 386. • Set up the user’s login environment. See “Customizing System-Wide and User Login Environments” on page 270. • Test the new account.
To see the steps that SAM executes, choose Options/View SAM Log... When you use SAM to add a user, SAM does the following:
• creates an entry in the /etc/passwd file for the user
• creates a home directory for the user
• copies start-up files (.cshrc, .exrc, .login, .profile) to the user’s home directory
Manually Adding a User
Use the following steps to add a user from the command line.
Step 1. Add the user to the /etc/passwd file.
Step 3. Ensure that the user has the appropriate shell start-up files to execute when logging in. The three most popular shells in the HP-UX environment are the POSIX shell, the Korn shell, and the C shell. Each shell uses particular start-up files.
Table 3-3 Start-Up Files
Shell Name     Location                Start-up Files
POSIX shell    /usr/bin/sh, /sbin/sh   .profile and any file specified in the ENV environment variable
Korn shell     /usr/bin/ksh            .profile and any file specified in the ENV environment variable (conventionally .kshrc)
Using the useradd Command
You can use the useradd command to add users, as well as usermod and userdel to modify and delete them. useradd has the form:
/usr/sbin/useradd [option] ... username
username is the new login name for the user. The options are described in Table 3-4. See also useradd (1M).
Table 3-4 useradd Options
Option      Meaning
-u uid      UID (defaults to next highest number).
-g group    Primary working group name or group ID.
Configuring a System Controlling Access to a System The following command creates a new user account, adds Patrick to the primary working group (called users), creates a home directory and sets up a default Korn shell: useradd -g users -m -k /etc/skel -s /usr/bin/ksh patrick The resulting entry in the /etc/passwd file is: patrick:*:104:20::/home/patrick:/usr/bin/ksh You can make a script with as many instances of the useradd command as necessary. You can set different defaults with the useradd -D command.
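For example, a minimal sketch of such a script, using the same options as the example above with hypothetical user names:
#!/usr/bin/sh
# create several accounts with identical defaults
for user in alice bob carol
do
    useradd -g users -m -k /etc/skel -s /usr/bin/ksh $user
done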
You can assign special privileges to a group of users using the /usr/sbin/setprivgrp command. For information, refer to setprivgrp (1M), setprivgrp (2), getprivgrp (1), getprivgrp (2), rtprio (2), plock (2), shmctl (2), lockf (2), chown (1), chown (2), setuid (2), and setgid (2).
Configuring a System Controlling Access to a System NOTE Access Control Lists are supported in JFS beginning with JFS 3.3, which is included with HP-UX 11i. You can obtain JFS 3.3 for HP-UX 11.00 from the HP Software Depot, http://software.hp.com. To see if JFS 3.3 is installed on an HP-UX 11.00 system, run swlist -l fileset JFS If JFS 3.3 is installed, the output will include a list of JFS file sets. If you get an error message, JFS 3.3 is not installed.
Configuring a System Controlling Access to a System The default run-level is usually run-level 3 or 4, depending on your system. The default run-level for CDE is 4. To determine the current run-level of the init process, type: who -r You can add to and change the sequence of processes that HP-UX starts at each run-level. See “Customizing Start-up and Shutdown” on page 515. Also see the manpage inittab (4). You can use SAM to shut down a system and change the current run-level to single-user state.
For increased security, ensure that the permissions (and ownership) for the files /sbin/init and /etc/inittab are as follows:
-r-xr-xr-x  bin  bin  /sbin/init
-r--r--r--  bin  bin  /etc/inittab
Configuring a System Adding Peripherals Adding Peripherals To add peripherals to your system, consult the following documentation: • The hardware installation manual that came with the peripheral. • For PCI OL* information, see the manual Interface Card OL* Support Guide. For PCI OL* information on nPartition-able systems, see the manual HP Systems Partitions Guide: Administration for nPartitions.
Configuring a System Adding Peripherals The easiest way to add peripherals is to run SAM or Partition Manager for nPartition-able systems. However, you can also add peripherals using HP-UX commands. For HP-UX to communicate with a new peripheral device, you may need to reconfigure your system’s kernel to add a new driver. If using HP-UX commands, use the /usr/sbin/mk_kernel command (which SAM uses).
Configuring a System Adding Peripherals If there is a terminfo file for the terminal you want to add, skip the next step and go to Step 4. If there is no terminfo file for the terminal you want to add, you will need to create one. See the next step for details. Step 3. To create a terminfo file, follow the directions in terminfo (4). To adapt an existing file, follow these steps: a. Log in as superuser. b. Make an ASCII copy of an existing terminfo file.
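For example, a minimal sketch of adapting an existing entry, assuming the untic and tic utilities are available and using a hypothetical source terminal type wy50:
# untic wy50 > myterm.ti    # dump an existing entry to ASCII
# vi myterm.ti              # edit the entry for the new terminal
# tic myterm.ti             # compile and install the new entry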
Configuring a System Adding Peripherals Troubleshooting Problems with Terminals There are a number of terminal related problems that can occur. Many of these result in a terminal that appears not to communicate with the computer. Other problems cause “garbage” to appear on the screen (either instead of the data you expected or intermixed with your data).
Configuring a System Adding Peripherals Step 2. Check to see if an editor is running on the terminal. This is best done from another terminal. Issue the command: ps -ef Look in the column marked TTY for all processes associated with the terminal with which you are having problems. For each entry, check in the column marked COMMAND to see if the process represented by that entry is an editor. If you find that an editor is running at the terminal, it is probably in a text-entry mode.
Configuring a System Adding Peripherals CAUTION The stty command, above, should only be used with device file names for currently active terminal device files (use the who command to see which device files are active). If you attempt to execute stty with a non-active device file, you will hang the terminal where you entered the commands. Step 4. Reset the terminal. The terminal itself may be stuck in an unusable state. Try resetting it.
Configuring a System Adding Peripherals If you have another terminal that is still working, go to that terminal and log in (you will need to be superuser).
Configuring a System Adding Peripherals Try using the cat command to send an ASCII file (such as /etc/motd or /etc/issue) to the device file associated with the problem terminal. For example, if your problem terminal is associated with the device file ttyd1p4: cat /etc/motd > /dev/ttyd1p4 You should expect to see the contents of the file /etc/motd displayed on the terminal associated with the device file /dev/ttyd1p4. If you do not, continue to the next step. Step 10.
— An alternate method to test the terminal hardware is to swap the suspect terminal with a known good one. This will help identify problems within the terminal that are not caught by the terminal selftest.
NOTE Be sure to swap only the terminal (along with its keyboard and mouse). You want the known good terminal at the end of the SAME cable that the suspect terminal was plugged into.
Configuring a System Adding Peripherals • Noise on the data line: — RS-232 Cable too long (maximum recommended length is 50 feet) — Data cable near electrically noisy equipment (motors, etc.
Configuring a System Setting Up the Online Manpages Setting Up the Online Manpages There are three ways to set up online manpages, each resulting in a different amount of disk usage and having a different response time: 1. Fastest response to the man command (but heaviest disk usage): Create a formatted version of all the manpages. This is a good method if you have enough disk space to hold the nroff originals and the formatted pages for the time it takes to finish formatting.
Configuring a System Setting Up the Online Manpages You only need to create the cat8.Z directory if /usr/share/man/man8.Z exists. To save disk space, make sure you use the cat*.Z directories (not cat*) because if both cat*.Z and cat* exist, both directories are updated by man. To save disk space, you can NFS mount the manpages on a remote system. Regardless of how you set up the manpages, you can recover disk space by removing the nroff source files.
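For example, a sketch of the first method, assuming the catman utility and the default manpage locations:
# catman       # format all manpages into the cat* directories
# catman -w    # alternatively, build only the whatis database used by man -k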
Configuring a System Making Adjustments Making Adjustments • Setting the System Clock • Manually Setting Initial Information • Customizing System-Wide and User Login Environments Setting the System Clock Only the superuser (root) can change the system clock. The system clock budgets process time and tracks file access.
Configuring a System Making Adjustments Setting the Time Zone (TZ) /sbin/set_parms sets your time zone upon booting. If you have to reset the time zone, you can use /sbin/set_parms. See “Manually Setting Initial Information” on page 268. Setting the Time and Date /sbin/set_parms sets your time and date upon booting. See “Manually Setting Initial Information” on page 268. If you have to reset the time or date, you can use SAM or HP-UX commands.
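For example, a sketch using the date command to set the time and date from the command line (the argument format is mmddhhmm[[cc]yy]; the value shown is hypothetical):
# date 0316120506
This sets the clock to 12:05 on March 16, 2006.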
Configuring a System Making Adjustments /sbin/set_parms is automatically run when you first boot the system. To enter the appropriate set_parms dialog screen to manually add or modify information after booting, log in as superuser and specify set_parms option option is one of the keywords in Table 3-5. You will be prompted for the appropriate data. Table 3-5 set_parms Options option Chapter 3 Description hostname Your unique system name.
Configuring a System Making Adjustments Table 3-5 set_parms Options (Continued)
option      Description
font_c-s    Network font service. This allows you to configure your workstation to be a font client or server. As a font client, your workstation uses the font files on a network server rather than using the fonts on its own hard disk, thus saving disk space. System RAM usage is reduced for font clients, but increased for font servers.
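For example, to re-run a single dialog after booting (assuming the timezone keyword is among those listed in Table 3-5):
set_parms timezone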
Configuring a System Setting Up Mail Services Setting Up Mail Services Whether you are administering a single system, or a workgroup containing many systems, you will probably want your users to be able to communicate with each other using electronic mail (e-mail). This topic area will help you understand what is involved in setting up e-mail services for your workgroup.
Configuring a System Setting Up Mail Services • Set up MIME applications (if necessary) to allow the user to work with non-textual information attached to incoming electronic mail, for example viewing graphics files or video clips, or listening to audio data. Mail Delivery Agents Mail Delivery Agents form the core of the electronic mail system. These programs, usually running in the background, are responsible for routing and delivering electronic mail.
Configuring a System Setting Up Mail Services Mail Alias Files Mail Alias Files are used for: • Mapping “real world” names to user login names • Describing distribution lists (mailing lists), where a single name (e.g., deptXYZ) is mapped to several or many user login names For faster access, the alias files can be processed into a hashed database using the newaliases command (a form of sendmail). By default, the alias file (ASCII version) is located in the file /etc/mail/aliases.
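A hypothetical fragment of /etc/mail/aliases (names are illustrative only):
deptXYZ: tom, lisa, fred
postmaster: root
After editing the file, rebuild the hashed database:
newaliases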
Configuring a System Setting Up Mail Services Central Mail Hub A central mail hub (a mail server) receives e-mail for its users and the users on the client computers that it serves. Users either NFS-mount their incoming mail files to their local computers (the clients), or log in to the hub to read their mail. Electronic mail can be sent directly from the client computers.
Configuring a System Setting Up Mail Services ✓ Traffic between local machines (within the workgroup) does not have to travel through the hub computer because each client can send and receive its own electronic mail. Therefore if the hub goes down or becomes overloaded, local mail traffic is unaffected (only mail to and from computers outside of the workgroup is affected). ✓ Greater privacy for electronic mail users on the client machines. Data is not stored in a central repository.
Configuring a System Setting Up Mail Services ✓ Each computer needs to run its own copy of the sendmail daemon to “listen” for incoming mail. Selecting a Topography The topography you use depends on your needs. Here are some things to consider when choosing your electronic mail network topography: Security By using a topography with a hub computer you can better protect work that is being done on machines within your workgroup or organization.
Configuring a System Setting Up Mail Services Configuring a System to Send Electronic Mail Configuring an HP-UX system to send e-mail is relatively simple. You need to do two things: 1. Be sure that the executable file for the sendmail program, /usr/sbin/sendmail, is on your system. 2. If you are using a Gateway Mail Hub topography you need to enable site hiding for each of the client computers in your workgroup.
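Once /usr/sbin/sendmail is in place, one quick way to verify outbound mail is to send a test message from the command line (address hypothetical):
echo "test" | /usr/sbin/sendmail -v user@example.com
The -v option displays the delivery dialog so you can watch the message being handed off.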
Configuring a System Setting Up Mail Services Configuring a System to Receive Electronic Mail Configuring a system in your workgroup to receive e-mail is a bit more complicated than configuring it to send e-mail. First you must determine two things: 1. Which type of networking topography you are going to use (see Networking Topographies) 2. Where the system fits into the topography: the electronic mail hub, a client in a workgroup served by a hub, or a standalone system.
Configuring a System Setting Up Mail Services c. (Optional) Set the environment variable SENDMAIL_FREEZE to 1 to indicate that the sendmail configuration file is to be frozen. With older computers, and in certain other circumstances, a frozen configuration file can speed up sendmail’s performance by reducing the time it needs to parse its configuration file. SENDMAIL_FREEZE=1 Step 2. Reboot the hub computer to start up and properly configure the sendmail daemon.
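If a full reboot is inconvenient, on many HP-UX systems the sendmail startup script can be run by hand instead (a sketch; verify the script name on your release):
/sbin/init.d/sendmail stop
/sbin/init.d/sendmail start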
Configuring a System Setting Up Mail Services c. (Optional) Set the environment variable SENDMAIL_FREEZE to 1 to indicate that the sendmail configuration file is to be frozen. With older computers, and in certain other circumstances, a frozen configuration file can speed up sendmail’s performance by reducing the time it needs to parse its configuration file. SENDMAIL_FREEZE=1 Step 2. Reboot the computer to start up and properly configure the sendmail daemon.
Configuring a System Setting Up Mail Services Fully Distributed (Standalone System) Topography When using a Fully Distributed electronic mail topography each computer is a standalone machine (with regard to electronic mail). Each machine is effectively its own workgroup and is configured just like the hub computer in a “Central Mail Hub” topography e-mail network. Configuring each System The procedure for configuring each system in a “Fully Distributed” topography is: Step 1. Edit the file /etc/rc.config.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) NOTE This section applies to releases of HP-UX prior to 11i version 2. See “Reconfiguring the Kernel (HP-UX 11i Version 2)” on page 315 for the procedures for 11i version 2 and beyond. For most systems, the default kernel configuration included with HP-UX will be sufficient for your needs.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) ❏ A dynamic tunable is one whose value can be changed without a reboot. ❏ An automatic tunable is one that is constantly being tuned by the kernel itself in response to changing system conditions. The list of dynamic and automatic tunables is continually growing. To determine which tunables are dynamic on your HP-UX 11i system, use the kmtune command (see the kmtune (1M) manpage), or see the Kernel Configuration portion of SAM.
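For example (options as described in kmtune (1M)):
/usr/sbin/kmtune        (lists each tunable and its value)
/usr/sbin/kmtune -l     (long listing, with additional detail per tunable)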
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) You cannot use SAM to add, remove, or modify the root file system. Instead, re-install your system or see “Creating Root Volume Group and Root and Boot Logical Volumes” on page 578 if you are using logical volumes.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) To use SAM to reconfigure the kernel, log in as the superuser, ensure you are logged on to the machine for which you are regenerating the kernel, and start SAM. Select the “Kernel Configuration” menu item; use SAM’s online help if needed. Generally, SAM is simpler and faster to use than the equivalent HP-UX commands. To use HP-UX commands to reconfigure the kernel: Step 1.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) This builds a new kernel ready for testing: /stand/build/vmunix_test and the associated kernel components. Step 5. Prepare for rebooting by invoking the kmupdate command. This sets a flag that tells the system to use the new kernel when it restarts. /usr/sbin/kmupdate Step 6. Notify users that the system will be shut down.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Managing Dynamically Loadable Kernel Modules This section presents the concepts and procedures which are necessary to understand, configure, and manage Dynamically Loadable Kernel Modules (DLKMs).
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) DLKM Concepts This section provides a conceptual overview of DLKM features and functionality by: • defining DLKM at a high level • explaining terms and concepts essential to understanding DLKM • describing how DLKM modules are packaged in HP-UX • identifying the types of kernel modules currently supported by DLKM • describing the advantages of writing kernel modules in DLKM format • examining DLKM module functions and co
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Table 3-8 Important Terms and Concepts
Kernel Module      A Kernel Module is a section of kernel code responsible for supporting a specific capability or feature. For example, file system types and device drivers are kernel modules.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Table 3-8 Important Terms and Concepts (Continued)
Modularly Packaged Module      A Modularly packaged Module is a Kernel Module whose configuration data has been modularized (not shared with other kernel modules), which is a pre-requisite for DLKM-enabling the Kernel Module.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Table 3-8 Important Terms and Concepts (Continued)
Loadable Module (DLKM Module)      A Loadable Module (or DLKM Module) is a Modularly packaged Module with the capability to be dynamically loaded into a running kernel.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Table 3-8 Important Terms and Concepts (Continued)
Dynamically Configured Loadable Module      A Dynamically Configured Loadable Module is a loadable module which has been fully configured to be dynamically loaded into or unloaded from the kernel without having to re-link the entire kernel or reboot the system.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) NOTE See the master (4) manpage for descriptions of the two kinds of master files, and the config (1M) manpage for descriptions of the traditional and modular system files. Kernel modules written as traditional modules are still fully supported in HP-UX. Driver developers are encouraged to re-package their static modules according to the module packaging architecture introduced with DLKM modules.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Auto loading occurs when the kernel detects a particular loadable module is required to accomplish some task, but the module is not currently loaded. The kernel automatically loads the module. DLKM Driver Loading Concepts When a module is dynamically loaded, its object file is read from disk and loaded into newly allocated kernel memory. Once in memory, the module's symbols are relocated and any external references are resolved.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) • A module may be unloaded only by a user level request specifying the module to be unloaded. The unload is accomplished through the kmadmin command. This request may fail for a number of reasons, the most common being that the module is busy at the time. An example of this would be attempting to unload a device while there are outstanding opens on the device.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Through the use of configurable module attributes, System Administrators can control the various functions of a DLKM driver, including whether it is dynamically loaded or statically configured. This section provides attributes and keywords for: • required components of a DLKM driver • optional components of a DLKM driver It also presents a brief description of STREAMS and Miscellaneous drivers.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) The $TUNABLE section defines the names and default values of the tunable parameters (variables) for the module. Default (and optionally minimum) values for tunable parameters are entered here. The $DRIVER_INSTALL section defines the module’s name and associated block and/or character major device number(s). system File Definition Every DLKM module requires a system file.
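A hypothetical master-file fragment (module name, values, and column layout are illustrative only; see master (4) for the exact format):
$TUNABLE
mydrv_max_bufs   64
$DRIVER_INSTALL
mydrv   -1   201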
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) kernel to resolve references to the absent module’s functions. Configuring a module that uses stubs requires a full kernel build so that the stubs can be statically linked to the kernel. Modstub.o contains stubs for entry points defined in the associated loadable module that can be referenced by other statically configured kernel modules currently configured in the system.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) The argument to the _load() function is not meaningful and should be ignored. DLKM Tools There are a number of HP-UX commands known collectively as the kernel configuration tool set for installing, configuring, and managing DLKM modules. These commands are presented with descriptions and applicable command line options in this section.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Kernel Configuration Tools Description The system administrator uses the kernel configuration tools to install, configure, load, unload, update, or remove kernel modules from the system; and to build new kernels. You can use the commands described in this tool set to configure kernel modules of any type (static or loadable).
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) NOTE If you need further information regarding the functionality, usage, or command line options for any of the kernel configuration tools, refer to their respective manpages. Table 3-9 Kernel Configuration Tool Set
Tool/Command    Action
config          • First form—generates both the static kernel and associated Dynamically Configured Loadable Modules; a system reboot is necessary.
kmadmin
kminstall
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Table 3-9 Kernel Configuration Tool Set (Continued)
Tool/Command    Action
kmsystem        • -c option—assigns a value (Y or N) to the configuration ($CONFIGURE) flag of the specified module in preparation for the next system configuration. • -l option—assigns a value (Y or N) to the loadable ($LOADABLE) flag of the specified module in preparation for the next system configuration.
kmtune
kmupdate
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) DLKM Procedures for Dynamically Configured Loadable Modules This section provides detailed procedures for configuring, loading, and unloading DLKM Enabled kernel modules. Procedural information is shown in three different ways. The first two are summary formats and the third provides detailed procedure steps. 1.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Figure 3-5 DLKM Procedural Flowchart. The flowchart starts by asking whether the module is to be dynamically or statically configured: prepare the module as a Dynamically Configured Loadable Module using the command kmsystem -c Y -l Y, or prepare it as a Statically Configured Loadable Module using the command kmsystem -c Y -l N. Optionally, tune the system parameter(s) supplied by the module or static kernel using the kmtune command.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Table 3-10 Dynamically Configured Loadable Module Procedures
Phase: Preparing
Configuration Option: Prepare Loadable Module as a Dynamically configured Loadable Module
Procedures: Prepare a loadable module for dynamic loading into the HP-UX kernel; Optional: Query and/or Tune the system parameters supplied by a loadable module; Configure a loadable module for dynamic loading; Register a Dynamically configured Loadable Module with the kernel
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Table 3-11 Statically Configured Loadable Modules Procedures
Phase: Preparing
Configuration Option: Prepare Loadable Module as a Statically configured Loadable Module
Procedures: Prepare a loadable module for static linking to the HP-UX kernel; Optional: Query and/or Tune the system parameters for a Statically configured Loadable Module present in the Static Kernel; Configure Kernel to include Statically configured Loadable Module
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) How To Prepare a Loadable Module for Dynamic Linking To prepare a loadable module to be dynamically loaded into the kernel, do the following: Step 1. Execute this kmsystem command: /usr/sbin/kmsystem -c Y -l Y module_name How to query and tune the system parameters supplied by a loadable module Use the kmtune command to query, set, or reset system (tunable) parameters used by the DLKM module or the static kernel.
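For example (tunable name illustrative; options as described in kmtune (1M)):
/usr/sbin/kmtune -q mydrv_max_bufs          (query one parameter)
/usr/sbin/kmtune -s mydrv_max_bufs=128      (assign a new value)
The -r option resets a parameter to its default.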
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Step 1. To configure a loadable module for dynamic loading, execute this config command: /usr/sbin/config -M module_name -u This results in the generation of a loadable image. The -u option forces config to call the kmupdate command, which causes the system to move the newly generated image into place and register it with the running kernel.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) • checks what other modules the loadable module depends upon and automatically loads any such module that is not currently loaded • allocates space in active memory for the specified loadable module • loads the specified loadable module from the disk and link-edits it into the running kernel • relocates the loadable module’s symbols and resolves any references the module makes to external symbols • calls the module’s _load
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) Step 2. To unload a dynamically configured loadable module by ID number, execute this kmadmin command: /usr/sbin/kmadmin -u module_id How to determine which modules are currently loaded: Use the -S or -s option of the kmadmin command to view detailed information about all currently registered DLKM modules. Step 1.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) • the module’s pathname to its object file on disk • the module’s status (LOADED or UNLOADED) • the module’s size • the module’s virtual load address • the memory size of Block Started by Symbol (BSS) (the memory size of the un-initialized space of the data segment of the module’s object file) • the base address of BSS • the module’s reference or hold count (the number of processes that are currently using the module)
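For example, to load a module and then review the registered modules (module name illustrative):
/usr/sbin/kmadmin -L krm
/usr/sbin/kmadmin -s
Both options are described in the kmadmin (1M) manpage.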
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) How To Query and Tune the System Parameters for a Statically Configured Loadable Module Present in the Static Kernel Use the kmtune command to query, set, or reset system (tunable) parameters used by the DLKM module or the static kernel. kmtune reads the master configuration files, the system description files, and the HP-UX system file. For a Modularly packaged module or a Traditionally packaged module using 11.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2) a. save the existing kernel file and its kernel function set directory as /stand/vmunix.prev and /stand/dlkm.vmunix.prev, respectively b. move the newly generated kernel file and its kernel function set directory to their default locations, /stand/vmunix and /stand/dlkm, respectively After the system reboots, your DLKM module will be available as statically configured in the new running kernel.
Configuring a System Reconfiguring the Kernel (Prior to HP-UX 11i Version 2)
Module type    A module type is distinguished by the mechanism used to maintain the modules of that type within the kernel. DLKM modules are classified according to a fixed number of supported module types.
Modwrapper     The additional code and data structures added to a DLKM module in order to make it dynamic.
PCI            Peripheral Component Interconnect. An industry-standard bus used on HP-UX systems to provide expansion I/O.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Reconfiguring the Kernel (HP-UX 11i Version 2) NOTE This section applies to releases of HP-UX starting with 11i version 2. See “Reconfiguring the Kernel (Prior to HP-UX 11i Version 2)” on page 282 for the procedures for releases prior to 11i version 2.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) • The running kernel configuration is automatically backed up before each configuration change (if desired). • The system automatically maintains a detailed log file of all kernel configuration changes. • Kernel modules and kernel tunable parameters now have descriptions associated with them. Kernel tunable parameters have online documentation, and descriptions of the relationships between them.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) The kconfig command is used to manage whole kernel configurations. It allows configurations to be saved, loaded, copied, renamed, deleted, exported, imported, etc. It can also list existing saved configurations and give details about them. For more information, see “Managing Saved Configurations with kconfig” on page 352 or the kconfig (1M) manpage. The kcmodule command is used to manage kernel modules.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Overview of the kcweb Tool kcweb is the web-based, user-friendly HP-UX kernel configuration tool. Using kcweb, you can configure and manage the kernel of your system without remembering the syntax of the kernel configuration commands or the exact names of modules and tunables.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Figure 3-6 Sample kcweb Display You can access kcweb in any of the following ways: • the command line with the kcweb command • the HP Service Control Manager (SCM) • the Kernel Config (kcweb) area of SAM • a web browser, using the URL of a kcweb server that has already been started By default, the kcweb command invokes the Mozilla web browser.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Other Kernel Configuration Operations Other sections below describe some special kernel configuration operations and special uses of the kernel configuration commands. The usage of some kernel resources can be monitored, with alarms delivered when usage rises above a set threshold. These alarms can be configured and reviewed using the kcalarm command or the kcweb tool.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) It is possible to have an undesirable, or even unbootable, kernel configuration because of mistaken configuration changes, hardware failures, or software defects. Mechanisms exist both to prevent such problems and to help recover from them. For more details see “Recovering from Errors” on page 364.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-12 Options Shared by Kernel Configuration Commands (Continued)
Each of the following options applies to kconfig, kcmodule, and kctune:
-D (Difference)    Display only elements for which there is a change being held for next boot.
-h (hold)          Hold the requested changes for next boot.
-K (Keep)          Do not back up the currently running configuration. Keep the existing backup unmodified.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) With a -P (Parse) option, the commands produce an output format designed to be parsed by scripts or applications. This format is described in “Parsing Command Output” on page 363. Scripts and applications must parse this output format, because HP supports release-to-release compatibility of output format only when the -P option is used.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Persistence of Changes By default, the kernel configuration tools will apply configuration changes to the currently running system, causing an immediate change in behavior. System administrators can override this default by specifying the -h (hold) option to any of the commands. This option causes the changes to be held until the system is rebooted. HP recommends that this option be used only when the next reboot is expected to happen soon.
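For example, to make a tunable change but defer it until the next reboot (tunable and value illustrative):
# kctune -h maxuprc=512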
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Managing Kernel Modules with kcmodule The kcmodule command is used to query and change the states of kernel modules, in the currently running configuration or in a saved configuration. The HP-UX kernel is built from a number of modules, each of which is a device driver, kernel subsystem, or some other body of kernel code. A typical kernel has 200-300 modules in it.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2)
Name         fcms [3E4741A9]
Description  Fibre Channel Mass Storage Driver
State        static (to resolve dependencies)
Capable      unused static
Depends On   module libfcms
             interface HPUX_11_23 1.0.0

Name         krs [3E47419F]
Description  Kernel Registry Service
State        static (required)
Capable      static
Depends On   module libkrs
             module libkrs_pdk
             interface HPUX_11_23 1.0.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Each kernel module in the currently running configuration has a state, which describes how the module is being used. The possible states are: unused The module is installed on the system but not in use. static The module is statically bound into the kernel executable. This is the most common state. Moving a module into or out of this state requires relinking the kernel executable and rebooting.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) best The system administrator chose to use the module, but didn’t choose a specific state, so the module is in its “best” state as determined by the module developer. auto The module was in auto state, and was automatically loaded when something tried to use it. required The module was marked required by its developer. depend The module is in use because some other module in the configuration depends on it.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) See the kcmodule (1M) manpage for details. When you change a module state using a command as in the above examples, the change will be made immediately to the currently running system, if possible. Sometimes it’s not possible to make the change immediately; for example, there might be a CD file system mounted, in which case cdfs can’t be unloaded. In those cases, kcmodule will hold the change and apply it at next boot.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2)
• view details about a module
• modify the state of a module
You can view the modules pane by choosing the modules menu item from the navigation column in kcweb. Figure 3-7 kcweb modules Getting Information about Modules To get more detailed information about a particular module, execute the following two steps:
• Select the modules menu item in the navigation column.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) • Select a module to view the details about a particular module in the details pane. Interpreting Module Information If you choose a module, the module details screen is displayed.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-14 Module Details Fields (Continued)
Field Name        Description
version           indicates the version of the module
state             indicates the state of the module in the kernel that is currently running (unused, static, loaded, auto)
cause             indicates the reason why the module is in its current state (explicit, auto, depend, required, default)
next boot         indicates the state of the module after the system is restarted
next boot cause   indicates the reason for the module's next boot state
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) NOTE If the cause is depend or required, the modify module state button will not appear, as kcweb does not allow modifications to the state of a required module or a module on which other modules depend.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-15 kcweb modify module state Fields (Continued)
Field Name      Description
version         version number of the module
state           the current state of the module
cause           how the module got into its current state
next boot       the state that the module will be changed to if you click the ok button
capabilities    all the states that the module can support
dynamic         indicates whether the module is a dynamically loadable kernel module
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) System administrators can create their own “user-defined” tunables if they choose. These will not affect the operation of the system directly, but they can be used in computing the values of other tunables. For example, an administrator could choose to create a num_databases tunable, and then set several kernel tunables based on its value.
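A sketch of that example (values illustrative):
# kctune -u num_databases=4
# kctune 'nfile=num_databases*1000'
The -u option creates the user-defined tunable (see Table 3-24); the second command sets a kernel tunable to an expression that refers to it.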
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2)
             Maximum number of processes for each non-root user
nproc        4200     Default  Immed
             Maximum number of processes on the system
The -g option adds the name of the module defining the tunable, and sorts the output by module name. This has the effect of grouping related tunables together in the output.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) For more information on the -P option and its use by scripts or programs, see “Parsing Command Output” on page 363, or the kconfig (5) manpage. Interpreting Tunable Information Looking at the sample output above, you can see that each tunable has a name and a textual description. Each tunable is associated with a kernel module whose name is listed in the verbose output (or in the table output if -g is specified).
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) system is free to choose the value it thinks optimal, and to change it as needed. HP recommends that tunables be left set to default unless the default is known to be unsatisfactory. Note: setting a tunable to Default is not the same thing as setting it explicitly to the default value reported by kctune. Using the example above, if you set nproc to 4200, its value will remain 4200 until you change it.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) To set a tunable to Default, either of these assignments will work. (Setting a user-defined tunable to Default causes it to be removed.) # kctune nproc= # kctune nproc=default Assignments can be to expressions, as noted above. Note that the assignment may need to be quoted to avoid interpretation by the shell.
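For example (expression illustrative):
# kctune 'maxuprc=nproc/2'
Without the quotes, the shell might interpret characters in the expression before kctune sees them.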
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Changes to saved kernel configurations can be made by using the -c (configuration) option. Such changes are made to the saved configuration immediately, but they won’t affect the running system until that saved configuration is either loaded or booted. See “Managing Saved Configurations with kconfig” on page 352 for more information.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Figure 3-10 kcweb tunables Getting Information About Tunables To get more detailed information about a particular tunable, execute the following two steps: Step 1. Select the tunables menu item in the navigation column. The tunables pane is displayed, which lists all the tunables that are currently configured on your system. Step 2. Select a tunable to view the details about a particular tunable in the details pane.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Interpreting Tunable Information If you choose a tunable, the tunable details pane (Figure 3-11) is displayed.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-16 kcweb tunables details (Continued)
Field Name              Description
module                  indicates the name of the module (if any) that the tunable is associated with
current                 indicates the current maximum value for the resource
next boot (expression)  indicates a formula describing the next boot value (Note: this can also be an integer)
next boot (integer)     indicates the planned value, with all formulae computed
last boot value         indicates value of the tunable when the system was last booted
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) The “modify tunable” page (Figure 3-12) is displayed: Figure 3-12 kcweb modify tunable The modify tunable page contains the following fields: Table 3-17 kcweb tunables details Fields
Field Name    Description
tunable       indicates the name of the tunable that will be modified
description   indicates a description of the tunable
module        indicates the kernel module that the tunable is associated with
current       indicates the current value of the tunable
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-17 kcweb tunables details Fields (Continued)
Field Name              Description
next boot (expression)  a formula describing the next boot value (can be an integer)
next boot (integer)     indicates the calculated value of the user input field “next boot”; may need to be refreshed by clicking the recalculate button
last boot value         indicates value of the tunable when the system was last booted
default                 this is the default value of the tunable;
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Monitoring Kernel Resource Usage Some tunable parameters represent kernel resources whose usage can be monitored. For these tunables, you can set alarms to notify you when the usage of the corresponding kernel resource crosses a threshold you specify. Getting Information about Alarms with kcweb To get more detailed information about a particular alarm using kcweb, execute the following two steps: Step 1.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Figure 3-13 kcweb alarms
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Interpreting Alarms Information with kcweb If you choose an alarm, the alarm details pane is displayed.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-18 kcweb alarms detail Fields (Continued)
Field Name         Description
present usage      indicates the percentage of resource being consumed at the previous polling
event type         indicates the event notification to be used
polling interval   indicates the time interval between polling
notification       indicates the method used to notify about alarm triggering
notification data  indicates supplementary information used by the notification method
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) The modify alarm page is displayed: Figure 3-15 kcweb modify alarm The modify alarm page contains the following fields: Table 3-19 kcweb modify alarm Fields
Field Name    Description
tunable       indicates the name of the tunable for which the alarm will be modified
threshold     indicates the percent at which the alarm is to trigger
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-19 kcweb modify alarm Fields (Continued)
Field Name    Description
event type    displays the checkboxes that determine when notifications are to be sent: initial (first polling at which resource usage exceeds threshold; when an alarm is first added, activated, deactivated, or the system reboots).
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) usage tables (including top consumers) for supported kernel tunables. These data also enable usage graphs in the kcweb tool. Monitoring is turned on by default when the kcweb tool is installed. For more information, see the kcalarm (1M), kcmond (1M), and kcusage (1M) manpages.
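For example, to review usage data for one monitored tunable (tunable name illustrative):
# kcusage nproc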
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Getting Information about Saved Configurations When you run kconfig with no options, it shows you the saved configurations on your system. There will always be a saved configuration called backup, which is automatically maintained by the system; any other saved configurations on the system will also be listed. (For more information on the backup configuration, see “Recovering from Errors” on page 364.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2)
# kconfig -P name,title
name   backup
title  Automatic Backup
name   day
title  Configuration for daytime multiuser processing
name   night
title  Configuration for nighttime batch processing
For more information on the -P option and its use by scripts or programs, see “Parsing Command Output” on page 363, or the kconfig (5) manpage.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Using and Modifying Saved Configurations Creating Saved Configurations Saved kernel configurations can be created in three ways: by saving the currently running configuration, by copying an existing saved configuration, or by reading a system file. To save the currently running configuration, use kconfig -s (save).
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Several options of kconfig allow other changes to saved configurations. The -r (rename) option will rename a saved configuration. (The backup configuration cannot be renamed.) The -t option will change the title on a saved configuration. The -d (delete) option will delete a saved configuration.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) NOTE /stand/system, and any system file created by exporting the running configuration, always reflects any changes that are being held for next boot. Once you have a system file, you can edit it using any text editor, making the changes you desire. After editing it, you can apply the changes with the kconfig -i (import) command.
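For example (file name hypothetical):
# kconfig -i /stand/system.custom
This applies the settings in the edited system file to the currently running configuration, holding anything that cannot be applied immediately until the next boot.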
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Most changes made in system files can be made using the kernel configuration commands, and vice versa.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) mk_kernel. By contrast, each invocation of one of the kernel configuration commands applies changes separately (although multiple changes listed on the same configuration command line are applied together). Applying multiple changes together is particularly valuable when modules are moved into or out of static state, because each command that does this will run for quite a while.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Primary Swap Device Each kernel configuration is allowed to have a primary swap device specification. In essence, this specifies which disk volume should be used by the system for paging. At present, only the primary swap device is specified using the kernel configuration mechanisms; other swap devices, if desired, are configured after boot using the swapon command or system call, or through entries in /etc/fstab.
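For example, to enable an additional swap device after boot (device path hypothetical):
# /usr/sbin/swapon /dev/vg00/lvol8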
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Dump Devices Each kernel configuration is allowed to have any number of dump devices. These are devices to which a system crash dump should be written, if a system crash occurs. The dump devices specified in the kernel configuration are typically only used during the boot process; once the boot process completes, the system uses the dump devices specified in /etc/fstab instead. See crashconf (1M) for more details.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) specify an explicit attachment of the device to the driver in question. Most installations have no need to specify explicit device driver specifications. Explicit device driver bindings are specified in a system file as lines with the following form: driver deviceID drivername The deviceID is the identification of the hardware device in question.
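A hypothetical example of such a line (hardware path and driver name illustrative):
driver 0/0/2/0.6.0 sdisk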
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Figure 3-16 kcweb change log viewer Parsing Command Output Improvements to HP-UX often require changes in the output formats of commands like those described here. This can be troublesome when applications or scripts have been written that parse the outputs of those commands.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) CAUTION HP reserves the right to change the other output formats of these commands at any time. HP will not support applications and scripts that parse the output of these commands unless they use the -P option. The -P option of each of these commands takes a list of field names, identifying the fields that the application wants to have appear in the output.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) The Automatic backup Configuration The system automatically maintains a saved configuration called backup. Generally, any time you use the kernel configuration tools to make a change to the currently running configuration, the previous (pre-change) configuration is saved to backup. Therefore the backup configuration is somewhat like the “undo” command in a word processor.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) If you want to disable the automatic replacement of the backup configuration for a particular change, specify -K. If you want to force an automatic replacement of the backup configuration, specify -B (Backup). These options work with any kernel configuration command that makes configuration changes. Booting a Saved Configuration In extreme circumstances, a mistaken configuration change can result in a kernel configuration that won’t boot.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) To boot an Itanium-based system in fail-safe mode, get to the HPUX> prompt as described above and type HPUX> boot -tm To boot a PA-RISC system in fail-safe mode, get to the ISL> prompt as described above and type ISL> hpux -f0x40000 (The two methods can be combined, if you want to boot a saved configuration in fail-safe mode.)
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2)
• ✓ Load a known good configuration using kconfig -l. Try the backup configuration first.
Else (your system is down):
❏ If you have had a hardware failure and now the system won’t boot or if you need to preserve the bad configuration:
• Try booting in fail-safe mode (see above).
• Repair the configuration or the hardware, then reboot.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) The manual for “Prophet” tells Susan to set the maxdsiz tunable to at least 0.5 TB, to set the semmni tunable to 3000, and to add 50 to whatever value she’s using for shmmni. Being a security-minded system administrator, she knows she also wants to turn on the Intrusion Detection System by setting the enable_idds tunable. Susan starts by looking at the current values of these tunables, and the descriptions of the ones she’s unfamiliar with.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2)
# kcmodule -d idds
Module   State    Cause   Description
idds     unused           Intrusion Detection Data Source
# kcmodule -C "Add Intrusion Detection to the kernel." idds=best
WARNING: The requested changes cannot be made to the running system. They will be held until next boot.
* The automatic 'backup' configuration has been updated.
* Building a new kernel for configuration 'nextboot'...
* Adding version information to new kernel...
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Console Login: root Password: Please wait...checking for disk quotas ... WARNING: YOU ARE SUPERUSER !! After the reboot, Susan saves the new kernel configuration under the name good, so that she can go back to it if needed. She gives it a title to help recognize it later. # kconfig -C "Good configuration for Prophet" -s good * The current configuration has been saved to 'good'.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) While Susan’s on vacation, her colleague, Fred, decides to use the machine for billing software during the night. This software needs to execute code on the stack (a security risk), so he enables that behavior (which is prohibited by default). No reboot is needed to do so.
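The change would look something like this (assuming the executable_stack tunable, which governs this behavior and can be set without a reboot):
# kctune executable_stack=1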
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Since good isn’t a very helpful name for Susan’s configuration anymore, Fred renames it to day. He checks the list of configurations to make sure everything looks OK. # kconfig -r good day * The configuration 'good' has been renamed to 'day'.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Change to configuration 'current' at 21:55:49 PST on 02 February 2003 by root: Configuration loaded from 'day'. ====================================================================== Change to configuration 'current' at 21:56:09 PST on 02 February 2003 by root: Configuration loaded from 'night'. She can see that Fred has put a new application on her server, and worse, an insecure one. At least he tested and documented his changes.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Kernel Configuration Quick Reference Tables Table 3-21 Working with Kernel Configurations
Procedure                                        Command
Choose the configuration to boot...
...before the reboot (a)                         kconfig [-f]
...at the boot loader prompt (Itanium-based)     boot configname
...
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-22 Working with System Files (Continued)
Procedure                                        Command
...update the currently running configuration    kconfig [-fhV] -i filename
a. Includes any changes being held for next boot.
b. mk_kernel can also be used for this purpose.
Table 3-23 Working with Changes Held for Next Boot
Note: kconfig -i, kcmodule, and kctune hold their changes until next boot if they can’t be applied immediately, or if -h is specified.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-24 Working with Tunables (Continued)
...apply change to saved configuration              -c configname
...create user-defined tunable                      -u
Table 3-25 Working with Kernel Modules
List modules and their states...                    kcmodule [module] ...
...verbose output                                   -v
...only modules with changes held for next boot     -D
...include required modules                         -a
...in a saved configuration                         -c configname
Add a module to the configuration...                ...
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-26 Working with the Kernel Configuration Log File (Continued)
...while making a change with a kc* command          add -C "comment" to the change command
...without making a configuration change             kclog -C "comment"
View the last n entries in the log (default is 1)... kclog n
...counting only changes to a configuration          -c configname
...counting only changes of a particular type        -t module|tunable|device
...
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Transition from Previous HP-UX Releases Experienced administrators of previous releases of HP-UX will find some aspects of the 11i v2 kernel configuration mechanisms unfamiliar. However, many of the underlying concepts are unchanged. The tables in this section give information to help administrators translate from the old kernel configuration mechanisms to 11i v2.
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-29 Commands and Options
Older HP-UX Command/Option    HP-UX 11i version 2
config (without -M)           mk_kernel (a)
config -M                     No longer needed
kmadmin -b                    No longer needed
kmadmin -k                    kcmodule (b)
kmadmin -L modulename         kcmodule modulename=loaded (b)
kmadmin -U modulename         kcmodule modulename=unused (b)
kmadmin -u module_id          kcmodule modulename=unused (b)
kmadmin -q module_id          kcmodule -v modulename (b)
kmadmin -Q modulename         kcmodule -v modulename (b)
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-29 Commands and Options (Continued)
Older HP-UX Command/Option    HP-UX 11i version 2
kmsystem -c n modulename      kcmodule modulename=unused (b)
kmsystem -q modulename        kcmodule -v modulename (b)
kmtune (no options) (c)       kctune (d)
kmtune -l                     kctune -v (d)
kmtune -q tunable             kctune tunable (d)
kmtune -r tunable             kctune tunable=Default (d)
kmtune -u -s tunable=value    kctune tunable=value (d)
kmtune -u -s tunable+value    kctune tunable+=value (d)
Configuring a System Reconfiguring the Kernel (HP-UX 11i Version 2) Table 3-30 Files and Directories
Older HP-UX File/Directory                                             HP-UX 11i version 2
Currently running kernel: /stand/vmunix                                /stand/vmunix
Backup kernel: /stand/vmunix.prev                                      Backup configuration: backup (a)
Test kernel: /stand/build/vmunix_test (default output of mk_kernel)    Test configuration: hpux_test (b)
Primary system file: /stand/system                                     /stand/system (b)
Module system files: /stand/system.d/*                                 No longer used.
Configuring a Workgroup 4 Configuring a Workgroup This section deals with the tasks you need to do to configure a new system into the network and the workgroup, and to set up shared access to resources such as files and printers and services such as mail and backups: • “Installing New Systems” on page 384 • “Adding Users to a Workgroup” on page 388 • “Implementing Disk-Management Strategy” on page 393 • “Sharing Files and Applications via NFS and ftp” on page 394 • “Adding PC/NT Systems into the
Configuring a Workgroup Installing New Systems Installing New Systems Most HP systems are delivered with the operating system already installed on the root disk; this is called instant ignition. See “Starting A Preloaded System” on page 138. If you ordered your system without instant ignition, you will have to install HP-UX from a CD-ROM or DDS tape. Read the HP-UX installation guide for your version of HP-UX to guide you through the installation process.
Configuring a Workgroup Installing New Systems Configuring /etc/hosts You can use any text editor to edit the /etc/hosts file. If you are not running BIND or NIS, you can use SAM. Step 1. If no /etc/hosts file exists on your system, copy /usr/newconfig/etc/hosts to /etc/hosts, or use ftp to copy another system’s /etc/hosts file to your system. See the ftp (1) manpage for more information. Step 2. Make sure the /etc/hosts file contains the following line: 127.0.0.1 localhost loopback Step 3.
Configuring a Workgroup Installing New Systems • root password • optional parameters: — subnet mask — IP address of a Domain Name Server — Network Information Service (NIS) domain name • whether to make the system a font client or font server You can reset networking parameters at any time by running /sbin/set_parms again and rebooting the system. See “Manually Setting Initial Information” on page 268 for a list and description of the set_parms options.
Configuring a Workgroup Installing New Systems For example, to allow system ws732 to send a window to system wszx6, enter: xhost +ws732 on system wszx6. Configure New Systems into a Workgroup To configure a new system into a workgroup, do the following tasks: • Set up NFS mounts to allow the system’s users to share working directories. See “Adding a User to Several Systems: A Case Study” on page 389 or “Sharing Remote Work Directories” on page 388.
Configuring a Workgroup Adding Users to a Workgroup Adding Users to a Workgroup This section includes the following topics: • • • • • “Accessing Multiple Systems” on page 388 “Sharing Remote Work Directories” on page 388 “Local versus Remote Home Directories” on page 389 “Adding a User to Several Systems: A Case Study” on page 389 “Exporting a Local Home Directory” on page 391 Accessing Multiple Systems If a user has an account with the same login on more than one system, (for example, if the user’s $HOM
Configuring a Workgroup Adding Users to a Workgroup Local versus Remote Home Directories Users can have their home directory on their own local system or on a remote file server. The advantage of keeping all users’ home directories on one file server is that you can back up all the accounts at one time. If a user’s home directory is on a remote server, you may want to create a minimal home directory on the local system so that a user can still log into the local system if the server is down.
Configuring a Workgroup Adding Users to a Workgroup Before beginning, make sure Tom’s login name has a uid number that is unique across the systems he is going to use. (Your network administrator may have a program to ensure uniqueness of uid numbers.) Then create an account for Tom on the file server, flserver. See “Adding a User to a System” on page 245. Then do the following procedure: Step 1. On the file server, export Tom’s home directory and the projects directory where he does his work: a.
Configuring a Workgroup Adding Users to a Workgroup a. Create Tom’s account. See “Adding a User to a System” on page 245. If Tom’s login has already been set up on another system (for example on flserver) you may want to cut the line from flserver’s /etc/passwd file and paste it into the /etc/passwd file on wsb2600 to ensure that Tom’s account has the same uid number on both systems. b. Create empty directories for the file systems to be imported.
Configuring a Workgroup Adding Users to a Workgroup exportfs -a Step 2. On the remote system, do the following: a. Create an empty directory: mkdir /home/lisa b. Add entry to /etc/fstab: wsj6700:/home/lisa /home/lisa nfs rw,suid 0 0 c. Mount all directories: mount -a See “Exporting a File System (HP-UX to HP-UX)” on page 395 for more information.
Configuring a Workgroup Implementing Disk-Management Strategy Implementing Disk-Management Strategy One or more of the topics below should be useful when you are adding disk capacity to the workgroup, whether you are adding a new disk (or disks), a new server system, or a new workstation with a local disk (or disks). • Quick reference for “Adding a Disk” on page 861. • “Distributing Applications and Data” on page 61 Suggestions on how to distribute disk storage in your workgroup.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp Sharing Files and Applications via NFS and ftp This section provides procedures and troubleshooting information for Network File System (NFS) and File Transfer Protocol (ftp). ❏ NFS allows a computer access to a file system that resides on another computer’s disks, as though the file system were mounted locally.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp Exporting a File System (HP-UX to HP-UX) Use either of the following procedures to set up NFS exports on the server. • “Using SAM to Export a File System” on page 395 • “Using the Command Line to Export a File System” on page 395 Using SAM to Export a File System Step 1. Log in to the server as root. Step 2. Run SAM: enter sam on the command line. Step 3.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp b. Run the nfs.server script: /sbin/init.d/nfs.server start Step 3. Edit /etc/exports, adding an entry for each directory that is to be exported. The entry identifies the directory and (optionally) the systems that can import it.
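For example (directory and host names hypothetical):
/opt/myapp  -ro,access=wsb2600:wszx6
/home/tom   -access=wsb2600
After editing /etc/exports, run /usr/sbin/exportfs -a so the server re-reads the file and exports everything listed in it.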
Configuring a Workgroup Sharing Files and Applications via NFS and ftp NOTE Files in the local directory will be overlaid, but not overwritten, when you import the remote directory. The local files will be accessible again once you unmount the remote directory. • Make sure that the client has permission to import the file system from the server. This requires an entry in /etc/exports on the server; see Step 3 under “Using the Command Line to Export a File System” on page 395.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp Table 4-1 Deciding Which type of NFS Mount to Use Ordinary NFS Mounts — Use an ordinary NFS mount when you would like the mounted file system to always remain mounted. This is useful when the mounted file system will be frequently accessed. Automatically mounted NFS file systems — Use an automatically mounted NFS file system when you want the file system to be mounted only when it is actively being used.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp Using SAM to Import a File System Step 1. Log in to the client as root. Step 2. Run SAM. Enter: sam on the command line. Step 3. Enable NFS client services if necessary: Choose “Networking and Communications/Network Services/NFS Client”, then pull down the “Actions” menu and choose “Enable”. Step 4.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp NOTE You do not have to call the directory on the client by the same name it has on the server, but it will make things simpler (more transparent) for your users if you do. If you are running applications configured to use specific path names, you must make sure those path names are the same on every system on which the applications run.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp CIFS/9000 CIFS/9000 provides HP-UX with a distributed file system based upon Microsoft’s CIFS (Common Internet File System) protocol, also known as the SMB (Server Message Block) protocol. The SMB protocol is the native file-sharing protocol in Microsoft Windows and OS/2 operating systems and is the standard way that millions of PC users share files across corporate intranets.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp NOTE A DiskAccess evaluation package is supplied with HP Vectra XW Graphics workstations as of May 2, 1997. For other systems, a free one-month evaluation package is available on the Web at http://www.ssc-corp.com/nfs. Installation Install DiskAccess from CD onto the NT workstation and follow prompts. Reboot the workstation when directed to do so. Exporting a File System from an HP-UX Server Do the following on the HP-UX server.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp CAUTION If you dial in to the server using a variable IP address for the NT client, and the server lists the client’s host name explicitly in /etc/exports, the lookup will fail because the IP address will not match. You need to export the directory without restrictions (no host names in /etc/exports). If you modified /etc/exports, force the system to re-read it: /usr/sbin/exportfs -a Now do the following on the NT Client. Step 1.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp Troubleshooting NFS Table 4-2 Problem Individual client can’t import from one or more servers What To Do Check the following on the client: • Verify that the local directory exists on the client. If it does not exist, create it using mkdir. For example: mkdir /opt/adobe • LAN cable intact and connected, and all connections are live. • /etc/hosts exists and has “Requisite Entries” on page 407.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp Table 4-2 (Continued) Problem All clients can’t import from a given server What To Do Do the following on the server: • Check that the server is up and running, and that the LAN connection between the server and clients is live (can you “ping” the clients from the server and vice versa?) • Check that rpc.mountd is running: ps -ef | grep rpc.mountd If rpc.mountd is not running (symptom RPC-PROG NOT REGISTERED), run it: /usr/sbin/rpc.mountd
Configuring a Workgroup Sharing Files and Applications via NFS and ftp Table 4-2 (Continued) Problem All clients can’t import from a given server (cont’d) What To Do On the server (cont’d): • exportfs -a (to force the server to re-read /etc/exports and export the file systems specified in it). Stale NFS file handle (Common on NFS clients after server has crashed, or been rebooted before clients have unmounted NFS file systems, or after /etc/exports has been changed on the server).
Configuring a Workgroup Sharing Files and Applications via NFS and ftp Requisite Entries The following entries are required in /etc/hosts, /etc/fstab, and /etc/resolv.conf: • /etc/hosts: — System host name and IP address, for example: 12.0.14.123 fredsys fredsys.mysite.myco.com — An entry similar to the following: 127.0.0.1 localhost loopback
Configuring a Workgroup Sharing Files and Applications via NFS and ftp What To Do A. When the Domain Name Server Goes Down If a system powers up before the Domain Name Server does, it will not find the name server and you will get the message: rcmd: hostname: Unknown host when the user tries to reach another system. The simplest solution is to reboot the system after the name server has been rebooted. B.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp Moving or Reusing an Exported Directory If you rename an NFS-mounted directory, NFS clients must unmount and remount the imported directory before they can see the new contents. For example, if a server is exporting /opt/myapp, and you move /opt/myapp to /opt/myapp.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp mkdir /home/ftp b. Create the subdirectory /usr/bin under the ftp home directory, for example: cd /home/ftp mkdir usr cd usr mkdir bin Step 3.
Configuring a Workgroup Sharing Files and Applications via NFS and ftp Step 8. In all entries in /home/ftp/etc/group, replace the password field with an asterisk (*): users:*:20:acb guest:*:21:ftp Step 9. Change the owner of the files in ~ftp/etc to root, and set the permissions to read only (mode 0444): chown root /home/ftp/etc chmod u=r,g=r,o=r /home/ftp/etc Step 10. Create a directory pub under ~ftp, and change its owner to user ftp and its permissions to writable by all (mode 0777).
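The commands for this step would look something like the following (assuming, as above, that the ftp home directory is /home/ftp):

cd /home/ftp
mkdir pub
chown ftp pub
chmod 0777 pub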
Configuring a Workgroup Sharing Files and Applications via NFS and ftp If inetd is not running, start it: /usr/sbin/inetd It is also possible that the ftp service is disabled. Check /etc/inetd.conf.
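In /etc/inetd.conf, an enabled ftp service is typically a line of this form (a sketch; see inetd.conf (4) and ftpd (1M) for the exact form on your system):

ftp  stream tcp nowait root /usr/lbin/ftpd  ftpd -l

If the line is commented out (preceded by a #), remove the # and tell inetd to re-read its configuration file:

/usr/sbin/inetd -c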
Configuring a Workgroup Adding PC/NT Systems into the Workgroup Adding PC/NT Systems into the Workgroup
• “Hardware Connections” on page 413
• “Configuring HP-UX Systems for Terminal Emulation” on page 414
  ❏ “telnet” on page 414
  ❏ “Other Terminal Emulators” on page 417
• “Configuring HP-UX Systems for File Transfer” on page 417
  ❏ “ftp (File Transfer Protocol)” on page 417
• “Mounting File Systems Between HP-UX and PCs” on page 432
Hardware Connections Adding a personal computer (PC) to a workgroup requires a physical network connection and the appropriate software at both ends. When planning the connection, consider the following:
Configuring a Workgroup Adding PC/NT Systems into the Workgroup • How often you plan to access the data on the PC (occasionally? frequently? constantly?) • The type of data you want to exchange (ASCII text? graphics? sound? video?) • How you will exchange the data (file transfer? shared windowing environment? electronic mail?) Configuring HP-UX Systems for Terminal Emulation The primary reason for having a computer in a workgroup (regardless of what type of computer it is) is so that its users can access the data, applications, and services of the other systems in the group.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup Step 2. Make sure that the PC is running telnet server software. a. Install a version of telnet server software. NOTE Microsoft’s Windows NT 4.0 operating systems do not initially include telnet server software. Commercial and shareware versions of telnet server software are available from a variety of sources. b. Configure and start the telnet server software according to the instructions that come with it. Step 3.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup Using Telnet to Log in to an HP-UX System from a PC Step 1. Make sure that the PC is running, and reachable via your network. a. Turn on the PC and boot up the Windows NT operating system. b. Make sure that your PC has networking services configured, and has a network address (IP address). Step 2. Make sure that the telnetd daemon is running on your HP-UX system. The telnetd daemon is not usually run directly.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup b. Clicking on the “Remote System ...” menu item from the connect menu. c. Entering the name of your HP-UX system in the “Host Name” field of the resulting dialog box (leave the “Port” field set to “telnet”). d. Clicking on the “Connect” button in the lower-left corner of the dialog box.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup ftp Server Software Shipped as part of the Windows NT 4.0 operating systems for PCs (but not necessarily installed initially) are a group of utilities collectively known as the “Microsoft Peer Web Services.” One of the services in this collection is an “ftp publishing service” that enables you to ftp files to and from your PC while sitting at one of your HP-UX systems. This service is the ftp server that runs on your PC.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup TROUBLESHOOTING INFORMATION If the connection is not successful, ftp will let you know that the connection failed. The displayed error message will vary depending on the cause of the failed connection: ❏ ftp: connect: Connection refused The most likely cause of this message is: ✓ Problem: The ftp publishing service on the Windows NT-based PC is not running (has not been started). Solution: Start the ftp server on the PC.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup TROUBLESHOOTING INFORMATION ❏ ftp: vectrapc1: Unknown host Possible causes of this error message include: ✓ Problem: You typed the name of your PC incorrectly. Solution: Verify that you entered the name of your PC correctly in the open command. Depending on where in your network structure the PC is located with respect to your HP-UX system, it might be necessary to fully qualify the PC name.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup This message is actually a login prompt, and there are several ways to respond to it: ❏ Hit Return to accept the default response In the above example, there are three parts to the displayed prompt: 1. The word “Name” 2. The network name for your PC (“vectrapc1.net2.corporate”) 3. The default user name (“userx”); this is usually the name of the HP-UX account that you were using when you issued the ftp command in Step 1.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup to network eavesdroppers), ftp provides a way to access a remote computer using what is known as an “anonymous login”. To use this feature, enter the word “anonymous” at the prompt: Name (vectrapc1.net2.corporate:userx): anonymous You will then be prompted to enter a password in a special way: 331 Anonymous access allowed, send identity (e-mail name) as password.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup a. For ASCII (plain text) files, set the transfer mode using ftp’s ascii command: ftp> ascii This enables character conversions such as end-of-line carriage return stripping to occur (See “ASCII End-of-Line Problems” on page 132). b. For binary files (graphics files, sound files, data base files, etc.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup On the HP-UX System - Sending a File to the PC Once you have made a connection and logged in to the PC from your HP-UX system (See “Establishing an ftp Connection from HP-UX to a PC” on page 418) you are ready to transfer a file to the PC. Step 1. Locate the file you want to send.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup Step 3. Transfer the file using ftp’s send command. Example 1 To send the ASCII file “phone.dat” (located in the “/var/tmp” directory on your HP-UX system) to the PC: ftp> lcd /var/tmp ftp> ascii ftp> send phone.dat — OR — ftp> ascii ftp> send /var/tmp/phone.dat Example 2 To send the graphics file “roadmap.jpg” from the current working directory: ftp> binary ftp> send roadmap.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup Connected to flserver.net2.corporate. 220 flserver FTP Server (Version 1.7.111.1) ready. If your connection succeeded, proceed to Step 3. If the connection is not successful ftp will let you know that the connection failed.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup ✓ Problem: Your HP-UX system is not currently reachable on the network. Solution: Make sure that your HP-UX system is physically connected to the network and that there are no network outages or breaks between your PC and your HP-UX system. ❏ ftp: flserver: Unknown host Possible causes of this error message include: ✓ Problem: You typed the name of your HP-UX system incorrectly.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup When you have successfully connected to your HP-UX system, another message will follow the “Connected to...” message: Name (flserver.net2.corporate:(none)): This message is actually a login prompt, and there are several ways to respond to it: ❏ Enter a valid account name for your HP-UX system. You will then be prompted to enter the password for the account.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup After successfully entering the HP-UX account information you will be logged in to your HP-UX system and placed in the directory designated as the ftp-root directory.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup TIP If you are unsure of the format of the file you are transferring (ASCII or binary), set the file type to “binary”. ASCII files will not be corrupted if transferred in binary mode; however, end-of-line character stripping will not occur (See “ASCII End-of-Line Problems” on page 132). Step 3. Transfer the file using ftp’s get command. Example 1: to retrieve the ASCII file “phone.dat”:
ftp> ascii
ftp> get phone.dat
Configuring a Workgroup Adding PC/NT Systems into the Workgroup a. For ASCII (plain text) files, set the transfer mode using ftp’s ascii command: ftp> ascii This enables character conversions such as those that handle the differences between how the ends of lines are handled between differing types of operating systems (See “ASCII End-of-Line Problems” on page 132). b. For binary files (graphics files, sound files, database files, etc.
Configuring a Workgroup Adding PC/NT Systems into the Workgroup Mounting File Systems Between HP-UX and PCs Yet another way of sharing data between HP-UX systems and PCs is to share an HP-UX file system between them using PCNFS. For an example of how to do this see “Third-Party Products” on page 401.
Configuring a Workgroup Configuring Printers for a Workgroup Configuring Printers for a Workgroup This section deals with configuring printers according to two methods: the traditional UNIX LP spooler and the HP Distributed Print Server (HPDPS). • “Configuring Printers to Use the LP Spooler” on page 433 • “Configuring Printers to Use HPDPS” on page 444 For conceptual information about print-management topics, see “Planning your Printer Configuration” on page 105.
Configuring a Workgroup Configuring Printers for a Workgroup Initializing the LP Spooler Before you can use the LP spooler, you must initialize it. Using SAM If you use SAM to add a printer, SAM will prompt you to initialize the LP spooler. Using HP-UX Commands You can use HP-UX commands to initialize the LP spooler by following these steps: Step 1. Add at least one printer to the LP spooler. See “Adding a Local Printer to the LP Spooler” on page 434. Step 2.
Configuring a Workgroup Configuring Printers for a Workgroup Using HP-UX Commands Step 1. Ensure that you have superuser capabilities. Step 2. Stop the LP spooler: /usr/sbin/lpshut For more information, see “Stopping and Restarting the LP Spooler” on page 703. Step 3. Add the printer to the LP spooler. For example: /usr/sbin/lpadmin -plocal_printer -v/dev/lp -mHP_model -g7 See lpadmin (1M) for details on the options. See “Printer Model Files” on page 109 for choices for the -m option. Step 4.
Configuring a Workgroup Configuring Printers for a Workgroup Adding a Remote Printer to the LP Spooler To familiarize yourself with remote spooling concepts, see “Remote Spooling” on page 108. The easiest way to add a printer to a remote system is to run SAM. If you elect to use HP-UX commands, review the SAM procedure, Step 4, as this information will also be required when performing the task manually. Using SAM NOTE SAM does not verify that an actual printer exists on a remote system.
Configuring a Workgroup Configuring Printers for a Workgroup a. Edit /etc/services (on remote system), and if necessary, uncomment the line beginning with printer by removing the #. b. Ensure no systems are restricted from access by /var/adm/inetd.sec. c. Make sure rlpdaemon is running. Using HP-UX Commands Step 1. Ensure that you have superuser capabilities. Step 2. Stop the LP spooler: /usr/sbin/lpshut For more information, see “Stopping and Restarting the LP Spooler” on page 703. Step 3.
Configuring a Workgroup Configuring Printers for a Workgroup Step 6. Enable the newly added printer to process print requests. For example: /usr/bin/enable local_printer Step 7. Restart the LP spooler to process print requests. /usr/sbin/lpsched Step 8. Send a sample print job to the printer. • If it prints, the remote printing daemon (rlpdaemon) is active on the system and your task is completed.
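One simple way to send such a sample job (the destination name follows the example above; /etc/motd is just a convenient small text file):

/usr/bin/lp -d local_printer /etc/motd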
Configuring a Workgroup Configuring Printers for a Workgroup Adding a Network-Based Printer Using SAM You can use SAM to add a network-based printer that uses the HP JetDirect Network Interface. The HP JetDirect software must be installed on your system and you must be prepared to provide SAM with the following: • the printer’s node name (the name associated with an Internet address) • the local name that the LP spooler will use to refer to the printer.
Configuring a Workgroup Configuring Printers for a Workgroup Only one printer can be added to a class at a time. If you have more than one printer to add, repeat this command. Step 4. Allow print requests to be accepted for the newly added printer class. For example: /usr/sbin/accept laser Step 5. Restart the LP spooler: /usr/sbin/lpsched Removing a Printer from the LP Spooler Using SAM Step 1. Invoke SAM as superuser. Step 2. Select Printers and Plotters. Step 3.
Configuring a Workgroup Configuring Printers for a Workgroup Step 4. Stop the LP spooler: /usr/sbin/lpshut For more information, see “Stopping and Restarting the LP Spooler” on page 703. Step 5. (Optional): Deny any further print requests for the printer. For example: /usr/sbin/reject -r"Use alternate printer." laser1 By doing this step, you can be assured that no new jobs will appear before you remove the printer.
Configuring a Workgroup Configuring Printers for a Workgroup Step 9. Remove the printer from the LP spooler. For example: /usr/sbin/lpadmin -xlaser1 Step 10. Restart the LP spooler: /usr/sbin/lpsched See lpshut (1M), lpadmin (1M), and lpsched (1M) for details on the command options. Removing a Printer from a Printer Class Read “Printer Class” on page 111 to familiarize yourself with this concept. NOTE You cannot use SAM to remove a printer from a class. Using HP-UX commands Step 1.
Configuring a Workgroup Configuring Printers for a Workgroup Removing a Printer Class See “Printer Class” on page 111 to familiarize yourself with this concept. NOTE You cannot use SAM to remove a printer class. Using HP-UX commands Step 1. Ensure that you have superuser capabilities. Step 2. Stop the LP spooler: /usr/sbin/lpshut For more information, see “Stopping and Restarting the LP Spooler” on page 703. Step 3. (Optional): Deny any further print requests for the printer.
Configuring a Workgroup Configuring Printers for a Workgroup NOTE When you remove a printer class, the printers in the class are not removed — you may still use them as individual printers. If you remove all printers from a class, that printer class is automatically removed. Configuring Printers to Use HPDPS IMPORTANT HPDPS is not supported on versions of HP-UX after HP-UX 11i Version 1.0.
Configuring a Workgroup Configuring Printers for a Workgroup b. Select Printers and Plotters. You will see two choices: HP Distributed Print Services and LP Spooler. Before entering the HP Distributed Print Services area, select LP Spooler.
Configuring a Workgroup Configuring Printers for a Workgroup If a print queue exists, SAM displays the print queue information; otherwise, SAM prompts you for the print queue name, spooler, and spooler host. You can also set the job scheduling method (to priority-fifo or fifo) by choosing print queue options. When you enter OK, if no Logical Printer object exists on your system, SAM prompts you to create it with another dialog box. Alternatively, you can select Logical Printers from the List pull-down menu.
Configuring a Workgroup Configuring Printers for a Workgroup Modifying Users’ Environments to Use HPDPS Enabling Users to Access HPDPS Printers During the installation process, HPDPS adds /opt/pd/bin to the HP-UX PATH environment variable. For users to access HPDPS commands, they should have the same path set in their environment.
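For example, a user's $HOME/.profile could append the HPDPS directory to the search path (a minimal sketch in POSIX shell syntax):

PATH=$PATH:/opt/pd/bin
export PATH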
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x Compatibility Between HP-UX Releases 10.x and 11.x The topics in this discussion address compatibility issues that may arise in workgroup configurations where systems are running different versions of HP-UX releases and also sharing resources such as file systems and applications. For example, a hypothetical workgroup in a mixed environment might contain one 11.0 HP-UX server, and three 10.20 HP-UX clients. HP-UX 10.x to 11.
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x Source Compatibility 32-bit software that compiled on an HP-UX 10.x release can be recompiled without change on HP-UX 11.0. The term “source” includes input source to compilers, scripts and makefiles. Data Compatibility A 32-bit application can continue to access persistent data files, such as system files, backup/recovery formats, and HP-documented data formats via supported APIs in the same manner as the previous release.
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x Creating an executable by linking with a mixture of shared and archive libraries is not recommended. • Data model relocatable object compatibility. Creating an executable by linking with a mixture of 32-bit and 64-bit objects is not supported and will not be permitted by the loader. Compatibility Between 32-bit and 64-bit There are several areas where compatibility issues may arise between the 32-bit and 64-bit versions of HP-UX 11.0.
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x you are running the 32-bit version of 11.0, you will not encounter any problems. However, in the case of 64-bit version of HP-UX 11.0, there may be some compatibility issues for legacy software.
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x It is advantageous to run your software without porting to 11.0 when: • You want to simplify the transition process. • You want to use a single executable for both HP-UX 10.x and HP-UX 11.0. • Your software is not a library. (Native versions of libraries are usually needed for optimal performance.) • You do not need to recompile your software with the new ANSI C++ compiler.
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x Documentation for Transitioning Software to HP-UX 11.0 Hewlett-Packard has provided several resources to help you transition software to HP-UX 11.0. • HP-UX 64-bit Porting and Transition Guide This guide provides a detailed discussion on programming issues involved with porting software to HP-UX 11.0. It describes the changes you need to make to compile, link, and run programs on a 64-bit operating system.
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x source files to the latest release of HP-UX, and is useful when planning a transition. • scandetail tool This tool gives a detailed picture of API transition problems, indicating exactly what API impacts occur on each line of your source files. For each problem detected by these tools, a detailed impact page is available that describes the problem and any necessary modifications of your source files.
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x Large File Compatibility Large files (greater than 2 GB) are supported on HP-UX Releases 10.20 and later. To support large files on your system, you must explicitly enable a large-files file system. (See “Managing Large Files” on page 654 for more information.) When working with large files be aware of these issues: • You cannot perform interactive editing on large files.
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x Figure 4-2 32-bit Operating System and Large Files (diagram showing large-file access on HP-UX 11.0 (32-bit version of OS), HP-UX 10.30, and HP-UX 10.20)
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x Figure 4-3 64-bit Operating System and Large Files (diagram showing large-file access on the 64-bit version of HP-UX 11.0)
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x To Configure Large File Support with NFS To configure large file support on NFS, both the NFS client and NFS server must support NFS PV3. Step 1. On the NFS Server, enter commands similar to those following. a. To create a new file system with large files enabled, enter a command like: /usr/sbin/newfs -F hfs -o largefiles /dev/vg02/rlvol or: /usr/sbin/newfs -F vxfs -o largefiles /dev/vg02/rlvol1 b. To enable large files on an existing file system, you can convert it in place.
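One plausible form of that conversion for an HFS file system (a sketch; the volume name follows the hypothetical example above, and the file system must be unmounted first; see fsadm_hfs (1M)):

/usr/sbin/fsadm -F hfs -o largefiles /dev/vg02/rlvol1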
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x Table 4-4 NFS Protocol Compatibility and Large File Support

System Type                 Client   Client      Client        Client        Non-HP
(mount option)              PV2      PV2/PV3     PV2/PV3       PV2/PV3       Client
                                     (default)   (-o vers=2)   (-o vers=3)   PV2/PV3
HP Server - PV2
(HP-UX 10.20 or earlier)    PV2      PV2a        PV2b          PV2c          PV2
HP Server - PV2/PV3 (HP-UX 10.
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x j. The HP-UX PV3 client returns an [EFBIG] error if the requested file is larger than the remote file system’s maximum file size. k. The HP-UX PV3 server returns [NFS3ERR_FBIG] if the request (read(), write(), or create()) exceeds the maximum supported size of the underlying HFS/JFS file system. l.
Configuring a Workgroup Compatibility Between HP-UX Releases 10.x and 11.x
150001    1    tcp    720    pcnfsd
150001    2    tcp    720    pcnfsd
indicates that the server can serve both NFS PV2 and NFS PV3.
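A listing like the one above comes from querying the server's portmapper; for example (the server name is hypothetical):

/usr/bin/rpcinfo -p flserver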
5 Administering a System: Booting and Shutdown This section contains information on the following topics:
• “Booting Systems” on page 464
  ❏ “The Boot Sequence: Starting an HP-UX System” on page 464
  ❏ “Booting HP-UX on HP Integrity Servers: Details and Variations” on page 465
  ❏ “Booting HP-UX on HP 9000 (PA-RISC) Systems: Details and Variations” on page 486
  ❏ “Speeding the Boot: SpeedyBoot” on page 501
• “Setting Initial System Information” on page 513
• “Customizing Start-up and Shutdown” on page 515
• “Shutting Down Systems” on page 520
Administering a System: Booting and Shutdown Booting Systems Booting Systems Whenever you turn on (or reset) your computer, the hardware, firmware, and software must be initialized in a carefully orchestrated sequence of events known as the boot sequence. The Boot Sequence: Starting an HP-UX System HP-UX based systems go through the following sequence when you power them on or reset them: 1.
Administering a System: Booting and Shutdown Booting Systems If you are booting a PA-RISC System see Booting HP-UX on HP 9000 (PA-RISC) Systems: Details and Variations. Booting HP-UX on HP Integrity Servers: Details and Variations “The Boot Sequence: Starting an HP-UX System” on page 464 describes the basic sequence of events that occurs when you turn on, reset, or reboot an HP Integrity Server.
Administering a System: Booting and Shutdown Booting Systems To set the ACPI configuration for HP-UX: in the EFI Shell interface enter the acpiconfig default command, and then enter the reset command for the nPartition to reboot with the proper (default) configuration for HP-UX. A Standard Boot Here are more details about what happens during a typical HP-UX boot-up sequence on an HP Integrity Server. Step 1.
Administering a System: Booting and Shutdown Booting Systems HAA The High-Availability Alternate boot path is the path you want your system to boot from should your primary boot path fail. ALT The ALTernate boot path is the hardware path to an alternate boot source (for example, a tape drive, network-based boot source, or optical disc drive). On HP Integrity Servers, the PRI boot path is tried during an automatic boot.
Administering a System: Booting and Shutdown Booting Systems Step 5. Load and initiate the HP-UX operating system: hpux.efi then loads the HP-UX kernel into memory and initiates it. Step 6. HP-UX goes through its initialization process and begins normal operation.
Administering a System: Booting and Shutdown Booting Systems autoboot off If the autoboot flag is set to off, the boot process stops at the EFI Boot Manager, from which you can manually boot HP-UX or perform other tasks. Overriding an Automatic Boot If the autoboot flag in the nonvolatile memory of your system or nPartition is enabled, your system or nPartition will attempt to automatically boot following a boot delay. By default, the boot delay is set to 10 seconds; however, you can change this.
Administering a System: Booting and Shutdown Booting Systems To select a file system to use, enter its mapped name followed by a colon (:). For example, to operate with the boot device that is mapped as fs0, enter fs0: at the EFI Shell prompt. When you hit Enter to complete the command the shell prompt will change to reflect your device selection: (fs0:\>) Step 3. Enter HPUX at the EFI Shell command prompt to launch the HPUX.EFI loader.
Administering a System: Booting and Shutdown Booting Systems autoboot 30 Enabling / Disabling Autoboot The value of the autoboot flag can be set or changed in several ways: Example 5-3 Enable Autoboot (using EFI Shell’s autoboot command) Shell> autoboot on Example 5-4 Disable Autoboot (using EFI Shell’s autoboot command) Shell> autoboot off Example 5-5 Enable Autoboot (using setboot from a running HP-UX system) /usr/sbin/setboot -b on Example 5-6 Disable Autoboot (using setboot from a running HP-UX s
Administering a System: Booting and Shutdown Booting Systems Step 2. Enter map at the EFI shell prompt to list bootable devices on your system. The devices will be listed. Look for entries that begin with fs#: (where # is a number such as 0, 1, 2, 3, etc.). Step 3. Determine which entry maps to the device you are trying to boot from and enter the fs#: name at the shell prompt.
Administering a System: Booting and Shutdown Booting Systems fs0 : Acpi(HWP0002,500)/Pci(2|0)/Ata(Primary,Master)/HD(Part1, Sig88F40A3A-B992-11E1-8002-D6217B60E588) fs1 : Acpi(HWP0002,500)/Pci(2|0)/Ata(Primary,Master)/HD(Part3, Sig88F40A9E-B992-11E1-8004-D6217B60E588) blk0 : Acpi(HWP0002,500)/Pci(2|0)/Ata(Primary,Master) blk1 : Acpi(HWP0002,500)/Pci(2|0)/Ata(Primary,Master)/HD(Part1, Sig88f40A3A-B992-11E1-8002-D6217B60E588) blk2 : Acpi(HWP0002,500)/Pci(2|0)/Ata(Primary,Master)/HD(Part1, Sig88f40A6C-B992-11E
Administering a System: Booting and Shutdown Booting Systems Changing the PRI, HAA, and ALT Boot Paths On HP Integrity Servers, the primary, high-availability alternate, and alternate boot paths are based on the first, second, and third items that appear in the boot options list for the server, respectively. You can manage the boot paths using the setboot command when HP-UX is running, or by using the “Boot Option Maintenance Menu” in the EFI Boot Manager.
Administering a System: Booting and Shutdown Booting Systems • Use the setboot -p path command to set the primary boot path, for example: /usr/sbin/setboot -p 0/0/2/0/0.6 • Use the setboot -h path command to set the high-availability alternate boot path, for example: /usr/sbin/setboot -h 0/0/0/3/1.6 • Use the setboot -a path command to set the alternate boot path, for example: /usr/sbin/setboot -a 0/0/0/3/0.
Administering a System: Booting and Shutdown Booting Systems
Delete Boot Option(s)    Allows you to interactively delete one or more entries from your boot options list
Change Boot Order        Allows you to reorder your boot options list
Step 3. When the boot options list for your system is as you want it, select “Exit” to return to the EFI Boot Manager’s main menu (which should now reflect your new edits to the boot options list).
Administering a System: Booting and Shutdown Booting Systems To list and configure an HP-UX boot device’s AUTO file from the EFI Shell use EFI Shell commands (such as cd, ls, and edit) to display and edit the EFI\HPUX\AUTO file on the selected device. Step 1. Access the EFI Shell environment using the server’s (or nPartition’s) system console. Access the system console either via the server’s management processor (MP) or via a hardwired console terminal.
Administering a System: Booting and Shutdown Booting Systems In the list that is displayed locate the entry corresponding to the device containing the AUTO file you want to change. Look at the entries in the list that begin with the string fs#, where # will be a number (for example fs0, fs1, fs2 ... and so on). At the EFI Shell prompt enter the fs# for the desired device followed by a colon: Shell> fs0: Your device is now selected and the EFI Shell prompt will change to reflect that: fs0:\> Step 3.
Administering a System: Booting and Shutdown Booting Systems Step 4. To change the contents of the AUTO file you can either use the edit command to edit the file using the full-screen EFI editor, or use the echo command and redirect its output to the AUTO file: • To use the edit command, enter edit AUTO and configure the AUTO file using the full-screen editor.
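For example, to replace the contents of the AUTO file with a simple boot string using the echo method (a sketch; it assumes the device has been selected as fs0: and that the working directory is \EFI\HPUX):

fs0:\EFI\HPUX> echo boot vmunix > AUTO
fs0:\EFI\HPUX> type AUTO
boot vmunix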
Administering a System: Booting and Shutdown Booting Systems Step 1. Access the HPUX.EFI loader for the boot device that contains the AUTO file you want to configure. You can do this either by launching the loader from the EFI Shell interface, or by selecting the device from the EFI Boot Manager and interrupting the HP-UX boot process to access the loader’s HPUX> prompt. NOTE If you use the EFI Shell interface, be sure to select the correct boot device before starting the HPUX.
Administering a System: Booting and Shutdown Booting Systems boot option kernel Specifies to boot the specified kernel file using the loader option given. For example, the command: setauto boot -is vmunix creates an AUTO file containing boot -is vmunix (which indicates to boot in single-user mode, as specified by the -is option). See the hpux (1M) manpage for details on loader options, which include LVM maintenance mode (-lm), VxVM maintenance mode (-vm), tunable maintenance mode (-tm), and others. Step 4.
Administering a System: Booting and Shutdown Booting Systems The most difficult part of this step is determining which device file to use to reference the proper EFI file system. If the AUTO file you want to change is the one associated with the device you are currently booted from, here is one way to determine which device file to use: Example 5-8 Determining the EFI disk partition of your current boot device 1.
Administering a System: Booting and Shutdown Booting Systems containing /stand. Look for which device file has a hardware address that matches your primary boot path. Change the “s2” to “s1” as in the previous sub-step and you have the name to use with efi_cp. NOTE You can use this procedure with devices other than your current boot device if you have multiple devices you alternately boot from. Example 5-8 describes a common occurrence. Step 2.
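Assuming the remaining steps copy a new boot string into place with efi_cp, the sequence might look like this (a sketch; the boot string and the device file name are illustrative; see efi_cp (1M) for the exact usage):

echo "boot vmunix" > /tmp/AUTO.new
efi_cp -d /dev/rdsk/c2t1d0s1 /tmp/AUTO.new /efi/hpux/auto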
Administering a System: Booting and Shutdown Booting Systems Login to the service processor (MP or GSP) and enter CO to access the Console list. Select the nPartition console. When accessing the console, confirm that you are at the EFI Boot Manager menu (the main EFI menu). If at another EFI menu, select the Exit option from the sub-menus until you return to the screen with the EFI Boot Manager heading. From the EFI Boot Manager menu, select the EFI Shell menu option to access the EFI Shell environment.
Administering a System: Booting and Shutdown Booting Systems Step 4. Boot to the HP-UX Boot Loader prompt (HPUX>) by typing any key within the ten seconds given for interrupting the HP-UX boot process. You will use the HPUX.EFI loader to boot HP-UX in single-user mode in the next step. After you type a key, the HPUX.EFI interface (the HP-UX Boot Loader prompt, HPUX>) is provided. For help using the HPUX.EFI loader, type the help command. To return to the EFI Shell, type exit.
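For example, to boot the default kernel to single-user mode from the loader prompt:

HPUX> boot -is vmunix

(The -is option is shorthand for -i s, which passes the s run-level argument to init; this matches the setauto example earlier in this section.)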
Administering a System: Booting and Shutdown Booting Systems Step 6. If you are accessing the system console through the management processor and you are no longer using it, exit the console and service processor interfaces. To exit the EFI environment type ^B (Control-B); this exits the nPartition console and returns to the service processor Main Menu. To exit the service processor, type X at the Main Menu.
Administering a System: Booting and Shutdown Booting Systems System hardware or hardware associated with an nPartition you are booting will go through a series of self-tests to verify that the processors, memory, and other system components are in working order. Step 3. Boot device selection: Your system (or the nPartition you are booting) must locate a kernel file to boot from.
Administering a System: Booting and Shutdown Booting Systems For information about the specific hardware paths available on your system, refer to the output of ioscan (see ioscan (1M) for details on how to run ioscan). Also, some path information is physically printed on your system. Usually, the primary boot path points to the device from which you most frequently boot and that device is available.
Administering a System: Booting and Shutdown Booting Systems Automatic Versus Manual Booting PDC sets up the boot and console devices using the Boot Console Handler (BCH). Which actions the BCH takes once the console and boot devices have been initialized depend on whether or not the operator manually interrupts an autoboot, and on the state of two flags in nonvolatile memory: autoboot and autosearch.
Administering a System: Booting and Shutdown Booting Systems Table 5-1 How autoboot and autosearch Flag Settings Affect the Boot Sequence (Continued)

autoboot  autosearch  Boot Type    What happens
ON        OFF         Auto Boot    The BCH tries the primary boot path in nonvolatile memory; if it is not bootable, the BCH interacts with the user to obtain a bootable device path
ON        ON          Auto Search  The BCH tries the primary boot path; if it is not bootable, the BCH searches to find the first device that is bootable and boots from it
Administering a System: Booting and Shutdown Booting Systems Example 5-9 Enabling the Autoboot Flag Using the BCH Main Menu: Enter Command > co au bo on TIP The above command is a shortcut for entering a command that actually resides in the BCH configuration menu (the co portion of the command indicates that the next part of the command is from the configuration menu). The au portion of the above command is shorthand for the “auto” command within the configuration menu.
Administering a System: Booting and Shutdown Booting Systems Example 5-14 Disabling the Autoboot Flag Using setboot /usr/sbin/setboot -b off Example 5-15 Enabling the Autosearch Flag Using setboot /usr/sbin/setboot -s on Example 5-16 Disabling the Autosearch Flag Using setboot /usr/sbin/setboot -s off Changing the PRI, HAA, and ALT Boot Paths HP 9000 systems allow you to define a primary boot path and an alternate boot path, and in many cases a high-availability alternate boot path.
Administering a System: Booting and Shutdown Booting Systems • Use the setboot -h path command to set the high-availability alternate boot path, for example: /usr/sbin/setboot -h 0/0/0/3/1.6 • Use the setboot -a path command to set the alternate boot path, for example: /usr/sbin/setboot -a 0/0/0/3/0.6 Setting the PRI, HAA, and ALT Boot Paths Using the Boot Console Handler Step 1.
Administering a System: Booting and Shutdown Booting Systems Example 5-19 Setting the ALT (Alternate Boot Path) Using the BCH Example: Set the alternate boot path address to 0/0/0/3/0.6 Main Menu: Enter Command > pa alt 0/0/0/3/0.6 Booting PA-RISC Systems from an Alternate Boot Source A boot source consists of two parts: 1. A boot device containing a file system where kernel files are stored 2.
Administering a System: Booting and Shutdown Booting Systems The Boot Console Handler (BCH) will display its main menu and prompt for a command: Main Menu: Enter command > Step 2. Use the BCH boot command to specify where you want to boot the system from. You can issue the BOOT command in any of the following ways: • BOOT Issuing the BOOT command with no arguments boots the device at the primary (PRI) boot path.
Administering a System: Booting and Shutdown Booting Systems Example 5-21 Boot from the boot device specified at hardware address 0/0/2/0/0.14:

Main Menu: Enter command or menu > boot 0/0/2/0/0.14

Example 5-22 Boot from the boot device specified at path label P2:

Main Menu: Enter command or menu > search

PATH#   Device Path (dec)
-----   -----------------
P0      0/0/2/0/0.13
P1      0/0/2/0/0.14
P2      0/0/2/0/0.
Administering a System: Booting and Shutdown Booting Systems Do you wish to stop at the ISL prompt prior to booting? (y/n) >> n ISL booting hpux Boot : disk(0/0/1/0/0.15.0.0.0.0.0;0)/stand/vmunix To boot an HP-UX kernel other than that which is pointed to in the AUTO file, or to boot HP-UX in single-user or LVM-maintenance mode, stop at the ISL prompt and specify the appropriate arguments to the hpux loader.
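For example, from the ISL prompt (see hpux (1M); the alternate kernel file name is hypothetical):

ISL> hpux -is boot /stand/vmunix       (boot the default kernel to single-user mode)
ISL> hpux boot /stand/vmunix.test      (boot an alternate kernel file)
ISL> hpux -lm boot /stand/vmunix       (boot to LVM maintenance mode)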
Administering a System: Booting and Shutdown Booting Systems You rarely need to change the contents of the AUTO file. However, there are occasions when you might want to, such as when you create a new kernel file (with a name other than the default, /stand/vmunix) that you regularly want to boot from, or to boot from a device on a different disk from where ISL resides.
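One common way to rewrite the AUTO file on a PA-RISC boot disk is with mkboot; for example (a sketch; the kernel name and device file are illustrative, and the device must be the one holding the boot area; see mkboot (1M)):

/usr/sbin/mkboot -a "hpux /stand/vmunix.new" /dev/rdsk/c0t6d0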
Administering a System: Booting and Shutdown Booting Systems Main Menu: Enter command or menu > BOOT ALT Alternate Boot Path: 0/0/0/3/0.6 Do you wish to stop at the ISL prompt prior to booting? (y/n) >> y Initializing boot Device. .... ISL Revision A.00.43 Apr 12, 2000 ISL> Step 3.
Administering a System: Booting and Shutdown Booting Systems The system will boot into single-user mode; watch for the confirmation messages: INIT: Overriding default level with level `s' INIT: SINGLE USER MODE Step 4. If you accessed the system console and service processor (management processor) interfaces via a network, exit the console and service processor interfaces if finished using them.
Administering a System: Booting and Shutdown Booting Systems Speeding the Boot: SpeedyBoot On many HP Integrity Servers and HP 9000 Systems, a firmware based feature called SpeedyBoot allows you to bypass some of the boot-time system tests in order to boot your system more quickly. NOTE HP recommends that all self tests be performed, but recognizes the need to have your system available as quickly as possible.
Administering a System: Booting and Shutdown Booting Systems ✓ full memory tests ✓ platform dependent tests (HP Integrity Servers only) ✓ I/O hardware tests (HP Integrity Servers only) ✓ processor hardware tests (HP 9000 Systems only) ✓ central electronic complex tests (HP 9000 Systems only) ✓ chipset tests (HP Integrity Servers only) You can independently specify which tests will be performed: • for the next boot only • for all subsequent boots The tests are described in “System Boot Tests”.
Administering a System: Booting and Shutdown Booting Systems System Boot Tests When your system boots, it performs the tests described in Table 5-2. These are keywords for the hardware tests that are executed by processor-dependent code (PDC) or firmware upon a boot or reboot of the system.

Table 5-2 SpeedyBoot Tests

Test Name   Values            Description
all         on, off, partial  All the listed tests.
SELFTESTS   on, off, partial  Includes the early_cpu and late_cpu tests.
Administering a System: Booting and Shutdown Booting Systems Table 5-2 SpeedyBoot Tests (Continued)

Test Name    Values   Description
PDH          on, off  Processor-dependent hardware. When on, test a checksum of read-only memory (ROM). When off, do not.
CEC          on, off  Central electronic complex. When on, test low-level bus converters and I/O chips. When off, do not. CEC is not available on all systems.
Memory_init  on, off  When on, enables full destructive memory tests.
Administering a System: Booting and Shutdown Booting Systems Example 5-28 Displaying Current SpeedyBoot Settings for your System (HP Integrity Server sample output)

setboot -v
Primary bootpath      :
HA Alternate bootpath : 0/0/0/1/0
Alternate bootpath    :
Autoboot is ON (enabled)

TEST          CURRENT
----          -------
all           partial
SELFTESTS     on
early_cpu     on
late_cpu      on
FASTBOOT      on
Platform      on
Full_memory   on
Memory_init   on
IO_HW         off
Chipset       on

Table 5-3 SpeedyBoot Status Table Headers

Column        Description
Administering a System: Booting and Shutdown Booting Systems Table 5-3 SpeedyBoot Status Table Headers (Continued)

Column      Description
Next Boot   The values for each test that will be used on the next boot. If they are different from Current, the Current values will be reestablished after the next boot. on, off, and partial are the same as for Current.
Administering a System: Booting and Shutdown Booting Systems To disable an individual test, enter: FASTBOOT test SKIP, where test is the name of the self test (“PDH”, “EARLY”, or “LATE”). To enable an individual test, enter: FASTBOOT test RUN. For details on setting self tests, enter: HELP FASTBOOT at the BCH Configuration Menu Step 4. Repeat Step 3 until the settings reflect your desired settings, then reboot your system.
Administering a System: Booting and Shutdown Booting Systems
• late_cpu
• platform
• chipset
• io_hw
• mem_init
• mem_test

boottest             Display the current boot-time system test configuration
boottest testname    Display the current setting for the specified test (testname). For example: boottest mem_test displays the memory self-test settings.
boottest on          Enable all boot-time system tests. HP recommends this but recognizes your needs may require disabling some boot-time system tests.
Administering a System: Booting and Shutdown Booting Systems Configuring Boot-Time System Tests from a Booted System SpeedyBoot tests are configured with three setboot options: -v Displays a status table of the SpeedyBoot test settings. -t testname=value Change the value for the test testname in nonvolatile memory to value for all following boots. The changes are reflected in the Current and Next Boot columns of the SpeedyBoot table.
Administering a System: Booting and Shutdown Booting Systems -T testname=value Change the value for the test testname for the next system boot only. The changes are reflected in the Next Boot column of the SpeedyBoot table. The change does not modify nonvolatile memory, so the permanent values, shown in the Current column, are restored after the boot. testname and value are the same as for the -t option.
Administering a System: Booting and Shutdown Booting Systems

TEST          CURRENT   SUPPORTED   DEFAULT   NEXT BOOT
----          -------   ---------   -------   ---------
all           off       partial     partial   off
SELFTESTS     off       yes         on        off
early_cpu     off       yes         on        off
late_cpu      off       yes         on        off
FASTBOOT      off       yes         on        off
full_memory   off       yes         on        off
PDH           off       yes         on        off
CEC           off       no          off       off

Now, let’s change the previous to set the normal boot to do only the late_cpu and the full_memory tests, skipping the slower early_cpu tests and the PDH tests:

# setboot -t late_cpu=on -t full_memory=on -v
Primary bootpath : 10/0.0.
{Lines omitted from display}
full_memory   on        yes         on        on
PDH           off       yes         on        off
CEC           off       no          off       off
Administering a System: Booting and Shutdown Setting Initial System Information Setting Initial System Information The first time your system boots following the installation of HP-UX, a special set-up script (called /sbin/set_parms) runs to prompt you for values of certain parameters that your system needs to know about in order to define its place in the world. Most of these values relate to networking.
Administering a System: Booting and Shutdown Setting Initial System Information Table 5-4 System Parameters (Continued) option 514 Description ip_address Internet protocol address. If networking is installed, this is an address with four numeric components, each of which is separated by a period with each number between 0 and 256. An example of an IP address is: 255.32.3.10. If you do not have networking installed, you will not be prompted for the IP address.
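Besides running automatically at first boot, set_parms can be invoked manually to change one of these values later; for example (see set_parms (1M)):

/sbin/set_parms ip_address

Other valid arguments include hostname and timezone, and set_parms initial reruns the full first-boot dialog.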
Administering a System: Booting and Shutdown Customizing Start-up and Shutdown Customizing Start-up and Shutdown This section explains how to make applications and services start automatically on boot and stop on shutdown. To automate starting and stopping a subsystem you need to do all of the following: 1. Decide at what run level(s) you want the subsystem to start and stop.
Administering a System: Booting and Shutdown Customizing Start-up and Shutdown 4. Reboot the system to make sure everything works. On a busy system, this may be inconvenient, but beware of testing on a configuration other than the one on which your subsystem will actually run; any differences in start-up/shutdown configuration between the test system and the production system may invalidate the test.
Administering a System: Booting and Shutdown Customizing Start-up and Shutdown
killproc() {
    # find the daemon's process ID by matching the command name
    pid=`ps -e | awk '$NF~/'"$1"'/ {print $1}'`
    if [ "X$pid" != "X" ]
    then
        if kill "$pid"
        then
            echo "$1 stopped"
        else
            rval=1
            echo "Unable to stop $1"
        fi
    fi
}

case $1 in
'start_msg')
    # message that appears in the startup checklist
    echo "Starting the web_productname daemon"
    ;;
'stop_msg')
    # message that appears in the shutdown checklist
    echo "Stopping the web_productname daemon"
    ;;
'start')
    # source the configuration file
Administering a System: Booting and Shutdown Customizing Start-up and Shutdown
        print "failed to start $web_productname_daemon"
        rval=2
    fi
    ;;
'stop')
    killproc $web_productname_daemon
    ;;
*)
    echo "usage: $0 {start|stop|start_msg|stop_msg}"
    rval=1
    ;;
esac

exit $rval

Then create a configuration file, /etc/rc.config.d/web_productname, to tell the above script where to find the web_productname daemon and whether or not to start it up (1=yes; 0=no):

#!/sbin/sh
# v1.
Administering a System: Booting and Shutdown Customizing Start-up and Shutdown Since HP guarantees that scripts using the number 900 in run level 2 will not be overwritten when we upgrade the system or add HP or third-party software, and run level 2 is a good place to start the web_productname daemon, we assigned our script number 900 and linked it into the /sbin/rc2.d directory: ln -s /sbin/init.d/web_productname /sbin/rc2.d/S900web_productname
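The stop side works the same way: kill scripts run in the run level below the one in which the subsystem starts, so the matching kill link would be created along these lines (the kill sequence number here is illustrative):

ln -s /sbin/init.d/web_productname /sbin/rc1.d/K100web_productname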
Administering a System: Booting and Shutdown Shutting Down Systems Shutting Down Systems
• “Overview of the Shutdown Process” on page 520
• “Types of Shutdown” on page 522
  — “Normal (Planned) Shutdown” on page 522
  — “Power Failure” on page 525
  — “Unclean Shutdowns” on page 526
  — “System Crashes / HP-UX Panics” on page 527
• “Special Considerations for Shutting Down Certain Systems” on page 528
  — “Mail Server” on page 528
  — “Name Server” on page 528
  — “Network Gateway” on page 529
  — “NFS File Server”
Administering a System: Booting and Shutdown Shutting Down Systems inconsistent with the “total picture” of what the file system should look like (pointers pointing to the wrong place, inodes not properly updated, etc.). ❏ The system might have users logged into it from remote locations. These users might be in the middle of important work when the system is turned off. Consequently, their work will be interrupted and important data could be lost.
Administering a System: Booting and Shutdown Shutting Down Systems Types of Shutdown There are various types of shutdown, both planned, and unplanned. This section covers several common situations: • A “Normal (Planned) Shutdown” on page 522 • “Power Failure” on page 525 • “System Crashes / HP-UX Panics” on page 527 • “Unclean Shutdowns” on page 526 Normal (Planned) Shutdown Hopefully, most of your system shutdowns will be of this type.
Administering a System: Booting and Shutdown Shutting Down Systems • the wall command (see wall (1M)) — only notifies users of your system, not users of other systems that are likely to be affected by a shutdown of your system • calling them on the phone, or speaking to them in person However you do it, the critical thing is to notify them as far in advance as possible of your planned shutdown.
Administering a System: Booting and Shutdown Shutting Down Systems Example 5-31 Shutdown and Halt To immediately shut down the system and halt it so that it can safely be powered off: /sbin/shutdown -h 0 Example 5-32 Shutdown to Single-User Mode To shut the system down to single-user mode, use neither the -h nor the -r option to the shutdown command.
Administering a System: Booting and Shutdown Shutting Down Systems • Verifies that the user attempting to shut down the system has permission to do so (checks the /etc/shutdown.allow file). • Changes the current working directory to the root directory (/). • Runs the sync command to be sure that file system changes still in memory are updated in the superblocks and file system structures on disk.
Administering a System: Booting and Shutdown Shutting Down Systems Many HP-UX systems can be equipped with uninterruptible power supplies (UPSs) to allow you to maintain power to your systems for a short while following the failure of your computer’s primary power source. If the power failure is brief, systems equipped with UPSs will not be affected by the power failure at all.
Administering a System: Booting and Shutdown Shutting Down Systems that resulted from the improper shutdown. In nearly all cases, fsck can find and fix all of the structural problems and the file system can then be marked clean. On rare occasions, the file system corruption is beyond what fsck can automatically correct. In these cases fsck will terminate with an error message indicating that you need to use it in an interactive mode to fix the more serious problems. In these cases data loss is likely.
Administering a System: Booting and Shutdown Shutting Down Systems Special Considerations for Shutting Down Certain Systems In today’s world of networked computers, people who are not direct users of your system can still be affected by its absence from the network (when it has been shut down).
Administering a System: Booting and Shutdown Shutting Down Systems Network Gateway If your computer is serving as a network gateway computer: that is, it has several network interface cards in it, and is a member of multiple networks (subnets), your computer’s absence on the network can have a huge impact on network operations. An example of this is the computer called flserver in the MSW Sample Network (see “The MSW Network (Overview)” on page 67).
Administering a System: Booting and Shutdown Shutting Down Systems computer C, which was shut down without notice. It is important for the administrator of computer B to warn the administrator of computer A to unmount any NFS-mounted file systems from computer B, or computer A will also need to be rebooted as an indirect consequence of computer C being shut down.
Administering a System: Booting and Shutdown Abnormal System Shutdowns Abnormal System Shutdowns
• “Overview of the Dump / Save Cycle” on page 532
• “Preparing for a System Crash” on page 533
  — “Systems Running HP-UX Releases Prior to Release 11.0” on page 534
  — “Dump Configuration Decisions” on page 534
  — “Defining Dump Devices” on page 542
• “What Happens When the System Crashes” on page 548
  — “Systems Running HP-UX Releases Prior to Release 11.0”
Administering a System: Booting and Shutdown Abnormal System Shutdowns Overview of the Dump / Save Cycle

Figure 5-1 Overview of the Dump/Save Cycle (diagram: a system crash interrupts normal operation; crash processing writes the physical memory image to the dump devices; during reboot processing the image is copied to the HP-UX file system disks; the system then resumes normal operation)

When the system crashes, HP-UX tries to save the image of physical memory, or certain portions of it, to predefined locations called dump devices.
Administering a System: Booting and Shutdown Abnormal System Shutdowns • During system initialization when the initialization script for crashconf runs (and reads entries from the /etc/fstab file) • During run time, by an operator or administrator manually running the /sbin/crashconf command. Preparing for a System Crash The dump process exists so that you have a way of capturing what your system was doing at the time of a crash.
Administering a System: Booting and Shutdown Abnormal System Shutdowns Systems Running HP-UX Releases Prior to Release 11.0 Prior to HP-UX Release 11.0, you have limited control over the dump process.
Administering a System: Booting and Shutdown Abnormal System Shutdowns dump subsystem is available to you that will give you a lot more control over the dump process. An operator at the system console can even override the runtime configuration as the system is crashing. In addition to any previous options you had, you now have control over the following crash dump features: • How much memory gets dumped. • Run-time crash dump configuration.
Administering a System: Booting and Shutdown Abnormal System Shutdowns When you define your dump devices, whether in a kernel build or at run time, you can list which classes of memory must always get dumped, and which classes of memory should not be dumped. If you leave both of these lists empty, HP-UX will decide for you which parts of memory should be dumped based on what type of error occurred. In nearly all cases, this is the best thing to do.
Administering a System: Booting and Shutdown Abnormal System Shutdowns (A dump that previously took 3 hours to complete should now take only 1 hour.) • You can use the crashconf(1M), command to disable or enable compressed dumps. (Compression is configured into the kernel by default.) During a crash event, you can also choose to override dump compression.
Administering a System: Booting and Shutdown Abnormal System Shutdowns You can disable compression by using the crashconf -c command with the off argument, as follows: $ crashconf -v -c off {Lines omitted from display} Dump compressed: OFF Any changes that you make to the dump configuration take effect immediately but will persist only until the next reboot or the next invocation of the crashconf command. To make changes persist across reboots, use the -t option.
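For example, to turn compression off and have the setting survive reboots (a sketch following the syntax shown above; see crashconf (1M)):

$ crashconf -t -c off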
Administering a System: Booting and Shutdown Abnormal System Shutdowns Compressed Save versus Noncompressed Save System dumps can be very large, so large that your ability to store them in your HP-UX file system area can be taxed. The boot time utility called savecrash can be configured (by editing the file /etc/rc.config.d/savecrash) to compress or not compress the data as it copies the memory image from the dump devices to the HP-UX file system area during the reboot process.
Administering a System: Booting and Shutdown Abnormal System Shutdowns • “Full Dump vs. Selective Dump” on page 540 • “Dump Definitions Built into the Kernel” on page 540 • “Using a Device for Both Paging and as a Dump Device” on page 541 Full Dump vs. Selective Dump You have chosen this section because it is most important to you to capture the specific instruction or piece of data that caused your system crash. The only way to guarantee that you have it is to capture everything.
Administering a System: Booting and Shutdown Abnormal System Shutdowns If it is critical to you to capture every byte of memory in all instances, including the early stages of the boot process, define enough dump space in the kernel configuration to account for this. NOTE The preceding example is presented for completeness.
Administering a System: Booting and Shutdown Abnormal System Shutdowns Dump Level You are reading this section because disk space is a limited resource on your system. Obviously, the fewer pages that you have to dump, the less space is required to hold them. Therefore, a full dump is not recommended. If disk space is very limited, you can always choose no dump at all. However, there is a happy medium, and it happens to be the default dump behavior; it is called a selective dump.
Administering a System: Booting and Shutdown Abnormal System Shutdowns NOTE For HP-UX releases prior to Release 11.0, dump device definitions must be built into the kernel. How Much Dump Space Do I Need? Before you define dump devices, it is important to determine how much dump space you need, so that you can define enough dump space to hold the dump, but will not define too much dump space, which would be a waste of disk space. Systems Running HP-UX Releases Prior to Release 11.
Administering a System: Booting and Shutdown Abnormal System Shutdowns

Total pages included in dump: 6208

DEVICE        OFFSET(kB)   SIZE (kB)    LOGICAL VOL.   NAME
31:0x00d000   52064        262144       64:0x000002    /dev/vg00/lvol2
                           ----------
                           262144

Step 2. Multiply the number of pages listed in Total pages included in dump by the page size (4 KB), and add 25 percent for a margin of safety to give you an estimate of how much dump space to provide.
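Working through the arithmetic with the sample output above: 6208 pages x 4 KB per page = 24,832 KB (roughly 24 MB); adding 25 percent for the margin of safety yields an estimate of about 30 MB of dump space.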
Administering a System: Booting and Shutdown Abnormal System Shutdowns Step 5. When the time is appropriate, boot your system from the new kernel file to activate your new dump device definitions. For details on how to do that, see “Reconfiguring the Kernel (Prior to HP-UX 11i Version 2)” on page 282. Using HP-UX Commands to Configure Dump Devices into the Kernel You can also edit your system file and use the config program to build your new kernel. Step 1.
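As an illustration of what such system file entries can look like (a sketch only; the hardware path shown is hypothetical, and you should consult config (1M) for the exact syntax on your release), dump statements in /stand/system take forms such as:
dump 2/0/1.5.0
dump lvol
where a hardware path names a specific disk, and the keyword lvol directs the dump to the logical volumes configured with lvlnboot -d.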
Administering a System: Booting and Shutdown Abnormal System Shutdowns • The logical volume cannot be used for file system storage, because the whole logical volume will be used.
Administering a System: Booting and Shutdown Abnormal System Shutdowns The /etc/fstab File You can define entries in the fstab file to activate dump devices during the HP-UX initialization (boot) process, or when crashconf reads the file.
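For example, a dump entry in /etc/fstab might look like the following sketch (the device path is hypothetical and the non-device fields are conventional placeholders; see fstab (4) for the authoritative format):
/dev/vg00/lvol2 / dump defaults 0 0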
Administering a System: Booting and Shutdown Abnormal System Shutdowns Example 5-37 Add Specific Devices to Active Dump List To have crashconf add the devices represented by the block device files /dev/dsk/c0t1d0 and /dev/dsk/c1t4d0 to the dump device list: /sbin/crashconf /dev/dsk/c0t1d0 /dev/dsk/c1t4d0 Example 5-38 Replace Active Dump List with Specific Devices To have crashconf replace any existing dump device definitions with the logical volume /dev/vg00/lvol3 and the device represented by block devi
Administering a System: Booting and Shutdown Abnormal System Shutdowns backed into the disk array, etcetera). Other times the cause is not readily apparent. It is for this reason that HP-UX is equipped with a dump procedure to capture the contents of memory at the time of the crash for later analysis. Systems Running HP-UX Releases Prior to Release 11.0 For systems running HP-UX releases prior to Release 11.
Administering a System: Booting and Shutdown Abnormal System Shutdowns
[ A Key is Pressed ]
*** Proceeding with compressed dump.
*** The dump will be a SELECTIVE dump: 1240 of 16352 megabytes.
*** To change this dump type, press any key within 10 seconds.
[ A Key is Pressed ]
*** Select one of the following dump types, by pressing the corresponding key:
***   S) The dump will be a SELECTIVE dump: 1240 of 16352 megabytes.
***   P) The dump will be a PARTIAL dump: 6138 of 16352 megabytes.
***   F) The dump will be a FULL dump: 16352 of 16352 megabytes.
Administering a System: Booting and Shutdown Abnormal System Shutdowns You can interrupt the dump at any time by pressing the ESC (escape) key. It can take as much as 15 seconds to abort. However, if you interrupt a dump, it will be as though a dump never occurred; that is, you will not get a partial dump. Following the dump, the system attempts to reboot. The Reboot After the dumping of physical memory pages is complete, the system attempts to reboot (if the AUTOBOOT flag is set).
Administering a System: Booting and Shutdown Abnormal System Shutdowns After your system is rebooted, one of the first things you need to do is to be sure that the physical memory image that was dumped to the dump devices is copied to the HP-UX file system area so that you can either package it up and send it to an expert for analysis, or analyze it yourself using a debugger. NOTE As of HP-UX Release 11.
Administering a System: Booting and Shutdown Abnormal System Shutdowns If you chose to do a partial save by leaving the SAVECRASH environment variable set to 1 and setting the environment variable SAVE_PART=1 (in the file /etc/rc.config.d/savecrash), the only pages that were copied to your HP-UX file system area during the boot process are those that were on paging devices. Pages residing on dedicated dump devices are still there.
Administering a System: Booting and Shutdown Abnormal System Shutdowns The syntax of the crashutil command to do a conversion is: /usr/sbin/crashutil [-v version] source [destination] version, in this command, is the format that you want to convert to. source is the HP-UX file system file/directory containing the dump you want to convert. And, if you do not want to convert the source in place, you can specify an alternate destination for the converted output.
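For example, to convert a dump saved in an older format to the current format (a sketch; the version keyword and paths are illustrative, and you should check crashutil (1M) for the version strings your release accepts):
/usr/sbin/crashutil -v CRASHDIR /var/adm/crash/core.0 /var/adm/crash/crash.0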
Administering a System: Managing Disks and Files 6 Administering a System: Managing Disks and Files This section contains information on the following topics: Chapter 6 • “Managing Disks” on page 556 • “Managing File Systems” on page 602 • “Managing Swap and Dump” on page 662 • “Backing Up Data” on page 674 • “Restoring Your Data” on page 696 555
Administering a System: Managing Disks and Files Managing Disks Managing Disks This section provides practical guidance in managing disks under HP-UX.
Administering a System: Managing Disks and Files Managing Disks The “VERITAS Volume Manager and File System” neighborhood at HP’s HP-UX documentation web site provides information on other versions of VERITAS Volume Manager: http://docs.hp.com/hpux/os/11i/index.html#VERITAS%20Volume%20Manager%20and%20File%20System For a book-length view of these topics, we recommend Disk and File Management Tasks on HP-UX, published by Prentice Hall PTR, 1997.
Administering a System: Managing Disks and Files Managing Disks The Logical Volume Manager (LVM) Useful Facts About LVM • To use LVM, a disk must first be initialized into a physical volume (also called an LVM disk). • Once you have initialized one or more physical volumes, you assign them into one or more volume groups. If you think of all of your physical volumes as forming a storage pool, then a subset of disks from the pool can be joined together into a volume group.
Administering a System: Managing Disks and Files Managing Disks In Figure 6-1, logical volume /dev/vg01/lvol1 might contain a file system, /dev/vg01/lvol2 might contain swap space, and /dev/vg01/lvol3 might contain raw data. As the figure illustrates, a file system, swap space, or raw data area may exist within a logical volume that resides on more than one disk.
Administering a System: Managing Disks and Files Managing Disks incrementing the address by one for each unit. Physical extent size is configurable at the time you form a volume group and applies to all disks in the volume group. By default, each physical extent has a size of 4 megabytes (MB). You can change this value when you create the volume group, to anywhere between 1MB and 256MB. • The basic allocation unit for a logical volume is called a logical extent.
Administering a System: Managing Disks and Files Managing Disks Figure 6-2 on page 561 shows an example of several types of mapping available between physical extents and logical extents within a volume group.
Administering a System: Managing Disks and Files Managing Disks As can be seen in Figure 6-2 on page 561, the contents of the first logical volume are contained on all three physical volumes in the volume group. Since the second logical volume is mirrored, each logical extent is mapped to more than one physical extent. In this case, each logical extent is held in two physical extents: one on the second disk and one on the third disk within the volume group.
Administering a System: Managing Disks and Files Managing Disks Setting Up Logical Volumes for File Systems File systems reside in a logical volume just as they do within disk sections or nonpartitioned disks. As of 10.10, the maximum size of HFS and JFS (VxFS) file systems increased from 4GB to 128GB. However, your root or boot logical volume is limited to either 2GB or 4GB, depending on your processor. (For more information on HFS and JFS, refer to “Determining What Type of File System to Use” on page 85.)
Administering a System: Managing Disks and Files Managing Disks For example, suppose a group of users will require 60MB space for file system data; this estimate allows for expected growth. You then add 6MB for the “minfree” space and arrive at 66MB. Then you add another 3MB for file system overhead and arrive at a grand total estimate of 69MB required by the file system, and by consequence, for the logical volume that contains the file system.
Administering a System: Managing Disks and Files Managing Disks As a system administrator, you can exercise control over which physical volumes will contain the physical extents of a logical volume. You can do this by using the following two steps: 1. Create a logical volume without specifying a size using lvcreate (1M) or SAM. When you do not specify a size, by default, no physical extents are allocated for the logical volume. 2.
Administering a System: Managing Disks and Files Managing Disks Typically, you specify the size of a logical volume in megabytes. However, a logical volume’s size must be a multiple of the extent size used in the volume group. By default, the size of each logical extent is 4 MB. So, for example, if a database partition requires 33MB and the default logical extent size is 4 MB, LVM will create a logical volume that is 36MB (or 9 logical extents). The maximum supported size for a raw data device is 4 GB.
Administering a System: Managing Disks and Files Managing Disks Using Disk I/O Interfaces LVM supports disks that use SCSI, HP-FL, and, to a limited extent, HP-IB I/O interface types, as shown in Table 6-1.
Administering a System: Managing Disks and Files Managing Disks As of HP-UX 11i version 2, LVM no longer performs bad block relocation in software, but defers to the hardware bad block relocation implemented within modern disks and disk arrays. LVM recognizes and honors software relocation entries created by previous releases, but will not create new ones. Enabling or disabling bad block relocation via lvchange has no effect. The -r option of lvcreate cannot be used with HP-IB devices.
Administering a System: Managing Disks and Files Managing Disks On HP Integrity Servers, make sure to use the device file with the s2 suffix, as that represents the HP-UX partition on the disk. On HP 9000 (PA-RISC) systems, use the device file without a partition number. Use a physical volume’s raw device file for these two tasks only: • When creating a physical volume. Here, you use the device file for the disk.
Administering a System: Managing Disks and Files Managing Disks Naming Logical Volumes Logical volumes are identified by their device file names which can either be assigned by you or assigned by default when you create a logical volume using lvcreate (1M). When assigned by you, you can choose whatever name you wish up to 255 characters. When assigned by default, these names take the form: /dev/vgnn/lvolN (the block device file form) and /dev/vgnn/rlvolN (the character device file form).
Administering a System: Managing Disks and Files Managing Disks Managing Logical Volumes Using SAM SAM enables you to perform most, but not all, LVM management tasks.
Administering a System: Managing Disks and Files Managing Disks
Table 6-2 Commands Needed for Physical Volume Management Tasks

Task                                                               Commands Needed
Creating a physical volume for use in a volume group.             pvcreate(1M)
Displaying information about physical volumes in a volume group.  pvdisplay(1M)
Moving data from one physical volume to another.                  pvmove(1M)
Removing a physical volume from LVM control.
Administering a System: Managing Disks and Files Managing Disks
Table 6-3 Commands Needed for Volume Group Management Tasks

Task                                                               Commands Needed
Scan all physical volumes looking for logical volumes and         vgscan(1M)
volume groups; allows for recovery of the LVM configuration
file, /etc/lvmtab.
Adding disk to volume group.                                      vgextend(1M) f
Removing disk from volume group.                                  vgreduce(1M)

a. Before executing command, one or more physical volumes must have been created with pvcreate.
b.
Administering a System: Managing Disks and Files Managing Disks
Table 6-4 Commands Needed for Logical Volume Management Tasks

Task                                                               Commands Needed
Increasing the size of logical volume by allocating disk space.   lvextend(1M)
Decreasing the size of a logical volume.                          lvreduce(1M) a
Removing the allocation of disk space for one or more logical     lvremove(1M)
volumes within a volume group.
Preparing a logical volume to be a root, primary swap, or dump
volume.
Administering a System: Managing Disks and Files Managing Disks Example: Creating a Logical Volume Using HP-UX Commands To create a logical volume, do the following procedure: Step 1. Select one or more disks. ioscan (1M) shows the disks attached to the system and their device file names. Step 2. Initialize each disk as an LVM disk by using the pvcreate command.
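For example (a sketch that also reconstructs the start of Step 3, whose commands are elided in this copy; the disk path and the group file minor number are hypothetical; see pvcreate (1M) and mknod (1M)):
pvcreate /dev/rdsk/c0t0d0
Then create the volume group directory and its group device file:
mkdir /dev/vgnn
mknod /dev/vgnn/group c 64 0xnn0000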
Administering a System: Managing Disks and Files Managing Disks c. Create the volume group specifying each physical volume to be included using vgcreate. For example: vgcreate /dev/vgnn /dev/dsk/c0t0d0 Use the block device file to include each disk in your volume group. You can assign all the physical volumes to the volume group with one command. No physical volume can already be part of an existing volume group. Step 4.
Administering a System: Managing Disks and Files Managing Disks Extending a Logical Volume to a Specific Disk Suppose you want to create a 300 MB logical volume and put 100 MB on your first disk, another 100 MB on your second disk, and 100 MB on your third disk. To do so, follow these steps: Step 1. After making the disks physical volumes and creating your volume group, create a logical volume named lvol1 of size 0. lvcreate -n lvol1 /dev/vg01 Step 2.
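A sketch of the remaining steps (the disk device names are hypothetical; each lvextend call raises the total size by 100 MB and places the new extents on the named disk):
lvextend -L 100 /dev/vg01/lvol1 /dev/dsk/c0t1d0
lvextend -L 200 /dev/vg01/lvol1 /dev/dsk/c0t2d0
lvextend -L 300 /dev/vg01/lvol1 /dev/dsk/c0t3d0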
Administering a System: Managing Disks and Files Managing Disks Creating Root Volume Group and Root and Boot Logical Volumes NOTE VERITAS Volume Manager (VXVM) The VERITAS Volume Manager included in the operating environments as of the September 2002 release of HP-UX 11i version 1 (B.11.11) enables rootability.
Administering a System: Managing Disks and Files Managing Disks Whether you use a single “combined” root-boot logical volume, or separate root and boot logical volumes, the logical volume used to boot the system must be the first logical volume on its physical volume. If the root logical volume is not the first logical volume on its physical volume, then you must also configure a boot logical volume. Both a root logical volume and a boot logical volume must be contiguous with bad block relocation disabled.
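Step 1 (elided in this copy) typically initializes the disk as a bootable physical volume and places the boot area on it; a minimal sketch, assuming a PA-RISC system with the disk at c0t3d0 (see pvcreate (1M) and mkboot (1M)):
pvcreate -B /dev/rdsk/c0t3d0
mkboot /dev/rdsk/c0t3d0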
Administering a System: Managing Disks and Files Managing Disks Step 2. Create a directory for the volume group using mkdir. Step 3. Create a device file named group in the above directory with the mknod command. (See “Example: Creating a Logical Volume Using HP-UX Commands” on page 575 for details.) Step 4. Create the root volume group specifying each physical volume to be included using vgcreate. For example: vgcreate /dev/vgroot /dev/dsk/c0t3d0 Step 5.
Administering a System: Managing Disks and Files Managing Disks lvextend -L 160 /dev/vgroot/root /dev/dsk/c0t3d0 Step 3. Specify that logical volume be used as the root logical volume: lvlnboot -r /dev/vgroot/root Once the root logical volume is created, you will need to create a file system (see “Creating a File System” on page 603).
Administering a System: Managing Disks and Files Managing Disks • • • • • • • • lvremove lvrmboot lvsplit pvchange pvmove vgcreate vgreduce vgextend You can display LVM configuration information previously backed up with vgcfgbackup or restore it using vgcfgrestore. By default, vgcfgbackup saves the configuration of a volume group to the file /etc/lvmconf/volume_group_name.conf. If you choose, you can run vgcfgbackup at the command line, saving the backup file in any directory you indicate.
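For example (the volume group and disk names are illustrative):
vgcfgbackup /dev/vg01
vgcfgrestore -n /dev/vg01 /dev/rdsk/c0t5d0
The first command saves the configuration of vg01 (by default to /etc/lvmconf/vg01.conf); the second restores that saved configuration to the named physical volume.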
Administering a System: Managing Disks and Files Managing Disks • move the disks in a volume group to different hardware locations on a system • move entire volume groups of disks from one system to another CAUTION Moving a disk which is part of your root volume group is not recommended. See Configuring HP-UX for Peripherals for more information.
Administering a System: Managing Disks and Files Managing Disks b. Create a group file in the above directory with mknod. c. Issue the vgimport command: vgimport /dev/vol_group_name physical_volume1_path Step 7. Activate the newly imported volume group: vgchange -a y /dev/vol_group_name Step 8.
Administering a System: Managing Disks and Files Managing Disks Now vgexport actually removes the volume group from the system. It then creates the plan_map file. Once the /etc/lvmtab file no longer has the vg_planning volume group configured, you can shut down the system, disconnect the disks, and connect the disks on the new system. Transfer the file plan_map to the / directory on the new system. Step 3. On the new system, create a new volume group directory and group file.
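A sketch of this step and the import that follows (the minor number and disk path are hypothetical; see vgimport (1M)):
mkdir /dev/vg_planning
mknod /dev/vg_planning/group c 64 0x010000
vgimport -m /plan_map /dev/vg_planning /dev/dsk/c2t0d0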
Administering a System: Managing Disks and Files Managing Disks For example, you might want to move only the data from a specific logical volume from one disk to another to use the vacated space on the first disk for some other purpose.
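For example (hypothetical names), to move only the extents belonging to lvol4 from one disk to another within the same volume group:
pvmove -n /dev/vg01/lvol4 /dev/dsk/c0t0d0 /dev/dsk/c1t0d0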
Administering a System: Managing Disks and Files Managing Disks If you are using the disk space for a new purpose and do not need the data contained in the logical volume, no backup is necessary. If, however, you want to retain the data that will go into the smaller logical volume, you must back it up first and then restore it once the smaller logical volume has been created.
Administering a System: Managing Disks and Files Managing Disks To add an alternate link to a physical volume that is already part of a volume group, use vgextend to indicate the new link to the physical volume. For example, if /dev/dsk/c2t0d0 is already part of your volume group but you wish to add another connection to the physical volume, enter: vgextend /dev/vg02 /dev/dsk/c4t0d0 If the primary link fails, LVM will automatically switch from the primary controller to the alternate controller.
Administering a System: Managing Disks and Files Managing Disks Detaching one or more links to a physical volume will not necessarily cause LVM to stop using that physical volume entirely. If the detached link is the primary path to the device, LVM will begin using any available alternate link to it. LVM will only stop using the physical volume when all the links to it are detached. If all the links to a device are detached, the associated physical volume will be unavailable to the volume group.
Administering a System: Managing Disks and Files Managing Disks data allocated on three disks, with each disk storing every third block of data. The size of each of these blocks is referred to as the stripe size of the logical volume. Disk striping can increase the performance of applications that read and write large, sequentially accessed files.
Administering a System: Managing Disks and Files Managing Disks two disks on each bus, the disks should be ordered so that disk 1 is on bus 1, disk 2 is on bus 2, disk 3 is on bus 1, and disk 4 is on bus 2, as depicted in Figure 6-4. Figure 6-4 Interleaving Disks Among Buses • Increasing the number of disks may not necessarily improve performance.
Administering a System: Managing Disks and Files Managing Disks So, suppose you wish to stripe across three disks. You decide on a stripe size of 32 kilobytes. Your logical volume size is 24 megabytes. To create the striped logical volume, you would enter: lvcreate -i 3 -I 32 -L 24 -n lvol1 /dev/vg01 lvcreate automatically rounds up the size of the logical volume to a multiple of the number of disks times the extent size.
Administering a System: Managing Disks and Files Managing Disks • If you plan to use the striped logical volume for a JFS (VxFS) file system: Use the largest available size, 64KB. For I/O purposes, JFS combines blocks into extents, which are variable in size and may be very large. The configured block size, 1KB by default (which in any case governs only direct blocks), is not significant in this context. See “Frequently Asked Questions about the Journaled File System” on page 87 for more information.
Administering a System: Managing Disks and Files Managing Disks • “Removing a Logical Volume” on page 870 • “Adding a Mirror to an Existing Logical Volume” on page 870 • “Removing a Mirror from a Logical Volume” on page 872 • “Moving a Directory to a Logical Volume on Another System” on page 874 LVM Troubleshooting If You Can’t Boot From a Logical Volume If you cannot boot from a logical volume, a number of things might be responsible for this situation.
Administering a System: Managing Disks and Files Managing Disks After you have made the LVM disk minimally bootable, the system can be booted in maintenance mode using the -lm option of the hpux command at the ISL> prompt. This causes the system to boot to single-user state without LVM or dump but with access to the root file system. Maintenance mode is a special way to boot your system that bypasses the normal LVM structures.
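For example, from the boot console's ISL prompt (the exact form can vary by system; see hpux (1M)):
ISL> hpux -lm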
Administering a System: Managing Disks and Files Managing Disks your volume group will still remain active; however, a message will be printed to the console indicating that the volume group has lost quorum. Until the quorum is restored (at least one of the LVM disks in the volume group in the above example is once again available), LVM will not allow you to complete most commands that affect the volume group configuration.
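The activation command that produces the behavior described next (elided in this copy) disables the quorum check; a sketch, assuming volume group vg01:
vgchange -a y -q n /dev/vg01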
Administering a System: Managing Disks and Files Managing Disks As a result, the volume group will activate without a quorum being present. You might get messages about not being able to access certain logical volumes. This is because part or all of a logical volume might be located on one of the disks that is not present. Whenever you override a quorum requirement, you run the risk of using data that are not current.
Administering a System: Managing Disks and Files Managing Disks 1. Reboot your system in single-user state. 2. If you already have a good current backup of the data in the now corrupt file system, skip this step. Only if you do not have such backup data and if those data are critical, you may want to try to recover whatever part of the data that may remain intact by attempting to back up the files on that file system in your usual way.
Administering a System: Managing Disks and Files Managing Disks Handling I/O Errors within LVM When a device driver returns an error to LVM on an I/O request, LVM classifies the error as either non-recoverable or recoverable. How those errors are handled determines your course of action. Non-Recoverable Errors Non-recoverable errors are considered fatal; there’s no expectation that retrying the operation could work.
Administering a System: Managing Disks and Files Managing Disks cable — which can manifest itself as a missing disk. In these cases, LVM will log an error message to the console, but it will not return an error to the application accessing the logical volume. If you have a current copy of the data on a separate, functioning mirror, then LVM directs the I/O to a mirror copy, much as it would for a non-recoverable error. Applications accessing the logical volume will not see any error.
Administering a System: Managing Disks and Files Managing Disks If you want to enable a timeout on a logical volume, you should set it to an integral multiple of any timeout assigned to the underlying physical volume(s). Otherwise, the actual duration of the I/O request may exceed the logical volume’s timeout. See pvchange (1M) for details on how to change the I/O timeout value on a physical volume. You can view the timeout value for a logical volume using the lvdisplay command.
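For example (the values and names are illustrative), to set a 90-second timeout on a physical volume and a 180-second timeout (an integral multiple) on a logical volume that uses it:
pvchange -t 90 /dev/dsk/c0t5d0
lvchange -t 180 /dev/vg01/lvol1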
Administering a System: Managing Disks and Files Managing File Systems Managing File Systems This section presents information for managing file systems on a single system.
Administering a System: Managing Disks and Files Managing File Systems Creating a File System When creating either an HFS or JFS file system, you can use SAM or a sequence of HP-UX commands. Using SAM is quicker and simpler. The following provides a checklist of subtasks for creating a file system which is useful primarily if you are not using SAM. If you use SAM, you do not have to explicitly perform each distinct task below; rather, proceed from SAM’s “Disks and File Systems” area menu.
Administering a System: Managing Disks and Files Managing File Systems If you decide not to use a logical volume when creating a file system, skip steps 1 through 4 below, which deal with logical volumes only. Refer to the book Disk and File Management Tasks on HP-UX for more information on creating a file system within a disk section or a whole disk. Step 1.
Administering a System: Managing Disks and Files Managing File Systems Step 5. Create the New File System Create a file system using the newfs command. Note the use of the character device file. For example: newfs -F hfs /dev/vg02/rlvol1 If you do not use the -F FStype option, by default, newfs creates a file system based on the content of your /etc/fstab file. If there is no entry for the file system in /etc/fstab, then the file system type is determined from the file /etc/default/fs.
Administering a System: Managing Disks and Files Managing File Systems • “Mounting File Systems Using HP-UX Commands” on page 607 • “Mounting Local File Systems” on page 607 • “Mounting File Systems Automatically at Bootup” on page 608 • “Solving Mounting Problems” on page 608 See also: • “JFS and the mount Command” on page 94 • “Importing a File System (HP-UX to HP-UX)” on page 396 Overview The process of incorporating a file system into the existing directory structure is known as mounting the
Administering a System: Managing Disks and Files Managing File Systems Mounting File Systems Using HP-UX Commands The mount command attaches a file system, on either a non-LVM disk or a logical volume, to an existing directory. You can also use the mountall command or mount -a to mount all file systems listed in the file /etc/fstab. (See mount (1M), mountall (1M) and fstab (4) for details.) Mounting Local File Systems To mount a local file system: Step 1.
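A sketch of the usual sequence (the device and mount point are hypothetical):
mkdir /reports
mount /dev/vg01/lvol1 /reports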
Administering a System: Managing Disks and Files Managing File Systems Mounting File Systems Automatically at Bootup To mount a file system automatically at bootup, list it in the /etc/fstab file. See the entry for fstab (4) for details on creating /etc/fstab entries. Solving Mounting Problems Here are some typical problems that are sometimes encountered when mounting a file system and the actions to take to correct the problem. See also “Troubleshooting NFS” on page 404.
Administering a System: Managing Disks and Files Managing File Systems Table 6-5 Solving Mounting Problems (Continued) Problem Solution You get an error indicating /etc/mnttab does not exist or that mount had an “interrupted system call” when you try to mount a file system. /etc/mnttab is normally created, if it does not already exist, within /sbin/init.d/localmount when you boot up your computer. If you get one of these messages, /etc/mnttab does not exist.
Administering a System: Managing Disks and Files Managing File Systems If you do not use SAM to unmount a file system, you must use the umount command. Refer to umount (1M) for details. You can also use the umountall command to unmount all file systems (except the root file system) or umount -a to unmount all file systems listed in the file /etc/mnttab. (See umount (1M) and mnttab (4) for details.)
Administering a System: Managing Disks and Files Managing File Systems You can also use ps -ef to check for processes currently being executed and map fuser output to a specific process. See fuser (1M) and ps (1) for more information. CAUTION • Are you attempting to unmount the root (/) file system? You cannot do this. • Are you attempting to unmount a file system that has had file system swap enabled on that disk using SAM or swapon? You cannot do this either.
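For example (the mount point is illustrative), fuser -cu /home lists the processes, with their owners, that are using any file on the /home file system; fuser -ck /home would kill those processes so that the file system can be unmounted.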
Administering a System: Managing Disks and Files Managing File Systems Using HP-UX Commands When using lvextend to increase the size of the logical volume container, this does not automatically increase the size of its contents. When you first create a file system within a logical volume, the file system assumes the same size as the logical volume. If you later increase the size of the logical volume using the lvextend command, the file system within does not know that its container has been enlarged.
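The intervening steps are elided in this copy; a minimal sketch of the usual HFS sequence, assuming /dev/vg01/lvol1 mounted at /reports (see extendfs (1M)):
umount /reports
lvextend -L 120 /dev/vg01/lvol1
extendfs -F hfs /dev/vg01/rlvol1
mount /reports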
Administering a System: Managing Disks and Files Managing File Systems Step 5. Run bdf to confirm that the file system capacity has been increased. Copying a File System Across Devices Suppose you want to copy a file system from one disk (or disk section) to another, or from one disk or logical volume to another logical volume. For example, you might need to copy a file system to a larger area. If so, here are the steps to follow: 1.
Administering a System: Managing Disks and Files Managing File Systems Never take a system offline by merely shutting its power off or by disconnecting it. Diagnosing a Corrupt File System The following are symptomatic of a corrupt file system: • A file contains incorrect data (garbage). • A file has been truncated or has missing data. • Files disappear or change locations unexpectedly. • Error messages appear on a user’s terminal, the system console, or in the system log.
Administering a System: Managing Disks and Files Managing File Systems Checking an HFS File System To check an HFS file system, use the following procedure: Step 1. Before running fsck, make sure that a lost+found directory is present and empty at the root for each file system you plan to examine. fsck places any problem files or directories it finds in lost+found. If lost+found is absent, rebuild it using mklost+found (1M). Step 2.
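Step 2 (elided in this copy) typically runs fsck against the unmounted file system's raw device; a sketch, assuming an HFS file system on /dev/vg00/rlvol5:
fsck -F hfs /dev/vg00/rlvol5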
Administering a System: Managing Disks and Files Managing File Systems
Table 6-6 fsck Results (Continued)

If fsck reports...                                   Proceed to...   Then...
Any uncorrectable errors with an error message       Step 8          Step 9

Step 6. Check for other causes of the problem. If fsck runs without finding errors, the problem is not a corrupted file system. In this case, consider other possible causes of problems with files: • A user deleted, overwrote, moved, or truncated the file(s) in question.
Administering a System: Managing Disks and Files Managing File Systems Before doing so, move any critical files on this file system that have not yet been backed up (and are still intact) to another file system or try saving them to tape. When you run fsck interactively, it may need to perform actions that could cause the loss of data or the removal of a file/file name (such as when two files claim ownership of the same data blocks).
Administering a System: Managing Disks and Files Managing File Systems Once you have returned the files in the lost+found directory to their proper locations, restore any files that are missing from your most recent backup. IMPORTANT The following message CAN'T READ BLOCK ... may indicate a media problem that mediainit (1) can resolve. Otherwise, hardware failure has probably occurred; in this case, contact your local sales and support office.
Administering a System: Managing Disks and Files Managing File Systems
Table 6-7 HFS vs. JFS File Checking after System Failure (Continued)

Concern: What assurance is there of file system integrity?
  HFS: No assurance. fsck can usually repair a file system after a crash, but it is sometimes unable to repair a file system that crashed before completing a file system operation.
Administering a System: Managing Disks and Files Managing File Systems 2. Unmount the file system. 3. Create the new smaller file system using newfs. Indicate the new smaller file system size using the -s size option of newfs. 4. Re-mount the file system. 5. Restore the backed up file system data to the newly created file system. If You Are Using Logical Volumes If an HFS file system is contained within a logical volume, the logical volume resembles a container with the file system as its contents.
Administering a System: Managing Disks and Files Managing File Systems • “What To Do When Exceeding a Hard Limit” on page 626 Using disk quotas allows the administrator to control disk space usage by limiting the number of files users can create and the total number of system blocks they can use. You implement disk quotas on a local file system and its users by placing soft limits and hard limits on users’ file system usage. Soft limits are limits that can only be exceeded for a specified amount of time.
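(The command introduced at this point in the original is elided from this copy; reconstructed from the description that follows, it creates an empty quotas file at the root of the file system: cpset /dev/null /home/quotas 600 root bin)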
Administering a System: Managing Disks and Files Managing File Systems In this example, /dev/null specifies that the file created is empty, /home/quotas specifies that the file quotas is to be created in the /home directory, and 600 root bin is the mode, owner, and group of the file. For syntax, see cpset (1M). NOTE To control the size of the quotas file, refrain from using large user identification numbers (UIDs). This will not be a concern if you use SAM to add users because SAM selects the UIDs in numerical order.
Administering a System: Managing Disks and Files Managing File Systems b. Apply the prototype user’s limits to other users of the /home file system: edquota -p patrick alice ellis dallas This assigns the limits of the prototype user, patrick, to the other users, alice, ellis, and dallas. NOTE When removing a user from the system, run edquota and set the user’s limits to zero. Thus, when the user is removed from the system, there will be no entry for that user in the quotas file. Step 4.
Administering a System: Managing Disks and Files Managing File Systems Step 5. Turn on quotas. Disk quotas can be enabled in any of the following ways: • Turn on disk quotas when rebooting. If you want disk quotas to be turned on automatically when the system starts up, add the quota option to the file system entry in the /etc/fstab file. For example: /dev/vg00/lvol3 /home hfs rw,suid,quota 0 2 • Turn on disk quotas by re-mounting the file system.
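• Turn on disk quotas with the quotaon command (the original text of this item is elided here; the command below is reconstructed from the description that follows). For example:
/usr/sbin/quotaon -v /home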
Administering a System: Managing Disks and Files Managing File Systems starts quotas on the /home file system. The -v (verbose) option generates a message to the screen listing each file system affected. This command has no effect on a file system for which quotas are already turned on. You can also specify the -a option, which turns on disk quotas for all mounted file systems listed in the file /etc/fstab that include the quota option. See quotaon (1M) for more information. 2.
Administering a System: Managing Disks and Files Managing File Systems checking levels, see quota (1). Only a user with superuser privileges can use the user option of the quota command to view specific usage and quota information about other users. What To Do When Exceeding a Hard Limit When users reach a hard limit or fail to reduce their usage below soft limits within the allotted time, an error message appears on their terminal.
Administering a System: Managing Disks and Files Managing File Systems 3. Remove files until the remaining number is well below the file and/or file system block quotas determined by the soft limits. 4. Move the file back into the original file system. Or, when using a job-control shell: 1. Go to the shell and type a “suspend” character (for example, pressing the CTRL and Z keys at the same time) to suspend the editor. 2.
Administering a System: Managing Disks and Files Managing File Systems Creating and Modifying Mirrored Logical Volumes You can configure mirroring by using either SAM or HP-UX commands. Whenever possible, use SAM. Using SAM SAM will perform the following mirroring set-up and configuration tasks: • Creating or removing a mirrored logical volume. • Configuring or changing the characteristics of a logical volume’s mirrors. You can specify: — the number of mirror copies.
Administering a System: Managing Disks and Files Managing File Systems Using HP-UX Commands Table 6-8 summarizes the commands you will need to do mirror set-up and configuration tasks when you do not use SAM. Consult Section 1M of the HP-UX Reference for the appropriate command line options to use.
Table 6-8 HP-UX Commands Needed to Create and Configure Mirroring

Task                                           Commands and Options Needed
Creating a mirrored logical volume.           lvcreate -m
  Subtask: Setting strict or nonstrict        Add:
  allocation.
Administering a System: Managing Disks and Files Managing File Systems Doing an Online Backup by Splitting a Logical Volume You can split a mirrored logical volume into two logical volumes to perform a backup on an offline copy while the other copy stays online. When you complete the activity on the offline copy, you can merge the two logical volumes back into one.
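A minimal sketch of the cycle, assuming /dev/vg00/lvol4 with an HFS file system and a hypothetical mount point /backup_copy (by default lvsplit appends the letter b to the name of the split-off copy; see lvsplit (1M) and lvmerge (1M)):
lvsplit /dev/vg00/lvol4
fsck -F hfs /dev/vg00/rlvol4b
mount -r /dev/vg00/lvol4b /backup_copy
Perform the backup on /backup_copy, then:
umount /backup_copy
lvmerge /dev/vg00/lvol4b /dev/vg00/lvol4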
Administering a System: Managing Disks and Files Managing File Systems NOTE To prevent the loss of flexibility that occurs when you create physical volume groups, you may want to use lvextend, which allows you to specify particular physical volumes. See “Extending a Logical Volume to a Specific Disk” on page 577 for more information.
Administering a System: Managing Disks and Files Managing File Systems dump. To reset these options, you will need to reboot your system in maintenance mode. Then use the lvchange command with the -M n and -c n options. Step 5. Use the lvextend command to mirror each logical volume in the root volume group onto the specified disk. The logical volumes must be extended in the same order that they are configured on the original boot disk.
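For example, assuming the standard vg00 layout and a mirror disk at c0t3d0 (adjust the names and the set of logical volumes to your configuration):
lvextend -m 1 /dev/vg00/lvol1 /dev/dsk/c0t3d0
lvextend -m 1 /dev/vg00/lvol2 /dev/dsk/c0t3d0
lvextend -m 1 /dev/vg00/lvol3 /dev/dsk/c0t3d0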
Administering a System: Managing Disks and Files Managing File Systems Step 9. Add a line to /stand/bootconf for the new boot disk using vi or another text editor: vi /stand/bootconf l /dev/dsk/c0t3d0 where l denotes LVM. Once you have created mirror copies of the root, boot, and primary swap logical volume, should any of these logical volumes fail, the system can use the mirror copy on the other disk and continue.
Administering a System: Managing Disks and Files Managing File Systems Mirroring a Boot Disk with LVM on HP-UX 11i for HP Integrity Servers The following diagram shows the disk layout of a boot disk. The disk contains a Master Boot Record (MBR) and Extensible Firmware Interface (EFI) partition tables that point to each of the partitions. The idisk command is used to create the partitions (see idisk (1M)).
Administering a System: Managing Disks and Files Managing File Systems NOTE The values in the example represent a boot disk with three partitions: an EFI partition, an HP-UX partition, and an HP Service partition. Boot disks of earlier HP Integrity Servers may have an EFI partition of only 100MB and may not contain the HPSP partition. b. Partition the disk using idisk and your partition description file: idisk -f /tmp/idf -w /dev/rdsk/c3t1d0 c. To verify the result, run: idisk /dev/rdsk/c3t1d0 Step 2.
Administering a System: Managing Disks and Files Managing File Systems a. Use efi_cp to copy the AUTO file from the original boot disk’s EFI partition to the current directory. Make sure to use the device file with the s1 suffix, as it refers to the EFI partition: efi_cp -d /dev/rdsk/cntndns1 -u /efi/hpux/auto ./AUTO b. Copy the file from the current directory into the new disk’s EFI partition: efi_cp -d /dev/rdsk/c3t1d0s1 ./AUTO /efi/hpux/auto Step 7.
Administering a System: Managing Disks and Files Managing File Systems Step 9. Display the BDRA. Verify that the mirrored disk is displayed as a boot disk and that the boot, root, and swap logical volumes appear to be on both disks: lvlnboot -v Step 10. Specify the mirror disk as the alternate boot path in nonvolatile memory: setboot -a path_to_disk Step 11. Add a line to /stand/bootconf for the new boot disk using vi or another text editor: vi /stand/bootconf l /dev/dsk/c3t1d0s2 where l denotes LVM.
Administering a System: Managing Disks and Files Managing File Systems Synchronizing a Mirrored Logical Volume At times, the data in your mirrored copy or copies of a logical volume can become out of sync, or “stale”. For example, this might happen if LVM cannot access a disk as a result of disk power failure. Under such circumstances, in order for each mirrored copy to re-establish identical data, synchronization must occur.
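You can force synchronization manually; for example (the names are illustrative), lvsync /dev/vg00/lvol1 synchronizes the stale extents of a single logical volume, and vgsync /dev/vg00 synchronizes all mirrored logical volumes in the volume group. See lvsync (1M) and vgsync (1M).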
Administering a System: Managing Disks and Files Managing File Systems For each of those logical volumes, you can use lvdisplay to check which logical extents are mapped onto the disk, and if there’s a current copy of that data on another disk, as discussed in “Synchronizing a Mirrored Logical Volume” on page 638: lvdisplay -v /dev/vol_group/lvoln | grep /dev/dsk/cntndn Step 2. Run vgcfgbackup to save the volume group configuration information, if necessary: vgcfgbackup /dev/vol_group Step 3.
Administering a System: Managing Disks and Files Managing File Systems Step 7. Run vgchange -a y to reactivate the volume group to which the disk belongs. Since the volume group is already currently active, no automatic synchronization occurs: vgchange -a y /dev/vol_group Step 8. If any of the logical volumes on the disk had a nondefault timeout assigned, restore the previous timeout: lvchange -t value /dev/vol_group/lvoln Step 9.
Administering a System: Managing Disks and Files Managing File Systems disk will be made on the substitute physical volume. This process is referred to as automatic sparing, or just sparing. This occurs while the file system remains available to users. You can then schedule the replacement of the failed disk at a time of minimal inconvenience to you and your users.
Administering a System: Managing Disks and Files Managing File Systems • All logical volumes in the volume group must have been configured with strict mirroring whereby mirrored copies are maintained on separate disks. This is because LVM copies the data on to the spare from an undamaged disk rather than from the defective disk itself.
Administering a System: Managing Disks and Files Managing File Systems Step 5. Use pvmove to move the data from the spare back to the replaced physical volume. As a result, the data from the spare disk is now back on the original disk or its replacement and the spare disk is returned to its role as a “standby” empty disk.
Administering a System: Managing Disks and Files Managing File Systems fsadm -d -D -e -E /mount_point For detailed information, consult fsadm_vxfs (1M). Daily Defragmentation To maintain optimal performance on busy file systems, it may be necessary to defragment them nightly. For example, to defragment every evening at 9 p.m.
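The crontab entry itself is elided in this copy; a sketch, assuming the file system of interest is mounted at /home (see crontab (1)):
0 21 * * * fsadm -d -D -e -E /home > /dev/null 2>&1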
Administering a System: Managing Disks and Files Managing File Systems 2. Create a new JFS file system on the logical volume containing the HFS file system, and copy the HFS file system to the JFS file system.
Administering a System: Managing Disks and Files Managing File Systems
Table 6-9 File System Conversion Methods Comparison (Continued)

                            Method One:       Method Two:            Method Three:
                            Create and Copy   Replace HFS with JFS   vxfsconvert
Need ACL conversion script  yes               yes                    maybe
Flexible                    yes               yes                    no
Safe                        yes               yes                    some risk

NOTE See “Managing Access to Files and Directories” on page 753 for more information about Access Control Lists, or ACLs, on HFS and JFS.
Administering a System: Managing Disks and Files Managing File Systems mount -F hfs -o ro /dev/vg00/lvol4 /home Step 4. Mount the new JFS file system read-write on a temporary mount point. For example: mkdir /new-home mount -F vxfs -o rw /dev/vg00/lvol5 /new-home Step 5. Copy the files from the old HFS file system to the newly created JFS file system using cpio (1), tar (1), fbackup (1M), or another tool of your choice. For example, cd /home; tar -cvf - . | (cd /new-home; tar -xvf -) Step 6.
Administering a System: Managing Disks and Files Managing File Systems Step 11. Mount the new JFS file system in place of the old HFS file system. mount -F vxfs /home Method 2: Replacing the HFS with JFS on the Existing Logical Volume Use this method to convert an HFS file system to a JFS file system when you want to minimize the space you need to do the conversion and you can afford significant downtime. Step 1. Back up your file system data using your favorite backup tool.
Administering a System: Managing Disks and Files Managing File Systems Step 5. If there are ACLs to be converted, record the HFS ACLs and save the information in a file on a different file system. See “Managing Access to Files and Directories” on page 753 for more information about HFS and JFS ACLs. Step 6. In an NFS environment, tell remote users to unmount the affected file system to avoid having stale NFS mounts later. Step 7. Warn all users that the system is shutting down. Step 8.
Administering a System: Managing Disks and Files Managing File Systems In an NFS environment, tell users of other systems that they can remount the file systems to their systems. After you have verified that the new JFS file systems are accessible, you can remove the /etc/fstab.save file and edit the /etc/fstab file to remove the commented out lines.
Administering a System: Managing Disks and Files Managing File Systems Step 3. Make sure the file system is clean. vxfsconvert cannot convert a dirty file system. For example: fsck -F hfs /dev/vg00/lvol5 Step 4. If the file system contains non-POSIX ACLs (unsupported in JFS) to be converted, run a script to convert them to supported POSIX ACLs. Step 5. Back up your file system data using your favorite backup tool. (See “Backing Up Data” on page 674 for procedural logistics.)
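The conversion step itself falls in the portion elided from this copy; a sketch of the typical invocation on the unmounted file system's raw device (see vxfsconvert (1M)):
vxfsconvert /dev/vg00/rlvol5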
Administering a System: Managing Disks and Files Managing File Systems Step 10. If you have the HP OnLineJFS product, run fsadm to reorganize and optimize the file system. For example: fsadm -ed /opt NOTE If you do not run fsadm to optimize the file system, performance of existing files may degrade. Step 11. In an NFS environment, tell users of other systems that they can remount the file systems to their systems.
Administering a System: Managing Disks and Files Managing File Systems For example, suppose the file system /home resides in the logical volume /dev/vg4/users_lv. Its current size is 50 MB, as verified by running bdf. You want the new file system (as well as logical volume size) to be 72 MB. Enter: lvextend -L 72 /dev/vg4/users_lv Read SAM’s online help or lvextend (1M) for further details. 4. Resize the JFS file system. fsadm -b newsize /mount_point newsize is specified in blocks.
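For example, to grow /home from the scenario above to 72 MB, where newsize is given in 1-KB blocks: 72 x 1024 = 73728, so:
fsadm -F vxfs -b 73728 /home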
Administering a System: Managing Disks and Files Managing File Systems For example, suppose the file system /home resides in the logical volume /dev/vg4/users_lv. Its current size is 50MB, as verified by running bdf. You want the new file system (as well as logical volume size) to be 72MB. Enter: lvextend -L 72 /dev/vg4/users_lv Read SAM’s online help or lvextend (1M) for further details. 3. Back up the JFS file system, using any backup utility you prefer.
Administering a System: Managing Disks and Files Managing File Systems Creating a Large-Files File System If you want a file system to support large files (greater than 2 GB), then large files must be explicitly enabled, since the default on a system is small files. (A system will not support large files just because it has been updated to a release of HP-UX that supports large files.
Administering a System: Managing Disks and Files Managing File Systems Changing from a Large-Files File System You can change a file system back and forth between large files and no large files using the fsadm command. It is important to realize that the conversion of these file systems must be done on an unmounted file system, and fsck will be called after a successful conversion. The following example shows how to convert a no-large-files file system to a large-files file system.
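(The example is elided in this copy; a reconstruction consistent with the surrounding text, assuming an unmounted VxFS file system on /dev/vg02/rlvol1:
fsadm -F vxfs -o largefiles /dev/vg02/rlvol1)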
Administering a System: Managing Disks and Files Managing File Systems the -o largefiles option. The fsck command repairs the file system, which you are then able to mount. This scenario would preserve the large file, if fsck did not find it corrupt in any other way. In the second scenario, using noninteractive mode, fsck purges the large file on a no-large-files file system.
Administering a System: Managing Disks and Files Managing File Systems Enabling/Disabling the /etc/ftpd/ftpaccess Configuration File • To enable the /etc/ftpd/ftpaccess file, specify the -a option for the ftp entry in the /etc/inetd.conf file. For example, ftp stream tcp nowait root /usr/lbin/ftpd ftpd -a -l -d (The -l option logs all commands sent to the ftpd server into syslog. The -d option logs debugging information into syslog.
Administering a System: Managing Disks and Files Managing File Systems /usr/bin/ckconfig For more information see the ckconfig (1) manpage.
Administering a System: Managing Disks and Files Managing File Systems NOTE To enable the /etc/ftpd/ftpaccess file, you must specify the -a option in the ftp entry of the /etc/inetd.conf file. For details on the log commands keyword, see the ftpaccess (4) manpage. Logging FTP File Transfers You can log file transfer information from the FTP server daemon to the /var/adm/syslog/xferlog log file.
Administering a System: Managing Disks and Files Managing File Systems For detailed information on setting up virtual FTP support, see Chapter 2 of the Installing and Administering Internet Services manual. NOTE Setting up a virtual FTP server requires IP address aliasing. This is supported in HP-UX 10.30 and later.
Administering a System: Managing Disks and Files Managing Swap and Dump Managing Swap and Dump This section explains how to manage your system’s swap space, including determining how much and what type of swap space the system needs, and how to add or remove swap space as the system’s needs change. It also explains how to configure your dump area.
Administering a System: Managing Disks and Files Managing Swap and Dump requires the system to perform a greater amount of processing and is usually slower than device swap, it should not be used as a permanent replacement for a sufficient amount of device swap space. The file system used for swap can be either a local or a remote file system. Cluster clients can use remote file system swap for their swap needs.
Administering a System: Managing Disks and Files Managing Swap and Dump Designing Your Swap Space Allocation When designing your swap space allocation: • Check how much swap space you currently have. • Estimate your swap space needs. • Adjust your system’s swap space parameters. • Review the recommended guidelines. Checking How Much Swap Space You Currently Have Available swap on a system consists of all swap space enabled as device and file system swap.
Administering a System: Managing Disks and Files Managing Swap and Dump NOTE To get the total amount of swap space being used, run swapinfo -ta If the total percentage used is high, roughly 90% or greater, then you probably need to add more swap space. Once you know or suspect that you will have to increase (or decrease) your swap space, you should estimate your swap space requirements. The following section describes one method.
Administering a System: Managing Disks and Files Managing Swap and Dump 1. Enter the amount of the physical memory currently on the local machine. At a minimum, swap space should equal that amount. Enter the amount in KB. ———— 2. Determine the swap space required by your largest application (look in the manual supplied with your application or check with the manufacturer; 1MB = 1,024KB = 1,048,576 bytes).
Administering a System: Managing Disks and Files Managing Swap and Dump For example, when the value of the parameter maxswapchunks is 256, the maximum configurable device swap space (maxswapchunks x swchunk x DEV_BSIZE) is: 256 x 2 MB = 512 MB If you need to increase the limit of configurable swap space beyond the default, increase the value of the maxswapchunks operating system parameter either by using SAM (which has more information on tunable parameters) or reconfigure the kernel using HP-UX commands.
Administering a System: Managing Disks and Files Managing Swap and Dump • Interleave file system swap areas for best performance. The use of interleaving on separate disks is described under “Guidelines for Setting Up Device Swap Areas” on page 667. • To keep good system performance, avoid using heavily used file systems such as the root (/) for file system swap. Use the bdf command to check file systems for available space.
Administering a System: Managing Disks and Files Managing Swap and Dump Several file systems can be used for file system swap. The tunable system parameter nswapfs determines the maximum number of file systems you can enable for swap. You can dynamically create file system swap using either SAM or the swapon command.
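For example (the paths and values are illustrative; see swapon (1M) for the full option list):
swapon -p 2 /dev/vg01/lvol2
swapon -m 10240 -l 102400 -p 3 /swap
The first command enables device swap at priority 2; the second enables file system swap in the directory /swap, with the minimum and maximum amounts given in file system blocks.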
Administering a System: Managing Disks and Files Managing Swap and Dump NOTE If you have an entry in /etc/fstab defining the swap, but the swap has not been enabled using SAM or swapon, then you can just remove the entry either with SAM or by editing /etc/fstab. In this case, no reboot is necessary. Configuring Primary and Secondary Swap You can configure primary swap through the kernel configuration file, using either HP-UX commands or SAM.
Administering a System: Managing Disks and Files Managing Swap and Dump NOTE If the location of your primary swap device has been specified in the system configuration file, then if it is changed or removed from this file, you must regenerate the kernel and reboot. (The default system configuration file is /stand/system; see config (1M) for more information).
Administering a System: Managing Disks and Files Managing Swap and Dump longer need to contain the entire contents of physical memory. With expanded physical memory limits, you may wish to dump only those classes of physical memory which you will use in a crash dump analysis. Further, you now have an additional way to configure dump devices: In addition to reconfiguring the kernel, at 11.0, you can also do dump configuration at runtime using the crashconf (1M) command without the need to reboot the system.
Administering a System: Managing Disks and Files Managing Swap and Dump To create a dump logical volume, you first use the lvcreate command. You must set a contiguous allocation policy using the -C y option and specify no bad block relocation using -r n. See lvcreate (1M) for more information. When configuring a logical volume as a dump device, you must next use lvlnboot (1M) with the -d option to update the BDRA (Boot Data Reserved Area).
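For example (the size and names are illustrative):
lvcreate -L 1024 -C y -r n -n dump /dev/vg00
lvlnboot -d /dev/vg00/dump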
Administering a System: Managing Disks and Files Backing Up Data Backing Up Data Of all the tasks that system administrators perform, among the most important are creating system backups. The most effective way to ensure against loss of your system’s data is to copy the data from your system onto storage media (such as magnetic tape or optical disk) that you can store away from your system, so that you can recover the data should something happen to your primary copies.
Administering a System: Managing Disks and Files Backing Up Data Choosing the Type of Storage Device When you evaluate which media to use to back up your data, consider the following: NOTE • How much data do you need to back up (rough estimate)? • How quickly will you need to retrieve the data? • What types of storage devices do you have access to? • How automated do you want the process to be? (For example, will an operator be executing the backup interactively or will it be an unattended backup?
Table 6-11 Criteria for Selecting Media (Continued)

Storage Device Type              Holds Lots of Data?   Recovers and Backs Up Data Quickly?   Suggested for Unattended Backup?
Hard disk                        Good                  Excellent                             No
Optical disk multidisk library   Good                  Good                                  Yes (a)
Optical disk single drive        Good                  Good                                  No (a)

a. You can perform an unattended (automatic) backup if all of the data will fit on one tape, optical disk, and so on.
Administering a System: Managing Disks and Files Backing Up Data Choosing SAM for Backup You can use SAM or HP-UX commands to back up data. Generally, SAM is simpler and faster to use than using the HP-UX commands. Choosing an HP-UX Backup/Recovery Utility Table 6-12 compares several HP-UX backup utilities based on selected tasks. For details about specific commands, see the associated manpage.
Table 6-12 A Comparison of HP-UX Backup/Recovery Utilities (Continued)

Task: Multiple, independent backups on a single tape
• fbackup/frecover: Not possible (fbackup rewinds the tape).
• cpio: Use mt with a no-rewind device to position the tape, then use cpio.
• tar: Use mt with a no-rewind device to position the tape, then use tar.
• dump/restore (a): Use mt with a no-rewind device to position the tape, then use dump.
• vxdump/vxrestore (b): Use mt with a no-rewind device to position the tape, then use vxdump.
Table 6-12 A Comparison of HP-UX Backup/Recovery Utilities (Continued)

Task: List files as they are backed up or restored
• fbackup/frecover: Possible. Use the -v option. (s)
• cpio: Possible. Use the -v option. (t)
• tar: Possible. Use the -v option. (u)
• dump/restore (a): Possible (on a restore only). (v)
• vxdump/vxrestore (b): Possible (on a restore only). (w)

Task: Do a backup based on selected criteria (such as group)
• fbackup/frecover: Not possible.
• cpio: Possible. Use find.
Table 6-12 A Comparison of HP-UX Backup/Recovery Utilities (Continued)

Task: Ease of selecting files for backup from numerous directories
• fbackup/frecover: High.
• cpio: Medium.
• tar: Low.
• dump/restore (a): Not possible.
• vxdump/vxrestore (b): Not possible.

Task: Back up a snapshot file system
• fbackup/frecover: Not possible.
• cpio: Possible. (y)
• tar: Possible. (y)
• dump/restore (a): Not possible.
• vxdump/vxrestore (b): Possible.

Task: Backup/restore extent attributes
• fbackup/frecover: Possible.
• cpio: Not possible.
• tar: Not possible.
r. Use vxrestore -i -f device_or_file
s. Use fbackup -i path -f device_or_file -v 2>index
t. Use find . | cpio -ov > device_or_file 2>index
u. Use tar -cvf device_or_file * 2>index
v. Use restore -t or restore -trv.
w. Use vxrestore -t or vxrestore -trv.
x. However, you can use frecover -x -i path to specify individual files.
y. If the snapshot file system has extent attributes, you will need to use vxdump filesystem.
Administering a System: Managing Disks and Files Backing Up Data Graph files contain one entry per line. Entries that begin with the character i indicate included files; those that begin with the character e indicate excluded files. For example: i /home e /home/deptD The above file will cause all of the directory /home with the exception of /home/deptD to be backed up. You can identify a graph file with the -g option of the fbackup command.
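For example, a full backup driven by a graph file might look like this (a sketch; the graph and index file paths are illustrative):
/usr/sbin/fbackup -0u -g /var/adm/fbackupfiles/graphs/homegraph -f /dev/rmt/0m -I /var/adm/fbackupfiles/index
The -0, -u, and -I options are described in the sections that follow.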
Backup Levels
If you use SAM to back up your system, you do not need to know about backup levels (SAM handles them for you). If you use the fbackup and frecover commands directly, you should read this section.
A backup level identifies the degree of an incremental backup. Each backup level has a date associated with it that indicates when the last backup at that level was created.
Administering a System: Managing Disks and Files Backing Up Data There are three “layers” (levels) associated with the above schedule (the once per month level, the once per week level, and the once per day level). The once per month level is a full backup. The other two are incremental backups. The problem is how to distinguish between the two types of incremental backup. This is accomplished with backup levels.
Administering a System: Managing Disks and Files Backing Up Data If your data becomes corrupt on Thursday the 12th, do the following to restore your system to its Wednesday the 11th state: 1. Restore the monthly full backup tape from Sunday the 1st. 2. Restore the weekly incremental backup tape from Friday the 6th. 3. Restore the incremental backup tape from Wednesday the 11th. For information on the actual method and commands to restore these tapes, see “Restoring Your Data” on page 696.
Administering a System: Managing Disks and Files Backing Up Data General Procedure for Using the fbackup Command To use the fbackup (1M) command: 1. Ensure that you have superuser capabilities. 2. Ensure that files you want to back up are not being accessed. The fbackup command will not back up files that are active (opened) or locked. 3. Verify that the backup device is properly connected. 4. Verify that the backup device is turned on. 5. Load the backup device with write-enabled media.
Administering a System: Managing Disks and Files Backing Up Data Also, fbackup assumes all files remaining to be backed up will fit on the current tape for the index contained on that media. Therefore, if you did not use the -I option on fbackup or removed the index file, extract an index from the last media of the set. Use the /usr/sbin/frecover utility to list the contents of the index at the beginning of a backup volume made with fbackup.
Administering a System: Managing Disks and Files Backing Up Data 2. Specify the -u option to update the file /var/adm/fbackupfiles/dates. 3. Specify a backup level. Because this will be a full backup, we’ll use the backup level 0. Any backup level would do as long as it is the lowest backup level in use. See “Backup Levels” on page 683 for details about how backup levels are interpreted by fbackup.
If you specify an online index file (using the -I option), fbackup will create the file after the backup is complete. Therefore, the online index file will be completely accurate with respect to which files are on each volume of the backup. For example, to back up every file on the entire system to the two magnetic tape drives represented by device files /dev/rmt/0m and /dev/rmt/1m, enter:
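/usr/sbin/fbackup -0uv -i / -f /dev/rmt/0m -f /dev/rmt/1m -I /var/adm/fbackupfiles/index
(The index file path here is illustrative.) You can also back up files to a device on a remote system by piping a tar archive through remsh; for example: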
tar cvf - . | remsh remote-system dd of=/dev/rmt/0m
For information on restoring files remotely using the tar command, see “Restoring Your Data” on page 696.
Setting Up an Automated Backup Schedule
If possible, use SAM to set up an automated backup schedule. If you use HP-UX commands, you can automate your backup procedure using the crontab utility, which works with cron, the HP-UX process scheduling facility.
Administering a System: Managing Disks and Files Backing Up Data NOTE Specify multiple values in a field by separating them with commas (no spaces), as in 10,20,30. The value * in any field represents all legal values. Therefore, to schedule the ps command (see ps (1)) to execute at 5:10 p.m.
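on Mondays, Wednesdays, and Fridays, you could use an entry like the following (the day selection and output file here are illustrative):
10 17 * * 1,3,5 ps >> /tmp/psfile 2>&1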
Administering a System: Managing Disks and Files Backing Up Data TIP To edit the crontab input file directly, use the crontab -e option. Displaying an Automated Backup Schedule To list your currently scheduled processes, enter: crontab -l This displays the contents of your activated crontab input file. Activating an Automated Backup Schedule Before you activate a new crontab input file, you should view the currently scheduled processes (see “Displaying an Automated Backup Schedule” on page 692).
Administering a System: Managing Disks and Files Backing Up Data vgcfgbackup command is run automatically to record the group’s configuration (vgcfgbackup saves the configuration of each volume group in /etc/lvmconf/volume_group_name.conf). To ensure recovery of LVM information following disk corruption, you must back up both the /dev and /usr directories. Include the /usr directory in the root volume group during your backup.
Administering a System: Managing Disks and Files Backing Up Data Restoring Large Files If you use fbackup to back up large files (> 2 GB), then those files can only be restored on a large file system. For instance, suppose that you back up a 64-bit file system containing large files; you cannot restore those files to a 32-bit file system that is not enabled for large files.
Allowing for 20% change to this 40 MB file system, you would want to create a logical volume of 8 MB.
b. Use lvcreate to create a logical volume to contain the snapshot file system. For example, lvcreate -L 8 -n lvol1 /dev/vg02 creates an 8 MB logical volume called /dev/vg02/lvol1, which should be sufficient to contain a snapshot file system of lvol4. See lvcreate (1M) for syntax.
2.
Administering a System: Managing Disks and Files Restoring Your Data Restoring Your Data HP-UX has a number of utilities for backup and recovery. This discussion focuses on the fbackup and frecover commands used by SAM. Refer to the HP-UX Reference for information on the other backup and restore utilities: cpio, dump, ftio, pax, restore, rrestore, tar, vxdump, and vxrestore.
Administering a System: Managing Disks and Files Restoring Your Data • A list of files you need to restore • The media on which the data resides • The location on your system to restore the files (original location or relative to some other location) • The device file corresponding to the backup device used for restoring the files Restoring Your Data Using SAM You can use SAM or HP-UX commands to restore data. Generally, SAM is simpler than HP-UX commands.
Administering a System: Managing Disks and Files Restoring Your Data and use the root= option to the /usr/sbin/exportfs command to export the permissions. For more information, see exportfs (1M) and Installing and Administering NFS Services. Restoring Large Files If you use fbackup to back up large files (> 2 GB), then those files can only be restored on a large file system.
Examples of Restoring Data Remotely
Here are some examples of restoring data remotely (across the network):
• To use frecover to restore files across the network, enter:
frecover -r -vf remote-system:/dev/rmt/0m
• To use the tar command to restore files across the network, enter:
remsh remote-system -l user dd if=/dev/rmt/0m bs=7k | tar -xvf -
If the tar backup used relative paths, the files will be restored relative to the current directory.
7 Administering a System: Managing Printers, Software, and Performance
This section contains information on the following topics:
• “Managing Printers” on page 702
• “Managing Software” on page 713
• “About Patches” on page 724
• “Managing System Performance” on page 726
Administering a System: Managing Printers, Software, and Performance Managing Printers Managing Printers NOTE The term “plotter” can be used interchangeably with the term “printer” throughout this section. Thus, all features ascribed to printers can be performed with plotters. This section deals with two approaches for administering printers: the traditional UNIX LP spooler and the HP Distributed Printer Server (HPDPS).
Administering a System: Managing Printers, Software, and Performance Managing Printers Stopping and Restarting the LP Spooler Typically, the LP spooler is started during the boot process. (To change the boot-up procedure to not start the scheduler, edit the file /etc/rc.config.d/lp and set the shell environment variable LP to 0.) The spooler must be stopped whenever the spooling system is modified (such as when adding or removing a printer) and then restarted after the modification is made.
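A sketch of the stop sequence (lpstat and lpshut are described later in this section; see also lpshut (1M)):
Step 1. Verify that no requests are actively printing: /usr/bin/lpstat -t
Step 2. Stop the spooler: /usr/sbin/lpshut
Step 3. Make the modification to the spooling system.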
Administering a System: Managing Printers, Software, and Performance Managing Printers Step 4. Restart the LP spooler. /usr/sbin/lpsched When the spooler is restarted, any print request actively being printed at the time the lpshut command was issued will be completely reprinted, regardless of how much of the request was previously printed.
Administering a System: Managing Printers, Software, and Performance Managing Printers You can issue individual enable and disable commands for each printer or issue one command separating each printer by blank spaces. For example: /usr/bin/enable laser1 laser2 laser3 You can enable or disable individual printers only, not printer classes. By default, any requests printing when a printer is disabled are reprinted in their entirety when the printer is reactivated.
Administering a System: Managing Printers, Software, and Performance Managing Printers /usr/sbin/lpshut For more information, see “Stopping and Restarting the LP Spooler” on page 703. Step 3. Change the priority. For example: /usr/sbin/lpadmin -pmyprinter -g7 If you do not specify the -g option, the default request priority is set to zero. Step 4. Restart the LP spooler: /usr/sbin/lpsched Summary of Additional Printer Tasks Table 7-1 summarizes additional printer tasks.
Administering a System: Managing Printers, Software, and Performance Managing Printers Table 7-1 Additional Printing Tasks (Continued) Task Example Additional Information Move all print requests from one printer destination to another. lpshut lpmove lj1 lj2 lpsched lj1 and lj2 are source and destination printers or printer classes. You must issue lpshut and lpsched. See lpmove (1M) and lpsched (1M). View the status of printers and print requests.
Table 7-2 Printer Problems and Solutions (Continued)

Problem: Output being printed is not what you want.
Solution: Cancel the job. For example: cancel laserjet-1194

Problem: Printing does not resume after paper jam or paper out.
Solution: To restart a listing from the beginning:
1. Take the printer offline.
2. Issue the disable command.
3. Clear the jam or reload the paper.
4. Put the printer online.
5. Issue the enable command.
Administering a System: Managing Printers, Software, and Performance Managing Printers Typical LP Commands for Users and LP Administrators Any user can queue files to printers, get status of the LP system, cancel any print job, and mark printers in and out of service. The following LP commands can be issued by any user. Consult the HP-UX manpage for options and usage.
Administering a System: Managing Printers, Software, and Performance Managing Printers Table 7-4 LP Administrator Commands (Continued) Command Description lpsched (1M) Schedules print requests for printing to destinations; typically invoked at system startup. lpmove (1M) Moves requests from one printer to another. lpfence (1M) Defines the minimum priority for which a spooled file can be printed.
Table 7-5 HPDPS User Commands (summary) (Continued) Command Purpose pdq (1) Query and list status of one or more print jobs. pdrm (1) Remove print jobs. Table 7-6, “HPDPS Administrator Commands (summary),” on page 711 lists commands used to administer HPDPS: Table 7-6 HPDPS Administrator Commands (summary) Command Purpose pdstartclient (1M) Start the HPDPS client daemon.
Administering a System: Managing Printers, Software, and Performance Managing Printers Table 7-6 HPDPS Administrator Commands (summary) (Continued) Command Purpose pdresubmit (1) Resubmits previously submitted print jobs. pdmod (1) Modify attributes of submitted print jobs. Migrating LP Spooler Printers to HPDPS Minimal work needs to be done to enable printers already configured into the LP spooler to be recognized by HPDPS commands.
Administering a System: Managing Printers, Software, and Performance Managing Software Managing Software The following applications help you manage your applications and operating system software: • Software Distributor enables you to manage and distribute both operating system software and application software. See “Software Distributor (SD-UX)” below. • Software Package Builder provides a visual method to create and edit software packages using the HP-UX Software Distributor (SD) package format.
Administering a System: Managing Printers, Software, and Performance Managing Software • Copy software from a distribution source or media onto a system. • Verify compatibility of software products with your system. • Create software packages that make later software installations quicker and easier. • Configure installed software. For a list of SD-UX commands, see Table 7-7, “SD-UX Command Summary,” on page 716. SD-UX Software Structure SD-UX commands work on a hierarchy of software objects.
The Runtime subproduct contains all the filesets in the MinimumRuntime subproduct as well as some additional filesets. Examples of filesets are: Networking.LAN-KRN Networking.LAN-PRG Networking.LAN-RUN Networking.SLIP-RUN These filesets are all part of both bundles, HPUXEngCR700 and HPUXEngRT700. The first three are included in both the subproducts, Networking.Runtime and Networking.MinimumRuntime.
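You can view this hierarchy directly with swlist at each level; for example (a sketch, since the output depends on what is installed):
/usr/sbin/swlist -l bundle
/usr/sbin/swlist -l product
/usr/sbin/swlist -l fileset Networking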
Administering a System: Managing Printers, Software, and Performance Managing Software Tape Depot Software in a tape depot is formatted as a tar archive. Tape depots such as cartridge tapes, DAT and 9-track tape are referred to by the file system path to the tape drive’s device file. A tape depot can only be created by using swpackage and it cannot be verified or modified with SD-UX commands. You cannot copy software (using swcopy) directly to a tape; use swpackage for this operation.
Administering a System: Managing Printers, Software, and Performance Managing Software Table 7-7 SD-UX Command Summary (Continued) Command Purpose swpackage Package software into a depot swcopy Copy software from one depot to another swlist List software in a depot or installed on a system swreg Make a depot visible to other systems swverify Verify the integrity of installed software and depot software swconfig Configure and unconfigure installed software swacl Change access to SD-UX softwar
Administering a System: Managing Printers, Software, and Performance Managing Software • products • filesets To select an item, move the cursor to the bundle and press Return or Space. You can select one or more items and mark them for installation. To see all subsets belonging to a bundle or product, choose Open. You can do this when only one item is selected. To see a description of the item (if there is one), select the item and choose Show Description Of Software.
Administering a System: Managing Printers, Software, and Performance Managing Software Configuration Phase Configures installed filesets for your system. In some cases this must be done after the system is rebooted. This is done with the script /sbin/rc2.d/S120swconfig which is a link to /sbin/init.d/swconfig. Information about the installation is logged in /var/adm/sw/swinstall.log. You open the log file during the installation process by pressing Logfile.... Check the log file for errors.
Administering a System: Managing Printers, Software, and Performance Managing Software Here is a sample CD-ROM certificate.
Administering a System: Managing Printers, Software, and Performance Managing Software Table 7-8 Example Tasks and Commands (Continued) Example Task Command To list all files that are part of the LVM product swlist -l file LVM To list files using the SD-UX graphical user interface on 11.x swlist -i You can use SAM to list software: • Choose Software Management/List Software. • Choose List Depot Software or List Installed Software. • Press Apply. See the swlist (1M) manpage.
A network host contains one or more depots and is connected to a network. It can act as a common software installation source for other network clients. You copy software from a depot to the network host. From the network host, you can copy software to systems as needed.
Figure 7-2 SD-UX Roles
Software Package Builder (SPB)
Beginning with HP-UX 11i Version 1 (B.11.11), Software Package Builder (SPB) provides a visual method to create and edit software packages using the HP-UX Software Distributor (SD) package format.
Administering a System: Managing Printers, Software, and Performance Managing Software validates software package attributes against these policies. The SPB command line interface can also perform validation of software package attributes against policies. Using SPB you can do the following: • Create a product specification file (PSF) to organize files into products, filesets, and optionally, into bundles and subproducts.
Administering a System: Managing Printers, Software, and Performance About Patches About Patches You can find information about patches at: • In the US, Canada, Asia Pacific, and Latin America, use: http://us-support.external.hp.com • In Europe, use: http://europe-support.external.hp.com From there you can obtain a list of patches and their descriptions. You can also search for and download available patches.
Administering a System: Managing Printers, Software, and Performance About Patches /usr/sbin/mount If there is no entry for the CD-ROM drive, mount it: /usr/sbin/mount /dev/dsk/devicefile /your_mount_directory Step 3. Read (or print) the READMEFIRST on the CD-ROM prior to installing the patch bundles: cd /your_mount_directory more READMEFIRST This file contains warnings, installation instructions, and the list of patch bundles.
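You can then install the patch bundles from the mounted directory with swinstall. A minimal sketch (follow the READMEFIRST instructions; the patch_match_target and autoreboot options shown here are typical for patch installation on 11.x):
/usr/sbin/swinstall -s /your_mount_directory -x patch_match_target=true -x autoreboot=true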
Administering a System: Managing Printers, Software, and Performance Managing System Performance Managing System Performance This section provides some guidelines and suggestions for improving the performance of a system or workgroup.
Disk Bottlenecks:
• high disk activity
• high idle CPU time waiting for I/O requests to finish
• long disk queues
NOTE: Put your most frequently accessed information on your fastest disks, and distribute the workload evenly among identical, mounted disks so as to prevent overload on a disk while another is under-utilized.
Network Bottlenecks:
Administering a System: Managing Printers, Software, and Performance Managing System Performance — Distribute the workload evenly across these disks. For example, if two teams are doing I/O intensive work, put their files on different disks or volume groups. See “Checking Disk Load with sar and iostat” on page 729. — Distribute the disks evenly among the system’s I/O controllers. • For exported HFS file systems, make sure the NFS read and write buffer size on the client match the block size on the server.
Administering a System: Managing Printers, Software, and Performance Managing System Performance In practice, though, a server is dealing with many I/O requests at a time, and intelligence is designed into the drivers to take account of the current head location and direction when deciding on the next seek. This means that defragmenting an HFS file system on HP-UX may never be necessary; JFS file systems, however, do need to be defragmented regularly.
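Checking Disk Load with sar and iostat
To sample disk activity with sar, specify an interval and a count; for example:
sar -d 5 10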
Administering a System: Managing Printers, Software, and Performance Managing System Performance This runs sar -d ten times with a five-second sampling interval. The %busy column shows the percentage of time the disk (device) was busy during the sampling interval. Compare the numbers for each of the disks the exported file systems occupy (note the Average at the end of the report).
Administering a System: Managing Printers, Software, and Performance Managing System Performance NOTE For a JFS file system, you can use mkfs -m to see the parameters the file system was created with. But adjusting the client’s read/write buffer size to match is probably not worthwhile because the configured block size does not govern all of the blocks. See “Examining File System Characteristics” on page 885. • On the NFS client, use SAM to check read/write block size.
Administering a System: Managing Printers, Software, and Performance Managing System Performance Run SAM on the NFS server, go to Networking and Communications/Networked File Systems/Exported Local File Systems, select each exported file system in turn, pull down the Actions menu and select View More Information. This screen shows Asynchronous Writes as either Allowed or Not Allowed. You can change the setting of the Asynchronous Writes flag in SAM, while the file system is still mounted and exported.
Administering a System: Managing Printers, Software, and Performance Managing System Performance The column to watch most closely is po. If it is not zero, the system is paging. If the system is paging consistently, you probably need more RAM. Checking for Socket Overflows with netstat -s Although many different processes use sockets, and can contribute to socket overflows, regular socket overflows on an NFS server may indicate that you need to run more nfsd processes.
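To check the overflow counters, a command like the following can be used (a sketch; the exact counter names vary by release):
netstat -s | grep -i overflow
If the socket overflow count keeps growing while the server is handling NFS requests, consider increasing the number of nfsd daemons as described below.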
Making Changes
• “Increasing the Number of nfsd Daemons” on page 734
• “Defragmenting an HFS File System” on page 734
• “Defragmenting a JFS File System” on page 643
• “Configurable Kernel Parameters” on page 736
Increasing the Number of nfsd Daemons
To increase the number of nfsds running on a server, do the following steps:
Step 1. Edit /etc/rc.config.d/nfsconf and set the NUM_NFSD variable to the number of nfsd daemons you want to run.
Administering a System: Managing Printers, Software, and Performance Managing System Performance The example that follows shows an alternative method, using dcopy, and assumes you have enough disk space to create a new logical volume at least as large as /dev/vg01/lvol8. We’ll operate on the /work file system, which resides on the logical volume /dev/vg01/lvol8. Step 1. Back up the file system; for example, tar cv /work backs up /work to the system default tape device, /dev/rmt/0m. Step 2.
Administering a System: Managing Printers, Software, and Performance Managing System Performance Configurable Kernel Parameters In some cases, you may be able to get the results you need by resetting kernel parameters. For example, if a user frequently runs out of processes (symptom no more processes), raising the value of maxuprc might be the answer. NOTE Tunable kernel parameters can be static or dynamic (not requiring a system reboot or kernel rebuild).
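For example, a sketch of raising maxuprc with kmtune on 11.x (the value is illustrative; for a static tunable you must still rebuild the kernel and reboot):
/usr/sbin/kmtune -s maxuprc=200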
Administering a System: Managing Printers, Software, and Performance Managing System Performance • “SAM” on page 737 • “The top Command” on page 737 • “OpenView Products” on page 738 • “Kernel Resource Monitor (KRM)” on page 739 HP also provides several sources for tools and support for HP-UX. See http://www.software.hp.com. This web page has links to: • HP-UX 3rd party and public domain software This catalog contains over 1000 packages in binary and source format.
Administering a System: Managing Printers, Software, and Performance Managing System Performance OpenView Products A broad portfolio of OpenView based products to help you manage your HP-UX and Windows NT based systems is available from HP and HP OpenView Solutions Partners.
Administering a System: Managing Printers, Software, and Performance Managing System Performance HP MeasureWare Agent is a comprehensive long-term performance tool which collects, alarms on, and manages system performance information as well as metrics from other sources such as database probes. It provides data and alarms for PerfView, HP OpenView NNM or IT/Operations as well as third-party products.
8 Administering a System: Managing System Security
This chapter describes security measures for both standard and trusted HP-UX systems.
Developing a security policy is an extensive and complicated process. Complete coverage of system security is beyond the scope of this chapter. You should consult computer security trade books and adopt security measures that fit your business needs.
References
The following book is suggested as a good source of security information: Practical UNIX & Internet Security, by Simson Garfinkel and Gene Spafford, O’Reilly & Associates, 1996, ISBN 1-56592-148-8.
Administering a System: Managing System Security Standard System Security Standard System Security The following sections describe standard system security as it is available without the Trusted System environment, HP-UX Bastille, or the optional security packages.
Administering a System: Managing System Security Planning System Security Planning System Security There is no one single method for developing a security policy. The process below provides a general approach. • Form a security policy. The policy will help you to make appropriate choices when you need to make difficult decisions later on. • Identify what you need to protect. These are your assets such as employees, hardware, data (on-site and off-site), and documentation.
Administering a System: Managing System Security Planning System Security • Erase obsolete data and securely dispose of console logs and printouts. • Erase disks and diskettes before disposing of them. Maintaining System Security Maintaining system security involves: • Identifying Users. All users must have a unique login identity (ID) consisting of an account name and password. • Authenticating Users.
Administering a System: Managing System Security Planning System Security CAUTION Of particular importance: • Do not run or copy software whose origin you do not know. Games and pirated software are especially suspect. • Use, and encourage all users to use, the HP-UX security features provided to the fullest practical extent. • Monitor and follow the recommendations given in HP-UX security bulletins.
Administering a System: Managing System Security Planning System Security To subscribe to automatically receive new HP Security Bulletins, use your browser to access the HP Electronic Support Center page: • In the U.S., Canada, Asia Pacific, and Latin America, use: http://us-support.external.hp.com • In Europe, use: http://europe-support.external.hp.com Click on the Technical Knowledge Database, register as a user (remember to save the User ID assigned to you, and your password).
Administering a System: Managing System Security Managing Standard Passwords and System Access Managing Standard Passwords and System Access The password is the most important individual user identification symbol. With it, the system authenticates a user to allow access to the system. Since they are vulnerable to compromise when used, stored, or known, passwords must be kept secret at all times.
Administering a System: Managing System Security Managing Standard Passwords and System Access • Do not choose a word found in a dictionary in any language, even if you spell it backwards. Software programs exist that can find and match it. • Do not choose a password easily associated with you, such as a family or pet name, or a hobby. • Do not use simple keyboard sequences, such as asdfghjkl, or repetitions of your login (e.g., if your login is ann; a bad password is annann).
The fields contain the following information (listed in order), separated by colons:
1. User (login) name, consisting of up to 8 characters. (In the example, robin)
2. Encrypted password field. (Z.yxGaSvxAXGg)
3. User ID (uid), an integer ranging from 0 to MAXINT-1 (equal to 2,147,483,646 or 2^31 - 2). (102)
4. Group ID (gid), from /etc/group, an integer ranging from 0 to MAXINT-1. (99)
5.
Administering a System: Managing System Security Managing Standard Passwords and System Access allowed to function as pseudo-accounts, with entries listed in /etc/passwd. The customary pseudo- and special accounts are shown in Figure 8-1 on page 751.
Administering a System: Managing System Security Managing Standard Passwords and System Access • Cancel system access promptly when a user is no longer an employee. • Establish a regular audit schedule to review remote usage. • Connect the modems and dial-back equipment to a single HP-UX system, and allow network services to reach the destination system from that point. • Exceptions to dial-back must be made for UUCP access. Additional restrictions are possible through proper UUCP configuration.
Administering a System: Managing System Security Managing Access to Files and Directories Managing Access to Files and Directories On a traditional UNIX system, file access is controlled by granting permissions to the file owner, the file’s group, and all other users. These can be set with the chmod command and displayed with the ll (ls -l) command. (See chmod (1) and ls (1).
Administering a System: Managing System Security Managing Access to Files and Directories Using HFS Access Control Lists (ACLs) HFS ACL permissions are set with the chacl command and displayed with the lsacl command. (See chacl (1) and lsacl (1).) IMPORTANT You must use chmod with its -A option when working with files that have HFS ACL permissions assigned. Without the -A option, chmod will delete the ACL permissions from the file. The syntax is: chmod -A mode file...
Example 8-1 Creating an HFS ACL
Suppose you use the chmod command to allow only yourself write permission to myfile. (This also deletes any previous HFS ACLs.)
$ chmod 644 myfile
$ ll myfile
-rw-r--r--   1 allan      users            0 Sep 21 16:56 myfile
$ lsacl myfile
(allan.%,rw-)(%.users,r--)(%.%,r--) myfile
The lsacl command displays just the default (no ACL) values, corresponding to the basic owner, group, and other permissions.
HFS ACLs and HP-UX Commands and Calls
• The following commands and system calls work with ACLs on HFS file systems:
❏ chacl: Change HFS ACLs of files. See chacl (1).
❏ getaccess: List user’s access rights to files. See getaccess (1).
❏ lsacl: List HFS ACLs of files. See lsacl (1).
❏ getaccess(): Get a user’s effective access rights to a file. See getaccess (2).
❏ find: Can identify files whose ACL entries match or include specific ACL patterns on HFS or JFS file systems. See find (1).
❏ ls -l: The long form indicates the existence of HFS or JFS ACLs by displaying a + after the file’s permission bits. See ls (1).
❏ mailx: Does not support optional ACL entries on /var/mail/* files. See mailx (1).
Administering a System: Managing System Security Managing Access to Files and Directories Using JFS Access Control Lists (ACLs) This section describes JFS Access Control Lists and how to use them. NOTE JFS supports ACLs beginning with JFS 3.3. JFS is available for HP-UX 11.0 from the HP Software Depot, http://software.hp.com and included in the operating environments for HP-UX 11i. See the HP JFS documentation on http://docs.hp.com for more information about installing JFS on HP-UX systems.
Administering a System: Managing System Security Managing Access to Files and Directories The second and third entries in a minimal ACL specify the permission granted to members of the file’s owning group; the permissions specified in these entries are exactly equal in a minimal ACL. For example, ACL entries granting read-only access to the file’s owning group would look like this: group::r-class:r-The class and group entries will be described at length later in “JFS ACL Class Entries” on page 760.
The second and third entries in a minimal ACL specify the permission granted to members of the file’s owning group; the permissions specified in these entries are exactly equal in a minimal ACL. For example, ACL entries granting read-only access to the file’s owning group would look like this:
group::r--
class:r--
The class and group entries will be described at length later in “JFS ACL Class Entries” on page 760.
user:boss:rwx
Similarly, additional group entries grant and deny access to specific group IDs on your system. For example, an ACL with the following entry would deny access to a user in the group spies:
group:spies:---
JFS ACL Class Entries
Class entries are distinct from owning group entries. In a file with a minimal ACL, the owning group and class ACL entries are identical.
Administering a System: Managing System Security Managing Access to Files and Directories ACL entries are unaffected. However, when we grant read-execute permissions to the group dev, the upper bound on permissions (the class entry) is extended to include execute permission.
Administering a System: Managing System Security Managing Access to Files and Directories Example 8-8 ls -l Output for exfile with JFS ACL $ ls -l exfile -rw-r--rw-+ 1 jsmith users 12 Sep 20 15:02 exfile Default JFS Access Control Lists Often, you will want all the files created in a directory to have certain ACL entries. For example, you might want to allow another person to write to any file in a directory of yours where the two of you are working on something together.
Administering a System: Managing System Security Managing Access to Files and Directories group::rwgroup:dev:rwclass:rwother:--default:user:boss:r--default:user:jjones:r-default:group:dev:r-With these entries in place, any new file created in the directory projectdir could have an ACL like that shown below for planfile. The entries for user:boss, user:jjones, and group:dev are generated from the default entries on the projectdir directory.
group::rw-
group:dev:rw-
class:rw-
other:---
default:user:boss:r--
default:user:jjones:r--
default:group:dev:r--
With these entries in place, any new file created in the directory projectdir could have an ACL like that shown below for planfile. The entries for user:boss, user:jjones, and group:dev are generated from the default entries on the projectdir directory.
default:user:boss:r--
default:user:jjones:r--
default:group:dev:r--
How the System Generates a JFS ACL
Whenever a file is created on a VxFS version 4 file system, the system initializes a minimal JFS ACL for the file, containing a user entry for the owner permissions, a group entry for the owning group permissions, a class entry for the owning group permissions, and an other entry for the other group permissions.
group::rw-
class:rw-
other:r--
If setacl is used to give read-write permission to user2 and user3 and read-only permission to group2, getacl would produce the following output:
Example 8-13 Example getacl Output after Additions to the ACL
$ getacl junk
# file: junk
# owner: user1
# group: group1
user::rw-
user:user2:rw-
user:user3:rw-
group::rw-
group:group2:rwx
class:rwx
other:r--
Note that the class entry changed to include execute permission.
Administering a System: Managing System Security Managing Access to Files and Directories Using setacl -f If you are adding or changing several entries, you will probably want to use a different procedure. You can save the ACL to a file, edit it, adding, changing, or deleting entries to produce whatever ACL you want, and then apply this new ACL to the file. For example, you could save the ACL to a file with this command: getacl junk > junk.acl Then you could edit it so that it appeared as below.
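After editing the file, you would apply the new ACL with the -f option; a minimal sketch:
setacl -f junk.acl junk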
Administering a System: Managing System Security Managing Access to Files and Directories be granted in practice.
Administering a System: Managing System Security Managing Access to Files and Directories • A JFS directory’s ACL can have default entries, which are applied to files subsequently created in that directory. HFS ACLs do not have this capability. • An HFS ACL has an owner that can be different from the owner of the file the ACL controls. JFS ACLs are owned by the owner of the corresponding file. • An HFS ACL can have different entries for a particular user in specific groups.
Administering a System: Managing System Security Managing Access to Files and Directories ACLs in a Network Environment ACLs are not visible on remote files by Network File System (NFS), although their control over access permissions remains effective. Individual manpage entries specify the behavior of the various system calls, library calls, and commands under these circumstances.
Administering a System: Managing System Security Managing Access to Files and Directories Protecting User Accounts These guidelines should be followed to protect user accounts: • Except for the owners, home directories should not be writable because it allows any user to add and remove files from them. • Users’ .profile, .kshrc, .login, and .cshrc files should not be writable by anyone other than the account owner. • A user’s .
Administering a System: Managing System Security Managing Access to Files and Directories • Protect all disk special files: ❏ Write-protect all disk special files from general users, to prevent inadvertent data corruption. Turn off write access for group and other. ❏ Read-protect disk special files to prevent disclosure. Turn off read access for other.
Administering a System: Managing System Security Guidelines for Running a Secure System Guidelines for Running a Secure System Guidelines for Handling Setuid and Setgid Programs Since they pose great security liability to your system, note which programs are setuid and setgid and • Stay vigilant of any changes to them. • Investigate further any programs that appear to be needlessly setuid. • Change the permission of any unnecessarily setuid program to setgid.
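For example, to locate all setuid and setgid programs on the system for review (a sketch):
find / -type f \( -perm -4000 -o -perm -2000 \) -exec ls -ld {} \;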
Administering a System: Managing System Security Guidelines for Running a Secure System However, running a setuid or setgid program changes the euid or egid of the process from that associated with the owner to that of the object. The processes spawned acquire their attributes from the object, giving the user the same access rights as the program’s owner and/or group. • If the setuid bit is turned on, the privileges of the process are set to that of the owner of the file.
Guidelines for Limiting Setuid Power
Use great caution if you add setuid-to-root programs to an existing system. Adding a setuid-to-root program changes the system configuration, and might compromise your security.
Enforce restrictive use of privileged programs through the following suggestions:
• Use setuid and setgid only when absolutely necessary.
• Make sure that no setuid program is writable by others.
Administering a System: Managing System Security Guidelines for Running a Secure System • Do not use the creat() system call to make a lock file. Use lockf() or fcntl() instead. See lockf (2) and fcntl (2). • Be especially careful to avoid buffer overruns, such as through the use of sprintf(), strcpy(), and strcat() without proper parameter length validation. See printf (3S) and string (3C).
• Daily incremental and full weekly backups are recommended. Synchronize your backup schedule with the information flow in your organization. For example, if a major database is updated every Friday, you might want to schedule your weekly backup on Friday evenings.
• If all files must be backed up on schedule, request that all users log off before performing the backup.
Administering a System: Managing System Security Guidelines for Running a Secure System • Auditing is not enabled automatically when you have recovered the system. Be sure to turn auditing on. Guidelines for Mounting and Unmounting a File System The mount command enables you to attach removable file systems and disk or disk partitions to an existing file tree. The mount command uses a file called /etc/fstab, which contains a list of available file systems and their corresponding mount positions.
Administering a System: Managing System Security Guidelines for Running a Secure System ❏ Mount the foreign file system read-only at that location, for example, by loading the disk and typing: # mount /dev/disk1 /securefile -r ❏ Check all directories for special objects and privileged programs, and verify the identity of every program. ❏ Run ncheck -s to scan for setuid and setgid programs and device files, and investigate any suspicious findings.
Administering a System: Managing System Security Guidelines for Running a Secure System 3. Mount all file systems, using mount -a. Until their integrity has been verified, set restrictive directory permissions (drwx------) to prevent users from accessing the questionable files. This is a short-term solution only. 4. Compare file size from the previously backed-up system to the current one. Examine the dates that files were last written, check sums, byte count, inodes, and ownership.
Administering a System: Managing System Security Guidelines for Running a Secure System Tracking Root A useful method to keep track of system access and reduce security breaches on standard and trusted servers is to physically secure the system console and allow root to login only at the system console. Users logging in through other ports must first log in as themselves, then execute su to become root.
Administering a System: Managing System Security Controlling Security on a Network Controlling Security on a Network From the perspective of security, networked systems are more vulnerable than standalone systems. Networking increases system accessibility, but also add greater risk of security violations. While you cannot control security over the network, you can control the security of each node on the network to limit penetration risk without reducing the usefulness of the system or user productivity.
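On HP-UX, one way to enforce this is with the /etc/securetty file: when it exists, root logins are permitted only on the terminals listed in it (a sketch):
echo console > /etc/securetty
chmod 400 /etc/securetty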
Administering a System: Managing System Security Controlling Security on a Network 4. Control root and local security on every node in your administrative domain. A user with superuser privileges on any machine in the domain can acquire those privileges on every machine in the domain. 5. Maintain consistency of user name, uid, and gid among password files in your administrative domain. 6. Maintain consistency among any group files on all nodes in your administrative domain.
Administering a System: Managing System Security Controlling Security on a Network Understanding Network Services HP-UX provides various networking services, each providing a means of authentication, either through password verification or authorization set up in a file on the remote system. Table 8-2 Access Verification for Network Services Network service Access verification ftp Password verification. See ftp (1). mount Entry in /etc/exports. See mount (1M). rcp Entry in .rhosts or hosts.
Administering a System: Managing System Security Controlling Security on a Network The service-name is the official name (not an alias) of a valid service in the file /etc/services. The service-name for RPC-based services (NFS) is the official name (not an alias) of a valid service in the file /etc/rpc. The wildcard character * and the range character - are permitted in addresses. Refer to inetd.sec (4) for complete details on the syntax and use of this file.
Administering a System: Managing System Security Controlling Security on a Network file system without having logged into the server system. See “Managing File Systems” on page 602 for more information. See also exports (4) for further information on controlling access to exported file systems. Server Vulnerability Server security is maintained by setting restrictive permissions on the file /etc/exports. Root privileges are not maintained across NFS.
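For example, entries like the following (the addresses are illustrative) allow rlogin connections only from hosts on networks 10.3 through 10.5 and deny ftp connections from one host:
login allow 10.3-5.*
ftp deny 192.54.24.5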
Administering a System: Managing System Security Controlling Security on a Network How to Safeguard NFS-Mounted Files • If possible, make sure that the same person administers both client and server systems. • Maintain uniformity of user ID and group ID for server and client systems. • Stay vigilant of /dev files in file systems exported from server. • Restrict write access to the /etc/passwd and /tcb/files/auth/*/* client files.
Administering a System: Managing System Security Trusted System Security Trusted System Security The following sections describe the process and effect of adding Trusted System security to a standard HP-UX system.
Administering a System: Managing System Security Setting Up Your Trusted System Setting Up Your Trusted System To set up and maintain a Trusted System, follow these steps: 1. Establish an overall security policy appropriate to your work site. See “Planning System Security” on page 744. 2. Inspect all existing files on your system for security risks, and remedy them. This is important before you convert to a Trusted System. Thereafter, examine your files regularly, or when you suspect a security breach.
Administering a System: Managing System Security Setting Up Your Trusted System • Converts the at, batch and crontab input files to use the submitter’s audit ID. • Starting with HP-UX 11.0, changes the default value for umask to 077 (-rw-------, drwx------); see umask (1). 5. Verify that the audit files are on your system: a. Use swlist -l fileset to list the installed file sets. Look for the file set called SecurityMon which contains the auditing program files.
Administering a System: Managing System Security Auditing a Trusted System Auditing a Trusted System An HP-UX Trusted System provides auditing. Auditing is the selective recording of events for analysis and detection of security breaches. Using SAM to perform all auditing tasks is recommended as it focuses choices and helps avoid mistakes. However, all auditing tasks can be done manually using the following audit commands: audsys Starts/stops auditing; sets and displays audit file information.
Administering a System: Managing System Security Auditing a Trusted System A record is written when the event type is selected for auditing, and the user initiating the event has been selected for auditing. The login event is an exception. Once selected, this event will be recorded whether or not the user logging in has been selected for auditing. • When an event type is selected, its associated system calls are automatically enabled.
Administering a System: Managing System Security Auditing a Trusted System Table 8-3 Event Type Audit Event Types and System Calls Description of Action Associated System Calls admin Log all administrative and privileged events acct (2), adjtime (2), audctl (2), audswitch (2), clock_settime (2), getksym (2), getprivgrp (2), kload (2)a, modadm (2)a, modload (2), modpath (2), modstat (2), moduload (2), mpctl (2), plock (2), reboot (2), sched_setparam (2), sched_setscheduler (2), serialize (2), setaudid
Administering a System: Managing System Security Auditing a Trusted System Table 8-3 Event Type Audit Event Types and System Calls (Continued) Description of Action Associated System Calls modaccess Log all access modifications other than Discretionary Access Controls chdir (2), chroot (2), fchdir (2), link (2), lockf (2), lockf64 (2), rename (2), setcontext (2), setgid (2), setgroups (2), setpgid (2), setpgrp (2), setpgrp2 (2), setpgrp3 (2), setregid (2), setresgid (2), setresuid (2), setsid (2), setu
Administering a System: Managing System Security Auditing a Trusted System Table 8-4 Event Type Audit Event Types and System Commands Description of Action Associated System Commands admin Log all administrative and privileged events sam (1M), audisp (1M), audevent (1M), audsys (1M), audusr (1M), chfn (1), chsh (1), passwd (1), pwck (1M), init (1M) ipcdgram Log ipc datagram transactions udp (7P) login Log all logins and logouts login (1), init (1M) modaccess Log all access modifications other
Administering a System: Managing System Security Auditing a Trusted System audevent Select events to be audited; see audevent (1M) audisp Display the audit data; see audisp (1M) audsys Start or halt the auditing system; see audsys (1M) audusr Select users to be audited; see audusr (1M) init Change run levels, users logging off; see init (1M) lpsched Schedule line printer requests; see lpsched (1M) fbackup Flexible file backup; see fbackup (1M) ftpd File transfer protocol daemon; see ftpd (1M)
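For example, a sketch of starting auditing manually (the log file name, size, events, and user are illustrative; SAM is the recommended interface):
/usr/sbin/audsys -n -c /.secure/etc/audfile1 -s 5000
/usr/sbin/audevent -P -F -e admin -e login -e moddac
/usr/sbin/audusr -a jsmith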
Administering a System: Managing System Security Auditing a Trusted System The primary log file is where audit records begin to be collected. When this file approaches a predefined capacity (its Audit File Switch (AFS) size), or when the file system on which it resides approaches a predefined capacity (its File Space Switch (FSS) size), the auditing subsystem issues a warning.
Administering a System: Managing System Security Auditing a Trusted System The audit data shows what the user program passed to the kernel. In this case, what got passed is not initialized due to a user code error, but the audit system still correctly displays the uninitialized values that were used. • System calls that take file name arguments may not have device and inode information properly recorded. The values will be zero if the call does not complete successfully.
Administering a System: Managing System Security Auditing a Trusted System Using Auditing in an NFS Diskless Environment NOTE NFS diskless is not supported in HP-UX 10.30 and later releases. Auditing can only be done on Trusted Systems. Each diskless client has its own audit file. Each system on the cluster must administer its own auditing, including making sure the file system where the audit files are to reside is mounted. The audit record files are stored in the /.secure directory.
Administering a System: Managing System Security Managing Trusted Passwords and System Access Managing Trusted Passwords and System Access The password is the most important individual user identification symbol. With it, the system authenticates a user to allow access to the system. Since they are vulnerable to compromise when used, stored, or known, passwords must be kept secret at all times.
Administering a System: Managing System Security Managing Trusted Passwords and System Access • Change the initial password immediately; change the password periodically. • Report any changes in status and any suspected security violations. • Make sure no one is watching when entering the password. • Choose a different password for each machine on which there is an account.
Administering a System: Managing System Security Managing Trusted Passwords and System Access Password Files A Trusted System maintains multiple password files: the /etc/passwd file and the files in the protected password database /tcb/files/auth/ (see “The /tcb/files/auth/ Database” on page 804). Each user has an entry in two files, and login looks at both entries to authenticate login requests. If NIS+ is configured, this process is more complex; see “Network Information Service Plus (NIS+)” on page 839.
The fields contain the following information (listed in order), separated by colons:
1. User (login) name, consisting of up to 8 characters. (In the example, robin)
2. Unused password field, held by an asterisk instead of an actual password. (*)
3. User ID (uid), an integer ranging from 0 to MAXINT-1, equal to 2,147,483,646 or 2^31 - 2. (102)
4. Group ID (gid), from /etc/group, an integer ranging from 0 to MAXINT-1. (99)
5.
Administering a System: Managing System Security Managing Trusted Passwords and System Access On Trusted Systems, key security elements are held in the protected password database, accessible only to superusers. Password data entries should be set via SAM. Password data which are not set for a user will default to the system defaults stored in the file /tcb/files/auth/system/default. The protected password database contains many authentication entries for the user.
Administering a System: Managing System Security Managing Trusted Passwords and System Access • Number of unsuccessful login attempts; cleared upon successful login. • Maximum number of login attempts allowed before account is locked. Password Selection and Generation On Trusted Systems, the system administrator can control how passwords are generated. The following password generation options are available: • User-generated passwords.
Administering a System: Managing System Security Managing Trusted Passwords and System Access • Expiration time. A time after which a user must change that password at login. • Warning time. The time before expiration when a warning will be issued. • Lifetime. The time at which the account associated with the password is locked if the password is not changed. Once an account is locked, only the system administrator can unlock it.
Administering a System: Managing System Security Managing Trusted Passwords and System Access the event is logged. The permitted range of access times is stored in the protected password database for users and may be set with SAM. Users that are logged in when a range ends are not logged out. Device-Based Access Control For each MUX port and dedicated DTC port on a Trusted System, the system administrator can specify a list of users allowed for access.
Administering a System: Managing System Security Managing Trusted Passwords and System Access
putpwent (3C) Write password file entries to /etc/passwd.
getspwent (3X) Get password entries from /tcb/files/auth/, provided for backward compatibility.
putspwent (3X) Write password entries to /tcb/files/auth/, provided for backward compatibility.
putprpwnam (3) Write password file entries to /tcb/files/auth/.
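As a minimal sketch of how routines in this family are called from C (using the standard getpwnam (3C) lookup; the user name is hypothetical and error handling is abbreviated):

#include <stdio.h>
#include <pwd.h>

int main(void)
{
    /* look up one /etc/passwd entry by name */
    struct passwd *pw = getpwnam("robin");   /* hypothetical user */

    if (pw == NULL) {
        fprintf(stderr, "no such user\n");
        return 1;
    }
    /* on a Trusted System the password field should hold "*" */
    printf("uid=%d gid=%d passwd field=%s\n",
           (int)pw->pw_uid, (int)pw->pw_gid, pw->pw_passwd);
    return 0;
}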
Administering a System: Managing System Security Configuring NFS Diskless Clusters for Trusted Systems Configuring NFS Diskless Clusters for Trusted Systems NOTE NFS diskless is not supported in HP-UX 10.30 and later releases. NFS diskless clusters on Trusted Systems come in two basic configurations. 1. Each member of the cluster has its own private password database, or 2. A single password database is shared across the entire cluster.
Administering a System: Managing System Security Configuring NFS Diskless Clusters for Trusted Systems Converting a Trusted Standalone System to Trusted Cluster You create the cluster using the Cluster Configuration area of SAM. When you add the first client, specify “private” for the password policy. SAM will add the client as a nontrusted system. You can then boot the client and convert the client to trusted status using the same procedure as in the previous case.
Administering a System: Managing System Security Configuring NFS Diskless Clusters for Trusted Systems chmod 500 /export/private_roots/CL_NAME/.
Administering a System: Managing System Security Configuring NFS Diskless Clusters for Trusted Systems Converting Trusted Standalone System to Trusted Cluster These instructions must be followed for each client that is added to the cluster. All of these instructions except for booting the client are to be performed on the cluster server. These instructions also assume the standalone system has already been converted to a Trusted System. 1. Use the Cluster Configuration area of SAM to add a client.
Administering a System: Managing System Security Configuring NFS Diskless Clusters for Trusted Systems
chmod 664 /export/private_roots/CL_NAME/tcb/files/ttys
cp /usr/newconfig/tcb/files/devassign \
   /export/private_roots/CL_NAME/tcb/files/devassign
chgrp root /export/private_roots/CL_NAME/tcb/files/devassign
chmod 664 /export/private_roots/CL_NAME/tcb/files/devassign
6. You can now boot the client.
Administering a System: Managing System Security HP-UX Bastille HP-UX Bastille Overview Bastille is a security hardening, lockdown tool that can be used to enhance the security of the HP-UX operating system. It provides customized lockdown on a system-by-system basis by encoding functionality similar to the Bastion Host (see “Documentation” on page 832) and other hardening/lockdown checklists. Bastille was originally developed by the open source community for use on Linux systems.
Administering a System: Managing System Security HP-UX Bastille For previous HP-UX 11.x and 11i releases, Bastille is also available from the HP Software Depot, at http://www.software.hp.com/. Additional Software If you install from an Operating Environment medium, the default Bastille installation automatically includes Bastille, Perl, Security Patch Check, IPFilter, and Secure Shell. If you downloaded from the HP Software Depot, you may need to download the other four packages as well.
Administering a System: Managing System Security HP-UX Bastille Predefined Configuration Files Beginning with HP-UX 11i v2, Bastille includes three predefined configuration files (see Table 8-5) that provide an increasing level of lockdown. The files are delivered in /etc/opt/sec_mgmt/bastille.
Table 8-5 Predefined Configuration Files
Configuration File Name / Install-Time Module / Description
HOST.config / Sec10Host / Host lockdown: no firewall; networking runs normally, including Telnet and FTP.
Administering a System: Managing System Security HP-UX Bastille Table 8-6 HOST.config: Security Settings
Administering a System: Managing System Security HP-UX Bastille b. The following ndd changes will be made:
ip_forward_directed_broadcasts=0
ip_forward_src_routed=0
ip_forwarding=0
ip_ire_gw_probe=0
ip_pmtu_strategy=1
ip_send_source_quench=0
tcp_conn_request_max=4096
tcp_syn_rcvd_max=1000
c. Settings only applied if software is installed.
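You can read any of these tunables back on a running system with ndd (1M); for example:

ndd -get /dev/ip ip_forwarding
ndd -get /dev/tcp tcp_syn_rcvd_max

(A value changed with ndd -set lasts only until the next reboot; persistent settings are kept in /etc/rc.config.d/nddconf.)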
Administering a System: Managing System Security HP-UX Bastille Table 8-7 MANDMZ.config: Additional Security Settings Category Actions Includes all security settings from HOST.config (Table 8-6)
Administering a System: Managing System Security HP-UX Bastille Table 8-8 DMZ.config: Additional Security Settings Category Actions Includes all security settings from HOST.config (Table 8-6) and MANDMZ.config (Table 8-7) IPFiltera Additions: • Block all traffic except Secure Shell, adding blocking for: — incoming HIDS agent connectionsb c — incoming WBEM connectionsd — incoming web admin connections — incoming web admin autostart connections a.
Administering a System: Managing System Security HP-UX Bastille • In the /etc/opt/sec_mgmt/bastille directory, you can copy a custom configuration to the config file (perhaps one you made with the interactive interface). Go to “Applying Bastille” on page 828 to install it. Typically, you would create a special configuration on one system and then copy that configuration to other systems that you wish to protect identically. You should also copy your modified TODO.
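For example, to protect a system with the predefined host lockdown from Table 8-5 (a sketch using the file names and commands described in this section):

cp /etc/opt/sec_mgmt/bastille/HOST.config /etc/opt/sec_mgmt/bastille/config
bastille -b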
Administering a System: Managing System Security HP-UX Bastille Interactive Configuration CAUTION Since the interactive configuration uses an insecure GUI, it is important that you review “Security Considerations” on page 816 before proceeding. Bastille uses a series of questions, extracted from the file /etc/opt/sec_mgmt/bastille/Questions.txt, to prepare the configuration file, /etc/opt/sec_mgmt/bastille/config.
Administering a System: Managing System Security HP-UX Bastille At this point, it displays the title screen (Figure 8-2) of the graphical interface.
Administering a System: Managing System Security HP-UX Bastille After the Title Screen, Bastille always displays the Security Patch Check screen (Figure 8-3). This allows you to reconfigure this important software. Figure 8-3 Bastille Security Patch Check (long) Navigation You can return to a previous question by selecting the Back button. You move to the next question with the OK button. Most questions take Yes or No as an answer; click the appropriate button.
Administering a System: Managing System Security HP-UX Bastille Long and Short Explanations Many of the question screens have both short and long explanations. You can toggle between them with the Explain Less/Explain More buttons. Figure 8-3 shows the long version; Figure 8-4 shows the corresponding short version. Figure 8-4 Bastille Security Patch Check (short) Progress Checkmarks As you complete a section of the questions, Bastille places a check mark in the Modules list, as shown in Figure 8-5.
Administering a System: Managing System Security HP-UX Bastille When you reach (or select) the End Screen, you can go back and make further modifications (by choosing Back or No) or you can complete your session (by choosing Yes and OK). Figure 8-6 Bastille End Screen On the Save Changes screen (Figure 8-7), you can go back and make further modifications, exit without saving the current configuration, or save the current configuration in /etc/opt/sec_mgmt/bastille/config and go on.
Administering a System: Managing System Security HP-UX Bastille If you save your changes, the Finishing Up screen (Figure 8-8) gives you one more chance to change the configuration, or you can exit without applying the new configuration, or you can have the new configuration applied immediately.
Administering a System: Managing System Security HP-UX Bastille may take a number of minutes, depending on the speed of your machine.
Administering a System: Managing System Security HP-UX Bastille If there are errors, Bastille has locked down your system as much as possible. When you correct the problems, you can run bastille -b to apply the rest of the lockdown. If you prefer, you can return the system to its unlocked state with the revert command, bastille -r, and then make any corrections that you need. 2. Review the log files. /var/opt/sec_mgmt/bastille/log/action-log Records the specific actions that Bastille performed.
Administering a System: Managing System Security HP-UX Bastille Reverting Bastille To revert the security configuration to the state before Bastille was run, execute the command: # bastille -r If there are any manual actions that need to be performed to restore the pre-Bastille state, this process creates a file, /var/opt/sec_mgmt/bastille/TOREVERT.txt. It is important that you perform the listed actions.
Administering a System: Managing System Security HP-UX Bastille Stack performance is slightly slower with a Bastille configuration that utilizes IPFilter. • HP-UX HIDS If you are also running HP-UX Host Intrusion Detection System, you may need to modify the IPFilter firewall rules. See HP-UX Host Intrusion Detection System Administrator’s Guide for details. • MC/ServiceGuard MC/ServiceGuard’s use of dynamic ports does not work if the MANDMZ.config or DMZ.
Administering a System: Managing System Security HP-UX Bastille Command Execution The bastille command performs the following operations. bastille Starts an interactive session to create a configuration file for HP-UX in the configuration file, /etc/opt/sec_mgmt/bastille/config. bastille -b Executes the instructions in the configuration file, automatically making some changes to your system and creating a TODO.txt list of commands for you to edit and execute.
Administering a System: Managing System Security HP-UX Bastille /etc/opt/sec_mgmt/bastille/HOST.config Predefined configuration file. See “Predefined Configuration Files” on page 817. /etc/opt/sec_mgmt/bastille/MANDMZ.config Predefined configuration file. See “Predefined Configuration Files” on page 817. /var/opt/sec_mgmt/bastille/log/action-log Automatic actions that Bastille performed when applying the current configuration.
Administering a System: Managing System Security Other Security Packages Other Security Packages The following sections describe a number of other packages available to enhance security on your standard or trusted HP-UX system.
Administering a System: Managing System Security HP-UX Host Intrusion Detection System HP-UX Host Intrusion Detection System The HP-UX Host Intrusion Detection System (HP-UX HIDS) can enhance local host-level security within your network by automatically monitoring each configured host system within the network for signs of unwanted and potentially damaging intrusions.
Administering a System: Managing System Security HP-UX Shadow Passwords HP-UX Shadow Passwords Increasing computational power available to password crackers has made the nonhidden passwords in the /etc/passwd file vulnerable to decryption. Shadow passwords enhance system security by hiding user encrypted passwords in a shadow password file.
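If your shadow password product delivers the pwconv (1M) and pwunconv (1M) commands (an assumption; check the documentation for your release), enabling and disabling the feature is a one-command operation:

pwconv      # move encrypted passwords from /etc/passwd to /etc/shadow
pwunconv    # merge them back into /etc/passwd and remove /etc/shadow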
Administering a System: Managing System Security HP-UX Shadow Passwords Programming APIs The way to interface with the /etc/shadow file is through the industry standard getspent (3C) calls. These calls are similar to the getpwent (3C) interfaces (see the sketch at the end of this section). Other Software Support HP-UX Shadow Passwords are supported by: • Lightweight Directory Access Protocol (LDAP). You can download LDAP-UX Integration, version B.03.00 or later, from http://software.hp.com. • Ignite-UX version B.4.1 or later.
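As a minimal sketch of the getspent (3C) style interfaces mentioned under “Programming APIs” above (the user name is hypothetical, and the program must run with enough privilege to read the shadow file):

#include <stdio.h>
#include <shadow.h>

int main(void)
{
    /* look up one shadow entry by name */
    struct spwd *sp = getspnam("robin");   /* hypothetical user */

    if (sp == NULL) {
        fprintf(stderr, "no shadow entry (or insufficient privilege)\n");
        return 1;
    }
    printf("user %s: password last changed %ld days after the epoch\n",
           sp->sp_namp, sp->sp_lstchg);
    return 0;
}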
Administering a System: Managing System Security Network Information Service Plus (NIS+) Network Information Service Plus (NIS+) NIS+, the next generation of the Network Information Service (NIS), was introduced in HP-UX Release 10.30 and is supported in both standard and trusted HP-UX systems. NIS+ is not an enhancement to NIS; it is a whole new service.
Administering a System: Managing System Security Network Information Service Plus (NIS+) Using SAM with NIS+ The HP-UX System Administration Manager (SAM) supports the administration of users and groups in the NIS+ tables. Operations that support locally defined users and groups (including adding, modifying, and removing) also support users and groups defined in the NIS+ tables. This includes the administration of user attributes when a system is in trusted mode.
Administering a System: Managing System Security Network Information Service Plus (NIS+) 3. Start the ttsyncd daemon. See ttsyncd (1M). You can execute the command, /sbin/init.d/comsec start Setting Up the Client 4. On each client, perform the following steps in either order: • Set up the NIS+ client. The steps are described in Installing and Administering NFS Services. See also nisserver (1M), nispopulate (1M), and nisclient (1M). • Convert the client to trusted mode using SAM.
Administering a System: Managing System Security Network Information Service Plus (NIS+) To stop the daemon, /sbin/init.d/comsec stop The ttsyncd daemon can be started on an HP-UX master server even if it is in standard mode. If the daemon is not started or if the server is non-HP-UX, the security attributes need to be managed on client systems locally. In this case, there will not be central administration for security.
Administering a System: Managing System Security Pluggable Authentication Modules (PAM) Pluggable Authentication Modules (PAM) The Pluggable Authentication Module (PAM) is an industry standard authentication framework. PAM gives system administrators the flexibility of choosing any authentication service available on the system to perform authentication. The PAM framework also allows new authentication service modules to be plugged in and made available without modifying the applications.
Administering a System: Managing System Security Pluggable Authentication Modules (PAM) System administrators can require CDE users to conform to the security policies enforced in the Trusted System databases. Control is available on both a system-wide and an individual user basis. The system files are: HP References /etc/pam.conf System-wide control file. /etc/pam_user.conf Individual user control file. pam (3), pam.conf (4), pam_updbe (5), pam_user.conf (4).
Administering a System: Managing System Security Pluggable Authentication Modules (PAM) If this file is corrupt or missing from the system, root is allowed to log into the console in single-user mode to fix the problem. See pam (3), pam.conf (4), and sam (1M) for additional information. Per-User Configuration The PAM configuration file /etc/pam_user.conf configures PAM on a per-user basis. /etc/pam_user.conf is optional. It is needed only if PAM applications need to behave differently for various users.
Administering a System: Managing System Security Pluggable Authentication Modules (PAM) sufficient If the test succeeds, then no further tests are performed. module-path A path name to a shared library object that implements the service. If the path is not absolute, it is assumed to be relative to /usr/lib/security, where the HP-supplied modules reside. The module-path for the standard HP-UX module is /usr/lib/security/libpam_unix.1.
Administering a System: Managing System Security Pluggable Authentication Modules (PAM) Test the password that the user entered for the first module of the module-type. If it doesn’t match the database or no password has been entered, prompt the user for a password. use_psd Request the user’s personal identification number (Enter PIN:) and use it to read and decode the password from the user’s personal security device. If the password doesn’t match the database, quit. This option is not supported by DCE.
Administering a System: Managing System Security Pluggable Authentication Modules (PAM) user’s personal security device. If the password doesn’t match the database, quit. If it matches, prompt the user for a new password. This option is not supported by DCE. Default: If none of these options is specified, each module behaves independently, each requesting passwords and data in its normal fashion. Lines beginning with # are comments. The default contents of /etc/pam.conf include entries such as:
Administering a System: Managing System Security Pluggable Authentication Modules (PAM)
dtlogin password required /usr/lib/security/libpam_unix.1
dtaction password required /usr/lib/security/libpam_unix.1
OTHER password required /usr/lib/security/libpam_unix.1
The pam_user.conf Configuration File Individual users can be assigned different options by listing them in the user control file /etc/pam_user.conf.
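For example, the per-user entries used in the login example that follows might look like this (format as described in pam_user.conf (4); “anna” is a hypothetical user name standing in for the first user in that example):

# login  module-type  module-path                       options
anna     auth         /usr/lib/security/libpam_unix.1   debug
anna     auth         /usr/lib/security/libpam_dce.1    try_first_pass
isabel   auth         /usr/lib/security/libpam_unix.1   debug use_psd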
Administering a System: Managing System Security Pluggable Authentication Modules (PAM) How PAM Works: A Login Example This example describes the auth process for login. If there is a single, standard login/auth entry in /etc/pam.conf, such as: login auth required /usr/lib/security/libpam_unix.1 login proceeds normally. If there are two or more system-wide login/auth entries, such as:
login auth required /usr/lib/security/libpam_unix.1
login auth required /usr/lib/security/libpam_dce.1
Administering a System: Managing System Security Pluggable Authentication Modules (PAM) 2 and 3 of /etc/pam.conf with “debug” and “try_first_pass”, respectively. Then the modules specified by lines 2 and 3 are executed with the revised options. When isabel logs in, line 1 in /etc/pam.conf causes PAM to read /etc/pam_user.conf and temporarily replace the options field of line 2 of /etc/pam.conf with “debug use_psd”. Line 3 is unchanged.
Administering a System: Managing System Security Secure Internet Services (SIS) Secure Internet Services (SIS) Secure Internet Services (SIS) provides network authentication and authorization when it is used in conjunction with the HP DCE security services, the HP Praesidium/Security Server, or other software products that provide a Kerberos V5 Network Authentication Services environment. SIS was introduced as a separate product in HP-UX 10.20 with HP DCE.
Administering a System: Managing System Security Secure Internet Services (SIS) Environment SIS requires a Kerberos V5 network authentication services environment which includes a properly configured Key Distribution Center (KDC). Supported KDCs are the HP DCE security server, the HP Praesidium/Security Server, or any third-party KDC based on Kerberos Version 5 Release 1.0. A properly configured KDC must be running for the Secure Internet Services to work.
Administering a System: Managing System Security Security Patch Check Security Patch Check Security Patch Check is a tool that helps you automate the process of checking the current list of HP-UX security patches and bulletins and determining whether you need to patch, update, or manually configure your system to be in bulletin compliance. It runs on all HP-UX 11.0 and 11i systems.
Administering a System: Managing System Security Security Patch Check
# export https_proxy=http://mysys.mydomain.com:8088
# export http_proxy=http://mysys.mydomain.com:8088
For HTTP,
# export http_proxy=http://mysys.mydomain.com:8088
For FTP,
# export ftp_proxy=http://mysys.mydomain.com:8088
Documentation HP References The security_patch_check (1M) manpage, delivered in /opt/sec_mgmt/share/man/man1m.
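With the proxy variables set, a typical run might look like this (a sketch assuming the default install location and the catalog-retrieval option described in security_patch_check (1M); verify the options on your release):

/opt/sec_mgmt/spc/bin/security_patch_check -r

The resulting report lists the patches you need to install for bulletin compliance.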
Administering a Workgroup 9 Administering a Workgroup This information covers routine administration of a workgroup.
Administering a Workgroup
— “Installing New Systems” on page 384
— “Adding Users to a Workgroup” on page 388
— “Configuring Printers for a Workgroup” on page 433
— “Compatibility Between HP-UX Releases 10.x and 11.
Administering a Workgroup Managing Disks Managing Disks
• “Distributing Applications and Data” on page 61
• “Distributing Disks” on page 76
• “Capacity Planning” on page 77
• “Disk-Management Tools” on page 79
• Quick Reference for “Adding a Disk” on page 861
• Configuring Logical Volumes; see:
❏ “Managing Logical Volumes Using SAM” on page 571
❏ “Managing Logical Volumes Using HP-UX Commands” on page 571
❏ Examples:
— “Adding a Disk” on page 861
— “Adding a Logical Volum
Administering a Workgroup Managing Disks
— Planning a workstation or server’s swap; see “Designing Your Swap Space Allocation” on page 664
❏ Increasing Primary Swap; see “Configuring Primary and Secondary Swap” on page 670
❏ Reducing Primary Swap; see “Configuring Primary and Secondary Swap” on page 670
❏ “Adding, Modifying, or Removing File System Swap” on page 668
• “Configuring Dump” on page 671
• “Examples” on page 860
Examples NOTE All of the procedures that follow require you to be the root user.
Administering a Workgroup Managing Disks Adding a Disk For detailed information and instructions on adding a disk, see Configuring HP-UX for Peripherals. What follows is a quick reference; we’ll be using SAM. NOTE To configure the disk with disk striping, you must use lvcreate with the -i and -I options, not SAM (see “Setting Up Disk Striping” on page 589). Step 1. Shut down and power off the system. See “Shutting Down Systems” on page 520. Step 2. Connect the disk to the system and the power supply.
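Once the system is powered back on, you can confirm that HP-UX sees the new disk before continuing in SAM; for example:

/usr/sbin/ioscan -fnC disk

ioscan (1M) lists each disk in the system along with its hardware path and device files.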
Administering a Workgroup Managing Disks See “Exporting a File System (HP-UX to HP-UX)” on page 395 for more information. Step 7. To configure disk quotas for new file systems, follow directions under “Managing Disk Space Usage with Quotas” on page 620. Adding a Logical Volume For detailed discussion of LVM (Logical Volume Manager) see “Managing Disks” on page 556. The following is a quick reference; we’ll be using SAM. Step 1. Decide how much disk space the logical volume will need.
Administering a Workgroup Managing Disks To export the new file system(s) to other systems in the workgroup, go to Networking and Communications/Networked File Systems/ Exported Local File Systems, select Add from the Actions pull-down menu and follow SAM’s prompts. See “Exporting a File System (HP-UX to HP-UX)” on page 395. As a result of all this, SAM creates a new logical volume and mounts it on a new file system, for example, /dev/vg01/lvol7 mounted on /work/project5.
Administering a Workgroup Managing Disks You might see, for example, that volume group vg01 has 1800 MB of unallocated space out of a total of about 2500 MB, and you might also find (by pulling down the Actions menu and clicking on View More Information) that vg01 is spread across two disks. In this case it’s likely that each disk has 500 MB free. Step 5.
Administering a Workgroup Managing Disks Step 1. Decide how much more disk space the logical volume will need. For example, you might want to add 200 MB of swap, or an existing project might need an additional 1000 MB. Step 2.
Administering a Workgroup Managing Disks Extending a Logical Volume When You Can’t Use SAM Before you can extend a logical volume, you must unmount the file system mounted to it. In the case of system directories, such as /var and /usr, you will need to be in single-user mode to do this. NOTE Extending the root (/) logical volume is a special case. You will not be able to extend the root file system using the procedure described below.
Administering a Workgroup Managing Disks Step 2. Find out if any space is available: /sbin/vgdisplay You’ll see output something like this:
--- Volume groups ---
VG Name                     /dev/vg00
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      8
Open LV                     8
Max PV                      16
Cur PV                      1
Act PV                      1
Max PE per PV               2000
VGDA                        2
PE Size (Mbytes)            4
Total PE                    249
Alloc PE                    170
Free PE                     79
Total PVG                   0
The Free PE entry indicates the number of 4 MB extents available, in this case, 79 (316 MB). Step 3.
Administering a Workgroup Managing Disks /sbin/lvextend -L 332 /dev/vg00/lvol7 increases the size of this volume to 332 MB. Step 6. Unmount /usr: /sbin/umount /usr This is required for the next step, since extendfs can only work on unmounted volumes. Step 7. Extend the file system size to the logical volume size; for example: /sbin/extendfs /dev/vg00/rlvol7 Step 8. Remount /usr: /sbin/mount /usr Step 9.
Administering a Workgroup Managing Disks NOTE If the file system is exported to other systems, check on those other systems that no one is using it (fuser works on NFS-mounted file systems as of 10.x), and then unmount it on those systems before unmounting it on the server. Step 2. Back up the data in the logical volume. For example, to back up /work/project5 to the system default tape device: tar cv /work/project5 Step 3.
Administering a Workgroup Managing Disks Step 8. Recover the data from the backup; for example, tar xv recovers all the contents of a tape in the system default drive. Step 9. If /work/project5 will continue to be used by NFS clients, reexport it on the server (exportfs -a) and remount it on the clients (mount -a). Removing a Logical Volume In this example we’ll assume you want to remove a logical volume that is either unused or contains obsolete data. We’ll be using SAM.
Administering a Workgroup Managing Disks Step 2. Run SAM: /usr/sbin/sam Step 3. Make sure the volume group that contains the logical volume you want to mirror has enough free space. It needs at least as much free space as the logical volume you want to mirror currently has allocated to it - that is, you will be doubling the amount of physical space this volume requires.
Administering a Workgroup Managing Disks Removing a Mirror from a Logical Volume For detailed discussion of mirroring see “Creating and Modifying Mirrored Logical Volumes” on page 628. The following is a quick reference; we’ll be using SAM. Step 1. Run SAM: /usr/sbin/sam Step 2. Go to Disks and File Systems/Logical Volumes. Pull down the Actions menu and select Change # of Mirror Copies. Set the number of copies to zero (or to the number of copies you want to keep) on the menu that pops up.
Administering a Workgroup Managing Disks Step 4. Temporarily disable all paths to the disk: pvchange -a N /dev/dsk/cntndn Once the command completes, proceed to the next step. Step 5. Physically disconnect the bad disk and connect the replacement. Step 6. If you are replacing a mirror of the boot disk, set up the boot area on the disk. a.
Administering a Workgroup Managing Disks NOTE You can use the same procedure to replace a disk that contains unmirrored logical volumes. However, by removing the disk, you will permanently lose any unmirrored data on that disk. Therefore, before starting this procedure, confirm that you have a backup of any unmirrored logical volume, then halt any applications using it, and unmount any file system mounted on it.
Administering a Workgroup Managing Disks The SAM Volume Groups menu shows the free space for each volume group in megabytes; the pvdisplay command provides the same information in terms of physical extents; multiply Free PE by four to get free space in megabytes. Step 3. Do this step on the new server, that is, the system you plan to move the directory to, fp_server in this example. After selecting a volume group with sufficient space, create a new logical volume in it.
Administering a Workgroup Managing Disks If the umount fails on any system, run fuser -cu to see if anyone on that system still has files open, or is working in a directory, under /projects: fuser -cu /projects NOTE (10.x and later systems) fuser will not be aware of files opened in other directories within an editor. Step 7. Do this step on the original server, that is the system where the directory that is to be moved currently resides, in this example, wsb2600. Back up /projects.
Administering a Workgroup Managing Disks Step 9. Do this step on the new server, that is, the system you are moving the directory to, fp_server in this example. Export the directory; for example, by editing /etc/exports to include an entry such as, /work/project6 -async,anon=65534 and running the exportfs command to force the system to reread /etc/exports: exportfs -a You can also use SAM; see “Exporting a File System (HP-UX to HP-UX)” on page 395.
Administering a Workgroup How To: How To: Here’s information on: • “Determining What Version of the HP-UX Operating System is Running” on page 879 • “Backing Up and Recovering Directories: Quick Reference for tar” on page 880 • “Breaking Out of the Boot Screen” on page 881 • “Checking the System’s Run Level” on page 882 • “Managing Groups of Distributed Systems or Serviceguard Clusters” on page 882 • “Diagramming a System’s Disk Usage” on page 882 • “Finding Large Files” on page 885 • “Exami
Administering a Workgroup How To: • For information on adding, extending, mirroring, reducing, and removing logical volumes, “Managing Disks” on page 859 • “Adding a Logical Volume” on page 862 • “Moving a Directory to a Logical Volume on Another System” on page 874 Determining What Version of the HP-UX Operating System is Running To determine what version of operating system you are running and on which platform, use the uname command with the -a option: uname -a HP-UX tavi B.10.
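If you only need the release string, for example in a script, you can use the -r option (hypothetical output shown):

uname -r
B.10.20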
Administering a Workgroup How To: Backing Up and Recovering Directories: Quick Reference for tar The following examples may be useful for workstation users wanting to make a quick backup to tape or disk. For information on system backup, see “Backing Up Data” on page 674. • To create a tar backup to tape: tar cv /home/me/mystuff /work/project5/mystuff This can include files and directories. NOTE This overwrites anything already on the tape. • ❏ v (verbose) is optional throughout.
Administering a Workgroup How To: • To write out the tape table of contents to a file: tar tv > /home/me/backup.8.31.97 • To print out the tape table of contents: tar tv | lp lp_options • To extract a file (get it back off the tape): tar x /users/me/mystuff/needed • To extract a directory: tar x /users/me/mystuff • To restore all the files on the tape (write them back to disk): tar x NOTE tar recreates the directories on the tape if they aren’t already on the system.
Administering a Workgroup How To: Checking the System’s Run Level To find out what run level the system is in (for example if you want to check that you are in single-user mode) enter: who -r The run level is the number in the third field from the right. For example, this output run-level 4 Apr 23 16:37 4 0 S means that the system is in run-level 4.
Administering a Workgroup How To: Figure 9-1 Diagram of a System’s Disk Usage The information for the preceding disk usage diagram (Figure 9-1) was obtained as follows: Step 1.
Administering a Workgroup How To: Step 2. Go to Disks and File Systems/Disk Devices. For each disk this screen shows you: • Hardware path (e.g., 52.6). • Usage (e.g., LVM). • Volume group (e.g., vg00). • The disk’s total capacity. (The usable space will be somewhat less than this, probably about 15% less altogether, depending on the setting of the minfree kernel parameter; see “Setting Up Logical Volumes for File Systems” on page 563.
Administering a Workgroup How To: • The file system the logical volume is mounted to, if any. Again this screen allows you to see how a file system is distributed across LVM disks; for example, the /home directory on the system shown in the diagram is mounted to /dev/vg02/lvol1, which as we have seen occupies all of c0t2d0 and 356 MB of c0t5d0.
Administering a Workgroup How To: NOTE bsize in the resulting output is the configured block size, in bytes, of the file system /work. But in JFS file systems, the configured block size determines only the block size of the direct blocks, typically the first blocks written out to a new file. Indirect blocks, typically those added to a file as it is updated over time, all have a block size of 8 kilobytes. See mkfs_vxfs (1M) for an explanation of each field in the output.
Administering a Workgroup How To: Moving a System This is a cookbook for moving a system from one subnet to another, changing the system’s host name, IP address, and Domain Name Server. NOTE Do steps 1-10 before moving the system. Step 1. Run set_parms: /sbin/set_parms hostname Step 2. Change the system name when prompted. Step 3. Answer “no” to the “reboot?” question. Step 4. Run set_parms again: /sbin/set_parms ip_address Step 5. Change the system IP address when prompted. Step 6.
Administering a Workgroup How To: Popping the Directory Stack You can avoid retyping long path names when moving back and forth between directories by using the hyphen (-) to indicate the last directory you were in; for example:
$ pwd
/home/patrick
$ cd /projects
$ cd -
/home/patrick
Scheduling a cron Job To schedule a job in cron (as root): Step 1. Save old /usr/spool/cron/crontabs/root. Step 2. Edit /usr/spool/cron/crontabs/root.
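A crontab entry consists of five time fields (minute, hour, day of month, month, day of week) followed by the command to run. For example, this hypothetical entry runs fbackup (1M) at 2:00 a.m. every weekday:

0 2 * * 1-5 /usr/sbin/fbackup -f /dev/rmt/0m -i / 2>>/var/adm/fbackup.log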
Administering a Workgroup How To: See “Creating an Automated Backup Schedule” on page 690 for additional information and examples on how to format cron file entries. Step 3. Tell cron to execute the file: crontab /usr/spool/cron/crontabs/root See cron (1M) and crontab (1) for more information. Continuing to Work During a Scheduled Downtime If your file server is down and you export files from that system, those files are inaccessible to you.
Administering a Workgroup Troubleshooting Troubleshooting This section serves as an index to troubleshooting procedures throughout this manual. Table 9-1 Troubleshooting For...
Administering a Workgroup Troubleshooting Table 9-1 Troubleshooting (Continued) For... Terminals See “Troubleshooting Problems with Terminals” on page 258 Tips on Interpreting HP-UX Error Messages The file /usr/include/sys/errno.h contains a list of error returns generated by HP-UX system calls.You can use the grep command to locate the name associated with the HP-UX error number you received.
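For example, to find the name associated with error 13:

grep -w 13 /usr/include/sys/errno.h
#define EACCES 13 /* Permission denied */

(The second line shows typical output; the exact comment text may vary by release.)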
Administering a Workgroup Troubleshooting Step 3. If inetd is not running, start it: /usr/sbin/inetd Step 4. If inetd is running and users still cannot rlogin (or remsh or telnet), the service may be disabled. Check /etc/inetd.conf for the following lines:
telnet stream tcp nowait root /usr/lbin/telnetd telnetd
login  stream tcp nowait root /usr/lbin/rlogind rlogind
shell  stream tcp nowait root /usr/lbin/remshd  remshd
Step 5.
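If you uncomment or add one of these lines, make inetd reread its configuration file:

/usr/sbin/inetd -c

See inetd (1M) for details.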
Administering a Workgroup Adding Software to a Workgroup Adding Software to a Workgroup • “Installing and Managing Software For an Enterprise” on page 893 • “Setting up a Network Host (Building a Depot)” on page 893 Installing and Managing Software For an Enterprise To install and manage software from a central controller on a multivendor network (including PCs), use the product HP OpenView Software Distributor.
Administering a Workgroup Adding Software to a Workgroup Copying Software From a Depot with the SD User Interface To copy software from a depot, start the SD-UX graphical or terminal user interface. Type: /usr/sbin/swinstall or /usr/sbin/swcopy swinstall automatically configures your system to run the software when it is installed; configuration is not done with swcopy. Copying Software From CD-ROM Step 1. Make sure the CD-ROM drive is mounted. You can use SAM or the mount (1M) command to do this. Step 2.
Administering a Workgroup Adding Software to a Workgroup More Examples The first command in the example that follows copies all software (“*”) from the path /release/s700_10.01_gsK/wszx6 at the network source appserver to the target /mnt1/depot. The second command does the same thing except that it copies only the software specified in the file /tmp/langJ. swcopy -s appserver.cup.hp.com:/release/s700_10.
Administering a Workgroup Other Workgroup Management Tools Other Workgroup Management Tools Some of the tools that HP provides are described in “Other Performance Management Tools” on page 736.
Setting Up and Administering an HP-UX NFS Diskless Cluster 10 Setting Up and Administering an HP-UX NFS Diskless Cluster IMPORTANT This section provides information on NFS Diskless, a technology supported on HP-UX 10.0 through 10.20. If all your servers are running 10.30 or later, this information will not be of interest to you; we’ve included it because we recognize that many workgroups are running several different versions of HP-UX. See also “Compatibility Between HP-UX Releases 10.x and 11.
Setting Up and Administering an HP-UX NFS Diskless Cluster Table 10-1 Task List (Continued) Set up the cluster server “Setting Up the Cluster Server” on page 918 Set the policies for a cluster “Setting the Policies for a Cluster” on page 919 Add clients to a cluster “Adding Clients to a Cluster” on page 919 Boot new clients “Booting New Clients” on page 924 Add a local disk to a client “What To Do Next” on page 926 Administer a cluster “Administering Your NFS Diskless Cluster” on page 928 See
Setting Up and Administering an HP-UX NFS Diskless Cluster What Is an NFS Diskless Cluster? What Is an NFS Diskless Cluster? An HP-UX NFS diskless cluster is a network of HP 9000 Series 700 and 800 computers sharing resources, particularly operating systems and file system elements. The underlying technology is the Network File System (NFS) and its protocols.
Setting Up and Administering an HP-UX NFS Diskless Cluster What Is an NFS Diskless Cluster? PostScript form in the file /usr/share/doc/NFSD_Concepts_Admin.ps. If you are unfamiliar with NFS diskless cluster concepts, you should read the white paper before continuing to set up an NFS diskless cluster. Also see the white paper NFS Client/Server Configuration, Topology, and Performance Tuning Guide (supplied on most 10.x systems in the file /usr/share/doc/NFS_Client_Server.
Setting Up and Administering an HP-UX NFS Diskless Cluster What Is an NFS Diskless Cluster? private root A directory on the cluster server that serves as a client system’s root directory (/). This directory contains all the client’s private files and directories and mount points for shared files and directories from a shared root. SAM establishes private roots in the /export/private_roots directory in the form /export/private_roots/clientname.
Setting Up and Administering an HP-UX NFS Diskless Cluster Planning Your Cluster Policies Planning Your Cluster Policies Before you actually create your cluster and begin to add clients, you must be prepared to set three sharing policies for your cluster. These policies will determine much of the behavior of your cluster, your users’ view of it, and the relative ease with which you can administer it.
Setting Up and Administering an HP-UX NFS Diskless Cluster Planning Your Cluster Policies Policies for the Location of User and Group Data Shared SAM configures the cluster such that /etc/passwd and /etc/group exist only on the cluster server, but are made available to all clients through the use of NFS mounts and symbolic links. Sharing these files allows any user to log onto any system in the cluster using the same user ID and password.
Setting Up and Administering an HP-UX NFS Diskless Cluster Planning Your Cluster Policies Policies for Electronic Mail Shared Every user mailbox is accessible from every cluster member, and users can send and receive mail while logged into any cluster member. All outgoing mail has the appearance of having originated from the cluster server. All maintenance of the mail system, such as the mail aliases file, the reverse aliases file, and the sendmail configuration file, is done on the server.
Setting Up and Administering an HP-UX NFS Diskless Cluster Setting Up NFS Cluster Hardware Setting Up NFS Cluster Hardware Peripherals A cluster-wide resource, such as a printer, is generally one that must be configured as local to one cluster member and as remote on the other members. When a cluster-wide resource is defined or modified, SAM performs the appropriate tasks on each member of the cluster to achieve the required results.
Setting Up and Administering an HP-UX NFS Diskless Cluster Setting Up NFS Cluster Hardware If some clients have local file systems that are not accessible from the server, backups need to be done from the clients. The clients can do the backup over the network to the backup device on the server. Printers and Plotters SAM allows you to add printers and plotters to the cluster server or any cluster client.
Setting Up and Administering an HP-UX NFS Diskless Cluster Setting Up NFS Cluster Hardware A local file system can hold a user’s home directory. Using the standard naming conventions, such a file system would be mounted in the client’s root file system at /home/username. File access on a local disk is faster than access over the network. • Swap files By default, clients use swap files in their /paging directory in their private root on the cluster server’s disk.
Setting Up and Administering an HP-UX NFS Diskless Cluster Obtaining Information About Your Server and Client Obtaining Information About Your Server and Client To set up and administer an NFS diskless cluster, you need to obtain information about the computers that will be in the cluster.
Setting Up and Administering an HP-UX NFS Diskless Cluster Obtaining Information About Your Server and Client Getting the Hardware (Station) Address When requested to provide boot service, the NFS cluster server identifies a supported client by the client’s hardware address. Before you can add a client to the cluster, you must get its built-in LAN interface hardware address.
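On a running client you can display the address with lanscan (1M); for example (a sketch; the exact column layout varies by release, and the address shown is hypothetical):

$ /usr/sbin/lanscan
Hardware Station        Crd Hardware Net-Interface NM  MAC       HP DLPI Mjr
Path     Address        In# State    NameUnit State ID  Type      Support Num
2/0/2    0x080009123ABC 0   UP       lan0 UP       4   ETHER     Yes     52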
Setting Up and Administering an HP-UX NFS Diskless Cluster Obtaining Information About Your Server and Client The output will have one entry for each LAN card in the computer. If the computer does not have additional LAN cards (that is, if it has only the built-in LAN card), you will only see the first entry. The LAN hardware address for your built-in LAN interface is the value in the Station Address column shown in the example above.
Setting Up and Administering an HP-UX NFS Diskless Cluster Installing Diskless Software Installing Diskless Software Before a standalone system can be configured as a cluster server, the diskless software product must be installed in the system root as part of the operating system. Usually it is installed as part of the operating system bundle.
Setting Up and Administering an HP-UX NFS Diskless Cluster Installing Diskless Software Step 5. From the “Actions” menu of the “Software Selection” screen, select “Mark For Install”. Step 6. Again from the “Actions” menu, select “Install (analysis)”. Step 7. Proceed with the installation analysis and complete the installation. To install the diskless product in the Series 700 alternate root of the cluster server, include the product when you execute swinstall to install the alternate root.
Setting Up and Administering an HP-UX NFS Diskless Cluster Installing a Series 700 Client on a Series 800 Cluster Server Installing a Series 700 Client on a Series 800 Cluster Server Both Series 700 and Series 800 systems can be used as cluster servers. Only Series 700 systems can be used as cluster clients. When a Series 700 client is installed on a Series 700 server, the client can use the same system software as the server.
Setting Up and Administering an HP-UX NFS Diskless Cluster Installing a Series 700 Client on a Series 800 Cluster Server During the installation, you will have to identify the software source (tape, CD-ROM, or a network source) and select the particular software you want to have installed. For this alternate root installation, install the Series 700 HP-UX run-time environment bundle for the appropriate language. For example, you might install the “English HP-UX Run-time Environment” bundle.
Setting Up and Administering an HP-UX NFS Diskless Cluster Configuring a Relay Agent Configuring a Relay Agent It is likely that most or all of your NFS cluster’s clients are attached to the same subnetwork as your cluster server. If not, a gateway (a device such as a router, or a computer) can be used to connect two or more networks. Once a gateway is attached, the server can boot clients that are on subnetworks that the server is not directly attached to.
Setting Up and Administering an HP-UX NFS Diskless Cluster Configuring a Relay Agent To configure the relay agent, follow these steps: NOTE You must make the changes on the relay system manually (that is, without using SAM). Later, when you use SAM to configure a gateway client, use the IP address of the relay system in the “Default Route” field of the “Define Clients” screen. Step 1. In the file /etc/inetd.
Setting Up and Administering an HP-UX NFS Diskless Cluster Configuring a Relay Agent START_RBOOTD=1 Step 4. If it is not running already, start the rbootd daemon (see rbootd (1M)). The rbootd daemon provides NFS diskless cluster support for Series 700 clients with older boot ROMs designed for the “DUX” clustered environment without requiring boot ROM modifications (SAM automatically configures rbootd on the cluster server).
Setting Up and Administering an HP-UX NFS Diskless Cluster Setting Up the Cluster Server Setting Up the Cluster Server A cluster server is defined as such when the first client is installed. At that time, SAM ensures that the necessary subsystems are configured on the server system where SAM is running. These subsystems include the diskless product software and, if the server is a Series 800, an alternate Series 700 root.
Setting Up and Administering an HP-UX NFS Diskless Cluster Setting Up the Cluster Server Setting the Policies for a Cluster To set the policies for the cluster: Step 1. Run SAM on the cluster server: sam Step 2. From the “SAM Areas” screen, select “Clusters”. Step 3. From the “SAM Areas:Clusters” screen, select “NFS Cluster Configuration”. Step 4. From the “Actions” menu of the “NFS Cluster Configuration” screen, choose “Set Cluster Policies”. Step 5.
Setting Up and Administering an HP-UX NFS Diskless Cluster Setting Up the Cluster Server Step 3. From the “SAM Areas:Clusters” screen, select “NFS Cluster Configuration”. Step 4. From the “Actions” menu of the “NFS Cluster Configuration” screen, choose “Define Clients”. Step 5. Fill in the following fields on the “Define Clients” screen: NOTE As you supply information, SAM will automatically fill in fields with complete or partial default information.
Setting Up and Administering an HP-UX NFS Diskless Cluster Setting Up the Cluster Server This is the address you obtained in “Getting the Hardware (Station) Address” on page 909. SAM provides a portion of this address because all LAN cards supplied by HP have an address that begins with 080009. You will have to type in the last six hexadecimal digits. Hexadecimal letters can be upper or lower case.
Setting Up and Administering an HP-UX NFS Diskless Cluster Setting Up the Cluster Server Step 7. When you have defined all your clients, select “OK”. Step 8. From the “Actions” menu of the “NFS Cluster Configuration” screen, choose “Install Defined Clients”. Step 9. On the “Select Clients to Install” screen, edit the list of clients to be installed.
Setting Up and Administering an HP-UX NFS Diskless Cluster Setting Up the Cluster Server c. If you have not set the cluster policies yet, the “Set Cluster Policies” screen will be displayed. • NOTE Set the policies you decided upon when planning the cluster. (See “Planning Your Cluster Policies” on page 902 for details.) Once you have installed a client, you cannot change the cluster policies unless you delete all the clients first. • After you have set the policies, select “OK”. Step 11.
Setting Up and Administering an HP-UX NFS Diskless Cluster Booting New Clients Booting New Clients After you have installed a client to your cluster, boot it from the server. If you have installed several clients, you can boot them singly or all at once. Further details on booting are in “Booting Systems” on page 464. For each client, turn on (or cycle) the power on the Series 700 workstation and interact with its Boot Console User Interface (in some models it is called the Boot Administration Utility).
Setting Up and Administering an HP-UX NFS Diskless Cluster Booting New Clients NOTE Some Series 700 workstations can use either the hardware address or the IP address of the server. Check your Owner’s Guide. 4. Boot the client. Enter: boot primary NOTE The initial boot of a cluster client takes much longer than subsequent boots (as much as 30 minutes or more). During the initial boot, system configuration files, device files, and private directories are created and put in place.
Setting Up and Administering an HP-UX NFS Diskless Cluster What To Do Next What To Do Next You have now created (or expanded) your cluster and booted its clients. Tasks you might need to do now include: • Add local disk drives to clients. Local disk drives (drives attached to a client rather than to the server) can have any of the following uses: — Local swap. This means that the client swaps to its own local disk, rather than to the server’s disk space. — Shared or private file space.
Setting Up and Administering an HP-UX NFS Diskless Cluster What To Do Next If you need to add a local disk to a new cluster client and the disk is not already attached to or integrated into your computer, attach it by following the instructions provided with the hardware. To configure the disk, refer to Configuring HP-UX for Peripherals. For a quick reference, see “Adding a Disk” on page 861 If you want to put a file system on the disk, see “Managing File Systems” on page 602.
Setting Up and Administering an HP-UX NFS Diskless Cluster Administering Your NFS Diskless Cluster Administering Your NFS Diskless Cluster If you have chosen “shared” for the cluster policies and you manage all printers/plotters and file systems as cluster-wide resources, your HP-UX cluster will look and act much like a single, multiuser computer. For the end-user there is little difference between HP-UX running on a standalone system and any member (server or client) of such a cluster.
Setting Up and Administering an HP-UX NFS Diskless Cluster Administering Your NFS Diskless Cluster Table 10-3 Where to Perform Tasks (Continued) Task Where to Perform the Task Shutdown or reboot a cluster member Use the shutdown (1M) or reboot (1M) command on the cluster member.
Setting Up and Administering an HP-UX NFS Diskless Cluster Administering Your NFS Diskless Cluster Table 10-3 Where to Perform Tasks (Continued) Task Where to Perform the Task Add remote printer Any cluster member b LP spooler administration (enable, disable, accept, reject, and so on) of printer that is not a cluster-wide resource On the system where the change is to be made LP spooler administration of printer that is a cluster-wide resource Any cluster member Add, modify, remove user accounts: S
Setting Up and Administering an HP-UX NFS Diskless Cluster Administering Your NFS Diskless Cluster c. If private policies are used, a user account must be added, modified, or removed from each member of the cluster where the user account exists. d. If the cluster server is an NTP client, changing the date and time must be done on the NTP server.
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers NFS Diskless Questions and Answers This section answers some common questions about administering NFS Diskless. It is a slightly condensed version of the “Questions and Answers” section of the NFS Diskless Concepts and Administration White Paper, which is supplied in its entirety as /usr/share/doc/NFSD_Concepts_Admin.ps on most 10.x systems.
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers 1. Before the client is added. When you add a client via SAM, SAM creates two directories to hold the client’s private files: • the private root /export/private_roots/client • the boot file or kernel directory /export/tftpboot/client. You can create these directories “by hand” (not using SAM), before adding a client. The directories must be empty when you use SAM to add the client.
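For example, for a hypothetical client named myclient:

mkdir -p /export/private_roots/myclient
mkdir -p /export/tftpboot/myclient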
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers As a result, NFS mount points are established for /usr, /sbin, and the /opt directories on the client. If a subdirectory of a sharing point (a directory specified as a share link) is a separate file system, the file-sharing model breaks down because NFS does not propagate the mount point.
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers NOTE You cannot configure swap for NFS Diskless clients into their kernels; you must do it either by running swapon from the command-line, or through entries in the client’s /etc/fstab. Apart from this limitation, a client has the same choices for swap as the server. If you want the client to swap to some other destination than /paging, remove the swapfs entry for / in the client’s /etc/fstab.
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers The HP proprietary “DUX” technology, which NFS Diskless replaces, required a configuration process on the server which converted key system files (hp-ux, /etc/checklist, and others) into context-dependent files (CDFs) and modified the server’s kernel to enable diskless functions. NFS Diskless does not require any modification of the server’s file system.
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers As the system administrator, you can create other shared roots with any name you choose, although HP recommends certain name elements: architecture, application vs. OS, release level. You can create these directories “by hand”, and they are also created by swinstall when you perform an alternate root install. An alternate root install populates a shared root with sharing points (i.e. products).
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers Use the exportfs command to see what is currently exported, or look at the /etc/exports file directly. If there is an error in the file, nothing may be exported. Question: How can I tell what kind of boot ROM my system has? Answer: You shouldn’t need to know because both bootp and rbootd services are started on cluster servers.
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers Question: My network becomes congested when booting many clients simultaneously. What can I do? Answer: When many diskless clients boot from one boot server simultaneously, the server may be too busy to respond to each client’s boot request quickly. The default timeout values specified in each client’s /etc/fstab file take into account large numbers of clients booting simultaneously.
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers If the corresponding .fs file is not present, the secondary loader will default to loading the file called vmunix.fs. If this file is not compatible with your kernel, unexpected behavior (possibly bad) may result. Single Point Administration Policies Question: I selected the “Shared Home Directories” policy, but my users’ directories under /users and /users2 did not appear on the clients.
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers ensures that the mounts occur in the correct order. (Local mounts occur after boot NFS mounts but before other NFS and automounter mounts). Question: How do I configure local swap on clients? Answer: Run SAM on each client that will have local swap and use the “Disk Devices” subarea under “Disks and Filesystems” to add a local disk for swap.
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers Answer: The best approach is to run SAM on a system where the resource is unconfigured, select the resource, and select “Add Unconfigured”. This gives you the option of adding the cluster-wide resource to just the local system, or to all systems in the cluster where the resource is not configured.
Setting Up and Administering an HP-UX NFS Diskless Cluster NFS Diskless Questions and Answers Answer: This is a case where SAM is not going to be much more help than if you had a large collection of standalone systems and you wanted some of them to have access to a resource. SAM helps you manage all of the systems in a cluster consistently, with some flexibility to allow for exceptions, but does not help you manage subsets of a cluster.
Users and Groups

Question: What if I want to use NIS to manage user/group data?

Answer: SAM cluster configuration provides one method of sharing user/group data, home directories and mailboxes among all of the members of a cluster. There are certainly other methods of accomplishing the same goals.
• If you need to unlink the file, perform the operation on /etc/share/passwd, not /etc/passwd (and /etc/share/group rather than /etc/group). For example:

cp /etc/share/passwd /etc/share/passwd.new
vi /etc/share/passwd.new
mv /etc/share/passwd.new /etc/share/passwd

• Use code such as the fragment that follows to modify the password file programmatically.
/* Assumes earlier steps (not shown) locked the password file with
 * lckpwdf() and opened tf, a FILE * on the temporary file temp_pwd;
 * passwd_file names the real file, e.g. /etc/share/passwd. */
found = 0;
setpwent();
while ((pwd = getpwent()) != NULL) {
    if (strcmp(pwd->pw_name, login_name) == 0) {
        found = 1;
        strcpy(pwd->pw_dir, new_directory);  /* update the home directory */
    }
    putpwent(pwd, tf);  /* copy every entry (modified or not) to the temp file */
}
endpwent();
fsync(fileno(tf));  /* force the temp file contents to disk */
fclose(tf);
if (!found)
    ERROR
/* replace existing passwd file with modified file */
if (rename(temp_pwd, passwd_file) < 0)
    ERROR
/* unlock password file */
ulckpwdf();
A Using High Availability Strategies

High availability is the term used to describe computer systems that have been configured to minimize the percentage of time they are down or otherwise unavailable and, as a result, to provide the greatest degree of usefulness. High system availability is achieved by minimizing the possibility that a hardware failure or a software defect will result in a loss of the use of the system or in a loss of its data.
Using Software Mirroring as a Disk Protection Strategy

Data redundancy is necessary to prevent a single disk failure from causing the system to go down until the problem is located and corrected. There are two methods of providing data redundancy: software mirroring and hardware mirroring. Each represents RAID Level 1.
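On HP-UX, software mirroring is typically implemented through LVM, assuming the optional MirrorDisk/UX product is installed. A minimal sketch of mirroring an existing logical volume onto a second disk (device and volume names are illustrative):

pvcreate /dev/rdsk/c2t6d0             # initialize the second disk for LVM use
vgextend /dev/vg00 /dev/dsk/c2t6d0    # add the disk to the volume group
lvextend -m 1 /dev/vg00/lvol5         # create one mirror copy of the logical volume
lvdisplay -v /dev/vg00/lvol5          # confirm the mirrored extents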
Using Disk Arrays

A disk array consists of multiple disk drives under the command of an array controller. The disk array incorporates features that differentiate it from traditional disk storage devices. Most types of disk arrays provide one of two possible options for protecting data in the event of a disk failure. This becomes more and more important as the number of disks on a system increases, since the chance of a disk failure also increases.
Disk Arrays Using RAID Data Protection Strategies

RAID stands for Redundant Arrays of Independent Disks. Various configurations, or RAID levels, are available. We will mention several.

Mirroring (RAID Level 1)
In a RAID 1 configuration, all data is duplicated on two or more disks. In hardware mirroring, each disk has a “twin,” a backup disk containing an exact copy of its data.
Recommended Uses and Performance Considerations
Effective for high-performance I/O environments using noncritical data. Data striping can also prevent “hot spots,” which are caused by constant hits on a single drive; a specific drive may be accessed so often that it slows down I/O traffic or shortens the life of the drive.

RAID 3
This type of array uses a separate data protection disk to store encoded data.
RAID 5
With this RAID level, both data and encoded data protection information are spread across all the drives in the array. Level 5 is designed to provide a high transfer rate (a one-way transmission of data) and a moderate I/O rate (a two-way transmission of data). In RAID 5 technology, the hardware reads and writes parity information to each module in the array.
What is AutoRAID?

HP offers a disk array with a patented technology named AutoRAID. AutoRAID hardware and software monitor the use of data on a system and determine the best RAID level for that specific system, providing the best possible performance. With a traditional array, the process of configuring the system for optimum performance is time-consuming and error-prone.
HP SureStore E Disk Array

The HP SureStore E Disk Array XP256 and XP512 provide high-capacity, high-speed mass storage with continuous data availability, ease of service, scalability, and connectivity. They are designed to handle very large databases as well as data warehousing and data mining applications, since they have huge data capacities as measured in terabytes. They are ideal for clustered configurations of HP-UX servers.
Using Hot Spared Disks

A hot spared disk drive is a spare disk, containing no mirrored or parity data, that is kept online in a disk array waiting to be swapped in for a failed disk. Use a hot spare if, in RAID 5, RAID 1/0, or RAID 1 groups, high availability is so important that you want to regain data redundancy as soon as possible after a disk module fails.
Using High Availability Storage Systems (HASS)

High Availability Storage Systems (HASS) provide two internal SCSI buses, each with its own connectors, power cords, power supplies, and fans. This hardware redundancy, when combined with software mirroring, can prevent most single-point-of-failure problems. HASS do not provide any RAID support on their own.

Pros and Cons of HASS
There are many advantages to systems protected by HASS.
Using Serviceguard

A Serviceguard cluster is a networked grouping of HP 9000 servers (nodes) having sufficient redundancy of software and hardware that a single point of failure will not significantly disrupt service. Applications and services are grouped together in packages.
Serviceguard is an excellent choice for high availability data protection. It may be used in conjunction with other high availability products.

HP References
Managing Serviceguard, http://www.hp.com/go/enterprise

Serviceguard Features

Serviceguard Automatic Rotating Standby
Using a feature called automatic rotating standby, you can configure a cluster that lets you use one node as a substitute in the event a failure occurs.
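To see how packages are distributed across nodes, for example after a failover to the standby node, Serviceguard’s cmviewcl command reports cluster, node, and package status; a sketch:

cmviewcl -v    # detailed status of the cluster, its nodes, and its packages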
corresponding to each tape or library robotic mechanism are created and written to an ATS ASCII configuration file. ATS uses this file to keep track of the devices configured in the cluster.
Other High Availability Products and Features

High Availability Monitors
High availability monitors allow you to check up on your system’s resources and to be informed if problems develop. They can be used in conjunction with Serviceguard. Monitors are available for disk resources, cluster resources, network interfaces, system resources, and database resources.
• Between one system that is attached locally to an array frame and another remote node that is attached locally to another array frame.
• Among local nodes that are attached to the same array.
from a single management station. High availability products, such as Serviceguard and HA Monitors, reside physically on the HyperPlex cluster nodes. HP ServiceControl organizes nodes into HyperPlex clusters.
B Configuring HP-UX Bastille: Interview Bastille Configuration Questions and Explanations for HP-UX

HP-UX Bastille uses a series of questions, extracted from the file /etc/sec_mgmt/bastille/Questions.txt, to prepare a configuration file, as described in “HP-UX Bastille” on page 815. This appendix contains the questions and explanations that are relevant to HP-UX, in the order that they are presented.
Some questions have two levels of explanatory text, which you can adjust with the Explain Less/More button. Current support information for HP-UX Bastille is provided on the HP-UX Bastille product page at http://software.hp.com. HP-UX Bastille has the potential to make changes which will affect the functionality of other software.
TODO list so that you can apply the necessary patches. (MANUAL ACTION REQUIRED TO COMPLETE THIS CONFIGURATION, see TODO list for details)

Patches
Q: Should Bastille set up a cron job to run Security Patch Check? [Y]
Bastille can configure Security Patch Check to run on a daily basis using the cron scheduling daemon. Keeping a system secure requires constant vigilance.
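A sketch of the kind of crontab entry involved, assuming Security Patch Check is installed under /opt/sec_mgmt/spc (the path, options, and schedule are illustrative; check security_patch_check(1M) on your system):

# Run Security Patch Check nightly at 02:00 and mail the report to root
0 2 * * * /opt/sec_mgmt/spc/bin/security_patch_check -r 2>&1 | mailx -s "patch check" root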
If this machine is behind a proxy-type firewall, Security Patch Check needs to be configured to traverse that firewall. For example, the proxy might be specified as "http://myproxy.mynet.com:8088". If this machine can ftp directly to the Internet without a proxy, answer no to this question.

Patches
Q: Please enter the URL for the web proxy.
- “cat” directories such as those in /usr/share/man are used by the “man” command to write pre-processed man pages. Eliminating the world-writeable bit will cause a degradation in performance because the man page will have to be reformatted every time it is accessed.
- Some directories may have incorrect owners and/or groups.
The ownerships and permissions of the files and subdirectories in that directory determine how those files and subdirectories can be modified. You can tell that the “sticky” bit is set if there is a “t” in the last permissions column (e.g., drwxrwxrwt). Left unedited, the created script will set the “sticky” bit on any world-writeable directory.
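A sketch of what such a script amounts to, expressed as one-off commands (the scope and options are illustrative; review the first command’s output before running the second):

# List world-writeable directories that lack the sticky bit ...
find / -xdev -type d -perm -0002 ! -perm -1000 -exec ls -ld {} \;
# ... then set the sticky bit on them
find / -xdev -type d -perm -0002 ! -perm -1000 -exec chmod +t {} \;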
For HP-UX 11.20 and prior, the system will be converted to trusted mode to hide the encrypted passwords. In addition, a trusted system provides other useful security features such as auditing and login passwords with lengths greater than 8 characters. Also, more options are available, such as password length requirements and password aging.
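The same conversion can be performed by hand with the tsconvert utility (assuming the usual /usr/lbin location on these releases); a sketch:

/usr/lbin/tsconvert       # convert the system to trusted mode
/usr/lbin/tsconvert -r    # revert to standard mode, if that ever becomes necessary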
without typing the password. However, if an attacker has physical access to the machine and enough time, there is very little you can do to prevent unauthorized access. This may be more problematic in the case when an authorized administrator messes up the machine and can’t remember the password.

Note: For HP-UX 11.22 and prior, this requires conversion to trusted mode.
they may end up writing the password down (a very bad security practice). Thus, it is important to set password policies which conform to your overall security policies but do not unduly burden your users. On HP-UX 11.11 and prior, this will ensure that the system is converted to trusted mode, enable password aging, and allow you to change some basic defaults.
user. A user is not allowed to re-use a stored, previously used password. This will cause the system to be converted to trusted mode.

PASSWORD_HISTORY_DEPTH=N
A new password is checked against only the N most recently used passwords for a particular user. A password history depth of 2 prevents users from alternating between two passwords.
less than the PASSWORD_MAXDAYS! However, if there is ever a need to temporarily give someone your password (there are generally more secure alternatives), this option could prevent you from changing the password again immediately afterward.

NOTE: If your system is not converted to trusted mode, this value will be rounded up to weeks for current users.
NOTE: This is applicable only for non-root users and only for services which use the “login” binary for authentication.

Account Security
Q: Enter the maximum number of logins per user [1]
The NUMBER_OF_LOGINS_ALLOWED parameter controls the number of simultaneous logins allowed per user. This is applicable only for non-root users.
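On releases that support it, these tunables are recorded in /etc/default/security; a sketch of the kind of entries involved (the values are illustrative, and the exact set of supported parameters varies by release):

# /etc/default/security (excerpt)
MIN_PASSWORD_LENGTH=8
PASSWORD_HISTORY_DEPTH=2
NUMBER_OF_LOGINS_ALLOWED=1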
parameter does not apply to a super-user account, and is applicable only when the “-” option is not used along with the su command.

Account Security
Q: Should Bastille disallow root logins from network tty’s? [N]
Bastille can restrict root from logging into a tty over the network. This will force administrators to log in first as a non-root user, then su to become root.
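This restriction is conventionally expressed through /etc/securetty; a sketch, where the single entry limits direct root logins to the system console:

# /etc/securetty -- terminals from which root may log in directly
console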
Ftp is another problematic protocol. First, it is a clear-text protocol, like telnet; this allows an attacker to eavesdrop on sessions and steal passwords, and also to take over an FTP session using a clear-text-takeover tool like Hunt or Ettercap. Second, it can make effective firewalling difficult due to the way FTP requires many ports to stay open.
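Disabling these services by hand amounts to commenting out their entries in /etc/inetd.conf and telling inetd to reread its configuration; a sketch (the daemon paths shown are the usual HP-UX ones, but check your own file):

# In /etc/inetd.conf, comment out the telnet and ftp entries:
#telnet stream tcp nowait root /usr/lbin/telnetd telnetd
#ftp    stream tcp nowait root /usr/lbin/ftpd    ftpd -l

inetd -c    # have inetd reconfigure itself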
fingerd is the server for the RFC 742 Name/Finger protocol. It provides a network interface to finger, which gives a status report of users currently logged in on the system or a detailed report about a specific user (see finger(1)). We recommend disabling the service: fingerd provides local system user information to remote sources, which can be useful to someone attempting to break into your system.
undefined data, preferably data in some recognizable pattern (RFC 864)
echo: Simply returns the packets sent to it. (RFC 862)

Secure Inetd
Q: Should Bastille ensure that inetd’s time service does not run on this system? [Y]
The time service that is built into inetd produces machine-readable time, in seconds since midnight on 1 January 1900 (RFC 868).
HP SharedX Receiver Service is used to receive shared windows from another machine in X without explicitly performing any xhost command. This service is required for MPower remote windows; if you use MPower, leave this service running on your system. The SharedX Receiver Service is an automated wrapper around the xhost command; see xhost(1).
Secure Inetd
Q: Should Bastille tell you to disable unneeded inetd services in the TODO list? [Y]
In addition to the previously mentioned services, one should also disable other unneeded inetd services. The aim is to leave running only those services that are critical to the operation of this machine. This is an example of the frequent tradeoff between security and functionality.
One alternative is CIFS/9000 (Samba). It is still a clear-text, shared file system and therefore still raises security concerns, but unlike NFS, CIFS/9000 at least requires the user to authenticate (prove they are who they say they are) before reading or writing to files.
computers. When you use NIS, the encrypted password is transmitted in clear-text and made available to anyone on the network, compromising this defense measure. Because of this, the HP-UX trusted mode and password shadowing security features that Bastille can enable are incompatible with NIS. If you choose to convert to trusted mode or shadow passwords, you should also disable NIS.
access control lists, and (3) block SNMP traffic at your firewall. Otherwise it makes sense to disable the SNMP daemons. The average home user has no reason to run these daemons, and depending on their default configuration, they could be a major security risk.
The rbootd daemon is used for a protocol called RMP, which is a predecessor to the "bootp" protocol (which serves DHCP). Basically, unless you are using this machine to serve dynamic IP addresses to very old HP-UX systems (prior to 10.0, or older than s712's), you have no reason to have this running.
Sendmail
Q: Would you like to disable the VRFY and EXPN sendmail commands? [Y]
An attacker can use sendmail’s vrfy (verify recipient existence) and expn (expand recipient alias/list contents) commands to learn more about accounts on the system.
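In stock sendmail this is controlled by the PrivacyOptions setting in sendmail.cf; a sketch (the option names are standard sendmail, but verify them against your own sendmail.cf):

# /etc/mail/sendmail.cf -- refuse VRFY and EXPN
O PrivacyOptions=novrfy,noexpn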
If you do not plan to use this system as a web server, then it is recommended that you deactivate your Apache 2.x web server. Programs that require an Apache server installed but do not bind to port 80 will still be able to start their own instances of the web server. If you do not plan to use your Apache 2.x server immediately, then you should deactivate it until you need it.
a WU-FTPD server from the following users: root, daemon, bin, sys, adm, uucp, lp, nuucp, hpdb, and guest. If you have a compelling reason to allow these users ftp access, then answer no to this question. Use this as a secondary measure if you have already chosen to deactivate the ftp server.
ip_forwarding            2  =>  0
ip_ire_gw_probe          1  =>  0
ip_pmtu_strategy         2  =>  1
ip_send_redirects        1  =>  0
ip_send_source_quench    1  =>  0
tcp_conn_request_max    20  =>  4096
tcp_syn_rcvd_max       500  =>  1000

For more information on each of these parameters, run ndd -h.

Note: If you already have some non-default settings in effect, you will need to merge the settings manually, and a reminder will be added to your TODO list.
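To make such settings persist across reboots, they are normally recorded in /etc/rc.config.d/nddconf, whose indexed-array format is shown in this sketch for one of the parameters above:

# /etc/rc.config.d/nddconf (excerpt) -- disable IP forwarding
TRANSPORT_NAME[0]=ip
NDD_NAME[0]=ip_forwarding
NDD_VALUE[0]=0

ndd -set /dev/ip ip_forwarding 0    # apply immediately, without a reboot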
unable to reach the internet from this machine, you should answer "no."

If you have suggestions for improvements, new questions, code, and/or tests, you can discuss these on the Bastille Linux discussion list. You can subscribe at http://lists.sourceforge.net/mailman/listinfo/bastille-linux-discuss

You can also provide feedback concerning the HP-UX version of Bastille directly to bastille-feedback@fc.hp.com.
you can add custom rules which better fit the specific needs of your environment. If you modify the custom file, you should rerun the Bastille backend (bastille -b) to apply the new rule-set.

WARNING: Changing this file can either increase or decrease the security of your system. After applying this custom configuration, be sure to double-check the active rule-set and your ipf.
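As a sketch of what custom rules might look like, together with a check of the active rule-set (the rules and the ssh port shown are illustrative, not a recommended policy):

# Example custom rules: allow inbound ssh, drop all other inbound traffic
pass in quick proto tcp from any to any port = 22 keep state
block in all

ipfstat -io    # verify which inbound/outbound rules are actually loaded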
HP-UX Host Intrusion Detection System (HIDS) enhances host-level security with near real-time automatic monitoring of each configured host for signs of potentially damaging intrusions.
Answer YES if you are NOT running the HP-UX Host HIDS GUI on this host. Also answer YES if you are running the HP-UX Host HIDS GUI on this host, and it only manages one LOCAL HIDS agent running on this host (i.e., you are not managing any HIDS agents on any remote hosts using this GUI). Answer NO if you are running an HP-UX Host HIDS GUI on this host AND you are managing some remote HIDS agents.
DNS query connections should only be allowed on DNS servers. If this machine is a DNS server for other machines, then you should answer "No" to this question. Otherwise, you should block DNS queries by answering "Yes".

IPFilter
Q: Do you want to BLOCK incoming DNS zone transfers with IPFilter? [Y]
DNS zone transfer connections should only be allowed on master DNS servers.
Index Symbols $HOME/.rhosts file, 386 ,.. password, 247 . directory in PATH, 779 .cshrc, 270 .cshrc file, 772 .kshrc file, 772 .login, 270 .login file, 772 .netrc file, 772 .profile, 270 .profile file, 772 .rhosts file, 772 /.secure/etc/*, 793 /etc directory, 784 /etc/d_passwd, 751 /etc/default/fs, 605 /etc/dialups, 751 /etc/exports, 783, 786 and nfsd, 396 /etc/exports network file, 784 /etc/fresolv.
Index A abnormal system shutdowns, 531 abort boot, 881 accept, 435, 440, 704 access device-based access, 808 password, 806 restricting network, 785 terminal control, 806 time-based access, 806, 807 Access Control List, see ACL access control lists (ACL), 251 accessing multiple systems, 388 ACL, 753 HFS, 754 /var/mail/* files, 757 acl description, 770 acltostr() function, 756, 770 ar command, 757 chacl command, 756, 770 chmod command, 756 chmod() system call, 756 chownacl() function, 756, 770 commands, 756 c
Index adding logical volume with mirroring SAM, 863 adding network printer, 439 adding PC/NT systems workgroup, 413 adding peripherals, 255 adding printer commands, 434 adding remote printer, 436 adding user to several systems, 389 adding users workgroup, 388 address hardware (station), 909 Internet, 908, 920, 921 adm subsystem, 750 administering a workgroup, 857 administrative domain, 783 aid how set, 775 aid (audit ID), 775 all system self test, 503, 510 ALT.
Index overriding, 469, 473, 489 automounter configuring (SAM), 399 autosearch flag, 489 auxiliary audit log file, 793, 798 B backup JFS snapshot file system, 694 tar quick reference, 880 tar, scheduling, 888 backup devices in an NFS cluster, 905 backup media security of, 777 backups automating, 690 determining how often, 682 determining which data, 681 DLT tape, 687 fbackup, 686 full, 682 HP Omniback II, 124 included files, 682 incremental, 682 index files, 686 initial backup, 270 JFS snapshot file system,
Index HP 9000 Systems, 486 HP Integrity Servers, 465 LVM maintenance mode, 486, 500 new clients, 924 primary boot path, 496 root logical volume role in, 578 single-user mode, 483, 499, 527 SpeedyBoot, 501 boottest, 507 breach of security, 780 build environment, 285 building a depot, 893 C C shell, 248 environment variable, 143, 257 login files, 270 cancel, 706, 707 catman, 265 CD ROM, 557 CDE, 140 CD-ROM copying software, 894 CD-ROM File System (CDFS), 85 CEC system self test, 504 cfagent, 147 cfengine, 146
Index dump, 757 ed, 757 efi_cp, 481 fbackup, 757 find, 757 frecover, 757 fsck, 757 ftio, 757 getaccess, 756 getacl, 770 getty, 261 init, 258, 261 kermit, 752 kill, 261 lifcp, 498 ll, 262 lock, 744 login, 803 ls, 757 lsacl, 756, 770 lsautofl, 498 lssf, 262 lvdisplay, 482 mailx, 757 map, 472, 477, 484 mkboot, 498 mkfs, 757 ncheck, 757 pack, 757 passwd, 750 ps, 259 rcp, 785 rcs, 757 reboot, 500 remsh, 785 restore, 757 savecrash, 539, 553 scss, 757 set (shell command), 264 set_parms, 513 setacl, 770 setboot, 47
Index consolidated logs securing, 234 viewing, 237 Consolidation, log, 37 contiguous allocation and logical volume size, 564 defined, 562 for dump, 673 converting dump formats, 553, 554 copying software from CD-ROM, 894 copying software from depot, 894 copying software from tape, 894 copyutil, 699 core dump, 671 corrupt files, indications of, 614 cp command, 757 cpacl() function, 756, 770 cpio, 675 cpio command, 757 cpset command, 756 crash dump processing, 548 operator override, 549 post-recovery actions,
Index diagramming system’s disks, 882 disk striping, 590 I/O interfaces, 567 load, measuring, 729 Logical Volume Manager (LVM), 79 LVM versus whole disk, 81 management tools, 79 managing, 556 mirroring, 82 moving, 582 performance, 726 reconfiguring, 582 striping, 82 vxvm, 80, 556, 578 whole disk access, 81 disk arrays, 557 disk drive cluster client restrictions, 899 distributing in a cluster, 906 local, 899 setting up, 905 disk interface types, 559 disk management strategy, 393 disk partition direct access,
Index uncompressed, 553 dumps/save cycle, 532 dynamic and static directories, 62 dynamic tunable kernel parameters, 282, 736 E early_cpu system self test, 503 ed command, 757 edquota -p option, 622 -t option, 623 effective group ID (egid), 774 effective user ID (euid), 774 EFI determining EFI disk partition, 482 full screen editor, 479 EFI Boot Manager, 469 setting boot paths, 475 setting the autoboot timeout, 470 EFI file system copying files from, 481 EFI shell changing autoexecute file, 476 configuring s
Index examples, 686 included files, 682 NFS mount points, 697 trusted backup, 777 fbackup command, 757 fcntl, 777 fcpacl() function, 756 fence priority, 705 fgetacl() system call, 756, 770 file, 617 .kshrc, 772 .login, 772 .netrc, 772 .profile, 772 .rhosts, 772 /etc/exports, 784 /etc/fstab, 526, 532 /etc/group, 804 /etc/hosts, 784 /etc/hosts.equiv, 784 /etc/inetd.
Index ownership, 247 sharing, 56 sharing via NFS, 394 transferring via ftp, 409 fileset NONHPTERM, 256 X11-RUN, 143 file-sharing model, 56 client-server, 59 multiuser, 56 NFS Diskless, 57 private versus shared, 62 find, 781 find command, 757 find large files, 885 firewall working with, 854 firmware boot path actions, 487 fle .
Index getaccess() system call, 756 getacl command, 770 getacl() system call, 756, 770 getdvagent function, 808 getprdfent function, 808 getprpwent function, 808 getprtcent function, 808 getpwent function, 808 getspwent function, 809 getty, 261 gettydefs, 261 getusershell and ftp, troubleshooting, 412 gid how set, 775 workgroup issues, 104 gid (group ID), 774 gid and uid workgroup issues, 102 GlancePlus and GlancePlusPak, 738 global user IDs, 102 group device file, 575 group ID (gid), 774, 804 /etc/passwd fi
Index HP-UX Reference setting up manpages, 265 HP-UX releases compatibility, 448 HP-UX runstate, 258 hpux.
Index dump devices defined in, 540, 544 failing to boot, 286 kernel file selection, 467, 472, 473, 496 steps for reconfiguring, 284 when to reconfigure, 282, 315 kernel parameters, 282, 736 kernel resource monitor, 739 kill, 707, 798 killing processes, 260 kmem, 772 Korn shell, 248 environment variable, 143, 257 login files, 270 L LAN backup devices, 905 clusters, 906 hardware (station) address, 909 lanscan, 909 large file find, 885 large file compatibility, 455 large files backup, 693 restoring, 694 large
Index login command, 803 login name /etc/passwd file, 750 login screen CDE, 140 login shell /etc/passwd file, 750 long file names, 605 Loopback File System (LOFS), 85 lost+found directory, 615, 778 LP spooler commands, 106 initializing, 434 interface scripts, 106 local printer, 434 overview, 106 print requests, 106 printer class, 439 printer model files, 109 printer queues, 106 remote printer, 436 removing printer, 440 request directories, 106 statistics, 706 stopping and restarting, 703 lp subsystem, 750 l
Index minimum time password aging, 806 minor number, 261, 575 mirroring adding logical volume with, 863 commands for, 629 creating mirrored copies, 628 disk mirroring, 82 logical to physical extents, 560 logical volumes, 640 modifying mirrored copies, 628 moving a mirror copy, 637 online backup, 630 primary swap logical volume, 631 removing from logical volume, 870 replacing a disk, 638 root logical volume, 631 root logical volume on IPF systems, 634 strict allocation, 630 synchronizing, 638 using physical
Index /etc/protocols, 784 /etc/services, 784 Network File System see NFS Network File System (NFS), 85 crossing mount points, 697 mounting problems, 609 unmounting, 610 network gateway shutting down, 529 network host setting up, 893 network information setting parameters, 385 Network Information Service, 746 configuring on a gateway client, 917 hostname, 908 Internet protocol address, 920 policies for user and group data, 903 network overload, 733 network printer adding, 439 network security, 783 network se
Index installing software, 911 LAN, 906 member, 899 node, 899 policies, 902 server, 899 setting up hardware, 905 setting up the server, 918 tasks, 928 why create, 900 NFS diskless environment auditing, 800 NFS file server shutting down, 529 NFS server /etc/exports, 396 adding a disk, 861 adding a logical volume, 862 block size, 730 configuring, 395 configuring (SAM), 395 exportfs -a, 396 exporting files, 395 exporting to NT, 402 increasing nfsds, 734 moving directory to another server, 874 performance, 726
Index login shell, 750 security, 748, 801 sharing, 749 types of, 806 user ID (uid), 750 password aging expiration time, 806 lifetime, 806 minimum time, 806 password database /tcb/files/auth/, 801, 803 password file /etc/passwd, 749 editing, 749 fields, 803 null fields, 749 protected password database, 790, 801, 805 useradd command, 749 userdel command, 749 usermod command, 749 password history, 807 password reuse, 807 PASSWORD_HISTORY_DEPTH parameter, 807 patches Security Patch Check, 854 PATH, 776 default
Index POSIX shell, 248 environment variable, 143, 257 login files, 270 PostScript printers, 109 power failure, 778 recovering network services, 407 power failures, 525 preloaded system starting, 138 primary audit log file, 793, 798 Primary Boot Path setting using the Boot Console Handler, 493 setting using the setboot command, 474 primary boot path, 466, 474, 487, 492 primary swap, 663 as a dump area, 671 configuring, 670 reconfiguring, 671 print request alter, 706 cancel, 706 id numbers, 706 move destinati
Index pwck, 796 Q quotacheck example, 625 quotaon, 624 quotas file as a sparse file, 622 R raw data logical volumes stripe size for, 593 rcp network service, 785 rcs command, 757 rdump, 675 real group ID (rgid), 774 real user ID (ruid), 774 reboot, 500 recovering from problems (summary table), 890, 891 recovering network services, 407 recovery system, 699 recursive crashes, 537 reducing a logical volume command line, 868 reducing size of logical volume, 586 reformatting dumps, 553, 554 reject, 441, 443, 704
Index defined, 578 rrestore, 675 ruid how set, 775 ruid (real user ID), 774 run level checking, 882 run-level changing, 253 configuration, 515 creating new, 253 description, 252 for HP VUE, 252 runstate, 258 runtime dump device definitions, 546 rvxdump, 675 rvxrestore, 675 S SAM, 36, 737 adding a disk, 861 adding a local disk drive, 926 adding a local printer, 434 adding a logical volume, 862 adding a remote printer, 436 adding cluster clients, 919 adding LV with mirroring, 863 adding network-based printer,
Index security-alert PGP key, 747 security-alert@hp.
Index to single-user mode, 524 with reboot, 523 shutdowns abnormal, 531 avoiding, 530 system panics, 527 unclean, 526 shutting down mail servers, 528 name servers, 528 network gateways, 529 NFS clients, 530 NFS cluster clients, 530 NFS cluster server, 530 NFS file servers, 529 SIM, 37 single-user mode, 258, 483, 499, 524, 527 checking for, 882 single-user workstation, 46 socket overflows, 733 soft limits description, 621 software porting deciding, 451 when not to, 451 when to, 452 Software Transition Kit (S
Index maximum by default, 666 minimum required, 664 need to modify, 664 performance considerations, 667 planning, 77 priority of multiple areas, 668 pseudo-swap, 663 sampling usage, 78 server requirements, 666 stripe size for, 593 system parameters, 666 types of, 662 swapinfo, 611, 664, 668, 669 swapmem_on, 663 swapon, 669 swchunk parameter, 666 swinstall, 911 swlist, 911 swlist command, 791 sync pseudo-account, 750 synchronization client configuring, 161 Synchronization, configuration, 37 synchronizing a m
Index full_memory, 503 IO_HW, 504 late_cpu, 503 Memory_init, 504 PDH, 504 Platform, 503 SELFTESTS, 503 system self tests all, 510 bypassing, 501 configuring, 501 configuring from a booted system, 509 configuring from the Boot Console Handler, 506 configuring from the EFI shell, 507 definitions of, 503 FASTBOOT, 506 full_memory, 511 how system panics effect execution, 502 HP recommendations, 506 late_cpu, 511 system_prep script, 285 systems configure into network, 384 configure into workgroup, 387 installin
Index UEVENT1 event type, 797 UEVENT2 event type, 797 UEVENT3 event type, 797 uid global, 102 how set, 775 issues in a workgroup, 388 workgroup issues, 104 uid (user ID), 774 uid and gid workgroup issues, 102 umask, 251, 776, 791 umask command, 771 umount, 610 umountall, 610 unclean shutdowns, 526 uncompressed dumps, 553 unmounting a file system, 83, 609, 611 at shutdown, 610 problems, 610 unpack command, 757 unresponsive terminals, 258 untic, 257 user account protecting, 772 user id global, 102 user ID (ui
Index client-server, 59 client-server, defined, 53 configuring, 383 configuring ftp, 409 configuring NFS, 394 defined, 45 diagramming server’s disks, 882 disk space, planning, 77 distributing disks, 76 exporting file systems, 395 exporting HP-UX files to NT, 402 extending a logical volume, 864 focus of this manual, 44 home, mail directories, 103 how to (examples), 879 importing files, 396 importing HP-UX files to NT, 403 increasing server’s nfsds, 734 local home directory, 391 login issues, 388 managing use