HPSS Management Guide
Table Of Contents
- Chapter 1. HPSS 7.3 Configuration Overview
- Chapter 2. Security and System Access
- Chapter 3. Using SSM
- 3.1. The SSM System Manager
- 3.2. Quick Startup of hpssgui
- 3.3. Configuration and Startup of hpssgui and hpssadm
- 3.4. Multiple SSM Sessions
- 3.5. SSM Window Conventions
- 3.6. Common Window Elements
- 3.7. Help Menu Overview
- 3.8. Monitor, Operations and Configure Menus Overview
- 3.9. SSM Specific Windows
- 3.10. SSM List Preferences
- Chapter 4. Global & Subsystem Configuration
- 4.1. Global Configuration Window
- 4.2. Storage Subsystems
- 4.2.1. Subsystems List Window
- 4.2.2. Creating a New Storage Subsystem
- 4.2.3. Storage Subsystem Configuration Window
- 4.2.3.1. Create Storage Subsystem Metadata
- 4.2.3.2. Create Storage Subsystem Configuration
- 4.2.3.3. Create Storage Subsystem Servers
- 4.2.3.4. Assign a Gatekeeper if Required
- 4.2.3.5. Assign Storage Resources to the Storage Subsystem
- 4.2.3.6. Create Storage Subsystem Fileset and Junction
- 4.2.3.7. Migration and Purge Policy Overrides
- 4.2.3.8. Storage Class Threshold Overrides
- 4.2.4. Modifying a Storage Subsystem
- 4.2.5. Deleting a Storage Subsystem
- Chapter 5. HPSS Servers
- 5.1. Server List
- 5.2. Server Configuration
- 5.2.1. Common Server Configuration
- 5.2.2. Core Server Specific Configuration
- 5.2.3. Gatekeeper Specific Configuration
- 5.2.4. Location Server Additional Configuration
- 5.2.5. Log Client Specific Configuration
- 5.2.6. Log Daemon Specific Configuration
- 5.2.7. Migration/Purge Server (MPS) Specific Configuration
- 5.2.8. Mover Specific Configuration
- 5.2.8.1. Mover Specific Configuration Window
- 5.2.8.2. Additional Mover Configuration
- 5.2.8.2.1. /etc/services, /etc/inetd.conf, and /etc/xinetd.d
- 5.2.8.2.2. The Mover Encryption Key Files
- 5.2.8.2.3. /var/hpss/etc Files Required for Remote Mover
- 5.2.8.2.4. System Configuration Parameters on IRIX, Solaris, and Linux
- 5.2.8.2.5. Setting Up Remote Movers with mkhpss
- 5.2.8.2.6. Mover Configuration to Support Local File Transfer
- 5.2.9. Physical Volume Repository (PVR) Specific Configuration
- 5.2.10. Deleting a Server Configuration
- 5.3. Monitoring Server Information
- 5.3.1. Basic Server Information
- 5.3.2. Specific Server Information
- 5.3.2.1. Core Server Information Window
- 5.3.2.2. Gatekeeper Information Window
- 5.3.2.3. Location Server Information Window
- 5.3.2.4. Migration/Purge Server Information Window
- 5.3.2.5. Mover Information Window
- 5.3.2.6. Physical Volume Library (PVL) Information Window
- 5.3.2.7. Physical Volume Repository (PVR) Information Windows
- 5.4. Real-Time Monitoring (RTM)
- 5.5. Starting HPSS
- 5.6. Stopping HPSS
- 5.7. Server Repair and Reinitialization
- 5.8. Forcing an SSM Connection
- Chapter 6. Storage Configuration
- 6.1. Storage Classes
- 6.2. Storage Hierarchies
- 6.3. Classes of Service
- 6.4. Migration Policies
- 6.5. Purge Policies
- 6.6. File Families
- Chapter 7. Device and Drive Management
- Chapter 8. Volume and Storage Management
- 8.1. Adding Storage Space
- 8.2. Removing Storage Space
- 8.3. Monitoring Storage Space
- 8.4. Dealing with a Space Shortage
- 8.5. Volume Management
- 8.6. Monitoring and Managing Volume Mounts
- 8.7. New Storage Technology Insertion
- Chapter 9. Logging and Status
- Chapter 10. Filesets and Junctions
- Chapter 11. Files, Directories and Objects by SOID
- Chapter 12. Tape Aggregation
- Chapter 13. User Accounts and Accounting
- Chapter 14. User Interfaces
- Chapter 15. Backup and Recovery
- Chapter 16. Management Tools
To help mitigate this, when the thread pool is full, the System Manager notifies all the threads in the
thread pool that are waiting on list updates to return to the client as if they had just timed out normally.
As many as 15 threads per client may be awakened and told to return, freeing those threads to do other
work.
If the client interface RPC thread pool is still full (as it could be if, for example, 15 requests in the client
interface RPC request queue immediately took over the 15 threads that were just released), then the
System Manager sets the wait time for the new RPCs to 1 second rather than the interval the client
requested, so that these RPCs do not occupy threads for long.
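This wait-time adjustment can be sketched as follows (hypothetical Python, not the HPSS source; the function and parameter names are illustrative):

```python
def effective_wait_time(requested_wait, active_rpcs, thread_pool_size):
    """Choose the wait time for a newly accepted list-update RPC.

    When the client interface RPC thread pool is full (every thread
    busy), the System Manager shortens the wait to 1 second so the
    RPC does not tie up a thread for long; otherwise it honors the
    wait time the client requested.
    """
    if active_rpcs >= thread_pool_size:
        return 1
    return requested_wait
```

For example, with a full 100-thread pool, a client that asked to wait 60 seconds is held for only 1 second.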
Note that once the System Manager enters this mode (a constantly full client interface RPC thread pool,
with thread wait times being cut short), it works much harder and its CPU usage increases. Closing some
windows and/or clients should allow things to stabilize again.
You can see whether the System Manager client interface RPC thread pool has ever been full by looking
at the Maximum Active/Queued RPCs field in the Client column of the RPC Interface Information
group in the System Manager Statistics window (Section 3.9.4.1: System Manager Statistics Window on
page 63). If this number is greater than or equal to the corresponding client interface's Thread Pool Size
(default 100), then the thread pool was full at some time during the System Manager execution (although
it may not be full currently).
To tell whether the thread pool is currently full, look at the number of Queued RPCs. If Queued RPCs is
0 then the thread pool is not full at the moment.
If Active RPCs is equal to Thread Pool Size then the thread pool for the interface is currently full.
Active RPCs should never be greater than Thread Pool Size. Once Active RPCs reaches Thread Pool
Size, new RPCs are queued and Queued RPCs becomes greater than 0.
When the thread pool is full, the System Manager works harder to clear out existing RPCs before
accepting new ones, so even if the thread pool fills up, it should not stay full for long.
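The checks above can be summarized in a small sketch (hypothetical Python; the argument names mirror the fields in the System Manager Statistics window):

```python
def pool_was_ever_full(max_active_queued, thread_pool_size=100):
    # True if the thread pool was full at some time during the
    # System Manager's execution (Maximum Active/Queued RPCs has
    # reached the interface's Thread Pool Size).
    return max_active_queued >= thread_pool_size

def pool_currently_full(active_rpcs, queued_rpcs, thread_pool_size=100):
    # The pool is full right now if every thread is busy; queued
    # RPCs can only appear once the pool has filled.
    return queued_rpcs > 0 or active_rpcs >= thread_pool_size
```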
If the site runs with low refresh rates and more than 40 clients, the recommendation is to set the client
interface RPC thread pool size to 150 or 200 and the client interface RPC request queue size to 1000 in
the System Manager Server Configuration window (Section 5.1.1.2: Interface Controls on page 92).
Otherwise, the default values should work well.
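The sizing guideline can be expressed as a sketch (hypothetical Python; the thresholds and values come from the recommendation above):

```python
def recommended_interface_controls(num_clients, low_refresh_rates):
    """Suggest client interface RPC settings for the System Manager
    Server Configuration window's Interface Controls.

    Sites running low refresh rates with more than 40 clients should
    enlarge the thread pool and request queue; everyone else can keep
    the shipped defaults.
    """
    if low_refresh_rates and num_clients > 40:
        return {"thread_pool_size": 200, "request_queue_size": 1000}
    return None  # keep the defaults (thread pool size 100)
```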
3.1.3. Labeling the System Manager RPC Program Number
Labeling the System Manager RPC program number is not required but can be a useful debugging aid.
The SSM System Manager registers with the RPC portmapper at initialization. As part of this
registration, it tells the portmapper its RPC program number. Each HPSS server configuration contains
the server's RPC program number. To find the System Manager's program number, open the Servers
window, select the SSM System Manager, and click the Configure button to open the SSM System
Manager Configuration window. The System Manager's RPC program number is in the Program
Number field on the Execution Controls tab of this window.
The rpcinfo utility with the -p option lists all registered programs, their RPC program numbers, and the
port on which each is currently listening for RPCs. When diagnosing SSM problems, it can be useful to
run the rpcinfo program and search for the System Manager RPC program number in the output, to see
whether the System Manager has successfully initialized its RPC interface and to see which
HPSS Management Guide November 2009
Release 7.3 (Revision 1.0) 32
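The rpcinfo check described above can be scripted; the sketch below (hypothetical Python) parses captured `rpcinfo -p` output for a given program number (536870913 is a made-up example value; read the real one from the Program Number field described above):

```python
def find_rpc_program(rpcinfo_output, program_number):
    """Return (version, protocol, port) tuples for every registration
    of the given RPC program number in `rpcinfo -p` output."""
    matches = []
    for line in rpcinfo_output.splitlines():
        fields = line.split()
        # Data lines look like: "program vers proto port [service]"
        if len(fields) >= 4 and fields[0] == str(program_number):
            matches.append((fields[1], fields[2], fields[3]))
    return matches

# Example with captured output:
sample = """\
   program vers proto   port  service
    100000    4   tcp    111  portmapper
 536870913    1   tcp  49152
"""
print(find_rpc_program(sample, 536870913))  # → [('1', 'tcp', '49152')]
```

An empty result means the System Manager has not registered its RPC interface with the portmapper.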