A.05.80 HP Insight Remote Support Advanced Managed Systems Configuration Guide (June 2013)
Table Of Contents
- Managed Systems Configuration Guide
- Contents
- About This Document
- Insight Remote Support Advanced Managed Systems Overview
- ProLiant Windows Server Configuration
- ProLiant Linux Server Configuration
- ProLiant VMware ESX Server Configuration
- ProLiant VMware ESXi Server Configuration
- ProLiant Citrix Server Configuration
- ProLiant c-Class BladeSystem Enclosure Configuration
- Integrity Windows 2003 Server Configuration
- Integrity Windows 2008 Server Configuration
- Integrity Linux Server Configuration
- Integrity Superdome 2 Server Configuration
- HP-UX Server Configuration
- Meeting HP-UX Operating System, Software, and Patch Requirements
- More About WBEM and SFM with Insight Remote Support
- Verifying System Fault Management is Operational
- Creating WBEM Users
- Configuring WEBES to Support WBEM Indications
- Firewall and Port Requirements for HP-UX Managed Systems
- Configuring HP-UX Managed Systems for Proactive Collection Services
- OpenVMS Server Configuration
- Tru64 UNIX Server Configuration
- NonStop Server Configuration
- Enterprise Virtual Array Configuration
- Understanding the Different Server Types and Software Applications
- Command View EVA 8.0.1 and Higher Hosted on the CMS
- Important Port Settings Information
- Important Information Regarding New HP SIM Installations
- Correcting an Existing HP SIM Installation
- Change the WMI Mapper Proxy port in the HP SIM User Interface on the CMS
- Restore Defaults to the wbemportlist.xml file
- Installing and Configuring Command View EVA After HP SIM
- Resetting the Port Numbers when Command View EVA was Installed before HP SIM
- Command View EVA Hosted on a Separate SMS
- Requirements and Documentation to Configure Command View EVA on the SMS
- Overview of Command View EVA 7.0.1 through 8.0.1 with SMI-S Requirements
- SMS System and Access Requirements
- WEBES – EVA Communication
- HP SIM – EVA Communication
- Software Required on the SMS
- Fulfilling ELMC Common Requirements for a Windows SMS
- Installing MC3 on the SMS
- Configuring EVA-Specific Information on the CMS
- Requirements to Support EVA4400 and P6000 with Command View EVA on the ABM
- Enabling User-Initiated Service Mode in Command View EVA 9.3
- Performing a Remote Service Test in Command View EVA 9.3
- Troubleshooting EVA Managed Systems
- P4000 Storage Systems Migration Procedure
- Network Storage System Configuration
- Modular Smart Array Configuration
- Tape Library Configuration
- System Requirements
- Managed Systems Configuration
- Nearline (Tape Library) Configuration
- Secure Key Manager Configuration
- StoreOnce D2D (Disk-to-Disk) Backup System Configuration
- Enterprise Systems Library G3 Configuration
- TapeAssure Service Configuration
- Prerequisites
- Command View for Tape Libraries and TapeAssure Service Installation
- Configure the Command View TL 2.8 CIMOM and TapeAssure Provider
- Configure the Command View TL 3.0 CIMOM and TapeAssure Provider
- HP SIM Device Discovery
- WEBES Configuration
- Create a New SMI-S Protocol in WEBES
- Subscribe to the Command View TL and TapeAssure CIMOM
- SAN Switch Configuration
- E-Series Switch Configuration
- A-Series Switch Configuration
- UPS Network Module Configuration
- Modular Cooling System Configuration
- Glossary
- Index

Managed Systems Configuration Guide
Chapter 13: OpenVMS Server Configuration
Result: ELMC installs itself for all nodes.
- Cluster: All but two nodes share system disk A. The other two nodes share system disk B.
Install Node: A node that uses system disk A.
Install Target: The default location SYS$COMMON:[HP...].
Result: The other two nodes will not have ELMC.
In the previous case, you can install ELMC one more time for the remaining two nodes by running the
install from either node and again choosing the default location of SYS$COMMON:[HP...]. Consider
this a completely separate ELMC installation from the first install on the majority of the nodes.
- Cluster: All but two nodes share system disk A. The other two nodes share system disk B. All nodes
also mount a non-system disk C.
Install Node: Any node.
Install Target: A directory on disk C, specified by you during the installation.
Result: ELMC installs itself for all nodes.
Note: In all cases, the installation package also lets you install ELMC on only a subset of the nodes
that can see the install location.
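Deciding on an install node and install target is easier when you know which nodes share which system
disk and which nodes can see a candidate non-system disk. The following DCL sketch uses SYSMAN to run
the same query on every cluster member; DISK_C: is a placeholder device name for the non-system disk,
so substitute the names used in your cluster.
$ ! Survey the cluster before choosing an ELMC install target.
$ ! The first DO shows each member's system disk (nodes reporting the same
$ ! device share a system disk); the second shows whether each member has
$ ! the non-system disk (placeholder DISK_C:) mounted.
$ RUN SYS$SYSTEM:SYSMAN
SYSMAN> SET ENVIRONMENT/CLUSTER
SYSMAN> DO SHOW LOGICAL SYS$SYSDEVICE
SYSMAN> DO SHOW DEVICE DISK_C:
SYSMAN> EXIT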
Archiving and Cleaning the Error Log
After WEBES is installed on the CMS, WEBES begins using ELMC to analyze all events stored in the
error log, which can result in high CPU usage over an extended period. To limit this initial processing,
you are encouraged to archive and clean the error log before installing ELMC. A smaller log reduces the
time required for the initial scan.
Follow these guidelines for cleaning the error log. If ELMC is installed and running when you clean the log,
you do not need to stop and restart the Director process. Also, do not stop and restart the ERRFMT system
event logging process.
The default error log, typically SYS$SYSROOT:[SYSERR]ERRLOG.SYS, grows in size and remains on the
system disk until you explicitly rename or delete it. When either occurs, the system creates a new,
clean error log file after about 15 minutes.
Caution: After renaming or deleting the existing log, do not install ELMC until the new default log is
present.
If you rename the log, the saved log can be analyzed at a later time.
Aside from starting with a clean log before installing WEBES, you may want to perform regular
maintenance on the error log. One method is to rename errlog.sys on a daily basis. For example, you
might rename errlog.sys to errlog.old every morning at 9:00. To free space on the system disk,
you can then back up the renamed version to a different volume and delete the file from the system disk.
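A minimal sketch of that daily cycle as a DCL command procedure follows. The procedure name
ARCHIVE_ERRLOG.COM, the archive device ARCHIVE_DISK:, and the directory [ERRLOG_ARCHIVE] are
placeholders, and the self-resubmitting batch job is only one way to schedule the daily run; adjust
the names and schedule for your site.
$ ! ARCHIVE_ERRLOG.COM -- sketch of a daily error log maintenance job (placeholder names).
$ SET NOON               ! Continue past errors so the resubmit at the end always runs.
$ !
$ ! Rename the live log; the system creates a new, clean ERRLOG.SYS in about 15 minutes.
$ RENAME SYS$SYSROOT:[SYSERR]ERRLOG.SYS SYS$SYSROOT:[SYSERR]ERRLOG.OLD
$ !
$ ! Back up the renamed log to a save set on another volume, then delete the
$ ! copy from the system disk to free space.
$ BACKUP SYS$SYSROOT:[SYSERR]ERRLOG.OLD ARCHIVE_DISK:[ERRLOG_ARCHIVE]ERRLOG.BCK/SAVE_SET
$ DELETE SYS$SYSROOT:[SYSERR]ERRLOG.OLD;*
$ !
$ ! Resubmit this procedure so it runs again tomorrow morning at 9:00.
$ SUBMIT/AFTER="TOMORROW+9:00" SYS$MANAGER:ARCHIVE_ERRLOG.COM
$ EXIT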