HP-UX Workload Manager A.03.05.xx Release Notes for HP-UX 11i v3 (B8843-90051, February 2011)
Table of Contents
- HP-UX Workload Manager A.03.05.xx Release Notes
- Table of Contents
- 1 Announcement
- 2 New in this version
- 3 Known problems and workarounds
- System panic when PRM is enabled; install failure in absence of PRM when certain kernel patches are present
- Capping issue
- WLM uses only the assigned CPU resources even with utilitypri set
- Temporary Instant Capacity (TiCAP) expires while WLM is managing nPartitions
- Automatic activation of Instant Capacity core without authorization
- Partition management affected when cores are deactivated with iCAP on fully owned system
- Application hangs in FSS group
- Shutdown slow; “Waiting for shutdown confirmation” and “Shutdown initiated; however, ... unable to acquire confirmation” messages displayed
- Unable to get CPU allocation due to number of processes
- Collectors abort when updated while running
- GlancePlus/OpenView Performance Agent and processor sets
- GlancePlus may not correctly identify processes’ PRM groups
- glance Adviser memory consumption increases continually
- WLM enables/disables SLOs at end of interval
- No metrics on startup or reconfiguration
- WLM configurations cannot be activated with fewer than 100 Mbytes of memory available
- Secure Resource Partitions: Blocked port on a virtual network interface
- Reaching the System V semaphore limit
- Configuration wizard requires PRM
- Processes in transient FSS groups appear unexpectedly in other workload groups
- Modifying a managed partition requires WLM and the global arbiter be stopped
- Performing online cell operations
- WLM GUI is not compatible with different versions of WLM
- "Message violation" error
- Upgrading or installing PRM before upgrading WLM can cause failed swverify checks
- 4 Compatibility information and installation requirements
- Disk and memory requirements
- Network operating environment
- Compatibility with other software
- Compatibility with long hostnames
- Compatibility with X Windows
- Compatibility with GlancePlus
- Compatibility with HP Integrity Virtual Machines
- Compatibility of WLM virtual partition management and Instant Capacity / PPU
- Compatibility of WLM virtual partition management and certain CPU bindings
- Compatibility of WLM partition management and PSETs
- Compatibility of psrset and PSETs
- Compatibility with PRM
- Compatibility with gWLM
- Compatibility with Java
- Installation procedure
- 5 Patches and fixes in this version
- 6 Software availability in native languages
- 7 Security
- 8 Available manuals
- 9 WLM toolkits
- 10 Providing feedback
- 11 Training

NOTE: Do not use GlancePlus to change PRM allocations. WLM controls PRM.
Compatibility with HP Integrity Virtual Machines
WLM supports HP Integrity Virtual Machines (Integrity VM). You can run WLM both on the
Integrity VM Host and in an Integrity VM (guest), but each WLM runs as an independent instance.
To run WLM on the Integrity VM Host, you must use a strictly host-based configuration—a
WLM configuration designed exclusively for moving cores across HP-UX Virtual Partitions or
nPartitions, or for activating Temporary Instant Capacity (TiCAP) cores or Pay per use (PPU)
cores. (WLM will not run with FSS groups or PSETs on Integrity VM Hosts where guests are
running.) In addition, ensure that the minimum number of cores allocated to a WLM host is
greater than or equal to the maximum number of virtual CPUs (vCPU count) assigned to each
VM guest. Otherwise, VM guests with a vCPU count greater than or equal to WLM’s minimum
allocation could receive insufficient resources and eventually crash. For example, if an Integrity
VM host has 8 cores and three guests with 1, 2, and 4 virtual CPUs, respectively, your WLM host
should maintain an allocation of at least 4 cores at all times. You can achieve this by using the
WLM hmincpu keyword.
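For example, the minimum allocation described above could be enforced with the hmincpu keyword in the WLM configuration file. The fragment below is only an illustrative sketch for the 8-core example; the value you choose must match the largest guest vCPU count on your own system:

```
# Sketch of a strictly host-based WLM configuration fragment.
# Keep at least 4 cores allocated to the VM Host at all times,
# matching the largest guest vCPU count (4) in the example above.
hmincpu = 4;
```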
WLM runs inside an Integrity VM but does not support PPU, vPar, or Instant Capacity (iCAP)
integration. However, Integrity VM will take advantage of cores added to the Integrity VM Host
by PPU, Instant Capacity, and TiCAP. As noted previously, WLM must continue allocating at
least as many cores as the maximum number of virtual CPUs in any VM guest on the system. In
addition, when running WLM inside an Integrity VM, you should specify a WLM interval greater
than 60 seconds. This helps ensure a fair allocation of CPU resources for FSS groups.
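For instance, the interval can be raised through the wlm_interval tunable in the tune structure of the WLM configuration file. The value shown is an arbitrary example above the 60-second default:

```
# Illustrative fragment: lengthen the WLM interval when running
# WLM inside an Integrity VM guest (default is 60 seconds).
tune {
    wlm_interval = 90;
}
```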
For more information on Integrity VM, go to the following location and navigate to the “Solution
components” page:
http://www.hp.com/go/vse
Compatibility of WLM virtual partition management and
Instant Capacity / PPU
If you have Instant Capacity (iCAP) or Pay per use (PPU) software installed, use WLM virtual
partition management only if you have vPars version A.03.01 or later.
If you have a vPars version prior to A.03.01, using WLM virtual partition management may
cause an Instant Capacity core to be automatically enabled without customer authorization. If
this situation occurs, please contact your HP representative.
With vPars version A.04.01 or later, use Instant Capacity v7 or later.
Compatibility of WLM virtual partition management and certain CPU
bindings
Do not use cell-specific CPU bindings or user-assigned CPU bindings on virtual partitions you
are going to manage with WLM.
Compatibility of WLM partition management and PSETs
WLM now supports simultaneous management of partitions (virtual partitions or nPartitions)
and PSET-based workload groups. Such support requires the following:
• If Instant Capacity (iCAP) is available on the complex, it must be v7 or later
• If HP-UX Virtual Partitions (vPars) is on the complex, it must be v4 (A.04.01) or later
• If Pay per use (PPU) is on the complex, it must be v7 or later