HP-UX Workload Manager overview
relative level of importance (priority). WLM enables you to prioritize the SLOs so that an SLO
assigned a high priority takes precedence over SLOs with lower priorities. Typically, you also specify a
usage goal to attain a targeted resource usage. If a performance measure (metric) is available, you
can specify a metric goal. As the applications run, WLM compares the application usage or metric
values against the goals and automatically adjusts CPU allocations for the workloads to achieve
those goals.
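As a rough illustration (not an excerpt from the product documentation), the following sketch shows how these elements might appear in a WLM configuration file. The group name, application path, and metric name are hypothetical; the authoritative syntax is described in wlmconf(4).

    # Minimal sketch of a WLM configuration (hypothetical names).
    prm {
        groups = OTHERS  : 1,
                 finance : 2;                   # FSS workload group
        apps   = finance : /opt/fin_app/query;  # assign the application to the group
    }

    # A high-priority SLO with a performance (metric) goal.
    slo finance_response {
        pri    = 1;                              # 1 is the highest priority
        mincpu = 20;
        maxcpu = 80;
        entity = PRM group finance;
        goal   = metric fin.query.resp_time < 2.0;
    }

    # The metric value is supplied to WLM by a data collector such as wlmrcvdc.
    tune fin.query.resp_time {
        coll_argv = wlmrcvdc;
    }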
CPU resources can be allocated in shares (portions or time slices) of multiple cores or, when using
WLM partition management or pSet management, in whole cores. WLM supports the logical CPU
(Hyper-Threading) feature for pSet-based groups. Hyper-Threading is available on certain processors
starting with HP-UX 11i v3 (B.11.31). A logical CPU is an execution thread contained within a core.
Each core with Hyper-Threading enabled can contain multiple logical CPUs. WLM automatically sets
the Hyper-Threading state for the default pSet to optimize performance. (The default pSet is where
FSS groups are created.) When new pSets are created, they inherit the Hyper-Threading state that the
system had before WLM was activated (because WLM may change the Hyper-Threading setting of
the default pSet to optimize performance). Cores can be moved from one partition to another and will
take on the Hyper-Threading state of their destination pSet. You can override the default state for
cores assigned to a specific pSet-based group; you can also modify the Hyper-Threading state of the
system. (Modifications to the Hyper-Threading state should not be made while WLM is running.) For
more information, see the HP-UX Workload Manager User’s Guide or the wlmconf(4) manpage.
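As a sketch of this distinction (group names are hypothetical; see wlmconf(4) for the actual syntax), a configuration's prm structure can define both share-based FSS groups and pSet-based groups:

    # Sketch: an FSS group allocated in CPU shares and a pSet-based group
    # allocated in whole cores (hypothetical group names).
    prm {
        groups = OTHERS   : 1,
                 batch    : 2,       # FSS group; receives shares of CPU
                 sales_db : PSET;    # pSet-based group; receives whole cores
    }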
Workload management across virtual partitions and nPartitions
WLM is optimized for moving cores among hosts such as virtual partitions and nPartitions. Using these
hosts as workloads, WLM manages workload allocations while maintaining the isolation of their HP-
UX instances. WLM automatically moves or “virtually transfers” cores among partitions based on
SLOs and priorities that you define for the partitions.
With virtual partitions, WLM can automatically balance resources across the partitions. For example,
if a processor is not being utilized in one virtual partition, WLM can deallocate it and reassign it
to another virtual partition that currently needs additional resources.
With nPartitions, which represent physical hardware, WLM does not move resources physically across
partitions. With HP iCAP present, core movement is simulated by deactivating one or more cores in
one nPartition and then activating cores in another nPartition.
The tools WLM uses to manage cores depend on the software enabled on the complex—such as HP
iCAP, HP PPU, and virtual partitions.
For each host (nPartition or virtual partition) workload, you define one or more SLOs in the host’s
WLM configuration file. Once configured, WLM then automatically manages CPU resources to satisfy
the SLOs for each workload. On an HP-UX system that has network connectivity to the partitions being
managed by WLM, you configure the global arbiter (wlmpard). The global arbiter takes input from
the WLM instances on the individual partitions and then moves cores between partitions as needed to
better achieve the SLOs specified in the WLM configuration file that is active in each partition.
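The sketch below outlines how these pieces might be tied together. The hostname and interval values are placeholders, and the exact keywords for wlmpard and the per-partition configurations are documented in the HP-UX Workload Manager User's Guide.

    # On the system running the global arbiter (wlmpard):
    par {
        interval = 60;              # seconds between cross-partition arbitration passes
    }

    # In each partition's WLM configuration file:
    tune {
        wlm_interval = 60;
        primary_host = arbiter01;   # hypothetical hostname of the wlmpard system
    }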
WLM can manage nested workloads, with workloads based on FSS groups and pSets inside virtual
partitions inside nPartitions. For more information, see “Managing nested partitions” on page 23. In
addition, you can integrate WLM with HP Serviceguard to reallocate resources in a failover situation
according to defined priorities (for more information on integrating with HP Serviceguard, see “Using
HP-UX Workload Manager with HP Serviceguard” on page 25).
Workload management within a single HP-UX instance
When you use WLM to divide resources among workloads within a single HP-UX instance, WLM
manages SLOs for workloads that are based on PRM-based pSets or FSS groups. These workloads
are usually referred to as “workload groups.”
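For example, a workload group within a single instance might carry an SLO with a usage goal rather than a metric goal. The following sketch uses hypothetical names; the authoritative syntax is in wlmconf(4).

    # Sketch: an SLO with a usage goal for a workload group
    # within a single HP-UX instance (hypothetical names).
    slo sales_usage {
        pri    = 2;
        mincpu = 10;
        maxcpu = 60;
        entity = PRM group sales;
        goal   = usage _CPU;        # size the allocation to the group's actual CPU usage
    }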