– If Pay per use is used on the complex, the Pay per use software must be v7 or later.
You can configure WLM to manage FSS-based and pSet-based workload groups at the same time that it
manages partitions. Observe the software restrictions that apply to using pSet-based groups with virtual partitions,
Instant Capacity, and Pay per use, as noted previously. For more information on restrictions, see the
WLM release notes available at:
http://docs.hp.com/hpux/netsys/index.html#HP-UX%20Workload%20Manager
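For illustration, a prm structure along the following lines could declare FSS-based and pSet-based groups in the same WLM configuration. The group names and values are hypothetical; wlmconf(4) documents the exact syntax for your WLM version.

    # Hypothetical prm structure mixing FSS-based and pSet-based groups
    prm {
        groups = OTHERS : 1,          # default FSS-based group
                 sales_grp : 2,       # FSS-based workload group
                 finance_grp : PSET;  # pSet-based workload group
    }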
WLM allocates cores to a partition based on the CPU limits of the partition (physical limits for
nPartitions; logical limits for virtual partitions). For example, WLM adjusts the number of cores
assigned to a virtual partition within the limits of the partition’s (that is, vPar’s) minimum and maximum
number of cores, which you set using vparmodify.
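For instance, a command along the following lines (with a hypothetical partition name and bounds) sets a virtual partition's minimum and maximum number of cores; check vparmodify(1M) for the exact resource syntax on your vPars release.

    # Allow vpar1 to range between 1 and 4 cores (hypothetical name and bounds)
    vparmodify -p vpar1 -m cpu:::1:4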
The way WLM uses group weighting to determine CPU allocations across partitions is the same as the
way it uses group weighting to determine allocations within a partition. For more information, see the
HP-UX Workload Manager User’s Guide or the wlmconf(4) manpage.
How HP-UX Workload Manager manages partitions
WLM manages both virtual partitions and nPartitions through a global arbiter that receives CPU
requests from a WLM instance running in each partition. The arbiter then uses the SLOs to determine
how to allocate CPU resources to the partitions and applies the resulting changes. Because cores cannot
physically move across nPartition boundaries, with nPartitions the migration of CPU resources is
simulated by using Instant Capacity software to deactivate cores on one nPartition and then activate
them on another nPartition where the resources are needed more.
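The global arbiter runs as its own daemon, wlmpard, driven by a separate configuration file. The following is a minimal sketch only; the keyword, interval value, and file path are illustrative assumptions, so consult the wlmpard(1M) and wlmparconf(4) manpages for the supported syntax.

    # Hypothetical global arbiter configuration for wlmpard
    par {
        interval = 60;   # seconds between arbitration passes (illustrative value)
    }

    # Activate the global arbiter with this configuration (illustrative path):
    # wlmpard -a /etc/wlmpar.config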
Within a partition, you can run a single application and let WLM manage the application’s resources
by adjusting the number of cores in the partition. Alternatively, you can have several applications
sharing the partition’s resources.
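When several applications share a partition, the WLM configuration inside that partition can place them in separate workload groups through apps records in the prm structure. The group names and application paths in this fragment are hypothetical.

    # Hypothetical application-to-group assignments within one partition
    prm {
        groups = OTHERS : 1, db_grp : 2, web_grp : 3;
        apps   = db_grp  : /opt/database/bin/db_server,
                 web_grp : /opt/web/bin/httpd;
    }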
For more details on the WLM management of a partitioned system, see Figure 2. This graphic
illustrates WLM treating all the applications in a partition as a single workload. For a generalized
overview of WLM partition management, see the white paper, “HP-UX Workload Manager
overview,” available from the information library at:
http://www.hp.com/go/wlm
Figure 2 shows a more detailed flow of WLM operations:
• Each partition, or workload, has a WLM configuration file that specifies a usage goal for the
workload (a configuration sketch appears after this list).
• Each usage goal results in WLM creating a usage data collector and a controller. The data collector
tracks the actual CPU usage of the processes in the partition; WLM divides this usage by the
workload’s number of cores to determine a utilization percentage for the partition. Based on that
percentage, the controller requests an increase or decrease in the partition’s number of cores to
bring the utilization percentage within a configurable range.
• WLM adds data to the statistics log /var/opt/wlm/wlmdstats if enabled through the wlmd -l
option.
• At intervals, WLM calculates new resource allocations to request from the global arbiter for each
partition.
• The WLM instance in each partition continuously sends these requests to the global arbiter, asking
it to allocate the calculated number of cores to the partition.
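Putting these steps together, a per-partition configuration might resemble the following sketch. The group and host names, the usage-goal bounds, and the tune values are illustrative assumptions; the HP-UX Workload Manager User’s Guide and wlmconf(4) document the exact keywords.

    # Hypothetical per-partition WLM configuration
    prm {
        groups = OTHERS : 1,
                 partition_grp : 2;        # group holding the partition's applications
    }

    slo partition_slo {
        pri = 1;                           # SLO priority
        entity = PRM group partition_grp;
        goal = usage _CPU 50 75;           # keep utilization between 50% and 75% (illustrative bounds)
    }

    tune {
        wlm_interval = 60;                 # seconds between WLM allocation passes
        primary_host = arbiter_host;       # partition where the global arbiter (wlmpard) runs
    }

    # Activate the configuration; the -l option enables the statistics log
    # (see wlmd(1M) for valid -l arguments):
    # wlmd -a /etc/partition.wlm -l metric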