HP-UX vPars and Integrity VM V6.1.5 Administrator Guide (5900-2295, April 2013)

guest with the start_attr attribute set to auto, the startup order is based on a memory weight
and a processor weight added together.
A rough estimate of the memory weight calculation is:
memory weight = (100 * guest memory size / available host memory) + 2 (if the guest resources can fit into a cell's available CLM and processors)
A rough estimate of the processor weight calculation is:
processor weight = (minimum guest CPU entitlement * number of virtual processors) / (100 * number of host processors)
Guests are expected to start in order from highest weight to lowest. You can adjust the order by
setting the sched_preference attribute (Section 3.2.6). If a guest fails to start for any reason,
the sequence continues with the next guest. For memory placement on a non-cell-based system, or on a
cell-based system with all memory configured as interleaved (ILM), the boot order has little effect.
In general, on these configurations, the largest guests boot first. On cell-based systems with CLM
configured, the expected memory placement depends on the calculated weights, the
sched_preference setting, and the VSP memory configuration:
- If sched_preference is not set or is set to "cell" and the guest resources fit into one cell,
  CLM is used.
  - If there is not enough CLM and there is enough ILM, ILM is used.
  - If there is not enough ILM, the memory is allocated from all cells (striped).
- If sched_preference is set to "ilm" and there is enough ILM, ILM is used.
  - If there is insufficient ILM but the guest resources fit into one cell, CLM is used. Otherwise
    the memory is striped.
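The startup ordering described above can be sketched in a few lines. This is an illustrative approximation, not HP code: the guest fields, host values, and function names below are hypothetical, and the weights follow the rough formulas given earlier (memory sizes in the same units for guest and host, entitlement as a percentage).

```python
# Illustrative sketch: rank guests for startup by the sum of the rough
# memory weight and processor weight described in the guide.
# All names and sample values here are hypothetical.

def startup_weight(guest_mem, host_mem, fits_in_cell,
                   min_entitlement_pct, n_vcpus, n_host_cpus):
    """Approximate combined startup weight for one guest."""
    # Memory weight: share of host memory, +2 if the guest fits in one cell.
    mem_weight = 100 * guest_mem / host_mem + (2 if fits_in_cell else 0)
    # Processor weight: entitlement spread over the host processors.
    cpu_weight = (min_entitlement_pct * n_vcpus) / (100 * n_host_cpus)
    return mem_weight + cpu_weight

guests = [
    {"name": "g1", "mem": 8192,  "fits": True,  "ent": 10, "vcpus": 4},
    {"name": "g2", "mem": 32768, "fits": False, "ent": 20, "vcpus": 8},
    {"name": "g3", "mem": 2048,  "fits": True,  "ent": 5,  "vcpus": 2},
]
HOST_MEM, HOST_CPUS = 65536, 16

# Highest weight boots first, as the guide states.
order = sorted(
    guests,
    key=lambda g: startup_weight(g["mem"], HOST_MEM, g["fits"],
                                 g["ent"], g["vcpus"], HOST_CPUS),
    reverse=True,
)
print([g["name"] for g in order])  # → ['g2', 'g1', 'g3']
```

With these sample numbers the 32 GB guest dominates on memory share, so it boots first even though the smaller guests earn the +2 cell-fit bonus.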
7.1.13 Specifying dynamic memory parameters
Specify whether the new virtual machine (shared VM type only) uses dynamic memory, and set the
values associated with it, by including the following keywords:
dynamic_memory_control={0|1}
ram_dyn_type={none|any|driver}
ram_dyn_min=amount
ram_dyn_max=amount
ram_dyn_target_start=amount
ram_dyn_entitlement=amount
amr_enable={0|1}
amr_chunk_size=amount
For more information about using dynamic memory for guests, see Section 11.9 (page 184).
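The minimum, boot-time target, and maximum values above must be mutually consistent. The sketch below is illustrative only (the exact rules are in Section 11.9): it assumes, for illustration, the ordering ram_dyn_min <= ram_dyn_target_start <= ram_dyn_max, and the function name and dictionary layout are hypothetical.

```python
# Illustrative sketch (not part of Integrity VM): sanity-check a set of
# dynamic-memory keywords. The ordering rule enforced here is an assumption
# for illustration; consult Section 11.9 for the authoritative constraints.

def check_dynamic_memory(params):
    """Return a list of problems found in a dynamic-memory keyword set."""
    problems = []
    if params.get("ram_dyn_type", "none") == "none":
        return problems  # dynamic memory not enabled; nothing to check
    lo = params.get("ram_dyn_min")
    hi = params.get("ram_dyn_max")
    start = params.get("ram_dyn_target_start", hi)
    if lo is None or hi is None:
        problems.append("ram_dyn_min and ram_dyn_max are required")
        return problems
    if not (lo <= start <= hi):
        problems.append(
            "expected ram_dyn_min <= ram_dyn_target_start <= ram_dyn_max")
    return problems

good = {"ram_dyn_type": "any", "ram_dyn_min": 1024,
        "ram_dyn_target_start": 2048, "ram_dyn_max": 4096}
bad = {"ram_dyn_type": "any", "ram_dyn_min": 4096,
       "ram_dyn_target_start": 1024, "ram_dyn_max": 2048}
print(check_dynamic_memory(good))  # → []
print(check_dynamic_memory(bad))   # reports the ordering problem
```

Catching an inconsistent keyword set before creating the guest avoids a failed boot later.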
7.1.14 Configuration limits
Table 12 lists the configuration limits for Integrity VM Version 6.1.5. For NPIV supported limits,
see Table 9 (page 56).
Table 12 Configuration Limits

Support                   Description
min (#pCPUs, Max vCPU)    # vCPUs/VM — Maximum (Integrity VM V6.1.5 Max vCPU = 16)
20                        # vCPUs/pCPU — Maximum
254                       # VMs per VSP — Maximum
86 Creating virtual machines