• You have specified hosts in different complexes and the complexes are not managed in the
same GiCAP group.
• You have specified hosts in different nPartitions in a complex when there are no iCAP usage
rights to share between the nPartitions.
• The cimserver (or a provider) on one or more hosts is not functioning properly, and
consequently complex or partition IDs are not discovered correctly.
If you receive this message:
• Inspect the /var/opt/gwlm/gwlmagent.log.0 files on the indicated managed nodes
for error messages.
• If partitions have been renamed, restarting the agents in the complex might correct the
problem.
• If available, inspect the discovery tree for unexpected differences in complex or partition
names. Check the functioning of the parstatus, icapstatus, and vparstatus commands
on the hosts that do not have the expected IDs. Restarting the cimserver on those hosts
might correct the problem. An example command sequence follows this list.
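For example, the following sequence run on a suspect managed node illustrates these checks.
The /sbin/init.d/gwlmagent script path is an assumption and can vary by release; verify it on
your system before use.
# grep -i error /var/opt/gwlm/gwlmagent.log.0
# parstatus
# icapstatus
# vparstatus
# /sbin/init.d/gwlmagent stop
# /sbin/init.d/gwlmagent start
# cimserver -s
# cimserver
The grep command scans the agent log for errors; the three status commands confirm that
complex, partition, and iCAP information is reported as expected; the gwlmagent stop/start pair
restarts the agent; and cimserver -s followed by cimserver restarts the CIM server.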
Compatibility with PRM and WLM
You cannot use gWLM with either Process Resource Manager (PRM) or Workload Manager
(WLM) to manage the same system at the same time. Attempts to do so result in a message
indicating that a lock is being held by whichever application is actually managing the system.
To use gWLM in this situation, first turn off the application holding the lock.
For PRM, enter the following commands:
# /opt/prm/bin/prmconfig -d
# /opt/prm/bin/prmconfig -r
For WLM, enter the following command:
# /opt/wlm/bin/wlmd -k
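To confirm that the lock holder has actually stopped before starting gWLM, you can check the
state of each application. These checks are only illustrative and assume that prmconfig run
with no options reports the current PRM state:
# /opt/prm/bin/prmconfig
# ps -ef | grep wlmd
If wlmd no longer appears in the process list and PRM reports that it is disabled, gWLM can
acquire the lock.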
Rare incompatibility with virtual partitions
Depending on workload characteristics, gWLM can migrate CPU resources rapidly. In very rare
cases, this frequent migration can produce a race condition that causes the virtual partition to
crash. It can also produce a panic with one or more of the following messages:
No Chosen CPU on the cell-cannot proceed with NB PDC.
or
PDC_PAT_EVENT_SET_MODE(2) call returned error
Workaround: Upgrading to vPars A.03.04 resolves this issue.
With earlier versions of vPars, you can work around this issue as follows: Assign (using path
assignment) at least one CPU per cell as a bound CPU to at least one virtual partition. (It can be
any virtual partition.) This ensures that no redesignation occurs on CPU migrations. For example,
if you have four cells (0, 1, 2, 3), each with four CPUs (10, 11, 12, 13), and four virtual partitions
(vpar1, vpar2, vpar3, vpar4), you could assign 0/1x to vpar1, 1/1x to vpar2, 2/1x to vpar3, and
3/1x to vpar4, where x is 0, 1, 2, or 3.
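As a sketch of that example with x set to 0, the bound CPUs might be assigned with vparmodify.
The cpu:hardware_path syntax is assumed from vPars A.03.x; verify it against your vPars
release:
# vparmodify -p vpar1 -a cpu:0/10
# vparmodify -p vpar2 -a cpu:1/10
# vparmodify -p vpar3 -a cpu:2/10
# vparmodify -p vpar4 -a cpu:3/10
Each command adds one bound CPU from a different cell, which satisfies the requirement of at
least one bound CPU per cell.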
Workloads in gWLM do not follow associated Serviceguard packages
With the exception of virtual machines, a workload can be managed by gWLM in only one
deployed SRD at a time. As a result, if a workload is directly associated with a Serviceguard
package (using the selector in the Workload Definition dialog), gWLM can manage it on only
one of the hosts on which it might potentially run. Further, managing such a workload with
gWLM might disrupt Virtualization Manager and Capacity Advisor tracking of the workload's
utilization across cluster members. Thus, it is recommended that you not directly manage a
workload associated with a Serviceguard package.
Workaround: For all hosts to which a workload associated with a Serviceguard package might
fail over, you must apply a policy to an enclosing operating system instance (virtual partition or
nPartition).