vSphere Resource Management 6.0.1
Table Of Contents
- vSphere Resource Management
- Contents
- About vSphere Resource Management
- Updated Information
- Getting Started with Resource Management
- Configuring Resource Allocation Settings
- CPU Virtualization Basics
- Administering CPU Resources
- Memory Virtualization Basics
- Administering Memory Resources
- View Graphics Information
- Managing Storage I/O Resources
- Managing Resource Pools
- Creating a DRS Cluster
- Using DRS Clusters to Manage Resources
- Creating a Datastore Cluster
- Initial Placement and Ongoing Balancing
- Storage Migration Recommendations
- Create a Datastore Cluster
- Enable and Disable Storage DRS
- Set the Automation Level for Datastore Clusters
- Setting the Aggressiveness Level for Storage DRS
- Datastore Cluster Requirements
- Adding and Removing Datastores from a Datastore Cluster
- Using Datastore Clusters to Manage Storage Resources
- Using NUMA Systems with ESXi
- Advanced Attributes
- Fault Definitions
- Virtual Machine is Pinned
- Virtual Machine not Compatible with any Host
- VM/VM DRS Rule Violated when Moving to another Host
- Host Incompatible with Virtual Machine
- Host has Virtual Machine that Violates VM/VM DRS Rules
- Host has Insufficient Capacity for Virtual Machine
- Host in Incorrect State
- Host has Insufficient Number of Physical CPUs for Virtual Machine
- Host has Insufficient Capacity for Each Virtual Machine CPU
- The Virtual Machine is in vMotion
- No Active Host in Cluster
- Insufficient Resources
- Insufficient Resources to Satisfy Configured Failover Level for HA
- No Compatible Hard Affinity Host
- No Compatible Soft Affinity Host
- Soft Rule Violation Correction Disallowed
- Soft Rule Violation Correction Impact
- DRS Troubleshooting Information
- Cluster Problems
- Load Imbalance on Cluster
- Cluster is Yellow
- Cluster is Red Because of Inconsistent Resource Pool
- Cluster is Red Because Failover Capacity is Violated
- No Hosts are Powered Off When Total Cluster Load is Low
- Hosts are Powered Off When Total Cluster Load is High
- DRS Seldom or Never Performs vMotion Migrations
- Host Problems
- DRS Recommends Host be Powered On to Increase Capacity When Total Cluster Load Is Low
- Total Cluster Load Is High
- Total Cluster Load Is Low
- DRS Does Not Evacuate a Host Requested to Enter Maintenance or Standby Mode
- DRS Does Not Move Any Virtual Machines onto a Host
- DRS Does Not Move Any Virtual Machines from a Host
- Virtual Machine Problems
- Index
4 Under Scheduling Affinity, select physical processor affinity for the virtual machine.
Use '-' for ranges and ',' to separate values.
For example, "0, 2, 4-7" indicates processors 0, 2, 4, 5, 6, and 7.
5 Select the processors where you want the virtual machine to run and click OK.
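The affinity string format described in step 4 can be expanded mechanically. The following is a minimal Python sketch (not part of any vSphere tooling; the function name is illustrative) that parses a spec such as "0, 2, 4-7" into the set of processors it denotes:

```python
def parse_affinity(spec: str) -> set[int]:
    """Expand an affinity spec such as "0, 2, 4-7" into a set of CPU numbers.

    ',' separates values and '-' denotes an inclusive range, matching the
    syntax of the Scheduling Affinity field.
    """
    cpus: set[int] = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return cpus

print(sorted(parse_affinity("0, 2, 4-7")))  # [0, 2, 4, 5, 6, 7]
```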
Potential Issues with CPU Affinity
Before you use CPU affinity, you might need to consider certain issues.
Potential issues with CPU affinity include:
- For multiprocessor systems, ESXi systems perform automatic load balancing. Avoid manual specification of virtual machine affinity to improve the scheduler's ability to balance load across processors.
- Affinity can interfere with the ESXi host's ability to meet the reservation and shares specified for a virtual machine.
- Because CPU admission control does not consider affinity, a virtual machine with manual affinity settings might not always receive its full reservation. Virtual machines that do not have manual affinity settings are not adversely affected by virtual machines with manual affinity settings.
- When you move a virtual machine from one host to another, affinity might no longer apply because the new host might have a different number of processors.
- The NUMA scheduler might not be able to manage a virtual machine that is already assigned to certain processors using affinity.
- Affinity can affect the host's ability to schedule virtual machines on multicore or hyperthreaded processors to take full advantage of resources shared on such processors.
Host Power Management Policies
ESXi can take advantage of several power management features that the host hardware provides to adjust
the trade-off between performance and power use. You can control how ESXi uses these features by
selecting a power management policy.
In general, selecting a high-performance policy provides more absolute performance, but at lower efficiency
(performance per watt). Lower-power policies provide less absolute performance, but at higher efficiency.
ESXi provides five power management policies. If the host does not support power management, or if the
BIOS settings specify that the host operating system is not allowed to manage power, only the Not
Supported policy is available.
You select a policy for a host using the vSphere Web Client. If you do not select a policy, ESXi uses Balanced
by default.
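The policy-availability rules above can be sketched in a few lines. This is an illustrative Python model, not vSphere code; it assumes the two policies not shown in the excerpted table below are Low Power and Custom, and encodes only the rules stated in the text (no hardware support or no BIOS permission leaves only Not Supported; otherwise Balanced is the default):

```python
POLICIES = ["Not Supported", "High Performance", "Balanced", "Low Power", "Custom"]

def available_policies(hw_support: bool, bios_allows_os_control: bool) -> list[str]:
    """Return the selectable policies: if the host does not support power
    management, or the BIOS does not allow the OS to manage power, only
    the Not Supported policy is available."""
    if not (hw_support and bios_allows_os_control):
        return ["Not Supported"]
    return [p for p in POLICIES if p != "Not Supported"]

def default_policy(hw_support: bool, bios_allows_os_control: bool) -> str:
    """ESXi uses Balanced by default when no policy has been selected."""
    avail = available_policies(hw_support, bios_allows_os_control)
    return "Balanced" if "Balanced" in avail else "Not Supported"

print(default_policy(True, True))    # Balanced
print(default_policy(True, False))   # Not Supported
```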
Table 4-1. CPU Power Management Policies

Not Supported
    The host does not support any power management features, or power management is not enabled in the BIOS.
High Performance
    The VMkernel detects certain power management features, but will not use them unless the BIOS requests them for power capping or thermal events.
Balanced (Default)
    The VMkernel uses the available power management features conservatively to reduce host energy consumption with minimal compromise to performance.
Chapter 4 Administering CPU Resources
VMware, Inc. 25