4 Next to swap file location, click Edit.
5 Select where to store the swap file.

   Option                        Description
   ---------------------------   --------------------------------------------------
   Virtual machine directory     Stores the swap file in the same directory as
                                 the virtual machine configuration file.
   Datastore specified by host   Stores the swap file in the location specified
                                 in the host configuration. If the swap file
                                 cannot be stored on the datastore that the host
                                 specifies, the swap file is stored in the same
                                 folder as the virtual machine.

6 Click OK.
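The same placement can also be set through the vSphere API. The following is a minimal sketch using the pyVmomi Python SDK; the vCenter address, credentials, and the virtual machine name web01 are placeholder assumptions, not values from this guide.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    # Connect; skipping certificate verification is a lab convenience only.
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local", pwd="password",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    # Locate the virtual machine by name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "web01")

    # swapPlacement accepts "inherit" (datastore specified by host),
    # "vmDirectory", or "hostLocal".
    spec = vim.vm.ConfigSpec(swapPlacement="vmDirectory")
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
    Disconnect(si)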
Delete Swap Files
If a host fails, and that host had running virtual machines that were using swap files, those swap files
continue to exist and consume many gigabytes of disk space. You can delete the swap files to eliminate this
problem.
Procedure
1 Restart the virtual machine that was on the host that failed.
2 Stop the virtual machine.
The swap file for the virtual machine is deleted.
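The same power cycle can be scripted. A minimal sketch, assuming the imports and the vm lookup from the previous example; PowerOffVM_Task is a hard stop, so use it only for this cleanup.

    # Restart the virtual machine that was on the failed host, then stop it;
    # powering it off removes the stale swap file.
    WaitForTask(vm.PowerOnVM_Task())
    WaitForTask(vm.PowerOffVM_Task())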
Sharing Memory Across Virtual Machines
Many ESXi workloads present opportunities for sharing memory across virtual machines (as well as within
a single virtual machine).
ESXi memory sharing runs as a background activity that scans for sharing opportunities over time. The
amount of memory saved varies; for a fairly constant workload, it generally increases slowly until all
sharing opportunities are exploited.
To determine the effectiveness of memory sharing for a given workload, run the workload and use
resxtop or esxtop to observe the actual savings. The information appears in the PSHARE field of the
tool's interactive mode, on the Memory page.
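The saving can also be read through the vSphere performance counters, for example mem.shared.average at the virtual machine level. A sketch reusing content and vm from the first example; the counter lookup pattern is standard, but treat the exact indices as illustrative.

    pm = content.perfManager
    # Find the counter key for mem.shared.average (sampled in KB).
    counter = next(c for c in pm.perfCounter
                   if c.groupInfo.key == "mem"
                   and c.nameInfo.key == "shared"
                   and c.rollupType == "average")
    query = vim.PerformanceManager.QuerySpec(
        entity=vm, maxSample=1, intervalId=20,  # 20 s = real-time interval
        metricId=[vim.PerformanceManager.MetricId(counterId=counter.key,
                                                  instance="")])
    sample = pm.QueryPerf(querySpec=[query])[0]
    print("shared memory (KB):", sample.value[0].value[0])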
Use the Mem.ShareScanTime and Mem.ShareScanGHz advanced settings to control the rate at which the system
scans memory to identify opportunities for sharing memory.
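These are per-host advanced options, reachable through the host's OptionManager. A sketch, again assuming vm from above; the value 6 is illustrative (the default scan rate is 4 GHz), and numeric host options are long-typed in the API, so some pyVmomi versions may require explicit type coercion.

    opts = vm.runtime.host.configManager.advancedOption
    # Read the current scan settings.
    for key in ("Mem.ShareScanTime", "Mem.ShareScanGHz"):
        print(key, "=", opts.QueryOptions(key)[0].value)
    # Raise the per-host scan rate.
    opts.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Mem.ShareScanGHz", value=6)])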
You can also configure sharing for individual virtual machines by setting the sched.mem.pshare.enable
option.
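Per-virtual-machine options such as this one are set through the virtual machine's extraConfig. A minimal sketch, assuming vm from the first example:

    # Disable transparent page sharing for this virtual machine only.
    spec = vim.vm.ConfigSpec(extraConfig=[
        vim.option.OptionValue(key="sched.mem.pshare.enable", value="FALSE")])
    WaitForTask(vm.ReconfigVM_Task(spec=spec))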
Due to security concerns, inter-virtual machine transparent page sharing is disabled by default, and page
sharing is restricted to intra-virtual machine memory sharing. That is, pages are shared only within a
virtual machine, never across virtual machines. The concept of salting has been introduced to address
concerns that system administrators may have about the security implications of transparent page sharing.
Salting allows more granular control over which virtual machines participate in transparent page sharing
than was previously possible. With the salting settings, virtual machines can share pages only if both the
salt value and the contents of the pages are identical. The host configuration option Mem.ShareForceSalting
enables or disables salting.
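Mem.ShareForceSalting is likewise a per-host advanced option. A sketch using the same OptionManager as above; the value semantics in the comment reflect VMware's published defaults, and the long-typing caveat from the earlier example applies here too.

    opts = vm.runtime.host.configManager.advancedOption
    # 2 (the default) gives each virtual machine its own salt; 1 honors a
    # sched.mem.pshare.salt value if one is set; 0 disables salting and
    # restores inter-virtual machine page sharing.
    opts.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Mem.ShareForceSalting", value=0)])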
See Chapter 15, “Advanced Attributes,” on page 115 for information on how to set advanced options.