Namespaces
Namespaces for PMem are configured before ESXi starts. Namespaces are similar to disks on the system. ESXi reads the namespaces and combines multiple namespaces into one logical volume by writing GPT headers. By default, the volume is formatted automatically if you have not previously configured it. If it has already been formatted, ESXi attempts to mount the PMem.
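After the mount, the PMem volume appears in vCenter Server as a host-local datastore. The following sketch, which is not part of the product documentation, uses the open-source pyVmomi library to list datastores whose summary type is reported as PMEM; the vCenter Server hostname, credentials, and the unverified SSL context are placeholders suitable only for a lab.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_pmem_datastores(vc_host, user, pwd):
    # Unverified SSL context: acceptable in a lab, not in production.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host=vc_host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.Datastore], True)
        gib = 1024 ** 3
        for ds in view.view:
            # PMem datastores report their summary type as "PMEM".
            if ds.summary.type == "PMEM":
                print("%s: %.1f GiB capacity, %.1f GiB free" % (
                    ds.summary.name,
                    ds.summary.capacity / gib,
                    ds.summary.freeSpace / gib))
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_pmem_datastores("vcenter.example.com",
                         "administrator@vsphere.local", "placeholder-password")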
GPT tables
If the data in PMem storage is corrupted, it can cause ESXi to fail. To avoid this, ESXi checks the metadata for errors when it mounts the PMem.
PMem regions
A PMem region is a contiguous byte stream that represents a single virtual NVDIMM (vPMem) or vPMemDisk device. Each PMem volume belongs to a single host, which would be difficult to manage if an administrator had to handle every host in a large cluster individually. However, you do not have to manage each individual datastore. Instead, you can treat the entire PMem capacity in the cluster as one datastore.
vCenter Server and DRS automate the initial placement of VMs on PMem datastores. Select the host-local PMem storage policy when the VM is created or when the device is added to the VM; the rest of the configuration is automated. One limitation is that ESXi does not allow you to place the VM home on a PMem datastore, because doing so would take up valuable space for VM log and stat files. PMem regions represent the VM data and can be exposed to the VM as byte-addressable NVDIMMs or as vPMemDisks.
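The device-add step described above can also be scripted. The following hedged pyVmomi sketch adds a vPMem device (a virtual NVDIMM) to an existing, powered-off VM; it assumes that vCenter Server applies the host-local PMem default storage policy during the reconfiguration and that the placeholder size of 4 GiB fits within the host's free PMem. It illustrates the workflow and is not an official sample.

from pyVmomi import vim

def add_vpmem_device(vm, size_mb=4096):
    # Build the list of device changes for a single ReconfigVM_Task call.
    device_changes = []

    # Reuse the VM's virtual NVDIMM controller, or add one if none exists.
    controller = next(
        (d for d in vm.config.hardware.device
         if isinstance(d, vim.vm.device.VirtualNVDIMMController)), None)
    if controller is None:
        controller = vim.vm.device.VirtualNVDIMMController(key=-100)
        device_changes.append(vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
            device=controller))

    # The NVDIMM device itself; its backing is placed on host PMem.
    nvdimm = vim.vm.device.VirtualNVDIMM(
        key=-101,
        controllerKey=controller.key,
        capacityInMB=size_mb,
        backing=vim.vm.device.VirtualNVDIMM.BackingInfo())
    device_changes.append(vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=nvdimm))

    # Returns a vim.Task; wait for it to complete before powering on the VM.
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=device_changes))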
Migration
Because PMem is a local datastore, moving a VM to another host requires storage vMotion. A VM with vPMem can be migrated only to an ESXi host with a PMem resource. A VM with vPMemDisk can be migrated to an ESXi host without a PMem resource.
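As an illustration of that storage vMotion step, the following pyVmomi sketch relocates a VM to another host and datastore in one operation. The target host and datastore are assumed to be already resolved by the caller, and vCenter Server rejects the request if the destination cannot satisfy the VM's vPMem requirement.

from pyVmomi import vim

def storage_vmotion(vm, target_host, target_datastore):
    # Relocate the VM's compute and storage in a single task.
    spec = vim.vm.RelocateSpec()
    spec.host = target_host                       # destination ESXi host
    spec.pool = target_host.parent.resourcePool   # root resource pool of the destination
    spec.datastore = target_datastore             # destination datastore for the VM files
    return vm.RelocateVM_Task(spec)               # vim.Task; monitor until it completes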
Error handling and NVDIMM management
Host failures can result in a loss of availability. In the case of catastrophic errors, you may lose all data and must take manual steps to reformat the PMem.