6.5.1
Table of Contents
- vSphere Troubleshooting
- Contents
- About vSphere Troubleshooting
- Updated Information
- Troubleshooting Overview
- Troubleshooting Virtual Machines
- Troubleshooting Fault Tolerant Virtual Machines
- Hardware Virtualization Not Enabled
- Compatible Hosts Not Available for Secondary VM
- Secondary VM on Overcommitted Host Degrades Performance of Primary VM
- Increased Network Latency Observed in FT Virtual Machines
- Some Hosts Are Overloaded with FT Virtual Machines
- Losing Access to FT Metadata Datastore
- Turning On vSphere FT for Powered-On VM Fails
- FT Virtual Machines Not Placed or Evacuated by vSphere DRS
- Fault Tolerant Virtual Machine Failovers
- Troubleshooting USB Passthrough Devices
- Recover Orphaned Virtual Machines
- Virtual Machine Does Not Power On After Cloning or Deploying from Template
- Troubleshooting Hosts
- Troubleshooting vSphere HA Host States
- vSphere HA Agent Is in the Agent Unreachable State
- vSphere HA Agent Is in the Uninitialized State
- vSphere HA Agent Is in the Initialization Error State
- vSphere HA Agent Is in the Uninitialization Error State
- vSphere HA Agent Is in the Host Failed State
- vSphere HA Agent Is in the Network Partitioned State
- vSphere HA Agent Is in the Network Isolated State
- Configuration of vSphere HA on Hosts Times Out
- Troubleshooting vSphere Auto Deploy
- vSphere Auto Deploy TFTP Timeout Error at Boot Time
- vSphere Auto Deploy Host Boots with Wrong Configuration
- Host Is Not Redirected to vSphere Auto Deploy Server
- Package Warning Message When You Assign an Image Profile to a vSphere Auto Deploy Host
- vSphere Auto Deploy Host with a Built-In USB Flash Drive Does Not Send Coredumps to Local Disk
- vSphere Auto Deploy Host Reboots After Five Minutes
- vSphere Auto Deploy Host Cannot Contact TFTP Server
- vSphere Auto Deploy Host Cannot Retrieve ESXi Image from vSphere Auto Deploy Server
- vSphere Auto Deploy Host Does Not Get a DHCP Assigned Address
- vSphere Auto Deploy Host Does Not Network Boot
- Recovering from Database Corruption on the vSphere Auto Deploy Server
- Authentication Token Manipulation Error
- Active Directory Rule Set Error Causes Host Profile Compliance Failure
- Unable to Download VIBs When Using vCenter Server Reverse Proxy
- Troubleshooting vCenter Server and the vSphere Web Client
- Troubleshooting Availability
- Troubleshooting Resource Management
- Troubleshooting Storage DRS
- Storage DRS Is Disabled on a Virtual Disk
- Datastore Cannot Enter Maintenance Mode
- Storage DRS Cannot Operate on a Datastore
- Moving Multiple Virtual Machines into a Datastore Cluster Fails
- Storage DRS Generates Fault During Virtual Machine Creation
- Storage DRS Is Enabled on a Virtual Machine Deployed from an OVF Template
- Storage DRS Rule Violation Fault Is Displayed Multiple Times
- Storage DRS Rules Not Deleted from Datastore Cluster
- Alternative Storage DRS Placement Recommendations Are Not Generated
- Applying Storage DRS Recommendations Fails
- Troubleshooting Storage I/O Control
- Troubleshooting Storage
- Resolving SAN Storage Display Problems
- Resolving SAN Performance Problems
- Virtual Machines with RDMs Need to Ignore SCSI INQUIRY Cache
- Software iSCSI Adapter Is Enabled When Not Needed
- Failure to Mount NFS Datastores
- Troubleshooting Storage Adapters
- Checking Metadata Consistency with VOMA
- No Failover for Storage Path When TUR Command Is Unsuccessful
- Troubleshooting Flash Devices
- Troubleshooting Virtual Volumes
- Troubleshooting VAIO Filters
- Troubleshooting Networking
- Troubleshooting MAC Address Allocation
- The Conversion to the Enhanced LACP Support Fails
- Unable to Remove a Host from a vSphere Distributed Switch
- Hosts on a vSphere Distributed Switch 5.1 and Later Lose Connectivity to vCenter Server
- Hosts on vSphere Distributed Switch 5.0 and Earlier Lose Connectivity to vCenter Server
- Alarm for Loss of Network Redundancy on a Host
- Virtual Machines Lose Connectivity After Changing the Uplink Failover Order of a Distributed Port Group
- Unable to Add a Physical Adapter to a vSphere Distributed Switch
- Troubleshooting SR-IOV Enabled Workloads
- A Virtual Machine that Runs a VPN Client Causes Denial of Service for Virtual Machines on the Host or Across a vSphere HA Cluster
- Low Throughput for UDP Workloads on Windows Virtual Machines
- Virtual Machines on the Same Distributed Port Group and on Different Hosts Cannot Communicate with Each Other
- Attempt to Power On a Migrated vApp Fails Because the Associated Protocol Profile Is Missing
- Networking Configuration Operation Is Rolled Back and a Host Is Disconnected from vCenter Server
- Troubleshooting Licensing
Table 8‑1. Steps to Complete the Conversion to the Enhanced LACP Manually

1. Create a new LAG.
   Target configuration state: A newly created LAG must be present on the distributed switch.
   Solution: Check the LACP configuration of the distributed switch and create a new LAG if there is none.

2. Create an intermediate LACP teaming and failover configuration on the distributed port groups.
   Target configuration state: The newly created LAG must be standby, so that you can migrate physical NICs to the LAG without losing connectivity.
   Solution: Check the teaming and failover configuration of the distributed port group. If the new LAG is not standby, set it as standby. If you do not want a LAG to handle the traffic for all distributed port groups, revert the teaming and failover configuration to a state where the standalone uplinks are active and the LAG is unused.

3. Reassign physical NICs from standalone uplinks to LAG ports.
   Target configuration state: All physical NICs must be reassigned from standalone uplinks to the LAG ports.
   Solution: Check whether physical NICs are assigned to the LAG ports, and assign a physical NIC to every LAG port.
   Note: The LAG must remain standby in the teaming and failover order of the distributed port groups while you reassign physical NICs to the LAG ports.

4. Create the final LACP teaming and failover configuration on the distributed port groups.
   Target configuration state: Active: only the new LAG. Standby: empty. Unused: all standalone uplinks.
   Solution: Check the teaming and failover configuration of the distributed port group. Create a valid LACP teaming and failover configuration for all distributed port groups to which you want to apply LACP.
For example, suppose you verify that a new LAG has been created on the distributed switch and that an intermediate teaming and failover configuration has been created for the distributed port groups. You continue by checking whether physical NICs are assigned to the LAG ports. You find that not all hosts have physical NICs assigned to the LAG ports, so you assign the NICs manually. You complete the conversion by creating the final LACP teaming and failover configuration for the distributed port groups.
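The intermediate and final target states from Table 8‑1 can be expressed as simple checks. The sketch below is illustrative only: the dictionary layout and helper names are assumptions, not a VMware API, but the rules they encode (intermediate: LAG standby, standalone uplinks still active; final: only the LAG active, standby empty, all standalone uplinks unused) follow the table.

```python
# Sketch: validate the intermediate and final LACP teaming/failover
# states from Table 8-1. The data model is illustrative, not a VMware API.

def is_valid_intermediate(config):
    """Stage 2: the new LAG must be standby so that physical NICs can be
    migrated without losing connectivity; standalone uplinks stay active."""
    return (
        config["lag"] in config["standby"]
        and config["lag"] not in config["active"]
        and len(config["active"]) > 0  # uplinks still carry traffic
    )

def is_valid_final(config):
    """Stage 4: only the new LAG is active, standby is empty,
    and all standalone uplinks are unused."""
    return (
        config["active"] == [config["lag"]]
        and config["standby"] == []
        and set(config["unused"]) == set(config["uplinks"])
    )

# Intermediate state: LAG standby, standalone uplinks active.
intermediate = {
    "lag": "lag1",
    "uplinks": ["Uplink 1", "Uplink 2"],
    "active": ["Uplink 1", "Uplink 2"],
    "standby": ["lag1"],
    "unused": [],
}

# Final state: only the LAG active, all standalone uplinks unused.
final = {
    "lag": "lag1",
    "uplinks": ["Uplink 1", "Uplink 2"],
    "active": ["lag1"],
    "standby": [],
    "unused": ["Uplink 1", "Uplink 2"],
}

print(is_valid_intermediate(intermediate))  # True
print(is_valid_final(final))                # True
```

Swapping the two configurations fails both checks, which mirrors why the LAG must stay standby until every LAG port has a physical NIC assigned.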
Unable to Remove a Host from a vSphere Distributed Switch
Under certain conditions, you might be unable to remove a host from the vSphere distributed switch.
Problem
- Attempts to remove a host from a vSphere distributed switch fail, and you receive a notification that resources are still in use. The notification that you receive might look like the following:

      The resource '16' is in use.
      vDS DSwitch port 16 is still on host 10.23.112.2 connected to MyVM nic=4000 type=vmVnic
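When you troubleshoot this across many hosts, it can help to pull the switch, port, host, and VM name out of each notification before deciding which virtual machines to migrate or disconnect. A minimal parsing sketch, assuming the message follows the exact layout shown above (the regular expression is an assumption and may need adjusting for other vSphere versions or locales):

```python
import re

# Parse the "resource in use" notification shown above. The pattern
# assumes the exact message layout; adjust it for other versions.
PATTERN = re.compile(
    r"vDS (?P<switch>\S+) port (?P<port>\d+) is still on host "
    r"(?P<host>\S+) connected to (?P<vm>\S+)"
)

def parse_in_use_notification(text):
    """Return the switch, port, host, and VM named in the notification,
    or None if the text does not match the expected layout."""
    m = PATTERN.search(text)
    return m.groupdict() if m else None

msg = ("vDS DSwitch port 16 is still on host 10.23.112.2 "
       "connected to MyVM nic=4000 type=vmVnic")
info = parse_in_use_notification(msg)
print(info["vm"], info["port"])  # MyVM 16
```

With the VM and host identified, you can migrate that VM's networking off the distributed switch (or power it off) and then retry removing the host.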
vSphere Troubleshooting
VMware, Inc. 87