vSphere Troubleshooting 6.0.1

Table of Contents
- vSphere Troubleshooting
- Contents
- About vSphere Troubleshooting
- Updated Information
- Troubleshooting Overview
- Troubleshooting Virtual Machines
- Troubleshooting Fault Tolerant Virtual Machines
- Hardware Virtualization Not Enabled
- Compatible Hosts Not Available for Secondary VM
- Secondary VM on Overcommitted Host Degrades Performance of Primary VM
- Increased Network Latency Observed in FT Virtual Machines
- Some Hosts Are Overloaded with FT Virtual Machines
- Losing Access to FT Metadata Datastore
- Turning On vSphere FT for Powered-On VM Fails
- FT Virtual Machines not Placed or Evacuated by vSphere DRS
- Fault Tolerant Virtual Machine Failovers
- Troubleshooting USB Passthrough Devices
- Recover Orphaned Virtual Machines
- Virtual Machine Does Not Power On After Cloning or Deploying from Template
- Troubleshooting Hosts
- Troubleshooting vSphere HA Host States
- vSphere HA Agent Is in the Agent Unreachable State
- vSphere HA Agent is in the Uninitialized State
- vSphere HA Agent is in the Initialization Error State
- vSphere HA Agent is in the Uninitialization Error State
- vSphere HA Agent is in the Host Failed State
- vSphere HA Agent is in the Network Partitioned State
- vSphere HA Agent is in the Network Isolated State
- Configuration of vSphere HA on Hosts Times Out
- Troubleshooting Auto Deploy
- Auto Deploy TFTP Timeout Error at Boot Time
- Auto Deploy Host Boots with Wrong Configuration
- Host Is Not Redirected to Auto Deploy Server
- Package Warning Message When You Assign an Image Profile to Auto Deploy Host
- Auto Deploy Host with a Built-In USB Flash Drive Does Not Send Coredumps to Local Disk
- Auto Deploy Host Reboots After Five Minutes
- Auto Deploy Host Cannot Contact TFTP Server
- Auto Deploy Host Cannot Retrieve ESXi Image from Auto Deploy Server
- Auto Deploy Host Does Not Get a DHCP Assigned Address
- Auto Deploy Host Does Not Network Boot
- Authentication Token Manipulation Error
- Active Directory Rule Set Error Causes Host Profile Compliance Failure
- Unable to Download VIBs When Using vCenter Server Reverse Proxy
- Troubleshooting vCenter Server and the vSphere Web Client
- Troubleshooting Availability
- Troubleshooting Resource Management
- Troubleshooting Storage DRS
- Storage DRS is Disabled on a Virtual Disk
- Datastore Cannot Enter Maintenance Mode
- Storage DRS Cannot Operate on a Datastore
- Moving Multiple Virtual Machines into a Datastore Cluster Fails
- Storage DRS Generates Fault During Virtual Machine Creation
- Storage DRS is Enabled on a Virtual Machine Deployed from an OVF Template
- Storage DRS Rule Violation Fault Is Displayed Multiple Times
- Storage DRS Rules Not Deleted from Datastore Cluster
- Alternative Storage DRS Placement Recommendations Are Not Generated
- Applying Storage DRS Recommendations Fails
- Troubleshooting Storage I/O Control
- Troubleshooting Storage
- Resolving SAN Storage Display Problems
- Resolving SAN Performance Problems
- Virtual Machines with RDMs Need to Ignore SCSI INQUIRY Cache
- Software iSCSI Adapter Is Enabled When Not Needed
- Failure to Mount NFS Datastores
- VMkernel Log Files Contain SCSI Sense Codes
- Troubleshooting Storage Adapters
- Checking Metadata Consistency with VOMA
- Troubleshooting Flash Devices
- Troubleshooting Virtual Volumes
- Troubleshooting VAIO Filters
- Troubleshooting Networking
- Troubleshooting MAC Address Allocation
- The Conversion to the Enhanced LACP Support Fails
- Unable to Remove a Host from a vSphere Distributed Switch
- Hosts on a vSphere Distributed Switch 5.1 and Later Lose Connectivity to vCenter Server
- Hosts on vSphere Distributed Switch 5.0 and Earlier Lose Connectivity to vCenter Server
- Alarm for Loss of Network Redundancy on a Host
- Virtual Machines Lose Connectivity After Changing the Uplink Failover Order of a Distributed Port Group
- Unable to Add a Physical Adapter to a vSphere Distributed Switch
- Troubleshooting SR-IOV Enabled Workloads
- A Virtual Machine that Runs a VPN Client Causes Denial of Service for Virtual Machines on the Host or Across a vSphere HA Cluster
- Low Throughput for UDP Workloads on Windows Virtual Machines
- Virtual Machines on the Same Distributed Port Group and on Different Hosts Cannot Communicate with Each Other
- Attempt to Power On a Migrated vApp Fails Because the Associated Protocol Profile Is Missing
- Networking Configuration Operation Is Rolled Back and a Host Is Disconnected from vCenter Server
- Troubleshooting Licensing
- Index
The Conversion to the Enhanced LACP Support Fails
Under certain conditions, the conversion from an existing LACP configuration to the enhanced LACP
support on a vSphere Distributed Switch version 5.5 or later might fail.
Problem
After you upgrade a vSphere Distributed Switch to version 5.5 or later and initiate the conversion from
the existing LACP configuration to the enhanced LACP support, the conversion fails at a certain stage of
the process.
Cause
The conversion from an existing LACP configuration to the enhanced LACP support includes several tasks
that reconfigure the distributed switch. The conversion can fail if another user reconfigures the
distributed switch while the conversion is in progress. For example, physical NICs on the hosts might be
reassigned to different uplinks, or the teaming and failover configuration of the distributed port groups
might be changed.
The conversion can also fail if some of the hosts disconnect while it is in progress.
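Because a disconnected host is one of the named causes, it can help to check the connection state of every
host on the switch before retrying the conversion. The following is a minimal sketch using the pyVmomi
SDK; the vCenter address, credentials, and switch name are placeholders, not values from this guide.

```python
# Sketch: list the connection state of every host attached to a distributed
# switch. The vCenter address, credentials, and switch name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch01")  # placeholder
    view.Destroy()
    # A host in any state other than "connected" can break the conversion.
    for member in dvs.config.host:
        host = member.config.host
        print(host.name, host.runtime.connectionState)
finally:
    Disconnect(si)
```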
Solution
When the conversion to the enhanced LACP support fails at a certain stage, the configuration is only
partially complete. Check the configuration of the distributed switch and the participating hosts to
identify the objects with an incomplete LACP configuration.
Check the target configuration that each conversion stage must produce, in the order listed in Table 8-1.
When you locate the stage at which the conversion failed, complete its target configuration manually and
continue with the stages that follow.
Table 8-1. Steps to Complete the Conversion to the Enhanced LACP Support Manually

Stage 1. Create a new LAG.
- Target configuration state: A newly created LAG is present on the distributed switch.
- Solution: Check the LACP configuration of the distributed switch and create a new LAG if there is none.

Stage 2. Create an intermediate LACP teaming and failover configuration on the distributed port groups.
- Target configuration state: The newly created LAG is standby, which lets you migrate physical NICs to the LAG without losing connectivity.
- Solution: Check the teaming and failover configuration of the distributed port groups and set the new LAG as standby if it is not. If you do not want a LAG to handle the traffic for all distributed port groups, revert the teaming and failover configuration to a state where the standalone uplinks are active and the LAG is unused.

Stage 3. Reassign physical NICs from standalone uplinks to the LAG ports.
- Target configuration state: All physical NICs are reassigned from the standalone uplinks to the LAG ports.
- Solution: Check whether physical NICs are assigned to the LAG ports and assign a physical NIC to every LAG port. NOTE: The LAG must remain standby in the teaming and failover order of the distributed port groups while you reassign physical NICs to the LAG ports.

Stage 4. Create the final LACP teaming and failover configuration on the distributed port groups.
- Target configuration state: Active uplinks: only the new LAG. Standby uplinks: empty. Unused uplinks: all standalone uplinks.
- Solution: Check the teaming and failover configuration of each distributed port group and create a valid LACP teaming and failover configuration for all distributed port groups on which you want to apply LACP.
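One way to locate the stage at which the conversion stopped is to read the relevant settings
programmatically and compare them against the target states in Table 8-1. The following pyVmomi sketch is
illustrative only; the vCenter address, credentials, and switch name are placeholders. It prints the LAGs
defined on the switch (stage 1), the physical NIC-to-uplink-port assignments on each host (stage 3), and
the active and standby uplink order of each distributed port group (stages 2 and 4).

```python
# Sketch: audit the LACP conversion state of a distributed switch with
# pyVmomi. The vCenter address, credentials, and switch name below are
# placeholders; adapt them to your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.dvs.VmwareDistributedVirtualSwitch], True)
    dvs = next(d for d in view.view if d.name == "dvSwitch01")  # placeholder
    view.Destroy()

    # Stage 1: the switch must use the multi-LAG API and have a LAG defined.
    print("LACP API version:", dvs.config.lacpApiVersion)
    for lag in dvs.config.lacpGroupConfig or []:
        print("LAG:", lag.name, "mode:", lag.mode, "ports:", lag.uplinkNum)

    # Stage 3: which physical NIC backs which uplink port on each host.
    for member in dvs.config.host:
        backing = member.config.backing
        for spec in getattr(backing, "pnicSpec", None) or []:
            print(member.config.host.name, spec.pnicDevice,
                  "-> uplink port", spec.uplinkPortKey)

    # Stages 2 and 4: the active/standby uplink order of each port group.
    for pg in dvs.portgroup:
        teaming = pg.config.defaultPortConfig.uplinkTeamingPolicy
        if teaming and teaming.uplinkPortOrder:
            print(pg.name,
                  "active:", teaming.uplinkPortOrder.activeUplinkPort,
                  "standby:", teaming.uplinkPortOrder.standbyUplinkPort)
finally:
    Disconnect(si)
```

For example, if a LAG exists but every port group still lists only standalone uplinks as active, the
conversion most likely stopped at stage 2.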