vSphere Troubleshooting 6.0.1
Table of Contents
- vSphere Troubleshooting
- Contents
- About vSphere Troubleshooting
- Updated Information
- Troubleshooting Overview
- Troubleshooting Virtual Machines
- Troubleshooting Fault Tolerant Virtual Machines
- Hardware Virtualization Not Enabled
- Compatible Hosts Not Available for Secondary VM
- Secondary VM on Overcommitted Host Degrades Performance of Primary VM
- Increased Network Latency Observed in FT Virtual Machines
- Some Hosts Are Overloaded with FT Virtual Machines
- Losing Access to FT Metadata Datastore
- Turning On vSphere FT for Powered-On VM Fails
- FT Virtual Machines not Placed or Evacuated by vSphere DRS
- Fault Tolerant Virtual Machine Failovers
- Troubleshooting USB Passthrough Devices
- Recover Orphaned Virtual Machines
- Virtual Machine Does Not Power On After Cloning or Deploying from Template
- Troubleshooting Hosts
- Troubleshooting vSphere HA Host States
- vSphere HA Agent Is in the Agent Unreachable State
- vSphere HA Agent is in the Uninitialized State
- vSphere HA Agent is in the Initialization Error State
- vSphere HA Agent is in the Uninitialization Error State
- vSphere HA Agent is in the Host Failed State
- vSphere HA Agent is in the Network Partitioned State
- vSphere HA Agent is in the Network Isolated State
- Configuration of vSphere HA on Hosts Times Out
- Troubleshooting Auto Deploy
- Auto Deploy TFTP Timeout Error at Boot Time
- Auto Deploy Host Boots with Wrong Configuration
- Host Is Not Redirected to Auto Deploy Server
- Package Warning Message When You Assign an Image Profile to Auto Deploy Host
- Auto Deploy Host with a Built-In USB Flash Drive Does Not Send Coredumps to Local Disk
- Auto Deploy Host Reboots After Five Minutes
- Auto Deploy Host Cannot Contact TFTP Server
- Auto Deploy Host Cannot Retrieve ESXi Image from Auto Deploy Server
- Auto Deploy Host Does Not Get a DHCP Assigned Address
- Auto Deploy Host Does Not Network Boot
- Authentication Token Manipulation Error
- Active Directory Rule Set Error Causes Host Profile Compliance Failure
- Unable to Download VIBs When Using vCenter Server Reverse Proxy
- Troubleshooting vCenter Server and the vSphere Web Client
- Troubleshooting Availability
- Troubleshooting Resource Management
- Troubleshooting Storage DRS
- Storage DRS is Disabled on a Virtual Disk
- Datastore Cannot Enter Maintenance Mode
- Storage DRS Cannot Operate on a Datastore
- Moving Multiple Virtual Machines into a Datastore Cluster Fails
- Storage DRS Generates Fault During Virtual Machine Creation
- Storage DRS is Enabled on a Virtual Machine Deployed from an OVF Template
- Storage DRS Rule Violation Fault Is Displayed Multiple Times
- Storage DRS Rules Not Deleted from Datastore Cluster
- Alternative Storage DRS Placement Recommendations Are Not Generated
- Applying Storage DRS Recommendations Fails
- Troubleshooting Storage I/O Control
- Troubleshooting Storage
- Resolving SAN Storage Display Problems
- Resolving SAN Performance Problems
- Virtual Machines with RDMs Need to Ignore SCSI INQUIRY Cache
- Software iSCSI Adapter Is Enabled When Not Needed
- Failure to Mount NFS Datastores
- VMkernel Log Files Contain SCSI Sense Codes
- Troubleshooting Storage Adapters
- Checking Metadata Consistency with VOMA
- Troubleshooting Flash Devices
- Troubleshooting Virtual Volumes
- Troubleshooting VAIO Filters
- Troubleshooting Networking
- Troubleshooting MAC Address Allocation
- The Conversion to the Enhanced LACP Support Fails
- Unable to Remove a Host from a vSphere Distributed Switch
- Hosts on a vSphere Distributed Switch 5.1 and Later Lose Connectivity to vCenter Server
- Hosts on vSphere Distributed Switch 5.0 and Earlier Lose Connectivity to vCenter Server
- Alarm for Loss of Network Redundancy on a Host
- Virtual Machines Lose Connectivity After Changing the Uplink Failover Order of a Distributed Port Group
- Unable to Add a Physical Adapter to a vSphere Distributed Switch
- Troubleshooting SR-IOV Enabled Workloads
- A Virtual Machine that Runs a VPN Client Causes Denial of Service for Virtual Machines on the Host or Across a vSphere HA Cluster
- Low Throughput for UDP Workloads on Windows Virtual Machines
- Virtual Machines on the Same Distributed Port Group and on Different Hosts Cannot Communicate with Each Other
- Attempt to Power On a Migrated vApp Fails Because the Associated Protocol Profile Is Missing
- Networking Configuration Operation Is Rolled Back and a Host Is Disconnected from vCenter Server
- Troubleshooting Licensing
- Index
Failure to Mount NFS Datastores
Attempts to mount NFS datastores with names in international languages result in failures.
Problem
The use of non-ASCII characters in directory and file names on NFS storage might cause unpredictable
behavior. For example, you might be unable to mount an NFS datastore or to power on a virtual
machine.
Cause
ESXi supports the use of non-ASCII characters for directory and file names on NFS storage, so you can
create datastores and virtual machines using names in international languages. However, when the
underlying NFS server does not offer internationalization support, unpredictable failures might occur.
Solution
Always make sure that the underlying NFS server offers internationalization support. If the server does not,
use only ASCII characters.
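When you cannot confirm whether the underlying NFS server offers internationalization support, one precaution is to screen candidate datastore and virtual machine names for ASCII-only characters before creating them. The following sketch illustrates such a check; the helper name is hypothetical and is not part of any VMware tooling:

```python
def is_ascii_safe(name: str) -> bool:
    """Return True if the name contains only printable ASCII characters."""
    return all(0x20 <= ord(ch) <= 0x7E for ch in name)

# Screen names before using them on an NFS server whose
# internationalization support is uncertain.
print(is_ascii_safe("datastore01"))    # True
print(is_ascii_safe("データストア01"))   # False: contains non-ASCII characters
```

A name that fails this check is not necessarily invalid; it only indicates that the name is safe to use solely if the NFS server is known to support internationalization.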
VMkernel Log Files Contain SCSI Sense Codes
Certain VMkernel messages related to storage might contain SCSI Sense codes.
Problem
When you analyze an ESXi host's /var/log/vmkernel log files, you encounter events or error messages
that contain SCSI Sense codes.
Solution
The ability to interpret SCSI Sense codes can help you better understand problems in your storage
environment. Because the SCSI Sense code values are assigned by the T10 committee, you must consult
the T10 standards documentation to determine the meaning of the codes. This topic explains how to use
the T10 documentation to interpret the SCSI Sense codes.
Example: Interpreting SCSI Sense Codes
The following is an example of a SCSI error message that appears in the ESXi log file:
2011-04-04T21:07:30.257Z cpu2:2050)ScsiDeviceIO: 2315: Cmd(0x4124003edb00) 0x12, CmdSN 0x51 to
dev "naa.600508XXXXXXXXXXXXX" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x25 0x0
In this example, SCSI Sense codes are represented by two fields, H:0x0 D:0x2 P:0x0 and 0x5 0x25 0x0.
The first field, H:0x0 D:0x2 P:0x0, is a combination of SCSI Status codes for the three components in your
storage environment, the host, the device, and the plug-in. The SCSI Status code is used to determine the
success or failure of a SCSI command. To interpret each SCSI Status code, see
http://www.t10.org/lists/2status.htm.
NOTE Hexadecimal numbers in the T10 documentation use the NNNh format, while SCSI Sense codes in
the ESXi log files follow the 0xNNN format. For example, 0x2 = 02h.
For the status field in the example above, the interpretation is: H:0x0 D:0x2 P:0x0 =
H(host):GOOD D(device):CHECK CONDITION P(plug-in):GOOD.
The second field in a typical SCSI error message provides more detailed information about the error. It is a
combination of Sense Key (sense), Additional Sense Code (asc), and Additional Sense Code Qualifier (ascq)
parameters.
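The two fields described above can be pulled apart programmatically when you need to triage many log entries. The following sketch parses the example message and names the codes it finds; the lookup tables hold only a small, illustrative subset of the T10 assignments (consult the T10 documentation for the full lists), and the function name is hypothetical:

```python
import re

# Partial lookup tables based on T10 assignments (not exhaustive).
STATUS = {0x0: "GOOD", 0x2: "CHECK CONDITION", 0x8: "BUSY",
          0x18: "RESERVATION CONFLICT", 0x28: "TASK SET FULL"}
SENSE_KEY = {0x0: "NO SENSE", 0x2: "NOT READY", 0x3: "MEDIUM ERROR",
             0x4: "HARDWARE ERROR", 0x5: "ILLEGAL REQUEST",
             0x6: "UNIT ATTENTION", 0xB: "ABORTED COMMAND"}

def parse_scsi_error(line: str) -> dict:
    """Name the status and sense fields of a VMkernel SCSI error message."""
    status = re.search(r"H:(0x[0-9a-fA-F]+) D:(0x[0-9a-fA-F]+) P:(0x[0-9a-fA-F]+)", line)
    sense = re.search(r"sense data: (0x[0-9a-fA-F]+) (0x[0-9a-fA-F]+) (0x[0-9a-fA-F]+)", line)
    h, d, p = (int(x, 16) for x in status.groups())
    key, asc, ascq = (int(x, 16) for x in sense.groups())
    return {
        "host": STATUS.get(h, hex(h)),
        "device": STATUS.get(d, hex(d)),
        "plugin": STATUS.get(p, hex(p)),
        "sense_key": SENSE_KEY.get(key, hex(key)),
        # T10 tables list asc/ascq pairs in NNNh notation.
        "asc_ascq": f"{asc:02X}h/{ascq:02X}h",
    }

msg = ('2011-04-04T21:07:30.257Z cpu2:2050)ScsiDeviceIO: 2315: Cmd(0x4124003edb00) 0x12, '
       'CmdSN 0x51 to dev "naa.600508XXXXXXXXXXXXX" failed H:0x0 D:0x2 P:0x0 '
       'Valid sense data: 0x5 0x25 0x0')
print(parse_scsi_error(msg))
```

For the example message, this yields a GOOD host and plug-in status, a CHECK CONDITION device status, and an ILLEGAL REQUEST sense key with asc/ascq 25h/00h, which you can then look up in the T10 tables.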
Chapter 7 Troubleshooting Storage