Release Notes
Table Of Contents
- VMware vSphere 6 on Dell EMC PowerEdge Servers Release Notes
- Overview
- Known issues
- Data integrity issue occurs when deleting virtual disks on ESXi system with PERC9 controller in RAID mode
- qedf driver data integrity may fail at 256 KB block size
- Uninstallation of Dell EMC OpenManage VIB fails to delete certain files
- IPMI driver stack may stop responding when iDRAC hard reset is performed
- Dell EMC PowerEdge Servers with VSAN All-Flash configuration and deduplication enabled display a checksum error
- After installing ESXi OS, ACPI error messages are displayed in the VMkernel log
- PowerEdge 14th generation servers installed with ESXi are configured with default login credentials
- By default, VMFS datastore is disabled on Dell EMC 14th generation PowerEdge Servers with factory-installed VMware ESXi on BOSS-S1
- Scratch partition stops working after hardware or software iSCSI is enabled on ESXi with the scsi-be2iscsi Emulex driver
- In ESXi Hardware Status, vmnics displays Battery status
- In ESXi host client, physical NICs report duplicate entries of the supported speed
- Embedded Host Client or vCenter Server reports an error when configuring SR-IOV
- Virtual machines fail to power on, when System BIOS has MMIO set to 56 TB with Network Controllers enabled with NPAR or NPAREP and SR-IOV
- ESXi 6.0 U3 upgrade to ESXi 6.5 fails with UEFI Secureboot enabled
- QLE2692 Network controller card is listed as QLogic Corp 2700
- Operating system reinstallation on top of an existing ESXi installation on a BOSS device fails
- NUMA related warning message is reported in VMkernel logs when Dell Fault Resilient Memory is enabled
- iDRAC does not report the operating system information
- Vendor label for certain hard drives is reported as NVMe
- When iSM VIB is running on ESXi 6.0 U2, syslog file displays error messages intermittently
- IPMI driver stack may stop responding when iDRAC hard reset is performed
- Fault tolerance feature is not supported on AMD 63xx series processor
- vSphere web client displays incorrect Service Tag for Dell EMC PowerEdge blade servers
- Unable to upgrade VMware ESXi when the ESXi partition table contains a coredump partition
- Configuring NVMe devices as passthrough device to the guest operating system, ESXi host stops responding and results in PSOD
- Power supply unit status and details are displayed incorrectly in vSphere Web Client or vCenter Server
- Temperature status of the processor may display incorrectly in vSphere Web Client or vCenter Server
- Storage-related sensor details are not available in vSphere Web Client or vCenter Server
- Dell EMC PowerEdge Express Flash NVMe PCIe SSD device is not detected during hot-plug
- VMs configured with Fault Tolerance might not be in a protected state
- The status of LUNs or disks is displayed as degraded
- Dual port Mellanox card displays incorrect vmnic number
- VMware ESXi installer lists local LUNs under the Remote section
- Incorrect name for Dell PowerEdge FD332 storage controller
- Software RAID is not supported for VMware ESXi
- Status of some of the PCI devices is listed as Unknown on vCenter server
- PSU wattage is not displayed for an ESXi host on the vCenter Server
- ESXi Direct Console User Interface displays the hardware label as N/A
- ESXi DCUI displays less memory than the total
- ESXi installation may fail while deploying from virtual media
- Unable to boot ESXi 6.0 with Intel X710 devices
- ESXi 6.0 host does not function and results in Purple Screen of Death
- VMware ESXi host periodically disconnects and reconnects from vCenter Server during heavy load on the storage subsystem
- Unable to write vmkernel coredump to local LUN when PSOD occurs
- ESXi console displays an error message
- vmkernel log file displays an error message
- Unable to turn on Windows virtual machine when Dell PowerEdge Express Flash NVMe PCIe SSD is directly connected as a passthrough device
- Cannot enable SR-IOV on Intel X520 adapter using vSphere web client
- The PCI passthrough section on vSphere client or vCenter server does not display Dell PowerEdge Express Flash NVMe PCIe SSD
- Related information for virtualization solutions
- Getting help
• vmkernel log file displays an error message
• Unable to turn on Windows virtual machine when Dell PowerEdge Express Flash NVMe PCIe SSD is directly connected as a passthrough device
• Cannot enable SR-IOV on Intel X520 adapter using vSphere web client
• The PCI passthrough section on vSphere client or vCenter server does not display Dell PowerEdge Express Flash NVMe PCIe SSD
Data integrity issue occurs when deleting virtual disks on ESXi system with PERC9 controller in RAID mode
Description: On Dell EMC PowerEdge systems running ESXi 6.0.x, a data integrity issue occurs on PERC9 controllers
in RAID mode under the following conditions:
● PERC H730, H730P, H830, FD332xS, or FD332xD controller
● ESXi OS running PERC driver 7.x
● Three or more virtual disks (RAID arrays) are configured
● A virtual disk is removed using a delete command, or all physical disks are manually removed from an array, resulting in virtual disk (VD) failure or removal
Applies to: ESXi 6.0.x
Solution: Download and install the ESXi 6.0.x image with PERC driver v7.703.18.00.
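Before and after applying the updated image, the installed PERC driver version can be verified from the ESXi shell. The following is a minimal sketch; it assumes the PERC9 controllers use the lsi_mr3 driver module (packaged as the lsi-mr3 VIB), which may differ on a given image:
  # List installed driver VIBs and filter for the PERC9 (lsi-mr3) driver package
  esxcli software vib list | grep -i mr3
  # Show details of the loaded module, including its version
  esxcli system module get -m lsi_mr3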
qedf driver data integrity may fail at 256 KB block size
Description: On Dell EMC PowerEdge systems running ESXi 6.0.x with the QLogic FCoE driver (qedf), data integrity
may fail at a 256 KB block size for I/O workloads executed from a Linux VM.
Applies to: ESXi 6.0.x
Solution: Update to the Dell EMC customized VMware ESXi 6.5 U1 A11 image.
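To confirm whether a host is already running the fixed image, the qedf driver VIB and the ESXi build can be checked from the ESXi shell. A minimal sketch, assuming the driver package appears under the name qedf in the VIB list:
  # Check the installed QLogic FCoE (qedf) driver VIB and its version
  esxcli software vib list | grep -i qedf
  # Confirm the version and build of the running ESXi image
  vmware -v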
Uninstallation of Dell EMC OpenManage VIB fails to delete certain files
Description: Specific Dell EMC OpenManage files and directories are not deleted after uninstalling the Dell EMC OpenManage VIB.
Applies to: ESXi 6.0.x
Solution: There is no functionality loss. Reboot the system for complete cleanup.
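For reference, the VIB removal and the recommended reboot can be performed from the ESXi shell. A minimal sketch, assuming the VIB is registered under the name OpenManage; verify the exact name reported on the host before removing it:
  # Confirm the exact name of the installed OpenManage VIB
  esxcli software vib list | grep -i openmanage
  # Remove the VIB (OpenManage is an assumed name; use the name reported above)
  esxcli software vib remove -n OpenManage
  # Reboot the host so that the remaining files and directories are cleaned up
  reboot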
IPMI driver stack may stop responding when iDRAC hard reset is performed
Description: On Dell EMC PowerEdge systems, the IPMI driver stack stops responding when an iDRAC hard reset is performed.
Applies to: ESXi 6.0.x
Solution: This is a known issue. Complete the following workaround steps to resolve the issue:
1. Stop all the applications that use the IPMI stack by running the command /etc/init.d/sfcbd-watchdog stop.
2. To unload the drivers, run vmkload_mod -u ipmi_si_drv.