Release Notes
Table of Contents
- VMware vSphere 7.x on Dell EMC PowerEdge Servers Release Notes
- Contents
- Release summary
- Compatibility
- New and enhanced in VMware vSphere 7.0 release
- Known issues
- vLCM upgrade fails when Mellanox-nmlx4 Async 3.19.70.1 driver is installed
- Host seeding feature fails to perform when Dell EMC customized VMware 7.0 U2 A00 image is installed
- vLCM upgrade with Dell EMC 7.0 U1-A03 Custom Addon file with Intel-icen Async driver 1.4.2.0 fails
- VMware ESXi 7.0 U2 A00 fails to boot on upgrading from earlier ESXi version using custom offline bundles
- An ESXi host with Solarflare devices may experience PSOD during reboot post enabling SR-IOV
- Persistent memory datastore is not listed under vCenter
- iDRAC web interface displays incorrect status with ESXi inbox native driver
- Configuration changes made are not persistent across hosts that are restarted
- Operating system takes more time to complete boot when Intel DCPMM is configured in the interleaved app-direct mode
- All vCPUs assigned to a virtual machine are not visible in the guest operating system
- The Dell iSM and SFCB services fail to start after DCUI restarts the management agents
- Hostd core dump files are generated in /var/core on the VMware ESXi 7.0 host
- Virtual machine fails to start when Secure Encrypted Virtualization is enabled
- Virtual machine with Secure Encrypted Virtualization enabled uses only one VMware vCPU
- NVDIMM battery health status is displayed incorrectly on vCenter web client and ESXi host web client
- The health status of the SD cards is not updated under the Hardware Health tab
- Network connectivity is lost when virtual machines configured with Virtual Guest Tagging (VGT) are migrated to another ESXi host
- The Hardware Health tab of the vCenter Web Client does not display any information when a drive is removed
- Status of the NVMe devices configured as a host cache device is not updated after surprise removal
- Upgrading to ESXi 7.0 from earlier ESXi version fails
- VMware ESXi 7.0 does not boot after upgrading from VMware ESXi 6.5.x
- Upgrading to ESXi 7.0 from earlier ESXi version fails due to system management software incompatibility
- Installation of ESXi 7.0 fails on PowerEdge modular servers with BCM 5719 or 5720
- Surprise hot-plug of NVMe device is not supported
- Surprise hot-plug of an NVMe in a vSAN environment results in a PSOD
- The esxcli commands fail to locate the non-NVMe storage devices
- The option to start the NTP service does not work post configuration
- The driver modules qedf and qedi are not available for the QLogic 412xx adapter
- Unable to enable SR-IOV on Solarflare devices with ESXi 7.0
- The rdma module for BCM 57416 devices fails to load in ESXi 7.0
- Uninstalling Dell EMC iSM VIB fails to delete files
- FCoE adapter speed is displayed as 0
- Status of the FCoE controller is displayed as offline
- The NIC description is incorrect
- The hardware label for PCIe devices is shown as Not appropriate
- Description of the NVMe drive displayed is incorrect
- Resources and support
- Contacting Dell EMC
Operating system takes more time to complete boot when Intel DCPMM is configured in the interleaved app-direct mode
Description: Dell EMC PowerEdge servers with the VMware ESXi operating system installed take more time to
complete boot when Intel® Optane™ DC Persistent Memory (DCPMM) is configured in the interleaved
app-direct mode. The time taken to complete the boot depends on the capacity, the configuration, and
the number of sockets on the server. Messages such as Attempt to allocate zero bytes,
allocating 1 byte in the VMkernel log can be ignored (see the illustrative check after this entry).
A similar issue, in which the VMware ESXi operating system takes longer to complete boot when DIMMs
are configured in non-interleaved sets, is also documented. For more information, see the
Dell EMC DCPMM User's Guide.
Applies to: ESXi 7.0.x
Systems affected: Dell EMC PowerEdge servers that support Persistent Memory (PMEM)
Tracking number: 174273
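The benign allocation messages can be reviewed in the VMkernel log after the host finishes booting. The
following commands are a minimal illustrative sketch only, assuming shell or SSH access to the ESXi host and
the default log location /var/log/vmkernel.log; they are not part of the documented behavior or resolution.
# Count the benign allocation messages that were logged during boot
grep -c "Attempt to allocate zero bytes" /var/log/vmkernel.log
# Review the individual messages with their timestamps
grep -i "allocate zero bytes" /var/log/vmkernel.log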
All vCPUs assigned to a virtual machine are not visible in the guest operating system
Description:
The guest operating system does not display all the vCPUs that are assigned to it when Secure Encrypted
Virtualization (SEV) is enabled on the ESXi 7.0 U1 host. Only one vCPU is displayed in the guest
operating system (see the illustrative check after this entry). Errors similar to the following are
displayed in the vmware.log file:
[ 10.236141] smpboot: do_boot_cpu failed(-1) to wakeup CPU#1
[ 20.236141] smpboot: do_boot_cpu failed(-1) to wakeup CPU#2
[ 30.236140] smpboot: do_boot_cpu failed(-1) to wakeup CPU#3
[ 40.236142] smpboot: do_boot_cpu failed(-1) to wakeup CPU#4
[ 50.236139] smpboot: do_boot_cpu failed(-1) to wakeup CPU#5
[ 60.236143] smpboot: do_boot_cpu failed(-1) to wakeup CPU#6
[ 70.236139] smpboot: do_boot_cpu failed(-1) to wakeup CPU#7
Applies to: ESXi 7.0 U1
Systems affected: All Dell EMC PowerEdge yx5x servers with AMD processors that support Secure Encrypted
Virtualization (SEV).
Tracking number: 180679
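Because the smpboot messages originate from the guest kernel, the mismatch can also be confirmed from inside
a Linux guest by comparing the number of CPUs that the kernel brought online with the number of vCPUs
configured for the virtual machine. The commands below are a minimal illustrative sketch for a Linux guest
and are not part of the documented issue.
# Inside the Linux guest: number of CPUs that the kernel brought online (reports 1 when the issue occurs)
nproc
# Inside the Linux guest: list the CPU wakeup failures reported during guest boot
dmesg | grep "do_boot_cpu failed"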
The Dell iSM and SFCB services fail to start after DCUI restarts the management agents
Description:
The Dell iDRAC Service Module (iSM) and the Small Footprint CIM Broker (SFCB) services fail to start on
ESXi 7.0 hosts after the management agents are restarted from the ESXi Direct Console User Interface
(DCUI).
Applies to: ESXi 7.0.x
Workaround: Restart the management agents again from the ESXi DCUI, or restart the SFCB service from the
command line to reestablish communication between the Dell iSM and the Integrated Dell Remote Access
Controller (iDRAC).
To restart the SFCB service, follow these steps (an illustrative command sketch is shown after the steps):
1. Log in to the ESXi Shell or connect through SSH as the root user. For more information about enabling
the ESXi Shell or SSH, see VMware Knowledge Base article 2004746.
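As an illustration only, restarting the SFCB service from the ESXi Shell typically uses the sfcbd-watchdog
init script. The commands below are a sketch under that assumption and do not replace the documented steps.
# Check the current state of the SFCB service
/etc/init.d/sfcbd-watchdog status
# Restart the SFCB service so that communication between the Dell iSM and iDRAC can be reestablished
/etc/init.d/sfcbd-watchdog restart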