Release Notes
Table Of Contents
- Dell EMC PowerEdge Systems Running Red Hat Enterprise Linux 8 Release Notes
- Contents
- Release summary
- Compatibility
- New and enhanced in RHEL 8 release
- Important notes
- Fixes
- BIOS update does not complete when an update is performed using the Linux .BIN files
- Dmesg shows drm related call trace in RHEL 8.3
- Operating system crashes on servers with NVIDIA GPGPUs
- Dmesg and /var/log/messages display AMD-Vi related messages
- The status of the NetworkManager service may be inactive when RHEL 8.3 is rebooted
- Operating system crashes on AMD Rome CPU-based systems and with Intel E810 NIC
- The lvcreate command requests a response from the user when -wipesignature -yes parameters are passed
- The mdmonitor service displays an error during operating system installation
- The dmidecode utility displays the slot type as  for PCIe Gen 4 NVMe slots
- The mcelog utility logs 'only decoding architectural errors' message in /var/log/messages
- Disk drives part of MD RAID are not listed as installation destination by the installer
- Dell EMC OpenManage Storage Services utility fails to reconfigure the virtual disk
- Guest VMs with SRIOV VFs assigned take a long time to power on, and libvirt related errors are observed
- Dmesg displays Integrity Measurement Architecture (IMA) driver related-messages during system boot
- After every reboot, the network interface name changes
- Red Hat Enterprise Linux Version 8 installation wizard creates a duplicate bonding interface
- Servers with the AMD Rome processor display a CCP initialization failure message in dmesg
- PowerEdge servers with the AMD Rome processor fail to detect an NVMe drive after multiple hot plugs
- Operating system enters the dracut shell during boot
- System crashes when rebooted with SR-IOV-enabled QLogic cards
- After system reboot, Disk data format (DDF) devices are not listed in /proc/mdstat
- Updating NVMe firmware using the nvme-cli utility displays an error in dmesg
- Fatal error BDF 02:00.0 is detected with BCM574xx NICs
- NVMe devices are not detected after hot-plugging
- Linux operating system fails to detect the Intel x710 card
- Dmidecode displays OUT OF SPEC in Slot Type and Slot Length of SMBIOS system slots
- Custom partitioning fails with FC LUN
- When booting the system from iSCSI with Mellanox CX-4 and CX-5 adapters, the system reports csum failure message
- Red Hat Enterprise Linux 8 kernel panic is observed due to fatal hardware error
- Known issues
- System hangs when Intel tboot is used to boot the operating system
- The anaconda installer crashes while autoconfiguring disk partitions
- The version field in the output of the modinfo command for certain networking drivers is null
- NetworkManager may restart unexpectedly when creating more than 256 VLAN devices configured with DHCP IP
- FCoE session is not reestablished after MX9116N switch is rebooted
- Dmesg displays error messages when NVMe device is surprise removed
- Status of the RAID 0 logical volume is displayed as Available when one of the members of the RAID array is surprise removed
- /proc/mdstat and mdadm -D commands display incorrect statuses when two NVMe devices are surprise removed from a RAID 5 MD array
- Dell Controlled Turbo feature is not functional
- Caps Lock key-press is not registered on the Dell PowerEdge iDRAC virtual console
- RHEL 8.3 installer does not automatically locate the source installation repository when only inst.stage2=hd boot option is used
- The output of the systemctl status command displays the status as thawing
- Advanced Configuration and Power Interface (ACPI) error messages displayed in dmesg
- Drivers available in OEMDRV drive are not installed during the operating system installation
- The Mellanox IB devices are listed under an incorrect device category on Red Hat Enterprise Linux 8
- The lspci utility is unable to read Vital Product Data (VPD) from QLogic QLE2692 adapter
- Driver dependency mismatch errors reported while installing out-of-box drivers on Red Hat Enterprise Linux 8.x
- Dmesg displays TPM and nvdimm related-messages in Red Hat Enterprise Linux 8.1
- Link Up message is observed when the NVMe device slot is powered off and the device is unplugged
- Mellanox InfiniBand adapters are listed in Bluetooth
- iscsiadm output displays STATIC in the iface.bootproto field when the network interface is configured to DHCP
- When system reboots, system stops responding at the end of the reboot process
- Unable to shut down RHEL 8 when you select Graceful shutdown option or when you press power button on the server
- RHEL 8 does not discover FCoE LUNs connected over Broadcom BCM57XXX NICs
- iSCSI LUN not discovered during RHEL 8 installation
- RHEL 8 installation fails on systems with Emulex OneConnect card
- Switching between runlevels fails
- Limitations
- Resources and support
- Contacting Dell EMC
Dmesg displays error messages when NVMe device is surprise removed
Description: Dmesg or /var/log/messages shows the following error messages after an NVMe device is unbound from the NVMe driver and surprise removed:
kernel: pcieport 0000:b0:06.0: Timeout waiting for Presence Detect
kernel: pcieport 0000:b0:06.0: link training error: status 0x8001
kernel: pcieport 0000:b0:06.0: Failed to check link status
Applies to: Red Hat Enterprise Linux 8.2 and later
Solution: This is a cosmetic issue and can be ignored.
Cause: The errors are displayed due to an issue with the pciehp driver.
Systems affected: Dell EMC PowerEdge R740XD and Dell EMC PowerEdge R7525.
Tracking number: 180987
Status of the RAID 0 logical volume is displayed as Available when one of the members of the RAID array is surprise removed
Description: When Logical Volume Manager (LVM) is used to create a RAID 0 array and a member of the RAID array is surprise removed, the lvdisplay command shows the logical volume (LV) status as 'Available'.
Applies to: Red Hat Enterprise Linux 8.2 and later.
Solution: Use the command lvs -o +lv_health_status to check the status of the RAID array. The command displays
the output Partial when a member of the RAID array is removed.
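The lv_health_status field can also be checked programmatically. The sketch below parses the JSON report that `lvs --reportformat json -o lv_name,lv_health_status` produces and flags any LV with a non-empty health status; the sample report and LV names are illustrative, not taken from an affected system.

```python
import json

# Illustrative sample of `lvs --reportformat json -o lv_name,lv_health_status`
# output; the LV names here are hypothetical.
SAMPLE = """
{
  "report": [
    {
      "lv": [
        {"lv_name": "stripe0", "lv_health_status": "partial"},
        {"lv_name": "data", "lv_health_status": ""}
      ]
    }
  ]
}
"""

def unhealthy_lvs(report_json: str) -> list:
    """Return names of LVs whose health status field is non-empty.

    An empty lv_health_status means healthy; values such as 'partial'
    indicate that a member device of the LV is missing.
    """
    report = json.loads(report_json)
    bad = []
    for section in report["report"]:
        for lv in section.get("lv", []):
            if lv.get("lv_health_status", "").strip():
                bad.append(lv["lv_name"])
    return bad

print(unhealthy_lvs(SAMPLE))  # ['stripe0']
```

This avoids screen-scraping the columnar lvs output, which changes with locale and column width settings.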
Systems affected: All Dell EMC PowerEdge systems supporting NVMe Surprise Removal.
Tracking number: 175865
/proc/mdstat and mdadm -D commands display incorrect statuses when two NVMe devices are surprise removed from a RAID 5 MD array
Description: When two of three NVMe devices are surprise removed from a RAID 5 MD array, the command cat /proc/mdstat incorrectly displays the array status as active. Similarly, when the status of the MD RAID is queried using the mdadm -D /dev/mdN command, the number of active and working devices displayed is two. Only the reported array status is incorrect; when I/O operations are performed, I/O errors are observed as expected.
Applies to: Red Hat Enterprise Linux 8.2 and later.
Cause: When the number of devices that are surprise removed exceeds the number of devices that are required
for the array to function, the MD status is not updated.
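Independent of the textual "active" status, /proc/mdstat also reports a `[configured/working]` device count for each array. The sketch below parses that field to detect degraded arrays; the sample mdstat content is illustrative, and, per this known issue, the counts themselves may be stale after a surprise removal that exceeds the array's redundancy, so treat the result as a hint rather than ground truth.

```python
import re

# Illustrative /proc/mdstat snippet for a three-device RAID 5 array;
# "[3/1]" and "[U__]" indicate only one member is still working.
SAMPLE_MDSTAT = """\
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 nvme2n1[2] nvme1n1[1] nvme0n1[0]
      209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/1] [U__]

unused devices: <none>
"""

def degraded_arrays(mdstat: str) -> dict:
    """Map array name -> (configured, working) for arrays missing members."""
    result = {}
    name = None
    for line in mdstat.splitlines():
        header = re.match(r"^(md\d+)\s*:", line)
        if header:
            name = header.group(1)  # remember the array this detail line belongs to
            continue
        counts = re.search(r"\[(\d+)/(\d+)\]", line)
        if name and counts:
            configured, working = int(counts.group(1)), int(counts.group(2))
            if working < configured:
                result[name] = (configured, working)
            name = None
    return result

print(degraded_arrays(SAMPLE_MDSTAT))  # {'md0': (3, 1)}
```

On an affected system, reading the file directly (`open("/proc/mdstat").read()`) and performing a small test I/O gives a more reliable signal than the status string alone.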
Systems affected: All Dell EMC PowerEdge systems supporting NVMe Surprise Removal.