3.1.4 /proc/mdstat and mdadm -D commands display incorrect statuses when two
NVMe devices are surprise removed from a RAID 5 MD array
Description: When two of the three NVMe devices are surprise removed from a RAID 5 MD array, the command
cat /proc/mdstat incorrectly displays the array status as active. Similarly, when the status of the MD
RAID is queried using the mdadm -D /dev/mdN command, the number of active and working devices
displayed is two. Only the reported status of the array is incorrect; when I/O operations are
performed, I/O errors are observed as expected.
Cause: When the number of devices that are surprise removed exceeds the number of device failures that
the array can tolerate, the MD status is not updated.
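The following sketch shows how the behavior can be observed with standard mdadm commands. The device
names /dev/nvme0n1, /dev/nvme1n1, and /dev/nvme2n1 and the array name /dev/md0 are illustrative
placeholders only.
# Create a three-device RAID 5 MD array (device and array names are examples)
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
# After two member devices are surprise removed, query the array status
cat /proc/mdstat
mdadm -D /dev/md0
# The reported state can still read as active; issuing I/O confirms that the array
# is no longer functional (I/O errors are expected)
dd if=/dev/md0 of=/dev/null bs=1M count=1 iflag=direct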
3.2 Red Hat Enterprise Linux 8.2
3.2.1 Dmesg displays error messages when an NVMe device is surprise removed
Description: Dmesg or /var/log/messages show the following error messages after an NVMe device is
unbound from the NVMe driver and surprise removed:
kernel: pcieport 0000:b0:06.0: Timeout waiting for Presence Detect
kernel: pcieport 0000:b0:06.0: link training error: status 0x8001
kernel: pcieport 0000:b0:06.0: Failed to check link status
This is a cosmetic issue and can be ignored.
Applies to: Red Hat Enterprise Linux 8.2 and later
Cause: The errors are displayed because of an issue with the pciehp driver.
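For reference, the unbind step that precedes the surprise removal is typically performed through sysfs.
The sketch below is illustrative only; the controller name nvme0 and the PCI address 0000:b1:00.0 are
assumptions that must be replaced with the values of the device being removed.
# Find the PCI address of the controller behind /dev/nvme0 (nvme0 is an example)
readlink -f /sys/class/nvme/nvme0/device
# Unbind the device from the nvme driver before removing it
echo 0000:b1:00.0 > /sys/bus/pci/drivers/nvme/unbind
# After the surprise removal, the messages shown above can be filtered with:
dmesg | grep pcieport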
3.2.2 Status of the RAID 0 logical volume is displayed as Available when one of the
members of the RAID array is surprise removed
Description: When Logical Volume Manager (LVM) is used to create a RAID 0 array and a member of the
RAID array is surprise removed, the lvdisplay command shows the logical volume (LV) status as
‘Available’.
Solution: Use the command lvs -o +lv_health_status to check the status of the RAID array. The
command displays the output Partial when a member of the RAID array is removed.
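A minimal sketch of the scenario and the recommended check is shown below; the volume group vg0, the
logical volume lv0, and the NVMe device names are hypothetical placeholders.
# Create a two-device RAID 0 (striped) logical volume with LVM (names and size are examples)
pvcreate /dev/nvme0n1 /dev/nvme1n1
vgcreate vg0 /dev/nvme0n1 /dev/nvme1n1
lvcreate --type raid0 --stripes 2 -L 10G -n lv0 vg0
# After one member is surprise removed, lvdisplay still reports the LV as available;
# the health status field exposes the degraded state
lvs -o +lv_health_status vg0/lv0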