Interesting, I've never seen that one before. I'm assuming that by cloning the drives one is telling mdadm that there are no bad blocks on those drives?
You only have to do that if the array is active (auto-read-only)
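If you're not sure whether your array is sitting in that state, a quick check looks roughly like this (a sketch; md127 is just the device name from this thread, substitute your own):

```shell
# An assembled array that hasn't been written to yet shows up in
# /proc/mdstat as "active (auto-read-only)"
cat /proc/mdstat

# The first write switches it to read-write automatically,
# or you can force the transition explicitly:
mdadm --readwrite /dev/md127
```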
What you've done is an interesting way of solving your problem.
Giving this an update: there were still bad blocks after cloning the drives. They seem to have been logical bad blocks, or whatever you call them. The array was not mounting when that command finished. journalctl was showing several corrupt files and that the filesystem was inconsistent, which I had to try to recover with fsck. After this I could only recover about 60% of the data; most files were corrupted.
After this I thought that maybe the cloning was not so successful. Having 60% of the data back was better than 0, but I thought, OK, let me try this with the original drives, including the WD Red (which apparently is an SMR drive and therefore not suitable for any NAS RAID. According to the experts, you should avoid these, especially a mix of them). Read this
So I again ran mdadm --assemble --update=force-no-bbl --force /dev/md127 /dev/sda /dev/sdd /dev/sde /dev/sdc and assembled the array as before, but ignoring the bad-block lists. Now I think I have all the data intact and the RAID is healthy. No more filesystem corruption or need to run fsck.
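For anyone trying to reproduce this, the overall sequence looks roughly like the following. This is a sketch based on what I did; the device names are from my box, and you should inspect the per-device bad-block lists first so you know what you're about to discard:

```shell
# Stop the array if it was auto-assembled at boot
mdadm --stop /dev/md127

# Inspect the bad-block list recorded in each member's superblock.
# In my case these turned out to be stale entries, not real media defects.
mdadm --examine-badblocks /dev/sda
mdadm --examine-badblocks /dev/sdd
mdadm --examine-badblocks /dev/sde
mdadm --examine-badblocks /dev/sdc

# Reassemble, dropping the recorded bad-block lists
mdadm --assemble --update=force-no-bbl --force /dev/md127 \
    /dev/sda /dev/sdd /dev/sde /dev/sdc

# Verify the array state before mounting anything
mdadm --detail /dev/md127
cat /proc/mdstat
```

Note that --update=force-no-bbl only removes md's record of bad blocks; it does nothing about genuine media errors, so a long SMART self-test on each drive afterwards is a sensible sanity check.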
I'm going to back everything up to an 8TB drive plus some 500GB drives that I have lying around, and then rebuild the entire array, probably with only the Toshiba N300 drives.
root@openmediavault:/srv/dev-disk-by-uuid-62ddec69-f2d5-49bb-b83e-02b352912183/downs# mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Sat Apr  7 18:19:22 2018
        Raid Level : raid5
        Array Size : 15627548672 (14903.59 GiB 16002.61 GB)
     Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
      Raid Devices : 5
     Total Devices : 5
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Mar  6 10:32:47 2023
             State : clean
    Active Devices : 5
   Working Devices : 5
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : helios4:Helios4
              UUID : b6d959d8:96a3a0e6:29da39c1:15407b98
            Events : 58970

    Number   Major   Minor   RaidDevice State
       6       8       80        0      active sync   /dev/sdf
       5       8       48        1      active sync   /dev/sdd
       7       8       32        2      active sync   /dev/sdc
       4       8       16        3      active sync   /dev/sdb
       8       8        0        4      active sync   /dev/sda
I hope that my mistakes and the time I lost with this save someone else from the same hassle.