It appeared that I may have achieved some success, as the array was rebuilding with 5 of the 6 drives. But that has now failed.
The array was operational with 4 of the 6 devices: sdb, sdd, sde, and sdf.
I used sudo smartctl -i /dev/sda and sudo smartctl -i /dev/sdc to confirm that these devices were not in the array, by comparing serial numbers against those shown in the browser control panel, and then ran smartctl short tests on each of sda and sdc; both showed zero errors.
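As a sketch, the serial-number check can be scripted by pulling the Serial Number field out of the smartctl output. The drive model and serial below are hypothetical placeholders, not values from the real drives:

```shell
# Extract the serial number from `smartctl -i` output so it can be
# compared against the serial shown in the control panel.
# Hypothetical sample output; on the real system use:
#   serial=$(sudo smartctl -i /dev/sda | awk -F': *' '/^Serial Number/{print $2}')
sample='Device Model:     WDC WD80EFAX-68KNBN0
Serial Number:    WD-ABC123456789
Firmware Version: 81.00A81'

serial=$(printf '%s\n' "$sample" | awk -F': *' '/^Serial Number/{print $2}')
echo "$serial"
```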
I used wipefs -a /dev/sda and wipefs -a /dev/sdc to clear the drives.
I used mdadm /dev/md127 --add /dev/sda (and the same for sdc) to add the devices, and the file system is again showing as missing.
I suspect that I should have used --re-add instead of --add.
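For reference, the difference matters: --re-add tries to slot a device back into its old position using the superblock still on the disk (and, with a write-intent bitmap, may only resync the dirty blocks), while --add treats the device as a brand-new spare and triggers a full rebuild. After wipefs, however, --re-add can no longer work, because the superblock it relies on has been erased. A dry-run sketch of the two commands (the run wrapper just prints instead of executing, since mdadm needs root and real devices):

```shell
# Dry-run wrapper: prints each command instead of executing it.
# Drop the wrapper and run as root on the real system.
run() { echo "+ $*"; }

# --re-add: only works while the member's old superblock is intact;
# with a valid write-intent bitmap it can resync just the changed blocks.
run mdadm /dev/md127 --re-add /dev/sda

# --add: adds the device as a new spare and forces a full rebuild.
run mdadm /dev/md127 --add /dev/sda
```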
~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : inactive sdc[6](S) sde[4](S) sdf[5](S) sda[7](S) sdd[3](S) sdb[1](S)
46883366928 blocks super 1.2
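For what it's worth, the (S) after every member in that /proc/mdstat line means the kernel is holding them all as spares and the array is inactive, i.e. nothing has actually been assembled; the blocks figure is the raw 1 KiB-block total across members, not usable capacity. A small parse of that status line (sample copied from the output above):

```shell
# Count members flagged (S) (spare) in an mdstat device line.
# Sample line copied from the /proc/mdstat output above.
line='md127 : inactive sdc[6](S) sde[4](S) sdf[5](S) sda[7](S) sdd[3](S) sdb[1](S)'

# Split into tokens and count those ending in (S).
spares=$(printf '%s\n' "$line" | tr ' ' '\n' | grep -c '(S)$')
echo "$spares"
```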
~# mdadm --examine --scan -v
ARRAY /dev/md/OMV136R629 level=raid6 metadata=1.2 num-devices=6 UUID=89becd69:b5a4761c:30e77a30:c3b6a22f name=OMV136.local:OMV136R629
devices=/dev/sda,/dev/sdf,/dev/sdd,/dev/sde,/dev/sdc,/dev/sdb
I stopped the array and forced reassembly. It has now started with 4 drives, with 1 drive rebuilding:
~# mdadm --stop /dev/md127
mdadm: stopped /dev/md127
~# sudo mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdef]
mdadm: looking for devices for /dev/md127
mdadm: /dev/sda is identified as a member of /dev/md127, slot 2.
mdadm: /dev/sdb is identified as a member of /dev/md127, slot 1.
mdadm: /dev/sdc is identified as a member of /dev/md127, slot 0.
mdadm: /dev/sdd is identified as a member of /dev/md127, slot 3.
mdadm: /dev/sde is identified as a member of /dev/md127, slot 4.
mdadm: /dev/sdf is identified as a member of /dev/md127, slot 5.
mdadm: forcing event count in /dev/sde(4) from 24440 upto 24442
mdadm: added /dev/sdb to /dev/md127 as 1
mdadm: added /dev/sda to /dev/md127 as 2 (possibly out of date)
mdadm: added /dev/sdd to /dev/md127 as 3
mdadm: added /dev/sde to /dev/md127 as 4
mdadm: added /dev/sdf to /dev/md127 as 5
mdadm: added /dev/sdc to /dev/md127 as 0
mdadm: /dev/md127 has been started with 4 drives (out of 6) and 1 rebuilding.
Querying the array shows device sda as removed, although the output above shows that it was added:
~# sudo mdadm --query --detail /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Fri Jun 26 15:13:28 2020
Raid Level : raid6
Array Size : 31255576576 (29807.64 GiB 32005.71 GB)
Used Dev Size : 7813894144 (7451.91 GiB 8001.43 GB)
Raid Devices : 6
Total Devices : 5
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Aug 27 16:39:23 2021
State : clean, degraded, recovering
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Rebuild Status : 2% complete
Name : OMV136.local:OMV136R629
UUID : 89becd69:b5a4761c:30e77a30:c3b6a22f
Events : 24637
Number Major Minor RaidDevice State
6 8 32 0 spare rebuilding /dev/sdc
1 8 16 1 active sync /dev/sdb
- 0 0 2 removed
3 8 48 3 active sync /dev/sdd
4 8 64 4 active sync /dev/sde
5 8 80 5 active sync /dev/sdf
The browser control panel shows the file system as online, and the RAID management page shows the array as clean, degraded, rebuilding, with 5 of the 6 drives.
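My reading of the assembly log above (offered as interpretation, not certainty): --force was willing to bump sde's event count from 24440 to 24442 because it was only slightly behind, but sda was flagged "(possibly out of date)", meaning its events were too far behind to trust, so mdadm left it out of the started array. That is consistent with --detail reporting Total Devices : 5 and slot 2 as removed. Slots in that state can be spotted by parsing the device table (sample rows copied in shape from the output above):

```shell
# Parse a `mdadm --detail` device table and report slots whose
# state column reads "removed". Sample rows copied from above.
detail='   6       8       32        0      spare rebuilding   /dev/sdc
   1       8       16        1      active sync   /dev/sdb
   -       0        0        2      removed
   3       8       48        3      active sync   /dev/sdd'

# Column 4 is the RaidDevice (slot) number, column 5 the first state word.
removed=$(printf '%s\n' "$detail" | awk '$5 == "removed" {print "slot " $4}')
echo "$removed"
```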
PROBLEM
Now, after reaching 6.8%, the rebuild appears to have stalled. The RAID management page on the browser control panel only shows 4 devices (sdb, sdc, sdd and sdf), and the estimated finish time has extended from 610 minutes to about 15,000 minutes. Device sdc was the device rebuilding, and device sde now appears to be missing.
The monitor connected to the NAS shows scrolling output containing the text: md: super_written gets error=10
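For what it's worth, super_written is the md completion handler for metadata writes, and in recent kernels the number it prints is a blk_status_t; to my understanding 10 corresponds to BLK_STS_IOERR, i.e. the kernel failed to write the md superblock to one of the members (treat that mapping as an assumption, not a certainty). A quick way to pull those lines out of a kernel log (the sample log text below, including the sde line, is hypothetical):

```shell
# On the real system: dmesg | grep -E 'super_written|sd[a-f]'
# Hypothetical sample kernel log text:
log='md: super_written gets error=10
sd 4:0:0:0: [sde] tag#0 FAILED Result: hostbyte=DID_BAD_TARGET
md: super_written gets error=10'

count=$(printf '%s\n' "$log" | grep -c 'super_written gets error')
echo "$count"
```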
The RAID array and the file system are both missing, and I now have the following error:
~# mdadm --examine -v /dev/md127
mdadm: No md superblock detected on /dev/md127.
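One note on that last command (an observation about how mdadm works, not a diagnosis of the array): mdadm --examine reads the metadata stored on member devices such as /dev/sda, while the assembled array device /dev/md127 is queried with --detail, so "No md superblock detected on /dev/md127" is expected from --examine even on a healthy system. Examining each member individually is the informative check. A dry-run sketch (the run wrapper prints instead of executing):

```shell
# Dry-run wrapper: prints each command instead of executing it.
# Drop the wrapper and run as root on the real system.
run() { echo "+ $*"; }

# --examine reads the on-disk md superblock of each *member* device;
# compare the Events and Array State lines across members.
for dev in a b c d e f; do
    run mdadm --examine "/dev/sd$dev"
done
```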
QUESTION
What is happening, and what action do I now need to take to fix this?
Thank you