I had an HDD failure in my RAID 1 consisting of /dev/sdb and /dev/sdc. The corresponding RAID array, named "Datenspiegel1" in OMV, went into the state "clean, degraded", but the RAID volume filesystem /dev/md127 stayed online and working with only the /dev/sdb HDD left.
I replaced the defective /dev/sdc with a new HDD of the same type and size and recovered the array via the OMV RAID management.
After the full rebuild the RAID 1 array went back into the state "clean", so everything seemed to be fine again; the filesystem on /dev/md127 was online and in sync.
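For context, as far as I understand it, the OMV "Recover" action is roughly equivalent to the following mdadm commands (just a sketch; the device name /dev/sdc assumes the replacement disk enumerated under the old name):

# Add the replacement disk to the degraded array; mdadm then starts the resync
mdadm --manage /dev/md127 --add /dev/sdc
# Watch the rebuild progress
cat /proc/mdstat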
As a precaution, I now also wanted to replace the older /dev/sdb HDD. But as soon as I shut down the system and replaced the /dev/sdb HDD with an empty HDD of the same type and size, the RAID 1 entry completely disappeared from the OMV RAID management after rebooting, and the filesystem /dev/md127 is shown as "Missing", although both disks /dev/sdb and /dev/sdc are recognized and online.
How can I fix this so that I can recover the RAID 1 array with the second new HDD?
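In case it helps, these are the checks I can run and post the output of (my own diagnostic list, assuming the disks still enumerate as /dev/sdb and /dev/sdc):

# Kernel view of all md arrays; the missing array should be absent here
cat /proc/mdstat
# Arrays mdadm can currently see
mdadm --detail --scan
# md superblocks on the member disks; the factory-new /dev/sdb should
# report "No md superblock detected"
mdadm --examine /dev/sdb /dev/sdc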
You will find the RAID array details below (output of mdadm --detail):
Version : 1.2
Creation Time : Fri Oct 14 16:01:40 2016
Raid Level : raid1
Array Size : 3906887488 (3725.90 GiB 4000.65 GB)
Used Dev Size : 3906887488 (3725.90 GiB 4000.65 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Feb 3 15:57:43 2023
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : nas1:Datenspiegel1 (local to host nas1)
UUID : fb9463db:f41487d2:1cc69691:4f4b3deb
Events : 15362
Number   Major   Minor   RaidDevice State
   0       8       16        0      active sync   /dev/sdb
   2       8       32        1      active sync   /dev/sdc
I do not know whether this is just a coincidence, but after the first rebuild following the initial HDD failure, I received a strange "SparesMissing event on /dev/md/Datenspiegel1" message once per day via mail notification (I assume from a cron job).
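If I understand it correctly, OMV is Debian-based and that daily mail comes from the mdadm cron script, which re-checks all arrays once a day:

# /etc/cron.daily/mdadm (shipped by the Debian mdadm package) essentially runs:
mdadm --monitor --scan --oneshot
# --oneshot checks all arrays once, reports events such as SparesMissing
# or DegradedArray by mail, and exits, hence exactly one mail per day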
I checked /etc/mdadm/mdadm.conf and found a "spares=1" entry for the respective array, although the OMV frontend itself reported spares=0 for that array.
Following a forum thread I found here, I corrected /etc/mdadm/mdadm.conf, and the message has not appeared again, even after several reboots.
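For completeness, the edit boiled down to removing the stale spares=1 tag from the ARRAY line (reconstructed here from the name and UUID shown above; the metadata tag is an assumption based on Version 1.2):

# before:
ARRAY /dev/md/Datenspiegel1 metadata=1.2 spares=1 name=nas1:Datenspiegel1 UUID=fb9463db:f41487d2:1cc69691:4f4b3deb
# after:
ARRAY /dev/md/Datenspiegel1 metadata=1.2 name=nas1:Datenspiegel1 UUID=fb9463db:f41487d2:1cc69691:4f4b3deb
# as I understand it, update-initramfs -u should follow any mdadm.conf edit
# so that the copy embedded in the initramfs matches
update-initramfs -u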