OMV keeps losing RAID HDDs

Hi everyone,


We are using a Gigabyte board with a B450 chipset that has 6 SATA ports and 1 M.2 slot. OMV is running on the M.2 SSD. Since using the M.2 slot deactivates one of the mainboard's SATA ports, we added a 2-port PCIe SATA controller. This controller is used for one RAID 1 (md2). In addition we have two RAID 1 arrays (md0 and md1) connected to the SATA ports of the mainboard. All RAIDs are managed by mdadm and the OMV GUI.
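For reference, this is roughly how I check the array status and which controller each disk hangs off (device names like /dev/sda are just examples, not necessarily our real ones):

    # Overall status of md0/md1/md2
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # Model/serial and SATA host (HCTL) of every disk
    lsblk -o NAME,MODEL,SERIAL,HCTL,SIZE,STATE

    # Trace a disk back to its controller (onboard B450 vs. the PCIe card)
    ls -l /sys/block/sda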


Some weeks ago we had a DegradedArray event on md1. So I changed the HDD, repaired the RAID, and everything was fine. A few days ago I received an e-mail that the "filesystem flags changed to 0x1000" on md0, and got two more mails about DegradedArray events on md0 and md1. On md1 it was the freshly replaced HDD that got kicked. So I checked the old md1 HDD and it was running perfectly, which means the first md1 DegradedArray was not caused by a broken HDD.
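In case it matters, this is roughly what I am looking at to find out why the members were dropped (assuming the kernel log still covers that time frame; the partition name is a placeholder):

    # Kernel messages around the time the drives dropped out
    journalctl -k | grep -iE 'ata[0-9]|md[0-2]|link'

    # mdadm's view of the arrays and the former members
    mdadm --detail /dev/md0
    mdadm --examine /dev/sdb1   # placeholder, use the real member partition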


So I checked the currently "broken" drives from md0 and md1, and these HDDs seem fine as well: they show up in lsblk and the SMART tests pass. In addition I was wondering why this error only occurs on HDDs connected directly to the mainboard, and never on the ones behind the PCIe SATA controller.
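For completeness, this is how I checked the supposedly broken drives (/dev/sdb is just an example device name):

    # Run a long self-test, then check health, attributes and the self-test log
    smartctl -t long /dev/sdb
    smartctl -a /dev/sdb

    # The drives still show up normally
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT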


So now I think I will format the "broken" drives and re-add them to the RAIDs, but I need to know what is causing these issues, as this is a production system.
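My rough plan for re-adding them would be something like this (md0 and /dev/sdb1 are placeholders; I would double-check the device names before touching anything):

    # Remove the failed member if it is still listed
    mdadm --manage /dev/md0 --remove /dev/sdb1

    # Wipe the old RAID metadata so it rejoins as a clean member
    mdadm --zero-superblock /dev/sdb1

    # Add it back and let the mirror resync
    mdadm --manage /dev/md0 --add /dev/sdb1

    # Watch the rebuild progress
    watch cat /proc/mdstat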


Any ideas where I can start?
