I had a disk failure on my RAID5 recently, while I was on holiday with no spare time to deal with it. The array had been running in degraded mode on 3 disks since then, and it was still working fine until a second event occurred yesterday:
I don't really understand why I received this message, as the device seems OK: the HP Gen8 RAID controller does not report any error on it and the SMART tests are good. At that point I shut down the server, physically unplugged the first drive (the one that was actually faulty) and restarted. Now the RAID5 is marked as inactive and is no longer visible in OMV, although the three remaining physical disks do show up in the GUI.
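For reference, here is roughly what I used to check the state after the reboot (the md device name /dev/md2 is just my guess from the XPENOLOGY:2 label and may differ):

cat /proc/mdstat          # the array is listed there as inactive
mdadm --detail /dev/md2   # detail of the array as the kernel currently sees it
blkid                     # check which devices still carry a linux_raid_member signature

blkid gives the following: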
/dev/sdb: UUID="394b8163-356b-6262-52f6-72d24c3bc33f" UUID_SUB="57584ed1-3398-6977-2e38-530ef2b968e2" LABEL="XPENOLOGY:2" TYPE="linux_raid_member"
/dev/sdc1: UUID="e2ec8542-ea1a-e93a-3017-a5a8c86610be" TYPE="linux_raid_member" PARTUUID="3daedda2-ab78-4a06-b185-c980acdfe091"
/dev/sdc2: UUID="d5d5bcfe-89b5-e8c4-2cf3-e5bf6a1edd70" TYPE="linux_raid_member" PARTUUID="42e1c209-98ba-48ee-9afc-247152902ac1"
/dev/sdc5: UUID="394b8163-356b-6262-52f6-72d24c3bc33f" UUID_SUB="fe533fdf-afea-192c-5fd5-11c6102a15ca" LABEL="XPENOLOGY:2" TYPE="linux_raid_member" PARTUUID="dec4d9c2-d2ec-4513-b951-1bb8771fc52f"
/dev/sr0: UUID="2015-06-29-06-52-36-00" LABEL="OpenMediaVault" TYPE="iso9660" PTUUID="78a03a04" PTTYPE="dos"
My configuration:
My RAID5 consists of 4 × 6 TB drives; one of them is faulty and has been physically removed (formerly /dev/sde).
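The surviving members still have their superblocks, so the original geometry and the event counters can be read back from them, for example (I'm using the member names visible in the blkid output above; /dev/sdd5 stands in for the third remaining member and may be named differently):

mdadm --examine /dev/sdb /dev/sdc5 /dev/sdd5 | grep -E 'Level|Raid Devices|Chunk|Events|Role'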
What I've tried so far:
I thought forcing the assembly would do the trick, especially given the small difference in the event counters (26486 vs 26567), but no luck with that.
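What I tried was essentially a forced assembly, along these lines (again with /dev/md2 and the member names as placeholders):

mdadm --stop /dev/md2                                            # stop the inactive array first
mdadm --assemble --force /dev/md2 /dev/sdb /dev/sdc5 /dev/sdd5   # --force accepts members despite the event counter mismatch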
What's the next step, then? I've read that the --assume-clean switch could work, but I'm not sure about that. Would it be a good idea to run mdadm --zero-superblock /dev/sdb?
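From what I've read, the --assume-clean route would mean re-creating the array in place over the surviving members with exactly the original parameters, so that new superblocks are written but no resync is started, something like this (all the values below are placeholders and would have to be taken from mdadm --examine on the surviving members; the device order must match the original device roles, with "missing" in the slot of the removed disk):

# DANGEROUS: only correct if level, chunk size, metadata version, data offset and device order all match the original
mdadm --create /dev/md2 --assume-clean --level=5 --raid-devices=4 \
      --metadata=1.2 --chunk=64 /dev/sdb /dev/sdc5 /dev/sdd5 missing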
Any help is greatly appreciated.