Hi guys, the problem is the following: my NAS is a Dell T20 server currently running 5x 2TB disks in RAID5. It was set up a while back with OMV4, running from a USB stick. Last week I noticed a couple of issues (mainly that I couldn't log into the web interface), so I restarted the server and noticed some consistency issues on the filesystem (it was booting into initramfs by default, which I was able to fix with fsck). Independently, I had a look at the web interface once it was back online and saw that there were problems with one of the HDDs (red alarm signal in SMART, some bad sectors), and the RAID was graded as 'clean, degraded'; files were still accessible.
So I got a new HDD and wanted to replace the failing one over the weekend. Unfortunately, a power outage hit on Thursday/Friday and the system went down. After rebooting, the USB thumb drive seemed unreadable, so I set up a fresh OMV installation (v5) on a different thumb drive. After setting it up successfully, I replaced the drive that was reported to have bad sectors with the new one; unfortunately, the RAID no longer shows up in the web interface. I thought software RAID would tolerate a drive swap, especially since 4 drives are still operating fine, but apparently that's now a bit of an issue... I have now put the old (bad sectors) HDD back in and still can't get the RAID back into active mode:
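For reference, these are the kinds of commands I've been running to gather the state below (the md device name and /dev/sdb for the suspect drive are guesses from my system and may differ on yours):

```shell
# Kernel's view of all md arrays and their member status
cat /proc/mdstat

# Detailed state of the array (device name may be /dev/md0 or /dev/md127)
mdadm --detail /dev/md127

# SMART health summary of the suspect drive
smartctl -H /dev/sdb
```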
This is the defective drive:
And these are the ones that operate correctly:
Now I am feeling a bit hopeless and not sure how to interpret this.
It does look like OMV can still see all HDDs and correctly identifies them as RAID members:
/dev/sde: UUID="7b1be9be-d75a-f1ec-6370-02de39001c58" UUID_SUB="70f0181e-be31-b75e-e78f-498a7379f630" LABEL="Nas:RAID5" TYPE="linux_raid_member"
/dev/sda: UUID="7b1be9be-d75a-f1ec-6370-02de39001c58" UUID_SUB="c762fde1-9e8f-0b91-cca4-ab0c6ae35729" LABEL="Nas:RAID5" TYPE="linux_raid_member"
/dev/sdc: UUID="7b1be9be-d75a-f1ec-6370-02de39001c58" UUID_SUB="e392ac97-159d-f386-b562-983e0ed8929a" LABEL="Nas:RAID5" TYPE="linux_raid_member"
/dev/sdd: UUID="7b1be9be-d75a-f1ec-6370-02de39001c58" UUID_SUB="9590adb7-f274-c2eb-b2f9-b5041dae9a1a" LABEL="Nas:RAID5" TYPE="linux_raid_member"
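To check whether the surviving members still agree with each other, I've been comparing the per-disk superblocks (the device list here matches the blkid output above):

```shell
# Compare the superblocks of the four surviving members;
# the Events counters should be (nearly) identical for a clean assemble
mdadm --examine /dev/sda /dev/sdc /dev/sdd /dev/sde | grep -E 'Events|Update Time|/dev/sd'
```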
It still says active, but FAILED/not started. Does anyone have an idea how to work around this?
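From what I've read, if the array is stuck inactive but the members look consistent, the usual recovery path is to stop the half-assembled array and force a reassembly from the surviving members. I haven't dared to run this yet; /dev/md127 and the member names below are assumptions based on my blkid output, so please correct me if this is the wrong approach:

```shell
# Stop the partially assembled / inactive array
mdadm --stop /dev/md127

# Force-assemble from the four good members (RAID5 can run degraded on n-1 disks)
mdadm --assemble --force /dev/md127 /dev/sda /dev/sdc /dev/sdd /dev/sde

# Once it is up degraded, add the new replacement disk to start the rebuild
# (replace /dev/sdb with whatever the new drive enumerates as)
mdadm --manage /dev/md127 --add /dev/sdb
```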
Thanks in advance!!