Soft RAID disappears after one disk is removed

  • I have a soft RAID 1 built from two WD 4 TB hard drives for all my data, and I always thought that if one of them failed, everything would continue to work without any interruption.
    Today I decided to test that and realized that nothing works the way I thought!
    I shut down my machine, disconnected the cables from one of the disks and started the machine... and the RAID disappeared! And because the RAID disappeared, all the folders that reference locations on the RAID disk are now dead.
    So I shut down the machine, put the cables back on the disk that was missing and voilà, the RAID is back again. But I'm not sure this is normal. Can someone confirm whether this is normal behavior? Shouldn't the RAID stay available whether or not one disk is missing?
    Below, on the left is the output from OMV with both disks present, and on the right is the output from OMV with one disk missing.

  • Once upon a time I had an IBM server with hardware RAID 1 set up, and when one of the two disks died, the server didn't stop working; it booted normally into Windows Server with a notification that the RAID 1 should be repaired (by replacing the failed disk).
    So I assumed that everything should continue to work without interruption in this case as well, with soft RAID 1 and OMV, but I guess I was wrong.
    Thanks for the clarification.

    • Official post

    So I assumed that everything should continue to work without interruption in this case as well, with soft RAID 1 and OMV, but I guess I was wrong.

    I wouldn't say wrong; soft RAID and hardware RAID are totally different. But to help you, there is a thread here; although that thread references a RAID 5, the follow-up in that thread will explain.

  • I read the thread you linked and it was indeed an interesting read.
    I'll try the commands below as soon as I get home, to see if the RAID will be in working order again, now with only one drive.
    mdadm --stop /dev/md0
    mdadm --assemble --force /dev/md0 /dev/sda
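    To see whether that worked, a quick check I plan to run before and after (assuming the array is /dev/md0 and the remaining member is /dev/sda, as in the commands above):
    cat /proc/mdstat            # shows whether md0 is listed as active or inactive
    mdadm --detail /dev/md0     # array state (clean / degraded) and member devices
    mdadm --examine /dev/sda    # RAID metadata on the remaining member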

    • Official post

    I read the thread you linked and it was indeed an interesting read.

    The reason I referenced the thread was to highlight the fact that simply 'pulling a drive' is not the same as a drive failing.


    With hardware RAID, pulling a drive still allows the array to continue to function. With soft RAID, if you pull a drive without rebooting, the array will still display as clean; it's not until you reboot that the software notices a drive is missing, but it essentially doesn't know what to do, hence the array becomes inactive.
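
    If you want to test what a real failure looks like rather than pulling the cable, mdadm can mark a member as faulty while the array is running. A minimal sketch, assuming the array is /dev/md0 and the member you want to fail is /dev/sdb (adjust the device names to your system):
    mdadm --manage /dev/md0 --fail /dev/sdb      # mark the member as faulty; the array goes clean/degraded
    mdadm --manage /dev/md0 --remove /dev/sdb    # remove the faulty member from the array
    cat /proc/mdstat                             # md0 stays active with one member missing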


    If a drive fails in OMV's soft RAID, the array will display clean/degraded; you can then remove the failed drive using the GUI, add a clean drive, and recover without the use of the command line.
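
    For reference, the command-line equivalent of that GUI recovery would look roughly like this, assuming the array is /dev/md0 and the replacement disk shows up as /dev/sdb (device names will differ on your system):
    mdadm --manage /dev/md0 --add /dev/sdb    # add the clean replacement disk to the array
    cat /proc/mdstat                          # watch the rebuild/resync progress
    The resync runs in the background and the array remains usable while it rebuilds.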
