RAID 5 not fault tolerant

    • OMV 4.x


    • I have a RAID 5 array (five disks) configured through the OMV GUI. I am running OMV 4.1.4-1.
      While working on the machine with it shut down, I accidentally unclipped the SATA cable on one of the drives. When I started the machine up, the RAID was not detected and the file system on the RAID was listed as missing.

      I found this post on the forum:
      RAID Array missing after failed drive. Trying to rebuild
      The above thread, from post 10 onward, pretty much describes my symptoms (except that I have five disks), and I am relatively sure the commands ryecoaaron posted in post 11 will solve my problem. The question is: why doesn't the RAID start automatically in degraded mode, with the file system mounted? I could then add another disk, rebuild, and move on. Is there a way to tell OMV to start degraded RAIDs (so that they can be rebuilt)? Isn't that the purpose of RAID 5, to remain fault tolerant when one disk is missing?
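      For reference, a minimal sketch of how a degraded md array is typically force-started by hand (the device names /dev/md0 and /dev/sd[b-e] are assumptions, and these are not necessarily the exact commands from post 11):

        cat /proc/mdstat                                 # see whether the kernel found the array at all
        mdadm --stop /dev/md0                            # stop a partially assembled (inactive) array first, if one exists
        mdadm --assemble --run /dev/md0 /dev/sd[bcde]    # --run starts the array even though one member is missing
        mdadm --detail /dev/md0                          # should now report "clean, degraded"
        mount /dev/md0 /srv/raid                         # hypothetical mount point; OMV normally mounts it from the GUI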
    • I don't use RAID any more (I used to), but I think the idea is fault tolerance during live operation. Most real servers keep one or more hot-spare disks that join the array as soon as a member fails.

      From what I learned when I ran a RAID setup, and from reading about it, rebuilding a RAID 5 array in a degraded state puts serious stress on the remaining disks, which can cause a second disk to fail and would mean data loss.

      OMV only provides a web configuration panel for RAID. Everything underneath is handled by the md driver and the mdadm utilities shipped with the Debian kernel and packages.
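      Because it all comes down to plain mdadm underneath, the array state and the rebuild can also be checked from the shell. A rough sketch, with /dev/md0 and /dev/sdf as assumed names for the array and the replacement (or hot-spare) disk:

        mdadm --detail /dev/md0                  # shows the array state, member disks and spare count
        mdadm --manage /dev/md0 --add /dev/sdf   # add a new disk; on a degraded array the rebuild starts immediately
        cat /proc/mdstat                         # watch the recovery progress and estimated finish time
        mdadm --detail --scan                    # prints the ARRAY line usually kept in /etc/mdadm/mdadm.conf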