RAID 1 disappears if sdb is removed

    • OMV 4.x
    • Resolved


    • RAID 1 disappears if sdb is removed

      New

      Hi, I'm having some issues with redundancy not working as it should with my RAID 1 setup. I'm running a fresh install of OMV 4.1.8 and as storage drives I have a WD RED 4TB (sda) and a Seagate Barracuda 3TB (sdb).

      Today I noticed, by accident, that if sdb is removed then the entire RAID disappears and the 'File Systems' page shows md0 as missing. sda still shows up in 'Disks' but not in 'File Systems'. This, of course, is not how RAID 1 should work, and it means that if I lose the Seagate drive then my RAID is apparently lost. This is also bad because the Seagate is the older drive and it has a reputation for failing.

      When both drives are connected the RAID works as it should and the status is 'Clean', with both drives showing up in 'Disks' and 'RAID management'. But I noticed that sda does *not* show up in 'File Systems' even when both drives are connected and the RAID is working normally. I'm guessing that's part of the reason my RAID disappears when sdb is removed... I've attached a screenshot of what it looks like when both drives are connected and the RAID is working.




      Any ideas? Thanks in advance!
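
      For what it's worth, these are the commands I could run with sdb disconnected to see whether the array is really gone or just not started. The partition name is an assumption on my part, I haven't looked at the partition layout in detail yet:

      Shell-Script

      # is the array listed at all, and in what state?
      cat /proc/mdstat
      mdadm --detail /dev/md0
      # does the remaining disk still carry RAID metadata?
      mdadm --examine /dev/sda2
      # overview of disks, partitions and filesystems
      lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT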
    • New

      Check

      Shell-Script

      # cat /proc/mdstat

      It will surely show you a degraded RAID.

      jackster wrote:

      This, of course, is not how RAID 1 should work
      I think you misunderstand how MDADM RAID works. Please check the kernel documentation about MDADM for more information.
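
      Roughly, with one member missing, I would expect output along these lines (illustrative only; substitute whatever your remaining member partition is actually called, I'm assuming /dev/sda2 here):

      Shell-Script

      # with a member missing the array is degraded, not gone, e.g.:
      #   md0 : active raid1 sda2[2]
      #         2929765240 blocks super 1.2 [2/1] [_U]
      cat /proc/mdstat

      # if the array was not assembled at boot, it can usually be started
      # by hand from the remaining member:
      mdadm --assemble --run /dev/md0 /dev/sda2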
    • New

      votdev wrote:

      Check

      Shell-Script

      # cat /proc/mdstat

      It will surely show you a degraded RAID.

      jackster wrote:

      This, of course, is not how RAID 1 should work
      I think you misunderstand how MDADM RAID works. Please check the kernel documentation about MDADM for more information.
      Here's the output of cat /proc/mdstat. Doesn't this mean that the RAID is OK?


      Source Code

      md0 : active raid1 sdb2[0] sda2[2]
            2929765240 blocks super 1.2 [2/2] [UU]
      And here's the output of mdadm --detail /dev/md0:

      Source Code

      /dev/md0:
              Version : 1.2
        Creation Time : Thu Mar 10 21:14:15 2016
           Raid Level : raid1
           Array Size : 2929765240 (2794.04 GiB 3000.08 GB)
        Used Dev Size : 2929765240 (2794.04 GiB 3000.08 GB)
         Raid Devices : 2
        Total Devices : 2
          Persistence : Superblock is persistent

          Update Time : Thu Jun 14 10:22:49 2018
                State : clean
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0

                 Name : :0  (local to host)
                 UUID : 782d7526:40efb178:b7926e6d:deed1e09
               Events : 2298267

          Number   Major   Minor   RaidDevice State
             0       8       18        0      active sync   /dev/sdb2
             2       8        2        1      active sync   /dev/sda2

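
      If it helps with the 'File Systems' question: as far as I understand it, the ext4 filesystem sits on md0 itself and not on the member disks, so only md0 would ever be listed there. This is just a sketch of the checks I could run to confirm that, I haven't pasted their output:

      Shell-Script

      # the members should report linux_raid_member; only md0 should report ext4
      blkid /dev/sda2 /dev/sdb2 /dev/md0
      # show where md0 is mounted
      findmnt /dev/md0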

    • New

      Could the reason for sda not showing up in 'File Systems' be that the RAID was created on another NAS (a proprietary 2-bay Zyxel NSA325-V2 box running Linux)? I didn't notice until now that both drives have 487M partitions outside of the RAID array: sda1 is swap and sdb1 is ext2.

      In any case I need the array to keep working even if the Seagate, sdb, fails... Should I just delete and recreate the RAID 1 array from within OMV and use all the space for the array, or is there another way to fix this? The lsblk output, and a rough redundancy test I could run, are below.


      Source Code

      ~# lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
      NAME       SIZE FSTYPE            TYPE  MOUNTPOINT
      sda        3.7T                   disk
      ├─sda1     487M swap              part
      └─sda2     3.7T linux_raid_member part
        └─md0    2.7T ext4              raid1 /srv/dev-disk-by-id-md-name-Vault-0
      sdb        2.7T                   disk
      ├─sdb1     487M ext2              part
      └─sdb2     2.7T linux_raid_member part
        └─md0    2.7T ext4              raid1 /srv/dev-disk-by-id-md-name-Vault-0
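
      Before deciding whether to recreate anything, I suppose I could also test the redundancy in software instead of pulling the SATA cable. Something roughly like this, using the device names from the lsblk output above (a sketch only, not yet tried here, and re-adding the disk will trigger a resync that can take hours):

      Shell-Script

      # mark the Seagate member as failed and drop it from the running array
      mdadm /dev/md0 --fail /dev/sdb2
      mdadm /dev/md0 --remove /dev/sdb2
      cat /proc/mdstat        # should now report a degraded mirror, e.g. [2/1]

      # put it back and let the mirror resync
      mdadm /dev/md0 --add /dev/sdb2
      cat /proc/mdstat        # shows the recovery progress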