RAID 1 disappears if sdb is removed

  • Hi, I'm having issues with redundancy not working as it should in my RAID 1 setup. I'm running a fresh install of OMV 4.1.8, and as storage drives I have a WD Red 4TB (sda) and a Seagate Barracuda 3TB (sdb).


    Today I noticed, by accident, that if sdb is removed, the entire RAID disappears and the File System status shows that md0 is missing. sda still shows up in 'Disks' but not in 'File Systems'. This, of course, is not how RAID 1 should work, and it means that if I lose the Seagate drive, my RAID is apparently lost. That's especially bad because the Seagate is the older drive and has a reputation for failing.


    When both drives are connected the RAID works as it should: the status is 'Clean' and both drives show up in 'Disks' and 'RAID Management'. But I noticed that sda does *not* show up in 'File Systems' even when both drives are connected and the RAID is working normally. I'm guessing that's related to why the RAID disappears when sdb is removed... I've attached a screenshot of what it looks like when both drives are connected and the RAID is working.





    Any ideas? Thanks in advance!

    • Official Post

    Check


    Bash
    # cat /proc/mdstat

    It surely shows you a degraded RAID.
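
    A degraded two-disk RAID 1 would look roughly like this (illustrative output, with the sdb member missing, so [2/1] [_U] instead of [2/2] [UU]):

    Code
    md0 : active raid1 sda2[2]
          2929765240 blocks super 1.2 [2/1] [_U]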

    This, of course, is not how RAID 1 should work

    I think you misunderstand how MDADM RAID works. Please check the kernel documentation about MDADM for more information.
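
    The md(4) and mdadm(8) man pages are a good starting point, for example:

    Bash
    # man 4 md
    # man 8 mdadm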

  • Here's the output of cat /proc/mdstat. Doesn't the [2/2] [UU] mean both members are active and the RAID is OK?



    Code
    md0 : active raid1 sdb2[0] sda2[2]
          2929765240 blocks super 1.2 [2/2] [UU]

    And here's the output of mdadm --detail /dev/md0:


  • Could the reason sda doesn't show up in 'File Systems' be that the RAID was created on another NAS (a Zyxel NSA325-V2, a proprietary 2-bay NAS box running Linux)? I didn't notice until now that both drives have 487M partitions outside the RAID array: sda1 is swap and sdb1 is ext2.


    In any case, I need the array to keep working even if the Seagate (sdb) fails... Should I just delete and recreate the RAID 1 array from within OMV and use all of the space for the array, or is there another way to fix this? (See also the failure-test sketch after the lsblk output below.)



    Code
    ~# lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
    NAME          SIZE FSTYPE            TYPE  MOUNTPOINT
    sda           3.7T                   disk
    ├─sda1        487M swap              part
    └─sda2        3.7T linux_raid_member part
      └─md0       2.7T ext4              raid1 /srv/dev-disk-by-id-md-name-Vault-0
    sdb           2.7T                   disk
    ├─sdb1        487M ext2              part
    └─sdb2        2.7T linux_raid_member part
      └─md0       2.7T ext4              raid1 /srv/dev-disk-by-id-md-name-Vault-0
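
    One thing I could try, as a sketch (assuming /dev/md0 and the Seagate member /dev/sdb2 from the lsblk output above): fail the member in software instead of pulling the cable, then re-add it:

    Bash
    # mdadm /dev/md0 --fail /dev/sdb2      # mark the Seagate member as failed
    # cat /proc/mdstat                     # array should stay active with [2/1] [_U]
    # mdadm /dev/md0 --remove /dev/sdb2    # take the failed member out of the array
    # mdadm /dev/md0 --add /dev/sdb2       # re-add it; md resyncs the mirror

    If md0 stays mounted through the --fail step, the mirroring itself works, and the disappearing array is more likely a display or configuration issue.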
