Single RAID1 split into two degraded ones

  • I have a rather unusual setup: two external USB disks connected to an old netbook and configured as RAID1 (mirror). I set it up and it worked well for about two days. Today, however, when I turned my NAS on, instead of one mirror on /dev/md0 I see two degraded ones, /dev/md126 and /dev/md127. The same disks are connected to the same USB ports all the time. The only thing that comes to mind is that I was playing with partitions on the system disk (not touching the mirror; both disks were disconnected at the time) according to method 4 from http://forums.openmediavault.o…topic.php?f=10&t=192#p734 (but I haven't run the "mdadm /dev/md100 --create --force --level=linear --raid-devices=1 /dev/sda3" instruction proposed there to create a linear array; I simply mounted the new /dev/sda3 from the Filesystems tab in the UI). Could that have done something wrong to the RAID? Please find below all my current configuration.

    • What happened and how to fix it?
    • How to prevent this in the future?
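
    One check that should reveal what happened (a sketch; it assumes the mirror members are still the whole disks /dev/sdb and /dev/sdc, as used in the commands further down): compare the RAID superblocks on both members. If they show the same Array UUID but different event counts, the two halves were assembled separately as md126/md127.

    Code
    # Print the md superblock of each member; compare the
    # "Array UUID" and "Events" lines between the two disks.
    mdadm --examine /dev/sdb /dev/sdc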


    Output of the following commands (attached, not reproduced here):

    cat /proc/mdstat
    mdadm --detail /dev/md/mirror1
    mdadm --detail /dev/md126
    mdadm --detail /dev/md127
    cat /etc/fstab
    mount

    df:

    Code
    root@nas:/home/reddy# df
    Filesystem           1K-blocks      Used Available Use% Mounted on
    /dev/sda1              8343480   1174032   6750020  15% /
    tmpfs                   510256         4    510252   1% /lib/init/rw
    udev                    504076       208    503868   1% /dev
    tmpfs                   510256         0    510256   0% /dev/shm
    tmpfs                   510256         8    510248   1% /tmp
    /dev/md126           1952559564    576696 1951982868   1%   /media/fced0dfe-e95d-402d-8d66-4296b0736393
    /dev/sda3            106781756     32944 106748812   1%   /media/ad146658-eed1-4447-8eeb-4659db35086a




    All three disks (the internal system one and both USB drives) are configured the same way: minimum power usage, spin down after 10 minutes, and S.M.A.R.T. enabled. [Settings screenshots attached.]

    I'll post the filesystems configuration screenshot in another post due to the limit of 3 attachments per post.
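
    For reference, a command-line sketch of what those UI settings amount to. This is an assumption on my part; the UI may well use hdparm and smartctl underneath, and USB bridges don't always pass these commands through:

    Code
    # Spin down after 10 minutes: -S 120 means 120 * 5 s = 600 s
    hdparm -S 120 /dev/sdb
    hdparm -S 120 /dev/sdc
    # Enable S.M.A.R.T. on each drive (smartmontools)
    smartctl -s on /dev/sdb
    smartctl -s on /dev/sdc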

  • I tried to fix the array in the following way:


    Code
    # Unmount both half-arrays and stop them
    umount /dev/md126
    umount /dev/md127
    mdadm --stop /dev/md126
    mdadm --stop /dev/md127
    # Force-assemble the original mirror from both disks
    mdadm -A /dev/md0 -f --update=summaries /dev/sdb /dev/sdc
    # The second disk was not accepted into the assembly, so add it back
    mdadm /dev/md0 -a /dev/sdc


    And it started re-building the array:


    Code
    reddy@nas:~$ cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdc[2] sdb[0]
          1953513424 blocks super 1.2 [2/1] [U_]
          [===>.................]  recovery = 17.6% (345273472/1953513424) finish=1285.7min speed=20846K/sec
    
    
    unused devices: <none>


    It will take quite a while. Is there any way to avoid the recovery? Both disks should already contain perfectly synchronized data in this case...
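
    From what I understand, the full resync happens because -a adds the disk as if it were brand new; "mdadm /dev/md0 --re-add /dev/sdc" might have been accepted without one if the superblock still matched. For the future, an internal write-intent bitmap limits recovery to the regions that actually changed. A sketch, to be run once the array is clean again:

    Code
    # Store a write-intent bitmap inside the array's metadata;
    # after a crash or a --re-add, only dirty regions are resynced
    mdadm --grow --bitmap=internal /dev/md0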

  • When the rebuild finished, everything seemed to be OK; the mirror was assembled as originally defined, at /dev/md0. However, after a reboot the mirror is still clean but has changed its device name again, to /dev/md127 this time:

    [Screenshot attached.]

    This instability scares me a bit... Why does it behave in such a strange manner?


    For reference, the current /proc/mdstat:


    Code
    root@nas:/home/reddy#  cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid1 sdb[0] sdc[2]
          1953513424 blocks super 1.2 [2/2] [UU]
    
    
    unused devices: <none>


    and /etc/mdadm/mdadm.conf (attached).


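    One thing I plan to try: pin the array definition in /etc/mdadm/mdadm.conf by UUID and rebuild the initramfs, so that early boot and the running system agree on the device name. A sketch, assuming a Debian-based system like OMV; the ARRAY line printed by --scan should be edited to the wanted name (e.g. /dev/md0) before rebuilding:

    Code
    # Show the UUID-based definition of the running array
    mdadm --detail --scan
    # Append it to the config, then edit the name if needed
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    # Rebuild the initramfs so early boot uses the same definition
    update-initramfs -u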
  • No idea what is causing it, and I usually don't worry about mount point changes, but I do find XFS scary at times and I have a feeling this may be related to your issue. If I were you, I would move the data you have there (if possible) and re-arrange the drives with ext4. Unless you have a specific reason to use XFS, stay away from it for a while. What it gains in performance, it loses in reliability (in my experience).

  • Thanks for the hint on the filesystem; I can still switch, as I don't have a lot of data there yet. I built the NAS for reliability only, speed is not an issue (it runs on USB drives anyway), so if there's something scary about XFS I'd prefer to switch to good old ext4.
    Nevertheless, I'd like to know why those mount points change...

  • Not sure; probably when the fixes were done it decided to change them, but honestly there's no harm in that. You did say they are USB drives; moving them from one port to another may cause those changes too.
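
    As long as /etc/fstab refers to the filesystem by UUID rather than by /dev/mdX, those renames are harmless. A minimal sketch, using the filesystem UUID from the df output above (type xfs for now; adjust it if you switch to ext4):

    Code
    # /etc/fstab: mount by filesystem UUID so md126/md127 renames don't matter
    UUID=fced0dfe-e95d-402d-8d66-4296b0736393  /media/fced0dfe-e95d-402d-8d66-4296b0736393  xfs  defaults  0  2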

  • Hello, I just had the same problem as reddy today.


    I started my NAS with approximately the same configuration, except that my disks are SATA, inside the NAS, and already ext4, not XFS. And my RAID was suddenly in the same state: two devices with strange names, both degraded.


    I ran the same commands as reddy and now my RAID is recovering. I hope I won't have lost any data :/

    Code
    umount /dev/md126
    umount /dev/md127
    mdadm --stop /dev/md126
    mdadm --stop /dev/md127
    mdadm -A /dev/md0 -f --update=summaries /dev/sdb /dev/sdc
    mdadm /dev/md0 -a /dev/sdc


    Thanks for the help, reddy.

  • Hello,


    After rebuilding the RAID it worked fine for a few weeks, and today I'm back with the same issue: two degraded arrays.


    One more detail, maybe: if I reboot again without touching anything, the drives re-sort themselves (/dev/sda becomes /dev/sdb, sdb becomes sdc, etc.).


    This doesn't make a lot of sense to me, and I'm starting to think about finding another solution for the RAID, because this one is obviously not reliable.


    Too bad, when it works it's a great tool...
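
    For what it's worth, mdadm identifies the members by the UUID stored in their superblocks, so the letter shuffling by itself shouldn't break assembly. To see which letter currently belongs to which physical disk, the persistent symlinks help (the names depend on your drive models):

    Code
    # Symlinks named after model/serial, pointing at the current sdX node
    ls -l /dev/disk/by-id/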

  • Just FYI: since I switched to ext4 (more than 6 months ago now) my mirror has been stable, with no surprises like this split. I hope it will stay that way :) I have no idea if or how the file system can affect mdadm, so I don't know how to help anybody, or even myself, if it happens again...
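
    One thing that at least gives early warning if it does happen again: mdadm can send a mail the moment an array degrades. A minimal sketch, assuming the Debian mdadm package (which runs mdadm --monitor as a daemon by default) and working local mail delivery:

    Code
    # /etc/mdadm/mdadm.conf: mail a warning as soon as an array degrades
    MAILADDR root@localhost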
