BTRFS Raid1 inactive since loss of one drive

  • Hello,


I cannot access any data on the raid because the filesystem cannot be mounted.


I started using OMV with version 3 and at that time created a raid1 with a btrfs filesystem.

    I moved this raid from version 3 to 4 and now from version 4 to 5.

Unfortunately one of the two data drives died. Therefore I ordered a new drive and shut the system down. When the new drive arrived I started the system, updated the omv software (the raid1 was in degraded mode but accessible) and rebooted the system.

After the reboot the raid1 can no longer be seen in the OMV web interface.



    cat /proc/mdstat

    gives the feedback:
md127 : inactive sdc[0](S)

    (sdc is the remaining drive of the raid1)


    So I assume the data of the raid is still there, but at the moment I cannot get the raid back to work or copy the data to a new filesystem.
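    In case it helps anyone reading along, the state of the surviving drive can be checked before touching anything. This is only a sketch; /dev/sdc is the remaining drive from the output above, adjust the name if your system differs:

    cat /proc/mdstat           # md127 shows up as inactive, sdc marked as spare (S)
    mdadm --examine /dev/sdc   # prints the md superblock: array UUID, raid level, device role

    If --examine still reports the raid1 superblock, the data on sdc should be intact and the array can usually be reassembled.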


    Any suggestions?


Thanks in advance.

    mac

• Official post

    mdadm --stop /dev/md127


mdadm --assemble --force --verbose /dev/md127 /dev/sdc

    This should bring the raid back to a clean/degraded state.
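
    To check that the assemble worked (just a quick sketch, device names as above):

    cat /proc/mdstat            # md127 should now be active with one of two members
    mdadm --detail /dev/md127   # State should read clean, degraded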


Storage -> Disks -> select the new drive, click Wipe on the menu, then click Short.


When complete: Raid Management -> select the raid, click Recover on the menu, select the new drive from the dialog box, click OK and the raid should now resync.
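
    For anyone who prefers the command line, the recover step roughly corresponds to adding the new drive to the array (sketch only; /dev/sdd is a placeholder for the new drive, check the real name under Storage -> Disks first):

    mdadm --manage /dev/md127 --add /dev/sdd   # add the new, wiped drive; the rebuild starts automatically
    watch cat /proc/mdstat                     # follow the resync progress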

  • Hello,


    thanks for the solution.


So I should first stop and "unmount" the raid, then reassemble it with just the sdc drive on the command line, and afterwards rebuild the raid in omv - fine, I will try it tonight.


But what happened with the raid? In ext3/ext4 raid1 this kind of error never occurred before.

• Official post

    So I should first stop

    Just follow what I posted above in the order I posted and it should be OK.

But what happened with the raid


Therefore I ordered a new drive and shut the system down. When the new drive arrived I started the system, updated the omv software (the raid1 was in degraded mode but accessible) and rebooted the system.

After the reboot the raid1 can no longer be seen in the OMV web interface.

Regarding the quoted text: if a drive fails on its own, mdadm removes it. What should have been possible was to remove the failed drive, install the new one, wipe it and then add it to the raid. A raid becomes inactive if a drive is 'pulled' and the system restarted. Hence I couldn't quite work out what you were explaining.
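
    That normal replacement flow would look roughly like this on the command line (a sketch only; /dev/sdb stands for the failed drive and /dev/sdd for the new one, both names are placeholders):

    mdadm --manage /dev/md127 --fail /dev/sdb     # mark the dying drive as failed (if mdadm has not already done so)
    mdadm --manage /dev/md127 --remove /dev/sdb   # remove it from the array
    mdadm --manage /dev/md127 --add /dev/sdd      # add the new, wiped drive; the rebuild starts automatically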

  • bjoern

Added the label "solved".
