RAID5 disk failure

  • Ok, I unplugged a drive while the machine was powered down and tried the command that ness1602 suggested. I changed it a bit to only include /dev/sdb, /dev/sdc and /dev/sdd, as those are the three remaining drives in the array.

    Code
    root@openmediavault4:~# mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd
    mdadm: /dev/sdb is busy - skipping
    mdadm: /dev/sdc is busy - skipping
    mdadm: /dev/sdd is busy - skipping

    No joy there.
    I then went to the GUI under file systems where the RAID array shows up as a device and unmounted it. It changed to unmounted.
    I tried the command again with the same results.
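    The 'busy - skipping' messages usually mean the kernel still holds those disks as members of the inactive /dev/md0, so they cannot be assembled again until that array is stopped. A minimal sketch of how to confirm this, assuming the same device names as above:

    Code
    # An inactive /dev/md0 still claims its member disks, which is what makes them 'busy'
    cat /proc/mdstat

    # Show how the disks are currently claimed
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT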

    • Official Post

    Actually, clicking on a folder to open it throws an error message that says "The share is inaccessible because a device has been removed"

    It would, because as far as Windows is concerned the share is still there on the network.


    No joy there.
    I then went to the GUI under file systems where the RAID array shows up as a device and unmounted it. It changed to unmounted.
    I tried the command again with the same results.

    Interesting that you were able to unmount it from the GUI; that must be possible because the RAID is inactive. Do you get a 'save configuration' prompt?


    What happens if you do mdadm --stop /dev/md0 and then mdadm --assemble --force /dev/md0 /dev/sd[bcd]? If that works, what's the output of cat /proc/mdstat?
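    For reference, a minimal sketch of that sequence, assuming the array is /dev/md0 and the surviving members are /dev/sdb, /dev/sdc and /dev/sdd as in the earlier post:

    Code
    # Stop the inactive array so the member disks are no longer 'busy'
    mdadm --stop /dev/md0

    # Force-assemble the array from the three surviving members
    mdadm --assemble --force /dev/md0 /dev/sd[bcd]

    # Confirm the array is back, now active but degraded
    cat /proc/mdstat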

    • Official Post

    When you fail one disk, an mdadm-based RAID should be active/degraded. It shouldn't be inactive at any time.

    Yes and no. If you fail a drive using mdadm, its state will show as clean/degraded; if you pull a drive whilst the machine is powered down, the array will come up as inactive, as @ryecoaaron confirmed yesterday. Simply stopping the RAID and reassembling it will bring it back up as clean/degraded.
    This can also occur after a power outage: one drive fails to initialise, and this 'could' also leave the array inactive rather than clean/degraded. I've tested one and experienced the other.
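    A sketch of how that difference can be checked before force-assembling; the device names are the ones used earlier in this thread, and the exact output varies with the mdadm version:

    Code
    # Read the superblock of each surviving member; the 'Events' counters and
    # 'Array State' lines show whether the members still agree with each other
    mdadm --examine /dev/sd[bcd]

    # The array's own view; an inactive array reports very little here
    mdadm --detail /dev/md0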

  • Yes and no. If you fail a drive using mdadm, its state will show as clean/degraded; if you pull a drive whilst the machine is powered down, the array will come up as inactive, as @ryecoaaron confirmed yesterday. Simply stopping the RAID and reassembling it will bring it back up as clean/degraded. This can also occur after a power outage: one drive fails to initialise, and this 'could' also leave the array inactive rather than clean/degraded. I've tested one and experienced the other.

    This is consistent with what I'm seeing happen.

    Interesting that you were able to unmount it from the GUI; that must be possible because the RAID is inactive. Do you get a 'save configuration' prompt?
    What happens if you do mdadm --stop /dev/md0 and then mdadm --assemble --force /dev/md0 /dev/sd[bcd]? If that works, what's the output of cat /proc/mdstat?

    Yes, I got a save configuration prompt.
    I found it really odd, though, that even after unmounting, powering down and plugging the unplugged drive back in returned everything to normal, with the RAID clean and all drives included.
    I intend to try your suggested commands this evening.

  • Powered down the machine and unplugged a drive.
    On boot-up the entire RAID array was gone from the GUI, as had been the case before.
    I ran the commands that geaves and ness1602 helped me with.
    It appears that after the stop command and then the forced assemble command, I'm good to go.
    The array showed back up in the GUI in a degraded state, which allowed me to add another drive and recover from there; a sketch of that re-add step is below.

    This is a great learning experience and I thank you gentlemen greatly.
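    A minimal sketch of that re-add step, assuming the replacement drive appears as /dev/sde (the device name is an assumption, not taken from this thread):

    Code
    # Add the replacement disk to the degraded array; mdadm starts rebuilding automatically
    mdadm --manage /dev/md0 --add /dev/sde

    # Watch the rebuild progress
    cat /proc/mdstat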

    • Official Post

    The array showed back up in the GUI in a degraded state, which allowed me to add another drive and recover from there.

    Well, that's good :) but if a drive failed 'naturally', i.e. the RAID came up as clean/degraded, the process of recovery would be different from the one you simulated.
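    For comparison, a sketch of one common recovery path for that case, assuming the failed member is /dev/sdb and the replacement appears as /dev/sde (both device names are assumptions). With the array already clean/degraded there is nothing to force-assemble; the failed member is removed and replaced:

    Code
    # Mark the member as failed (if the kernel has not already done so) and remove it
    mdadm --manage /dev/md0 --fail /dev/sdb
    mdadm --manage /dev/md0 --remove /dev/sdb

    # Add the replacement and let the rebuild run
    mdadm --manage /dev/md0 --add /dev/sde
    cat /proc/mdstat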

  • Hello folks,
    I'm asking for your help.



    First of all, I'm not a Linux expert.
    I only use the system for myself.
    I had the same problem with RAID5.
    I performed the steps described above.
    With PuTTY I get the following output:


    Via the web UI I get the following errors when mounting:

    Something went wrong with the backup.
    Not all folders were copied.


    Is there still a way for me to restore the data?
    Many thanks for any help.


    Best regards
    Gelo

    • Official Post

    Many thanks for any help.

    I've read this a couple of times, but I know I can't help much as I have not dealt with the way you have your RAID set up; I can, however, see the cause of the error.


    You have set up the RAID5 with 3 disks, and you have layered LVM on top of it -> /dev/md0: UUID="B3qaTb-2Tbj-Lmny-jE54-ibhh-ypeB-0b4Gcf" TYPE="LVM2_member"


    The output of your mdadm.conf -> ARRAY /dev/md0 metadata=1.2 spares=1 name=zuhause:Raid UUID=90ac7b52:dad73430:43e23631:dd5d0bab shows there is a spare. ->?


    You might be better off running mdadm --detail /dev/md0; it might give more information.


    The error you are seeing, and the reason it will not mount, is I think this -> /dev/mapper/speicher-Raidspeicher: LABEL="Speicher" UUID="8b4ff320-223e-4aaf-821a-0792b4ec3378" UUID_SUB="6c587569-eade-4152-8683-814ea3dc4eae" TYPE="btrfs"


    That device obviously has something to do with your RAID, but it's formatted with btrfs, hence this error -> mount -v --source '/dev/disk/by-label/Speicher' 2>&1' with exit code '32': mount: wrong fs type, bad option, bad superblock on /dev/mapper/speicher-Raidspeicher,


    As I said, I can 'see' the cause but I have not had any experience with this setup; the above should give you a starting point, and there is a sketch of the basic checks below.
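    A minimal sketch of how such a stacked setup (md RAID -> LVM -> btrfs) can be inspected once the array is assembled. The volume group and logical volume names ('speicher' and 'Raidspeicher') are read from the blkid line above; mounting read-only first is a cautious assumption, not a guaranteed fix:

    Code
    # Check that the LVM layer on top of /dev/md0 is visible, and activate it
    pvs
    vgchange -ay speicher
    lvs

    # Check the btrfs filesystem on the logical volume (read-only by default;
    # do not run --repair without a backup)
    btrfs check /dev/mapper/speicher-Raidspeicher

    # Try a read-only mount before anything else
    mount -o ro /dev/mapper/speicher-Raidspeicher /mnt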

  • Hello there.
    Thanks, geaves, for taking the time.
    I am searching my backup disk for the deleted data with TestDisk.
    It has been running for days, and this morning it was only at 17%.
    Hope remains...


    You might be better off running mdadm --detail /dev/md0; it might give more information.

    At the beginning I added three 2 TB disks to the RAID5.
    I formatted it as BTRFS, then made about 2 TB available via LVM.
    In case it helps you...

  • Hello geaves,


    I just wanted to thank you one more time for the support.
    Unfortunately I could not recover the array itself, but with the help of TestDisk I was able to recover all the data.


    I've learned now: "RAID is not a backup! Would you go skydiving without a parachute?"



    Best regards
    gelo
