Accidentally overwrote RAID config

  • Hi, I did something dumb with my NAS. It was running a RAID 5 config from OMV with 4 × 2 TB HDDs, plus a 256 GB SSD to run the OS.


    My server has an array card, and by mistake I created a hardware RAID for the HDDs. Later on I deleted it, but obviously OMV could no longer recognize the RAID :(.


    So I would like to ask your advice on what I could do to recover my data. The RAID config has been lost, but hopefully the data is still intact somehow :( .


    I attached some config for your reference.


    Thank you and sorry for my bad English.

  • KM0201

    Approved the thread.
  • Limshaman The output of mdadm -E /dev/sd[abcd] looks normal, with all disks having event counts and update times. So stop the array and see if it will assemble and resync.


    mdadm --stop /dev/md0


    mdadm --assemble --force  /dev/md0  /dev/sda /dev/sdb /dev/sdc /dev/sdd


    Check result and progress:


    cat /proc/mdstat


    On completion of the rebuild, since /etc/fstab should still contain the original entry for the array /dev/md0, mount the array's filesystem with:


    mount -a 

  • Hello Krisbee,


    Thank you. I tried --assemble before, but it still failed.


    I'm not a Linux expert, but looking at the md0 details, could it be set to raid0 at the moment?

    If I initialize a new RAID 5 from the OMV GUI, will it be possible to retrieve the current data, or will it all be gone for sure?


    Thank you.

  • Limshaman


    AFAIK you cannot attempt to rescue your array via the WebUI. You may be able to rescue it with other create commands at the CLI, but there are no guarantees. Do you have a backup?


    Could you also please post the output of blkid between code tags.

    • Official post

    Couldn't all four drives have dropped out of the array at the time?

    I don't think so. My best guess would be something related to the hardware RAID having 'written' something over the software RAID metadata, which would explain why the array will not assemble. The output of blkid 'might' show something.

  • Hi guys, thank you for your responses. Here's the blkid output:


    Code
     blkid
    /dev/sde1: UUID="be5d288f-1ae3-4204-8b0b-63b30015b0d1" TYPE="ext4" PARTUUID="a204df02-01"
    /dev/sde5: UUID="ca840eff-53eb-4824-9146-2ae4c8b36602" TYPE="swap" PARTUUID="a204df02-05"
    /dev/sdb: UUID="9302f5ea-f0d4-6996-c301-694eea82e8f2" UUID_SUB="8e498630-19e5-42a0-3ae7-666d41d56a0a" LABEL="omv:Data" TYPE="linux_raid_member"
    /dev/sdd: UUID="9302f5ea-f0d4-6996-c301-694eea82e8f2" UUID_SUB="8e498630-19e5-42a0-3ae7-666d41d56a0a" LABEL="omv:Data" TYPE="linux_raid_member"
    /dev/sda: UUID="9302f5ea-f0d4-6996-c301-694eea82e8f2" UUID_SUB="a6109a2d-2f81-edb8-bbee-1c60d22eef17" LABEL="omv:Data" TYPE="linux_raid_member"
    /dev/sdc: UUID="9302f5ea-f0d4-6996-c301-694eea82e8f2" UUID_SUB="a6109a2d-2f81-edb8-bbee-1c60d22eef17" LABEL="omv:Data" TYPE="linux_raid_member"

    On the GUI's File Systems page there is a missing device; I guess it is the missing RAID:
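    One detail worth noticing in the blkid output: in a healthy md array every member normally carries its own UUID_SUB (the per-device superblock UUID), but the paste above shows only two distinct values shared across the four disks. A minimal sketch (the lines are copied verbatim from the output above; if this is not just a copy-paste artifact, it would match the array only recognizing two distinct members):

    ```shell
    # Count distinct per-device member UUIDs (UUID_SUB) in the blkid
    # output above. Four disks but only two distinct values is anomalous
    # for an md array, where each member superblock gets its own UUID_SUB.
    blkid_out='/dev/sdb: UUID="9302f5ea-f0d4-6996-c301-694eea82e8f2" UUID_SUB="8e498630-19e5-42a0-3ae7-666d41d56a0a" LABEL="omv:Data" TYPE="linux_raid_member"
    /dev/sdd: UUID="9302f5ea-f0d4-6996-c301-694eea82e8f2" UUID_SUB="8e498630-19e5-42a0-3ae7-666d41d56a0a" LABEL="omv:Data" TYPE="linux_raid_member"
    /dev/sda: UUID="9302f5ea-f0d4-6996-c301-694eea82e8f2" UUID_SUB="a6109a2d-2f81-edb8-bbee-1c60d22eef17" LABEL="omv:Data" TYPE="linux_raid_member"
    /dev/sdc: UUID="9302f5ea-f0d4-6996-c301-694eea82e8f2" UUID_SUB="a6109a2d-2f81-edb8-bbee-1c60d22eef17" LABEL="omv:Data" TYPE="linux_raid_member"'
    printf '%s\n' "$blkid_out" | grep -o 'UUID_SUB="[^"]*"' | sort -u | wc -l
    ```

    The shared array UUID ("9302f5ea-…") on all four disks is expected; it is the duplicated UUID_SUB pairs that are suspicious.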


    Do you have a backup?

    I don't, unfortunately :(

    • Official post

    Using what another user did could be a way of getting the array back. However, the --assume-clean switch is a last resort and can cause data loss, and the missing option is something I've not seen or used before. So:


    mdadm --stop /dev/md0


    mdadm --create /dev/md0 --assume-clean --level=5 --verbose --raid-devices=4 /dev/sd[ab] missing /dev/sd[cd]


    The above is taken from the other user's thread. If you do this, you do so at your own risk, because there is a risk of data loss; but currently the array is not going to assemble anyway, as it can only find 2 of the 4 drives.
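    One thing worth double-checking before running the quoted create command: the globs expand to five member slots ("missing" counts as a slot), while --raid-devices=4 expects exactly four, so mdadm should refuse the list as written. A quick count, with the device names spelled out:

    ```shell
    # /dev/sd[ab] expands to sda sdb, and /dev/sd[cd] to sdc sdd, so with
    # the "missing" keyword the member list below has five slots for a
    # 4-device array; mdadm rejects a list longer than --raid-devices.
    slots="/dev/sda /dev/sdb missing /dev/sdc /dev/sdd"
    echo "$slots" | wc -w
    ```

    Make the slot count match --raid-devices before attempting the create: drop whichever device is actually absent, and keep the original device order, since --assume-clean with the wrong order or count can destroy the data it is meant to rescue.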


    As the other user did, check the file system and then mount the array.


    If this works, perhaps you'll consider a backup.
