RAID 5 restore after OMV reinstall

  • Hello guys, I need some help to rebuild my RAID 5 and recover my data.


    My config:

    - HP ProLiant Gen8

    - ESXi on SD card

    - Virtual machine with OMV on SSD

    - 4x 3TB WD Red in RAID 5. Disks are directly mapped to OMV.


    What happened:

    ESXi crashed and I had to reinstall it. I also lost the OMV VM and had to reinstall it.

    I have mapped the disks to OMV again, but from what I understand I have to rebuild the RAID.

    Looking at the logs below, it seems 1 disk is not OK.


    Code
    root@openmediavault:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdd[2](S) sde[0](S) sdb[1](S)
    8790308232 blocks super 1.2
    unused devices: <none>
    Code
    root@openmediavault:~# blkid
    /dev/sda1: UUID="e2f43903-97ee-40cf-8071-b71c7925c637" TYPE="ext4" PARTUUID="c684c0c4-01"
    /dev/sda5: UUID="8b1d20b7-dad5-4c62-8dee-a1db7a32d4ee" TYPE="swap" PARTUUID="c684c0c4-05"
    /dev/sdb: UUID="c91b4ded-2fd5-9908-936a-1f3402af4b25" UUID_SUB="39beb9de-b9bd-71f1-3c68-4f9a50d53d86" LABEL="omv:OMV" TYPE="linux_raid_member"
    /dev/sde: UUID="c91b4ded-2fd5-9908-936a-1f3402af4b25" UUID_SUB="a7357e6c-b9de-4b58-a8a4-f9463ae8fb74" LABEL="omv:OMV" TYPE="linux_raid_member"
    /dev/sdd: UUID="c91b4ded-2fd5-9908-936a-1f3402af4b25" UUID_SUB="ee554dab-d234-64a2-8757-ee37af016982" LABEL="omv:OMV" TYPE="linux_raid_member"
    /dev/sdc: PTUUID="edf65d70-51df-4bf4-9b5a-559e18a68199" PTTYPE="gpt"
    Code
    root@openmediavault:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md127 num-devices=3 metadata=1.2 name=omv:OMV UUID=c91b4ded:2fd59908:936a1f34:02af4b25
    devices=/dev/sdb,/dev/sdd,/dev/sde


    Thanks for your support!

  • I have no idea how ESXi behaves; having checked a VM of OMV running in VirtualBox, the output is different.


    Anyway, the array is inactive, as the output from mdstat and mdadm --detail suggests. However, that output shows only 3 drives, /dev/sd[bde], and that is confirmed by the output of blkid. There is also no array definition in mdadm.conf.

    I assume that /dev/sdc should also be part of the array (you state in your first post that there are 4 drives), but that drive does not have a "linux_raid_member" signature.
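
    If you want to check whether that drive still carries any mdadm metadata, you could examine it directly; mdadm --examine only reads, it changes nothing on the disk:

    Code
    # Read-only: prints any md superblock (RAID metadata) found on the drive;
    # if none is reported, the RAID signature on /dev/sdc really is gone.
    mdadm --examine /dev/sdc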


    When you reinstalled ESXi and subsequently OMV, were those 4 drives disconnected? Do you have a backup?


    Normal procedure to get an inactive array back up:

    Code
    mdadm --stop /dev/md127
    mdadm --assemble --force --verbose /dev/md127 /dev/sd[bde]


    I have no idea if that will work or if your data will be accessible.
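
    If the assemble does go through, you can check the state of the array afterwards with something like:

    Code
    # md127 should now show as active, degraded with 3 of 4 drives
    cat /proc/mdstat
    mdadm --detail /dev/md127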

  • Hello geaves,


    I have a backup, but it doesn't include everything, so I would like to access the files to retrieve the missing ones if possible.


    I don't understand why 1 of the 4 disks behaves differently from the others and is not flagged as a "linux_raid_member".


    Anyway, if I reactivate the array on 3 disks only, will I have access to the data?

    My next goal is to delete the RAID and go for another configuration (SnapRAID or something else, I am still not sure).


    Thanks for your support!

  • Anyway, if I reactivate the array on 3 disks only, will I have access to the data?

    TBH I have no idea. Reactivating a RAID is not normally an issue; it's as if yours has had a drive physically removed, i.e. as you would with a hot swap, except mdadm does not support hot swap.


    A RAID 5 will tolerate one drive failure. Will the reassembly fail? Technically it shouldn't, but if it does, then you have lost your data.
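
    If the degraded array does come up, it might be safest to mount the file system read-only while you copy off whatever is missing from your backup; something along these lines, assuming the file system sits directly on /dev/md127 (the mount point is just an example):

    Code
    # Read-only mount so nothing can be written to the degraded array
    mkdir -p /mnt/raid
    mount -o ro /dev/md127 /mnt/raid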

  • Hello geaves,


    I applied the procedure to restore the array, and it worked with the 3 disks.

    The state of the array is clean, degraded (as expected), but I was able to mount the file system and access my data.

    Now I just have to find the best backup method to replace the RAID 5.


    Thanks again for your support :-)
