Lost power while growing RAID5

  • I am pretty new to OMV and set up my first NAS with a 3x1TB RAID5. After purchasing an additional 1TB disk, I wanted to add it to the RAID, so I went into the web GUI, easy peasy, and started growing the RAID. Alas, while it was in the middle of the grow, we had a power outage :(


    Once power came back on, I tried to turn my machine on and it wouldn't come up. I can boot the machine with only the system drive, or even with the first 3 RAID drives (but without the new one I was adding). If I put the drives in (or boot with just the first three), OMV seems to recognize the individual drives just fine, but nothing shows up in RAID management. There is a "Recover" option, but it is grayed out.


    I've tried searching the forums but haven't found any answers that fit my situation. I can't figure out how to get my RAID back. Any ideas would be greatly appreciated.

  • What is the output of:


    blkid
    cat /proc/mdstat
    fdisk -l

    Hi Ryecoaaron, I realise this thread is very old, but I'm having the EXACT same problem as Steven.
    Hopefully you can help!


    I had 1x OS drive and 3x 2TB drives in a RAID5 array. I added a new 3TB drive and grew the RAID; it was about 90% complete when I had a power failure.


    Now the system won't boot. With all 5 drives in I get:


    It then gets stuck on:



    If I take the new disk out I can boot, but no RAID is shown:



    Any ideas??

  • What is the output of:


    blkid
    cat /proc/mdstat
    fdisk -l

    Also, to answer this: I removed the "new" drive, booted, and ran those commands. The results are:

    Code
    /dev/sdc: UUID="8f1ec97a-310a-5688-dbd1-10f823c4393d" UUID_SUB="f34ef623-3850-55da-41cc-cd6e9149fe96" LABEL="OLYMPUS:MainRaid" TYPE="linux_raid_member"
    /dev/sdb: UUID="8f1ec97a-310a-5688-dbd1-10f823c4393d" UUID_SUB="6968c488-bf4e-6d00-6c4e-cab409f540ab" LABEL="OLYMPUS:MainRaid" TYPE="linux_raid_member"
    /dev/sda: UUID="8f1ec97a-310a-5688-dbd1-10f823c4393d" UUID_SUB="69bc78c7-a56f-6727-9b15-dd0827b5a3a0" LABEL="OLYMPUS:MainRaid" TYPE="linux_raid_member"
    /dev/sdd1: UUID="f764ba74-c78c-40da-a4cf-424be53cf0d3" TYPE="ext4"
    /dev/sdd5: UUID="8d5c5445-9a67-4334-9a46-63a8a2c0d234" TYPE="swap"



    and:

    Code
    Personalities : [raid6] [raid5] [raid4]
    md0 : inactive sdc[0] sda[2] sdb[1]
    5860150536 blocks super 1.2
    unused devices: <none>


    and:


  • mdadm --stop /dev/md0
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[cab]

    Yes! Thank you!!
    I can see the file system online but unmounted, and the RAID shows but status is "clean, degraded"


    Should I "recover" the RAID with the 3 disks or shutdown put 4th back in and reboot then recover? (do I need to reformat before inserting 4th drive again?)


    Also should I mount file system or fix the RAID first?
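    For anyone following along, the force-assemble step quoted above works like this (a sketch; the device letters come from this thread's blkid output and will differ on other systems):

```shell
# Stop the inactive array so it can be re-assembled cleanly
mdadm --stop /dev/md0

# Force-assemble from the three original members (sda, sdb and sdc here).
# --force tells mdadm to accept members whose event counts disagree after
# the interrupted reshape; --verbose prints which superblocks it reads.
mdadm --assemble --force --verbose /dev/md0 /dev/sd[cab]

# Confirm the array came back
cat /proc/mdstat
```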

  • I can see the file system online but unmounted, and the RAID shows but status is "clean, degraded"

    It is supposed to be unmounted. The filesystem on the array isn't automounted when the array is fixed.


    Should I "recover" the RAID with the 3 disks or shutdown put 4th back in and reboot then recover?

    Post the output of: cat /proc/mdstat. You want the array to be ready for a reboot first.


    do I need to reformat before inserting 4th drive again?

    Don't do anything until I can see the output of the above command.


    Also should I mount file system or fix the RAID first?

    If you don't have a backup (raid is not backup), you could mount it now to save files.
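    If you do mount it to rescue files, a read-only mount is the cautious way (the mount point below is a placeholder of my choosing, and read-only is my suggestion rather than something stated in the thread):

```shell
# Temporary mount point (name is arbitrary)
mkdir -p /mnt/rescue

# Mount the degraded array read-only so copying files off
# cannot modify anything on it; /dev/md0 is the array device
mount -o ro /dev/md0 /mnt/rescue

# ...copy the important files somewhere safe, then unmount
umount /mnt/rescue
```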

    omv 5.3.9 usul | 64 bit | 5.3 proxmox kernel | omvextrasorg 5.2.6
    omv-extras.org plugins source code and issue tracker - github



  • Post the output of: cat /proc/mdstat

    Wow thanks for the quick reply, the result of that command is:

    Code
    Personalities : [raid6] [raid5] [raid4]
    md0 : active (auto-read-only) raid5 sdc[0] sda[2] sdb[1]
    3906765824 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
    unused devices: <none>
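    To decode that output: [4/3] means the array is defined for 4 devices but only 3 are present, [UUU_] marks the missing slot, and (auto-read-only) just means nothing has written to the array since assembly. For more detail than /proc/mdstat gives, one can also run:

```shell
# Per-member state, event counts, and the rebuild/reshape
# position, if any -- read-only and safe to run
mdadm --detail /dev/md0
```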
  • I would reboot and add the drive. You don't mention whether you have a backup, though?

    I don't have a backup, as 99% of the data is films etc. that I could download again (although it would take a long time).
    But there is a handful of files I'd like to get back if possible.


    What do you recommend?
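    For reference, "reboot and add the drive" would look roughly like this (a sketch; /dev/sde is a placeholder for the re-inserted 3TB disk, and clearing its stale superblock is my assumption, so double-check the device name with blkid first):

```shell
# Find the re-inserted disk: it is the one whose UUID does not match
# the three existing linux_raid_member entries
blkid

# Clear the stale RAID metadata left by the interrupted grow
# (ONLY on the new disk -- this destroys its superblock!)
mdadm --zero-superblock /dev/sde

# Add it back to the array; md will rebuild onto it
mdadm --add /dev/md0 /dev/sde

# Watch the rebuild progress
cat /proc/mdstat
```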

  • I think I've fixed it.


    So I booted systemrescuecd like you said, but all I did was run fdisk -l. It showed the RAID capacity as 3.6TB (or similar), which is what it was with 3 drives. I ran a few other fdisk commands to see which disks it was showing, then ran fdisk -l again, and this time it said the RAID capacity was 5.8TB (or similar).


    I rebooted into OMV and it had repaired the RAID. It took a few hours to sync again, but it is now showing a clean RAID; I'm now just resizing the filesystem.


    Thanks for all your help ryecoaaron, I owe you a few beers!
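    For completeness, "resizing the filesystem" after a grow usually means expanding it to fill the enlarged array. Assuming an ext4 filesystem directly on /dev/md0 (typical for OMV, though not confirmed in the thread), that would be:

```shell
# With the filesystem unmounted, check it first
e2fsck -f /dev/md0

# Grow ext4 to the array's new size; with no size argument,
# resize2fs expands to fill the whole device
resize2fs /dev/md0
```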
