Lost power while growing RAID5

    • OMV 1.0


    • Lost power while growing RAID5

      I am pretty new to OMV and set up my first NAS with a 3x1TB RAID5. After purchasing an additional 1TB disk, I wanted to add it to the RAID, so I went into the web GUI, easy peasy, and started growing the array. Alas, while it was in the middle of growing the RAID, we had a power outage :(

      Once power came back on, I tried to turn the machine on and it wouldn't come up. I can boot with only the system drive, or even with the first 3 RAID drives (but not with the new one I was adding). With the drives in (or booting with just the first three), OMV seems to recognize the individual disks just fine, but nothing shows up in RAID Management. There is a "Recover" option, but it is grayed out.

      I've tried searching the forums but haven't found any answers that fit my situation. I can't figure out how to get my RAID back. Any ideas would be greatly appreciated.
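
      For context, growing an existing RAID5 by one disk, as described above, roughly corresponds to the following mdadm sequence under the hood (a hedged sketch; the device names /dev/md0 and /dev/sde and the backup-file path are illustrative, not taken from the post):

        # Add the new disk to the existing array as a spare
        mdadm /dev/md0 --add /dev/sde

        # Reshape from 3 to 4 member devices; this can take many hours and
        # is the operation that was still running when the power went out
        mdadm --grow /dev/md0 --raid-devices=4 --backup-file=/root/md0-grow.backup

        # Watch the reshape progress
        cat /proc/mdstat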
    • ryecoaaron wrote:

      What is the output of:

      blkid
      cat /proc/mdstat
      fdisk -l
      Hi Ryecoaaron, I realise this thread is very old, but I'm having the EXACT same problem as Steven.
      Hopefully you can help!

      I had 1x OS drive and 3x 2TB drives in a RAID. I added a new 3TB drive and grew the RAID; it was about 90% complete when I had a power failure.

      Now, with all 5 drives in, the system won't boot: it shows an error and then gets stuck during startup. If I take the new disk out I can boot, but no RAID is shown.

      Any ideas??
    • ryecoaaron wrote:

      What is the output of:

      blkid
      cat /proc/mdstat
      fdisk -l
      Also, to answer this: I removed the "new" drive, booted, and ran those commands. The results are:

      Source Code

      /dev/sdc: UUID="8f1ec97a-310a-5688-dbd1-10f823c4393d" UUID_SUB="f34ef623-3850-55da-41cc-cd6e9149fe96" LABEL="OLYMPUS:MainRaid" TYPE="linux_raid_member"
      /dev/sdb: UUID="8f1ec97a-310a-5688-dbd1-10f823c4393d" UUID_SUB="6968c488-bf4e-6d00-6c4e-cab409f540ab" LABEL="OLYMPUS:MainRaid" TYPE="linux_raid_member"
      /dev/sda: UUID="8f1ec97a-310a-5688-dbd1-10f823c4393d" UUID_SUB="69bc78c7-a56f-6727-9b15-dd0827b5a3a0" LABEL="OLYMPUS:MainRaid" TYPE="linux_raid_member"
      /dev/sdd1: UUID="f764ba74-c78c-40da-a4cf-424be53cf0d3" TYPE="ext4"
      /dev/sdd5: UUID="8d5c5445-9a67-4334-9a46-63a8a2c0d234" TYPE="swap"



      and:

      Source Code

      Personalities : [raid6] [raid5] [raid4]
      md0 : inactive sdc[0] sda[2] sdb[1]
            5860150536 blocks super 1.2

      unused devices: <none>

      and:

      Source Code

      Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sda doesn't contain a valid partition table

      Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdb doesn't contain a valid partition table

      Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdc doesn't contain a valid partition table

      Disk /dev/sdd: 250.1 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0008e8c3

         Device Boot      Start        End     Blocks  Id System
      /dev/sdd1   *        2048  472150015  236073984  83 Linux
      /dev/sdd2       472152062  488396799    8122369   5 Extended
      /dev/sdd5       472152064  488396799    8122368  82 Linux swap / Solaris
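
      The mdstat output above shows md0 assembled but inactive, with only the three original members present. Before forcing an assembly, one common sanity check (not something requested in the thread, just a hedged aside) is to compare the event counters recorded in each member's superblock; disks that stopped at nearly the same event count can usually be force-assembled safely:

        # Inspect the md superblock on each remaining member; "Events" and
        # "Array State" show how consistent the three disks still are
        mdadm --examine /dev/sda /dev/sdb /dev/sdc | grep -E 'Events|Array State|Device Role'

        # Full per-disk detail if anything looks off
        mdadm --examine /dev/sda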
    • ryecoaaron wrote:

      mdadm --stop /dev/md0
      mdadm --assemble --force --verbose /dev/md0 /dev/sd[cab]
      Yes! Thank you!!
      I can see the file system online but unmounted, and the RAID shows but status is "clean, degraded"

      Should I "recover" the RAID with the 3 disks or shutdown put 4th back in and reboot then recover? (do I need to reformat before inserting 4th drive again?)

      Also should I mount file system or fix the RAID first?
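
      For reference, the two commands quoted above, as a sketch with verification steps added around them (only the mdadm --stop and --assemble lines come from the thread; the checks afterwards are just the obvious follow-ups):

        # Stop the inactive array so it can be re-assembled cleanly
        mdadm --stop /dev/md0

        # Force-assemble the array from the three original members
        mdadm --assemble --force --verbose /dev/md0 /dev/sd[cab]

        # Confirm it came back; a degraded 4-disk RAID5 shows [4/3] [UUU_]
        cat /proc/mdstat
        mdadm --detail /dev/md0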
    • jw123 wrote:

      I can see the file system online but unmounted, and the RAID shows but status is "clean, degraded"
      It is supposed to be unmounted. The filesystem on the array isn't automounted when the array is fixed.

      jw123 wrote:

      Should I "recover" the RAID with the 3 disks or shutdown put 4th back in and reboot then recover?
      Post the output of: cat /proc/mdstat. You want the array to be ready for a reboot first.

      jw123 wrote:

      Do I need to reformat before inserting the 4th drive again?
      Don't do anything until I can see the output of the above command.

      jw123 wrote:

      Also, should I mount the file system or fix the RAID first?
      If you don't have a backup (raid is not backup), you could mount it now to save files.
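
      If there are files on the array that exist nowhere else, the degraded array can be mounted read-only and copied off before any further repair (a hedged sketch; the mount point /mnt/md0 is just an example, not taken from the thread):

        # Mount the degraded array read-only to rescue data first
        mkdir -p /mnt/md0
        mount -o ro /dev/md0 /mnt/md0

        # ...copy anything irreplaceable elsewhere, then unmount
        umount /mnt/md0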
    • ryecoaaron wrote:

      Post the output of: cat /proc/mdstat. You want the array to be ready for a reboot first.
      Wow, thanks for the quick reply. The result of that command is:

      Source Code

      Personalities : [raid6] [raid5] [raid4]
      md0 : active (auto-read-only) raid5 sdc[0] sda[2] sdb[1]
            3906765824 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]

      unused devices: <none>
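
      At this point the array is back but degraded ([4/3] [UUU_]) and marked auto-read-only, which also keeps any pending resync or reshape from progressing. This is not the exact route taken later in the thread (that happened from SystemRescueCd), just a sketch of how that state can be inspected and released:

        # Show array state, including any pending reshape
        mdadm --detail /dev/md0

        # Clear auto-read-only so the array can resume its resync/reshape
        mdadm --readwrite /dev/md0

        # Watch progress
        watch cat /proc/mdstat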
    • I think I've fixed it.

      So I ran SystemRescueCd like you said, but all I did was run fdisk -l. It showed the RAID capacity as 3.6TB (or similar), which is what it was with 3 drives. I ran a few other fdisk commands to see which disks it was showing, then ran fdisk -l again, and it said the RAID capacity was 5.8TB (or similar).

      I rebooted into OMV and it had repaired the RAID. It took a few hours to sync the RAID again, but it is now showing a clean RAID, and I'm now just resizing the filesystem.

      Thanks for all your help ryecoaaron, I owe you a few beers!
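
      For anyone landing here later: after the grow finishes, the filesystem on the array still has to be expanded to use the new space. Assuming the filesystem on /dev/md0 is ext4 (an assumption, not stated in the thread), the command-line equivalent of the resize mentioned above is:

        # Only resize once the array has finished reshaping and is clean
        cat /proc/mdstat

        # Grow the ext4 filesystem to fill the enlarged array
        # (ext4 supports online growth, so it can stay mounted)
        resize2fs /dev/md0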