Lost RAID upgrading from v0.5x to v1.0

    • OMV 1.0
    • Resolved


    • Lost RAID upgrading from v0.5x to v1.0

      I noticed that my OpenMediaVault (OMV) install was fairly out of date, so I decided to upgrade. I performed an in-place upgrade to v1.0 but, at some point during the upgrade, I lost network access. After some reading, I found that manually creating the "/run/network" directory would bring my network back. That worked, and I then completed the upgrade (I can't remember the exact command now), but I lost the RAID: for whatever reason, it would no longer assemble.

      Eventually, I decided to save off some config files, unplug my RAID drives, and reinstall from the v1.9 ISO. The install went fine, but when I shut down, plugged my data drives back in, and tried to assemble the RAID, things were no better. I was finally able to assemble the RAID 5 with 3 of the 4 drives, but I couldn't mount it. I've exhausted everything I've found on the web or from my own experience, and I'm half sick with stress that I've lost my data.

      I think there may also be issues with the superblock of either the RAID or the individual drives. Before trying OMV v1.9, I tried to restore a superblock on my 4th hard drive, which had no issues prior to the recent chaos, but it didn't work; I honestly don't recall what the error was at this point. Below is some information that will hopefully prove useful. Sorry for the wall of text, but is anyone able to help?
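
      For anyone retracing my steps: the read-only checks below are roughly the shape of what I ran before anything else (exact flags are a sketch; device names are from my box, where sdb through sde are the four data disks, and may differ on yours):

      # Inspect the md superblock on each member disk (read-only)
      mdadm --examine /dev/sd[bcde]

      # Let mdadm try to assemble from the existing superblocks
      mdadm --assemble --scan --verbose

      The output I captured is below.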

      mdadm --examine /dev/md127

      /dev/md127:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x0
      Array UUID : f5c065f5:4e188600:bd2c9d4f:aabc392e
      Name : OMV-NAS:127 (local to host OMV-NAS)
      Creation Time : Sun May 3 12:45:35 2015
      Raid Level : raid5
      Raid Devices : 4

      Avail Dev Size : 11720035328 (5588.55 GiB 6000.66 GB)
      Array Size : 5860148736 (5588.67 GiB 6000.79 GB)
      Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      State : active
      Device UUID : ce827f2a:5e915227:47327772:e2245925

      Update Time : Mon May 4 16:25:01 2015
      Checksum : bc32949c - correct
      Events : 2

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : spare
      Array State : AAA. ('A' == active, '.' == missing)


      mdadm --detail /dev/md127

      /dev/md127:
      Version : 1.2
      Creation Time : Tue May 5 17:42:14 2015
      Raid Level : raid5
      Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
      Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
      Raid Devices : 3
      Total Devices : 3
      Persistence : Superblock is persistent

      Update Time : Tue May 5 19:32:37 2015
      State : clean, reshaping
      Active Devices : 3
      Working Devices : 3
      Failed Devices : 0
      Spare Devices : 0

      Layout : left-symmetric
      Chunk Size : 512K

      Reshape Status : 4% complete
      Delta Devices : -1, (4->3)

      Name : OMV-NAS:127 (local to host OMV-NAS)
      UUID : 353afb78:aae4adf0:88709556:4f6c72ff
      Events : 322

      Number Major Minor RaidDevice State
      0 8 16 0 active sync /dev/sdb
      1 8 32 1 active sync /dev/sdc
      2 8 48 2 active sync /dev/sdd


      mdadm.conf

      ARRAY /dev/md127 metadata=1.2 name=OMV-NAS:127 UUID=353afb78:aae4adf0:88709556:4f6c72ff


      blkid

      /dev/sdb: UUID="353afb78-aae4-adf0-8870-95564f6c72ff" UUID_SUB="ab61bd1b-0f28-23f2-5c10-cabc0d572726" LABEL="OMV-NAS:127" TYPE="linux_raid_member"
      /dev/sdc: UUID="353afb78-aae4-adf0-8870-95564f6c72ff" UUID_SUB="289d97d0-1b36-efd4-7a0e-fe9e24dc4edc" LABEL="OMV-NAS:127" TYPE="linux_raid_member"
      /dev/sda1: UUID="a37c7912-2b77-4015-a492-48baa013c366" TYPE="ext4"
      /dev/sda5: UUID="c7e184f5-ed7d-48fc-ab29-bcd4102351f3" TYPE="swap"
      /dev/sdd: UUID="353afb78-aae4-adf0-8870-95564f6c72ff" UUID_SUB="2fb57910-8ddf-0a10-171b-9bc7f2841049" LABEL="OMV-NAS:127" TYPE="linux_raid_member"
      /dev/md127: UUID="f5c065f5-4e18-8600-bd2c-9d4faabc392e" UUID_SUB="ce827f2a-5e91-5227-4732-7772e2245925" LABEL="OMV-NAS:127" TYPE="linux_raid_member"


      fdisk -l

      Disk /dev/sda: 164.7 GB, 164696555520 bytes
      255 heads, 63 sectors/track, 20023 cylinders, total 321672960 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000161f7

      Device Boot Start End Blocks Id System
      /dev/sda1 * 2048 308563967 154280960 83 Linux
      /dev/sda2 308566014 321671167 6552577 5 Extended
      /dev/sda5 308566016 321671167 6552576 82 Linux swap / Solaris

      Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdb doesn't contain a valid partition table

      Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdc doesn't contain a valid partition table

      Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdd doesn't contain a valid partition table
      fdisk: unable to read /dev/sde: Inappropriate ioctl for device


      /etc/fstab (just the relevant line for the array)

      UUID=f5c065f5-4e18-8600-bd2c-9d4faabc392e /media/raid-storage ext4 defaults 0 2


      mount -a

      mount: wrong fs type, bad option, bad superblock on /dev/md127,
      missing codepage or helper program, or other error
      In some cases useful info is found in syslog - try
      dmesg | tail or so
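
      The follow-up checks that error message points at would be along these lines (a sketch; fsck.ext4 with -n is read-only and makes no changes):

      # Look for md/ext4 errors logged at mount time
      dmesg | tail -n 30

      # Read-only filesystem check; -n answers 'no' to every repair prompt
      fsck.ext4 -n /dev/md127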
    • I tried:
      mdadm --create /dev/md127 -v -l 5 -n 4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

      That failed, as it seems something is wrong with /dev/sde, so I tried:
      mdadm --create /dev/md127 -v -l 5 -n 4 /dev/sdb /dev/sdc /dev/sdd missing

      That created it fine with 3 out of 4 drives, but I couldn't do anything else with it.
    • Using the --create flag was a bad idea. You created a new array, overwriting the previous metadata; it won't even know there is a filesystem on it, as you have found. Photorec may be your only choice now. Just a note: you should only use the --assemble flag when fixing an array, and if a plain assemble doesn't work, adding --force can help, as sketched below.
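
      To spell out that safe sequence (commands are a sketch; device names taken from your listings):

      # Stop the half-assembled or newly created array first
      mdadm --stop /dev/md127

      # Try a normal assemble from the superblocks
      mdadm --assemble /dev/md127 /dev/sdb /dev/sdc /dev/sdd /dev/sde

      # Only if that fails, retry with --force to override stale event counts
      mdadm --assemble --force /dev/md127 /dev/sdb /dev/sdc /dev/sdd /dev/sde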
    • That wasn't my first attempt to get the array back; among other things, I tried the OMV GUI, which wouldn't even list the RAID. But --create was the latest attempt, and that's all that matters. Thanks for the advice; I'll see if Photorec can recover anything. If nothing else, it helps quite a bit to know that any further attempts at assembling it will fail, so I can stop wasting my time.

      As a side note, I accept full responsibility for my relative ignorance in trying to get my RAID operational again, but the reason it was even required is very upsetting, starting with a common bug in the upgrade process.
    • pugsley42 wrote:

      As a side note, I accept full responsibility for my relative ignorance in trying to get my RAID operational again, but the reason it was even required is very upsetting, starting with a common bug in the upgrade process.

      What was the bug?
    • During the upgrade, it lost networking. There are several posts about this on the web; essentially, the "/run/network" directory no longer existed. Once I found that out, I manually created the directory, ran ifup, and all was good (a sketch follows the list below). If you reboot, though, the directory disappears again and has to be recreated. There appeared to be two reported ways to make the directory stick:

      1. Create the directory as root (not from a regular account using sudo).
      2. Use omv-firstaid to set up networking again.
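
      A minimal sketch of that workaround (the interface name is an example; use whatever yours is called):

      # As root, not via sudo from a regular account
      mkdir -p /run/network
      ifup eth0

      # Alternatively, reconfigure networking so the fix persists across reboots
      omv-firstaid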

      I had done both before rebooting, so I can't say which method worked, but one of them did. After that, I was able to complete the upgrade with the "-f" option.

      What started me down the rabbit hole was thinking it was a good idea to upgrade a working system to the next version. During the upgrade, my networking was killed, which halted the upgrade. After getting it back, my RAID played hide-and-seek with me, and I lost. In fact, I lost 2TB of data. I tried getting it back through the GUI, but it would lock up every time I tried to look at the storage. Everything I read from people who had lost their RAID suggested some success with the "create" option, although there were many other things I tried first. I hadn't read anywhere that it would kill the RAID, but you live and learn. It may be some freakish mix of hardware and software issues, but the fact that it all worked before makes me wish I hadn't upgraded at all.

      I'm now trying to get the data back with Photorec and, though the software is great, the contents will never be the same even if I recover them. I could spend months just renaming files, presuming I even knew the original names. I'm still desperately clinging to any hope of getting the RAID back in shape, but that hope is swiftly fading.
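
      For anyone else who ends up here, starting Photorec against the assembled device is as simple as the line below (it is interactive from there; /dev/md127 assumes the array device name from earlier in the thread):

      # File-carving recovery; recovered files lose their original names and paths
      photorec /dev/md127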