Corrupted file system, data not visible

    • Dear forum,

      After a somewhat failed system upgrade from OMV 2 to OMV 3 I was forced to reinstall OMV and chose the current version (4.1.23-1). After extensive research in the forums I learned that OMV should be capable of rediscovering previously created RAIDs without major problems, so I went ahead (I must admit that my own stupidity kept me from making backups; if I had them I would not be writing here). After the reinstall of the OS the RAIDs from the old system were detected correctly, but I was not able to mount them. Looking at the filesystems brought an unpleasant surprise: several ZFS headers were found on both the RAID0 and the RAID1. Using wipefs I was able to remove these ZFS headers, hoping that my original EXT4 header could be found underneath that mess.
      Unfortunately I had no luck.
      Then, without filesystem headers, I tried using testdisk and fsck to restore and rebuild the wiped partition tables. I am now at a point where my RAID0 seems to have a filesystem again, as I can indeed mount it, as can be seen in the screenshot below. Also visible is the fact that OMV seems to see used disk space.
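
      For anyone following along, the commands below are roughly the non-destructive checks that can be run at this stage; /dev/md127 is an assumption taken from the screenshot, and the backup superblock number 32768 only applies to a 4 KiB block size, so the mke2fs -n output is what actually counts:

        # list remaining filesystem signatures without erasing anything
        wipefs /dev/md127

        # ask mke2fs where the backup superblocks would be on this device
        # (-n means: do not create anything, just print what would be done)
        mke2fs -n /dev/md127

        # try to read the primary superblock; if it is gone, retry e2fsck
        # with one of the backup superblock locations reported above
        dumpe2fs -h /dev/md127
        e2fsck -b 32768 /dev/md127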


      Now to my current problem. The 1.89 TiB of storage is not accessible at the moment, neither through SSH nor through protocols such as SMB. When I check the mount point through SSH, the partition is simply empty.
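
      If it helps, something like the following should show where the array is actually mounted and whether the used space that OMV reports corresponds to anything visible at the mount point; the device name /dev/md127 is again an assumption:

        # where is the array mounted, and what does the kernel think is on it?
        findmnt /dev/md127
        lsblk -f /dev/md127

        # list the mount point itself (including lost+found, if fsck put anything there)
        ls -la "$(findmnt -no TARGET /dev/md127)"
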
      And now a small disclaimer: I know that I am an idiot for not backing this up. But the data stored on this RAID0 does not mean the world to me, and nobody is going to die if I lose it all.

      Thank you so much in advance for your help. I would not be writing this post if I hadn't already tried everything within my limited imagination.
    • Wow, that's a mess. You could have used wipefs to remove just the ZFS info and leave the ext4 file system intact.

      However, according to the image you have 2 RAID arrays, /dev/md0 labelled MediaBack... and /dev/md127 labelled Media, so you have 2 RAID mirrors?

      Judging by the image they are both mounted but not referenced.
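
      It would also help to see the output of the commands below (device names taken from the image), to confirm what state both arrays are in and which signatures they carry now:

        cat /proc/mdstat                      # state and level of both arrays
        mdadm --detail /dev/md0 /dev/md127    # members, level, sync status
        blkid /dev/md0 /dev/md127             # filesystem signature (if any) on each array
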
      Raid is not a backup! Would you go skydiving without a parachute?
    • Yes, I know it's a mess... first of all, thank you for the reply!

      geaves wrote:

      you could have used wipefs to remove just the ZFS info and leave the ext4 file system intact.
      That was the plan, but the ext4 header was not there anymore, for some very strange reason!


      geaves wrote:

      However, according to the image you have 2 RAID arrays, /dev/md0 labelled MediaBack... and /dev/md127 labelled Media, so you have 2 RAID mirrors?
      True! But the RAID labelled /dev/md0 is empty and has only just been created. It's there as a possible backup location for the /dev/md127 RAID (which holds the data I would love to restore), in case heavy modifications are needed.
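
      For what it's worth, before any further repair attempts a raw block-level copy of /dev/md127 onto /dev/md0 would at least freeze the current state, so every experiment can be rolled back; this sketch assumes /dev/md0 is at least as large as /dev/md127 and that whatever is currently on /dev/md0 may be destroyed:

        # block-for-block copy of the damaged array onto the spare one;
        # noerror keeps going past read errors, sync pads failed blocks with zeros
        dd if=/dev/md127 of=/dev/md0 bs=64M status=progress conv=noerror,sync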


      geaves wrote:

      Judging by the image they are both mounted but not referenced.
      True! As soon as I reference /dev/md126 and "check out its content", there is nothing on it.
    • Well, I tried SSH on the actual Linux machine running OMV, I tried a Windows 10 machine, and I tried my MacBook Pro...

      My guess is the following: when I recreated the EXT4 filesystem, it was simply created on the blocks that were not in use at that time, instead of reconstructing the old filesystem and trying to reassemble the data.
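
      If that guess is correct, the superblock of the recreated filesystem should show a brand-new creation date and almost no used blocks; a read-only way to check this (assuming the array is /dev/md126, or /dev/md127 depending on how it was assembled after the last reboot):

        # read-only look at the superblock: creation time, total and free block counts
        dumpe2fs -h /dev/md126 | grep -iE 'created|block count|free blocks'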