Corrupted file system, data not visible

  • Dear forum


    After a somewhat failed system upgrade from OMV 2 to OMV 3 I was forced to reinstall OMV and chose the current version (4.1.23-1). After extensive research in the forums I learned that OMV should be capable of rediscovering previously created RAIDs without major problems, so I went ahead with the upgrade (I must point out at this point that my own stupidity kept me from making backups; if I had them, I would not be writing here). After the upgrade/reinstall of the OS, the RAIDs from the old system were detected correctly, but I was not able to mount them. Looking at the file systems revealed an unpleasant surprise: several ZFS headers were found on both the RAID0 and the RAID1. Using wipefs I was able to remove these ZFS headers, hoping that my original EXT4 header would turn up underneath that mess.
    Unfortunately I had no luck.
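    (For anyone retracing these steps: wipefs run without options only lists the signatures it finds, and --backup saves a copy of every block it erases, so the removal can be undone. The device name below is just an example, not necessarily the one from my setup.)

        # list all detected signatures without erasing anything
        wipefs /dev/md127

        # erase all signatures, saving each wiped block to ~/wipefs-md127-<offset>.bak
        wipefs --all --backup /dev/md127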
    Then, with no file system headers left, I tried using testdisk and fsck to restore and rebuild the wiped partition tables. I am now at a point where my RAID0 seems to have a file system again, as I can indeed mount it, as can be seen in the screenshot below. Also visible is the fact that OMV seems to see used disk space.
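    (A sketch of how one can check whether any ext4 metadata survived, using backup superblocks; the device name and block number below are only illustrative, not from this system:)

        # does anything on the array still identify itself as a file system?
        blkid /dev/md127

        # dry run: print where mke2fs *would* place backup superblocks, without writing anything
        mke2fs -n /dev/md127

        # try fsck against one of the reported backup superblock locations
        fsck.ext4 -b 32768 /dev/md127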



    Now to my current problem: the 1.89 TiB of storage is not accessible at the moment, not through SSH and not through protocols such as SMB. When I check the mount point through SSH, the partition is simply empty.
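    (Roughly what I mean by "checking the mount point through SSH"; the mount path is a placeholder for the real one under /srv:)

        # is the array mounted, and does the kernel agree that space is used?
        mount | grep md
        df -h /srv/<mount-point>

        # list everything at the mount point, including hidden entries and lost+found
        ls -la /srv/<mount-point>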


    Thank you so much in advance for your help. I would not be writing this post if I hadn't tried everything in my limited imagination.

    • Official Post

    Wow, that's a mess. You could have used wipefs to remove just the ZFS info and leave the ext4 file system intact.
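    (For reference, wipefs can be restricted to a single signature type, so only the ZFS labels would have been touched; the device name is an example:)

        # remove only the zfs_member signatures and keep anything else (e.g. ext4) in place
        wipefs --all --types zfs_member --backup /dev/md127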


    However, according to the image you have 2 RAID arrays, /dev/md0 labelled MediaBack... and /dev/md127 labelled Media, so you have 2 RAID mirrors?


    Judging by the image they are both mounted but not referenced.
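    (The quickest way to confirm what the two arrays actually are, independent of the screenshot:)

        # which arrays has the kernel assembled, and at what RAID level?
        cat /proc/mdstat

        # level, member disks and state for each array
        mdadm --detail /dev/md0
        mdadm --detail /dev/md127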

  • Yes, I know it's a mess... first of all, thank you for the reply!


    you could have used wipefs to remove just the ZFS info and leave the ext4 file system intact.

    That was the plan, but the ext4 header was not there anymore, for some very strange reason!



    However, according to the image you have 2 RAID arrays, /dev/md0 labelled MediaBack... and /dev/md127 labelled Media, so you have 2 RAID mirrors?

    True! But the RAID labelled /dev/md0 is empty and has only just been created. It is there as a possible backup location for the /dev/md127 RAID (which holds the data I would love to restore), in case heavy modifications have to be made.



    Judging by the image they are both mounted but not referenced.

    True! As soon as I reference /dev/md126 and "check out its content", there is nothing on it.

  • Well, I tried SSH on the actual Linux machine running OMV, I tried a Windows 10 machine, and I tried my MacBook Pro...


    My guess is the following: when I recreated the EXT4 file system, it was created on the blocks that were not in use at that time, instead of rebuilding the original file system and trying to reassemble the data.

    • Official Post

    Exactly what I did!

    OK, why not simply start again if, as you say, it would not be the end of the world if the data were lost? What's done is done; you have tried the most obvious approach, so why not start over, lose the RAID, and use MergerFS + SnapRAID.
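    (In case that route is taken, a minimal sketch of what a MergerFS + SnapRAID setup can look like; all labels and paths below are placeholders, not taken from this system:)

        # /etc/fstab: pool two data disks into one mount point with mergerfs
        /srv/disk1:/srv/disk2  /srv/pool  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0 0

        # /etc/snapraid.conf: one parity disk protecting the two data disks
        parity  /srv/parity/snapraid.parity
        content /var/snapraid.content
        content /srv/disk1/snapraid.content
        data d1 /srv/disk1/
        data d2 /srv/disk2/

        # run after changing data to update parity
        snapraid sync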

  • Well... giving in is not the most elegant solution, tbh.
    I was hoping for some clever idea about whether, and to what extent, it is possible to reconstruct a partition table from scratch.

  • Hi, I am very sorry for the late response; work caught up with me.


    So, I just ran du -h in /srv on the mounted disk and a f***ton of folders showed up, all located in ./lost+found. I think this is good news!
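    (For anyone in the same situation: fsck puts recovered files into lost+found named by inode number, so the original names and paths are gone, but the contents may well be intact. The mount path is a placeholder:)

        # the entries are named like #12345 (their inode numbers)
        ls /srv/<mount-point>/lost+found | head

        # how much data landed there, and what kind of files are they?
        du -sh /srv/<mount-point>/lost+found
        file /srv/<mount-point>/lost+found/* | head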
