Update from 3.x to 4.x has dropped my SnapRaid config?

  • Hi All,


    I've been putting off my OMV 3.x -> 4.x upgrade for a while, but decided to give it a shot a few hours ago. I uninstalled all the 3.x plugins that had no 4.x counterpart and then rolled through with omv-update.


    All seemed to go pretty well from a noob's point of view. However, I found that the SnapRAID plugin was no longer installed, so I went ahead and installed it. My previous config for SnapRAID was 8 x 4TB drives (7 data, 1 parity) which were presented as a single filesystem using UnionFS. I also have 2 x 500GB drives I use for VirtualBox and a single 30GB SSD for the OMV install.


    Looking at the Disks page, everything seems to be there;


    Disks.jpg


    When I get to Filesystems, however, all but two of my SnapRAID devices are showing as offline, with the last two disks seemingly having lost their SnapRAID labels. My UnionFS is showing online, and it appears I can access it via Samba, though I haven't used it for fear of corruption;


    FileSystems.jpg


    If I look at the UnionFS config, it is only showing 1 disk;

    Union.jpg


    When I go into the SnapRAID GUI there are no drives shown, and when I click Add Drive I get this;

    snapraid.jpg


    I'm quite a novice at this, so would really appreciate it if anyone could give me some pointers on what to do next to get my config back online properly.

  • Have you cleared your browser cache?

    --
    Google is your friend and Bob's your uncle!


    A backup strategy is worthless unless you have a restore strategy that has been verified to work by testing.


    OMV AMD64 7.x on headless Chenbro NR12000 1U Intel Xeon CPU E3-1230 V2 @ 3.30GHz 32GB ECC RAM.


    I have now, and even tried different browsers. No difference. The issue above was not the only one I experienced after the upgrade, though I think it is likely the root of them all. I should have two filesystems: one called NASDATA, comprised of 8 x 4TB drives in SnapRAID presented via UnionFS, and one called NASSYSTEM, comprised of 2 x 500GB drives in software RAID1.


    The other issues I had with the upgrade were that ClamAV will not start, though I wonder if this is because of the underlying filesystem issues. I also had an issue with VirtualBox showing as uninstalled (just like SnapRAID) after the upgrade; I had VirtualBox configured to use the NASSYSTEM filesystem exclusively. Fail2ban also would not start post-upgrade, however an uninstall and re-install seems to have fixed that.


    I'm only really used to dealing with OMV via the GUI, so am at a complete loss as to where to go from here.


    I do have everything on my NASDATA filesystem saved in my Business GDrive account, and there is nothing on the NASSYSTEM filesystem that can't be rebuilt in time. My absolute preference though would be to get this all up and running again as restoring 25TB+ from Gdrive is going to take a month or more on my Internet connection.


    Not sure it is required (have seen it requested in other threads), but I have attached a status report and my config.xml file if this helps zero in on anything.


    *EDIT*


    Should also point out I have not rebooted since the upgrade as I was a bit worried I might lose everything.

  • Should also point out I have not rebooted since the upgrade as I was a bit worried I might lose everything.

    It would be completely unreasonable to expect an upgrade of this magnitude to work without rebooting.


    Also, since OMV 3 reached end of life long ago and its repositories have been shuttered, it may not even be upgradeable at this point.


    Another thing to consider is that OMV 3 mounts data drives in /media. OMV 4 and up uses /srv. This alone would require a reboot to be picked up.
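
    For illustration only, the corresponding fstab entries change roughly like this between the two releases (the NASDISK label and the mount options here are made up, not taken from your system):


    Code
    # OMV 3.x style: data drives mounted under /media (label and options illustrative)
    /dev/disk/by-label/NASDISK  /media/NASDISK                  ext4  defaults,nofail  0 2
    # OMV 4.x and later: the same drive mounted under /srv (path illustrative)
    /dev/disk/by-label/NASDISK  /srv/dev-disk-by-label-NASDISK  ext4  defaults,nofail  0 2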


  • I was absolutely expecting a reboot at some stage; I just didn't want to go ahead if what I was seeing was an indication of a bigger issue, as I've only read of a few people having filesystem issues through the upgrade, with no solid solution found in any of their cases. I have now rebooted and no longer have access to my data.


    I do notice, though, that syslog is full of messages about mount point errors. I'll start digging into that.
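
    For the digging, I'm just pulling the mount-related lines out of the log with plain grep first (log path is the standard Debian one):


    Code
    # show the most recent mount-related messages from the system log
    grep -i 'mount' /var/log/syslog | tail -n 50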

  • I'm not convinced this is an OMV issue; it seems to be some sort of disk labeling issue, perhaps?


    wipefs seems to be reporting the drives that aren't mounting as ZFS members. I've never installed ZFS, so would this be right?



  • I was absolutely expecting a reboot at some stage; I just didn't want to go ahead if what I was seeing was an indication of a bigger issue, as I've only read of a few people having filesystem issues through the upgrade, with no solid solution found in any of their cases. I have now rebooted and no longer have access to my data.


    I do notice, though, that syslog is full of messages about mount point errors. I'll start digging into that.

    You should be able to mount your data drives in the Filesystems page.


  • All the buttons on the Filesystems page are greyed out for each entry, with the exception of /dev/sdj1.


    I think I am getting closer to what is causing this; how to fix it will be another thing. As above, the output from wipefs shows that all the drives not being mounted are being reported as zfs_member.
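
    For reference, the check I'm referring to is just the read-only wipefs listing (sdX1 being a placeholder for each of my partitions):


    Code
    # list every filesystem/RAID signature wipefs can see on the partition (read-only, changes nothing)
    sudo wipefs /dev/sdX1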


    My fstab file shows that it is referencing all sources by label. These by-label links are automatically generated by udev using blkid as far as I can tell, specifically by /lib/udev/rules.d/60-persistent-storage.rules, here;


    Code
    # by-label/by-uuid links (filesystem metadata)
    ENV{ID_FS_USAGE}=="filesystem|other|crypto", ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-uuid/$env{ID_FS_UUID_ENC}"
    ENV{ID_FS_USAGE}=="filesystem|other", ENV{ID_FS_LABEL_ENC}=="?*", SYMLINK+="disk/by-label/$env{ID_FS_LABEL_ENC}"
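

    Another way to see what udev has actually recorded for a device is to ask it directly; this is just the stock udevadm query, nothing OMV-specific:


    Code
    # dump the udev properties for the partition and keep the filesystem-related ones
    udevadm info --query=property --name=/dev/sdd1 | grep '^ID_FS'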


    So, looking at one of the drives that is mounting, you can see the ID_FS_LABEL_ENC property required for labels is being populated;
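
    On a healthy partition the filesystem-related properties come back something like this (the label and UUID below are placeholders rather than my actual values):


    Code
    # illustrative example only -- the label and UUID are placeholders
    blkid -o udev -p /dev/sdX1
    ID_FS_LABEL=EXAMPLE
    ID_FS_LABEL_ENC=EXAMPLE
    ID_FS_UUID=00000000-0000-0000-0000-000000000000
    ID_FS_UUID_ENC=00000000-0000-0000-0000-000000000000
    ID_FS_VERSION=1.0
    ID_FS_TYPE=ext4
    ID_FS_USAGE=filesystem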



    Using the same query on any of the partitions that aren't mounting, I get this;


    Code
    root@TryanNAS:~# blkid -o udev -p /dev/sdd1
    ID_FS_AMBIVALENT=filesystem:ext4:1.0 filesystem:zfs_member:5000
    root@TryanNAS:~#


    What's weird (to me at least) is that if I query the partition label via other methods, it returns what I expect;

    Code
    root@TryanNAS:~# e2label /dev/sdd1
    4TB1
    root@TryanNAS:~#


    Anyone have any idea why this might be or how I might fix it?

  • So I seem to have made progress here. 100% the problem exists between the keyboard and the chair.


    I revisited wipefs after reading the man page for blkid, which states that ID_FS_AMBIVALENT is returned when more than one filesystem is detected, which is exactly what is shown in the output above. I was super confused, as I have never used the ZFS plugin for OMV, and wondered why some disks had this ZFS signature while others didn't. The penny dropped when I remembered that waaaaaay back I used to use FreeNAS, which is ZFS-based. When I created my filesystems in OMV 3 it must not have cleared all the old disk signatures. No idea how this wasn't an issue under Jessie, as blkid/udev behavior should have been the same according to the man page.


    Anyway, I went back to wipefs and deleted all signatures (ZFS had left 13 signatures per partition) on each disk, while ensuring I took a backup;


    Code
    sudo wipefs --all --force --backup /dev/sdX1
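

    Worth noting if anyone repeats this: wipefs writes one backup file per wiped signature into the invoking user's home directory, and re-running it with no options is a read-only way to confirm the partition is now clean (sdX1 is a placeholder):


    Code
    # confirm no signatures remain (read-only listing; empty output means the wipe got everything)
    sudo wipefs /dev/sdX1
    # the backups land in the home directory, one file per wiped signature
    ls ~/wipefs-sdX1-*.bak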


    Then I used dd to restore just the ext4 partition signature;


    Code
    dd if=~/wipefs-sdX1-0x00000438.bak of=/dev/sdX1 seek=$((0x00000438)) bs=1
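

    After the restore, re-probing the partition should show the ext4 type and label again, with no ID_FS_AMBIVALENT line (again, sdX1 is a placeholder):


    Code
    # re-probe: expect ID_FS_TYPE=ext4 and the label back, and no ID_FS_AMBIVALENT
    blkid -o udev -p /dev/sdX1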


    Within seconds of doing this OMV started mounting my partitions, and following a reboot the UnionFS was showing all branches as present. Most importantly, I can access all my data again.


    My SnapRAID config remains trashed. I've checked each partition I had under SnapRAID and the old snapraid.conf/content/parity files are all still there, however when I go into the SnapRAID GUI this is not picked up, it seems. I'll need to read up on whether I should delete these SnapRAID files or if I can safely import them somehow. That is tomorrow's job.

  • Ok, so "restoring" SnapRAID from the command line seems to have been a breeze. I just dropped the snapraid.conf file saved on any one of my drives over the running config in /etc/snapraid.conf. I ran various checks from the command line and the configuration looks to be fine.
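

    Roughly what that looked like, for anyone in the same boat (the source path below is illustrative; use whichever data disk holds your saved copy of snapraid.conf):


    Code
    # copy the saved config over the live one (source path is illustrative)
    cp /srv/dev-disk-by-label-4TB1/snapraid.conf /etc/snapraid.conf
    # read-only sanity checks that parse the config and read the existing content/parity files
    snapraid status
    snapraid diff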


    The issue I have now is that even after doing this and rebooting my NAS, no drives are showing up in the Drives tab of the SnapRAID plugin. I can't see this covered anywhere in the OMV SnapRAID guide, so I'm just unsure whether I need to do something in particular for the OMV plugin to ingest these drives so they are shown, or whether I need to just add them manually again using the same data/content/parity configuration and settings as are already in use.


    Anyone have any guidance here?


    *Edit*


    Not sure if it has any bearing, however I can use any menu item off the Tools drop-down on the Drives tab and it works. This seems to be purely an issue with the OMV SnapRAID plugin, almost as if it does not pull in the SnapRAID disk configuration from snapraid.conf. Would this data be stored anywhere else?
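

    My guess (and it is only a guess) is that the plugin keeps its drive list in the OMV database at /etc/openmediavault/config.xml rather than reading snapraid.conf, so a crude way to see whether anything is recorded there might be something like this (the node layout is an assumption on my part):


    Code
    # look for a snapraid section in the OMV configuration database (node layout is a guess)
    grep -A 5 -i 'snapraid' /etc/openmediavault/config.xml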

  • Nicoloks

    Added the label "solved".
  • Ended up just re-adding all the disks in the OMV plugin, making sure to keep the same data, content and parity settings per disk, and it seems to be working fine. Looks like everything is up and going again for me now. The underlying solution for me was stripping the ZFS partition signatures off, as per this post, which then allowed me to mount my filesystems again.
