ZFS not mounting - OMV 4.x

  • Hi


    Since I had to upgrade some things on my NAS, I decided to go with OMV 4.x testing. The NAS is running just fine, except that ZFS doesn't mount after a reboot: the shared folder directories are created before the ZFS mount occurs, so ZFS refuses to mount over them. Each time I reboot the NAS I have to SSH in, delete the shared folder directories, and mount the ZFS pool manually; then it is all good (roughly the steps sketched below).
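
    A minimal sketch of that manual recovery, assuming a pool named "tank" mounted at /tank (both names are placeholders, adjust to your setup):

        zfs mount tank     # fails with "directory is not empty" while the stale folders exist
        rmdir /tank/*      # rmdir only removes empty directories, so real data stays safe
        zfs mount -a       # mount all datasets of every imported pool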


    This issue has already been reported on the bug tracker:
    0001851
    0001827




    Both bug reports are marked as "solved", but for me nothing has changed. So before I re-open the bug, I wanted to ask if anyone else is having this issue or if it is just me.


    Thanks

  • Same problem here. Here are the details.
    This is the ZFS plugin [screenshot]:

    And this is the file system tab; as you can see, the Mount button is grayed out [screenshot]:

    Finally, this is the status of the ZFS mirror [screenshot]:

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

    • Official Post

    No, and I really can't help here anymore, because I'm not using ZFS and I'm not the maintainer of the ZFS plugin. The Git commit https://github.com/openmediava…a81a3d1329dc0b98996673e09 added the zfs-mount dependency in the hope of getting this working. It works during my tests, so I'm out of ideas. Maybe this is the time for users to contribute to the OMV project by investigating this issue themselves to get it working.
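
    For anyone who wants to dig into it: such a dependency boils down to a systemd ordering constraint along these lines in the affected mount units (a rough sketch of the pattern, not a copy of the commit):

        [Unit]
        # make sure ZFS datasets are mounted before this unit is started
        Requires=zfs-mount.service
        After=zfs-mount.service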

  • Found this in syslog:
    monit[1456]: 'mountpoint_ZFS' status field (1) -- /ZFS is not a mountpoint


    but I'm not sure what it means :( It looks like nothing is actually mounted at /ZFS?
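
    To double-check by hand (a quick sketch, assuming the pool is supposed to mount at /ZFS):

        mountpoint /ZFS                       # exits with status 1 when nothing is mounted there
        zfs list -o name,mounted,mountpoint   # shows which datasets ZFS itself thinks are mounted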


  • Solved the problem thanks to the log (a rough CLI equivalent is sketched after the list):

    • uninstalled the ZFS plugin
    • deleted every folder inside /ZFS after checking that they were empty
    • reinstalled the plugin; now the ZFS pool is no longer imported automatically
    • imported my pool, and now it is working
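
    A minimal sketch of the same recovery on the command line, assuming the plugin package name openmediavault-zfs and a pool named "tank" mounted at /ZFS:

        apt-get remove openmediavault-zfs              # uninstall the plugin
        zpool export tank                              # export the pool if it is still imported
        find /ZFS -mindepth 1 -type d -empty -delete   # removes only empty leftover directories
        apt-get install openmediavault-zfs             # reinstall the plugin
        zpool import tank                              # re-import; datasets now mount into the empty path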


    • Official Post

    Solved the problem thanks to the log:

    • uninstalled the ZFS plugin
    • deleted every folder inside /ZFS after checking that they were empty
    • reinstalled the plugin; now the ZFS pool is no longer imported automatically
    • imported my pool, and now it is working

    Reboot and tell me if everything is working properly. IMO there are still some issues with the sharedfolders units and ZFS mounting, especially on big pools.
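
    After the reboot, a quick way to check both sides (assuming OMV's generated sharedfolder units follow the usual sharedfolders-<name>.mount naming):

        zfs mount                               # lists every currently mounted ZFS dataset
        systemctl list-units 'sharedfolders-*'  # the generated bind-mount units should all be active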

  • Rebooted and still working. I shared the ZFS mirror with Windows, and I can still access and open files.


    • Official Post

    Good to know.



    @votdev have you thought about, instead of using Requires=, creating a custom target for those units that runs last, maybe even after multi-user.target? Hypothetically, tomorrow there could be a "yfs" that also doesn't obey fstab rules, and then, if there is a plugin for it, you would have to add another Requires= for that particular "yfs".


    If there were a custom target, we could hook into it from the plugin using systemd overrides shipped with the package. Just an idea; a rough sketch follows.
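
    Something along these lines (all file and target names here are hypothetical, just to illustrate the idea):

        # /etc/systemd/system/omv-local-fs.target (hypothetical target shipped by OMV core)
        [Unit]
        Description=All local filesystems are mounted, including non-fstab ones

        # drop-in shipped by the ZFS plugin, e.g. /etc/systemd/system/omv-local-fs.target.d/zfs.conf
        [Unit]
        Requires=zfs-mount.service
        After=zfs-mount.service

        # the generated sharedfolder mount units would then only need:
        [Unit]
        Requires=omv-local-fs.target
        After=omv-local-fs.target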


  • Hi


    @votdev
    @subzero79


    Thanks for the feedback. Like @Blabla did, I will remove my ZFS installation and reinstall the plugin from scratch in the next few days. I will let you know if that helps.
