ZFS device(s) not listed in devices dropdown

  • Hi!


    I created an issue for OMV yesterday because I can't create new shares on my ZFS device anymore. The issue got closed because it seems to be caused by the ZFS plugin (see the last post of the issue).


    When I set up my NAS a few months ago, I used OMV v4.0.X with the ZFS plugin v4.0 to create the ZFS pool and all shares. After setting up the NAS I updated all system packages on a regular basis (but not the kernel).


    Yesterday I tried to create a new shared folder, but the device dropdown menu in the "Add shared folder" popup is empty. The device name for all previously created shares is set to "n/a" in the shared folder overview, but all shared folders are mounted and accessible.


    When I try to edit an existing shared folder, I get the following error message:



    Some basic info about my system:


    Kind regards
    Teeminus

  • Same for me.
    The disk usage tab for my ZFS pool has vanished from the GUI!


    and here too:
    Finding the correct mntent UUID for a filesystem not in config.xml


    There is definitely a problem with the ZFS plugin or something related to it, and it's not the kernel version...


    Check if you still have the <mntent> section for your ZFS pool in config.xml. You can copy the section back, but it will be deleted again at some point... I don't know by what at the moment.
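
    If you want to check from the shell, something like this should do. I'm assuming the default database path and a pool named "tank" here; xmlstarlet should already be installed on an OMV box. Keep a copy before you touch anything:

        # keep a copy of the current database before editing it
        cp -a /etc/openmediavault/config.xml /root/config.xml.backup

        # list all mntent entries; the ZFS pool should show up with type "zfs"
        xmlstarlet sel -t -c "//system/fstab/mntent" -n /etc/openmediavault/config.xml

        # or simply grep around the pool name
        grep -n -B2 -A6 'tank' /etc/openmediavault/config.xml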



  • Did you export / import the pool via the WebUI for any reason? With OMV 3.x there were issues after exporting / importing the pool.


    The implementation of the ZFS features in OMV is provided by the ZFS plugin. Unfortunately the ZFS functionality is not part of the basic OMV system maintained by votdev. Therefore almost all reported issues are immediately closed for this reason. @ryecoaaron and the other mods here in the forum do their very best to solve some of the issues.


    IMHO ZFS should not be used with OMV, because you always have to expect that something will go wrong after the next update. Far too unstable for something as important as a file system. And checking every update in a virtual machine before installing it on the production system is not a real option for me. I don't have the time for that.


    Personally I have a running OMV 3 system with ZFS. I do not touch it. I know some of the drawbacks of the ZFS plugin and how to bypass them. But in the end I always feel uncomfortable while using ZFS with OMV.


  • The ZFS pool was created 2-3 weeks ago with the plugin.
    It has been running nicely without a problem ever since, and continues to... but something is removing the <mntent> section of the pool from config.xml. If I re-add the section to the OMV config file, all the problems are solved.
    In that case, I doubt that ZFS itself is at fault. It's something in the plugin itself or in OMV.


    • Official post

    There are two problems with zfs:


    1 - zfs pools not being found at boot. This is not an omv issue, but I'm not sure if the plugin could do something to help.


    2 - fixOMVMntEnt is called whenever getObjectTree runs, and getObjectTree is called whenever the plugin's main panel is loaded or when you view details. If the pool/filesystem isn't found at boot time, this will remove the zfs mntent entries for those pools/filesystems. This is bad, but I'm not sure how to fix it since zfs doesn't do things the normal way.
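
    Roughly speaking, the cleanup keys off what ZFS itself reports at that moment. You can see the same thing from the shell before opening the panel:

        # is the pool imported right now?
        zpool list -H -o name,health

        # which datasets does zfs know about, and does it consider them mounted?
        zfs list -H -o name,mountpoint,mounted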


  • But maybe there is a thing to check, like whether the mntent is used by a shared folder, so it isn't deleted if it is still in use?


    • Official post

    But maybe there is a thing to check, like whether the mntent is used by a shared folder, so it isn't deleted if it is still in use?

    That would probably help. I still think this function is dangerous. The mntent entries should be relatively static. This function makes them very dynamic.
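
    A sketch of the kind of check being suggested (the XPath and the default config path are from memory, so they may need adjusting):

        # uuid of the mntent entry in question (example value, take yours from config.xml)
        MNTENT_UUID="replace-with-your-uuid"

        # count the shared folders that still reference that mntent;
        # a non-zero result means the entry is in use and should not be pruned
        xmlstarlet sel -t -v \
            "count(//system/shares/sharedfolder[mntentref='${MNTENT_UUID}'])" -n \
            /etc/openmediavault/config.xml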


  • The fact is that this entry is removed even after boot time... I'm going to test re-adding it and then go to the plugin panel to see what happens...


    EDIT: Bingo! That's it!
    On a currently working config, no reboot...
    - I copy the ZFS pool mount section back into config.xml.
    - In the GUI, I move to several places, tabs, etc... except the ZFS panel tab. No change in config.xml.
    - In the GUI, I move to the ZFS panel. config.xml is modified and the ZFS pool mount section is removed...


    Done several times
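
    A quick way to confirm it from the shell, assuming you kept a backup copy of config.xml (e.g. /root/config.xml.backup as above):

        # checksum before opening the ZFS panel
        md5sum /etc/openmediavault/config.xml

        # ...open the ZFS panel in the WebUI, then check again...
        md5sum /etc/openmediavault/config.xml

        # show exactly what was removed, compared to the backup copy
        diff /root/config.xml.backup /etc/openmediavault/config.xml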


  • I can confirm that when I add in an entry for a "vanished" drive, not only do associated shares continue to fail to show a mount point, but the mntent disk entry is deleted from config.xml post boot, presumably by the mechanism above. Could it be that the ZFS plugin expects some magic in the mntent UUID entries?

    • Official post

    Done several times

    I figured that was causing the problem. Thank you for confirming it. @subzero79 and I are trying to come up with a good solution.


    • Official post

    Why does the ZFS pool have to be present at boot time?

    It doesn't have to be for the plugin, but if they are never found at all, that is a problem. Someone shouldn't have to manually import pools after a reboot.


    Wouldn't it be sufficient to poll the pools/check for the available pools every time the device list is requested?

    Yuck. The filesystems tab updates every few seconds. If you have a lot of pools/filesystems/snapshots, it would take forever. Personally, I don't like it when the filesystem is not mounted right at boot and stays that way. I wish zfs did things the Linux way.
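
    On the boot-time import side, it's worth checking that the ZoL systemd units are enabled and that the pool is recorded in the cachefile they read. Something like this should do it; the pool name is an example and unit names can differ slightly between ZoL versions:

        # make sure the boot-time import/mount units are enabled
        systemctl enable zfs-import-cache.service zfs-mount.service zfs.target

        # make sure the pool is recorded in the cachefile the import unit reads
        zpool set cachefile=/etc/zfs/zpool.cache tank

        # after the next reboot, the pool and its datasets should be back automatically
        zpool list
        zfs mount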


  • I think that 99% of ZFS users mount their data pool at boot time. ZoL and the plugin do it that way, and that's the way to do things.
    That's not a USB disk thing to play with ;) Would you play with an mdadm RAID array at each reboot???


  • As it happens I have pools I unmount and mount while running to back up my main storage (which is mounted at boot time through OMV). Fortunately I handle all that outside of the OMV interface.


    What's still a mystery to me is (a) how my root storage zpool (mounted at boot) lost its mntent, and (b) why I can't for the life of me get OMV to re-recognise it. As a practical problem this is a non-issue for me, as I only ever use filesystems off the root pool, so I don't need to access it from the OMV interface, but it concerns me that I may in future "lose" e.g. the owncloud filesystem, which I absolutely cannot delete and recreate or manage without.


    I should note also that while it's possible that an export/import/reboot as above might fix the issues, I'd have to remove and later recreate significant amounts of service settings in order to get to the point where I could export my NAS storage pool, so that's a far from ideal solution.

  • OK, OMV just lost track of the re-created ZFS filesystem that I made to get around this problem. This filesystem is mounted at boot time, so I cannot understand why this is happening. This is now a serious problem for me, as my owncloud installation is dependent on that partition existing. Again, it looks as though this is as a result of OMV deleting the appropriate mntent entry from config.xml. Why this filesystem and not the others? No idea. The only difference is that the other filesystems have been around for several years, but this one was created within the last month.

  • OK, a second filesystem has now gone missing, one of the older ones. Recreating the mntent entries by hand using the mntent entries from the filesystems just results in the recreated entries being deleted. Help, please!

  • I may be going for some kind of record for replying to my own posts here. Working on the basis of ellnic's suggestion, I stopped nginx and all of the services using my ZFS filesystems, ran a "zpool export" manually and then rebooted the machine. This seems to have fixed the problem for now, although I still don't have an mntent for the root of the pool.
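
    For anyone else trying this, the sequence was roughly the following; the service names are just examples from my box and "tank" stands in for the real pool name:

        # stop everything that still has files open on the pool
        systemctl stop nginx smbd nmbd netatalk

        # export the pool cleanly, then reboot and let it be re-imported
        zpool export tank
        reboot

        # after the reboot, check that the pool and its datasets came back
        zpool status tank
        zfs list -o name,mountpoint,mounted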


    (Speculation: I wonder if the current tmpfile bug is causing mntents to generate spurious error conditions and then OMV is aggressively removing these filesystems as "non-existent".)

  • I haven't been able to get it to resolve this time round. It has only happened to me once or twice before, and that was an age ago, OMV 3 I think. I managed to resolve it by exporting/importing in one instance, and in another I seem to remember that I had typoed the ZFS mount point so it had a trailing '/' (e.g. /mnt/Tank/ instead of /mnt/Tank), which also caused this. Changing the mount point to omit the trailing '/' fixed it. Going forward, when setting up new pools I would do this via the CLI and then visit OMV's ZFS tab and have it save the changes. However, that is probably a separate issue.
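
    If anyone wants to check for that trailing-slash case, the mountpoint property can be inspected and corrected from the CLI; "Tank" here is just a placeholder for your own pool or dataset:

        # show the mountpoints ZFS has recorded, recursively for the whole pool
        zfs get -r -o name,value mountpoint Tank

        # correct a mountpoint that was set with a trailing slash
        zfs set mountpoint=/mnt/Tank Tank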


    For now, I have edited the AFP config manually, as I am lucky that this server doesn't reference the ZFS pools via OMV for anything else. It's only AFP via OMV (and mainly some convenience, reporting/monitoring etc.); Emby and a few other things [that are installed separately via the CLI] have their own configs and are unaffected. My pools mount at boot; it's just the mntent problem. This does seem to have happened since kernel 4.15, but then that doesn't explain why others have experienced the issue on 4.14 etc.

  • OK... just my 2 cents on the problem.


    If I don't touch/click the ZFS panel, my ZFS mntent in config.xml isn't deleted and everything works fine with OMV & ZFS (stats, shared folder dropdown, etc...).
    So as long as there's no patch available, just don't touch the ZFS panel in OMV and do your ZFS things over SSH.
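
    The usual day-to-day operations are all available over SSH anyway, for example (pool and dataset names are placeholders):

        # pool health and space
        zpool status
        zfs list -o name,used,avail,mountpoint

        # create a dataset or a snapshot instead of using the panel
        zfs create tank/newshare
        zfs snapshot tank/newshare@before-update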


    At the moment it's just a GUI bug, and nothing gets hurt on the ZFS side.

