ZFS suddenly stopped automounting pools

  • Hello, all.


    I've got two ZFS pools in my system and both were working fine, but now one of them has stopped being automounted. I'm not sure what is causing the issue; it was working fine before the holidays, and when I came back it wasn't any longer.


    zfs list shows that all pools and volumes have mountpoints set, and running zfs mount -a successfully mounts the pool that wasn't being automounted. The trouble is that even though doing this mounts the pool/volumes, and OMV's dashboard (Storage > Filesystems) will now show them as mounted, the shared folders aren't working: they all show empty values in the device column.


    I'm using OMV 5.2.1 with the Proxmox kernel, and all packages/plugins are up to date.


    Any ideas how I can fix this?

  • More details:


    All mountpoints are indeed configured:

    Code
    zfs list
    NAME            USED   AVAIL  REFER  MOUNTPOINT
    Tank            482    3.8T   240K   /tank
    Tank/documents  2.01G  3.8T   2.01G  /tank/documents
    Tank/pictures   397G   3.8T   397G   /tank/pictures
    Tank/videos     83G    3.8T   83G    /tank/videos


    The system attempts to mount the pool at boot:

    Code
    grep zfs /var/log/syslog
    Jan 5 08:51:33 Server systemd-modules-load[399]: Inserted module 'zfs'
    Jan 5 08:51:33 Server systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
    Jan 5 08:51:33 Server systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
    Jan 5 08:51:33 Server zfs[1192]: cannot mount '/tank': directory is not empty
    Jan 5 08:51:33 Server systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
    Jan 5 08:51:33 Server systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
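
    (For completeness, the same failure can also be checked by querying the units and datasets directly; Tank is my pool, as shown above:)

    Code
    systemctl status zfs-import-cache.service zfs-mount.service
    zfs get -r mounted,mountpoint Tank
    findmnt /tank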


    Indeed, as mentioned, the directory is not empty:

    Code
    l /tank
    pictures/ videos/

    Even if I manually delete these directories and reboot, they reappear.


    I believe this is because of these bind-mount entries OMV puts in fstab:

    Code
    cat /etc/fstab
    ...
    # >>> [openmediavault]
    /tank/videos/ /export/videos none bind,nofail,_netdev 0 0
    /tank/pictures/ /sftp/antioch/pictures none bind,rw,nofail 0 0
    # <<< [openmediavault]


    Removing those two shares from the NFS and SFTP services in OMV removed the entries from fstab, and after manually deleting the folders from /tank/ and rebooting, the pool mounted without issue and all was well. Except now I don't have SFTP and NFS share abilities. :(
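
    (A workaround I haven't tried yet: systemd's fstab options can order those bind mounts after zfs-mount.service, so they are only attempted once the pool is mounted. Something like the lines below, though OMV manages that section of fstab and may overwrite manual edits:)

    Code
    # >>> [openmediavault]
    /tank/videos/ /export/videos none bind,nofail,_netdev,x-systemd.requires=zfs-mount.service 0 0
    /tank/pictures/ /sftp/antioch/pictures none bind,rw,nofail,x-systemd.requires=zfs-mount.service 0 0
    # <<< [openmediavault]

    x-systemd.requires= adds both a Requires= and an After= dependency on the named unit to the mount unit systemd generates from each fstab line (see systemd.mount(5)).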


    I am by no means an expert, but I did some googling and found some GitHub issues that look like the same problem. Interestingly, just like in my case, things were working fine and then suddenly weren't.


    zfs-mount fails because directory isn't empty, screws up bind mounts and NFS #47

    This happens because the order in which fstab mounts and ZFS mounts happen is undefined. See zfs-mount.service and one of the auto-generated mount units.
    Systemd orders the auto-generated mounts by filesystem hierarchy. See systemd.mount(5).

    Which references a few other issues, including the following, which was marked resolved in 2016.
    Systemd: Replace zfs-mount.service with systemd.generator(7) #4898
    zfs-mount.service is called too late on Debian/Jessie with ZFS root #4474
    Centos: systemd-journald.service misses the zfs-mount.service dependency #8060
    The last post of which says:

    ZFS has a systemd mount generator these days.

    I guess this is what is being referred to?
    [WIP] Prototype for systemd and fstab integration #4943
    Which points to this PR:
    Fixes for the systemd mount generator #9611


    So, I wonder if this issue is now resolved upstream? I know that the PR referenced above is rather recent, so it will take a long time to propagate out. However, this comment in the following issue suggests that the problem can be worked around in ZoL 0.8.x:


    zfs-mount.service is called too late on Debian/Jessie with ZFS root #4474

    This should be resolved. The Root-on-ZFS HOWTO includes a work-around, and with 0.8.x's mount generator, this is correctly solved. I'm going to close this. If this is still an issue for someone, try the workaround of setting mountpoint=legacy on the affected datasets and putting them in /etc/fstab


    I'm not yet sure what this means, but in either case (the workaround mentioned above, or the fix from the PR above), it looks like it will require some changes to OMV and/or the ZFS plugin to use the newly supported systemd/fstab mount system?
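
    (For what it's worth, here is my reading of that workaround, using one of my datasets as an example; I haven't tried it, and I don't know how well OMV or the plugin cope with legacy mounts. It assumes the zfs-import.target unit shipped by the ZFS packages:)

    Code
    # zfs set mountpoint=legacy Tank/videos
    # then add to /etc/fstab (repeat per dataset):
    Tank/videos  /tank/videos  zfs  defaults,x-systemd.requires=zfs-import.target,nofail  0  0

    With mountpoint=legacy, ZFS no longer mounts the dataset itself; systemd mounts it from fstab like any other filesystem.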


    Any help would be great, especially from the ZFS plugin author.


    Thank you!

  • How did you do this?


    Also, I know that some of the posts refer to Root-on-ZFS, but if you read through the issues you'll see that it's the same problem as outlined above with the out-of-order mounting.

  • How did you do this?

    I don't recall the exact details, but it involves opening the file /lib/systemd/system/openmediavault-engined and adding zfs.target to the After= line near the top (the one that reads After=local-fs.target and so on).


    I made other modifications to other systemd files, but I think this is the one that would fix your issue; what the change above does is try to ensure the ZFS mounts are ready before starting the OMV engine, which populates all the services and shares and so on.
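
    (For reference, this is roughly what that edit amounts to; rather than editing the file under /lib directly, a drop-in created with systemctl edit survives package upgrades. The unit name is the one given above:)

    Code
    # systemctl edit openmediavault-engined
    # then add in the editor that opens:
    [Unit]
    After=zfs.target

    After= is additive, so this just appends zfs.target to the ordering the unit already has.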


    Note that if the disks needed for the pool are not available at boot (as mine sometimes are not, because they're on USB), this won't stop the boot; it will just move on to setting up the shares. But I don't think that should be an issue if you have stable drive connections.

    I had the same problem; my buddy and I racked our brains for a few hours, started fiddling, and I think I might have found a solution (albeit probably not the RIGHT way to fix it, but a working fix).


    It seems that Docker was loading before (or while) the ZFS (or similar, e.g. btrfs) array was fully initialized and mounted by the ZFS services, causing the bind mounts and the daemons to pitch a fit and stop working.


    I tried and tried to figure out why, and then started thinking it might be a timing issue with the order the services start. I even went so far as to stop the Docker service, manually import the zpool, then restart the service; when I did that, it worked fine.
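
    (Roughly that manual sequence, with the pool name as a placeholder:)

    Code
    # systemctl stop docker
    # zpool import mypool        # your pool name here
    # zfs mount -a
    # systemctl start docker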


    Then I wrote an /etc/rc.local script to do it "automagically", which is a VERY nasty, brute-force way of doing things.


    Well, I did a few things.


    First I verified which runlevel I was in:

    Code
    # runlevel 
    N 5

    Then I went into /etc/rc5.d and found S01docker. Coincidentally, ALL the links in there were marked S01 (started more or less at the same time, I guess) and pointed to the files in /etc/init.d.


    So, in /etc/rc5.d, I just ran:

    Code
    # mv S01docker S05docker

    which gave it a higher sequence number and moved it to start after all the other services.


    Then I went into /etc/init.d and changed the docker script's ### BEGIN INIT INFO section to read:


    Code
    # Required-Start: $all $syslog $remote_fs


    from

    Code
    # Required-Start: $syslog $remote_fs

    which means it needs all other services to be loaded/started before it starts.


    Now, after two reboots, all my ZFS mounts are there and Docker is running happily, so it seems the change is persistent.


    I also removed the /etc/rc.local I had created to stop Docker, mount the ZFS pool, then restart Docker; the above fix seems to make Docker start last, AFTER everything else is done and all the ZFS volumes have been mounted.
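
    (On a systemd system Docker is normally started by docker.service, so the same ordering could probably be expressed as a drop-in instead of renaming the rc5.d links and editing the init script; untested sketch:)

    Code
    # systemctl edit docker
    # then add in the editor that opens:
    [Unit]
    Requires=zfs-mount.service
    After=zfs-mount.service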


    I also put a symlink from /var/lib/docker to /zfsmount/docker so the data is stored on my large storage array.
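
    (Instead of the symlink, Docker's own data-root setting in /etc/docker/daemon.json should accomplish the same thing; the path here is just the one from this post:)

    Code
    {
      "data-root": "/zfsmount/docker"
    }

    followed by a systemctl restart docker.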


    I hope this helps...

    Correction: once I started putting containers in, it failed on reboot.


    I went ahead and just stopped the service from starting altogether:


    Code
    # systemctl disable docker

    Then I created an /etc/rc.local file


    and put this in it to force-mount my ZFS pool:


    Bash
    #!/bin/sh
    /sbin/zfs mount -O -a
    exit 0
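
    (For /etc/rc.local to actually run at boot on a systemd system, the file has to be executable; systemd's rc-local compatibility unit only picks it up then:)

    Code
    # chmod +x /etc/rc.local
    # systemctl status rc-local.service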


    This seems to work fine. The odd thing is that with the docker service disabled but using Portainer, the Docker containers are running, yet there is no docker startup to interfere with the filesystems, reboot after reboot.


    I currently have OMV5 working with ZFS and Docker; all my Docker containers live in a filesystem called "appdata", and they are persistent and working well.


    Heck, I could probably skip the /etc/rc.local file altogether.

    Yeah, it's ugly and improper, but it DOES work!


    I got the idea from here:

    https://utcc.utoronto.ca/~cks/…tRestriction?showcomments


    Also, after further digging: the proper location to make the -O (overlay mount) change would be /etc/systemd/system/zfs.target.wants/zfs-mount.service, which is actually a link to /lib/systemd/system/zfs-mount.service on systemd-type OSes.


    Change the [Service] section from

    Code
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/sbin/zfs mount -a

    to

    Code
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/sbin/zfs mount -O -a

    No need for a deprecated /etc/rc.local file.
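
    (Editing the unit under /lib/systemd/system works, but that file gets overwritten whenever the ZFS packages are upgraded; a drop-in does the same thing and survives upgrades:)

    Code
    # systemctl edit zfs-mount.service
    # then add in the editor that opens:
    [Service]
    ExecStart=
    ExecStart=/sbin/zfs mount -O -a

    The empty ExecStart= line clears the original command before the replacement is set.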


    Edited 3 times, most recently by warhawk8080, for the following reason: edited to show the correct mount option; some directories would not mount and reported "not empty", but -O -a ignores that and mounts them fine.
