ZFS pool not mounting on boot

  • Hey all, hoping to get a little input on this. I'm running 5.3.10-1 with the Proxmox kernel 5.3.18-1-pve and a zpool I've had for about a year. It holds a bunch of files I consume in various parts of my network via SMB and NFS for my home lab (ISOs, backup images, etc.) and games. It also contains an AppData directory I use for configuration and persistent storage for my ~12 Docker containers, which I manage with Portainer.


    Every time I reboot now, the zpool does not mount. Its mountpoint gets populated with just my NFS export and an AppData folder, as if Docker or NFS were starting before ZFS. But I disabled my NFS share and uninstalled Docker, and the behaviour is the same.
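

    One thing worth checking for a suspected service race (a diagnostic sketch; unit names assume the stock OpenZFS systemd units on a Debian-based system):


    Code
    # what zfs-mount.service waits for, and what waits for it
    systemctl list-dependencies zfs-mount.service
    systemctl list-dependencies --reverse zfs-mount.service
    # where ZFS mounting landed in the boot critical path
    systemd-analyze critical-chain zfs-mount.service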


    I found that /etc/default/zfs had the line for auto-mounting commented out, but I didn't want to change that, since my SWAG was that the ZFS plugin handles mounting differently.
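

    For reference, the line I mean looks roughly like this (a sketch of the stock Debian /etc/default/zfs; exact comments vary by version, and under systemd the zfs-mount.service unit normally handles mounting regardless of this file):


    Code
    # Run `zfs mount -a` during system start?
    # ZFS_MOUNT='yes'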


    Thanks for any help!

  • Well.


    Frustratingly, this is fixed, and frustratingly I have no idea why. I saw a "FAILED" message flash by about being unable to mount a ZFS pool from a cache file, but I couldn't find it in my logs, so I installed bootlogd and rebooted. Of course, on the reboot the zpool mounted fine and my containers are fine. Aah.
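

    (In hindsight, with persistent journald logging this would have been findable without bootlogd; a sketch that assumes Storage=persistent is set in /etc/systemd/journald.conf so previous boots are kept:)


    Code
    # search the previous boot for error-level ZFS messages
    journalctl -b -1 -p err | grep -i zfs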


    EDIT: Nope, rebooted again for good measure and the zpool did not auto-mount....

  • Oh my, bootlogd didn't work; it's masked under systemd and not a good idea anymore: https://unix.stackexchange.com…o-unmask-bootlogd-service


    journalctl -b gave me the boot messages I wanted; here are some logs that might help.


    Code
    Apr 20 12:13:14 mynas zpool[864]: cannot import 'myzpool': one or more devices is currently unavailable
    Apr 20 12:13:14 mynas systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
    Apr 20 12:13:14 mynas systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
    Apr 20 12:13:14 mynas systemd[1]: Failed to start Import ZFS pools by cache file.
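

    (To pull just those lines out of a noisy boot, filtering by unit works too; standard journalctl usage:)


    Code
    journalctl -b -u zfs-import-cache.service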


    But once I've booted and imported the pool manually, this is the zpool status output:



    Is this possibly due to a required zpool upgrade?

  • Oh, it's probably related to this:


    https://github.com/openzfs/zfs/issues/3918


    I plugged in a USB SSD the other day to test something, and it's still plugged in. Maybe the devices are getting named differently on boot and the pool is referencing them by their /dev names? Seems so:


    Code
    # zdb -C | grep path:
                    path: '/dev/sdj1'
                    path: '/dev/sdk1'
                    path: '/dev/sdl1'
                    path: '/dev/sdi1'
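

    The usual fix for that (a sketch: it assumes the pool can be exported briefly and nothing is using it) is to re-import using the stable by-id names:


    Code
    zpool export myzpool
    zpool import -d /dev/disk/by-id myzpool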

    So I messed around a bit with the zfs-import-cache and zfs-import-scan units.


    Code
    zpool export myzpool
    systemctl stop zfs-import-cache
    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
    systemctl enable zfs-import-scan
    systemctl start zfs-import-scan
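

    (zfs-import-scan scans devices instead of trusting the cache file; to see exactly what either unit actually runs, standard systemd:)


    Code
    systemctl cat zfs-import-scan.service
    systemctl cat zfs-import-cache.service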

    zpool status now shows:


    and zdb -C | grep path: shows:


    Code
    # zdb -C | grep path:
                    path: '/dev/disk/by-id/wwn-###################-part1'
                    path: '/dev/disk/by-id/wwn-###################-part1'
                    path: '/dev/disk/by-id/wwn-###################-part1'
                    path: '/dev/disk/by-id/wwn-###################-part1'


    So that's good. I feel like I must have set something up wrong for the members of my zpool to be designated by their /dev/sd* names... but anyway, I want to re-enable zfs-import-cache and it's failing now:


    Ah, interesting. This may have just been because I didn't understand how the zpool.cache file works (which I still don't), but I basically did what this user suggested: https://github.com/openzfs/zfs/issues/8831 (note that cachefile is a pool property, so it's zpool set, not zfs set):


    Code
    # zpool set cachefile=none myzpool
    # zpool set cachefile=/etc/zfs/zpool.cache myzpool


    Then I was able to successfully re-run systemctl start zfs-import-cache.
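

    (Before rebooting, a quick sanity check that the cache file actually got regenerated; standard commands, with my pool name:)


    Code
    zpool get cachefile myzpool
    ls -l /etc/zfs/zpool.cache
    zdb -C | grep path: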


    Trying a reboot...

  • Cool, the system has survived 3 reboots without issue. So I think this whole problem might in fact have been caused by me simply plugging in a USB drive and rebooting. I wonder if this is just down to how I set up my ZFS volume, or whether others would hit similar issues? Does the ZFS plugin default to the /dev/sd* names when creating a zpool, or did I do that my own silly self?
