Posts by ndroftheline

cool, the system has survived 3 reboots without issue. so this whole problem might in fact have been caused by me simply plugging in a USB drive and rebooting. i wonder if this is just down to how i set up my zfs volume, or whether others would hit the same thing? does the zfs plugin default to using /dev/sdX designations when creating a zpool, or did i do that my own silly self?

    oh, it's probably related to this.


    https://github.com/openzfs/zfs/issues/3918


i plugged in a USB SSD the other day to test something and it's still plugged in. maybe the devices are getting named differently on boot while the pool references them by their /dev names? seems so:


    Code
    # zdb -C | grep path:
    path: '/dev/sdj1'
    path: '/dev/sdk1'
    path: '/dev/sdl1'
    path: '/dev/sdi1'
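
(for reference, i believe the textbook fix for this is to export the pool and re-import it pointing at the persistent device names - a rough sketch, not exactly what i ended up doing:)

Code
# re-import using stable by-id device names instead of /dev/sdX
zpool export myzpool
zpool import -d /dev/disk/by-id myzpool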

    so. i messed around a bit with the zfs-import-cache and zfs-import-scan services.


    Code
    zpool export myzpool
    systemctl stop zfs-import-cache
    mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak
    systemctl enable zfs-import-scan
    systemctl start zfs-import-scan
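
    (and a quick way to see which of the two import services are actually enabled at this point - just plain systemctl, nothing fancy:)

    Code
    systemctl is-enabled zfs-import-cache.service zfs-import-scan.service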

    zpool status now shows:


    and zdb -C | grep path: shows

    Code
    # zdb -C | grep path:
    path: '/dev/disk/by-id/wwn-###################-part1'
    path: '/dev/disk/by-id/wwn-###################-part1'
    path: '/dev/disk/by-id/wwn-###################-part1'
    path: '/dev/disk/by-id/wwn-###################-part1'


    so that's good. i feel like i must have set something up wrong for the members of my zpool to end up designated by their /dev/sd* values... but anyway, i want to re-enable zfs-import-cache, and it's failing now:


    ah, interesting. this may have just been because i didn't understand how the zpool.cache file works (which i still don't), but basically i did what this user said: https://github.com/openzfs/zfs/issues/8831


    Code
    # zpool set cachefile=none myzpool
    # zpool set cachefile=/etc/zfs/zpool.cache myzpool


    then i was able to successfully re-run systemctl start zfs-import-cache
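
    (to double-check the cache file got regenerated with the by-id paths, i think you can point zdb straight at it - something like:)

    Code
    ls -l /etc/zfs/zpool.cache
    zdb -C -U /etc/zfs/zpool.cache | grep path: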


    trying a reboot...

    oh my, bootlogd didn't work - turns out it's not considered a good idea anymore: https://unix.stackexchange.com…o-unmask-bootlogd-service


    journalctl -b gave me the boot messages i wanted; here are some logs that might help.


    Code
    Apr 20 12:13:14 mynas zpool[864]: cannot import 'myzpool': one or more devices is currently unavailable
    Apr 20 12:13:14 mynas systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
    Apr 20 12:13:14 mynas systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.
    Apr 20 12:13:14 mynas systemd[1]: Failed to start Import ZFS pools by cache file.
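
    (side note: filtering the journal down to just that unit is tidier than scrolling the whole boot log, if anyone wants to reproduce this:)

    Code
    journalctl -b -u zfs-import-cache.service --no-pager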


    but once i've booted and imported the pool manually, this is zpool status:



    is this possibly due to a required zpool upgrade?

    well.


    frustratingly, this is fixed, and even more frustratingly i have no idea why. i saw a "FAILED" message flash by regarding being unable to mount a ZFS pool from a cache file, but couldn't find it in my logs, so i installed bootlogd and rebooted. of course, on the reboot the zpool mounted fine and my containers are fine. aah.


    EDIT: Nope, rebooted again for good measure and zpool did not auto mount....

    Hey all,


    I've got a fully-updated OMV 5.3 box running the ZFS plugin, and about a year ago I configured zfsnap per the discussion here: https://github.com/OpenMediaVa…nmediavault-zfs/issues/22 . Today I was having some (probably unrelated) ZFS issues and wanted to roll back a folder's contents, and discovered that my latest snapshot is over two months old - very much as if my zfsnap cron job had stopped working. Around that time I did have to reinstall from scratch, so I might have missed something.


    Jobs look fine to me:



    I SSH'd in and tried to confirm the command was working, but # zfsnap returns "command not found". It's definitely installed; apt confirms that. Any ideas?
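
    For what it's worth, this is roughly how I'd check where the package actually put the executable and whether that directory is on the PATH of the shell I'm in (assuming the package here really is called zfsnap; older releases apparently named the binary zfSnap with a capital S, so case is worth checking too):

    Code
    dpkg -L zfsnap | grep -E '/s?bin/'   # list the executables the package installed
    echo $PATH                           # is that directory (often /usr/sbin) on this shell's PATH?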

    Hey all, hoping to get a little input on this. I'm running 5.3.10-1 with Proxmox kernel 5.3.18-1-pve and a zpool I've had for about a year that contains a bunch of files I consume in various parts of my network, via SMB and NFS, for my home lab (ISOs, backup images, etc) and games. It also contains an AppData directory I use for configuration and persistent storage for my ~12 docker containers, which I manage with Portainer.


    Every time I reboot now, the zpool does not mount. Its mountpoint gets populated with my NFS export and an AppData folder, as if docker or NFS is starting before ZFS and writing into the empty directory. But I disabled my NFS share and uninstalled Docker, and the behaviour is the same.


    I found that /etc/default/zfs has the auto-mount line commented out, but I didn't want to undo that, since my SWAG was that the ZFS plugin handles mounting differently.
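
    For context, these are roughly the lines I mean (quoting from memory, so the exact wording and defaults in your copy of /etc/default/zfs may differ):

    Code
    # excerpt from /etc/default/zfs
    #ZFS_MOUNT='yes'      # mount zfs filesystems from imported pools at boot
    #ZFS_UNMOUNT='yes'    # unmount zfs filesystems at shutdown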


    Thanks for any help!

    Over the weekend I decided to upgrade my NAS to OMV v5 with a clean install and ran into exactly the same problem and error message.
    Portainer won't start with the Proxmox kernel, only with Debian's. After 3 days of tinkering I finally figured out what the heck is going on:
    if you install from the Debian netinst ISO, AppArmor gets installed by default, whereas a plain OMV install does not include it.
    The reason for this error message is that AppArmor treats Portainer as a security risk and blocks it. Check dmesg and you will see it.
    So you have 3 options to solve it:

    • Add the --security-opt apparmor:unconfined option to the docker run command
    • Create a new AppArmor profile (or modify the docker-default one) in /etc/apparmor.d, as described in the Docker docs: https://docs.docker.com/engine/security/apparmor/
    • Remove AppArmor completely from the system: apt-get --yes purge --autoremove apparmor
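
    Whichever you pick, you can confirm AppArmor really is the culprit by checking the kernel log for denials - something along these lines:

    Code
    dmesg | grep -i apparmor
    # or, equivalently, via the journal:
    journalctl -k | grep -i 'apparmor="DENIED"'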

    I chose the third one, for now :)


    But the big question remains: why does AppArmor behave differently with these kernels, even though the docker-default profile stays the same?

    Hi sfu420! Thanks so much for the investigation on this.


    I decided I wanted to start with your option 1, which for my setup worked with this command:


    Code
    docker run -d --name=portainer-apparmor-unconfined -p 9001:9000 -p 8001:8000 --security-opt apparmor:unconfined --restart=unless-stopped -v /var/run/docker.sock:/var/run/docker.sock -v /gnosis/AppData/portainer:/data portainer/portainer


    how would i make this change persist? where do i modify the default docker run command for portainer in omv? ...ah, never mind. when i tried to launch one of my other docker containers i hit the same permissions problems, so i went through @Hubrer 's post for disabling apparmor without removing it - yes, that worked fine, thanks. but now apparmor is disabled, and i don't want that.
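
    (for the record, my understanding of "disabling apparmor without removing it" is just stopping and disabling the service so no profiles get loaded on the next boot - i'm not certain this matches @Hubrer 's exact steps, so treat it as a sketch:)

    Code
    systemctl stop apparmor
    systemctl disable apparmor
    # profiles already loaded into the kernel stay loaded until a reboot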


    but yeah - where would one file a bug for this? is it a bug? kind of a weird one ofc, because here I'm using Proxmox's kernel for ZFS support but not actually running Proxmox, heh....

    Hello community,


    I got a little overexcited about mergerfs and added a drive I want to keep separate, then copied several TB of data to the mergerfs volume. I want to remove that drive from the mergerfs pool and move all the content that's been distributed onto it over to the other drives.


    What is the recommended way to do this? Perhaps something like a mergerfs.vacate tool that I haven't found yet? Or can I just remove the drive from the union filesystem and then rsync the files from that drive into the mergerfs mount? I'd just like to know if I'm doing it a dumb/inefficient way, I guess.
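
    In case it helps to be concrete, this is roughly what I have in mind (paths invented for illustration - assume the drive I want out is mounted at /srv/dev-disk-by-label-disk4 and the pool at /srv/mergerfspool):

    Code
    # 1) remove disk4 from the union filesystem's branch list (via the OMV UI), then
    # 2) copy whatever landed on it back into the remaining pool, verifying before deleting anything
    rsync -avhP /srv/dev-disk-by-label-disk4/ /srv/mergerfspool/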


    Thanks for these great projects!