UnionFS 4.0.2 - changing create policy caused 2x conflicting mergerfs processes with different mount options

  • I may have found a bug (or unintended feature) in the openmediavault-unionfilesystems (4.0.2) plugin...


    I have a mergerfs pool of three drives (which I also export via NFS); until today the create policy had always been "Existing path, least free space" (eplfs).
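
    For context, an fstab entry for a pool like this looks roughly like the sketch below - the branch paths and mount point are placeholders, not my actual ones:

        # Hypothetical 3-drive mergerfs entry using the eplfs create policy;
        # real OMV branch paths and option sets will differ.
        /srv/disk1:/srv/disk2:/srv/disk3  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=eplfs  0 0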


    Earlier I accidentally filled up one of my drives, and subsequently realised I didn't really need any path preservation, so I decided to change the create policy via the web UI to "Most free space" (mfs).


    I did so, saved the changes, stopped the NFS server, unmounted the mergerfs pool, remounted it, and restarted the NFS server.
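
    In other words, roughly this sequence (assuming a Debian-style nfs-kernel-server unit and a pool mounted at /srv/pool - both placeholder names):

        systemctl stop nfs-kernel-server    # stop the NFS export first
        umount /srv/pool                    # unmount the mergerfs pool
        mount /srv/pool                     # remount it, re-reading /etc/fstab
        systemctl start nfs-kernel-server   # bring NFS back up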


    All seemed well at first, but the create behaviour had actually become "Least free space" (lfs), even though the web UI still showed the option I had chosen.


    I reported the bug to trapexit via his GitHub: https://github.com/trapexit/mergerfs/issues/664 but we determined that it wasn't actually a problem with mergerfs itself.


    I noticed two separate mergerfs processes were running. This persisted across the first reboot, but went away once I manually killed both processes and re-mounted.
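
    For anyone hitting the same thing, the duplicates (and the options each one was started with) show up with something like:

        # List every running mergerfs process with its full command line,
        # which includes the mount options it was launched with.
        pgrep -a mergerfs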


    Strangely, one of the mergerfs processes had the correct mount option "category.create=mfs", but the other (which had originally started a few minutes later) had "category.create=lfs", an option I had never chosen (remember: the option I changed from was "eplfs").
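
    You can also query the live mount's create policy through mergerfs's .mergerfs runtime control file; the path below assumes a /srv/pool mount point:

        # Read the active create policy from the running mergerfs instance
        # (getfattr comes from the "attr" package on Debian).
        getfattr -n user.mergerfs.category.create /srv/pool/.mergerfs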


    Anyway, after killing both and rebooting, I now have the correct "mfs" behaviour.


    It must be something to do with the plugin/web interface, since I try never to make manual changes to my OMV server, preferring to do everything via the web UI, but in this case the result was not consistent...

    • Official Post

    mergerfs is not a service, and rebooting is the best solution when altering a pool. I thought the plugin said this, but I guess it is only mentioned in the changelog. So, it is not a bug. We chose not to have the plugin try to remount the filesystem (which is required when changing mergerfs options) because it is a pain (the filesystem might be in use).
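
    To illustrate: applying the change updates the pool's fstab line to carry the new policy, something like the sketch below (placeholder paths again), but the already-running mount keeps its old options until it is remounted or the box is rebooted.

        # After applying "Most free space" in the web UI, fstab carries the new
        # policy while the live mount still runs with whatever it started with.
        /srv/disk1:/srv/disk2:/srv/disk3  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0 0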

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Fair enough - I'm almost certain this behaviour persisted across the first reboot though... it could have been rectified after refreshing the web interface following the first reboot... perhaps an old setting was still stuck in there. I certainly had only one line in my /etc/fstab for the pool mount, and the config.xml looked good... it's odd.

    • Official Post

    If it survived a reboot, fstab wasn't changed. That means the apply either didn't happen or failed; both would have been noticeable to you. This is one of the plugins I personally use the most, so I am quite certain that it is working as designed.
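
    An easy way to check that the two agree is to compare the fstab entry against what the kernel actually has mounted, e.g.:

        # What fstab says the pool should be mounted with...
        grep mergerfs /etc/fstab
        # ...versus the options of the live mergerfs mount.
        findmnt -t fuse.mergerfs -o TARGET,SOURCE,OPTIONS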

