UnionFS 4.0.2 - changing create policy caused 2x conflicting mergerfs processes with different mount options

    • I may have found a bug (or unintended behaviour) in the openmediavault-unionfilesystems (4.0.2) plugin...

      I have a mergerfs pool of three drives (which I also export via NFS); until today, the create policy had always been "Existing path, least free space".

      I accidentally filled up one of my drives earlier and subsequently realised I didn't really need any path preservation, so I decided to change the create policy via the web UI to "Most free space".

      I did so, saved the changes, stopped the NFS server, unmounted and remounted the mergerfs pool, and restarted the NFS server.

      All seemed well at first, but the create behaviour had actually become "Least free space", even though the web UI still reflected the option I had chosen.

      I reported the bug to trapexit via GitHub (github.com/trapexit/mergerfs/issues/664), but we determined that it actually wasn't a problem with mergerfs itself.

      I noticed two separate mergerfs processes were running. This behaviour persisted across the first reboot but went away once I manually killed both mergerfs processes and remounted.

      Strangely, one of the mergerfs processes had the correct mount option "category.create=mfs", but the other (which had originally started a few minutes later) had "category.create=lfs", an option I had not even chosen; remember, the original option I changed from was "eplfs".

      Anyway, after killing both processes and rebooting, I now have the correct "mfs" behaviour.

      It must be something to do with the plugin/web interface, since I try never to make manual changes to my OMV server, preferring to do everything via the web UI; in this case, though, it was not consistent...
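      When two mergerfs processes serve the same pool, comparing their option strings shows which create policy each one is actually running with. A minimal sketch; the option strings below are hypothetical stand-ins for what `ps ax | grep mergerfs` (or /proc/mounts) would show for each process:

```shell
# Hypothetical option strings for the two observed mergerfs processes.
opts_first="defaults,allow_other,category.create=mfs"
opts_second="defaults,allow_other,category.create=lfs"

# Extract the create policy from a comma-separated mount-option string.
create_policy() {
  printf '%s\n' "$1" | tr ',' '\n' | sed -n 's/^category\.create=//p'
}

create_policy "$opts_first"    # mfs
create_policy "$opts_second"   # lfs
```

      If the two results differ, a stale process from an earlier mount is still alive and answering requests with the old policy.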
    • mergerfs is not a service, and rebooting is the best solution when altering a pool. I thought the plugin said this, but I guess it is only in the changelog. So, it is not a bug. We chose not to have the plugin try to remount the filesystem (which is required when changing mergerfs options) because doing so is a pain (the filesystem might be in use).
      omv 5.1.0 usul | 64 bit | 5.3 proxmox kernel | omvextrasorg 5.1.5
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • Fair enough. I'm almost certain this behaviour persisted across the first reboot, though. It could have been rectified after refreshing the web interface following the first reboot; perhaps an old setting was still stuck in there. I certainly had only one line in my /etc/fstab for the pool mount, and the config.xml looked good. It's odd.
    • If it survived a reboot, fstab wasn't changed. That means the apply either didn't happen or failed; both would have been noticeable to you. This is one of the plugins I personally use the most, so I am quite certain that it is working as designed.
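      For reference, the policy a pool mounts with comes from its line in /etc/fstab; a minimal sketch with hypothetical branch paths and mount point:

```
# /etc/fstab (hypothetical three-branch mergerfs pool)
/srv/disk1:/srv/disk2:/srv/disk3  /srv/pool  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0
```

      Note that editing this line changes nothing on its own: a mergerfs process already running keeps serving the options it was started with, which is why a clean unmount (with no surviving mergerfs process) and remount, or simply a reboot, is needed for the new create policy to take effect.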