MergerFS - Salt - always issues when changing something

  • I am getting a little frustrated wiping my mergerFS pool every time I need to change something. Today I bit the bullet and did it again.. even though I need to redo a bunch of things.

    I set up a new pool with 2 new disks.

    I haven't used them anywhere yet, so they're just configured as a cache pool.. not sure what to use them for, but anyway..


    I then went to change the properties of the pool from:


    defaults,allow_other,use_ino


    to


    defaults,allow_other,use_ino,moveonenospc=true


    I hit OK, hit Apply.. and boom, back to square one. Can someone give me some advice here, and NOT "delete the pool and create it again"? This happens on both my pools.

    One is in use with shared folders, being used for Docker stuff etc., so yes, that pool is in use. But looking back to when I ran OMV4, I did the same things there as I do now on OMV5 and never had issues back then. I know OMV5 now uses Salt, so it's apples and pears..


    Log excerpt:


    Code
     Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run fstab 2>&1' with exit code '1': debian:
     ----------
         ID: create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
         Function: file.accumulated
         Result: True
         Comment: Accumulator create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3 for file /etc/fstab was charged by text
         Started: 15:15:11.506429
         Duration: 0.308 ms
         Changes:
     ----------
         ID: mount_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
         Function: mount.mounted
         Name: /srv/dev-disk-by-label-disk00
         Result: True
         Comment: Target was already mounted
         Started: 15:15:11.507008
         Duration: 98.303 ms
         Changes:
             ----------
             umount:
                 Forced remount because options (acl) changed
     ----------
         ID: create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27
         Function: file.accumulated
         Result: True
         Comment: Accumulator create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27 for file /etc/fstab was charged by text
         Started: 15:15:11.605428
         Duration: 0.501 ms
         Changes:
     ----------
         ID: mount_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27
         Function: mount.mounted
         Name: /srv/dev-disk-by-label-disk01
         Result: True
         Comment: Target was already mounted
         Started: 15:15:11.605983
         Duration: 52.688 ms
         Changes:
             ----------
             umount:
                 Forced remount because options (acl) changed
     ----------
         ID: create_filesystem_mountpoint_fd6bf1f2-b144-4a5c-ac31-65ee43b42c2b
         Function: file.accumulated
         Result: True
         Comment: Accumulator
    
    &
    
     True
     Comment: Changes were made
     Started: 15:15:12.190075
     Duration: 5.146 ms
     Changes:
         ----------
         diff:
             ---
             +++
             @@ -21,6 +21,6 @@
              /dev/disk/by-label/disk05 /srv/dev-disk-by-label-disk05 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
              /dev/disk/by-label/disk04 /srv/dev-disk-by-label-disk04 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
              /srv/dev-disk-by-label-disk00:/srv/dev-disk-by-label-disk02:/srv/dev-disk-by-label-disk03:/srv/dev-disk-by-label-disk01 /srv/8a973c04-47d5-4d36-a1f1-d61f5f8c6a25 fuse.mergerfs defaults,allow_other,use_ino,category.create=mfs,minfreespace=15G,fsname=ragnarok:8a973c04-47d5-4d36-a1f1-d61f5f8c6a25,x-systemd.requires=/srv/dev-disk-by-label-disk00,x-systemd.requires=/srv/dev-disk-by-label-disk02,x-systemd.requires=/srv/dev-disk-by-label-disk03,x-systemd.requires=/srv/dev-disk-by-label-disk01 0 0
             -/srv/dev-disk-by-label-cache01:/srv/dev-disk-by-label-cache00 /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711 fuse.mergerfs defaults,allow_other,use_ino,category.create=epmfs,minfreespace=4G,fsname=cache:b1ab25c4-09ad-4ca2-9751-dc963c42d711,x-systemd.requires=/srv/dev-disk-by-label-cache01,x-systemd.requires=/srv/dev-disk-by-label-cache00 0 0
             +/srv/dev-disk-by-label-cache01:/srv/dev-disk-by-label-cache00 /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711 fuse.mergerfs defaults,allow_other,use_ino,moveonenospc=true,category.create=epmfs,minfreespace=4G,fsname=cache:b1ab25c4-09ad-4ca2-9751-dc963c42d711,x-systemd.requires=/srv/dev-disk-by-label-c



    full log here:


    https://pastebin.com/raw/60twPkM1

    • Official post

    Looking at the salt deploy plugin code, it is supposed to mount the mount point again. Since it is in use, the umount will fail. Can you confirm it is in use by some pid?


    The boolean “mount” parameter for the salt mount.mounted state will have to be passed dynamically on each save and apply (see the sketch below).

    In the old version the fstab would be rewritten and the user would reboot to get the new policies.
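
    For context, Salt's mount.mounted state does accept a boolean "mount" argument; when it is False the state only manages the /etc/fstab entry and does not try to (re)mount the target while it is busy. Below is a rough, hypothetical illustration with salt-call; the mount point, branches and options are copied from the log earlier in the thread purely as an example, this is not what the plugin actually generates.


    Code
     # Hypothetical sketch: manage only the fstab entry, skip the (re)mount attempt.
     # Paths, branches and options are examples taken from the log in this thread.
     salt-call --local state.single mount.mounted \
         name=/srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711 \
         device=/srv/dev-disk-by-label-cache01:/srv/dev-disk-by-label-cache00 \
         fstype=fuse.mergerfs \
         opts=defaults,allow_other,use_ino,moveonenospc=true \
         persist=True \
         mount=False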

  • Can you give me a hint on how to check if it's in use by a pid? I'm not sure how to do that.
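
    For anyone looking for the same answer: two common ways to see which processes are holding a mount point open are fuser and lsof. The pool path below is only an example; substitute your own mergerfs mount point.


    Code
     # List processes with open files on the mergerfs mount (example path)
     fuser -vm /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711

     # Alternative with lsof; '+f --' tells lsof to treat the path as a filesystem
     lsof +f -- /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711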

  • After a Reboot, "Accept Changes", another Reboot, and "Accept Changes" again (2 times in total), it works.


    If you want, I can add a new test drive to trigger the error again.


    I just deleted a share, and now I get another error.


    Stay healthy Bernd

  • subzero79 - There was an update to salt yesterday.. or the day before.. so I'm not sure if we can count on my findings.. but...


    I removed all services .. smb, rsync server etc.


    I was able to make a change to the mergerFS pool.

    I was also able to initialize a new hard drive and mount it without issue..


    I was also able to mount & unmount a remote share..


    all of the above usually gives me these errors.. so that's positive..


    But I want to make some more tests before believing this issue is gone..


    but so far.. it's good... I hope the reason is the update I installed yesterday.. that would make the most sense.

    • Official post

    I just submitted a PR to fix this; you will still need to reboot to make the changes effective.


    https://github.com/OpenMediaVa…-unionfilesystems/pull/38


    The mergerfs docs detail that changes can be made on the fly to a mergerfs mount point without a remount, including policy changes or adding more branches (roughly as sketched below). But the plugin would require quite some code to accommodate that option. I haven't checked the salt docs yet to see if something can be done about it.
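
    For reference, the runtime interface mentioned above is the .mergerfs pseudo-file in the pool's mount root, which can be queried and modified through extended attributes. A rough sketch follows; the pool path and branch are examples, and the exact keys and branch syntax depend on the mergerfs version in use, so check the mergerfs docs for your release.


    Code
     # Read the current create policy of the pool (example mount point)
     getfattr -n user.mergerfs.category.create /srv/pool/.mergerfs

     # Change the create policy on the fly, without a remount
     setfattr -n user.mergerfs.category.create -v mfs /srv/pool/.mergerfs

     # Append a new branch to the pool (example branch path)
     setfattr -n user.mergerfs.branches -v '+>/srv/dev-disk-by-label-disk06' /srv/pool/.mergerfs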

    • Official post

    New versions of the plugins (unionfilesystems and mergerfsfolders) are in the repo now.

  • I tested again (I am updating as I type) and I see the same issue... I need to remove shares/rsync etc.


    I will test again after the update, which is running now.


    so subzero79 - you are saying that I should be able to:


    change the mergerfs pool, i.e. add a drive, remove a drive, etc.

    and change pool attributes

    without needing a reboot?


    Earlier the issue happened no matter what I did. The last update seems to have fixed this..

    • Official post

    Atm, because of how the salt module works, it attempts to make the change (remount) to the pool while it is in use, which on Linux is a no-go.

    The patch checks if the pool is mounted; if it is mounted, it will not attempt to remount (roughly the check sketched below). So changes will only be effective on reboot, as usual anyway.


    It is possible (with mergerfs) to make changes to the pool on the fly, like branches and policy, but that requires plugin code. FUSE options still require the pool to be remounted.
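
    Conceptually, the check described above boils down to something like the following shell sketch; this is not the actual plugin code (see the PR linked earlier), and the pool path is only an example.


    Code
     POOL=/srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711   # example pool mount point

     if mountpoint -q "$POOL"; then
         # Already mounted (and possibly busy): only /etc/fstab is updated,
         # so the new options take effect on the next reboot.
         echo "pool already mounted - skipping remount"
     else
         mount "$POOL"
     fi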

    • Official post

    So changes will only be effective on reboot, as usual anyway.

    This is the way the OMV 4.x version of the plugin worked. So, I don't mind keeping the same behavior. If someone is changing pool options so much that they don't like the reboots, that is strange and they will just have to discover the mount -o remount,rw command :)
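
    If someone really does want to avoid the reboot, the manual route is to stop whatever keeps files open on the pool and then cycle the mount so the new /etc/fstab options are picked up. The service names and pool path below are only examples; this is a sketch, not an officially supported procedure.


    Code
     # Stop the services that keep files open on the pool first (examples)
     systemctl stop smbd nmbd

     # Then cycle the mount so the options from /etc/fstab are applied
     umount /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711
     mount /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711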


  • Just to follow up again.


    This issue is... either back, or it never went away :(


    OMV version: 5.4.6-1


    MergerFS version:


    mergerfs version: 2.28.3

    FUSE library version: 2.9.7-mergerfs_2.29.0

    fusermount version: 2.9.9

    using FUSE kernel interface version 7.29




    small excerpt here:


    Code
     Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run fstab 2>&1' with exit code '1': debian:
     ----------
         ID: create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
         Function: file.accumulated
         Result: True
         Comment: Accumulator create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3 for file /etc/fstab was charged by text
         Started: 18:18:06.461443
         Duration: 0.408 ms
         Changes:
     ----------
         ID: mount_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
         Function: mount.mounted
         Name: /srv/dev-disk-by-label-disk00
         Result: True
         Comment: Target was already mounted
         Started: 18:18:06.462280
         Duration: 75.161 ms
         Changes:
             ----------
             umount:
                 Forced remount because options (user_xattr) changed
     ----------
         ID: create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27
         Function: file.accumulated
         Result: True
         Comment: Accumulator create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27 for file /etc/fstab was charged by text
         Started: 18:18:06.537567
         Duration: 0.561 ms
         Changes:
     ----------
         ID:
