I'm getting a little frustrated having to wipe my mergerfs pool every time I need to change something. Today I bit the bullet and did it again.. even though I'll need to redo a bunch of things.
I set up a new pool with 2 new disks.
I haven't used them anywhere.. yet.. so they're just configured as a cache pool.. not sure what to use them for.. but anyway..
I then went to change the properties of the pool from:
defaults,allow_other,use_ino
to
defaults,allow_other,use_ino,moveonenospc=true
I hit OK, hit Apply.. and boom... back to square 1. Can someone give me some advice here, and NOT "delete the pool and create it again"? This happens on both my pools.
One is in use with shared folders, being used for Docker stuff etc.. so yeah, that pool is in use, sure.. but looking back to when I ran OMV4, I did the same things then as I do now on OMV5, and back then I never had issues.. I know OMV5 now uses Salt, so it's apples and oranges..
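In case it helps anyone reading: mergerfs does let you change most options at runtime through its .mergerfs xattr control file, so something like this might work as a stopgap until the Apply problem is sorted. The pool path is the one from my fstab; the xattr interface is per the mergerfs docs, so treat this as a sketch, not gospel:

```shell
# Assumption: the pool mountpoint below is my cache pool's UUID path; adjust to yours.
# mergerfs exposes a pseudo-file ".mergerfs" at the mount root whose extended
# attributes mirror the mount options (documented mergerfs behaviour).
POOL=/srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711

# Read the current value of moveonenospc
getfattr -n user.mergerfs.moveonenospc "$POOL/.mergerfs"

# Flip it at runtime (root needed). This does NOT survive a remount, so the
# fstab entry still has to be fixed for the change to be permanent.
sudo setfattr -n user.mergerfs.moveonenospc -v true "$POOL/.mergerfs"
```

Again, that only papers over it at runtime.. the fstab line OMV generates is still the real fix.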
Log excerpt:
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run fstab 2>&1' with exit code '1':
debian:
----------
          ID: create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
    Function: file.accumulated
      Result: True
     Comment: Accumulator create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3 for file /etc/fstab was charged by text
     Started: 15:15:11.506429
    Duration: 0.308 ms
     Changes:
----------
          ID: mount_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
    Function: mount.mounted
        Name: /srv/dev-disk-by-label-disk00
      Result: True
     Comment: Target was already mounted
     Started: 15:15:11.507008
    Duration: 98.303 ms
     Changes:
              ----------
              umount:
                  Forced remount because options (acl) changed
----------
          ID: create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27
    Function: file.accumulated
      Result: True
     Comment: Accumulator create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27 for file /etc/fstab was charged by text
     Started: 15:15:11.605428
    Duration: 0.501 ms
     Changes:
----------
          ID: mount_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27
    Function: mount.mounted
        Name: /srv/dev-disk-by-label-disk01
      Result: True
     Comment: Target was already mounted
     Started: 15:15:11.605983
    Duration: 52.688 ms
     Changes:
              ----------
              umount:
                  Forced remount because options (acl) changed
----------
          ID: create_filesystem_mountpoint_fd6bf1f2-b144-4a5c-ac31-65ee43b42c2b
    Function: file.accumulated
      Result: True
     Comment: Accumulator

[...]

      Result: True
     Comment: Changes were made
     Started: 15:15:12.190075
    Duration: 5.146 ms
     Changes:
              ----------
              diff:
                  ---
                  +++
                  @@ -21,6 +21,6 @@
                   /dev/disk/by-label/disk05 /srv/dev-disk-by-label-disk05 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
                   /dev/disk/by-label/disk04 /srv/dev-disk-by-label-disk04 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
                   /srv/dev-disk-by-label-disk00:/srv/dev-disk-by-label-disk02:/srv/dev-disk-by-label-disk03:/srv/dev-disk-by-label-disk01 /srv/8a973c04-47d5-4d36-a1f1-d61f5f8c6a25 fuse.mergerfs defaults,allow_other,use_ino,category.create=mfs,minfreespace=15G,fsname=ragnarok:8a973c04-47d5-4d36-a1f1-d61f5f8c6a25,x-systemd.requires=/srv/dev-disk-by-label-disk00,x-systemd.requires=/srv/dev-disk-by-label-disk02,x-systemd.requires=/srv/dev-disk-by-label-disk03,x-systemd.requires=/srv/dev-disk-by-label-disk01 0 0
                  -/srv/dev-disk-by-label-cache01:/srv/dev-disk-by-label-cache00 /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711 fuse.mergerfs defaults,allow_other,use_ino,category.create=epmfs,minfreespace=4G,fsname=cache:b1ab25c4-09ad-4ca2-9751-dc963c42d711,x-systemd.requires=/srv/dev-disk-by-label-cache01,x-systemd.requires=/srv/dev-disk-by-label-cache00 0 0
                  +/srv/dev-disk-by-label-cache01:/srv/dev-disk-by-label-cache00 /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711 fuse.mergerfs defaults,allow_other,use_ino,moveonenospc=true,category.create=epmfs,minfreespace=4G,fsname=cache:b1ab25c4-09ad-4ca2-9751-dc963c42d711,x-systemd.requires=/srv/dev-disk-by-label-c
full log here: