Posts by Symbiot

Strange. I am seeing the following error for the remote mount.

I've verified that the folder is present:


    Code
    8a973c04-47d5-4d36-a1f1-d61f5f8c6a25
    dev-disk-by-label-cache01
    dev-disk-by-label-disk03
    dev-disk-by-label-docker00
    ftp
    90354344-01e3-4a7c-9d22-46a30d9b0fea
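
For context, the listing above only proves the directory exists under /srv. Whether the remote share is actually mounted there (rather than being an empty mountpoint folder) can be checked with something like this; which of the entries is the remote mount is an assumption on my part:

Code
# list what is under /srv
ls -l /srv
# check whether a given path is an active mountpoint or just a plain folder
# (assumption: this UUID directory is the remote mount)
findmnt /srv/90354344-01e3-4a7c-9d22-46a30d9b0fea
mountpoint /srv/90354344-01e3-4a7c-9d22-46a30d9b0fea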


    Just to follow up again.


This issue is either back... or it never went away :(


    OMV version: 5.4.6-1


    MergerFS version:


    mergerfs version: 2.28.3

    FUSE library version: 2.9.7-mergerfs_2.29.0

    fusermount version: 2.9.9

    using FUSE kernel interface version 7.29




Small excerpt here:


    Code
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run fstab 2>&1' with exit code '1': debian:
----------
  ID: create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
  Function: file.accumulated
  Result: True
  Comment: Accumulator create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3 for file /etc/fstab was charged by text
  Started: 18:18:06.461443
  Duration: 0.408 ms
  Changes:
----------
  ID: mount_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
  Function: mount.mounted
  Name: /srv/dev-disk-by-label-disk00
  Result: True
  Comment: Target was already mounted
  Started: 18:18:06.462280
  Duration: 75.161 ms
  Changes:
    ----------
    umount: Forced remount because options (user_xattr) changed
----------
  ID: create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27
  Function: file.accumulated
  Result: True
  Comment: Accumulator create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27 for file /etc/fstab was charged by text
  Started: 18:18:06.537567
  Duration: 0.561 ms
  Changes:
----------
  ID:
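
To reproduce this outside the web UI, the same salt deployment named in the error message can be run by hand:

Code
# re-run the fstab deployment that the web UI triggers (command taken from the error above)
omv-salt deploy run fstab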

I tested again (I'm updating as I type) and I see the same issue... I need to remove shares/rsync etc.


I'll test again after the update, which is running now.


So subzero79 - you are saying that I should be able to:


change the mergerfs pool (i.e. add a drive, remove a drive, etc.)

and change pool attributes

without needing a reboot?
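
In other words, something like this sketch instead of a reboot (the pool path is the UUID folder from the /srv listing earlier; the umount will fail if anything still has files open on the pool):

Code
# remount the mergerfs pool so it picks up the new options from /etc/fstab
umount /srv/8a973c04-47d5-4d36-a1f1-d61f5f8c6a25
mount /srv/8a973c04-47d5-4d36-a1f1-d61f5f8c6a25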


Earlier the issue happened no matter what I did. The last update seems to have fixed this.

subzero79 - there was an update to salt yesterday... or the day before... so I'm not sure if we can count on my findings, but...


I removed all services - SMB, rsync server, etc.


I was able to make a change to the mergerfs pool.

I was also able to initialize a new hard drive and mount it without issue.


I could also mount & unmount a remote share.


All of the above usually gives me these errors, so that's positive.


I want to run some more tests before believing this issue is gone.


But so far it's good... I hope the reason is the update I installed yesterday; that would make the most sense.

Looking at the salt deploy code, the plugin is supposed to mount the mountpoint again. Since it is in use, the umount will fail. Can you confirm it is in use by some pid?


The bool "mount" parameter for salt's mount.mounted state would have to be passed dynamically on each save and apply.

In the old version, fstab would just be rewritten and the user would reboot to get the new policies.
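
For illustration, the salt state in question is mount.mounted, which takes a bool mount flag; a minimal sketch of calling it by hand (device, fstype and path here are placeholders, this is not OMV's actual state file):

Code
# minimal sketch: apply a single mount.mounted state directly on a masterless minion
# (placeholder device and path; mount=False writes fstab without (re)mounting now)
salt-call --local state.single mount.mounted \
    name=/srv/dev-disk-by-label-example \
    device=/dev/disk/by-label/example \
    fstype=ext4 \
    mount=False \
    persist=True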

Can you give me a hint on how to check if it's in use by a pid? I'm not sure how to do that.
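
For reference, the usual commands for finding the pids holding a mountpoint busy look something like this (using the pool path from the /srv listing above as an example):

Code
# list processes with open files on the mergerfs pool mountpoint
fuser -vm /srv/8a973c04-47d5-4d36-a1f1-d61f5f8c6a25
# lsof gives the same kind of answer; for a mountpoint it lists open files on that filesystem
lsof /srv/8a973c04-47d5-4d36-a1f1-d61f5f8c6a25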

I am getting a little frustrated with wiping my mergerfs pool every time I need to change something. Today I bit the bullet and did it again, even though I'll need to redo a bunch of things.

    I set up a new pool with 2 new disks.

I haven't used them anywhere yet, so they're just configured as a cache pool. Not sure what to use them for, but anyway...


    I then went to change the properties of the pool from:


    defaults,allow_other,use_ino


    to


    defaults,allow_other,use_ino,moveonenospc=true


I hit OK, hit Apply... and boom, back to square one. Can someone give me some advice here... and NOT "delete the pool and create it again"? This happens on both my pools.

One is in use with shared folders, being used for docker stuff etc., so yes, that pool is definitely in use. But looking back to when I ran OMV4, I did the same things as I do now on OMV5 and never had issues. I know OMV5 is now using salt, so it's apples and pears...
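
A quick way to compare what fstab says the pool options should be with what the live mount is actually using (the cache pool path is the one from the log excerpt below):

Code
# what the fstab entries say the mergerfs pools should use
grep mergerfs /etc/fstab
# what options the running cache pool mount is actually using
findmnt -no OPTIONS /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711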


    Log excerpt:


    Code
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run fstab 2>&1' with exit code '1': debian:
----------
  ID: create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
  Function: file.accumulated
  Result: True
  Comment: Accumulator create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3 for file /etc/fstab was charged by text
  Started: 15:15:11.506429
  Duration: 0.308 ms
  Changes:
----------
  ID: mount_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
  Function: mount.mounted
  Name: /srv/dev-disk-by-label-disk00
  Result: True
  Comment: Target was already mounted
  Started: 15:15:11.507008
  Duration: 98.303 ms
  Changes:
    ----------
    umount: Forced remount because options (acl) changed
----------
  ID: create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27
  Function: file.accumulated
  Result: True
  Comment: Accumulator create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27 for file /etc/fstab was charged by text
  Started: 15:15:11.605428
  Duration: 0.501 ms
  Changes:
----------
  ID: mount_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27
  Function: mount.mounted
  Name: /srv/dev-disk-by-label-disk01
  Result: True
  Comment: Target was already mounted
  Started: 15:15:11.605983
  Duration: 52.688 ms
  Changes:
    ----------
    umount: Forced remount because options (acl) changed
----------
  ID: create_filesystem_mountpoint_fd6bf1f2-b144-4a5c-ac31-65ee43b42c2b
  Function: file.accumulated
  Result: True
  Comment: Accumulator
&
True
  Comment: Changes were made
  Started: 15:15:12.190075
  Duration: 5.146 ms
  Changes:
    ----------
    diff:
        ---
        +++
        @@ -21,6 +21,6 @@
         /dev/disk/by-label/disk05 /srv/dev-disk-by-label-disk05 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
         /dev/disk/by-label/disk04 /srv/dev-disk-by-label-disk04 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
         /srv/dev-disk-by-label-disk00:/srv/dev-disk-by-label-disk02:/srv/dev-disk-by-label-disk03:/srv/dev-disk-by-label-disk01 /srv/8a973c04-47d5-4d36-a1f1-d61f5f8c6a25 fuse.mergerfs defaults,allow_other,use_ino,category.create=mfs,minfreespace=15G,fsname=ragnarok:8a973c04-47d5-4d36-a1f1-d61f5f8c6a25,x-systemd.requires=/srv/dev-disk-by-label-disk00,x-systemd.requires=/srv/dev-disk-by-label-disk02,x-systemd.requires=/srv/dev-disk-by-label-disk03,x-systemd.requires=/srv/dev-disk-by-label-disk01 0 0
        -/srv/dev-disk-by-label-cache01:/srv/dev-disk-by-label-cache00 /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711 fuse.mergerfs defaults,allow_other,use_ino,category.create=epmfs,minfreespace=4G,fsname=cache:b1ab25c4-09ad-4ca2-9751-dc963c42d711,x-systemd.requires=/srv/dev-disk-by-label-cache01,x-systemd.requires=/srv/dev-disk-by-label-cache00 0 0
        +/srv/dev-disk-by-label-cache01:/srv/dev-disk-by-label-cache00 /srv/b1ab25c4-09ad-4ca2-9751-dc963c42d711 fuse.mergerfs defaults,allow_other,use_ino,moveonenospc=true,category.create=epmfs,minfreespace=4G,fsname=cache:b1ab25c4-09ad-4ca2-9751-dc963c42d711,x-systemd.requires=/srv/dev-disk-by-label-c



    full log here:


    https://pastebin.com/raw/60twPkM1

I ended up doing a lot of cleaning up in config.xml and finally got it sorted by running:

Code
omv-salt deploy run


    Finally it mounted everything without incident.
It kept failing on the umount of the mergerfs filesystem. It kept saying it was busy because of residual shared folder config in config.xml that 'something' hadn't cleaned up.
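
For anyone hitting the same thing, the leftovers can be found by grepping the OMV config and fstab for the pool's uuid, roughly like this (using one of my pool uuids as an example):

Code
# shared folder / service entries still referencing the mergerfs pool
grep -n "82f5df62-d4aa-4e1b-945b-2f433faac735" /etc/openmediavault/config.xml
# and the corresponding fstab entry, for comparison
grep "82f5df62-d4aa-4e1b-945b-2f433faac735" /etc/fstab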

    After updating today the problem persists.
    I cleared my mergerfs config and redid my drives into 1 pool.
I still get this error:


    Code
Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run fstab 2>&1' with exit code '1':
/usr/lib/python3/dist-packages/salt/utils/decorators/signature.py:31: DeprecationWarning: `formatargspec` is deprecated since Python 3.5. Use `signature` and the `Signature` object directly
  *salt.utils.args.get_function_argspec(original_function)
debian:
----------
  ID: create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
  Function: file.accumulated
  Result: True
  Comment: Accumulator create_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3 for file /etc/fstab was charged by text
  Started: 20:49:15.933200
  Duration: 0.3 ms
  Changes:
----------
  ID: mount_filesystem_mountpoint_ddcef464-7442-469f-b21e-6beda192e1d3
  Function: mount.mounted
  Name: /srv/dev-disk-by-label-disk00
  Result: True
  Comment: Target was already mounted
  Started: 20:49:15.933767
  Duration: 77.907 ms
  Changes:
    ----------
    umount: Forced remount because options (acl) changed
----------
  ID: create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27
  Function: file.accumulated
  Result: True
  Comment: Accumulator create_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27 for file /etc/fstab was charged by text
  Started: 20:49:16.011825
  Duration: 0.533 ms
  Changes:
----------
  ID: mount_filesystem_mountpoint_55509fde-c866-4392-bb44-9cd50a389b27
  Function: mount.mounted
  Name: /srv/dev-disk-by-label-disk01
  Result: True
  Comment: Target was already mounted
  Started: 20:49:16.012416
  Duration: 24.858 ms
  Changes:
    ----------
    umount: Forced remount because options (acl) changed
----------
  ID: create_filesystem_mountpoint_fd6bf1f2-b144-4a5c-ac31-65ee43b42c2b
  Function: file.accumulated
  Result: True
  Comment: Accumulator create_filesystem_mountpoint_fd6bf1f2-b144-4a5c-ac31-65ee43b42c2b for file /etc/fstab was charged by text
  Started: 20:49:16.037400
  Duration: 0.513 ms
  Changes:
----------
  ID: mount_filesystem_mountpoint_fd6bf1f2-b144-4a5c-ac31-65ee43b42c2b
  Function: mount.mounted
  Name: /srv/dev-disk-by-label-disk02
  Result: True
  Comment: Target was already mounted
  Started: 20:49:16.037971
  Duration: 24.749 ms
  Changes:
    ----------
    umount: Forced remount because options (acl) changed
----------
  ID: create_filesystem_mountpoint_14b5456f-b4e9-4ab9-a8f9-660ec905fcd0
  Function: file.accumulated
  Result: True
  Comment: Accumulator create_filesystem_mountpoint_14b5456f-b4e9-4ab9-a8f9-660ec905fcd0 for file /etc/fstab was charged by text
  Started: 20:49:16.062850
  Duration: 0.512 ms
  Changes:
----------
  ID: mount_filesystem_mountpoint_14b5456f-b4e9-4ab9-a8f9-660ec905fcd0
  Function: mount.mounted
  Name: /srv/dev-disk-by-label-disk03
  Result: True
  Comment: Target was already mounted
  Started: 20:49:16.063418
  Duration: 24.674 ms
  Changes:
    ----------
    umount: Forced remount because options (acl) changed
----------
  ID: create_filesystem_mountpoint_fa668eaf-1d4b-4f49-9244-ef6a43c5be55
  Function: file.accumulated
  Result: True
  Comment: Accumulator create_filesystem_mountpoint_fa668eaf-1d4b-4f49-9244-ef6a43c5be55 for file /etc/fstab was charged by text
  Started: 20:49:16.088220
  Duration: 0.513 ms
  Changes:
----------
  ID: mount_filesystem_mountpoint_fa668eaf-1d4b-4f49-9244-ef6a43c5be55
  Function: mount.mounted
  Name: /srv/dev-disk-by-label-parity01
  Result: True
  Comment: Target was already mounted
  Started: 20:49:16.088791
  Duration: 24.717 ms
  Changes:
    ----------
    umount: Forced remount because options (acl) changed
----------
  ID: create_unionfilesystem_mountpoint_124b685f-80d6-4347-bf0d-ec017ffa2b44
  Function: file.accumulated
  Result: True
  Comment: Accumulator create_unionfilesystem_mountpoint_124b685f-80d6-4347-bf0d-ec017ffa2b44 for file /etc/fstab was charged by text
  Started: 20:49:16.113637
  Duration: 0.542 ms
  Changes:
----------
  ID: mount_filesystem_mountpoint_124b685f-80d6-4347-bf0d-ec017ffa2b44
  Function: mount.mounted
  Name: /srv/82f5df62-d4aa-4e1b-945b-2f433faac735
  Result: False
  Comment: Unable to unmount /srv/82f5df62-d4aa-4e1b-945b-2f433faac735: umount: /srv/82f5df62-d4aa-4e1b-945b-2f433faac735: target is busy..
  Started: 20:49:16.114236
  Duration: 16.345 ms
  Changes:
    ----------
    umount: Forced unmount and mount because options (cache.files=off) changed
----------
  ID: append_fstab_entries
  Function: file.blockreplace
  Name: /etc/fstab
  Result: True
  Comment: Changes were made
  Started: 20:49:16.131897
  Duration: 2.559 ms
  Changes:
    ----------
    diff:
        ---
        +++
        @@ -16,5 +16,4 @@
         /dev/disk/by-label/disk03 /srv/dev-disk-by-label-disk03 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
         /dev/disk/by-label/parity01 /srv/dev-disk-by-label-parity01 ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
         /srv/dev-disk-by-label-disk00:/srv/dev-disk-by-label-disk02:/srv/dev-disk-by-label-disk01:/srv/dev-disk-by-label-disk03 /srv/82f5df62-d4aa-4e1b-945b-2f433faac735 fuse.mergerfs defaults,allow_other,cache.files=off,use_ino,moveonenospc=true,category.create=mfs,minfreespace=30G,x-systemd.requires=/srv/dev-disk-by-label-disk00,x-systemd.requires=/srv/dev-disk-by-label-disk02,x-systemd.requires=/srv/dev-disk-by-label-disk01,x-systemd.requires=/srv/dev-disk-by-label-disk03 0 0
        -//192.168.1.123/localdown /srv/e690e5b1-13eb-40c2-8e45-76ff3e237541 cifs _netdev,iocharset=utf8,vers=3.0,nofail,file_mode=0755, dir_mode=0755,credentials=/root/.cifscredentials-ae7b4f16-cf28-41ef-8614-3e772fd19380 0 0
         # <<< [openmediavault]

Summary for debian
-------------
Succeeded: 12 (changed=7)
Failed: 1
-------------
Total states run: 13
Total run time: 198.722 ms

    hi


    I have 2 servers.


    1 in remote, 1 at home.


    both run OMV.


    box remote = omv5
    box local = omv 4


The remote box is running openvpn-as in docker (through portainer etc.).
The local box is running the openvpn-client from dperson/openvpn-client.


The OpenVPN server works as expected. I've set it up to allow access to local resources, so downloading the ovpn file to my Windows machine, connecting to my phone's internet sharing and starting OpenVPN gives me access to the resources. All good.


Now I want it to work from my local OMV4 box.
From within the container I can ping the server's VPN address, and I can ssh from the local box to the VPN IP of my server.
But when I start an rsync from the local box to the remote box via the same IP, I get connection refused.


Docker is running in bridge mode and I haven't done anything with regard to ports.
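
For what it's worth, a quick check of whether the rsync port is even reachable over the tunnel (this assumes rsync is talking to an rsync daemon on the default port 873; the VPN IP below is a placeholder):

Code
# placeholder address: the server's VPN IP (the one ssh already works against)
SERVER_VPN_IP=10.8.0.1
# is anything listening on the rsync daemon port over the tunnel?
nc -vz "$SERVER_VPN_IP" 873
# alternatively run rsync over ssh, since ssh to that IP already works
rsync -av -e ssh /path/to/src "user@$SERVER_VPN_IP:/path/to/dest"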


    Ideas?

Looks like it cannot find this: Jinja error: No such object: //system/fstab/mntent[uuid='ba3f52d4-34d4-41ae-b585-6818bc72c3c8']


If it's not in /etc/fstab,
then look in /etc/openmediavault/config.xml.


If it's missing in fstab, you may need to delete it in config.xml.
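
Something like this shows quickly where that uuid still lives (uuid taken from the Jinja error above):

Code
# is the mntent still referenced in fstab?
grep "ba3f52d4-34d4-41ae-b585-6818bc72c3c8" /etc/fstab
# is it still present in the OMV config database?
grep -n "ba3f52d4-34d4-41ae-b585-6818bc72c3c8" /etc/openmediavault/config.xml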


A reboot might also fix it :)