Error adding Unionfs

    • OMV 5.x (beta)
    • Error adding Unionfs

      Hello, I'm trying to set up the SnapRAID and Union Filesystem combination.
      As a guide I'm using the video from Techno Dad Life called: "Snapraid and Unionfs: Advanced Array Options on Openmediavault (Better than ZFS and Unraid)".
      So here is my problem: when I add a filesystem to UnionFS, press Save, then press Apply and confirm the changes, I get the following error:

      Source Code

      An error has occured
      Error #0:
      OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run collectd 2>&1' with exit code '1': findfs: unable to resolve 'UUID=f14c14cb-693b-49f7-81ea-4753d869e31b'
      debian:
      Data failed to compile:
      ----------
      Rendering SLS 'base:omv.deploy.collectd.plugins.disk' failed: Jinja error: Command '['findfs', 'UUID=f14c14cb-693b-49f7-81ea-4753d869e31b']' returned non-zero exit status 1.
      Traceback (most recent call last):
        File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 394, in render_jinja_tmpl
          output = template.render(**decoded_context)
        File "/usr/lib/python3/dist-packages/jinja2/asyncsupport.py", line 76, in render
          return original_render(self, *args, **kwargs)
        File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1008, in render
          return self.environment.handle_exception(exc_info, True)
        File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 780, in handle_exception
          reraise(exc_type, exc_value, tb)
        File "/usr/lib/python3/dist-packages/jinja2/_compat.py", line 37, in reraise
          raise value.with_traceback(tb)
        File "<template>", line 49, in top-level template code
        File "/var/cache/salt/minion/extmods/modules/omv_utils.py", line 165, in get_fs_parent_device_file
          return fs.get_parent_device_file()
        File "/usr/lib/python3/dist-packages/openmediavault/fs/__init__.py", line 163, in get_parent_device_file
          device = pyudev.Devices.from_device_file(context, self.device_file)
        File "/usr/lib/python3/dist-packages/openmediavault/fs/__init__.py", line 127, in device_file
          ['findfs', 'UUID={}'.format(self._id)]
        File "/usr/lib/python3/dist-packages/openmediavault/subprocess.py", line 63, in check_output
          return subprocess.check_output(*popenargs, **kwargs)
        File "/usr/lib/python3.7/subprocess.py", line 395, in check_output
          **kwargs).stdout
        File "/usr/lib/python3.7/subprocess.py", line 487, in run
          output=stdout, stderr=stderr)
      subprocess.CalledProcessError: Command '['findfs', 'UUID=f14c14cb-693b-49f7-81ea-4753d869e31b']' returned non-zero exit status 1.
      ; line 49
      ---
      [...]
      # "dir": "/srv/dev-disk-by-id-scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part1",
      # "freq": 0,
      # "fsname": "008530ff-a134-4264-898d-9ce30eeab927",
      # }
      {% if salt['mount.is_mounted'](mountpoint.dir) %}
      {% set disk = salt['omv_utils.get_fs_parent_device_file'](mountpoint.fsname) %} <======================
      # Extract the device name from '/dev/xxx'.
      {% set _ = disks.append(disk[5:]) %}
      {% endif %}
      {% endfor %}
      # Append the root filesystem.
      [...]
      --- in /usr/share/php/openmediavault/system/process.inc:182
      Stack trace:
      #0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(60): OMV\System\Process->execute()
      #1 /usr/share/openmediavault/engined/rpc/config.inc(167): OMV\Engine\Module\ServiceAbstract->deploy()
      #2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
      #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
      #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatus3R...', '/tmp/bgoutputsA...')
      #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
      #7 /usr/share/openmediavault/engined/rpc/config.inc(189): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
      #8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
      #9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
      #11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
      #12 {main}
      Does anyone know what the problem is and how I can fix it?
      OMV version -> 5.1.1-1 (Usul)
      Kernel version -> 5.3.0-0bpo.2-amd64


      Many thanks in advance
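
      The traceback boils down to `findfs` being unable to resolve that UUID. A quick diagnostic sketch (not OMV-specific, and the helper function is hypothetical) is to compare the UUID from the error against what `blkid` actually reports:

      ```shell
      # The deploy fails because 'findfs UUID=...' can't resolve the UUID.
      # This helper checks whether a UUID appears in blkid-style output, so
      # a stale reference in OMV's config can be spotted. It reads stdin,
      # which also makes it easy to run against saved blkid output.
      uuid_in_blkid() {
          grep -q "UUID=\"$1\""
      }

      # Live check (run as root; the UUID is the one from the traceback):
      #   blkid | uuid_in_blkid "f14c14cb-693b-49f7-81ea-4753d869e31b" \
      #       && echo "UUID is known" || echo "UUID is not known to blkid"
      ```

      If the UUID is not in the `blkid` output, something in the OMV database is still referencing a filesystem that no longer exists.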
    • What kind of filesystem? What are your options? What version of the plugin? I'm not able to replicate this.
      omv 5.1.2 usul | 64 bit | 5.3 proxmox kernel | omvextrasorg 5.1.7
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • dropje wrote:

      Is there more info that I can provide to troubleshoot?
      That screenshot looks fine. The output of sudo blkid might help.
    • Here is the output of sudo blkid:

      Source Code

      root@openmediavault:~# sudo blkid
      /dev/sdc1: LABEL="NASHDD2" UUID="222f773c-0ac3-4ea6-b138-61f043a2daa3" TYPE="ext4" PARTUUID="b9939e7a-1468-4ce0-806a-26dc1b0c9c6d"
      /dev/sdd1: UUID="a3dfdae2-eaff-42bf-8cec-c0d43c66ef82" TYPE="ext4" PARTUUID="3f1c896d-01"
      /dev/sda1: LABEL="NASHDD1" UUID="42a8c83a-8d30-4df4-bd00-cafb4c42b942" TYPE="ext4" PARTUUID="f733dbb7-d669-47a4-abf9-df9713dd92c1"
      /dev/sdg1: LABEL="NASHDD5" UUID="d5e66d91-492f-4a65-a3b0-56be024fdfc2" TYPE="ext4" PARTUUID="6b21772a-b20a-49ab-8a6f-3778a92394f5"
      /dev/sde1: LABEL="NASHDD3" UUID="70033ccc-0cc1-4579-b269-78e3217a600f" TYPE="ext4" PARTUUID="fb790f36-9f7f-4873-a479-387c7155ac1a"
      /dev/sdf1: LABEL="NASHDD4" UUID="d18ecf86-ef05-45b5-aa89-3ebffec83d80" TYPE="ext4" PARTUUID="8b938ac1-d573-4e9b-9211-4fb08d122256"
      /dev/sdb1: LABEL="NASHDD6" UUID="f5f61b72-bee9-4585-bbfb-d97d92c9b86b" TYPE="ext4" PARTUUID="0ff69ec2-66e0-4c84-814d-87cd8bf7375d"
      /dev/sdh1: LABEL="NASSSD1" UUID="925f0651-dbfa-4a22-81a6-70c609df42e1" TYPE="ext4" PARTUUID="bb85bb81-4fd3-4c38-922e-c8f6399dff1b"
    • Nope, I still get the same error, but after the error the filesystem appears to be working (see picture below).

      However, the media filesystem that UnionFS supposedly created doesn't exist.

      When I go back to UnionFS to edit the media filesystem, I get the following error:

      Source Code

      Error #0:
      OMV\Config\DatabaseException: Failed to execute XPath query '/config/services/unionfilesystems/filesystem[uuid='37a13d33-a232-4b0f-be67-ab3883abc3e7']'. in /usr/share/php/openmediavault/config/database.inc:78
      Stack trace:
      #0 /usr/share/openmediavault/engined/rpc/unionfilesystems.inc(166): OMV\Config\Database->get('conf.service.un...', '37a13d33-a232-4...')
      #1 [internal function]: OMV\Engined\Rpc\UnionFilesystems->get(Array, Array)
      #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('get', Array, Array)
      #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('UnionFilesystem...', 'get', Array, Array, 1)
      #5 {main}
      After the error, the edit window comes up, but the name is gone, as well as the devices that were selected (see picture below).

      I hope this information is of some help.
    • Something still isn't right. Post the output of:

      grep mergerfs /etc/fstab
      dpkg -l | grep -e openm -e merg

      And what are you using to change the theme? I'm wondering if that is corrupting something as well.


    • Hi,

      I was just searching for a similar error with the UnionFS plugin. I can add HDDs and create a pool, but after I create a shared folder, I start getting this error. It shows every time I click on the "Shared folders" tab.

      Source Code

      Error #0:
      OMV\Exception: Couldn't extract an UUID from the provided path '/sharedfolders/Series-Pool'. in /usr/share/php/openmediavault/system/filesystem/backend/mergerfs.inc:87
      Stack trace:
      #0 /usr/share/php/openmediavault/system/filesystem/backend/mergerfs.inc(64): OMV\System\Filesystem\Backend\Mergerfs::extractUuidFromMountPoint('/sharedfolders/...')
      #1 /usr/share/php/openmediavault/system/filesystem/filesystem.inc(878): OMV\System\Filesystem\Backend\Mergerfs->getImpl('5-Series3-HDD:4...')
      #2 /usr/share/openmediavault/engined/rpc/sharemgmt.inc(158): OMV\System\Filesystem\Filesystem::getImplByMountPoint('/srv/9368d400-3...')
      #3 [internal function]: Engined\Rpc\ShareMgmt->enumerateSharedFolders(NULL, Array)
      #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #5 /usr/share/openmediavault/engined/rpc/sharemgmt.inc(205): OMV\Rpc\ServiceAbstract->callMethod('enumerateShared...', NULL, Array)
      #6 [internal function]: Engined\Rpc\ShareMgmt->getList(Array, Array)
      #7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #8 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getList', Array, Array)
      #9 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('ShareMgmt', 'getList', Array, Array, 1)
      #10 {main}
      I cannot see the created shared folder as long as the unionfs pool is there. When I remove the pool, the shared folder shows up again.

      My fstab->

      Source Code

      UUID=d430b2fe-e5d9-4740-8dd8-ea1b48d7ee5f / ext4 noatime,nodiratime,discard,errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=e8937a61-8c3d-41c4-b513-855caa99f211 none swap sw 0 0
      /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      # >>> [openmediavault]
      /dev/disk/by-label/13-Series1-HDD /srv/dev-disk-by-label-13-Series1-HDD ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /dev/disk/by-label/14-Series2-HDD /srv/dev-disk-by-label-14-Series2-HDD ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /dev/disk/by-label/15-Series3-HDD /srv/dev-disk-by-label-15-Series3-HDD ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /srv/dev-disk-by-label-15-Series3-HDD:/srv/dev-disk-by-label-14-Series2-HDD:/srv/dev-disk-by-label-13-Series1-HDD /srv/9368d400-3871-428a-b909-6cc9f251b578 fuse.mergerfs defaults,allow_other,direct_io,use_ino$
      /srv/9368d400-3871-428a-b909-6cc9f251b578/Series-Pool /export/Series-Pool none bind,nofail 0 0
      # <<< [openmediavault]
      Output of the commands, in case it helps ->
      grep mergerfs /etc/fstab

      Source Code

      root@OMV-2:~# grep mergerfs /etc/fstab
      /srv/dev-disk-by-label-15-Series3-HDD:/srv/dev-disk-by-label-14-Series2-HDD:/srv/dev-disk-by-label-13-Series1-HDD /srv/9368d400-3871-428a-b909-6cc9f251b578 fuse.mergerfs defaults,allow_other,direct_io,use_ino,noforget,category.create=eplfs,minfreespace=40G,x-systemd.requires=/srv/dev-disk-by-label-15-Series3-HDD,x-systemd.requires=/srv/dev-disk-by-label-14-Series2-HDD,x-systemd.requires=/srv/dev-disk-by-label-13-Series1-HDD 0 0
      dpkg -l | grep -e openm - merg

      Source Code

      root@OMV-2:~# dpkg -l | grep -e openm - merg
      (standard input):ii openmediavault 5.1.1-1 all openmediavault - The open network attached storage solution
      (standard input):ii openmediavault-clamav 5.0.1-1 all OpenMediaVault ClamAV plugin
      (standard input):ii openmediavault-keyring 1.0 all GnuPG archive keys of the OpenMediaVault archive
      (standard input):ii openmediavault-omvextrasorg 5.1.6 all OMV-Extras.org Package Repositories for OpenMediaVault
      (standard input):ii openmediavault-snapraid 5.0.1 all snapraid plugin for OpenMediaVault.
      (standard input):ii openmediavault-unionfilesystems 5.0.2 all Union filesystems plugin for OpenMediaVault.
      grep: merg: No such file or directory

      I have to say that at first I did not see any fault; it started to come up after I filled one HDD, and suddenly I could not move or write any more files once I got down to the 40G minimum free space I had set. I have the "Existing path, least free space" setting, but it did not continue onto a new drive. This was working perfectly on OMV4 but not here in OMV5.
    • nightrider wrote:

      I have setting "Existing path, least free space", but it did not continue on a new drive? This was working perfectly on OMV4 but not here in OMV5.
      Does the path exist on every drive in the pool?
      --
      Google is your friend and Bob's your uncle!

      RAID - Its ability to disappoint is inversely proportional to the user's understanding of it.

      ASRock Rack C2550D4I - 16GB CC - Silverstone DS380
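
      That question can be checked directly from the shell. A sketch, assuming the branch mountpoints from the fstab above; the relative path "Series" and the helper name are examples, not part of OMV:

      ```shell
      # With a path-preserving create policy (eplfs/epmfs), mergerfs only
      # considers branches where the relative path already exists. This
      # reports, for a given relative path, which branches have it.
      check_branches() {
          rel="$1"; shift
          for branch in "$@"; do
              if [ -d "$branch/$rel" ]; then
                  echo "present: $branch/$rel"
              else
                  echo "MISSING: $branch/$rel"
              fi
          done
      }

      # Example with the mountpoints from this thread:
      #   check_branches "Series" \
      #       /srv/dev-disk-by-label-13-Series1-HDD \
      #       /srv/dev-disk-by-label-14-Series2-HDD \
      #       /srv/dev-disk-by-label-15-Series3-HDD
      ```

      Any branch reported as MISSING is never considered by a path-preserving policy, which would explain writes failing while other drives still have space.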
    • nightrider wrote:

      This was working perfectly on OMV4 but not here in OMV5.
      The version of mergerfs is exactly the same. I really don't know why people are having problems with the OMV 5.x version of the plugin. I can't replicate these problems. It is Salt mounting the drive, but it's strange that all of my tests work (yes, I actually use this plugin).
    • nightrider wrote:

      I have to say that first I did not see any fault, the fault started to come up after I filled one HDD and suddenly I could not move/write any more files when I came down to only 40G minimum free space I set. I have setting "Existing path, least free space", but it did not continue on a new drive? This was working perfectly on OMV4 but not here in OMV5.
      I found someone who has the same problem: michaelxander.com/diy-nas/. They have the following explanation:

      https://michaelxander.com/diy-nas/ wrote:

      Note: The default policy epmfs doesn't fit me, because since v2.25 path preserving policies will no longer fall back to non-path preserving policies. This means once you run out of space on drives that have the relative path, adding a new file will fail (out of space error).
      So, would it be a good idea to make "Most free space" the default option instead of "Existing path, most free space"?
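
      For reference, the create policy is just the category.create option in the mergerfs fstab line, and mergerfs can also switch it at runtime through extended attributes on its virtual .mergerfs control file (assuming a build with xattr support and the attr package installed). A sketch; the pool mountpoint is the one from my fstab below:

      ```shell
      # mergerfs exposes runtime options as extended attributes on a
      # virtual ".mergerfs" file inside the pool mountpoint. Switching the
      # create policy from epmfs to mfs (most free space) without
      # remounting looks like this. The pool path is from this thread.
      pool="/srv/4ee40582-7941-4584-bda8-c0a8a91c0b7b"

      if [ -e "$pool/.mergerfs" ]; then
          # Show the current policy, then change it (needs getfattr/setfattr).
          getfattr -n user.mergerfs.category.create "$pool/.mergerfs"
          setfattr -n user.mergerfs.category.create -v mfs "$pool/.mergerfs"
      else
          echo "no mergerfs pool mounted at $pool"
      fi
      ```

      The fstab entry is regenerated by the plugin, so a runtime change like this only lasts until the next remount; the dropdown in the plugin is the persistent way to set it.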

      ryecoaaron wrote:

      And what are you using to change the theme? I'm wondering if that is corrupting something as well.
      I'm using a browser plugin called Dark Reader. The OMV theme itself is not changed.

      Here is the output from: grep mergerfs /etc/fstab.

      Source Code

      root@openmediavault:~# grep mergerfs /etc/fstab
      /srv/dev-disk-by-label-NASHDD1:/srv/dev-disk-by-label-NASHDD6:/srv/dev-disk-by-label-NASHDD2:/srv/dev-disk-by-label-NASHDD4 /srv/4ee40582-7941-4584-bda8-c0a8a91c0b7b fuse.mergerfs defaults,allow_other,direct_io,use_ino,category.create=epmfs,minfreespace=4G,x-systemd.requires=/srv/dev-disk-by-label-NASHDD1,x-systemd.requires=/srv/dev-disk-by-label-NASHDD6,x-systemd.requires=/srv/dev-disk-by-label-NASHDD2,x-systemd.requires=/srv/dev-disk-by-label-NASHDD4 0 0

      Here is the output from: dpkg -l | grep -e openm -e merg.

      Source Code

      root@openmediavault:~# dpkg -l | grep -e openm -e merg
      ii mergerfs 2.28.2~debian-buster amd64 another FUSE union filesystem
      ii openmediavault 5.1.1-1 all openmediavault - The open network attached storage solution
      ii openmediavault-apttool 3.6 all apt tool plugin for OpenMediaVault.
      ii openmediavault-diskstats 5.0.1-1 all OpenMediaVault disk monitoring plugin
      ii openmediavault-flashmemory 5.0.1 all folder2ram plugin for OpenMediaVault
      ii openmediavault-keyring 1.0 all GnuPG archive keys of the OpenMediaVault archive
      ii openmediavault-omvextrasorg 5.1.6 all OMV-Extras.org Package Repositories for OpenMediaVault
      ii openmediavault-resetperms 5.0 all Reset Permissions
      ii openmediavault-snapraid 5.0.1 all snapraid plugin for OpenMediaVault.
      ii openmediavault-unionfilesystems 5.0.2 all Union filesystems plugin for OpenMediaVault.
      ii openmediavault-wol 3.4.2 all OpenMediaVault WOL plugin


    • All of that looks correct. I really don't know what to change since I can't replicate it.
    • dropje wrote:

      I found someone who has the same problem: michaelxander.com/diy-nas/. They have the following explanation:
      Thank you very much, that solved my problem. It does not work with "Create policy" set to "Existing path, least free space", but it does work with the "Least free space" option, together with having the same relative path created on all disks.


      ryecoaaron wrote:

      I can't replicate these problems.
      The error "Couldn't extract an UUID from the provided path", which showed up for me after creating the NFS share when clicking back to the "Shared folders" tab, disappeared after a reboot.

      Everything seems to work for now, except what I mentioned about "Existing path", which does not seem to work in the UnionFS plugin. It's the same as what is mentioned in the article dropje linked above.

      Now I am curious why that is. If it is as he mentions in the article, why is it an option in the plugin?


    • nightrider wrote:

      Now I am curious why that is? If that is the case as he mention in the article there, why is it an option in the plugin?
      The plugin includes all of the options (policies) from here - github.com/trapexit/mergerfs. If one doesn't work, or isn't working how you expect, perhaps file an issue on the mergerfs GitHub. Otherwise, maybe @trapexit (author of mergerfs) can explain more about this policy (epmfs) and why it wouldn't write to another disk if that disk didn't have any folders.
    • Path preservation is working just fine. What you described is exactly the behavior expected.

      I'm not sure I understand what you expect. Path preservation preserves the paths. As the docs mention, it will only choose from branches where the relative base path of the thing being worked on exists. If you only have one drive with that one directory, then it will only ever consider that drive. If it runs out of space, you should rightly get out-of-space errors. The "change" referenced on that website was a bug fix. If you "fall back" to another drive... what's the point of path preservation in the first place? And if you don't care what drive your data is on, why would you reduce your speed and reliability by putting everything on one drive while the others sit around unused?

      Path preservation is a niche feature for people who want to *manually* manage their drives but have them appear as one pool.

      github.com/trapexit/mergerfs#path-preservation
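
      Following from that: if you want a path-preserving policy while keeping every drive eligible, the relative path can be pre-created on each branch yourself. A sketch; the helper name, the "Series" directory, and the branch paths are examples:

      ```shell
      # Pre-create a relative path on every branch so that a
      # path-preserving policy (e.g. epmfs/eplfs) will consider all of
      # them when creating new files under that path.
      create_on_all_branches() {
          rel="$1"; shift
          for branch in "$@"; do
              mkdir -p "$branch/$rel"
          done
      }

      # Example with the branch mountpoints from earlier in the thread
      # (run as root):
      #   create_on_all_branches "Series" \
      #       /srv/dev-disk-by-label-13-Series1-HDD \
      #       /srv/dev-disk-by-label-14-Series2-HDD \
      #       /srv/dev-disk-by-label-15-Series3-HDD
      ```

      That keeps manual control over placement, which is the point of path preservation, while avoiding the out-of-space errors once one branch fills up.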
