Error adding Unionfs

    • OMV 5.x (beta)
    • Error adding Unionfs

      Hello, I'm trying to set up the SnapRAID and Union Filesystem combination.
      As a guide I'm using the video from Techno Dad Life called "Snapraid and Unionfs: Advanced Array Options on Openmediavault (Better than ZFS and Unraid)".
      So here is my problem: when I add a filesystem to Unionfs, press Save, press Apply, and confirm the changes, I get the following error (for some reason I can't turn the highlighting off):

      Source Code

      An error has occured
      Error #0:
      OMV\ExecException: Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run collectd 2>&1' with exit code '1': findfs: unable to resolve 'UUID=f14c14cb-693b-49f7-81ea-4753d869e31b'
      debian:
      Data failed to compile:
      ----------
      Rendering SLS 'base:omv.deploy.collectd.plugins.disk' failed: Jinja error: Command '['findfs', 'UUID=f14c14cb-693b-49f7-81ea-4753d869e31b']' returned non-zero exit status 1.
      Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/salt/utils/templates.py", line 394, in render_jinja_tmpl
      output = template.render(**decoded_context)
      File "/usr/lib/python3/dist-packages/jinja2/asyncsupport.py", line 76, in render
      return original_render(self, *args, **kwargs)
      File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 1008, in render
      return self.environment.handle_exception(exc_info, True)
      File "/usr/lib/python3/dist-packages/jinja2/environment.py", line 780, in handle_exception
      reraise(exc_type, exc_value, tb)
      File "/usr/lib/python3/dist-packages/jinja2/_compat.py", line 37, in reraise
      raise value.with_traceback(tb)
      File "<template>", line 49, in top-level template code
      File "/var/cache/salt/minion/extmods/modules/omv_utils.py", line 165, in get_fs_parent_device_file
      return fs.get_parent_device_file()
      File "/usr/lib/python3/dist-packages/openmediavault/fs/__init__.py", line 163, in get_parent_device_file
      device = pyudev.Devices.from_device_file(context, self.device_file)
      File "/usr/lib/python3/dist-packages/openmediavault/fs/__init__.py", line 127, in device_file
      ['findfs', 'UUID={}'.format(self._id)]
      File "/usr/lib/python3/dist-packages/openmediavault/subprocess.py", line 63, in check_output
      return subprocess.check_output(*popenargs, **kwargs)
      File "/usr/lib/python3.7/subprocess.py", line 395, in check_output
      **kwargs).stdout
      File "/usr/lib/python3.7/subprocess.py", line 487, in run
      output=stdout, stderr=stderr)
      subprocess.CalledProcessError: Command '['findfs', 'UUID=f14c14cb-693b-49f7-81ea-4753d869e31b']' returned non-zero exit status 1.
      ; line 49
      ---
      [...]
      # "dir": "/srv/dev-disk-by-id-scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-1-part1",
      # "freq": 0,
      # "fsname": "008530ff-a134-4264-898d-9ce30eeab927",
      # }
      {% if salt['mount.is_mounted'](mountpoint.dir) %}
      {% set disk = salt['omv_utils.get_fs_parent_device_file'](mountpoint.fsname) %} <======================
      # Extract the device name from '/dev/xxx'.
      {% set _ = disks.append(disk[5:]) %}
      {% endif %}
      {% endfor %}
      # Append the root filesystem.
      [...]
      --- in /usr/share/php/openmediavault/system/process.inc:182
      Stack trace:
      #0 /usr/share/php/openmediavault/engine/module/serviceabstract.inc(60): OMV\System\Process->execute()
      #1 /usr/share/openmediavault/engined/rpc/config.inc(167): OMV\Engine\Module\ServiceAbstract->deploy()
      #2 [internal function]: Engined\Rpc\Config->applyChanges(Array, Array)
      #3 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(149): OMV\Rpc\ServiceAbstract->callMethod('applyChanges', Array, Array)
      #5 /usr/share/php/openmediavault/rpc/serviceabstract.inc(588): OMV\Rpc\ServiceAbstract->OMV\Rpc\{closure}('/tmp/bgstatus3R...', '/tmp/bgoutputsA...')
      #6 /usr/share/php/openmediavault/rpc/serviceabstract.inc(159): OMV\Rpc\ServiceAbstract->execBgProc(Object(Closure))
      #7 /usr/share/openmediavault/engined/rpc/config.inc(189): OMV\Rpc\ServiceAbstract->callMethodBg('applyChanges', Array, Array)
      #8 [internal function]: Engined\Rpc\Config->applyChangesBg(Array, Array)
      #9 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #10 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('applyChangesBg', Array, Array)
      #11 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('Config', 'applyChangesBg', Array, Array, 1)
      #12 {main}
      Does anyone know what the problem is and how I can fix it?
      OMV version -> 5.1.1-1 (Usul)
      Kernel version -> 5.3.0-0bpo.2-amd64
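      The key line is `findfs: unable to resolve 'UUID=...'`: while rendering the collectd disk plugin config, Salt asks findfs to turn a filesystem UUID from OMV's database into a device, and that lookup fails. The check can be reproduced by hand; this is only a diagnostic sketch, with the UUID copied from the traceback above.

```shell
#!/bin/sh
# Diagnostic sketch: can the UUID from the collectd error be resolved to a
# block device? The UUID is copied from the traceback above.
uuid="f14c14cb-693b-49f7-81ea-4753d869e31b"

if command -v findfs >/dev/null 2>&1 && findfs "UUID=$uuid" 2>/dev/null; then
    # findfs printed the device node, so the UUID is known to the system.
    result="resolves"
else
    # This is the same failure Salt hits while rendering the collectd config.
    result="missing"
fi
echo "UUID $uuid: $result"
```

      If the UUID does not resolve, OMV's database still references a filesystem that no longer exists on disk, which is what makes `omv-salt deploy run collectd` fail.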


      Many thanks in advance
    • What kind of filesystem? What are your options? What version of the plugin? I'm not able to replicate this.
      omv 5.1.2 usul | 64 bit | 5.3 proxmox kernel | omvextrasorg 5.1.9
      omv-extras.org plugins source code and issue tracker - github

      Please read this before posting a question and this and this for docker questions.
      Please don't PM for support... Too many PMs!
    • dropje wrote:

      Is there more info that I can provide to troubleshoot?
      That screenshot looks fine. The output of sudo blkid might help.
    • Here is the output of sudo blkid:

      Source Code

      root@openmediavault:~# sudo blkid
      /dev/sdc1: LABEL="NASHDD2" UUID="222f773c-0ac3-4ea6-b138-61f043a2daa3" TYPE="ext4" PARTUUID="b9939e7a-1468-4ce0-806a-26dc1b0c9c6d"
      /dev/sdd1: UUID="a3dfdae2-eaff-42bf-8cec-c0d43c66ef82" TYPE="ext4" PARTUUID="3f1c896d-01"
      /dev/sda1: LABEL="NASHDD1" UUID="42a8c83a-8d30-4df4-bd00-cafb4c42b942" TYPE="ext4" PARTUUID="f733dbb7-d669-47a4-abf9-df9713dd92c1"
      /dev/sdg1: LABEL="NASHDD5" UUID="d5e66d91-492f-4a65-a3b0-56be024fdfc2" TYPE="ext4" PARTUUID="6b21772a-b20a-49ab-8a6f-3778a92394f5"
      /dev/sde1: LABEL="NASHDD3" UUID="70033ccc-0cc1-4579-b269-78e3217a600f" TYPE="ext4" PARTUUID="fb790f36-9f7f-4873-a479-387c7155ac1a"
      /dev/sdf1: LABEL="NASHDD4" UUID="d18ecf86-ef05-45b5-aa89-3ebffec83d80" TYPE="ext4" PARTUUID="8b938ac1-d573-4e9b-9211-4fb08d122256"
      /dev/sdb1: LABEL="NASHDD6" UUID="f5f61b72-bee9-4585-bbfb-d97d92c9b86b" TYPE="ext4" PARTUUID="0ff69ec2-66e0-4c84-814d-87cd8bf7375d"
      /dev/sdh1: LABEL="NASSSD1" UUID="925f0651-dbfa-4a22-81a6-70c609df42e1" TYPE="ext4" PARTUUID="bb85bb81-4fd3-4c38-922e-c8f6399dff1b"
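      Worth noting: the UUID from the collectd error (f14c14cb-...) does not appear anywhere in this blkid listing. That can be checked mechanically; the sketch below runs the comparison against a shortened inline copy of the listing rather than the live system, so the device lines are just samples taken from this thread.

```shell
#!/bin/sh
# Sketch: is the UUID that findfs cannot resolve present in the blkid output?
# A few lines of the listing above are inlined for the comparison.
error_uuid="f14c14cb-693b-49f7-81ea-4753d869e31b"

blkid_output='/dev/sdc1: LABEL="NASHDD2" UUID="222f773c-0ac3-4ea6-b138-61f043a2daa3" TYPE="ext4"
/dev/sdd1: UUID="a3dfdae2-eaff-42bf-8cec-c0d43c66ef82" TYPE="ext4"
/dev/sda1: LABEL="NASHDD1" UUID="42a8c83a-8d30-4df4-bd00-cafb4c42b942" TYPE="ext4"'

if printf '%s\n' "$blkid_output" | grep -q "$error_uuid"; then
    echo "UUID present"
else
    echo "UUID absent: the config references a filesystem that no longer exists"
fi
```

      On a live system you would pipe real `blkid` output into the grep instead; an absent UUID points at a stale filesystem reference left behind in OMV's config database.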
    • Nope, I still get the same error, but after the error the filesystem appears to be working (see picture below).

      However, the media filesystem that Unionfs supposedly created doesn't exist.

      When I go back to Unionfs to edit the media filesystem, I get the following error:

      Source Code

      Error #0:
      OMV\Config\DatabaseException: Failed to execute XPath query '/config/services/unionfilesystems/filesystem[uuid='37a13d33-a232-4b0f-be67-ab3883abc3e7']'. in /usr/share/php/openmediavault/config/database.inc:78
      Stack trace:
      #0 /usr/share/openmediavault/engined/rpc/unionfilesystems.inc(166): OMV\Config\Database->get('conf.service.un...', '37a13d33-a232-4...')
      #1 [internal function]: OMV\Engined\Rpc\UnionFilesystems->get(Array, Array)
      #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('get', Array, Array)
      #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('UnionFilesystem...', 'get', Array, Array, 1)
      #5 {main}
      After the error, the edit window comes up and the name is gone as well as the devices that were selected (see picture below).


      I hope this information is of some help.
    • Something still isn't right. Post the output of:

      grep mergerfs /etc/fstab
      dpkg -l | grep -e openm -e merg

      And what are you using to change the theme? I'm wondering if that is corrupting something as well.

      The post was edited 1 time, last by ryecoaaron ().

    • Hi,

      I was just searching for this same error with the UnionFS plugin. I can add HDDs and create a pool, but after I create a shared folder I start to get this error. Every time I click on the "Shared folder" tab, the error shows.

      Source Code

      Error #0:
      OMV\Exception: Couldn't extract an UUID from the provided path '/sharedfolders/Series-Pool'. in /usr/share/php/openmediavault/system/filesystem/backend/mergerfs.inc:87
      Stack trace:
      #0 /usr/share/php/openmediavault/system/filesystem/backend/mergerfs.inc(64): OMV\System\Filesystem\Backend\Mergerfs::extractUuidFromMountPoint('/sharedfolders/...')
      #1 /usr/share/php/openmediavault/system/filesystem/filesystem.inc(878): OMV\System\Filesystem\Backend\Mergerfs->getImpl('5-Series3-HDD:4...')
      #2 /usr/share/openmediavault/engined/rpc/sharemgmt.inc(158): OMV\System\Filesystem\Filesystem::getImplByMountPoint('/srv/9368d400-3...')
      #3 [internal function]: Engined\Rpc\ShareMgmt->enumerateSharedFolders(NULL, Array)
      #4 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #5 /usr/share/openmediavault/engined/rpc/sharemgmt.inc(205): OMV\Rpc\ServiceAbstract->callMethod('enumerateShared...', NULL, Array)
      #6 [internal function]: Engined\Rpc\ShareMgmt->getList(Array, Array)
      #7 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
      #8 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('getList', Array, Array)
      #9 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('ShareMgmt', 'getList', Array, Array, 1)
      #10 {main}
      I cannot see the created shared folder as long as the unionfs pool is there. When I remove the pool, the shared folder shows up again.

      My fstab->

      Source Code

      UUID=d430b2fe-e5d9-4740-8dd8-ea1b48d7ee5f / ext4 noatime,nodiratime,discard,errors=remount-ro 0 1
      # swap was on /dev/sda5 during installation
      UUID=e8937a61-8c3d-41c4-b513-855caa99f211 none swap sw 0 0
      /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
      # >>> [openmediavault]
      /dev/disk/by-label/13-Series1-HDD /srv/dev-disk-by-label-13-Series1-HDD ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /dev/disk/by-label/14-Series2-HDD /srv/dev-disk-by-label-14-Series2-HDD ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /dev/disk/by-label/15-Series3-HDD /srv/dev-disk-by-label-15-Series3-HDD ext4 defaults,nofail,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl 0 2
      /srv/dev-disk-by-label-15-Series3-HDD:/srv/dev-disk-by-label-14-Series2-HDD:/srv/dev-disk-by-label-13-Series1-HDD /srv/9368d400-3871-428a-b909-6cc9f251b578 fuse.mergerfs defaults,allow_other,direct_io,use_ino$
      /srv/9368d400-3871-428a-b909-6cc9f251b578/Series-Pool /export/Series-Pool none bind,nofail 0 0
      # <<< [openmediavault]
      output of the commands if it helps->
      grep mergerfs /etc/fstab

      Source Code

      root@OMV-2:~# grep mergerfs /etc/fstab
      /srv/dev-disk-by-label-15-Series3-HDD:/srv/dev-disk-by-label-14-Series2-HDD:/srv/dev-disk-by-label-13-Series1-HDD /srv/9368d400-3871-428a-b909-6cc9f251b578 fuse.mergerfs defaults,allow_other,direct_io,use_ino,noforget,category.create=eplfs,minfreespace=40G,x-systemd.requires=/srv/dev-disk-by-label-15-Series3-HDD,x-systemd.requires=/srv/dev-disk-by-label-14-Series2-HDD,x-systemd.requires=/srv/dev-disk-by-label-13-Series1-HDD 0 0
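      As a side note, the first field of that mergerfs line is the colon-separated list of branches in the pool, and splitting it out makes it easier to eyeball. A small sketch, with the value copied from the fstab entry above:

```shell
#!/bin/sh
# Sketch: print the mergerfs branches (the colon-separated source field of
# the fstab entry) one per line. The value is copied from the entry above.
branches='/srv/dev-disk-by-label-15-Series3-HDD:/srv/dev-disk-by-label-14-Series2-HDD:/srv/dev-disk-by-label-13-Series1-HDD'

# tr turns every ':' separator into a newline, one branch path per line.
printf '%s\n' "$branches" | tr ':' '\n'
```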
      dpkg -l | grep -e openm - merg

      Source Code

      root@OMV-2:~# dpkg -l | grep -e openm - merg
      (standard input):ii openmediavault 5.1.1-1 all openmediavault - The open network attached storage solution
      (standard input):ii openmediavault-clamav 5.0.1-1 all OpenMediaVault ClamAV plugin
      (standard input):ii openmediavault-keyring 1.0 all GnuPG archive keys of the OpenMediaVault archive
      (standard input):ii openmediavault-omvextrasorg 5.1.6 all OMV-Extras.org Package Repositories for OpenMediaVault
      (standard input):ii openmediavault-snapraid 5.0.1 all snapraid plugin for OpenMediaVault.
      (standard input):ii openmediavault-unionfilesystems 5.0.2 all Union filesystems plugin for OpenMediaVault.
      grep: merg: No such file or directory

      I have to say that at first I did not see any fault. The fault started after I filled one HDD: suddenly I could not move/write any more files once I came down to the 40G minimum free space I had set. I have the setting "Existing path, least free space", but it did not continue on a new drive. This was working perfectly on OMV4 but not here in OMV5.
    • nightrider wrote:

      I have the setting "Existing path, least free space", but it did not continue on a new drive. This was working perfectly on OMV4 but not here in OMV5.
      Does the path exist on every drive in the pool?
      --
      Google is your friend and Bob's your uncle!

      RAID - Its ability to disappoint is inversely proportional to the user's understanding of it.

      ASRock Rack C2550D4I - 16GB CC - Silverstone DS380
    • nightrider wrote:

      This was working perfectly on OMV4 but not here in OMV5.
      The version of mergerfs is exactly the same. I really don't know why people are having problems with the OMV 5.x version of the plugin; I can't replicate them. Salt is mounting the drive, but it is strange that all of my tests work (yes, I actually use this plugin).
    • nightrider wrote:

      I have to say that at first I did not see any fault. The fault started after I filled one HDD: suddenly I could not move/write any more files once I came down to the 40G minimum free space I had set. I have the setting "Existing path, least free space", but it did not continue on a new drive. This was working perfectly on OMV4 but not here in OMV5.
      I found someone who has the same problem: michaelxander.com/diy-nas/. They have the following explanation:

      https://michaelxander.com/diy-nas/ wrote:

      Note: The default policy epmfs doesn’t fit me, because since v2.25 path preserving policies will no longer fall back to non-path preserving policies. This means once you run out of space on drives that have the
      relative path, adding a new file will fail (out of space error).
      So, would it be a good idea to make "Most free space" the default option instead of "Existing path, most free space"?
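      For what it's worth, the create policy is just one token in the mergerfs mount options, so the difference between the two behaviours is a one-word change in the generated fstab line. A sketch of the relevant fragment (the surrounding options are abbreviated with "..."; the plugin regenerates this line, so the policy should normally be changed through the plugin's dropdown, not by editing fstab by hand):

```
# Path preserving: fails with "out of space" once every branch holding the
# relative path is down to minfreespace.
...,category.create=epmfs,minfreespace=4G,...

# Non-path preserving: new files go to the branch with the most free space.
...,category.create=mfs,minfreespace=4G,...
```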

      ryecoaaron wrote:

      And what are you using to change the theme? I'm wondering if that is corrupting something as well.
      I'm using a plugin in my browser. The omv theme is not changed. The plugin is called Dark Reader.

      Here is the output from: grep mergerfs /etc/fstab.

      Source Code

      root@openmediavault:~# grep mergerfs /etc/fstab
      /srv/dev-disk-by-label-NASHDD1:/srv/dev-disk-by-label-NASHDD6:/srv/dev-disk-by-label-NASHDD2:/srv/dev-disk-by-label-NASHDD4 /srv/4ee40582-7941-4584-bda8-c0a8a91c0b7b fuse.mergerfs defaults,allow_other,direct_io,use_ino,category.create=epmfs,minfreespace=4G,x-systemd.requires=/srv/dev-disk-by-label-NASHDD1,x-systemd.requires=/srv/dev-disk-by-label-NASHDD6,x-systemd.requires=/srv/dev-disk-by-label-NASHDD2,x-systemd.requires=/srv/dev-disk-by-label-NASHDD4 0 0

      Here is the output from: dpkg -l | grep -e openm -e merg.

      Source Code

      root@openmediavault:~# dpkg -l | grep -e openm -e merg
      ii mergerfs 2.28.2~debian-buster amd64 another FUSE union filesystem
      ii openmediavault 5.1.1-1 all openmediavault - The open network attached storage solution
      ii openmediavault-apttool 3.6 all apt tool plugin for OpenMediaVault.
      ii openmediavault-diskstats 5.0.1-1 all OpenMediaVault disk monitoring plugin
      ii openmediavault-flashmemory 5.0.1 all folder2ram plugin for OpenMediaVault
      ii openmediavault-keyring 1.0 all GnuPG archive keys of the OpenMediaVault archive
      ii openmediavault-omvextrasorg 5.1.6 all OMV-Extras.org Package Repositories for OpenMediaVault
      ii openmediavault-resetperms 5.0 all Reset Permissions
      ii openmediavault-snapraid 5.0.1 all snapraid plugin for OpenMediaVault.
      ii openmediavault-unionfilesystems 5.0.2 all Union filesystems plugin for OpenMediaVault.
      ii openmediavault-wol 3.4.2 all OpenMediaVault WOL plugin

      The post was edited 1 time, last by dropje ().

    • All of that looks correct. I really don't know what to change since I can't replicate it.
    • dropje wrote:

      I found someone who has the same problem: michaelxander.com/diy-nas/. They have the following explanation:
      Thank you very much, that solved my problem. It does not work with the "Create policy" set to "Existing path, least free space", but it does work with the "Least free space" option and the same relative path created on all disks.


      ryecoaaron wrote:

      I can't replicate these problems.
      The error "Couldn't extract an UUID from the provided path" that showed up for me after creating the NFS share, whenever I clicked back to the "Shared folders" tab, disappeared after a reboot.

      Everything seems to work for now, except what I mentioned about "Existing path", which does not seem to work in the UnionFS plugin. It is the same issue mentioned in the article dropje linked to above.

      Now I am curious why that is. If it is as he mentions in the article, why is it an option in the plugin?

      The post was edited 2 times, last by nightrider ().

    • nightrider wrote:

      Now I am curious why that is. If it is as he mentions in the article, why is it an option in the plugin?
      The plugin includes all of the options (policies) from here - github.com/trapexit/mergerfs. If one doesn't work or isn't working how you expect, perhaps file an issue on the mergerfs GitHub. Otherwise, maybe @trapexit (the author of mergerfs) can explain more about this policy (epmfs) and why it wouldn't write to another disk if that disk didn't have any folders.
    • Path preservation is working just fine. What you described is exactly the behavior expected.

      I'm not sure I understand what you expect. Path preservation preserves the paths. As the docs mention it will only choose from branches where the relative base path of the thing being worked on exists. If you only have 1 drive with that 1 directory then it will only ever consider that drive. If it runs out of space you should rightly get out of space errors. The "change" referenced on that website was a bug fix. If you "fall back" to another drive... what's the point of path preservation in the first place? If you don't care what drive your data is on why would you reduce your speed and reliability by putting everything on one drive while the others sit around unused?

      Path preservation is a niche feature for people who want to *manually* manage their drives but have them appear as one pool.

      github.com/trapexit/mergerfs#path-preservation
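      For completeness, mergerfs also exposes its runtime configuration through extended attributes on a control file at `<mountpoint>/.mergerfs`, so a create policy can be inspected and switched without remounting. A sketch, assuming the pool mount point quoted earlier in this thread; note the change is runtime-only, and fstab still decides what the next mount uses:

```shell
#!/bin/sh
# Sketch: read and change the mergerfs create policy at runtime via xattrs
# on the .mergerfs control file. The pool path is the mount point from this
# thread; substitute your own.
pool="/srv/4ee40582-7941-4584-bda8-c0a8a91c0b7b"

if [ -e "$pool/.mergerfs" ]; then
    # Show the current create policy, then switch it to mfs.
    getfattr -n user.mergerfs.category.create "$pool/.mergerfs"
    setfattr -n user.mergerfs.category.create -v mfs "$pool/.mergerfs"
else
    echo "no mergerfs control file at $pool/.mergerfs"
fi
```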

      The post was edited 1 time, last by trapexit ().

    • trapexit wrote:

      As the docs mention it will only choose from branches where the relative base path of the thing being worked on exists. If you only have 1 drive with that 1 directory then it will only ever consider that drive. If it runs out of space you should rightly get out of space errors.
      I did set up the pool with 2 drives and it did not work for me with "Existing path, least free space"; I could not continue writing files to that pool. I did try with the same relative path on both drives; the only thing was that one of the drives was full and had already reached the minimum free space. I do not know, maybe it was only a temporary bug.

      trapexit wrote:

      If you don't care what drive your data is on why would you reduce your speed and reliability by putting everything on one drive while the others sit around unused?
      I like the idea of having all the files relatively in order on the drives; I just add a new drive when the pool starts to fill up. For my use case, OMV in conjunction with MergerFS and SnapRAID is the perfect solution. I store my files for long-term use: I write them once and then leave them there, and I do not have to spin up all the drives unless I need to access a specific file. Power saving and HDD endurance at its best.

      How much performance do I really lose by using it the way I do? I mean, I write it once and then leave it there.
    • nightrider wrote:

      I did set up the pool with 2 drives and it did not work for me with "Existing path, least free space"; I could not continue writing files to that pool. I did try with the same relative path on both drives; the only thing was that one of the drives was full and had already reached the minimum free space. I do not know, maybe it was only a temporary bug.

      Did you create the *full* relative paths on both drives and try creating something *in* that directory?

      nightrider wrote:

      I like the idea of having all the files relatively in order on the drives; I just add a new drive when the pool starts to fill up.

      Then why not just use **ff** with an appropriate **minfreespace**? Also, filling up drives one at a time (if you've not already filled N-1 drives in your collection) would be wasteful and increases data risk or time to recover. That may not be your situation, but if you have less than minfreespace on multiple drives, mfs or lus are generally the best options.


      What does "relatively in order" mean? Order of when you created them? I'm only familiar with two reasons for that: 1) someone wants to be able to hot-remove the drive so they can take it elsewhere, and wants sets of data on that drive, like taking a drive on vacation for watching a whole TV show (a super niche case given most would stream or would transfer to another device); and 2) you don't have any backup and would rather lose everything written around the same time than the random'ish layout of mfs, lus, etc.

      Besides those niche cases, using ff in a general setup only has negatives.

      nightrider wrote:

      I store my files for long-term use: I write them once and then leave them there, and I do not have to spin up all the drives unless I need to access a specific file. Power saving and HDD endurance at its best.

      Most everyone using mergerfs has that pattern and most use mfs or lus create & mkdir policies.

      You're mistaken thinking drives won't spin up or that the endurance will be the best. Drives will spin up if data from them is necessary. That includes any metadata. The OS does cache some data but on the whole it will often not have the data needed when a directory listing happens or whatnot. Many pieces of software in the media space must scan the file to pull metadata, file format, etc. so any scan they do will spin up all drives even if the metadata was cached. mergerfs can't control how software behaves. It can't know what it is looking for. If "foo" happens to be on the last drive and the app is searching for "foo" then every drive before the last will have to be active to give the kernel the entries for them. It's extraordinarily difficult to limit spinup if you have any sort of activity. Torrents, Plex, etc. If you stage your data you can limit it but I find few can do so practically.

      As for endurance, there is very mixed data on how power cycling affects drives. I've seen some reports that said it had no obvious effect and others that said it significantly impacted them. If I had to bet, I'd say the latter is more likely to be true because, like starting a car, starting a drive is a more jolting and energy-intensive process. The physical and electrical stress is higher. It's not uncommon to fear restarting a system when the drive is acting up, due to the possibility of it not starting back up.


      nightrider wrote:

      How much performance do I really lose by using it the way I do? I mean, I write it once and then leave it there.

      I'm not sure what your usage patterns are, so it's impossible to comment. You're only as fast as the slowest part. If you colocate data on a drive and then access that data in parallel, that will perform worse than if the data were on 2 different drives.

      The post was edited 1 time, last by trapexit ().

    • trapexit wrote:

      Did you create the *full* relative paths on both drives and try creating something *in* that directory?
      Yes, the same folder name on both drives. This has worked on my old OMV3 in the past. I am in the process of moving over to a new server build running OMV in VMs on Proxmox with HBA passthrough. OMV is only a NAS for me; docker apps I run on other VMs instead.


      trapexit wrote:

      What does "relatively in order" mean? Order of when you created them?
      Yes, exactly. I like to have the option of hot swap, though not exactly for the reason you described. I understand your point about the increased risk of data loss, but this is why we have SnapRAID: to reduce that risk.

      By using "mfs" or "lus", let's say for example you have 3 drives in a pool filled to 70% altogether and you add 1 more drive; this will make all the new data be written to drive number 4 only, until it also reaches 70%? Then you have the same problem of the most recent data being written on 1 drive, am I right? Or does MergerFS have an option to balance out the data, i.e. move some of it over to the new drive so that all 4 drives end up with the same even (lower) percentage of written data? If that is possible, it would be a very cool and powerful feature.

      You are right, though, that if several users access the pool, then I really see the benefit of balancing out all the data for increased speed. Maybe I will use the "mfs" option in the future, especially if there were a feature for balancing out the data as described above when adding new drives to the pool.


      trapexit wrote:

      You're mistaken thinking drives won't spin up or that the endurance will be the best. Drives will spin up if data from them is necessary.
      I do understand all of that. For sure, accessing data and spinning up the drives several times a day will only harm the HDDs more than keeping them spinning. In my use case, long periods (weeks) can go by before I need to access a file on that pool. This is why OMV with MergerFS and SnapRAID is the perfect solution for me compared to using FreeNAS with ZFS: with MergerFS you can always add more drives to a pool, which you cannot do with ZFS.

      Another thought I have: would it not be possible in the future to add an SSD cache to MergerFS that keeps all the metadata and so on, so that not all the drives need to spin up when an app like Plex or Kodi scans the drives for new files? That would be an even more powerful feature.

      I have to thank you for your thorough answers here; I appreciate it. Please continue to improve MergerFS, I very much appreciate your work on this app. (If there is anything more that can be improved, that is.) :)
    • There are the mergerfs tools which offer a tool to balance drives. You'd install the drive, run the balance tool, then use as normal. Or you use the rand policy.
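      The balance tool lives in the separate mergerfs-tools repo (github.com/trapexit/mergerfs-tools). A sketch of the workflow described above; the pool path is a placeholder, and the guard is only there so the snippet degrades gracefully where the tools aren't installed:

```shell
#!/bin/sh
# Sketch: after adding a new branch to the pool (via the plugin / fstab),
# redistribute existing files across branches with mergerfs.balance.
pool="/srv/pool"   # placeholder mergerfs mount point, not from this thread

if command -v mergerfs.balance >/dev/null 2>&1; then
    mergerfs.balance "$pool"
else
    echo "mergerfs.balance not installed (see github.com/trapexit/mergerfs-tools)"
fi
```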

      What data do you propose to cache on this SSD? Plex is not just reading filesystem info (stat and readdir, basically an **ls**); it's reading file data. Also... when would that data be cached? Would mergerfs or another tool have to read the entire pool and try to figure out which data might be needed? Which few blocks of every file *might* hold the metadata some random app will want? If it caches on demand, then the drives would still need to be spun up for mergerfs to know if new data was available. Plex scanning is configurable; mine is once a day. If mergerfs' cache timeout were shorter than that, it'd spin up the drives more than once a day.

      This problem is not really solvable. People underestimate what's going on and what is possible. Many people work on the drives under mergerfs directly; it's not practical to watch those behaviors, so caches would get out of sync easily. The best way to keep drives from spinning is to not use them.