Error adding Unionfs

  • Hello, I'm trying to set up the SnapRAID and Union Filesystem combination.
    As a guide I'm using the video from Techno Dad Life called: "Snapraid and Unionfs: Advanced Array Options on Openmediavault (Better than ZFS and Unraid)".
    So here is my problem: when I add a filesystem to UnionFS, press Save, press Apply, and confirm the changes, I get the following error: (for some reason I can't turn the highlighting off)


    Does anyone know the problem and how I can fix it?
    OMV version -> 5.1.1-1 (Usul)
    Kernel version -> 5.3.0-0bpo.2-amd64



    Many thanks in advance

    • Official Post

    What kind of filesystem? What are your options? What version of the plugin? I'm not able to replicate this.

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thank you for your quick response.


    I'm using version 5.0.2

    After I press Save and confirm, I get the previous error message.
    Is there more info that I can provide to troubleshoot?

    • Official Post

    Is there more info that I can provide to troubleshoot?

    That screenshot looks fine. The output of sudo blkid might help.


  • Here is the output of sudo blkid:

    Code
    root@openmediavault:~# sudo blkid
    /dev/sdc1: LABEL="NASHDD2" UUID="222f773c-0ac3-4ea6-b138-61f043a2daa3" TYPE="ext4" PARTUUID="b9939e7a-1468-4ce0-806a-26dc1b0c9c6d"
    /dev/sdd1: UUID="a3dfdae2-eaff-42bf-8cec-c0d43c66ef82" TYPE="ext4" PARTUUID="3f1c896d-01"
    /dev/sda1: LABEL="NASHDD1" UUID="42a8c83a-8d30-4df4-bd00-cafb4c42b942" TYPE="ext4" PARTUUID="f733dbb7-d669-47a4-abf9-df9713dd92c1"
    /dev/sdg1: LABEL="NASHDD5" UUID="d5e66d91-492f-4a65-a3b0-56be024fdfc2" TYPE="ext4" PARTUUID="6b21772a-b20a-49ab-8a6f-3778a92394f5"
    /dev/sde1: LABEL="NASHDD3" UUID="70033ccc-0cc1-4579-b269-78e3217a600f" TYPE="ext4" PARTUUID="fb790f36-9f7f-4873-a479-387c7155ac1a"
    /dev/sdf1: LABEL="NASHDD4" UUID="d18ecf86-ef05-45b5-aa89-3ebffec83d80" TYPE="ext4" PARTUUID="8b938ac1-d573-4e9b-9211-4fb08d122256"
    /dev/sdb1: LABEL="NASHDD6" UUID="f5f61b72-bee9-4585-bbfb-d97d92c9b86b" TYPE="ext4" PARTUUID="0ff69ec2-66e0-4c84-814d-87cd8bf7375d"
    /dev/sdh1: LABEL="NASSSD1" UUID="925f0651-dbfa-4a22-81a6-70c609df42e1" TYPE="ext4" PARTUUID="bb85bb81-4fd3-4c38-922e-c8f6399dff1b"
    • Official Post

    If you disable monitoring in the Monitoring tab, do you still get any errors?


  • Nope, I still get the same error. But after the error the filesystem appears to be working (see picture below).

    But the media filesystem that UnionFS supposedly created doesn't exist.

    When I go back to UnionFS to edit the media filesystem, I get the following error:


    Code
    Error #0:
    OMV\Config\DatabaseException: Failed to execute XPath query '/config/services/unionfilesystems/filesystem[uuid='37a13d33-a232-4b0f-be67-ab3883abc3e7']'. in /usr/share/php/openmediavault/config/database.inc:78
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/unionfilesystems.inc(166): OMV\Config\Database->get('conf.service.un...', '37a13d33-a232-4...')
    #1 [internal function]: OMV\Engined\Rpc\UnionFilesystems->get(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(123): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod('get', Array, Array)
    #4 /usr/sbin/omv-engined(537): OMV\Rpc\Rpc::call('UnionFilesystem...', 'get', Array, Array, 1)
    #5 {main}

    After the error, the edit window comes up and the name is gone as well as the devices that were selected (see picture below).


    I hope this information is of some help.
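
    As an extra data point, a quick check I could run (just a sketch, using the UUID from the error message) is to see whether that pool UUID exists in the OMV database at all:

    Code
    # /etc/openmediavault/config.xml is OMV's configuration database;
    # a count of 0 would mean the XPath query in the error above has nothing to match
    grep -c '37a13d33-a232-4b0f-be67-ab3883abc3e7' /etc/openmediavault/config.xml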

    • Official Post

    Something still isn't right. Post the output of:


    grep mergerfs /etc/fstab
    dpkg -l | grep -e openm -e merg


    And what are you using to change the theme? I'm wondering if that is corrupting something as well.


    Edited once, last by ryecoaaron ()

  • Hi,


    I was just searching for a similar error with the UnionFS plugin. I can add HDDs and create a pool, but after I create a shared folder, I start to get this error. Every time I click on the "Shared folders" tab, this error shows.

    I cannot see the created shared folder as long as the UnionFS pool is there. When I remove the pool, the shared folder shows up again.


    My fstab->

    Output of the commands, in case it helps ->
    grep mergerfs /etc/fstab

    Code
    root@OMV-2:~# grep mergerfs /etc/fstab
    /srv/dev-disk-by-label-15-Series3-HDD:/srv/dev-disk-by-label-14-Series2-HDD:/srv/dev-disk-by-label-13-Series1-HDD		/srv/9368d400-3871-428a-b909-6cc9f251b578	fuse.mergerfs	defaults,allow_other,direct_io,use_ino,noforget,category.create=eplfs,minfreespace=40G,x-systemd.requires=/srv/dev-disk-by-label-15-Series3-HDD,x-systemd.requires=/srv/dev-disk-by-label-14-Series2-HDD,x-systemd.requires=/srv/dev-disk-by-label-13-Series1-HDD	0 0

    dpkg -l | grep -e openm - merg

    Code
    root@OMV-2:~# dpkg -l | grep -e openm - merg
    (standard input):ii  openmediavault                  5.1.1-1                             all          openmediavault - The open network attached storage solution
    (standard input):ii  openmediavault-clamav           5.0.1-1                             all          OpenMediaVault ClamAV plugin
    (standard input):ii  openmediavault-keyring          1.0                                 all          GnuPG archive keys of the OpenMediaVault archive
    (standard input):ii  openmediavault-omvextrasorg     5.1.6                               all          OMV-Extras.org Package Repositories for OpenMediaVault
    (standard input):ii  openmediavault-snapraid         5.0.1                               all          snapraid plugin for OpenMediaVault.
    (standard input):ii  openmediavault-unionfilesystems 5.0.2                               all          Union filesystems plugin for OpenMediaVault.
    grep: merg: No such file or directory


    I have to say that at first I did not see any fault; it started to come up after I filled one HDD and suddenly I could not move/write any more files once I got down to the 40G minimum free space I had set. I have the "Existing path, least free space" setting, but it did not continue on a new drive. This was working perfectly on OMV4 but not here in OMV5.

  • I have the "Existing path, least free space" setting, but it did not continue on a new drive. This was working perfectly on OMV4 but not here in OMV5.

    Does the path exist on every drive in the pool?

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

    • Official Post

    This was working perfectly on OMV4 but not here in OMV5.

    The version of mergerfs is exactly the same. I really don't know why people are having problems with the OMV 5.x version of the plugin. I can't replicate these problems. It is salt mounting the drive, but it's strange that all of my tests work (yes, I actually use this plugin).


  • I have to say that at first I did not see any fault; it started to come up after I filled one HDD and suddenly I could not move/write any more files once I got down to the 40G minimum free space I had set. I have the "Existing path, least free space" setting, but it did not continue on a new drive. This was working perfectly on OMV4 but not here in OMV5.

    I found someone who has the same problem: https://michaelxander.com/diy-nas/. They have the following explanation:

    Note: The default policy epmfs doesn’t fit me, because since v2.25 path preserving policies will no longer fall back to non-path preserving policies. This means once you run out of space on drives that have the
    relative path, adding a new file will fail (out of space error).

    So, is it a good idea to make "Most free space" the default option instead of "Existing path, most free space"?


    And what are you using to change the theme? I'm wondering if that is corrupting something as well.

    I'm using a browser extension called Dark Reader; the OMV theme itself is not changed.


    Here is the output from: grep mergerfs /etc/fstab.

    Code
    root@openmediavault:~# grep mergerfs /etc/fstab
    /srv/dev-disk-by-label-NASHDD1:/srv/dev-disk-by-label-NASHDD6:/srv/dev-disk-by-label-NASHDD2:/srv/dev-disk-by-label-NASHDD4             /srv/4ee40582-7941-4584-bda8-c0a8a91c0b7b  fuse.mergerfs   defaults,allow_other,direct_io,use_ino,category.create=epmfs,minfreespace=4G,x-systemd.requires=/srv/dev-disk-by-label-NASHDD1,x-systemd.requires=/srv/dev-disk-by-label-NASHDD6,x-systemd.requires=/srv/dev-disk-by-label-NASHDD2,x-systemd.requires=/srv/dev-disk-by-label-NASHDD4  0 0
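
    For comparison, if I were to switch the create policy away from path preservation, I believe the same line would look roughly like this (just a sketch with category.create changed to mfs; normally the plugin rewrites fstab itself when the policy is changed in the UI):

    Code
    /srv/dev-disk-by-label-NASHDD1:/srv/dev-disk-by-label-NASHDD6:/srv/dev-disk-by-label-NASHDD2:/srv/dev-disk-by-label-NASHDD4             /srv/4ee40582-7941-4584-bda8-c0a8a91c0b7b  fuse.mergerfs   defaults,allow_other,direct_io,use_ino,category.create=mfs,minfreespace=4G,x-systemd.requires=/srv/dev-disk-by-label-NASHDD1,x-systemd.requires=/srv/dev-disk-by-label-NASHDD6,x-systemd.requires=/srv/dev-disk-by-label-NASHDD2,x-systemd.requires=/srv/dev-disk-by-label-NASHDD4  0 0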


    Here is the output from: dpkg -l | grep -e openm -e merg.


    • Official Post

    All of that looks correct. I really don't know what to change since I can't replicate it.


  • I found someone who has the same problem: michaelxander.com/diy-nas/. They have the following explanation:

    Thank you very much, that solved my problem. It does not work with "Create policy" set to "Existing path, least free space", but it does work with the "Least free space" option and with the same relative path created on all disks.



    I can't replicate these problems.

    The error "Couldn't extract an UUID from the provided path" that showed up for me after created the NFS share when clicking back to "Shared folders" tab, disappeared after a reboot.


    Everything seems to work for now, except what I mentioned about "Existing path", which does not seem to work in the UnionFS plugin. It's the same issue mentioned in the article dropje linked to above.


    Now I am curious why that is. If it is as he mentions in the article, why is it an option in the plugin?

    • Official Post

    Now I am curious why that is. If it is as he mentions in the article, why is it an option in the plugin?

    The plugin includes all of the options (policies) from here - https://github.com/trapexit/mergerfs. If one doesn't work or isn't working the way you expect, perhaps file an issue on the mergerfs GitHub. Otherwise, maybe @trapexit (author of mergerfs) can explain more about this policy (epmfs) and why it wouldn't write to another disk if that disk didn't have any folders.


  • Path preservation is working just fine. What you described is exactly the behavior expected.


    I'm not sure I understand what you expect. Path preservation preserves the paths. As the docs mention it will only choose from branches where the relative base path of the thing being worked on exists. If you only have 1 drive with that 1 directory then it will only ever consider that drive. If it runs out of space you should rightly get out of space errors. The "change" referenced on that website was a bug fix. If you "fall back" to another drive... what's the point of path preservation in the first place? If you don't care what drive your data is on why would you reduce your speed and reliability by putting everything on one drive while the others sit around unused?


    Path preservation is a niche feature for people who want to *manually* manage their drives but have them appear as one pool.


    https://github.com/trapexit/mergerfs#path-preservation
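
    To make that concrete, a minimal sketch with made-up branch directories /srv/disk1 and /srv/disk2 pooled with category.create=epmfs:

    Code
    # "media/movies" exists only on disk1, so epmfs will only ever create new files there
    mkdir -p /srv/disk1/media/movies
    # once disk1 drops below minfreespace, creates in the pool fail with an out-of-space error;
    # creating the same relative path on disk2 makes disk2 a valid candidate again
    mkdir -p /srv/disk2/media/movies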

  • As the docs mention it will only choose from branches where the relative base path of the thing being worked on exists. If you only have 1 drive with that 1 directory then it will only ever consider that drive. If it runs out of space you should rightly get out of space errors.

    I did set up the pool with 2 drives and it did not work for me with "Existing path, least free space"; I could not continue writing files to that pool. I did try with the same relative path on both drives; the only thing was that one of the drives was full and had already reached the minimum free space. I do not know, maybe it was only a temporary bug.


    If you don't care what drive your data is on why would you reduce your speed and reliability by putting everything on one drive while the others sit around unused?

    I like the idea of having all the files relatively in order on the drives; I just add a new drive when the pool starts to fill up. For my use case, OMV in conjunction with MergerFS and SnapRAID is the perfect solution. I store my files for long-term use; I write them once and then leave them there, and I do not have to spin up all the drives unless I need to access a specific file. Power saving and HDD endurance at its best.


    How much performance do I really lose by using it like I do? I mean, I write it once and then leave it there.

  • Quote from nightrider

    I did set up the pool with 2 drives and it did not work for me with "Existing path, least free space"; I could not continue writing files to that pool. I did try with the same relative path on both drives; the only thing was that one of the drives was full and had already reached the minimum free space. I do not know, maybe it was only a temporary bug.


    Did you create the *full* relative paths on both drives and try creating something *in* that directory?


    Quote from nightrider

    I like the idea of having all the files relatively in order on the drives; I just add a new drive when the pool starts to fill up.


    Then why not just use **ff** with an appropriate **minfreespace**? Also... filling up drives one at a time, if you've not already filled N-1 drives in your collection, would be wasteful and increases data risk or time to recover. That may not be your situation, but if you have less than minfreespace on multiple drives, mfs or lus are generally the best option.
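
    As a rough sketch of what that looks like in the mount options (the 40G value is only an example):

    Code
    # "first found": new files go to the first branch, in branch order,
    # that still has at least minfreespace available
    category.create=ff,minfreespace=40G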



    What does "relatively in order" mean? Order of when you created them? I'm only familiar with two reasons for that 1) someone wants to be able to hot remove the drive so they can take it elsewhere and want sets of data on that drive. Like taking a drive on vacation for watching a whole TV show. That's a super niche case given most would stream or would transfer to another device. and 2) You don't have any backup and would rather lose everything written around the same time rather than the random'ish layout of mfs, lus, etc.


    Besides those niche cases, using ff in a general setup only has negatives.


    Quote from nightrider

    I store my files for long term use, I mean I write it once and then leave it there and I do not have to spin up all the drives unless I need to access that specific file. Power saving and HDD endurance at its best.


    Most everyone using mergerfs has that pattern and most use mfs or lus create & mkdir policies.


    You're mistaken thinking drives won't spin up or that the endurance will be the best. Drives will spin up if data from them is necessary. That includes any metadata. The OS does cache some data but on the whole it will often not have the data needed when a directory listing happens or whatnot. Many pieces of software in the media space must scan the file to pull metadata, file format, etc. so any scan they do will spin up all drives even if the metadata was cached. mergerfs can't control how software behaves. It can't know what it is looking for. If "foo" happens to be on the last drive and the app is searching for "foo" then every drive before the last will have to be active to give the kernel the entries for them. It's extraordinarily difficult to limit spinup if you have any sort of activity. Torrents, Plex, etc. If you stage your data you can limit it but I find few can do so practically.


    As for endurance, there is very mixed data on how power cycling affects drives. I've seen some reports that said it had no obvious effect and others that said it significantly impacted them. If I had to bet, I'd say the latter is more likely to be true because, like starting a car, the starting of a drive is a more jolting and energy-intensive process. The physical and electrical stress is higher. It's not uncommon to fear restarting a system when a drive is acting up due to the possibility of it not starting back up.



    Quote from nightrider

    How much performance do I really lose by using it like I do? I mean, I write it once and then leave it there.


    I'm not sure what your usage patterns are, so it's impossible to comment. You're only as fast as the slowest part. If you colocate data on a drive and then access that data in parallel, that will perform worse than if the data were on 2 different drives.

  • Did you create the *full* relative paths on both drives and try creating something *in* that directory?

    Yes, the same folder name on both drives. This has been working on my old OMV3 in the past. I am in the process of moving over to a new server build running OMV in VMs on Proxmox with HBA passthrough. OMV is only a NAS for me; Docker apps I run on other VMs instead.



    What does "relatively in order" mean? Order of when you created them?

    Yes, exactly, I like to have the option of hot swap, but not exactly for the reason you described. I understand your point about the increased risk of data loss, but this is why we have SnapRAID, to reduce that risk.


    By using "mfs" or "lus", let say for example, you have 3 drives in a pool, these 3 drives is filled to 70% all together and you add 1 more drive, this will make all the new data to be written to drive number 4 only until it also reach 70%? Then you have the same problem with most recent data written on 1 drive, am I right? Or does MergerFS have an option of balancing out the data like it moves it over to the new drive so you will have same even (lower) percentage written data on all 4 drives? If this is possible, that would be a very cool and powerful feature.


    You are right, though, that if several users access the pool, then I really see the benefit of balancing out all the data for increased speed. Maybe I will use this "mfs" option in the future, especially if there were a feature to balance out the data as described above when adding new drives to the pool.



    You're mistaken thinking drives won't spin up or that the endurance will be the best. Drives will spin up if data from them is necessary.

    I do understand all of that. For sure, accessing data and spinning up drives several times a day will harm the HDDs more than keeping them spinning. For my use case, it can be a very long time (weeks) until I need to access a file on that pool. This is why OMV with MergerFS and SnapRAID is the perfect solution for me compared to using FreeNAS with ZFS. With MergerFS you can always add more drives to a pool, which you cannot do with ZFS.
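
    For example, adding a drive to a pool is handled by the plugin in the UI, but as I understand it, by hand it would just be one more branch in the colon-separated list in fstab (paths and options below are made up):

    Code
    # hypothetical example: NEWDISK added to an existing two-disk pool
    /srv/dev-disk-by-label-NEWDISK:/srv/dev-disk-by-label-OLD1:/srv/dev-disk-by-label-OLD2	/srv/<pool-uuid>	fuse.mergerfs	defaults,allow_other,use_ino,category.create=mfs,minfreespace=40G	0 0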


    Another thought I have: would it not be possible in the future to add an SSD cache to MergerFS that keeps all the metadata and so on, so that not all the drives need to spin up when an app like Plex or Kodi scans for new files? That would be an even more powerful feature.


    I have to thank you for your thoroughly written answers here; I appreciate it. Please continue to improve MergerFS, I very much appreciate your work on this app. (If there is anything more that can be improved, that is) ... :)

  • There are the mergerfs-tools, which include a tool to balance drives. You'd install the drive, run the balance tool, then use the pool as normal. Or you could use the rand policy.
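
    A rough usage sketch (mergerfs.balance comes from the separate mergerfs-tools project; the mount point below is a placeholder):

    Code
    # https://github.com/trapexit/mergerfs-tools
    # add the new drive to the pool first, then redistribute existing files across branches
    mergerfs.balance /srv/<your-pool-mountpoint>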


    What data do you propose to cache on this SSD? Plex is not just reading filesystem info (stat and readdir, basically a **ls**). It's reading file data. Also... when would that data be cached? Would mergerfs or another tool have to read the entire pool and try to figure out which data might be needed? What few blocks of every file *might* just have the metadata some random app will want? If it caches on demand then the drives would still need to be spun up for mergerfs to know if new data was available. Plex scanning is configurable. Mine is once a day. If mergerfs' timeout was shorter than that then it'd spin up the drives more than once a day.


    This problem is not really solvable. People underestimate what's going on and what is possible. Many people work on drives under mergerfs directly. It's not practical to watch those behaviors so caches would get out of sync more easily. The best way to deal with keeping drives from spinning is to not use them.
