Mergerfs seems to neglect one disk

  • Hi,


    I am using mergerfs with a pool of 4 disks, the policy "existing path, most free space", and the additional options "defaults,allow_other,direct_io,use_ino,ignorepponrename=true".
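
    Roughly, the pool is mounted with a line like the following (the branch paths below are just placeholders; the real line is generated by the OMV plugin):

        # "existing path, most free space" corresponds to mergerfs' epmfs create policy
        /srv/disk1:/srv/disk2:/srv/disk3:/srv/disk4 /srv/pool fuse.mergerfs defaults,allow_other,direct_io,use_ino,ignorepponrename=true,category.create=epmfs 0 0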


    Now three of the disks are 86% full; their usage history looks like the one in attachment "data4". But one disk is only 76% full; its usage history is in attachment "data1". As you can see, even when the usage of the other disks increased heavily last December, there is no real change for Disk 1.


    At first I expected the "existing path" policy to be the problem, but this is *not* the case: the directory where all the new files go is present on every disk.


    Any idea what could be the reason for this behaviour?

  • Hi trapexit,


    I finally managed to dig a bit deeper and found the following:


    When new files are created in the pool, they are never placed on Data1. Occasionally a new directory is created on Data1, but without the files it should contain; instead, the directory is also created on another drive in the pool (together with the files). So it is possible to write to the file system.


    I added some screenshots of the mounted files, the file systems, and the mergerfs configuration.


    Edit: I also added a screenshot of a newly created directory: The directory is created on both Data1 and Data4, but the files are written to Data4 (although it has less free space).
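
    In case it helps, this is the kind of check I used to see which branch actually holds a file. mergerfs exposes this through extended attributes ("/srv/pool" and the file name are only placeholders for my setup):

        # ask mergerfs which branch a file was actually placed on
        getfattr -n user.mergerfs.basepath "/srv/pool/Media/News/example-file.mkv"
        # list every branch that contains the directory itself
        getfattr -n user.mergerfs.allpaths "/srv/pool/Media/News"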


    Does this behaviour make any sense to you?

    • Official post

    This is exactly what "existing path, most free space" is going to do. Read the policy section of the mergerfs wiki again. If you want mergerfs to write to the disk with the most free space, then you have to use a policy that allows that. But because you are using an existing-path policy, it is going to write to an existing path first.
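
    If you really want new files to go to the disk with the most free space, regardless of where the directory already exists, the create policy has to be changed from epmfs to mfs. Something like this in the additional options should do it (check the policy section of the wiki before changing anything):

        # mfs = "most free space", no path preservation when creating files
        category.create=mfs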


  • But the path *does exist* on both volumes! If you look at the last screenshot: the directory written to is "Media/News". A new directory for the new file is created on Data1, but no file is written to it. Instead, the same directory is created on volume "Data4", where the files end up being stored.

    • Official post

    But the path *does exist* on both volumes!

    Seems you are irritated by my response. Remember that I can't see your whole system, and I know very well that epmfs works. So something is not meeting the criteria of epmfs. Does the entire path, including every subdirectory the file will be written into, exist? The path is so obfuscated in your screenshot that I didn't look at it much.
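
    A quick way to check, assuming branch mounts like /srv/dev-disk-by-label-DataN (adjust to your real paths): the directory has to exist on a branch before epmfs will even consider that branch.

        # the full path must already exist on Data1 for epmfs to write there
        ls -ld /srv/dev-disk-by-label-Data{1,2,3,4}/Media/News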


  • OK, while trying to reproduce the behaviour, I believe I finally found the cause: the "News" directory, where all the new files go, does exist on all file systems, but the "temp" directory, which is used for incomplete downloads, does *not* exist on "Data1". So the download starts on one of the other volumes, and after the download completes the file stays on that volume, even when it is moved to another directory (the move is just a rename on the same disk). So I guess you were right after all.
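
    For completeness: creating the missing directory directly on that branch (not through the pool, which would just pick a branch by policy again) lets epmfs consider Data1 as well. The path below is only a placeholder for my setup:

        # create the download temp directory on the branch where it was missing
        mkdir -p /srv/dev-disk-by-label-Data1/downloads/temp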


    Thanks again for your help!

  • macom

    Added the label "solved".
