mergerFS balancing (error code 23)

  • I set up my NAS with mergerFS & SnapRAID in the following way:

    • 3 data drives (2 + 2 + 3 TB = 7 TB mergerFS pool - initially with "existing path - mfs" & 400 GB "min free space")
    • 1 parity drive (16 TB)

    I then started to move files onto the NAS. Unsurprisingly, it filled the 3rd drive first, but it did so all the way until only 399.69 GB were left (while the other drives were still empty) - and then it stopped with a "drive full" error on the Win10 system I was copying the files from.


    After a bit of searching the forum, I changed my policy to "most free space" and continued to copy files ... et voilà - drives 1 & 2 were now filling in tandem. So far so good.


    But now I have very different "fill levels":

    ... so I figured I'd "balance the pool", but I get the following error (code 23) and nothing changes:


    Looks to me like it's only looking at the first two (already balanced) drives and concluding there is nothing to do ...


    So I am wondering ...

    • have I somehow "locked in" the data (now on D3) by initially creating them with the epmfs policy?
      • (and is the only way to "rebalance" to recreate the data with the new policy?)
    • and/or am I doing something simple wrong (and I'm just not seeing it) with the balance?
    • Official Post

    have I somehow "locked in" the data (now on D3) by initially creating them with the epmfs policy?

    No. Once you changed the policy, mergerfs has no idea how the files were put on the underlying filesystem.
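
    If you want to double-check which create policy is actually active, mergerfs exposes its runtime settings as xattrs on a hidden control file in the pool - if I remember the xattr name correctly, something like this (pool path is just an example, getfattr comes from the attr package):

    getfattr -n user.mergerfs.category.create /srv/mergerfs/pool1/.mergerfs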


    (and is the only way to "rebalance" to recreate the data with the new policy?)

    No. You could just move files from the full disk to the others outside of mergerfs.


    and/or am I doing something simple wrong (and I'm just not seeing it) with the balance?

    I don't know enough about the balance utility. It is possible the plugin is doing something wrong with it, but it is simply calling mergerfs.balance.
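
    If you want to see what the tool itself does, you could run it manually from the CLI against the pool's mount point (the path below is just an example - use your own pool path) and watch its output:

    mergerfs.balance /srv/mergerfs/pool1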

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks ryecoaaron ... re: the first two points, I thought so - but it's good to have it confirmed.


    But that means I am none the wiser so far ... I don't know enough about mergerFS either (obviously) or what balancing does in particular. I was going for "intuitive" - but apparently that only gets me this far ... ;)


    I am using the "built-in" balancing function:


    Maybe someone here has a better grip on what's supposed to happen? ... should I run a cron job instead? - Maybe with some kind of option/flag set?

    • Official Post

    Maybe someone here has a better grip on what's supposed to happen? ... should I run a cron job instead? - Maybe with some kind of option/flag set?

    In my much smaller tests, it does balance the drives. I honestly don't think it needs to be run all the time. Personally, I think it is easier to just mv the files from one drive to the other, but I thought it sounded like a nice option for the plugin. I have never used the utility on my own pools.


  • Hmmm ... I wonder if somehow D3 (the "full" one) is prevented from partaking in balancing because it has reached its "usage threshold" (that I set to 85%) - but then again, that should only be a warning triggering a notification (and turning the bar red).


    I'll change the threshold to 90% and see if that makes a difference.

    Quote

    I think it is easier to just mv the files from one drive to the other ...

    Sure - but if I have a "balance tool", I'd like for it to do what it suggests it does - if not, I tend to think it's user error (you know ... "PEBKAC") and dig in.


    Also, I'm not sure what exactly you mean by "moving files" - the shared folder sits on the mergerFS pool (not an individual drive), so there isn't really a "from" or "to" for me to move anything between (isn't that the whole point of a unionFS?). Unless you mean "delete some files and then put them back" - in which case, under the new policy (mfs), they should go where they're supposed to.


    Happy to try and report back ...

    • Official Post

    but if I have a "balance tool", I'd like for it to do what it suggests it does - if not, I tend to think it's user error (you know ... "PEBKAC") and dig in.

    If I had time and a large test machine to try this on, I would. Think of it as an experimental new feature. Help from users is needed for some things.


    I'm not sure what exactly you mean by "moving files" - the shared folder sits on the mergerFS pool (not an individual drive), so there isn't really a "from" or "to" for me to move anything between (isn't that the whole point of a unionFS?). Unless you mean "delete some files and then put them back" - in which case, under the new policy (mfs), they should go where they're supposed to.

    A pool is just an imaginary filesystem on top of other filesystems. So, you can still perform actions on those underlying filesystems just as if the pool weren't there. If I had the following paths in my pool


    /srv/dev-disk-by-uuid-2d3093f0-1829-4616-9822-c4db793cdf17

    /srv/dev-disk-by-uuid-cb9a4ff5-d81c-42b6-8f18-984d935a7837

    /srv/dev-disk-by-uuid-f4986fb7-838b-41ab-bc30-cfd22e29a4d1


    and my pool is /srv/mergerfs/pool1, I can still


    mv /srv/dev-disk-by-uuid-2d3093f0-1829-4616-9822-c4db793cdf17/some/folder /srv/dev-disk-by-uuid-cb9a4ff5-d81c-42b6-8f18-984d935a7837/some/


    mergerfs will not know this move command is happening and it will cause no problems.
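
    Since the branches are separate filesystems, mv will copy the data and then delete the source. If you prefer something you can resume after an interruption, an rsync along these lines (same example paths as above) does the same job:

    rsync -a --remove-source-files /srv/dev-disk-by-uuid-2d3093f0-1829-4616-9822-c4db793cdf17/some/folder/ /srv/dev-disk-by-uuid-cb9a4ff5-d81c-42b6-8f18-984d935a7837/some/folder/

    --remove-source-files only deletes the files, so you may be left with empty directories on the source drive to clean up afterwards.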


  • Got it, that makes sense ... and is a bit terrifying. I come from the UI world of Windows, so using uuid-level mv of files - from within a mergerFS pool that I barely comprehend - is a bit daunting ... but I'll give it a try and report back (it will probably take a while, though).


    I tried a few things earlier (hoping to stay within the UI of OMV and Windows):

    • raised the "usage warning threshold" to 90% ... did nothing other than turn the red bar green (no surprise, it's for notifications only)
    • reduced "min free space" for the pool to 300 GB ... also did nothing (by which I mean running "balance" still had the same results)

    I also deleted 500 GB from the share (files that should reside on D3), but "file system" still shows no change (399.83 GB available) ... even though I also went into SMB/CIFS and manually deleted the share's recycle bin.


    ... so I'm a tad stumped by that last one - but I'll be travelling next week - maybe things will "magically" work when I return ... ;)
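
    (Once I'm back I might also check drive D3 directly on the CLI with something like df -h /srv/dev-disk-by-uuid-<D3> - path is just a placeholder - to see whether the deleted 500 GB actually got freed underneath the pool.)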

    • Official Post

    I come from the UI world of Windows, so using uuid-level mv of files - from within a mergerFS pool that I barely comprehend - is a bit daunting ...

    You can use a dual-pane file manager on the CLI called "Midnight Commander". You can install it with

    apt install mc

    and start it with

    mc


    You get two panes. In one pane you can navigate to the first filesystem and in the other to the second, and then move folders from left to right (or vice versa). Of course you can do many more things with it.

    • Official Post

    You could also use WinSCP to drag and drop over ssh in a Windows GUI.


  • okay - update time ... (spoiler: it worked)

    I re-did the whole "experiment" ... the set-up is the same as before: 3 data drives (2 + 2 + 3 TB = 7 TB mergerFS pool) & 1 parity drive (16 TB)


    mergerfs is set up with "existing path - mfs" ... I tried a more aggressive 1750 GB "min free space", but this parameter just doesn't seem to do anything with this setting. Again, the 3rd drive filled up (past the min free space point) - even across separate copy events. This time I didn't mess with the mergerfs settings (kept "existing path - mfs"), but went straight to balancing the pool within the UI:

    ... and lo & behold, it actually started assessing and moving content between ALL three drives this time. The result (after an overnight run):

    before:    after:

    Balancing works as advertised ... :love: - it really DID balance those drives to within 2% (drives c, d, e) with no errors:


    So I am wondering if changing the pool's setting (from "existing path" to "most free space") somehow made mergerfs ignore the third drive (or treat it differently) last time ...? (I know it should be agnostic as to how the data got there, but it kept trying to balance drives c & d only - and was done immediately, plus the error 23 message.)

  • Thanks gderf ... I thought I did - at least enough to have certain expectations. Based on the documentation (which I did consult on this matter), I was under the impression that I could expect the following:


    "most free space": The directories/files would get created/copied to the drive with the most free space (drive d in my case). It would continue to do this until the free space there is less than the free space on another drive - at which point it would start filling up the drives equally (as they all have the same amount of space free)


    "existing path - mfs": The directories/files would get created/copied to the drive with the most free space (drive d in my case). It would continue to do this pretty much indefinitely. (I thought that this however would get reassessed for each "new path" (which I took to mean either new folder or copy event) ... the intent being to NOT spread files everywhere, but to have some level of cohesion.


    The "min free space" limit I took to act as something akin to an "override" in that if that limit would be reached (regardless of policy), the system would start with another drive - until all are below that limit, at which point it would resume with the 1st drive. (granted this part I not all the sure on)


    I have questioned these assumptions, but nothing else makes sense to me, as these policies are set at the pool level and not at the shared-folder level.


    What am I getting wrong here?


    Either way, none of that explains the previous balancing behavior ... does it?

  • The part you are missing is that with any existing path policy, the paths must already exist on a drive if data is to be written into those paths.


    If the paths do not already exist they will not be created, and the data will have to be written to another drive within the pool where the paths do already exist.
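
    For example (paths are placeholders): if /srv/dev-disk-by-uuid-D3/media/movies is the only branch where that directory tree exists, an existing-path policy will keep writing new files for that tree to D3. Pre-creating the directory on another branch is enough to make it a candidate:

    mkdir -p /srv/dev-disk-by-uuid-D1/media/movies

    After that, "existing path - mfs" can pick whichever branch holding the path has the most free space.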

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Okay ... but how does the absence of a path - any path for that matter - factor into this? (maybe I'm still not getting it)


    I am starting with empty drives - so the first thing copied would create A path ... which subsequently would be followed, correct? That IS what I am observing. Only it doesn't "stop". Are you saying that because that is the ONLY path that exists, it is the only one that will be written to?


    In that case, wouldn't the "min free space" limit kick in at some point and divert the files to another drive in the pool? (otherwise, what's the point of a pool to begin with if only one drive gets used indefinitely)


    And ... how would another path ever get created? (I assumed that on each new folder creation or separate copy event this would get re-assessed).


    (and why would that affect balancing the pool?)
