Mergerfs question

  • Hi!
    I have installed the latest OMV and configured SnapRAID and MergerFS.
    I have an external hard disk on which I keep all of my backup data.
    I started by creating a MergerFS pool and an rsync module pointing to a location on the pool.
    I then copied the files with rsync from the external backup disk to the pool.
    The pool consists of four hard disks, but it looks like mostly one of them was used for the files: one drive is now almost full while the others are not yet half full.
    Is this how it should be?
    Is there any command to correct the imbalance and spread the files more evenly between the drives?


    Here is also a notice I get about this issue:


    Resource limit matched Service fs_srv_dev-disk-by-label-DISK1



    Date: Sat, 12 Aug 2017 05:40:22
    Action: alert
    Host: nas.dasilva.network
    Description: space usage 99.4% matches resource limit [space usage>85.0%]


    Regards,
    Cesar da Silva

  • When you defined the pool originally, what Create Policy did you specify?

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Thank you for your quick reply.
    The Create Policy I selected was, and still is, "Existing path, most free space". These are the mount options: defaults,allow_other,direct_io,use_ino.
    I thought this would spread the load between the disks, since it would always use the hard disk with the most free space at the moment.
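
    If I understand the mergerfs documentation correctly, that choice maps to the epmfs ("existing path, most free space") create policy, and the underlying /etc/fstab entry would look roughly like this (the /srv/pool mount point below is only an illustration, not my actual path):


    /srv/dev-disk-by-label-DISK1:/srv/dev-disk-by-label-DISK2:/srv/dev-disk-by-label-DISK3:/srv/dev-disk-by-label-DISK4 /srv/pool fuse.mergerfs defaults,allow_other,direct_io,use_ino,category.create=epmfs 0 0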

  • Do you have identical existing relative paths on all the disks?

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • No, I don't. Here is an ls of all of the drives, including the pool, for reference.


    cesar@nas:/srv$ ls dev-disk-by-label-DISK1
    lost+found Media Nextcloud snapraid.parity Virtualbox
    cesar@nas:/srv$ ls dev-disk-by-label-DISK2
    Logs Media snapraid.conf.bak snapraid.content.lock
    lost+found snapraid.conf snapraid.content Virtualbox
    cesar@nas:/srv$ ls dev-disk-by-label-DISK3
    lost+found Media snapraid.conf snapraid.conf.bak snapraid.content
    cesar@nas:/srv$ ls dev-disk-by-label-DISK4
    Files lost+found OMV-Backup snapraid.conf snapraid.content tmp
    cesar@nas:/srv$ ls c908b475-65da-4325-ac82-1a48b8dcd4c6/
    Files Media snapraid.conf snapraid.content.lock Virtualbox
    Logs Nextcloud snapraid.conf.bak snapraid.parity
    lost+found OMV-Backup snapraid.content tmp

  • The way I understand it is that if your pool is set to "existing path, most free space", files written to an existing path common to all pooled drives will be written to the drive with the most free space first.


    As the drive with the most free space begins to fill, one of the others will have more free space, and the writes will go there until another drive in the pool is the one that has the most free space. This would spread the files over the pooled drives such that they would all have about the same amount of free space.


    But this would not work unless the files being written were destined to a common path. And you say you don't have such a common path scenario.


    Maybe you should try another pool policy, such as "Most Free Space" instead.


    Of course, I could be understanding mergerfs incorrectly, and if so the above is all wrong and worthless.
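
    One way to check how the files actually ended up distributed is to compare free space on each branch directly, for example:


    df -h /srv/dev-disk-by-label-DISK1 /srv/dev-disk-by-label-DISK2 /srv/dev-disk-by-label-DISK3 /srv/dev-disk-by-label-DISK4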

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Maybe you should try another pool policy, such as "Most Free Space" instead.



    Of course, I could be understanding mergerfs incorrectly, and if so the above is all wrong and worthless.


    I'm no expert, but I think you are exactly right, and I chose "most free space" for the same reason when I set up my pool. Also, I THINK this will improve SnapRAID's performance as well. During a sync, data can be pulled from all populated drives and the speed will increase as a result. If most of your data is limited to one drive, most of your sync process will be limited by the speed of that single drive.


    Ex: 1 drive syncs at 110 MB/s, 5 drives sync at 5 x 110 MB/s = 550 MB/s

  • Thank you for your replies.
    I was thinking along the same lines, that it should save a file to the drive with the most space available at that time.
    Is there a way now to have the files distributed evenly between the drives automatically, or do I need to recreate the pool from scratch?

  • You should be able to edit the pool and just change its policy. It won't do anything to the actual files.
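
    If you prefer the command line, mergerfs also documents a runtime configuration interface through extended attributes on a hidden control file inside the mount point. A rough sketch, assuming the pool is mounted at /srv/pool (an example path) and you want the mfs create policy; check the mergerfs README for your version before relying on this:


    setfattr -n user.mergerfs.category.create -v mfs /srv/pool/.mergerfs
    getfattr -n user.mergerfs.category.create /srv/pool/.mergerfs


    A change made this way does not survive a remount, so the policy still has to be updated in the pool's configuration (or fstab entry) as well.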

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Thank you for your replies.
    I was thinking along the same lines, that it should save a file to the drive with the most space available at that time.
    Is there a way now to have the files distributed evenly between the drives automatically, or do I need to recreate the pool from scratch?


    If you mean you want it to rebalance the existing files, I don't believe that MergerFS can do that. I'm no expert, though. Maybe look at the MergerFS documentation.

    • Official post

    If you mean you want it to rebalance the existing files, I don't believe that MergerFS can do that. I'm no expert, though. Maybe look at the MergerFS documentation.

    There is a balance utility (I think) available on the mergerfs GitHub, but GitHub is down right now.
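
    If it is the tool I am thinking of, it is mergerfs.balance from the mergerfs-tools repository, and the invocation would be something like this (the path is just an example, and I am going from memory, so check the tool's own help first):


    mergerfs.balance /srv/pool


    As I understand it, it moves files between the underlying branches until their used percentages are roughly equal.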

    omv 7.0-32 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.9 | compose 7.0.9 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hi, I have just experienced the same result, and clearly a lack of knowledge was the cause. Maybe a popup box in the browser that explains each policy when you choose one would help?


    I do however have an additional problem.


    My OMV-to-OMV copy over SMB failed when a 40GB file was the last one written and it landed on a drive with only 11GB left. The copy was attempted anyway even though the file could not fit.
    Is this also user error, caused by the minimum free space setting?
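
    From what I can see in the mergerfs documentation there is a minfreespace option (4G by default, if I read it correctly), and mergerfs cannot know how large a file will become when it is created, so a branch with 11GB free still looks like a valid target for a new file. I am guessing the fix is to raise minfreespace above the largest file I expect to copy, roughly like this in the mount options (the 50G value is only an example):


    defaults,allow_other,direct_io,use_ino,minfreespace=50G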


    thanks in advance
    regards

    Fan of OMV, but not a fan of over-complication.
