Mergerfs question


    • Mergerfs question

      I have installed the latest OMV and configured SnapRAID and MergerFS.
      I have an external hard disk on which I keep all of my backup stuff.
      So I started by creating a MergerFS pool and an rsync module pointing to a location on the pool.
      I then copied the files with rsync from the external backup hard disk to the pool.
      I have four hard disks in the pool, but it looks like only one of them was used for most of the files: one drive is now almost full while the others are not yet half full.
      Is this how it should be?
      Is there any command to adjust the imbalance and spread the files more evenly between the drives?

      Here is also a notice I get about this issue:

      Resource limit matched Service fs_srv_dev-disk-by-label-DISK1

      Date: Sat, 12 Aug 2017 05:40:22
      Action: alert
      Description: space usage 99.4% matches resource limit [space usage>85.0%]

      Cesar da Silva
    • No, I don't. Here is an ls of all of the drives, including the pool, for reference.

      cesar@nas:/srv$ ls dev-disk-by-label-DISK1
      lost+found Media Nextcloud snapraid.parity Virtualbox
      cesar@nas:/srv$ ls dev-disk-by-label-DISK2
      Logs Media snapraid.conf.bak snapraid.content.lock
      lost+found snapraid.conf snapraid.content Virtualbox
      cesar@nas:/srv$ ls dev-disk-by-label-DISK3
      lost+found Media snapraid.conf snapraid.conf.bak snapraid.content
      cesar@nas:/srv$ ls dev-disk-by-label-DISK4
      Files lost+found OMV-Backup snapraid.conf snapraid.content tmp
      cesar@nas:/srv$ ls c908b475-65da-4325-ac82-1a48b8dcd4c6/
      Files Media snapraid.conf snapraid.content.lock Virtualbox
      Logs Nextcloud snapraid.conf.bak snapraid.parity
      lost+found OMV-Backup snapraid.content tmp
    • New

      The way I understand it: if your pool's create policy is "existing path, most free space", files written to an existing path that is common to all pooled drives will be written to the drive with the most free space first.

      As the drive with the most free space fills up, another drive in the pool eventually becomes the one with the most free space, and new writes go there instead. Over time this spreads the files across the pooled drives so that they all end up with roughly the same amount of free space.

      But this would not work unless the files being written were destined for a path common to all drives, and you say you don't have such a common-path scenario.

      Maybe you should try another pool policy, such as "Most Free Space" instead.

      Of course, I could be understanding mergerfs incorrectly, and if so, the above is all wrong and worthless.
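
      For what it's worth, mergerfs exposes its settings at runtime through extended attributes on a control file inside the pool, so the create policy can be inspected and changed without remounting. A sketch, using the pool path from the ls output above and assuming mergerfs was built with the runtime xattr interface enabled (the OMV plugin normally manages these settings through its GUI, so treat this as illustrative only):

      ```shell
      # The pool mount point from the thread above:
      POOL=/srv/c908b475-65da-4325-ac82-1a48b8dcd4c6

      # Read the current create policy from the .mergerfs control file:
      getfattr -n user.mergerfs.category.create "$POOL/.mergerfs"

      # Switch it to "most free space" at runtime:
      setfattr -n user.mergerfs.category.create -v mfs "$POOL/.mergerfs"
      ```

      Note that this only affects where newly created files land; files already written stay on whichever drive they are on.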
      OMV 3.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
    • New

      gderf wrote:

      Maybe you should try another pool policy, such as "Most Free Space" instead.

      Of course, I could be understanding mergerfs incorrectly, and if so, the above is all wrong and worthless.

      I'm no expert, but I think you are exactly right, and I chose "most free space" for the same reason when I set up my pool. I also THINK this will improve SnapRAID's performance. During a sync, data can be pulled from all populated drives in parallel, so the speed increases. If most of your data sits on one drive, most of your sync process will be limited by the speed of that single drive.

      Ex: 1 drive syncs at 110 MB/s; 5 drives sync at up to 5 x 110 MB/s = 550 MB/s
    • New

      thunderlight1 wrote:

      Thank you for your replies.
      I was also thinking the same way you are, that it should save a file to the drive which has the most space available at that time.
      Is there a way now to have the files being distributed evenly between the drives automatically or do I need to recreate the pool from scratch?

      If you mean you want it to rebalance the existing files, I don't believe MergerFS can do that on its own. I'm no expert, though. Maybe look at the MergerFS documentation.
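
      That said, people do rebalance manually by moving directories between the underlying branch mounts: the pool's contents don't change, because mergerfs shows the union of all branches. (The third-party mergerfs-tools project also ships a mergerfs.balance script that automates this.) A sketch of the manual approach, using temporary directories as stand-ins for the real branch paths such as /srv/dev-disk-by-label-DISK1 and /srv/dev-disk-by-label-DISK3, so nothing here touches a live pool:

      ```shell
      # Stand-ins for two branch mounts (on a real system these would be
      # the /srv/dev-disk-by-label-* paths, never the pool mount itself):
      SRC=$(mktemp -d)   # the nearly full branch
      DST=$(mktemp -d)   # a branch with plenty of free space
      mkdir -p "$SRC/Media"
      echo demo > "$SRC/Media/movie.mkv"

      # Copy, then delete each source file once it has transferred:
      rsync -a --remove-source-files "$SRC/Media/" "$DST/Media/"

      # rsync leaves empty directories behind on the source; prune them:
      find "$SRC/Media" -type d -empty -delete

      ls "$DST/Media"   # -> movie.mkv
      ```

      On a real pool you would do this with the pool idle, moving whole directories from the full branch to an emptier one; afterwards the files appear unchanged in the pool but the space usage is spread out.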