UnionFS with Policy existing path most free space doesn't work correctly

    • OMV 4.x


    • UnionFS with Policy existing path most free space doesn't work correctly

      Hello guys,

I have installed a new OMV system with 3 data disks, the SnapRAID plugin, and UnionFS for pooling with the "existing path, most free space" (epmfs) policy, but I think this policy does not work correctly.

All data is stored on just one disk in my pool, in my case sdc with 1.7 TB free space, even though the sdb drive currently has 2 TB free.

Shouldn't every copy job to the data disks be controlled by the policy?

      Thx
    • Just saying that it does not work is not enough information to get any meaningful help.

      You need to describe how your file systems are structured, specifically what directories exist on each drive and what the destination of the write was when it did not work as you expected.
      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
Ok, I'll try it with more details.

In addition to an SSD for the OMV system, I have four HDDs in operation: three for data and one for parity.

/dev/sda 500 GB (Data)
/dev/sdb 2 TB (Data)
/dev/sdc 2 TB (Data)

/dev/sdd 120 GB (OMV System)
/dev/sde 4 TB (Parity)

I formatted the data drives as EXT4 file systems and pooled them via the UnionFS plugin.



I have created a file share for the pool,



as well as an SMB share.



After I mounted the pool in Windows, I created various folders on it and filled them with data in several copy jobs. But it was always the same hard drive that got written to, not, as the policy promises, the one with the most free space.

The directories exist only on the pool, not on each individual drive!
That happens because of the "existing path" part of the policy. A newly created folder is initially saved on only one drive of your data pool (on one of the drives with the most free space). Therefore your files will only be copied to the drive where the folder already exists.

You have to choose the plain "most free space" policy if you want the behavior you described.
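The difference can be sketched with a toy model (a rough sketch only; the free-space figures and branch paths below are assumptions based on this thread, not real mount points). Under plain mfs, the create target depends only on free space, never on which branches already contain the directory:

```shell
# Toy sketch of the "most free space" (mfs) create policy, assuming
# two hypothetical branches with free-space figures from this thread.
free_sdb=2000   # GB free on the sdb branch (assumption)
free_sdc=1700   # GB free on the sdc branch (assumption)

# mfs: always create on the branch with the most free space,
# regardless of which branches already hold the target directory.
if [ "$free_sdb" -ge "$free_sdc" ]; then
    target=/srv/dev-disk-by-label-sdb
else
    target=/srv/dev-disk-by-label-sdc
fi
echo "mfs would create new files under: $target"
```

With epmfs, the same comparison is made only among branches where the path already exists, which is why a folder born on one drive keeps attracting all writes.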
That was the crucial clue. Many thanks. "Existing path"... I changed it to plain most free space and behold, the data is written to the other disk. It no longer matters whether the directories already exist or are newly created. What is the recommendation? Better to fill one disk first, or always write to the one with the most free space?
    • matthias80 wrote:

That was the crucial clue. Many thanks. "Existing path"... I changed it to plain most free space and behold, the data is written to the other disk. It no longer matters whether the directories already exist or are newly created. What is the recommendation? Better to fill one disk first, or always write to the one with the most free space?
      This is the reason I asked you to provide the directory structure of your pooled drives.

      If you have not yet read the documentation for mergerfs, you should. It answers many, if not all questions.

      github.com/trapexit/mergerfs
      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
The first recommendation would be to have one or more directories off the root of your drives. Just writing a lot of files into the root is not a good idea.

Here's how I have mine set up. I do not use the Union Filesystems plugin here; rather, I set it up manually in /etc/fstab outside the openmediavault section.

      I have four data drives, labeled d1-d4.

      Each drive has a directory below the root like this with subdirectories:

\
multimedia-content-d1
    movies
    music
    tv-series

\
multimedia-content-d2
    movies
    music
    tv-series

\
multimedia-content-d3
    movies
    music
    tv-series

\
multimedia-content-d4
    movies
    music
    tv-series

      I have three entries in /etc/fstab for mergerfs:

      # >>> [sftp-mergerfs]
      /srv/*/multimedia-content-d*/movies /srv/dev-disk-by-label-d1/sftp/outgoing/movies fuse.mergerfs defaults,category.create=eplfs,minfreespace=100G,allow_other,fsname=mergerfs-movies 0 0
      /srv/*/multimedia-content-d*/music /srv/dev-disk-by-label-d1/sftp/outgoing/music fuse.mergerfs defaults,category.create=eplfs,minfreespace=100G,allow_other,fsname=mergerfs-music 0 0
      /srv/*/multimedia-content-d*/tv-series /srv/dev-disk-by-label-d1/sftp/outgoing/tv-series fuse.mergerfs defaults,category.create=eplfs,minfreespace=100G,allow_other,fsname=mergerfs-tv-series 0 0
      # <<< [sftp-mergerfs]

      In the above, the *s are globs (wildcards).

      So each fstab entry applies to every drive mounted in /srv and all the similarly named multimedia-content folders within.
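The glob matching can be sketched against a miniature copy of that layout (the temp-directory paths here are invented for illustration; only the pattern shape comes from the fstab entries above):

```shell
# Recreate a miniature of the branch layout in a temp directory
# (illustrative paths only, not the real /srv mounts).
root=$(mktemp -d)
mkdir -p "$root"/d1/multimedia-content-d1/movies \
         "$root"/d2/multimedia-content-d2/movies

# The glob in the source field matches every branch that follows
# the naming scheme, so one entry covers all drives at once:
echo "$root"/*/multimedia-content-d*/movies
```

Adding a new drive with the same directory naming makes it join the pool on the next mount, with no fstab change needed.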

      The mergerfs pool directory is the /sftp/outgoing directory on d1 which has movies, music, and tv-series directories like this:

\
sftp
    incoming
    outgoing
        movies
        music
        tv-series


      The policy is existing path least free space with a 100GB reserve on each drive.

The way it works: when I write folders with files to /srv/dev-disk-by-label-d1/sftp/outgoing/movies, mergerfs determines which drive in the pool has the least free space and, if there is enough room left on that drive for the folder and its files, writes them to that drive.

      Eventually that drive will fill up (but still have 100GB of free space on it), then another drive will fill up and so on until they are all full (except for the 100GB remaining free space on each one).
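That fill-one-drive-at-a-time behavior can be modeled in a few lines (a toy sketch; the drive names and free-space figures are invented for illustration): skip any branch at or below the reserve, then pick the least-free survivor.

```shell
# Toy model of "least free space" selection with a 100 GB reserve,
# assuming four hypothetical branches (figures in GB are made up).
minfree=100
best=""
best_free=999999
for entry in d1:150 d2:400 d3:90 d4:2000; do
    drive=${entry%%:*}
    free=${entry##*:}
    # Skip branches at or below the minfreespace reserve...
    [ "$free" -le "$minfree" ] && continue
    # ...and among the rest, pick the one with the LEAST free space.
    if [ "$free" -lt "$best_free" ]; then
        best=$drive
        best_free=$free
    fi
done
echo "eplfs would write to: $best"
```

Here d3 is excluded by the reserve and d1, the fullest remaining branch, keeps taking writes until it too drops to the reserve.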

      But I will not get to that point because I will have added another new empty drive with the same directory structure to the pool before I run entirely out of space.

I suppose it wouldn't make any practical difference to use an epmfs policy instead; I just prefer to do it the other way.

That's the way I do it here, but as I said, I did not use the Union File Systems plugin to set it up. It may or may not have worked with my existing directory structure, but I'll admit I didn't try. I already had a working hand-configured AUFS setup left over when I upgraded to OMV 3, but AUFS is not supported there, so I hand-converted it to mergerfs.
      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
    • matthias80 wrote:

      Yes... Read the Docs... I know...

What's your recommendation for the policy?

I use both mergerfs and SnapRAID. For me, the MFS policy is ideal. Your syncs and scrubs will be faster overall if the data is spread as evenly as possible between the drives. I do not create any directories on specific drives; I just use the root of the pool with directories as follows:

      /srv/poolmountpathgoeshere/
      • Storage
        • Backups

        • Misc

        • Torrents

        • Usenet
      • Media
        • !Unsorted

        • Books

        • Comics

        • Movies

        • Movies Kids

        • Music

        • Music Playlists

        • TV Shows

        • TV Shows Kids


My directories are all capitalized because I transferred everything from a Windows server setup, which drives me a little crazy sometimes. I don't do anything special in fstab; I simply let the UnionFS plugin generate the proper entry after I have created the pool in the plugin.


    • flvinny521 wrote:

      matthias80 wrote:

      Yes... Read the Docs... I know...

What's your recommendation for the policy?
      I use both mergerfs and SnapRAID. For me, the MFS policy is ideal. Your syncs and scrubs will be faster overall if the data is spread as evenly between the drives as possible.

      Doesn't any advantage of the data being spread as evenly as possible end as soon as you add a new empty disk into the array?
      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
    • gderf wrote:

      Doesn't any advantage of the data being spread as evenly as possible end as soon as you add a new empty disk into the array?

Yes, because then all new data goes to the new drive and syncs run only at the speed of that single drive (I am sure you know this; just typing it out to work through it in my own head). However, I only add 8 TB drives at this point, and when I get a new one, I use the mergerfs-tools rebalancing utility to move some of the data from the old drives onto the new one.
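The goal of that rebalance can be sketched numerically (a toy model; the free-space figures are invented, and the actual file moves are done by the mergerfs.balance script from mergerfs-tools, not by this snippet): after an empty drive joins, evening things out means driving every branch toward the average free space.

```shell
# Toy sketch of a rebalance target: two nearly full branches plus one
# new empty 8 TB drive (free space in GB; figures are made up).
total=0
for free in 100 120 8000; do
    total=$((total + free))
done
# A rebalance moves files until each branch is near this average:
echo "target free space per branch: $((total / 3)) GB"
```

Once free space is roughly even again, SnapRAID syncs can read from all data drives in parallel instead of hammering the single new one.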


    • sno0k wrote:

That happens because of the "existing path" part of the policy. A newly created folder is initially saved on only one drive of your data pool (on one of the drives with the most free space). Therefore your files will only be copied to the drive where the folder already exists.

You have to choose the plain "most free space" policy if you want the behavior you described.
Thank you sno0k, that is exactly what I needed!

My setup: two 4 TB HDDs. The first is full of media/movies (only 2 GB of free space left) and the second has 4 TB free.

I did notice somewhat slower transfer speed from the laptop to OMV, but it does the job.
EDIT: After a few hours, the transfer speed from laptop to OMV is back to normal.
