Disk space issue with mergerfs and snapraid

  • Hope this is the right section to post this....


    My old server build with OMV 5 seemed to go off without a hitch, but I've recently decided to migrate (most of) the data from it to a new custom build as my daily driver and use the old one as a backup. In doing so, I've started fresh with the new build on OMV 6, which immediately meant using mergerfs as opposed to the UnionFS plugin (not a huge difference, honestly). In setting all of this up I tried to replicate my old setup as best I could. I have 3 10TB drives set up for data and 1 10TB drive for parity, and I've been using rsync for the last few days to move just my media volume over.


    It's gone fine, but today it hit about 10TB and it seems like the data hasn't been spread across the drives, and I'm getting out-of-space errors now. I'm rsync-ing over to the mergerfs pool on the new server, which is 27TB usable. If I SSH into the server and run ```df -H``` it shows the pool as 10TB used and 20TB available, but the other two data drives have hardly any data on them. It seems like it ran out of parity space and I don't know what to do now. All 4 drives are the same size, so why is it now stalling on this rsync and saying it's out of space?


    If I run a snapraid sync it says it can't complete because it's out of space as well.


    What should I do here?
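
    For anyone hitting the same symptom, it helps to compare the pool's free space with each underlying disk's free space; the mount paths below are just examples of what OMV typically uses, so substitute your own:

    Code
    df -h /srv/mergerfs/poolname        # the pooled filesystem (path is an example)
    df -h /srv/dev-disk-by-uuid-*       # each underlying data disk
    snapraid status                     # SnapRAID's own view of the array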

  • In OMV 5, the Union Filesystem feature was implemented via the mergerfs application. In OMV 6 it is the same; it is merely called what it is, mergerfs. There is and was no difference in the underlying implementation.


    The most common cause of what you are seeing, where one drive fills up while no data is written to the other drives in the pool, is a misunderstanding of the pool creation policy and its relationship to the directory structures on the disks.


    By far the most frequently seen configuration that produces this problem is a creation policy that uses an "existing path" specification without the appropriate paths already existing on all the disks you want to accept such data. Carefully review the creation policy choices and understand fully what they mean before selecting one. The mergerfs documentation explains this, and it is available for review by selecting the Documentation icon within the mergerfs plugin or by reading it directly here: https://github.com/trapexit/mergerfs
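
    As an illustration of the "existing path" behaviour (the branch paths and the Media directory here are made-up examples): with a policy such as epmfs (existing path, most free space), a new file is only placed on a branch whose filesystem already contains the file's parent directory, so that directory tree has to exist on every disk that should receive data.

    Code
    # Hypothetical branches; create the same top-level directory on each one
    mkdir -p /srv/dev-disk-by-uuid-AAAA/Media
    mkdir -p /srv/dev-disk-by-uuid-BBBB/Media
    mkdir -p /srv/dev-disk-by-uuid-CCCC/Media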


    Running out of parity space when running a SnapRAID sync can be caused by having identically sized data and parity disks and having one or more data disks that are 100% full with no reserved free space. The reason is that the parity disk must be slightly larger than the filesystem of the largest data disk. If your data disks have no reserved space and you fill them up, an identically sized parity disk will be slightly too small.


    The solution is to not fill your data disks to 100%, either by having a small amount of reserved space configured into their filesystems or by keeping track of the free space and ceasing writes when they get nearly full. However, you will want your parity disk filesystem configured with zero reserved space, and you should not write any files to it other than the parity file, the quota files, and possibly a SnapRAID content file.
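
    If the disks are formatted ext4, the reserved space described above can be inspected and adjusted with tune2fs; the device names below are placeholders for your actual data and parity partitions.

    Code
    # Show the current reserve (ext4)
    tune2fs -l /dev/sdb1 | grep -i 'reserved block count'
    # Keep a small reserve on each data disk (1% here)
    tune2fs -m 1 /dev/sdb1
    # Remove the reserve on the parity disk
    tune2fs -m 0 /dev/sde1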


    Although a mergerfs pool can be created with a minimum free space setting, this is only honored for writes that go through the pool mount point. It will not reserve any free space if you write files to a drive directly, outside the pool.
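
    For reference, that minimum free space setting maps to mergerfs's minfreespace option. A hypothetical fstab-style line (OMV generates its own entry; branch paths, pool mount point, and policy here are placeholders) might look like:

    Code
    /srv/dev-disk-by-uuid-AAAA:/srv/dev-disk-by-uuid-BBBB:/srv/dev-disk-by-uuid-CCCC /srv/mergerfs/pool fuse.mergerfs defaults,allow_other,category.create=mfs,minfreespace=15G 0 0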

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 6.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.


  • Thank you for this. I didn't realize the UnionFS plugin was just mergerfs under the hood, so that makes sense. I hadn't noticed any real difference in the UI other than the name.


    As big a deal as I made about making sure I used "Most Free Space" in my first setup, I somehow let it be "Existing Path - Most Free Space" this time, so I totally screwed up there. Thanks for pointing that out; I made the change and rebooted the system, and it seems to be working now. I do still have a 15GB minimum free space threshold on the drives so they won't get too full. However, I do like writing to the pool itself, and my containers are set up for that, so I want to keep all of that as streamlined as possible.


    I think this will resolve the issues though, thanks!

  • Another useful tip from the SnapRAID manual:


    "In Linux, to get more space for the parity, it's recommended to format the parity file-system with the -m 0 -T largefile4 options. Like:

    Code
    mkfs.ext4 -m 0 -T largefile4 DEVICE

    On an 8 TB disk you can save about 400 GB. This is also expected to be as fast as the default, if not faster."


    Therefore there is less need to worry about the data disks filling up, as the parity disk is effectively bigger than them (the default ext4 reserve is 5%, which on an 8 TB disk is about 400 GB).


    SnapRAID

    Inwin MS04 case with 315 W PSU

    ASUS Prime H310i-Plus R2.0 board

    Two port PCI-E SATA card

    16GB Kingston DDR4

    Intel Pentium Coffee Lake G5400 CPU

    Samsung Evo M.2 256GB OS drive

    4x4TB WD Red NAS drives + 1x4TB + 1x5TB Seagate drives - MergerFS pool

    Seagate 5TB USB drives - SnapRAID parity x 2

    • Official Post

    The DEVICE in that command is the partition, e.g. /dev/sdb1


    How to Create a New Ext4 File System (Partition) in Linux (www.tecmint.com): how to create a new ext4 filesystem (partition) using the parted command-line tool.

    mkfs(8) — util-linux — Debian bullseye — Debian Manpages
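
    Putting those two references together, a hypothetical sequence for preparing a fresh parity disk might look like this (double-check the device name first; /dev/sdX is a placeholder and these commands destroy any existing data on it):

    Code
    parted /dev/sdX mklabel gpt
    parted -a optimal /dev/sdX mkpart primary ext4 0% 100%
    mkfs.ext4 -m 0 -T largefile4 /dev/sdX1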

  • I installed OMV 6 and added 2 disks via the GUI: the system on /dev/sda1 and a "downloads" disk on /dev/sdb. When the system is rebooted they swap names: the system disk becomes sdb1 and the downloads disk becomes sda. How can I make them not change places? In OMV 6 the UUID is written to the disk by default.

    Sorry, English is not my native language; I'm learning.

    • Official Post

    How can I make them not change places?

    You cannot. It is not always predictable which device name (sda, sdb, etc) is assigned. That is why OMV mounts the filesystem by UUID.
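
    To see that in practice, the UUID stays the same even when the sdX name changes between boots; the values below are made-up examples:

    Code
    # Show each filesystem's UUID
    blkid /dev/sda1 /dev/sdb1
    # An fstab entry that mounts by UUID rather than by device name:
    # UUID=2f6d0c4e-9c1a-4b7e-8f3d-1a2b3c4d5e6f  /srv/dev-disk-by-uuid-2f6d0c4e-9c1a-4b7e-8f3d-1a2b3c4d5e6f  ext4  defaults,nofail  0  2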

  • It's about convenience. I wanted the absolute path of the mount point to always be tied to the device name, and for that name not to change when the system is rebooted.

  • It's about convenience. I wanted the absolute path of the mount point to always be tied to the device name, and for that name not to change when the system is rebooted.

    This is not possible.

    Linux assigns /dev/sd* names on a first-come, first-served basis as it boots.

    Sometimes one drive is recognized before another and gets an earlier letter.


    This created problems in the past, for the same reason you are experiencing:

    Sometimes a drive is sda, sometimes it's sdb, and so on.


    Once drives started being recognized (and mounted) by their UUID, that problem went away.

    The mount point of a drive is always based on its UUID, regardless of its sdX name.


    Use it or lose it.

  • Thanks, I understand you.

    Also, can you tell me the command with the options for formatting the disk to save 400GB on an 8TB disk? I saw it somewhere and can't find it now.

    Also, can you tell me the command with the options for formatting the disk to save 400GB on an 8TB disk? I saw it somewhere and can't find it now.

    Code
    mkfs.ext4 -m 0 -T largefile4 DEVICE

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 6.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.
