OMV, UnionFS, Snapraid

  • I've been struggling with my OMV install with UnionFS and SnapRAID. Honestly, I'm not even sure how to phrase some of my questions. Since I believe my issues revolve around UnionFS and SnapRAID, I decided to post this in the plugins forum...


    Setup:
    OMV 4, latest release.
    4x 4TB disks, which I pooled in UnionFS as a single volume, "media".
    Presently it has 7.99 TB free out of almost 11.
    I also have SnapRAID, with 3 drives as content/data and 1 as parity.


    I have a few shared folders set up on the device "media", which is the name of my drive pool in UnionFS.



    Problems:
    Under the Union Filesystem tab, it only lists 3 drives instead of 4 for some reason. First, since the 4th drive is parity for SnapRAID, should it have been in the pool to begin with?


    If I try to re-add the 4th drive to the pool, will it erase the information already on the pool?


    That leads me to my second problem, which I discovered last night while trying to move a lot of files around. It appears that although I selected the "existing path, most free space" policy, it is only storing data on 1 of the 3 drives; I got a disk-full error during an rsync. When I looked at the Filesystems tab, it showed 1 of the pool drives full and the other two with barely anything on them.


    I'm afraid to just go playing around with things, as I have already transferred a fair amount of data to the system. If I play with the pool, like trying to change the policy on it, will that lose data? Or if I break the pool itself and set up a new one, what will happen to the data stored on the volumes?


    Lastly, is there anything I need to do to stop using SnapRAID, besides disabling the plugin in OMV? Something is faulty with my configuration, as I can't scrub the disks, so I want to set it up again from scratch. Does anyone know of any guides or threads that describe the process of stopping it? I know there are content and .lock files on the drives when I go to /srv/... should I erase those manually from the command line, or will disabling the plugin remove them automatically?


    Thanks for any help from anyone...

    • Official post

    Under the Union Filesystem tab, it only lists 3 drives instead of 4 for some reason. First, since the 4th drive is parity for SnapRAID, should it have been in the pool to begin with?

    The parity drive should not be in the pool.


    If I try to re-add the 4th drive to the pool, will it erase the information already on the pool?

    No. Union filesystems work on top of existing filesystems and only tell the operating system where to read or write. They never delete files, format, or do anything else destructive unless told to, and those commands are passed through to the underlying filesystem. Read this - https://github.com/trapexit/mergerfs/blob/master/README.md
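
    As a minimal sketch of what that means in practice (the /srv/disk1, /srv/disk2 and /srv/media paths here are hypothetical placeholders for your real mount points):

    Code
    # Files live on the real branch filesystems...
    ls /srv/disk1/movies /srv/disk2/movies

    # ...and the pool is only a combined view of those branches.
    mergerfs -o defaults,allow_other /srv/disk1:/srv/disk2 /srv/media
    ls /srv/media/movies    # same files, nothing copied or moved

    # Unmounting the pool leaves every file exactly where it was.
    fusermount -u /srv/media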


    If I play with the pool, like trying to change the policy on it, will that lose data? Or if I break the pool itself and set up a new one, what will happen to the data stored on the volumes?

    As above, nothing will happen to the data.


    Lastly, is there anything I need to do to stop using SnapRAID, besides disabling the plugin in OMV? Something is faulty with my configuration, as I can't scrub the disks, so I want to set it up again from scratch. Does anyone know of any guides or threads that describe the process of stopping it? I know there are content and .lock files on the drives when I go to /srv/... should I erase those manually from the command line, or will disabling the plugin remove them automatically?

    snapraid isn't a running service. There is no need to "disable" snapraid. If you haven't scheduled the diff from the plugin, it won't do anything. Read this - https://www.snapraid.it/manual
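
    For reference, the kind of setup described above maps to a snapraid.conf along these lines (a sketch only; the paths are hypothetical and the OMV plugin generates its own version of this file):

    Code
    # /etc/snapraid.conf (sketch) - 3 data disks, 1 parity disk
    parity  /srv/disk4/snapraid.parity
    content /var/snapraid.content
    content /srv/disk1/snapraid.content
    content /srv/disk2/snapraid.content
    data d1 /srv/disk1
    data d2 /srv/disk2
    data d3 /srv/disk3

    # Nothing runs until you invoke it, e.g.:
    #   snapraid sync     # update parity
    #   snapraid scrub    # verify data against parity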

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I have a similar setup and a similar problem: all the data ends up on one drive.


    Did you find a setting that spreads the data across the drives?


    What I'm looking for is either a setting that fills a drive and then moves on to the next, or preferably one that spreads the data evenly across all drives. It doesn't matter which drive an individual file ends up on.

  • Thanks macon - can that be configured purely through the GUI? I don't want any SnapRAID - just the unified disks function.

    mergerfs is the name of the package used by the Unionfs plugin to pool disks.


    There is no requirement to use snapraid and mergerfs together.


    mergerfs can be configured on OMV using the Union File System plugin or you can configure it by hand if you wish.
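
    By hand it comes down to a single mount; for example (a sketch with hypothetical paths, using the "most free space" creation policy):

    Code
    # Pool two data drives into /srv/media; new files are created on
    # whichever branch currently has the most free space (mfs).
    mergerfs -o defaults,allow_other,category.create=mfs \
        /srv/dev-disk-by-label-disk1:/srv/dev-disk-by-label-disk2 /srv/media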


    mergerfs is well documented; see:


    Code
    https://github.com/trapexit/mergerfs

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

    Edited once, last by gderf ()

  • I know this is an old thread, but my question is similar (aside from updated software). I have OMV 5.6.2-1 on the latest Armbian 21.02.3 Buster with Linux 5.10.21-rockchip64. I built the system a few weeks back and loaded all media from various HDD LAN disks onto the new server.


    Current pertinent file structure from df -h:


    Code
    /dev/sda1  137G  4.7G  130G   4%  /                      (OS system on SSD)
    /dev/sda2   90G  952M   88G   2%  /data                  (2nd partition on SSD)
    /dev/sdb1  7.3T  5.6T  1.8T  77%  /srv/dev-disk-by-uuid  (parity disk)
    /dev/sdd1  7.3T  6.6T  748G  90%  /srv/dev-disk-by-uuid  (Data1 disk)
    /dev/sdc1  7.3T  674G  6.7T  10%  /srv/dev-disk-by-uuid  (Data2 disk)
    media:d     15T  7.3T  7.4T  50%  /srv/                  (UnionFS merger)


    SnapRAID sync, diff and scrub are working fine with no errors. As you can see from this, nearly all the media data is on "sdd" and not split between sdd1 & sdc1. I've set up UnionFS this way... which I thought was right, to share all "media" data across the two 8TB drives.



    Very noob questions:

    (1) Even though I downloaded the mergerfsfolders plugin, I did NOT do anything with this add-on (i.e. I've not "added" any drives)... do I need to, and if so, how?

    (2) Can or should I delete (start over) all the SnapRAID info and rebuild it the right way (hopefully without deleting data)?


    Thanks a bunch from a new OMV user.....

  • There are two different plugins available:


    a) OMV-unionfilesystems

    b) OMV-mergerfsfolders


    In your case you don't need b) if you want to pool complete drives.


    For the different mergerfs policies you may want to see the documentation.

    Or you may have a look here: UnionFS and file distribution

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

    Edited once, last by cabrio_leo ()

  • Yeah... I can see now what I did with how files are distributed across the two 8TB HDDs I have. cabrio_leo's link above in post #10 gave me perfect insight into how the various OMV UnionFS options build the "media" arrays. I did the typical noob thing of "ready-shoot-aim" before reading how the array build types vary.


    A few posts here (RE: UnionFS and file distribution) gave me the idea of creating root folder(s) on Data2 to match the file structure on Data1. One guy said to try this and see whether the same "existing path, most free space" policy will continue to build onto Data2 once Data1 reaches the set limit. Can anyone verify this?


    I guess plan B would be to restructure my files to put video on Data1 and everything else on Data2 as the root file structures. I'm really thinking long term, and I don't want to change the array to "most free space" and have files placed randomly across the 2 drives, which would complicate rebuilding if a drive fails. Tell me if I'm wrong in my thought process here!


    The root file structure from WinSCP looks like the attachment (obviously the Movies folder is the big data hog)... thanks

  • If you don't want the data randomly distributed between the disks (which is sort of what mergerfs is for), you can - like you described - just access the individual drives and their filesystems to copy whole folders there. I do that as well when I may later need the data separated without having to copy everything again.


    Still you'll be able to see the whole structure in the pool that mergerfs offers.

    That is the beauty of it - two or more completely separate and independent filesystems that can also be mounted and used as one at the same time. :)


    There's even an undocumented feature to automatically replicate physical hard drives if you keep beans in your case once free space reaches less than 5%. But only those damn Seagate drives, unfortunately.

  • I believe that to make all the data distributed evenly/balanced, we can make one drive out of three with the help of UnionFS using "existing path, most free space", and then create a shared folder on this merged drive. Then, technically, any data written into this folder would be spread evenly between the disks in the pool. SnapRAID would only take care of the individual disks in the pool. I made a block diagram of how I see it being done (attached).


    Question, which option is better for the recovery?

  • I believe that to make all the data distributed evenly/balanced, we can make one drive out of three with the help of UnionFS using "existing path, most free space", and then create a shared folder on this merged drive. Then, technically, any data written into this folder would be spread evenly between the disks in the pool.

    Many who have tried this without fully understanding the chosen creation policy, and thus failed to properly implement it, arrive back here when it doesn't work as expected.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

    Edited once, last by gderf ()

  • Many who have tried this without fully understanding the chosen creation policy, and thus failed to properly implement it, arrive back here when it doesn't work as expected.

    I am new to SnapRAID and UnionFS, so what would be the better policy choice in this case? I am trying to achieve something like RAID 5, where my media data is added to every so often. Would "most free space" be the best choice? One disadvantage I can think of here is that reading a file that was spread between 3 disks would make all of them spin.


    Thanks.

  • It's not about which policy is best. It's about not understanding a policy and not configuring things properly for it to work as expected. Unless you read the mergerfs documentation you will likely run into the same problems others do.


    Another misconception is that mergerfs will split a file across multiple disks. It won't.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Unless you read the mergerfs documentation you will likely run into the same problems others do.

    I've read the documentation, and I believe I grasp the concept now, but there is still some confusion.


    Another misconception is that mergerfs will split a file across multiple disks. It won't.

    This one is crystal clear now.


    So, I guess the benefit of using UnionFS is grouping data that is spread across multiple disks into one folder. A growing data collection will fill the drives in the pool correctly if your "existing path, most free space" policy is configured so that paths for your data folders exist on each disk.


    A. So, for example, if I want to add UnionFS to an existing set of 3 disks, with disk 1 holding 3 folders (movies, documents and backup), I will have to create empty duplicates of these 3 folders on the other 2 disks, then group all 3 disks into one with UnionFS and choose the "existing path, most free space" policy. Now, what happens if I decide to add a new folder, pictures? Do I have to add it manually to all 3 disks?


    B. On the other hand, if I set up the UnionFS plugin with the "most free space" method, it should distribute data across all three disks based on the most available space.


    So in the case of the first example (A), I guess I don't benefit from using UnionFS. I could stick to just SnapRAID with 3 data disks and 1 parity, and then split my shares between the 3 disks: movies on disk 1, documents on disk 2, etc.


    I need more clarity in understanding the different setup options. Is there a link to a good example of the different setups?


    Thanks,

  • For existing path policies to work, the paths must already exist on the drives you want the policy to write to. Paths that do not already exist will not be created automatically. This is a commonly seen misconfiguration/misunderstanding: the user has an existing path policy but didn't create the directory path on all the drives he wishes to be pool members. One disk fills up and no files are written to the other disks.
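
    A sketch of the preparation step that is usually missed (the branch paths and folder names are hypothetical): create the same directory tree on every drive that should be eligible for writes before relying on an ep* policy.

    Code
    # Create the same top-level folders on every branch so an
    # existing path (ep*) policy can write to any of them.
    for d in /srv/disk1 /srv/disk2 /srv/disk3; do
        mkdir -p "$d/movies" "$d/documents" "$d/backup"
    done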


    You can web search for mergerfs HowTos. I have never read any of them so I can't say if they are any good or not. The official documentation is quite good but assumes the user already has some knowledge in this area and understands the terms and concepts used.


    I prefer to use a least free space (lfs, but not eplfs) policy because I want disks to fill up before moving on to a newly added disk. I don't pool drives, I pool directories. I don't use the OMV mergerfs plugins. I have a single hand-written mergerfs pool statement in fstab that has never changed since I put it there years ago. All newly added drives I wish to be in a pool have the required empty directory structure put in place when they are added to the machine.
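
    Such an fstab entry might look like this (a sketch; the branch glob, mount point and minfreespace value here are hypothetical):

    Code
    # /etc/fstab - hand-written mergerfs pool of per-disk directories, lfs policy
    /srv/disk*/pool  /srv/media  fuse.mergerfs  defaults,allow_other,category.create=lfs,minfreespace=20G  0  0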


    There is no relationship between SnapRAID and mergerfs, other than that SnapRAID will not operate on a mergerfs pool mount point. Such mount points are explicitly ignored, and this is reported via a message: WARNING! Ignoring mount point ‘your mergerfs mountpoint here’ because it appears to be in a different device. (I am ignoring SnapRAID's optional built-in pooling capability here.)

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.
