Enhancement: openmediavault-unionfilesystems aufs create option

  • Hi,


The openmediavault-unionfilesystems plugin is good, but it unfortunately defaults to an AUFS create policy of mfs (most free space). In line with a popular guide, I'd prefer to use pmfsrr as the better option.



Without this switch, adding new TV episodes, for example, causes them to land on a different drive than the one where the series already lives. That's arguably fine if you never leave OMV (though it creates a scattered mess if you later decide to use the drives unpooled), but it also means multiple drives have to spin up to access the files for a single TV series rather than just one.


As I understand it, the minimum-space option is also necessary if you are using SnapRAID, where you usually don't want the system filling a data drive completely; leaving headroom avoids issues with the parity drive(s) running out of space.


I've overridden it in /etc/fstab by replacing create=mfs with create=pmfsrr:10000000000, but it would be nice if a future version of the GUI added options to select the create policy and specify the minimum space, even if it were hard-coded into a single dropdown with a few size selections. If we're really wishing, :D it would also be nice to be able to edit the aufs entry; presently you can only add another drive or delete the entry. That said, I am thankful the GUI plugin exists at all, even if I have to edit the entries it makes.
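For reference, the resulting fstab entry looks roughly like the following. The branch paths and mount point are placeholders for my setup, not something OMV generates verbatim:

```
# Paths are illustrative. 10000000000 bytes leaves roughly 10 GB of
# headroom per branch before aufs moves on to another drive.
none /media/pool aufs br:/media/disk1=rw:/media/disk2=rw,create=pmfsrr:10000000000,sum 0 0
```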


    ...Donovan

• Official post

I hadn't looked at aufs in such a long time that I never knew that flag existed. I never liked pmfs, but pmfsrr seems more like mhddfs without the FUSE penalty. We already have a multi-unit field for mhddfs, so it wouldn't be hard to add this.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

• From what I can gather trying to Google that flag, it is a recent addition... even the SourceForge parameters page doesn't show it yet (probably an oversight). Zackreed.me has a few great server articles, and it was there that I read about the author going back and forth between mhddfs and aufs for months before finally settling on aufs. In addition to it being a bit faster, he was getting frequent disconnects with mhddfs on large files.

• Official post

    Until this can be added to the plugin, you can set a value for OMV_FSTAB_MNTOPS_AUFS in /etc/default/openmediavault.


Code
OMV_FSTAB_MNTOPS_AUFS="create=pmfsrr:10000000000"

then

Code
omv-mkconf fstab

then reboot.


• Official post

Quote
Sorry for the crosspost; can you please review this bug too:


I saw the bug tracker report, the GitHub report, your PM, and this post. I think that is enough notification.


• I apologize; I had overridden it in fstab, but I hadn't yet rebooted and instead went back into unRAID until I had time to play with OMV again.


I got an error on boot but decided to be thorough and try the override as well. It appears the aufs in OMV 1.0 may not be new enough:


    Code
    [   20.396354] aufs: module is from the staging directory, the quality is unknown, you have been warned
    .
    [   20.397026] aufs 3.2.x-debian
    [   20.397154] aufs au_opts_parse:1114:mount[1826]: wrong value, create=pmfsrr:10000000000


    A little Googling suggests the new pmfsrr option is in aufs 3.9 and up.

Quote

    In the newer versions of aufs 3.9 and up (I believe) there is a new create mode called pmfsrr which seems more like what you were looking for. It enables aufs to write to other disks when the drive/s containing the parent folders get full in a round robin fashion.
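To make the policy in that quote concrete, here is a rough Python sketch of how I understand the selection to work. This is purely illustrative; the function and data structures are my own invention, not aufs internals, and the real kernel logic is surely more involved:

```python
# Illustrative sketch of the create=pmfsrr ("parent, most free space,
# round robin") policy as described above -- NOT actual aufs code.

def pick_branch(branches, parent, min_free, rr_index):
    """branches: dict of name -> {'free': units, 'dirs': set of paths}.
    Prefer the branch that already holds `parent` and has the most free
    space (the pmfs part); if every such branch is below `min_free`,
    fall back to round-robin over the remaining branches with room."""
    names = sorted(branches)
    with_parent = [b for b in names
                   if parent in branches[b]['dirs']
                   and branches[b]['free'] >= min_free]
    if with_parent:
        return max(with_parent, key=lambda b: branches[b]['free']), rr_index
    candidates = [b for b in names if branches[b]['free'] >= min_free]
    if not candidates:
        return None, rr_index  # pool is effectively full
    # round-robin among branches that still have headroom
    choice = candidates[rr_index % len(candidates)]
    return choice, rr_index + 1

# /TV lives only on disk1, which is under the threshold of 10 units,
# so writes rotate across the other branches instead.
branches = {
    'disk1': {'free': 5,  'dirs': {'/TV'}},
    'disk2': {'free': 80, 'dirs': set()},
    'disk3': {'free': 60, 'dirs': set()},
}
first, idx = pick_branch(branches, '/TV', 10, 0)
second, idx = pick_branch(branches, '/TV', 10, idx)
print(first, second)  # disk2 disk3
```

With plain pmfs, the fallback would not exist and the write would simply fail once the parent branch runs out of room, which is the difference the quote is describing.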


Again, my apologies for unintentionally making it appear that I'd successfully used that switch with OMV 1.0; I ran out of time to play with OMV that night and didn't get back to it until today.


    ...Donovan

• Official post

I was just doing some research on that. Even Debian Experimental only has aufs-tools 3.2, so we may be waiting a while on this one.


• Interesting. Guess that's a sign I should probably try mhddfs; hopefully it also doesn't have that doubled-up path issue I mentioned in another post. :)


Edit: I just learned that mhddfs doesn't have a similar option and will simply write to the drive with the most space. If I'm adding TV episodes, I want them to go to the parent drive where the folder already exists. This is unfortunate; I may have to leave OMV, argh. Thanks for the effort earlier.

• Official post

Actually, mhddfs will write to the same drive until only the amount set in the threshold field is left. If you make it a large number, it will act like aufs.


Leave OMV for what? Is there another OS with a pooling feature that always writes to the parent folder? Does setting the following option work, or do you really need the round-robin part?


    OMV_FSTAB_MNTOPS_AUFS="create=pmfs"


  • Greyhole has that feature... called sticky files.


    https://github.com/gboudreau/G…ter/greyhole.example.conf


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

• Quote

    Actually, mhddfs will write to the same drive until only the amount set in the threshold field is left. If you make it a large number, it will act like aufs.

    Leave OMV for what? Is there another OS with a pooling feature that always writes to the parent folder? Does setting the following option work, or do you really need the round-robin part?

    OMV_FSTAB_MNTOPS_AUFS="create=pmfs"


Well, I have unRAID now, and there you set a folder-depth level at which files stop being split across drives. So if I had:
\TV Series\Mythbusters\Mythbusters S01E01 Amazing stuff.mkv
and set the split level to 2, it would always keep anything below TV Series, such as the Mythbusters folder, on the same drive. If I added a new series folder, it would go on the drive with the most free space. The control in unRAID is so precise that you can even tell it which drives to choose from when storing new files, depending on the pool, TV Series versus Movies or anything else. Sadly, unRAID is a single-parity system that I've already had fail with (thankfully) no loss of data: one drive was new and empty, the other had a read error but its files are fully intact, which clued me in to the foolishness of single parity on a 14-drive system (another drive is on the way).


By leaving, I was referring to Ubuntu Server specifically, to get the pmfsrr option, but I'm curious about what you said regarding mhddfs. My interpretation was that it always wrote to the drive with the most free space; are you saying it will actually write to the drive containing the parent folder if it already exists there, and only when that drive is full move on to the drive with the most free space? I.e., if I come back to add TV episodes, will they go to the drive with the existing folder?


Quote

    When you create a new file in the virtual filesystem, mhddfs will look at the free space, which remains on each of the drives. If the first drive has enough free space, the file will be created on that first drive. Otherwise, if that drive is low on space (has less than specified by “mlimit” option of mhddfs, which defaults to 4 GB), the second drive will be used instead. If that drive is low on space too, the third drive will be used. If each drive individually has less than mlimit free space, the drive with the most free space will be chosen for new files.
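The drive-selection logic in that quote is simple enough to sketch. Again, this is just my illustration of the documented behavior, not mhddfs source code:

```python
# Sketch of mhddfs drive selection as described in the quote above --
# illustrative only, not the actual mhddfs implementation.

def mhddfs_pick(drives, mlimit):
    """drives: ordered list of (name, free_bytes); mlimit is the minimum
    free space (real mhddfs defaults to 4 GB). Use the first drive that
    still has at least mlimit free; if none does, fall back to the
    drive with the most free space."""
    for name, free in drives:
        if free >= mlimit:
            return name
    # every drive is under mlimit: pick the one with the most room
    return max(drives, key=lambda d: d[1])[0]

GB = 1024 ** 3
drives = [('disk1', 2 * GB), ('disk2', 120 * GB), ('disk3', 500 * GB)]
print(mhddfs_pick(drives, 4 * GB))  # disk1 is under mlimit -> disk2
print(mhddfs_pick(drives, 1 * GB))  # disk1 has room again -> disk1
```

Note that the parent folder never enters into it; the choice depends only on drive order and free space.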


You're right, though; I can just use pmfs with aufs. Not sure how I forgot about that or ruled it out. I will give it a go when I next get a chance, to see why I'm having a permissions issue with the share. I have two drives with the same UUID, so I'll have to fix that; it might be causing the issue. No time tonight, unfortunately.


    Thanks again for pointing that out! ^^


    ...Donovan

• Official post

Quote

My interpretation was that it always wrote to the drive with the most free space; are you saying it will actually write to the drive containing the parent folder if it already exists there, and only when that drive is full move on to the drive with the most free space? I.e., if I come back to add TV episodes, will they go to the drive with the existing folder?


aufs always writes to the drive with the most free space when using create=mfs. mhddfs will also write to the drive with the most free space if you set the threshold at 100%. mhddfs with the threshold at the default 4 GB will write to drive 1 until it has 4 GB left. Where it writes has nothing to do with where the parent folder is located.


All that said, if you always read and write through the pool, it shouldn't matter which drive the files are on :) If you use mdadm RAID, each file is split across all the drives :)


• ^^ Yes, in a typical RAID you've got files striped across every single drive, and they all have to be spun up just to access anything.
The beauty of low-key home systems like unRAID, or SnapRAID on OMV, is that keeping all of those drives spinning 24/7 is completely unnecessary.


Example: TV series
Option A) Throw new TV episodes all over the 10+ drives in my array indiscriminately. They all show up in the same folder, so who cares, right?
Option B) Have the system be smart enough to add new episodes only to the drive that already holds the episodes for that series. The end result looks the same.


With option A, over time, 10+ drives may have to spin up just for me to scan the episodes of one ongoing TV series.
With option B, only one drive has to spin up to accomplish the same thing.

Less power is required, less time is spent spinning drives up, and the I/O request gets a quicker response.


Another example comes to mind; let's say parity fails and a drive or two is lost:
Option B) I've lost all of the TV series that were on the failed drives.
Option A) I've lost assorted episodes of every ongoing show, because files were stored all across the array, and I now face the nightmare of reacquiring all of those episodes, versus option B, where I could have just redownloaded box sets of each season, if not the entire series.


    I know you were mostly teasing but my mind would go crazy just knowing files are thrown all over the place. :D


The compromise with OMV/Debian's older aufs create=pmfs is that the pool will always try to write alongside the parent folder. In unRAID, you could tell it that:
1. \TV Series\ can go on drives 2, 5, 6, 7, 11, 12
2. Keep all of the files for each series on one drive, but a new TV series folder under \TV Series\ can go on any drive


My understanding of create=pmfs is that writes to the pool might fail once one drive fills up, since almost everything on my server sits under a top-level folder of TV Series, Movies, Personal, or Software, and there's no way to replicate unRAID's split-level feature. The workaround would be to find out which TV series is on which drive and write directly to that drive, which is definitely inconvenient, and most people would not bother, I'm sure.
Edit: See below. I think I'm wrong here.


With the create=pmfsrr option in aufs 3.9+, the system would move on and select a new drive to write to even if it ran out of space mid-file; it would transfer everything that had previously been written over to the new drive.


    It sounds like I have talked myself away from OMV again. I really don't want files spread throughout all of my array...


I'll have to investigate compiling the newer aufs on Wheezy, or abandon ship back to Ubuntu LTS; I had it running a few months ago as a test, and the Windows clients didn't even notice the server had changed; all mapped drives and Kodi still worked.
Edit: Obviously, if it could simply be compiled, someone would already have done it. https://tracker.debian.org/pkg/aufs-tools


I understand the power and beauty of OMV in simplifying things and fully respect what a nice integrated solution it is. As with many things, with a single programmer behind it there are pros and cons (unRAID is much the same way, except commercial; there are finally two programmers, but it's still a flawed solution with single parity). I need to stay on the Debian or Ubuntu platform so I can also put a Bluecherry DVR server on the box, or I would try something like FreeNAS.


    Nothing worth doing is ever easy, right? :)
    Sorry for the mini-novel. Mostly helping myself think through this.


    ...Donovan


Edit: I re-read the aufs options, and as far as I can tell, aufs will actually move on to another drive after all... I think. I have to say, the descriptions of how it works could certainly be a bit simpler to understand. :S

• The clearest explanation I can find, which suggests that pmfs should do what I want:


• Official post

    Here is my thinking...


With option A, you *might* have to spin up all 10 drives. If you have large enough drives and are using mhddfs with the threshold set large enough to hold one season of a typical show, the chances of a season being split across more than one drive are almost zero. If you add one new show each week, there is a chance, I guess.


OMV has one programmer, but the plugins have many. Even the aufs/mhddfs/unionfilesystems plugin has two :)


  • This is just a workaround that I use, which may or may not work depending on your setup...


If you manage your TV shows via software like Sick Beard, SickRage, or Sonarr, you can set them up to write directly to the aufs branches. That way, new episodes land in the same folder every time. You can then set up file sharing via Samba, NFS, or whatever, sharing the pooled directory. Although you don't get the full benefits of pooling, in my case at least it's reading files where I need pooling the most.


But then, I only have three drives in my pool, and only two of them store TV shows. With 14 drives, you might find this approach less convenient.
