AUFS Pool issues. Data is not spanning drives

  • My setup is going to be four 4TB HDDs with SnapRAID and pooling (I also like the idea of spinning down the unused drives). I am going to use the server for Plex, CouchPotato, Sabz, and SickBeard. The Plex metadata and temporary downloads are going to live outside the AUFS + SnapRAID pool.


    I set up AUFS and Samba for the share in my test VM. The problem is that when I copied a file to the pool multiple times, it eventually said I had run out of free space on the disk. That was odd because the other disk in the pool still had plenty of free space.


    Question: Is it normal for AUFS not to automatically move data to the next drive once the first drive becomes full? Also, is there a way to tell AUFS that if drive 1 has less than 10GB of free space it should start writing data to drive 2? If AUFS doesn't fit my needs, please feel free to recommend another solution (I heard about mhddfs, but there doesn't appear to be an OMV plugin for it).


    Below are my pool settings. I followed the guide for AUFS and SnapRAID. I didn't check MFS because I would like the first drive to become full (to within 10GB or so) before AUFS moves on to the next drive (that would help with keeping the other drives spun down).




    Below is the error I received while copying the Debian ISO to the pool's Samba share multiple times.




    This shows sdb1 had enough space to fit the 228MB file.




    thanks for the help,
    ~Mike

  • Thanks for the quick reply.


    Below are the steps I followed to set up the pool. I left everything at the default settings (so MFS was unchecked, and UDBA was checked).



    aufs setup:
    1.) Create a shared folder for each drive - d1, d2, d3. **I did this, but I labeled each share differently. Would that have caused the problem, since the shares aren't all named the same? (For example, sdb1 = D1 = shared folder D1, sdc1 = D2 = shared folder D2, ...)


    2.) Create a shared folder on any drive called poolshare. **Or should I have created 'poolshare' on all my data drives, and then only used the one on d1 for Samba?


    3.) Set the bind share as poolshare.


    4.) Set three branches as d1, d2, d3.


    5.) Check the mfs checkbox if you want the drives balanced. Leave it unchecked to preserve folder structure. **What happens if you make a new folder under a parent folder that exists on all the data drives? Will AUFS create the new folder on the other data drives?


    6.) Use the poolshare shared folder in other plugins (Samba, for example). There is no need to use d1, d2, d3 in any plugin. **A rough sketch of the mount I think these steps end up creating is below.
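    If I understand the plugin right, the mount those steps boil down to looks roughly like this (branch paths are placeholders for wherever OMV mounted d1, d2, and d3; I am assuming UDBA checked maps to udba=reval, and I have left the create policy at whatever the plugin picks when MFS is unchecked):

    # Sketch only - real mount points will differ on an actual install.
    mount -t aufs \
        -o br=/media/disk1/d1=rw:/media/disk2/d2=rw:/media/disk3/d3=rw,udba=reval \
        none /media/disk1/poolshare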

    • Official Post

    You set it up correctly. I would check the mfs box though. Also, only use the poolshare in samba. Don't use one of the branch shares in samba. If there was existing data on the drives, aufs won't move it.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • ryecoaaron - Thank you for your comments.


    I was trying to make it so AUFS would fill up D1, then move to D2, then D3, and so on. That way I know exactly where my data lives, and the unused drives can spin down (saves power and produces less heat). I started with empty disks and copied the Debian ISO (via the Samba share) multiple times to see what happens when a disk fills up. I was expecting that once d2 (which AUFS started filling first) became full, the files would then go to d1, which was empty.


    Is that an incorrect assumption? Is there any way to tell AUFS to move on to the next disk once the current disk has only 10GB of free space left?


    Thanks again!
    ~Mike

    • Official Post

    It will only do that if you put the files in the same directory and mfs is unchecked. I don't know of any way to make it move on once only 10GB of free space is left.


  • That's my problem, though. I left MFS unchecked on purpose, thinking it would fill D1 and then move to D2. The disks were empty and it was a new pool. However, after it was done filling the first disk, I received an error saying there was not enough space, even though the second disk was available.


    I am going to do a scratch install and try this again. I will update with the outcome.

  • My test came to the same conclusion. Either I am setting something up wrong, I don't understand how AUFS works, or AUFS isn't doing what it is supposed to.


    I did a brand new install of Debian 7.6 and then a brand new install of OMV 0.6.0. I added three 1GB virtual disks and formatted them as ext4. I made them into shared folders D1, D2, D3.


    When I started the test all the disks were empty. I opened the network share and made a folder named 'Test' in poolshare. After that I started copying files into the Test folder. Eventually, disk2 (sdc1) filled up and the transfer stopped with an error.
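    For anyone who wants to reproduce this without Samba in the middle, this is roughly the same test run from the server's shell (the pool path and ISO path are made up for illustration):

    # Keep copying the same ISO into the pool's Test folder until a branch fills up.
    mkdir -p /media/disk1/poolshare/Test
    i=0
    while cp /root/debian.iso "/media/disk1/poolshare/Test/debian-$i.iso"; do
        i=$((i+1))
        df -h | grep sd    # watch which branch each copy actually lands on
    done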


    I was expecting that once disk2 (sdc1) became full, AUFS would make a 'Test' folder on another filesystem and start writing data to it. Or, since the 'Test' folder was created on an empty pool, that it would create it on all the filesystems.


    Is my expectation incorrect? Will AUFS work for my needs? (Fill a disk, then move on to the next disk, then the next, and so on.)


    Thanks,
    ~Mike


    Screenshots

    • Official Post

    aufs with mfs checked is supposed to write to the drive with the most free space. That evaluation is made every 30 seconds. It is not meant to fill one drive and then move to the next. With mfs unchecked, it uses pmfs, which tries to keep new files on the branch that already holds the parent folder, but will write to the drive with the most free space if the drive with the parent directory is full.
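    For reference, this is roughly what those two behaviours look like as aufs mount options (branch paths are placeholders; the :30 matches the 30-second re-evaluation mentioned above):

    # mfs: always pick the writable branch with the most free space, re-checked every 30 s
    mount -t aufs -o br=/media/d1=rw:/media/d2=rw:/media/d3=rw,create=mfs:30 none /media/pool

    # pmfs: prefer the branch that already holds the parent directory,
    # falling back to the branch with the most free space when that one is full
    mount -t aufs -o br=/media/d1=rw:/media/d2=rw:/media/d3=rw,create=pmfs none /media/pool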


    That said, I don't know why it doesn't work for some. Maybe the pool isn't mounting right on some systems, which would make one drive fill and never move to another drive.
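    If you want to rule that out, a couple of quick checks from the command line (nothing OMV-specific):

    grep aufs /proc/mounts       # the pool should list every branch in its br= option
    cat /sys/fs/aufs/si_*/br*    # one line per branch, if aufs sysfs support is enabled
    df -h                        # compare free space on each branch against the pool itself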


  • ryecoaaron - thanks for the info.


    I wasn't waiting 30 seconds between copies. I will give that a shot. If it doesn't work, I might just enable MFS.


    Out of curiosity, is mhddfs on the drawing board as a plugin? It looks like it does exactly what I thought AUFS did (fill a drive, move to the next one... and you can set a free-space buffer).


    https://romanrm.net/mhddfs
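    For illustration, this is roughly what I mean by a free-space buffer (paths are placeholders; mlimit is how much space mhddfs leaves free on a drive before it starts writing to the next one):

    # Pool three drives; once a drive has less than 10G free, new files go to the next drive.
    mhddfs /media/d1,/media/d2,/media/d3 /media/pool -o mlimit=10G,allow_other

    # Roughly the same thing as an fstab entry:
    # mhddfs#/media/d1,/media/d2,/media/d3 /media/pool fuse defaults,allow_other,mlimit=10G 0 0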


    Thanks again for your assistance.


    ~Mike

    • Official Post

    I guess it wouldn't be hard to copy the aufs plugin and convert it to mhddfs. If I get time...


  • Hi,


    I know that in Ubuntu, getting AUFS to mount consistently from fstab is sometimes hit and miss. People have recommended adding a line to /etc/rc.local instead.
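    Something along these lines, for example (paths and options are placeholders; the line needs to sit above the final exit 0 in /etc/rc.local):

    # /etc/rc.local - mount the pool late in boot instead of relying on fstab
    mount -t aufs -o br=/media/d1=rw:/media/d2=rw:/media/d3=rw,create=pmfs,udba=reval none /media/pool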


    Kryspy

    • Official Post

    I don't have the logging date working right so I just had it in a testing folder. It is in the kralizec-testing repo now.


  • There's an mhddfs plugin? That's great news. How do I add the kralizec-testing repo to my OMV installation? I have installed the Extras plugin, which gives me access to its repo and several others; is there a plugin I can install that gives me access to the kralizec-testing repo?


    Thanks.

    • Official Post

    Do you still have OMV 0.5.x installed?

