Posts by synci

    Can you add the fstab options "nonempty" and "direct_io" to the mergerfs plugin?
    That way they would be selectable in the unionfs plugin and nobody would have to edit fstab manually.

    I ask because I added these options to my fstab, and after today's update they got overwritten.
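    Until the plugin exposes them, a hand-edited /etc/fstab line might look roughly like this (the branch paths and mount point are placeholders, and the exact option set your mergerfs version accepts may differ):

```shell
# /etc/fstab -- hypothetical mergerfs pool entry; paths/labels are examples
/media/disk1:/media/disk2:/media/disk3:/media/disk4  /media/pool  fuse.mergerfs  defaults,allow_other,nonempty,direct_io  0  0
```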


    In the morning the problem returned ... while downloading and post-processing these.
    All my hard drives and my mhddfs pool got dismounted; I had to restart. (Is there any quicker solution than a restart?)
    In the syslog I can see that mkvmerge caused the segfault; maybe that's my problem.
    I have updated mkvtoolnix now; it was more than two years old. (The Debian package source is added to my /etc/apt/sources.list now.)
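    A quicker fix than a full restart, assuming the pool is a FUSE mount that went stale, is usually to lazily unmount it and remount from fstab (the mount point below is a placeholder):

```shell
# Lazily detach the stale FUSE mount (path is an example)
fusermount -u -z /media/pool
# Remount everything still listed in /etc/fstab
mount -a
```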

    You only have a single aufs pool without mirroring, so if you take out 3 disks, the data on those HDDs will be missing ...
    I assume your problem is the missing slot/bay for the new, larger disk?
    You could also connect the new disk via USB in a suitable enclosure first, to transfer the data.

    I tried mergerfs a few days ago and also got transport endpoint errors ... so I decided to switch back to mhddfs.

    Installed OMV completely fresh (100% clean), installed the latest Union Filesystems plugin (testing repo), and created an mhddfs pool with 4x 4TB WD Reds.
    I saw in the log that the mhddfs no-segfault version was installed correctly.
    Still transport endpoint errors; I don't know what to do or why this happens.
    The problem occurs when downloading a few GB and extracting them.
    It also occurs when rsyncing a few GB to my OMV backup machine.

    What are the correct steps to fully check my hard drives for bad sectors/errors on Debian? Maybe that's the problem?
    Sorry, I'm a Linux beginner, but SSH and a few other things are no problem.
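    Not an official OMV procedure, just a common approach on Debian, assuming smartmontools is installed and /dev/sdX stands in for the real device:

```shell
# Install the SMART tools (as root)
apt-get install smartmontools
# Start a long (full-surface) self-test, then review the results afterwards
smartctl -t long /dev/sdX
smartctl -a /dev/sdX
# Read-only (non-destructive) surface scan for bad blocks -- very slow
badblocks -sv /dev/sdX
```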

    All I can see in the syslog is this:

    Thanks !

    Sure, sorry.
    Running OMV 2.1.18, the latest Union Filesystems plugin (testing repo) and mergerfs (installed with apt-get) with 4x 4TB WD Reds, policy lfs.
    What settings do you mean?
    Just tell me what you need and where I can get it; I'm a real beginner :)

    Thanks for the explanation, it makes sense.
    I have 4x 4TB drives; after many years of operation each drive has a directory /movie ...
    One drive is nearly full, that's correct.

    I think it happens because the 3 other drives have nearly the same free space left.
    SABnzbd downloads to my pool, which will use drive 1, and then I move the download to the movie directory on my pool.
    Now it uses drive 2 because it has more free space than drive 1.

    I will move to the "lfs" policy; then everything should be okay for me.

    When I want to switch to fwfs instead of epmfs, how should I do that correctly?
    Btw. everything works very well; sometimes I have SABnzbd rename issues, but I think I can solve those with the fwfs policy :-)
    Thanks !

    I think I got it: I edited /etc/openmediavault/config.xml and /etc/fstab, then rebooted.
    My last problem is that when I cut a file from e.g. /media/UID/downloads/movies and want to paste it into my movie folder, it gets copied to another hard disk.
    mhddfs used the same hard disk when pasting files; what is the correct mergerfs policy for that? I have tried epmfs and fwfs so far.
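    For reference, the create policy ends up in the mergerfs option string; a hypothetical /etc/fstab entry with an explicit policy could look like this (the paths are examples, and option spelling can differ between mergerfs versions, so check your version's documentation):

```shell
# /etc/fstab -- example mergerfs entry with an explicit create policy
/media/disk1:/media/disk2:/media/disk3:/media/disk4  /media/pool  fuse.mergerfs  defaults,allow_other,category.create=fwfs  0  0
```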

    Thanks a lot !

    Thanks a lot guys, that's what I'm looking for.
    The mhddfs no-segfault version was nice but not perfect.

    Can I install your .deb package now and get a third pooling option (mergerfs) on the Union Filesystems plugin page?


    Is updating the repo possible? I have been running the 'no-segfault' version for quite a while now and have not seen "transport endpoint is not connected" once. It does seem that the pooling plugin is much more stable in mhddfs mode when making use of this package.

    Same here, it works fine; without the "no-segfault" version I got endpoint errors every few hours and had to reboot my OMV machine.
    I'm on 2.1.14 now.

    First of all, thanks for that great plugin!
    I have a little but urgent question:

    I have configured 2 pools, each with 4x 4TB drives.
    I want to use rsync to back up pool1 to pool2.
    Is it necessary to exclude *.wh.* or *aufs* files in rsync?

    I'm asking because I want to rsync pool1 to pool2 and not the single drives.
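    If the whiteout files do need excluding, the rsync call between the two pools could be sketched like this; the demo below uses throwaway directories so it is self-contained, and in real use /media/pool1 and /media/pool2 (or whatever the pool mounts are) would replace SRC and DST:

```shell
#!/bin/sh
# Sketch: back up one pool to another while excluding aufs whiteout files.
# SRC/DST are temp dirs here; on a real system they would be the pool mounts.
SRC=$(mktemp -d)
DST=$(mktemp -d)
touch "$SRC/movie.mkv" "$SRC/.wh.deleted-file"
# -a preserves permissions/timestamps; the excludes skip aufs bookkeeping files
rsync -a --exclude='.wh.*' --exclude='*aufs*' "$SRC/" "$DST/"
ls -A "$DST"   # only movie.mkv should remain
```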

    Thanks !

    Sorry, I think I know the answer already :)
    *.wh.* files are only stored on the single disks, right?
    So rsyncing between both pool shares is no problem.