Posts by trapexit

    Has this been done? I've been told by some that there are now two plugins, both of which set up mergerfs, and that seems to be adding to the confusion. Is this true, or are they using old versions that were updated and now have both?


    Who is the maintainer? Is there anything I could do to help here, either on the OMV or mergerfs side?


    I'm sorry if this has been addressed but I'm not a user of OMV and haven't had the time to poke around.

    You **can** put the config and cache in a mergerfs pool, but as the docs mention you have to have page caching enabled, because Jellyfin likely doesn't fall back to regular file IO when mmap fails with SQLite3. Plex is the same way. IMO it's a bug on their end, but there isn't much I can do about it without some really ugly hacks.
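
    For reference, page caching in mergerfs is controlled by the cache.files option. A minimal sketch, with placeholder branch paths and mountpoint, would be something like:

        # enable page caching so SQLite3's mmap usage works
        # (dropcacheonclose is the usual companion option when caching files)
        mergerfs -o cache.files=partial,dropcacheonclose=true /mnt/disk0:/mnt/disk1 /mnt/pool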

    Author of mergerfs here.


    As I understand it, the unionfs plugin used to support multiple technologies but now only mergerfs. Perhaps it's just a documentation problem, but I seem to notice a pretty high rate of questions in the vein of "How are the UnionFS plugin and mergerfs different?", or questions about the plugin that imply the person doesn't know mergerfs is ultimately being used, which I suspect makes it harder for them to find information about mergerfs.


    So I'm wondering if it'd be worth renaming the plugin to "mergerfs" or "unionfs (mergerfs)", or somehow making it clearer what the relationship is and where info can be found.


    I don't use OMV (and therefore the plugin) so I might be missing something but from the outside there does seem to be some confusion.


    Thanks.

    It's not likely mergerfs per se but how it's being interacted with. Over the years people have reported drastically different performance results from setups that on the surface appear similar but clearly aren't. The problem has been tracking down exactly what the differences are. If you look at the benchmarking examples in the mergerfs docs you can see how drastic an effect the per-read/write payload size has on throughput. Unfortunately, mergerfs isn't really in a position to help with apps that use smaller than ideal sizes. Perhaps I could read more than the user asks and cache writes for a time to limit the trips into the kernel, but that would increase complexity. It's on my experimentation todo list. Another thing that impacts throughput is latency: if latency is higher and the app is serially dispatching requests (which is the norm), then throughput suffers.
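
    To make the payload size point concrete, here is the sort of comparison I mean (hypothetical paths; only the per-request size changes):

        dd if=/mnt/pool/bigfile of=/dev/null bs=4k   # many small requests, many kernel round trips
        dd if=/mnt/pool/bigfile of=/dev/null bs=1M   # fewer, larger requests, typically far higher throughput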


    Anyway... wrt this situation... it could be a number of things. You've got TCP with its complex behaviors, the network filesystem and its quirks, mergerfs, and the underlying drives. If any of those "rhythms" get out of sync it could lead to this.


    Tracking it down: if you're able to provide me with some trace logs (using strace) of mergerfs in the middle of a transfer, for both the 1Gbps and 10Gbps links, I can look at the interactions between Samba (NFS?) and mergerfs.
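
    Something along these lines, run on the server mid-transfer (assuming a single mergerfs process), would capture what I need:

        # attach to the running mergerfs process and log syscalls with timestamps and durations
        strace -f -tt -T -o mergerfs-10gbps.trace -p "$(pidof mergerfs)"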


    How are you testing *exactly*? Try to remove any and all variables. If you have a network filesystem mounted on the client, use something like "dd if=/mnt/cifs/file of=/dev/null bs=1M" to remove the local disk from the equation, and play with the per-block read size to see if that makes a big difference. On the server you can put mergerfs into nullrw mode to remove the underlying drives from the equation. And take a look over the performance tweaking section: if you play with a few settings and see a noticeable difference, it might suggest what the true cause is. I'd also suggest trying a different protocol (FTP, scp, etc.) if you haven't, to rule that out as either a cause or catalyst.


    https://github.com/trapexit/mergerfs#nullrw
    https://github.com/trapexit/mergerfs#performance-tweaking
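
    As a sketch (branch paths and mountpoint are placeholders), the nullrw test looks like:

        # server side: reads/writes become no-ops, taking the underlying drives out of the picture
        mergerfs -o nullrw=true /mnt/disk0:/mnt/disk1 /mnt/pool

        # client side: vary bs to see how request size affects throughput over the network
        dd if=/mnt/cifs/file of=/dev/null bs=128k
        dd if=/mnt/cifs/file of=/dev/null bs=1M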

    If you (or anyone else) happen to be in the Manhattan/NYC area, I've a few of those ICYCubes and an eSATA SansDigital I'm not using and am willing to part with on the cheap.


    Regarding other chipsets... in my journeys it always seemed like every single enclosure had its haters. It's very hard to suss out which were systemically bad and which were just one-offs.

    https://github.com/trapexit/bbf


    My own tool, which is similar to badblocks.


    https://github.com/trapexit/me…ki/Real-world-deployments


    Those are the enclosures I used to use. They worked OK, but I would have occasional issues that I simply don't have now with my HBA. I was using them over eSATA, however, so I can't speak to their USB stability. I know this isn't ideal, but a 1-to-1 drive-to-controller setup is probably better when using USB. Almost all SATA-to-USB bridges will reset the whole set of drives if one acts up, which is why I used eSATA.

    It *could* also be the OS drivers being flaky, but that's going to be harder to show without having the exact same device to compare against. When I used USB3/eSATA enclosures I had 4 of the same, so it was clear when one of them was physically bad versus a software bug. Though even then... it's plausible that a hardware issue could be worked around in software.


    It's complicated is what I'm getting at :)

    I'd never say 100% but, yes, very very likely. Preferably you'd be able to check the drives outside the suspect enclosure, but I've had individual ports on controllers go bad, so if the same drive is the one having issues, try changing the port it's on (change the physical drive order in the bay). If the same bay keeps acting up regardless of which drive is in it, then it's the controller, or at least that one port. I've seen this a number of times.


    If SMART checks come back OK, as well as badblocks or bbf (self promotion), then while a drive problem is still possible, it's less likely.
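
    By drive checks I mean something like the following (device name is a placeholder; these are the non-destructive variants):

        smartctl -a /dev/sdX              # SMART health overview and attributes
        smartctl -t long /dev/sdX         # kick off an extended self-test
        badblocks -b 4096 -sv /dev/sdX    # read-only surface scan with progress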

    I don't see how, without a way to narrow down what the issue is. Clearly the USB device being reset isn't a good sign. It could be a bad controller, a bad USB cable, or a bad drive. To find out which, you need to be able to swap each out.

    Don't know if those are related given the time difference between them, but they certainly could be. USB3-to-SATA controllers are not known for being the most stable, and it's entirely possible that the drive is fine and the controller is flaky. Does this happen often enough that if you put the drive in another enclosure you'd expect to reproduce it in a short time?
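
    One cheap way to catch it in the act is to watch the kernel log while the drive is under load; USB resets show up clearly there:

        # follow the kernel log and filter for USB/SATA errors and resets
        dmesg --follow | grep -iE 'usb|reset|ata|error'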


    Also ensure your Pi power supply and cable are up to spec. I'm less familiar with the 4, but previous versions could behave poorly if the power delivered was not up to spec.
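
    On a Pi you can at least check whether the firmware has flagged under-voltage (vcgencmd ships with Raspberry Pi OS; my reading of the bits may be off for newer firmware):

        vcgencmd get_throttled
        # 0x0 = no problems; bit 0 = under-voltage right now, bit 16 = under-voltage has occurred since boot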