MergerFS slow read speed only on 10G connection

  • Hi,


    I have a NAS with a Core i5-3550, 16GB RAM, and a 120GB SSD for the OS (OMV 4 with the latest updates). Storage is a MergerFS drive pool with 2x Toshiba N300 and 2x Hitachi NAS 7200rpm drives. They all do around 180-200MB/s read and write, so more than a 1G connection can carry.


    I recently replaced the network card with an Intel X550-T2 (dual port 10G). One of the ports connects to my regular 1G network, and the NAS has no trouble maxing it out reading or writing, whether it's to the MergerFS pool or a share on a single drive (I get 113MB/s).


    The other port is connected directly to my desktop over CAT7 to another Intel X550-T2. Now here's the weird part: I get maximum drive speed reading and writing when I access a share on a single drive (180-200MB/s). But when I access a share on the MergerFS pool, write speed stays the same while read speed is even slower than on the 1G link (80-90MB/s).
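
    A raw-link check with iperf3 would take the filesystems out of the picture entirely; a minimal sketch, assuming iperf3 is installed on both ends (the IP is an example):

    # on the NAS
    iperf3 -s
    # on the desktop, pointed at the NAS's 10G address
    iperf3 -c 192.168.10.1 -t 30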



    Anyone have an idea?

    • Official Post

    What policy do you use with mergerfs? Do you have only one copy of each file? Do you have only one copy of each folder?


    Can you notice a difference when reading a very big file, compared to reading many small ones?

  • What policy do you use with mergerfs? Do you have only one copy of each file? Do you have only one copy of each folder?


    Can you notice a difference when reading a very big file, compared to reading many small ones?


    The policy is Most Free Space. I'm not sure what you mean by having only one copy... I transferred the folders and files to the pool and let MergerFS do its thing. (It creates the folders on different drives and spreads the files across them.)


    I'll try transferring smaller files, as I only tried larger files to test sequential speed. The problem occurs only when reading from the pool through the 10G link. Reading from the pool through the 1G link is OK, and reading from a regular share (not on the pool) through the 10G link is also OK.
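
    For context, a Most Free Space pool is typically mounted with something along these lines; the branch paths below are hypothetical OMV-style labels, and category.create=mfs selects the Most Free Space create policy:

    # sketch of an /etc/fstab entry for a Most Free Space mergerfs pool (paths are examples)
    /srv/dev-disk-by-label-disk1:/srv/dev-disk-by-label-disk2 /srv/mergerfs/pool fuse.mergerfs defaults,allow_other,category.create=mfs 0 0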

  • MergerFS is slower; that's just a fact of life. I use it for the convenience of not having to browse through multiple shares to find a file, and typically only use it for read access. When uploading to the server I use FTP or SMB to connect to individual shares so I can max out my paltry 1G Ethernet.

  • MergerFS is slower; that's just a fact of life. I use it for the convenience of not having to browse through multiple shares to find a file, and typically only use it for read access. When uploading to the server I use FTP or SMB to connect to individual shares so I can max out my paltry 1G Ethernet.


    Reads are faster through 1G Ethernet (113MB/s) than through 10G Ethernet (85MB/s). I'd expect to at least get the same speed on 10G.



    I tested with smaller files; I get around 60MB/s through either the 1G or the 10G link.

  • Then I misunderstood your original post. That is odd. Have you tried other protocols over the 10G connection that do not involve mergerfs? For example, if you use FTP directly to/from a share over 10G, the hard disk should be the limiting factor.

  • Then I misunderstood your original post. That is odd. Have you tried other protocols over the 10G connection that do not involve mergerfs? For example, if you use FTP directly to/from a share over 10G, the hard disk should be the limiting factor.

    Yes, with SMB shares on a single drive I get the maximum drive speed (180-200MB/s). And it goes to around 1GB/s if I transfer the same file a second time, as it's probably cached in RAM by then. So the 10G card works fine.
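
    A local read of the same file, once from the pool and once directly from the underlying disk, would isolate the FUSE overhead from the network entirely. A sketch with hypothetical paths, dropping caches first so RAM doesn't skew the numbers:

    sync && echo 3 > /proc/sys/vm/drop_caches
    # through the mergerfs mount
    dd if=/srv/mergerfs/pool/bigfile.mkv of=/dev/null bs=1M
    # directly from one branch
    dd if=/srv/dev-disk-by-label-disk1/bigfile.mkv of=/dev/null bs=1M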

    • Official Post

    Reads are faster through 1G Ethernet (113MB/s) than through 10G Ethernet (85MB/s). I'd expect to at least get the same speed on 10G.

    This isn't a problem with mergerfs. It is the network setup and/or driver and/or 10G firmware or something else.


    mergerfs is a FUSE filesystem. It has a lot of overhead; it will saturate 1G, but 10G takes a lot of CPU that I doubt your old i5 can deliver. There are ways to tune mergerfs for faster reads or faster writes, but not both.
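
    As a sketch of what read-oriented tuning might look like (option names per the mergerfs docs; cache.files requires a reasonably recent mergerfs, older builds only have the direct_io flag, and the paths are examples):

    # read-oriented mount: page caching on, asynchronous reads
    mergerfs -o allow_other,async_read=true,cache.files=partial,category.create=mfs \
        /srv/dev-disk-by-label-disk1:/srv/dev-disk-by-label-disk2 /srv/mergerfs/pool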


  • This isn't a problem with mergerfs. It is the network setup and/or driver and/or 10G firmware or something else.
    mergerfs is a FUSE filesystem. It has a lot of overhead; it will saturate 1G, but 10G takes a lot of CPU that I doubt your old i5 can deliver. There are ways to tune mergerfs for faster reads or faster writes, but not both.

    But when not using the MergerFS pool (an SMB share on a single drive), I get 180-200MB/s reads with that same network card.

    • Official Post

    But when not using the MergerFS pool (an SMB share on a single drive), I get 180-200MB/s reads with that same network card.

    Samba with your network adapter driver never touches a userland FUSE filesystem. I'll see if I can set up mergerfs on my 10GbE systems, which I know get 750MB/s+ (NVMe storage), to see what the speeds drop to.
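
    A throwaway single-branch pool is enough for that test; something like this, with hypothetical paths:

    mkdir -p /mnt/testpool
    mergerfs -o allow_other /mnt/nvme /mnt/testpool
    # share /mnt/testpool via SMB or NFS and rerun the benchmark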


    • Official Post

    With a single-drive mergerfs pool on NVMe shared via NFS, this was the best speed I could get with dd writing. As I said, this same setup can easily hit 750 MB/s+. If you are using spinning disks and Samba on an older i5, your speeds aren't too far off.
    20971520000 bytes (21 GB, 20 GiB) copied, 79.1027 s, 265 MB/s
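
    (The byte count works out to 20000 x 1MiB, so the invocation was presumably something along these lines; the exact path and flags are an assumption:)

    dd if=/dev/zero of=/mnt/nfs-pool/test.bin bs=1M count=20000 conv=fdatasync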


  • Samba with your network adapter driver never touches a userland FUSE filesystem. I'll see if I can set up mergerfs on my 10GbE systems, which I know get 750MB/s+ (NVMe storage), to see what the speeds drop to.

    Yes, I wanted to show that it's not the 10G card limiting performance.



    I get 113MB/s reads when accessing the pool through the 1G connection, and 85MB/s reads when using the 10G connection. It should at least match the 1G speed.

    • Official Post

    I get 113MB/s reads when accessing the pool through the 1G connection, and 85MB/s reads when using the 10G connection. It should at least match the 1G speed.

    I can't explain this. mergerfs never does anything with networking; there is always a layer such as Samba or NFS between it and the client. Maybe your 1G NIC driver needs less CPU than the 10G NIC driver?
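
    One way to compare the two drivers is to look at the offload and coalescing settings and the error counters on each interface; the interface name below is an example:

    ethtool -k eth0                           # offload settings (tso, gso, gro, ...)
    ethtool -c eth0                           # interrupt coalescing
    ethtool -S eth0 | grep -iE 'err|drop'     # error and drop counters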


  • I can't explain this. mergerfs never does anything with networking; there is always a layer such as Samba or NFS between it and the client. Maybe your 1G NIC driver needs less CPU than the 10G NIC driver?


    CPU usage is 5% in both cases (reading through the 1G or the 10G port).
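
    (The overall figure could hide a single saturated mergerfs thread, so a per-thread view during a read might be worth a look; a sketch, assuming pidstat from the sysstat package is available:)

    pidstat -t -p $(pidof mergerfs) 1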


    Oh well, it's not a huge deal for now; I upgraded to 10G to get faster writes to the server, and that works as expected. I'll try a clean install of OMV on another drive when I have the time.

    • Official Post

    I'll try a clean install of OMV on another drive when I have the time.

    I don't think a fresh install is going to help much. If you want closer to 10G speeds, you are going to have to use RAID.


    • Official Post

    What you need to do is figure out what the bottleneck is and fix that, then identify the new bottleneck and fix that, and so on. We can only guess; only you can find out what is actually wrong.


    If Ubuntu, EXT4 and NFS on both server and client is slow, then you have to examine the cabling, network settings and driver settings. Otherwise try OMV, EXT4 and NFS on the server, then SMB, and so on, until you see where the problem is. Test both reading and writing.


    I suspect Ubuntu, EXT4, NFS and really fast SSDs at both ends would be the fastest out of the box with default settings. That could provide an upper bound for what the hardware is capable of. Most likely the SSDs will then be the bottleneck, unless perhaps you use fast NVMe at both ends.


    If Ubuntu, EXT4 and NFS at both ends is slow, then one NIC might be bad. Or the cabling, the network settings, or the driver. Otherwise, change one thing at a time until you know what the problem is.


    Is it the server network cable? The client network cable? The switch? The operating system? The server drive? The server filesystem? The client drive? The client operating system? The client filesystem? The network settings? The network interface drivers? Something else?
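
    A local baseline on the server is the natural first step, since it removes the network entirely. A sketch, assuming fio is installed (the path is an example; drop --direct=1 if the FUSE mount rejects O_DIRECT):

    fio --name=seqread --filename=/srv/mergerfs/pool/fio.tmp --rw=read \
        --bs=1M --size=4G --ioengine=libaio --direct=1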

  • The bottleneck is something in MergerFS that slows down reads when using the 10G network; that was simple to identify with the testing I've done.



    The person who made the other thread I linked to mentioned this:
    "What caught my eye was the user.mergerfs.fuse_msg_size: 32 option which should be 256 according to the docs but is only used in kernel >=4.20 and debian stretch is currently max 4.19. It would make sense if it has something to do with the small 128k read sizes. Could increasing this lower the overhead?"

    • Official Post

    While I don't understand why the network adapter would make a difference, maybe @trapexit has an idea?

