Union Filesystems (mergerfs) slow write and read?

  • Hi,


    Just switched from 2.0 to 3.0, and my old pooling (aufs) plugin is no longer available. I found out that Union Filesystems (mergerfs) is the recommended replacement. I tried it out and I only get 10 MB/s read and write speeds.
    With my previous setup I got over 50 MB/s, sometimes around 70 MB/s.


    Why is there such a huge difference?


    I use the "Existing path, most free space" create policy and the standard options:
    defaults,allow_other,direct_io,use_ino
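
    For reference, "Existing path, most free space" corresponds to mergerfs's epmfs create policy, so an /etc/fstab entry with these options would look roughly like the following (branch paths and mount point are placeholders; the exact line the plugin generates may differ):
    /srv/dev-disk-by-label-disk1:/srv/dev-disk-by-label-disk2 /srv/pool fuse.mergerfs defaults,allow_other,direct_io,use_ino,category.create=epmfs 0 0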


    Is there any way to switch to AUFS again with 3.0?


    Right now it feels like a downgrade.


    Thank you!

  • What hardware are you using? mergerfs has higher hardware requirements, resulting in lower read/write speeds on slower machines. On an old HP N40L I am getting around 50-60 MB/s with mergerfs, while accessing the data directly gives me 80-90 MB/s.

    It was recommended here to disable direct_io. I haven't tried it yet, but I doubt it will give much of a speed boost, since I see that when copying to a mergerfs share my CPU utilization is over 200%.

    SuperMicro CSE-825, X11SSH-F, Xeon E3-1240v6, 32 GB ECC RAM, LSI 9211-8i HBA controller, 2x 8 TB, 1x 4 TB, 1x3TB, MergerFS+SnapRAID

    Powered by Proxmox VE

    • Official Post

    Why is there such a huge difference?

    aufs is a kernel module and mergerfs is a fuse module. fuse has quite a bit more overhead.


    Is there any way to switch to AUFS again with 3.0?

    Nope. aufs isn't supported in the backports kernel, which is the default, and there is no aufs plugin.


    Right now it feels like a downgrade.

    I use mergerfs myself and my writes can saturate gigabit (it is a backup server and not usually read from). Not sure why you are seeing such slow speeds. What kind of system? If you are doing mainly reads, then I would try removing the direct_io flag.
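
    For what it's worth, the FUSE direct_io option roughly bypasses the kernel page cache, so reads always go through the mergerfs process and cannot be served from cache. Dropping it leaves an option string along these lines:
    defaults,allow_other,use_ino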

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • What hardware are you using? mergerfs has higher hardware requirements, resulting in lower read/write speeds on slower machines. On an old HP N40L I am getting around 50-60 MB/s with mergerfs, while accessing the data directly gives me 80-90 MB/s.


    It was recommended here to disable direct_io. I haven't tried it yet, but I doubt it will give much of a speed boost, since I see that when copying to a mergerfs share my CPU utilization is over 200%.

    aufs is a kernel module and mergerfs is a fuse module. fuse has quite a bit more overhead.

    Nope. aufs isn't supported in the backports kernel, which is the default, and there is no aufs plugin.

    I use mergerfs myself and my writes can saturate gigabit (it is a backup server and not usually read from). Not sure why you are seeing such slow speeds. What kind of system? If you are doing mainly reads, then I would try removing the direct_io flag.

    My hardware is the following:
    Intel Xeon E5450 3.0 GHz and 4 GB RAM (motherboard limit), with an Intel X25-M SSD as the OS disk.
    The hard drives are standard drives, but I have had read/write speeds easily over 50 MB/s before.


    My network uses Cat 7 cables with gigabit everything and an Asus AC68-U router. Everything is connected with cables, not wireless.


    This write test was done from a Windows 10 machine on the same network.


    I did a test from outside over FTP and got the same speeds, so something is bottlenecking.


    Removing the direct_io flag gave me a whopping 11.5 MB/s in read speed; write speed was the same as before.

    • Official Post

    Removing the direct_io flag gave me a whopping 11.5 MB/s in read speed; write speed was the same as before.

    A couple of things...


    Have you tested the pool using the command line? This would take networking out of the equation. I would use dd to test. dd if=/dev/zero of=/mnt/point/of/pool bs=1M count=10000 conv=fdatasync


    Your speeds look suspiciously like the system is running at 100 Mbit instead of gigabit. I'm not saying it is, but that is about the right speed. It doesn't take much for a cable to go bad or not be plugged in correctly.


    Cat7 for gigabit?
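
    One quick way to check the negotiated link speed directly on the OMV box is ethtool (eth0 is just an example; your interface may be named differently):
    ethtool eth0 | grep -i speed
    If that reports "Speed: 100Mb/s" instead of "Speed: 1000Mb/s", the bottleneck is the link, not mergerfs.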


  • Try a local copy, e.g. copying between the system disk and one of the data disks, to see the difference.


  • Removing the direct_io flag gave me a whopping 11.5 MB/s in read speed; write speed was the same as before.

    In other words: you're running on Fast Ethernet and not Gigabit Ethernet any more, for whatever reason :) You can also directly test for this bottleneck with jperf.exe on Windows and an 'iperf -s' running on the OMV box.
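
    If jperf feels like too much, the plain iperf client on any other machine works as well; it talks to the 'iperf -s' instance on its default TCP port 5001 (replace the address with the OMV box's IP):
    iperf -c <omv-ip> -t 20
    A gigabit link should report somewhere around 900 Mbit/s, while roughly 94 Mbit/s would confirm a Fast Ethernet link.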

  • In other words: you're running on Fast Ethernet and not Gigabit Ethernet any more, for whatever reason :) You can also directly test for this bottleneck with jperf.exe on Windows and an 'iperf -s' running on the OMV box.

    Try a local copy, e.g. copying between the system disk and one of the data disks, to see the difference.

    A couple of things...
    Have you tested the pool using the command line? This would take networking out of the equation. I would use dd to test. dd if=/dev/zero of=/mnt/point/of/pool bs=1M count=10000 conv=fdatasync


    Your speeds look suspiciously like the system is running at 100 Mbit instead of gigabit. I'm not saying it is, but that is about the right speed. It doesn't take much for a cable to go bad or not be plugged in correctly.


    Cat7 for gigabit?

    Wow, maybe that's the case. I will try all the tests you have suggested.
    But when I tried iperf -s, this is the only output:
    iperf -s
    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------
    Does that tell you anything?


    How come it is at 100 Mbit instead of gigabit, if that is the case? Is there a setting, or is the cable the only possible problem?
    Does IPv6 affect any of this? It is disabled on OMV and on my router.

  • Does that tell you anything?

    No, of course not. Grab jperf.exe and run it on Windows against the IP address of your OMV box. But if that's too much hassle, do the filesystem test; if you get more than 12 MB/s there, you have also confirmed that you have a network problem (many reasons are possible; just work your way through from bottom to top, checking cables/connectors first, then settings. Since you don't use a switch but a GbE router, the cause might be as simple as the router having received a firmware update and enabled a 'green mode' on the GbE ports, or something like that).

  • A couple of things...
    Have you tested the pool using the command line? This would take networking out of the equation. I would use dd to test. dd if=/dev/zero of=/mnt/point/of/pool bs=1M count=10000 conv=fdatasync


    Your speeds look suspiciously like the system is running at 100 Mbit instead of gigabit. I'm not saying it is, but that is about the right speed. It doesn't take much for a cable to go bad or not be plugged in correctly.


    Cat7 for gigabit?

    I tried that command on the pool.
    dd if=/dev/zero of=/srv/8879bda3-9234-4107-b900-f6618aec30b6/Media bs=1M count=10000 conv=fdatasync
    dd: failed to open ‘/srv/8879bda3-9234-4107-b900-f6618aec30b6/Media’: Is a directory


    I used that path because that's where Union Filesystems put the pool; my /mnt/ folder is empty. I have the pool mounted and set up as a shared folder.


    Try a local copy, e.g. copying between the system disk and one of the data disks, to see the difference.

    How do I do this? I can't access the system disk in Windows, only the storage disks. I can access both via SFTP, but that doesn't help.

    No, of course not. Grab jperf.exe and run it on Windows against the IP address of your OMV box. But if that's too much hassle, do the filesystem test; if you get more than 12 MB/s there, you have also confirmed that you have a network problem (many reasons are possible; just work your way through from bottom to top, checking cables/connectors first, then settings. Since you don't use a switch but a GbE router, the cause might be as simple as the router having received a firmware update and enabled a 'green mode' on the GbE ports, or something like that).

    I tried jperf.exe against 192.168.1.2 (my NAS), but I'm not sure which port I should run it against, because I get connection refused on everything except 80.

  • I just went into Network settings and added this in the Options:
    speed 1000 duplex full


    Now I get 112 MB/s both ways, read and write.


    How come it wasn't automatically set to gigabit if it was available?


    Are there some old drivers for my network card?
    When the OMV 3.0 install was done, I went to Updates, and there were lots of updates; I tried to install them all, but some failed and were gone after a refresh. So maybe I don't have the latest version?
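
    For the record, the 'speed 1000 duplex full' entry forces the link instead of relying on autonegotiation, roughly the equivalent of running something like this (interface name is an example):
    ethtool -s eth0 speed 1000 duplex full
    Gigabit links are normally autonegotiated, so having to force the speed usually points to a cabling or driver problem, as discussed in the next reply.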

    • Official Post

    How come it wasn't automatically set to gigabit if it was available?

    That is usually driver related and probably due to the age of the hardware. Intel NICs usually work the best. What kind of NIC is on the board?

    When the OMV 3.0 install was done, I went to Updates, and there were lots of updates; I tried to install them all, but some failed and were gone after a refresh. So maybe I don't have the latest version?

    Some failed? Like what?


    I tried that command on the pool.
    dd if=/dev/zero of=/srv/8879bda3-9234-4107-b900-f6618aec30b6/Media bs=1M count=10000 conv=fdatasync
    dd: failed to open ‘/srv/8879bda3-9234-4107-b900-f6618aec30b6/Media’: Is a directory

    You have to give it a filename. Sorry I didn't explicitly say that in the post.
    dd if=/dev/zero of=/srv/8879bda3-9234-4107-b900-f6618aec30b6/Media/test.dd bs=1M count=10000 conv=fdatasync
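
    A matching read test could look something like this (caches are dropped first so the figure reflects the pool rather than RAM; the file name is just the one written by the command above):
    sync; echo 3 > /proc/sys/vm/drop_caches
    dd if=/srv/8879bda3-9234-4107-b900-f6618aec30b6/Media/test.dd of=/dev/null bs=1M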


  • That is usually driver related and probably due to the age of the hardware. Intel NICs usually work the best. What kind of NIC is on the board?

    Some failed? Like what?

    You have to give it a filename. Sorry I didn't explicitly say that in the post.
    dd if=/dev/zero of=/srv/8879bda3-9234-4107-b900-f6618aec30b6/Media/test.dd bs=1M count=10000 conv=fdatasync

    Okay, it doesn't matter now that it works, but that wasn't my first thought.


    I don't know, as everything was gone from the Updates section after they failed. So I have no idea.

  • While in the end this had nothing to do with mergerfs... it's always a good idea when testing such things to start at the bottom and move up. Do some basic dd-based tests on the native drive, then on mergerfs, then locally through the network filesystem, then remotely.
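
    As a rough sketch of that bottom-up approach (the paths are placeholders for your actual disk and pool mount points):
    # 1) a single data disk
    dd if=/dev/zero of=/srv/dev-disk-by-label-disk1/test.dd bs=1M count=2000 conv=fdatasync
    # 2) the mergerfs pool
    dd if=/dev/zero of=/srv/<pool-uuid>/test.dd bs=1M count=2000 conv=fdatasync
    # 3) the exported share, first mounted locally, then from a remote client
    # 4) clean up the test files afterwards
    rm /srv/dev-disk-by-label-disk1/test.dd /srv/<pool-uuid>/test.dd
    If the numbers only collapse at one of these layers, that is where the bottleneck lives.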


    mergerfs actually has a special mode for testing. The `nullrw` option causes mergerfs not to actually read from or write to the backing devices, which gives you the theoretical max throughput. You can check the mergerfs docs for details.
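
    A minimal sketch of such a test mount, assuming current option syntax (branch paths and mount point are placeholders; check the docs for the exact spelling in your version):
    mkdir -p /mnt/pool-test
    mergerfs -o allow_other,use_ino,nullrw=true /srv/disk1:/srv/disk2 /mnt/pool-test
    dd if=/dev/zero of=/mnt/pool-test/test.dd bs=1M count=2000
    fusermount -u /mnt/pool-test
    Since nothing is actually written to the disks, the dd figure approximates the FUSE overhead ceiling on that machine.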
