Download capped at 0.5 Gbit/s while upload is fine (full 1 Gbit/s)

  • I am posting this under Network because, as my tests turned out, it is NOT related to Samba. So let me begin...


    I noticed that my file download from my OMV is capped at around half of what it should be (0.5 Gbit/s).


    So naturally I first blamed Samba :P but it turned out that it also happened when I transferred stuff with SFTP. So not Samba.


    Then it was time to blame my RAID, although having just upgraded it I highly doubted that, but I tried it anyway. Copying stuff around inside OMV from/to my RAID (to an NVMe) always yielded way more than what the NIC is capable of, so another miss.


    Then I blamed my Windows client, only to notice that another client had the same limitation.


    Then I blamed the NICs on both sides and checked on the switch whether they connected at full duplex and 1 Gbit/s (which they do).


    Then, for whatever reason, I did an UPLOAD instead of a DOWNLOAD to my OMV, and to my surprise I got the full 1 Gbit/s my network can handle.


    So now I am very confused and hope anyone here has an educated guess as to what is going on, and whether there is a way to fix it, so I can get my full 1 Gbit/s speed down from my NAS again.

  • Find a way to perform iperf testing to rule out any link-level problems.
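
    A minimal way to run such a test, assuming iperf3 is installed on both ends (the server address below is a placeholder):

    # on the OMV box: start the iperf3 server
    iperf3 -s

    # on the client: measure client -> server throughput
    iperf3 -c 192.168.1.10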

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.


  • Edit: ran it again with a larger buffer; now seeing the same results as described in my initial post.
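
    The exact flags are not shown in the post; as an illustrative sketch, a larger socket buffer can be requested like this (address and value are placeholders):

    # -w requests a larger TCP window / socket buffer
    iperf3 -c 192.168.1.10 -w 1M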

  • Both results are not very good. Please try to use a Cat6 cable between server and client (direct connection, no switch in between) and see if throughput is near 112 MB/s, which is the fastest possible on a 1 Gbit link.

  • Did you consider networking hardware as root cause?

    It could potentially be as simple as a broken wire in the network cable or a bad connection to the router/switch port.

    omv 6.9.6-2 (Shaitan) on RPi CM4/4GB with 64bit Kernel 6.1.21-v8+

    2x 6TB 3.5'' HDDs (CMR) formatted with ext4 via 2port PCIe SATA card with ASM1061R chipset providing hardware supported RAID1


    omv 6.9.3-1 (Shaitan) on RPi4/4GB with 32bit Kernel 5.10.63 and WittyPi 3 V2 RTC HAT

    2x 3TB 3.5'' HDDs (CMR) formatted with ext4 in Icy Box IB-RD3662-C31 / hardware supported RAID1

    For Read/Write performance of SMB shares hosted on this hardware see forum here

  • I have worked with computers for over 40 years now, so please do not ask me to do the basics. I have already done all of that before coming here and asking for help.

    But to make it crystal clear: I already checked the cables, even tried a direct connection without a switch in between, and used a 2nd NIC in the NAS, all with the same results. :)


    I had the full 1 Gbit/s network speed (both directions) with my NAS (same NAS, same network, same PC) when I was running TrueNAS for over 4 years, and only now switched to OMV because I wanted Docker support, which is a pain on FreeBSD. I am also pretty sure I had the full 1 Gbit/s when I tested OMV in the first weeks before I made the switch.

    My house has Cat7 wiring and everything else works fine at 1 Gbit/s. Also, your idea about the cable would not explain why it is 1 Gbit/s UPLOAD but 0.5 Gbit/s DOWNLOAD (on the same cable).

    It is definitely NOT the environment but something coming from Debian. As you can see above, I adjusted some iperf settings and the numbers now say the same thing as this screenshot from my Task Manager.

    It seems to me that the system is unable to feed enough data into a send queue when it comes to putting data onto the NIC (sending internally to other hardware is fine, and so is receiving).

  • I have worked with computers for over 40 years now, so please do not ask me to do the basics

    So you should know that crystal balls are in short supply; in other words, few details have been provided to allow for troubleshooting.


    It is definitely NOT the environment

    I'd recommend the post "Ethernet was slower only in one direction on one device" from Jeff Geerling, which demonstrates the opposite experience.


    The relevant part is copied here for convenience, but it is highly recommended to read the full post!

    "when one of my Macs was getting 100 Mbps sometimes, and 10 Gbps others, seemingly depending on the direction the wind was blowing. In that case, it turns out one of the 8 wires in one keystone jack was rubbing against the shielded keystone casing, and causing the entire cable run to downrate to 100 Mbps... but only sometimes."



  • So you should know that crystal balls are in short supply; in other words, few details have been provided to allow for troubleshooting.

    My apologies. You are right that I should have started with a clearer picture for you guys. By not doing so, I let you guess my level of cluelessness from what and how I wrote. So I got what I deserved. :)


    I'd recommend the post "Ethernet was slower only in one direction on one device" from Jeff Geerling, which demonstrates the opposite experience.

    That is a very interesting read. Although if you compare the writer's story with my problem, you will see that it is a little different at key points.


    • I changed the cable and nothing changed.
    • The speed problem is limited to only one computer, but shows up on every NIC used there.
    • Using a different set of computers (with no OMV), I reach the full speed every time in both directions on my network.
    • He talks about having speed issues "sometimes"; I have them "all the time". The download speed from OMV is capped at around 0.5 Gbit/s (see the edited iperf results above).


    I just had a crazy idea and did the following (a sketch of the equivalent commands follows after the list). To explain the setup, you have to know that so far I used a 2nd NIC inside my NAS only for Docker and my DMZ for Docker containers. To achieve that I use a Docker network in macvlan mode, so OMV itself had no direct access to that NIC (unconfigured enp1s0f0.102).


    • Configured the 2nd NIC in OMV so it can be used directly by OMV and assigned it an IP address.
    • Started iperf3 directly on an OMV/Debian command line and ran a test, resulting in the same bad/capped performance.
    • Created a new Docker network with the DMZ NIC as parent in bridge mode (so the settings of the OMV-configured NIC have to be used).
    • Started a Docker container (latest Debian image with just network tools and iperf installed) in that network, with the same bad results (capped at half a gigabit).
    • Then I changed the network and started the same Docker container on the Docker network I use for my DMZ (configured in macvlan mode) and got the FULL 1 Gbit/s both ways.
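
    As a rough sketch of the macvlan part of this test: the subnet, gateway, and network name below are illustrative assumptions; only the parent interface enp1s0f0.102 comes from the description above.

    # create a macvlan network that bypasses the host's IP stack on that NIC
    docker network create -d macvlan \
      --subnet=192.168.102.0/24 --gateway=192.168.102.1 \
      -o parent=enp1s0f0.102 dmz-macvlan

    # run a throwaway Debian container on it and start an iperf3 server inside
    docker run --rm -it --network dmz-macvlan debian bash
    # inside the container: apt update && apt install -y iperf3 && iperf3 -s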

    These tests even went over my OPNsense firewall hardware, as the traffic had to cross networks from my Windows computer inside my LAN over to the DMZ network.


    So if the NIC configured by OMV/Debian is used, the result is a capped transfer speed in one direction. Using the Docker container in a network configured in macvlan mode, I get the full speed of the NIC in both directions.


    In case you are not familiar with Docker and/or macvlan mode: this does not use the host as a bridge but simulates the container being its own piece of hardware on the network with its own MAC address, completely ignoring anything configured on the LAN adapter inside the host. Before I ran these tests I had it working with nothing configured at all in OMV, meaning the NIC showed as DISABLED in OMV's network GUI.


    I hope I have explained a bit better why it seems to me to be a Debian/OMV config issue with the NIC/base system and not something outside of OMV.

    • Official post

    I hope I have explained a bit better why it seems to me to be a Debian/OMV config issue with the NIC/base system and not something outside of OMV

    I'm going to have to disagree. I looked at this when you initially posted; it's a long time since I've come across something like this, you are far more knowledgeable than I am, and I would have to seriously dig into the grey cells to get my head around it.


    I've just copied a 2 GB file, which I had uploaded last night to OMV, back to my W10 workstation, and the transfer rate is the same in both directions, using Cat5e through a switch.

  • you will see that it is a little different at key points

    Actually, I didn't expect Jeff's issue to be identical to yours, but he had to question basic assumptions to find the root cause.

    In enterprise environments I often found the root cause of strange issues to stem from outdated firmware or driver versions (80%) or from new issues in the latest versions (20%).



  • You need to do iperf testing in both directions. To do this you must run it simultaneously on both machines, as either client or server as appropriate.


    Then you need to run it as client on one machine against the server running on the other. Then reverse the roles and test again. This will show you the effective link speed in both directions.


    Use the same settings that produced 947 Mbits/sec for you.


  • You need to do iperf testing in both directions. To do this you must run it simultaneously on both machines, as either client or server as appropriate.


    Then you need to run it as client on one machine against the server running on the other. Then reverse the roles and test again. This will show you the effective link speed in both directions.

    And how exactly would that achieve anything different from the -R switch I already used?
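
    For reference, -R (reverse mode) makes the server send and the client receive, so a single client invocation tests the opposite direction (the address is a placeholder):

    # normal run: client sends to the server ("upload" to the NAS)
    iperf3 -c 192.168.1.10

    # reverse run: the server sends, i.e. the "download" direction
    iperf3 -c 192.168.1.10 -R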


    I already know the effective link speed, and more speed testing will not change the result without changing something first.


    Actually, I didn't expect Jeff's issue to be identical to yours, but he had to question basic assumptions to find the root cause.

    In enterprise environments I often found the root cause of strange issues to stem from outdated firmware or driver versions (80%) or from new issues in the latest versions (20%).

    Aye, it's all good. I appreciate that you are trying to help instead of just telling me "it works for me, so it can't be a problem" like others do. ;)


    My gut still tells me it is the installation and that I have to look for it there. As my gut is usually right, I will keep digging there and leave you guys alone.

    Edit:

    I did find something interesting. I remembered I still had a USB stick, which I had tested OMV with, in the internal socket, and booted from there. It turns out that with that install I get the full speed in both directions, so something IS definitely wrong with my normal install. Both installs run the same OMV in the end, but I installed them differently: my current main installation used the "install Debian first, then OMV on top of it" method; the USB install used the original OMV ISO.

    • Official post

    My current main installation used the "install Debian first, then OMV on top of it" method; the USB install used the original OMV ISO.

    This makes no difference. Both methods install the openmediavault package using apt. The install script does add optimizations to Samba that the ISO doesn't. I find it hard to believe they would cause the issue, but just remove them from the Samba settings extra options box to test.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • This makes no difference. Both methods install the openmediavault package using apt. The install script does add optimizations to Samba that the ISO doesn't. I find it hard to believe they would cause the issue, but just remove them from the Samba settings extra options box to test.

    And you are right, it is NOT the different installation method. It is ...drumroll... DOCKER.


    I reinstalled from scratch and kept testing. All was good until I hit the Install button for Docker. Afterwards it's down to that 0.5 Gbit/s.

    If I remove Docker again AND reboot, the full speed comes back.

    So whatever Docker is doing to my main NIC, it's bad. Now I "only" have to find out what exactly is doing it and how to reverse it.

    • Official post

    I reinstalled from scratch and kept testing. All was good until I hit the Install button for Docker. Afterwards it's down to that 0.5 Gbit/s.

    If I remove Docker again AND reboot, the full speed comes back.

    So whatever Docker is doing to my main NIC, it's bad. Now I "only" have to find out what exactly is doing it and how to reverse it.

    Docker adds firewall rules, but I have never seen those rules cause a performance change, and I have Docker installed on every OMV system I own. I will be curious to see what you find.


  • Maybe related to this? docker iptables

    Tupsi, as you can see, people try with crystal balls, but without knowing your HW & kernel version details it's a shot in the dark at best.


  • Maybe related to this? docker iptables


    Check the end of the thread and see if you have the mentioned network adapter.
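
    A quick way to check which driver a NIC uses (the interface name is a placeholder):

    # show driver, version, and firmware for the interface
    ethtool -i enp1s0f0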

    Thank you very much for that, this is it. My network card identifies as e1000e instead of e1000, but it seems affected anyway. Doing what is written on that Hetzner page did the trick, so I will leave it here as a reference in case someone else comes by with this odd piece of hardware.


    ethtool -K <interface> tso off gso off


    (taken from https://docs.hetzner.com/robot…rformance-intel-i218-nic/)


    Everyone who tried to help, thanks for that! I have a feeling you all know from first-hand experience how frustrating it can be when you cannot find the answer to a question that started with "can't be that hard, can it?". It turned out again that it was. So thanks again for the help, guys, much appreciated!



    And for people (like me) wondering afterwards how to do that on every boot, here is a nice how-to. Just scroll to the end and use the systemd variant (a minimal sketch follows below).


    https://linuxhint.com/use-etc-rc-local-boot/
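
    As a minimal sketch of that systemd variant (unit name and interface are illustrative assumptions):

    # /etc/systemd/system/disable-offload.service
    [Unit]
    Description=Disable TSO/GSO on the e1000e NIC
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/ethtool -K enp1s0f0 tso off gso off

    [Install]
    WantedBy=multi-user.target

    Enable it once with "systemctl enable --now disable-offload.service" and it will reapply the setting on every boot.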

    Edited once, last by Tupsi () for the following reason: added link to systemd how-to for a simple script.

  • geaves

    Added the label "solved".
  • What a great thread, and thanks to everyone, especially Tupsi for persevering. This issue had me pulling my hair out for weeks, trying to find the root cause of my network being half speed in one direction on my OMV NAS running on a Lenovo M910q Tiny. I ruled out anything physical on my new Cat6 LAN setup early on, got into all sorts of network troubleshooting, tried so many iperf setups, performance tuning, etc.


    Only when I googled for "half gigabit speed" did this thread show up. I tried the fix above to disable TCP segmentation offloading, and it works!
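
    To verify the change took effect, the current offload state can be checked like this (interface name is a placeholder):

    # lowercase -k lists offload settings; both should now read "off"
    ethtool -k enp1s0f0 | grep -E "tcp-segmentation-offload|generic-segmentation-offload"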
