10Gbps network performing at 1Gbps

  • To preface things, I did a bunch of omv-release-upgrades to get from 2.x to 4.x. I ended up sorting through a LOT of problems, but I think all of my errors are resolved as far as that goes.


    I have a Mellanox ConnectX in both my OMV box and my Windows 10 box, each connected to a 10Gbps port on a UniFi switch. Both OSes show that they are connected at 10Gbps, but all testing shows the Linux box topping out at about 112MB/s (1Gbps): iperf, SMB, NFS, everything. ethtool reports the link speed as 10000Mb/s. Is there some configuration I should check that might be limiting the interface to 1Gbps throughput?
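
    A quick way to separate the link itself from the file-sharing stack is to check the negotiated speed and then run a raw TCP test. This is only a rough sketch; the interface name (eth5) and IP address below are placeholders, so substitute your own:

        ethtool eth5 | grep -i speed       # should report Speed: 10000Mb/s
        # run "iperf3 -s" on the Windows box first, then from the OMV box:
        iperf3 -c 10.0.0.20 -t 30 -P 4     # raw throughput, independent of SMB/NFS and disks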

    • Official Post

    I have a similar setup as well. Are you sure your hardware (disks in particular) is fast enough?
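
    If you want to rule the disks in or out before blaming the network, a rough sequential write test against the pool the share lives on is enough. A minimal sketch, assuming a typical OMV mount point (the path below is only an example):

        dd if=/dev/zero of=/srv/dev-disk-by-label-data/ddtest bs=1M count=4096 oflag=direct status=progress
        rm /srv/dev-disk-by-label-data/ddtest    # clean up the test file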

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3



  • I have almost the same setup.
    Are you sure you are mounting the right network share via the right interface?
    Can you disable the other NIC and try the speed test again?

  • Yeah, I am connecting to the correct interface, but you clued me in on something which resolved it. The problem was that my eth5 interface (where my Mellanox is) didn't have a default route, so all traffic was exiting on eth0, because that's where the gateway was bound. I added a default route, ran ifconfig eth0 down, and then started to see about 9Gbps throughput in iperf (which I'm happy with), and tested a file transfer at about 300-500MB/s (also happy with that).
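
    Roughly what that fix looks like from the shell, with placeholder gateway and peer addresses; note it probably won't survive a reboot unless the gateway is also set on the 10G interface in the OMV network config:

        ip route add default via 192.168.1.1 dev eth5    # gateway now reachable via the 10G NIC
        ifconfig eth0 down                               # or: ip link set eth0 down
        iperf3 -c 192.168.1.50 -t 30                     # re-test raw throughput over the 10G path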

  • I'm using bond mode as well, but it's slow (10Mbps).


    Can you post a screenshot of your NIC config?
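
    In case a screenshot is awkward, the same information can be pulled from the shell; bond0 and eth0 below are placeholder names for the bond and its slaves:

        cat /proc/net/bonding/bond0      # bond mode and per-slave link state
        ip -br addr show                 # quick overview of interfaces and addresses
        ethtool eth0 | grep -i speed     # negotiated speed of each slave, one at a time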

    OMV v5.0
    Asus Z97-A/3.1; i3-4370
    32GB RAM Corsair Vengeance Pro
