Slow transfer speeds with Mellanox 40GbE

  • My current setup:


    Windows 10 box with a Mellanox ConnectX-3 MCX354A-FCBT 40GbE NIC, Ethernet mode enabled,


    connected directly with a QSFP+ DAC cable to an OMV 5.5 setup (ZFS RAIDZ2 using four 3.5-inch drives) with the same Mellanox ConnectX-3 MCX354A-FCBT 40GbE NIC.


    When I transfer files across this network via SMB, the max speed I'm getting is only 270 MB/s.



    What am I doing wrong? I've tried different filesystems too (BTRFS), but still get similar speeds.


    Where is the bottleneck? Some help would be most appreciated.



    I was thinking I could adjust the various settings in the Windows Mellanox driver, such as MTU, but how can I do this on the OMV side?


    Thanks



    • Official post

    How much speed did you think you were going to get out of spinning disk? What speed is each side connected at?


    Why the large font?

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!



  • Nice font size! :D

  • Is that about the right speed I should be expecting?


    On my 1GbE connection I get 100 MB/s. In an ideal world I thought 10GbE would give me 1000 MB/s, but more likely 600 MB/s, so I thought 40GbE should be faster still.




    Is about 200-300 MB/s the max I should be expecting?

  • How much speed did you think you were going to get out of spinning disk? What speed is each side connected at?


    Why the large font?

    Sorry about the large font, I don't know what happened. What do you mean by speed at each side? I don't quite understand.


    So is about 200-300 MB/s the max I should be expecting from my setup? That seems a little on the low side. I thought I would achieve at least towards 1000 MB/s, but I'm at 1/5 of that.


    Any ways to improve on that?

    • Official post

    Is about 200-300 MB/s the max I should be expecting?

    Yes. Each of those disks can probably do about 150 MB/s.


    On my 1GbE connection I get 100 MB/s. In an ideal world I thought 10GbE would give me 1000 MB/s, but more likely 600 MB/s, so I thought 40GbE should be faster still.

    That is what those connections will do if your storage is fast enough. Your storage is nowhere near fast enough to saturate 10GbE. 40GbE will require hardware that you will probably not be able to afford.
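
    As a rough back-of-envelope on raw line rates (before any protocol overhead), just to put those numbers in context:

    1 GbE  =  1 Gbit/s / 8 ≈  125 MB/s  (the ~100 MB/s seen on 1GbE is normal after overhead)
    10 GbE = 10 Gbit/s / 8 ≈ 1250 MB/s
    40 GbE = 40 Gbit/s / 8 ≈ 5000 MB/s

    So at 40GbE the wire itself is nowhere near the limit here.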


    What do you mean by speed at each side? I don't understand what you mean.

    Network adapters can connect at different speeds. On Linux, ethtool will show you the connection speed. On Windows, you have to look at the status of the network adapter. I'm guessing your adapters aren't connecting at 40GbE. The fact that you are hitting 270 means you are probably connected at 10GbE.
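
    For reference, a minimal way to check the negotiated speed on each side (enp3s0 is just a placeholder interface name; on Windows, the PowerShell cmdlet is an alternative to clicking through the adapter status dialog):

    # OMV / Debian side - look for the "Speed:" line (e.g. 40000Mb/s)
    ethtool enp3s0 | grep -i speed

    # Windows side, in PowerShell - the LinkSpeed column shows the negotiated rate
    Get-NetAdapter | Format-Table Name, LinkSpeed, Status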


  • Yes. Each of those disks can probably do about 150 MB/s.

    That is what those connections will do if your storage is fast enough. Your storage is nowhere near fast enough to saturate 10GbE. 40GbE will require hardware that you will probably not be able to afford.

    Network adapters can connect at different speeds. On Linux, ethtool will show you the connection speed. On Windows, you have to look at the status of the network adapter. I'm guessing your adapters aren't connecting at 40GbE. The fact that you are hitting 270 means you are probably connected at 10GbE.


    On the OMV side I ran ip addr and ethtool.



    On the Windows side the connection says 40GbE full duplex (but it's connected via the Ethernet protocol rather than IB, although I was told that potentially maxes out at 10GbE, not 100% sure about this).




    On the OMV side, ethtool shows:



    Supported ports: [ FIBRE ]
    Supported link modes: 1000baseKX/Full
    10000baseKX4/Full
    10000baseKR/Full
    40000baseCR4/Full
    40000baseSR4/Full
    56000baseCR4/Full
    56000baseSR4/Full
    Supported pause frame use: Symmetric Receive-only
    Supports auto-negotiation: Yes
    Supported FEC modes: Not reported
    Advertised link modes: 1000baseKX/Full
    10000baseKX4/Full
    10000baseKR/Full
    40000baseCR4/Full
    40000baseSR4/Full
    Advertised pause frame use: Symmetric
    Advertised auto-negotiation: Yes
    Advertised FEC modes: Not reported
    Link partner advertised link modes: 40000baseCR4/Full
    Link partner advertised pause frame use: No
    Link partner advertised auto-negotiation: Yes
    Link partner advertised FEC modes: Not reported
    Speed: 40000Mb/s
    Duplex: Full
    Port: Direct Attach Copper
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000014 (20)
    link ifdown
    Link detected: yes



    However, the MTU on the OMV side reads:


    <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000






    How do I increase the MTU on the command line?

  • So I figured out the ifconfig command to increase the MTU on both sides and tried a few different values (1500, 3000, 4000); not much gain, to be honest.
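
    For anyone reading along, the commands involved look roughly like this (enp3s0 is a placeholder for the actual interface name, and 9000 assumes both NICs accept jumbo frames; changes made this way don't survive a reboot unless the MTU is also set persistently, e.g. in OMV's network interface settings):

    # modern iproute2 syntax on the OMV side
    ip link set dev enp3s0 mtu 9000

    # legacy equivalent
    ifconfig enp3s0 mtu 9000

    # verify
    ip link show enp3s0 | grep mtu

    On the Windows side the matching setting is the jumbo packet / MTU option in the Mellanox driver's advanced properties, and both ends of the link need to agree.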

    • Official post

    So I figured out the ifconfig command to increase the MTU on both sides and tried a few different values (1500, 3000, 4000); not much gain, to be honest.

    Yep. As I said, your storage is too slow. If you had NVMe on both sides, you would be able to use more of that speed.


  • Sorry, but could you explain that? I would think it should be at least 200-250 MB/s per disk.

  • I tried using a single SSD on my OMV, copying over to an NVMe on my PC; a slight increase to 300-310 MB/s.


    A slight increase obviously, but still far off what I'm expecting. Do you think using Windows and SMB is also affecting the speeds I'm achieving?

    • Official post

    Sorry, but could you explain that? I would think it should be at least 200-250 MB/s per disk.

    Explain what? That is how fast spinning hard drives are. Definitely not 200-250 MB/s unless they are 15k RPM SAS drives. Where did you get your rates from??


    I tried using a single SSD on my OMV, copying over to an NVMe on my PC; a slight increase to 300-310 MB/s.

    Makes sense.


    A slight increase obviously, but still far off what I'm expecting.

    I don't understand why you think it is going to be more. Most SSDs max out at 500 MB/s with large files in perfect conditions.


    Do you think using Windows and SMB is also affecting the speeds I'm achieving?

    No. I still think it is your storage, as I have mentioned many times. Even with 3 GB/s NVMe on both sides, it can be tough to get over 700-800 MB/s.
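
    If you want to confirm that, a rough way to test the network and the storage separately (IP address, mount path and sizes are placeholders; iperf3 and fio may need installing first, e.g. apt install iperf3 fio on the OMV side, and there are iperf3 builds for Windows):

    # 1) Raw network throughput, no disks involved
    #    on OMV:
    iperf3 -s
    #    on the Windows box, 4 parallel streams against the OMV IP:
    iperf3 -c 192.168.1.10 -P 4

    # 2) Sequential read from the pool on OMV, no network involved
    #    (pick a size well above your RAM so the ARC doesn't mask the disks)
    fio --name=seqread --filename=/srv/dev-disk-by-label-pool/fio-test --rw=read --bs=1M --size=32G

    If (1) reports far more than 270 MB/s and (2) doesn't, the pool is the limit.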


  • Explain what? That is how fast spinning hard drives are. Definitely not 200-250 MB/s unless they are 15k RPM SAS drives. Where did you get your rates from??


    Maybe 150 in average daily use, but patman was talking about maximum speed. My single IronWolf HDD achieves max transfer rates of 200-250 MB/s in daily use, even more in certain situations.
    (https://www.tomshardware.com/r…-pro-12tb-hdd,5443-3.html)

    • Official post

    Maybe 150 in average daily use, but patman was talking about maximum speed. My single IronWolf HDD achieves max transfer rates of 200-250 MB/s in daily use, even more in certain situations.

    And I was talking about normal drives, not the latest 7200 RPM large-cache NAS drives. Even my WD Red Pros don't break 200. And he didn't specify what drives were being used, so I used an average of about 150, which is what I would say is the average of every drive I own (I have a pile of over 30 drives...).


    Either way, a RAIDZ2 array has two parity drives. I wouldn't expect it to get more than the speed of two drives, which is still (even with your fast drives) WELL under 10GbE, let alone 40GbE. So my point stands that storage is the performance bottleneck.
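
    Rough numbers, using the ~150 MB/s per-disk figure from above (real ZFS throughput varies with record size, caching and how full the pool is):

    4-drive RAIDZ2 = 2 data drives + 2 parity drives
    sequential throughput ≈ 2 x ~150 MB/s ≈ 300 MB/s

    which lines up with the 270-310 MB/s being reported, and is well short of 10GbE (~1250 MB/s line rate), let alone 40GbE (~5000 MB/s).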


  • And I was talking about normal drives, not the latest 7200 RPM large-cache NAS drives. Even my WD Red Pros don't break 200. And he didn't specify what drives were being used, so I used an average of about 150, which is what I would say is the average of every drive I own (I have a pile of over 30 drives...).
    Either way, a RAIDZ2 array has two parity drives. I wouldn't expect it to get more than the speed of two drives, which is still (even with your fast drives) WELL under 10GbE, let alone 40GbE. So my point stands that storage is the performance bottleneck.


    Totally agree that storage is his bottleneck. Nevertheless, it depends on what particular drives he is using to make an evaluation of his transfer rates, especially since he talks about maximum transfer rate. That's why I found it a bit surprising that you simply say he can't expect more than 150 MB/s per disk anyway. You don't need to feel attacked at all; I only asked if you could explain the 150.

  • Totally agree that storage is his bottleneck. Nevertheless, it depends on what particular drives he is using to make an evaluation of his transfer rates, especially since he talks about maximum transfer rate. That's why I found it a bit surprising that you simply say he can't expect more than 150 MB/s per disk anyway. You don't need to feel attacked at all; I only asked if you could explain the 150.

    I'm using 14TB standard IronWolf 7200 RPM drives in a RAIDZ2 config with 4 drives in total at present, although I plan to increase to 6 or possibly 8 of these drives. Just testing at the moment.
