10GbE connection

  • Hi everyone, I have a custom-built OMV server with a 10Gb Mellanox card in it and a QNAP TS-873 with the same card, connected directly with a DAC cable, and I'm transferring 40TB worth of data from OMV to QNAP. I'm transferring back to the QNAP because I'm adding more disks to my OMV box and reconfiguring it. For some reason I'm only getting 190MB/s. Again, this is a direct connection, no switch or anything else in the mix. I'm using HBS 3 on the QNAP NAS to do the transfer, and even when I run a speed test, it shows 203MB/s tops. Does anyone have experience with this who could help me get faster speeds? I have a RAID5 on both devices.

    When I had the Mellanox card in my VM server connected to my QNAP NAS, I was seeing 10Gb speeds just fine, but for some reason, between these two, I'm not. Since I'm transferring such a large amount, the speed difference would definitely help. Overnight, it only transferred 5.54TB. Any help would be much appreciated!
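    To put that in perspective (rough math, assuming "overnight" means about 8 hours): 190MB/s is roughly 1.5Gb/s, well short of the ~1,250MB/s a 10Gb link can carry in theory, and 5.54TB over 8 hours works out to about 5,540,000MB / 28,800s ≈ 192MB/s, so the overnight total matches the transfer rate I'm seeing.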


    Thanks!

  • Either the data transfer client being used isn't the smartest

    or

    many small files are limiting performance

    or

    it could be a driver issue, though upgrading the driver seems quite difficult, as I take it from reading this

    If you have a supported OS, you might be lucky
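    To check from the command line which driver and firmware the card is actually using, something like this would do it (the interface name is just an example, substitute yours):

    ethtool -i enp130s0   # driver, driver version and firmware of the interface
    dmesg | grep -i mlx4  # kernel messages from the Mellanox ConnectX driver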

    omv 6.9.6-2 (Shaitan) on RPi CM4/4GB with 64bit Kernel 6.1.21-v8+

    2x 6TB 3.5'' HDDs (CMR) formatted with ext4 via 2port PCIe SATA card with ASM1061R chipset providing hardware supported RAID1


    omv 6.9.3-1 (Shaitan) on RPi4/4GB with 32bit Kernel 5.10.63 and WittyPi 3 V2 RTC HAT

    2x 3TB 3.5'' HDDs (CMR) formatted with ext4 in Icy Box IB-RD3662-C31 / hardware supported RAID1

    For Read/Write performance of SMB shares hosted on this hardware see forum here


  • Thanks! I'll take a look at that link shortly. Maybe it is a driver issue. I know it works fine on the QNAP side, but OMV is limited in the data it shows for the 10Gb card. It doesn't really show that it's 10Gb; it just gives a name of en50e0 or something like that.

    • Official Post

    It doesn't really show that it's 10Gb; it just gives a name of en50e0 or something like that

    In the WebUI, look in System Information -> Report; it will give you a list of interfaces, and the Interface Information sections will give you more detail on each one.
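    If you'd rather check from the CLI, something along these lines shows the same details:

    ip -br addr        # one-line summary of each interface and its addresses
    ethtool enp130s0   # speed, duplex and negotiation for one interface (substitute the name)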

    RAID is not a backup! Would you go skydiving without a parachute?


    OMV 7x amd64 running on an HP N54L Microserver

  • OK, this is what I'm seeing. I am totally confused now as to which is actually the Mellanox card. I thought I was using it, but it's showing 5 NIC connections in total in OMV under Networking, and there are only 3 built onto the motherboard plus the single-port Mellanox 10Gb card. Which is which??




    ================================================================================

    = Network interfaces

    ================================================================================

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

    valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

    valid_lft forever preferred_lft forever

    2: enp2s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether 00:25:90:c0:d4:90 brd ff:ff:ff:ff:ff:ff

    inet 10.1.244.161/16 brd 10.1.255.255 scope global dynamic enp2s0f0

    valid_lft 49661sec preferred_lft 49661sec

    3: enp2s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether 00:25:90:c0:d4:91 brd ff:ff:ff:ff:ff:ff

    inet 10.1.244.162/16 brd 10.1.255.255 scope global dynamic enp2s0f1

    valid_lft 49661sec preferred_lft 49661sec

    4: enp130s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether f4:52:14:4a:db:70 brd ff:ff:ff:ff:ff:ff

    inet 172.10.10.15/16 brd 172.10.255.255 scope global enp130s0

    valid_lft forever preferred_lft forever

    5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default

    link/ether 02:42:2d:76:83:d8 brd ff:ff:ff:ff:ff:ff

    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

    valid_lft forever preferred_lft forever

    inet6 fe80::42:2dff:fe76:83d8/64 scope link

    valid_lft forever preferred_lft forever

    7: vethb8e18e8@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default

    link/ether 8e:8e:b5:41:77:b2 brd ff:ff:ff:ff:ff:ff link-netnsid 0

    inet6 fe80::8c8e:b5ff:fe41:77b2/64 scope link

    valid_lft forever preferred_lft forever

    9: vethf0850a4@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default

    link/ether 4a:a0:56:38:6e:54 brd ff:ff:ff:ff:ff:ff link-netnsid 1

    inet6 fe80::48a0:56ff:fe38:6e54/64 scope link

    valid_lft forever preferred_lft forever

    --------------------------------------------------------------------------------

    Interface information enp2s0f1:

    ===============================

    Settings for enp2s0f1:

    Supported ports: [ TP ]

    Supported link modes: 10baseT/Half 10baseT/Full

    100baseT/Half 100baseT/Full

    1000baseT/Full

    Supported pause frame use: Symmetric

    Supports auto-negotiation: Yes

    Supported FEC modes: Not reported

    Advertised link modes: 10baseT/Half 10baseT/Full

    100baseT/Half 100baseT/Full

    1000baseT/Full

    Advertised pause frame use: Symmetric

    Advertised auto-negotiation: Yes

    Advertised FEC modes: Not reported

    Speed: 1000Mb/s

    Duplex: Full

    Port: Twisted Pair

    PHYAD: 1

    Transceiver: internal

    Auto-negotiation: on

    MDI-X: off (auto)

    Supports Wake-on: d

    Wake-on: d

    Current message level: 0x00000007 (7)

    drv probe link

    Link detected: yes

    --------------------------------------------------------------------------------

    Driver information enp2s0f1:

    ============================

    driver: igb

    version: 5.9.0-0.bpo.5-amd64

    firmware-version: 1.61, 0x8000090e

    expansion-rom-version:

    bus-info: 0000:02:00.1

    supports-statistics: yes

    supports-test: yes

    supports-eeprom-access: yes

    supports-register-dump: yes

    supports-priv-flags: yes

    --------------------------------------------------------------------------------

    Interface information docker0:

    ==============================

    Settings for docker0:

    Supported ports: [ ]

    Supported link modes: Not reported

    Supported pause frame use: No

    Supports auto-negotiation: No

    Supported FEC modes: Not reported

    Advertised link modes: Not reported

    Advertised pause frame use: No

    Advertised auto-negotiation: No

    Advertised FEC modes: Not reported

    Speed: 10000Mb/s

    Duplex: Unknown! (255)

    Port: Other

    PHYAD: 0

    Transceiver: internal

    Auto-negotiation: off

    Link detected: yes

    --------------------------------------------------------------------------------

    Driver information docker0:

    ===========================

    driver: bridge

    version: 2.3

    firmware-version: N/A

    expansion-rom-version:

    bus-info: N/A

    supports-statistics: no

    supports-test: no

    supports-eeprom-access: no

    supports-register-dump: no

    supports-priv-flags: no

    --------------------------------------------------------------------------------

    Interface information vethb8e18e8:

    ==================================

    Settings for vethb8e18e8:

    Supported ports: [ ]

    Supported link modes: Not reported

    Supported pause frame use: No

    Supports auto-negotiation: No

    Supported FEC modes: Not reported

    Advertised link modes: Not reported

    Advertised pause frame use: No

    Advertised auto-negotiation: No

    Advertised FEC modes: Not reported

    Speed: 10000Mb/s

    Duplex: Full

    Port: Twisted Pair

    PHYAD: 0

    Transceiver: internal

    Auto-negotiation: off

    MDI-X: Unknown

    Link detected: yes

    --------------------------------------------------------------------------------

    Driver information vethb8e18e8:

    ===============================

    driver: veth

    version: 1.0

    firmware-version:

    expansion-rom-version:

    bus-info:

    supports-statistics: yes

    supports-test: no

    supports-eeprom-access: no

    supports-register-dump: no

    supports-priv-flags: no

    --------------------------------------------------------------------------------

    Interface information vethf0850a4:

    ==================================

    Settings for vethf0850a4:

    Supported ports: [ ]

    Supported link modes: Not reported

    Supported pause frame use: No

    Supports auto-negotiation: No

    Supported FEC modes: Not reported

    Advertised link modes: Not reported

    Advertised pause frame use: No

    Advertised auto-negotiation: No

    Advertised FEC modes: Not reported

    Speed: 10000Mb/s

    Duplex: Full

    Port: Twisted Pair

    PHYAD: 0

    Transceiver: internal

    Auto-negotiation: off

    MDI-X: Unknown

    Link detected: yes

    --------------------------------------------------------------------------------

    Driver information vethf0850a4:

    ===============================

    driver: veth

    version: 1.0

    firmware-version:

    expansion-rom-version:

    bus-info:

    supports-statistics: yes

    supports-test: no

    supports-eeprom-access: no

    supports-register-dump: no

    supports-priv-flags: no

    --------------------------------------------------------------------------------

    Interface information enp130s0:

    ===============================

    Settings for enp130s0:

    Supported ports: [ FIBRE ]

    Supported link modes: 1000baseKX/Full

    10000baseKR/Full

    Supported pause frame use: Symmetric Receive-only

    Supports auto-negotiation: No

    Supported FEC modes: Not reported

    Advertised link modes: 1000baseKX/Full

    10000baseKR/Full

    Advertised pause frame use: Symmetric

    Advertised auto-negotiation: No

    Advertised FEC modes: Not reported

    Speed: 10000Mb/s

    Duplex: Full

    Port: Direct Attach Copper

    PHYAD: 0

    Transceiver: internal

    Auto-negotiation: off

    Supports Wake-on: d

    Wake-on: d

    Current message level: 0x00000014 (20)

    link ifdown

    Link detected: yes

    --------------------------------------------------------------------------------

    Driver information enp130s0:

    ============================

    driver: mlx4_en

    version: 4.0-0

    firmware-version: 2.33.5220

    expansion-rom-version:

    bus-info: 0000:82:00.0

    supports-statistics: yes

    supports-test: yes

    supports-eeprom-access: no

    supports-register-dump: no

    supports-priv-flags: yes

    --------------------------------------------------------------------------------

    Interface information enp2s0f0:

    ===============================

    Settings for enp2s0f0:

    Supported ports: [ TP ]

    Supported link modes: 10baseT/Half 10baseT/Full

    100baseT/Half 100baseT/Full

    1000baseT/Full

    Supported pause frame use: Symmetric

    Supports auto-negotiation: Yes

    Supported FEC modes: Not reported

    Advertised link modes: 10baseT/Half 10baseT/Full

    100baseT/Half 100baseT/Full

    1000baseT/Full

    Advertised pause frame use: Symmetric

    Advertised auto-negotiation: Yes

    Advertised FEC modes: Not reported

    Speed: 1000Mb/s

    Duplex: Full

    Port: Twisted Pair

    PHYAD: 1

    Transceiver: internal

    Auto-negotiation: on

    MDI-X: on (auto)

    Supports Wake-on: pumbg

    Wake-on: g

    Current message level: 0x00000007 (7)

    drv probe link

    Link detected: yes

    --------------------------------------------------------------------------------

    Driver information enp2s0f0:

    ============================

    driver: igb

    version: 5.9.0-0.bpo.5-amd64

    firmware-version: 1.61, 0x8000090e

    expansion-rom-version:

    bus-info: 0000:02:00.0

    supports-statistics: yes

    supports-test: yes

    supports-eeprom-access: yes

    supports-register-dump: yes

    supports-priv-flags: yes

    • Official Post

    Which is which

    You're looking at 2, 3, 4


    4 appears to be the 10Gb card, listed as enp130s0, but unlike 2 & 3 it doesn't appear to be picking up a dynamic IPv4 address; at least that's how it looks from the output.


    Are they all connected? One way to resolve it might be to use omv-firstaid from the CLI; to do that, my suggestion would be to use a monitor and keyboard connected to your OMV box, and if possible disable any NICs other than the Mellanox.
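    Another way to tell which name belongs to which physical card is to match interfaces to PCI devices; a rough sketch from the CLI (interface name is just an example):

    lspci | grep -i mellanox              # note the PCI address of the 10Gb card
    ethtool -i enp130s0 | grep bus-info   # compare against each interface's bus-info
    ls -l /sys/class/net/*/device         # or map every interface to its PCI device in one go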

  • Thanks! I did add an IP to 4 and tested again, and I'm still getting the same speeds. I assigned it 172.10.10.30 and didn't see any improvement in transfer rates.

  • 4: enp130s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether f4:52:14:4a:db:70 brd ff:ff:ff:ff:ff:ff

    inet 172.10.10.15/16 brd 172.10.255.255 scope global enp130s0

    valid_lft forever preferred_lft forever


    Yep, that one is the 10Gb card. I changed the IP to 10.10.10.10 on the QNAP and .15 on the OMV server and tested the transfer speed again. Results were the same: 202MB/s on average, nothing higher than 203MB/s.
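    For reference, one way to put a temporary test address on the 10Gb interface from the CLI (the WebUI under Network works just as well):

    ip addr add 10.10.10.15/24 dev enp130s0   # test address on its own subnet
    ip link set enp130s0 up                   # make sure the link is up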

    • Official Post

    and .15 on the OMV server and tested the transfer speed

    I don't know how you have done that, as the information you posted above is the same as in #5


    The only other option I can suggest is to install iperf on each and test the connection using that, and/or change the network cable.
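    A minimal iperf3 run would look something like this (assuming iperf3 is available on both boxes and the QNAP is on 10.10.10.10):

    iperf3 -s                          # on the QNAP, start the server side
    iperf3 -c 10.10.10.10 -P 4 -t 30   # on the OMV box, 4 parallel streams for 30 seconds

    If that reports close to 10Gb/s, the link itself is fine and the bottleneck is more likely the disks or the transfer client.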

    • Official Post

    That's how I added it and changed it to 10.10.10.15.

    In that case your only option is to try using iperf to test the connection


    EDIT: Either you have asked the exact same question on Reddit, or there is someone else with the exact same problem, the same hardware, and the same setup... the outcome of that thread was to test with iperf.

  • That was me on Reddit as well. I figured I would ask on here in case there were others who have had this issue and aren't on Reddit. But yeah, you're right, I'm going to have to try the iperf test.
