Cannot achieve gigabit speeds

  • Hi everyone!


    I have a laptop running OMV. Its Ethernet port is gigabit-capable (I checked via the CLI with ethtool; one of the advertised modes is 1000baseT/Full), and I also tried connecting through a USB-to-Ethernet adapter that is gigabit-capable, but the dashboard shows the link coming up at only 100 Mbit/s.


    Is there a way to select the speed manually? As far as I know, that's negotiated between the network adapter and my router... (a way to check and test this with ethtool is sketched below).


    Thanks in advance!


    PS: I already shared a folder and speeds are around 10 MB/s, both up and down. I am using an HDD connected via USB 3.0, so speeds above 100 MB/s should not be a problem. I checked with lsusb and the drive appears to be running at the correct USB speed.
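    For reference, a minimal sketch of checking what both ends advertise and nudging the negotiation with ethtool (assuming the interface is eth0; substitute your own interface name). Note that 1000BASE-T requires auto-negotiation, so simply switching autoneg off will not force a gigabit link:

        # show negotiated speed plus the link modes advertised by both ends
        ethtool eth0

        # if the link partner does not advertise 1000baseT/Full, suspect the cable
        # or the switch/router port rather than the laptop

        # as a test, advertise only 1000baseT/Full (bitmask 0x020) and let
        # auto-negotiation retry
        ethtool -s eth0 advertise 0x020

        # watch the kernel log for the link coming up again
        dmesg | tail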

  • tomik-po11

    Changed the thread title from "Can negociate gigabit speeds" to "Cannot achieve gigabit speeds".
  • It could be a cable issue. The cable needs to be at least Cat 5e, and even then you should swap in other cables and test to rule it out.

    OMV 5.6.26-1 (Usul); Shuttle XPC SH67H3; Intel Core i5-2390T; 8 GB DDR3-1333 RAM; 128GB SanDisk Z400s SSD (OS); Samsung 860 EVO 1TB (primary storage); WD Red 2TB (backup and archive storage).

  • Add me to the list of people stuck at 100Mbps speeds on a 1000Mbps Ethernet connection...


    OMV fully updated on Debian 32-bit. Clients on Win10/11. Both sides report a 1000 Mbps Ethernet link, but transfer speeds are locked at ~11 MB/s.
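    For what it's worth, ~11 MB/s is almost exactly the usable ceiling of a 100 Mbit/s link (100 Mbit/s is 12.5 MB/s raw, roughly 11-11.5 MB/s after Ethernet/TCP/SMB overhead), so something in the path is very likely still running at 100 Mbit/s. A minimal iperf3 sketch to separate the network from the disks and SMB, assuming iperf3 is installed on both ends and 192.168.1.50 stands in for the OMV box's address:

        # on the OMV box: start an iperf3 server
        iperf3 -s

        # on the client: run a raw TCP throughput test against it
        iperf3 -c 192.168.1.50

        # ~940 Mbit/s -> the network is fine, look at disks/SMB/CPU instead
        # ~94 Mbit/s  -> a 100 Mbit bottleneck (cable, switch port or NIC) is in the path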




  • RxBrad networking and drivers are handled by Debian. Debian has by now moved to kernel 5.10.60 or later, so 5.10.0 looks quite outdated to me. Are you sure the kernel you are using is receiving and installing patches?

    omv 6.9.6-2 (Shaitan) on RPi CM4/4GB with 64bit Kernel 6.1.21-v8+

    2x 6TB 3.5'' HDDs (CMR) formatted with ext4 via 2port PCIe SATA card with ASM1061R chipset providing hardware supported RAID1


    omv 6.9.3-1 (Shaitan) on RPi4/4GB with 32bit Kernel 5.10.63 and WittyPi 3 V2 RTC HAT

    2x 3TB 3.5'' HDDs (CMR) formatted with ext4 in Icy Box IB-RD3662-C31 / hardware supported RAID1

    For Read/Write performance of SMB shares hosted on this hardware see forum here

  • The real end-to-end read/write performance is influenced by many factors.

    While copying a large amount of data, please monitor your system with the dstat command and provide some information about your CPU, RAM, and network load.
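    For example, something like this (a rough sketch; dstat needs to be installed, and the 5 is just a sample interval in seconds):

        # show CPU, disk, network and memory usage, refreshed every 5 seconds,
        # while a large copy is running
        dstat -cdnm 5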

    And please provide some information about your end-to-end setup (the commands sketched after the list below can help gather some of it).

    E.g.

    - How do you access your NAS share? SMB?

    - What System are you copying and reading data from? Windows 10?

    - What filesystem are you using on your OMV disks?

    - What disks in general? (The exact type, as some have a limited write cache, which heavily impacts performance when transferring larger amounts of data.)

    etc.
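    If it helps, a rough sketch of commands that answer the filesystem and disk questions on the OMV side (device names like /dev/sda are placeholders; smartctl comes from the smartmontools package):

        # filesystem, size and model of every block device
        lsblk -o NAME,FSTYPE,SIZE,MODEL

        # exact drive model and firmware (replace /dev/sda with your data disk)
        smartctl -i /dev/sda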


    Why is this of interest?

    Because all of these things sit in the path your data takes when it is copied from a client to your NAS.

    E.g. (I know, this is a simplified description. Bear with me.)

    The data gets read by your Windows client from the local disk in NTFS format, then pushed through your local network adapter in chunks sized according to your network settings (MTU/frames; make sure to use jumbo frames if your whole path supports them). On your NAS the network card receives all these packets and reassembles them into the original data. That data then runs through your NAS: the SMB protocol takes its toll on performance for encoding/decoding, and finally it has to be stored on the NAS disk. But since a different filesystem is in place there (e.g. ext4, btrfs, whatever), the data runs through the CPU again before it is finally written to disk. The disk itself receives the data and puts it into its internal write cache. Depending on the size of that cache and the write speed of the disk, this can take time yet again, because if the write cache is full, the data has to wait in the system before the disk will even accept it. Only when the platters of a spinning magnetic disk finally receive the data from the cache does it actually get written.
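    As an aside, a minimal sketch of checking and temporarily raising the MTU for jumbo frames, assuming the interface is eth0 and that every device in the path (NICs, switch, router) supports 9000-byte frames; otherwise leave it at 1500:

        # show the current MTU of the interface
        ip link show eth0

        # temporarily enable jumbo frames (reverts on reboot)
        ip link set dev eth0 mtu 9000

        # verify end to end with a non-fragmenting ping from another machine
        # (192.168.1.50 is a placeholder; 8972 = 9000 minus 28 bytes of IP/ICMP headers)
        ping -M do -s 8972 192.168.1.50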


    All these steps in between can affect performance. It is not unusual to fall short of the theoretical read/write speed that looks mathematically possible when you only consider your disk metrics and the network. Always remember that your data runs through multiple stacks on both sides of the network, which can heavily affect real-world performance.


    I learned this the hard way: I built a NAS but could only write to my share from my Windows system at 14 MB/s. In the end my CPU was the bottleneck. It was a multi-core system, but only one core was used for encoding/decoding all the SMB and filesystem work, which created a bottleneck there.
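    A single saturated core is easy to miss in averaged CPU graphs. A quick sketch of how to spot it while a transfer is running (mpstat comes from the sysstat package):

        # per-core utilisation, refreshed every 2 seconds, during a copy
        mpstat -P ALL 2

        # alternatively run top and press 1 to expand the per-core view,
        # then watch for one core pinned near 100%
        top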


    So if you want to find your bottleneck, you need to look at the full stack your data runs through and channel your inner Sherlock Holmes to track it down.


    ################################################


    Edit: Sorry, it is late and I have been a bit unclear. When copying over the network using SMB, the source and target filesystems on your disks do not, of course, have a big impact. The main impact comes from the SMB protocol's load on your CPU, software RAID configurations, the cache limits of the disks used, the MTU/frame settings of your network adapters, and the hardware in between (switch and router).

    When copying locally from an ext4 disk to an NTFS disk, or vice versa, the NTFS Linux driver takes its toll on the CPU when converting the data. At least on my system (yes, the hardware is not the latest top-notch gear) it only uses one core and puts significant load there, creating a bottleneck.

    When using the network, the bottleneck is again in the CPU, but this time it is created by the SMB implementation.

    I get a satisfying speed out of my setup, but it is not the theoretical maximum.
