SMB speed over LAN?

    • Hi

      Having tried a number of NAS and home server distros, I've settled on OMV because I found it far more logical to set up than any of the others, and I've installed it on an old HP Gen6 Microserver (NL40 model, I think). I have two SATA drives: one for the OS and one for the data, so no RAID at the moment. The data drive is a WD Green and hosts the file shares.

      The server has a 1 Gbit/s LAN port and is connected by Ethernet to the router, which has a built-in gigabit switch.

      I've been testing the speeds of my SMB shares with a LAN testing app and I'm getting 221 Mbit/s write and 287 Mbit/s read. How does that sound? I realise there is more to these speeds than simply expecting the full 1 Gbit/s of the card, CPU speed etc. Should I be expecting more? If so, are there any settings to play with to accomplish this?
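      For reference, it helps to put those figures in the same units: a 1 Gbit/s link tops out at 125 MB/s in theory (closer to 110-118 MB/s in practice after TCP/IP overhead), so 221-287 Mbit/s is only around a quarter to a third of the line rate. A quick shell sketch of the conversion (divide Mbit/s by 8):

      ```shell
      # Convert Mbit/s to MB/s: divide by 8 (1 byte = 8 bits)
      for mbps in 1000 287 221; do
        awk -v m="$mbps" 'BEGIN { printf "%4d Mbit/s = %6.1f MB/s\n", m, m / 8 }'
      done
      # prints 125.0, 35.9 and 27.6 MB/s respectively
      ```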

      Thanks in advance
    • SMB has always been flaky on transfer speed compared to NFS or FTP, not just with Samba but on Windows itself. It's not even very consistent: the same file can transfer at different speeds across multiple test runs.

      To benchmark your network, first use something like iperf: it has no disk subsystem dependencies, running from memory to memory. Once you know the network is OK, test with FTP. You should get decent rates from FTP, which will give you a point of comparison for SMB. Don't expect SMB to be as fast as FTP, though.
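      A minimal iperf3 run looks like this (assuming iperf3 is installed on both ends; <server-ip> stands in for your server's address):

      ```shell
      # On the OMV server: start iperf3 in listening mode
      iperf3 -s

      # On a client machine: send to the server for 10 seconds
      # (memory to memory, so the disks are out of the picture)
      iperf3 -c <server-ip>

      # Repeat in reverse mode so the server sends and the client receives;
      # asymmetric results point at a driver or offload issue on one side
      iperf3 -c <server-ip> -R
      ```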
    • Nozza wrote:

      Should I be expecting more?
      Sure. Look at the transfer rates of this inexpensive ARM toy running OMV4: hardkernel.com/shop/odroid-hc2-home-cloud-two/

      Around 100 MB/s with Windows Explorer and Windows 10. What does your test setup look like?

      After testing with iperf, I would suggest using Helios LanTest, since it's more standardized than copying random data around in Explorer.
    • Hijacking this thread since I have a similar issue but a different setup. I have a ROCK64 with OMV installed, which I upgraded today to the latest:
      • Processor: ARMv8 Processor rev4 (v8l)
      • OMV Release: 4.1.19-1, Codename: Arrakis
      • Kernel: Linux 4.4.132-1075-rockchip-ayufan-ga83beded8524
      I get a write speed of around 50-60 MB/s to a USB3 hard drive, and when I measured with iperf3 I got only about 450 Mbit/s on a gigabit line. I ruled out:
      • the router and cable, since anything else (e.g. a laptop) on this very cable gets 950 Mbit/s in iperf3
      • the hard drives: I disconnected them and measured pure Ethernet speed with nothing connected to the ROCK64
      I am not sure which version of OMV was there before I upgraded, but I am pretty sure I was able to copy at 100 MB/s in the past, so the only thing I can think of is that the upgrade messed up something like the Ethernet drivers, I don't know. Any advice would be greatly appreciated, thanks.
    • lakyljuk wrote:

      I get a write speed of around 50-60 MB/s to a USB3 hard drive, and when I measured with iperf3 I got only about 450 Mbit/s on a gigabit line. I ruled out:

      Is the HDD able to give more?
      Many disks lose performance dramatically as the percentage of used space increases. The same goes for SSDs when writing data, the famous 60-70% mark.
      But on the other hand your iperf result is low, so the problem is probably somewhere else...
      For example, I average 100 MB/s according to iperf, but with SMB and a 90% full HDD I get only 55 MB/s. In my case, the HDD is the bottleneck.
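      To check whether the disk itself is the bottleneck, you can measure local write speed on the server and bypass the network entirely. A rough sketch with dd (the path below is just a placeholder for your data drive's mount point):

      ```shell
      # Write 1 GiB of zeros to the data disk and force it to physical media;
      # conv=fdatasync makes dd include the final flush in its timing
      dd if=/dev/zero of=/srv/data/ddtest bs=1M count=1024 conv=fdatasync

      # Remove the test file afterwards
      rm /srv/data/ddtest
      ```

      If the local rate dd reports is well above what you see over SMB, the disk is not the limiting factor.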
    • lakyljuk wrote:

      measured with iperf3 and I got only about 450 Mbit/s on a gigabit line
      Can you provide iperf3 output for both directions (in the mode that shows retransmits; too lazy to check whether that's with -R or without)? And do you get any strange messages in dmesg output? It would also be a good idea to provide additional output from armbianmonitor -u after conducting the new iperf3 test runs.
    • tkaiser wrote:

      Can you provide iperf3 output for both directions (in the mode that shows retransmits; too lazy to check whether that's with -R or without)? And do you get any strange messages in dmesg output? It would also be a good idea to provide additional output from armbianmonitor -u after conducting the new iperf3 test runs.
      Client: Windows 10 PC; server: ROCK64.

      iperf3 normal mode (client sends, server receives):

      Source Code

      D:\Instalačky\iperf>iperf3 -c 192.168.1.6
      Connecting to host 192.168.1.6, port 5201
      [  4] local 192.168.1.2 port 18893 connected to 192.168.1.6 port 5201
      [ ID] Interval           Transfer     Bandwidth
      [  4]   0.00-1.00   sec  59.0 MBytes   495 Mbits/sec
      [  4]   1.00-2.00   sec  54.0 MBytes   453 Mbits/sec
      [  4]   2.00-3.00   sec  54.2 MBytes   455 Mbits/sec
      [  4]   3.00-4.00   sec  53.6 MBytes   450 Mbits/sec
      [  4]   4.00-5.00   sec  54.6 MBytes   459 Mbits/sec
      [  4]   5.00-6.00   sec  55.0 MBytes   461 Mbits/sec
      [  4]   6.00-7.00   sec  54.4 MBytes   456 Mbits/sec
      [  4]   7.00-8.00   sec  54.4 MBytes   457 Mbits/sec
      [  4]   8.00-9.00   sec  54.4 MBytes   456 Mbits/sec
      [  4]   9.00-10.00  sec  55.6 MBytes   466 Mbits/sec
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [ ID] Interval           Transfer     Bandwidth
      [  4]   0.00-10.00  sec   549 MBytes   461 Mbits/sec  sender
      [  4]   0.00-10.00  sec   549 MBytes   461 Mbits/sec  receiver
      iperf Done.


      iperf3 reverse mode (client receives, server sends):


      Source Code

      D:\Instalačky\iperf>iperf3 -c 192.168.1.6 -R
      Connecting to host 192.168.1.6, port 5201
      Reverse mode, remote host 192.168.1.6 is sending
      [  4] local 192.168.1.2 port 18897 connected to 192.168.1.6 port 5201
      [ ID] Interval           Transfer     Bandwidth
      [  4]   0.00-1.00   sec  77.0 MBytes   646 Mbits/sec
      [  4]   1.00-2.00   sec  79.4 MBytes   666 Mbits/sec
      [  4]   2.00-3.00   sec  81.5 MBytes   684 Mbits/sec
      [  4]   3.00-4.00   sec  80.9 MBytes   679 Mbits/sec
      [  4]   4.00-5.00   sec  80.3 MBytes   674 Mbits/sec
      [  4]   5.00-6.00   sec  81.0 MBytes   679 Mbits/sec
      [  4]   6.00-7.00   sec  81.4 MBytes   683 Mbits/sec
      [  4]   7.00-8.00   sec  82.2 MBytes   690 Mbits/sec
      [  4]   8.00-9.00   sec  81.9 MBytes   687 Mbits/sec
      [  4]   9.00-10.00  sec  82.2 MBytes   689 Mbits/sec
      - - - - - - - - - - - - - - - - - - - - - - - - -
      [ ID] Interval           Transfer     Bandwidth       Retr
      [  4]   0.00-10.00  sec   809 MBytes   678 Mbits/sec    0   sender
      [  4]   0.00-10.00  sec   808 MBytes   678 Mbits/sec        receiver
      iperf Done.


      dmesg doesn't show anything wrong. Output from armbianmonitor is here

    • Hmm... the ayufan images do not provide that much info when calling armbianmonitor.

      Can you check RX/TX offloading with ethtool? Disable it with ethtool --offload eth0 rx off tx off, check the numbers, then re-enable with ethtool --offload eth0 rx on tx on and check again.
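      The sequence could look like this (eth0 assumed; --show-offload, also spelled -k, prints the current state):

      ```shell
      # Show the current offload settings for eth0
      ethtool --show-offload eth0

      # Disable RX/TX checksum offloading, then re-run the iperf3 tests
      ethtool --offload eth0 rx off tx off

      # Re-enable offloading and test once more to compare
      ethtool --offload eth0 rx on tx on
      ```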

      And checking cpufreq while running the test would also be interesting (maybe it's a cpufreq scaling problem and the SoC stays at 408 MHz): armbianmonitor -m
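      If armbianmonitor isn't fully functional on the ayufan image, the same thing can be watched directly through sysfs while the test runs, e.g.:

      ```shell
      # Print the current CPU frequency (in kHz) once per second while iperf3
      # is running; if it sits at 408000 under load, cpufreq scaling is the suspect
      while true; do
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
        sleep 1
      done
      ```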
    • tkaiser wrote:

      Can you check RX/TX offloading with ethtool? Disable it with ethtool --offload eth0 rx off tx off, check the numbers, then re-enable with ethtool --offload eth0 rx on tx on and check again.

      And checking cpufreq while running the test would also be interesting (maybe it's a cpufreq scaling problem and the SoC stays at 408 MHz): armbianmonitor -m
      Setting rx off, tx off made no difference. With rx on, tx on I get slightly higher upload speeds to the server, around 530 Mbit/s (was 461 before), but reverse mode shows almost the same values as before, around 670 Mbit/s. I monitored the CPU: it jumps between 408 MHz and 1296 MHz, and peak load was around 15% during the iperf3 tests. I checked, and the governor is ondemand, which I guess is the default with this build.
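      To rule scaling out entirely, one option is to temporarily pin the governor to performance and repeat the iperf3 runs (standard cpufreq sysfs paths assumed; needs root):

      ```shell
      # Pin all cores to the performance governor for the duration of the test
      for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$g"
      done

      # ... run the iperf3 tests here, then switch back to the default
      for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo ondemand > "$g"
      done
      ```

      If the numbers don't move with the governor pinned, cpufreq scaling can be crossed off the list and the Ethernet driver of the upgraded kernel becomes the more likely culprit.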