SMB speed over LAN?

  • Hi


    Having tried a number of NAS and home-server distros, I've settled on OMV on the basis that I found it far more logical to set up than any of the others, and have installed it on an old HP Gen6 MicroServer (N40L model, I think). I have two SATA drives - one for the OS and one for the data, so no RAID at the moment. The data drive is a WD Green and this hosts the file shares.


    The server has a 1 Gbps LAN port and it's connected by Ethernet to the router, which incorporates a gigabit switch.


    I've been testing the speeds of my SMB shares using a LAN testing app and I'm getting 221 Mbps write and 287 Mbps read. How does that sound? I realise there is more to these speeds than simply expecting the full 1 Gbps of the card - the CPU speed, etc. Should I be expecting more? If so, are there any settings to play with to accomplish this?
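    For rough context, line rate on gigabit Ethernet is about 125 MB/s before protocol overhead, so it helps to convert the app's Mbps figures into MB/s before judging them. A minimal shell sketch using the numbers quoted above:

```shell
#!/bin/sh
# Convert a throughput figure in Mbps to approximate MB/s
# (8 bits per byte; integer arithmetic is close enough here).
mbps_to_mbs() {
  echo $(( $1 / 8 ))
}

mbps_to_mbs 1000   # gigabit line-rate ceiling: 125 MB/s
mbps_to_mbs 221    # reported write speed: ~27 MB/s
mbps_to_mbs 287    # reported read speed: ~35 MB/s
```

    So the reported figures are roughly a quarter of what the link could carry, which is why the replies below suggest isolating the network from the disks first.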


    Thanks in advance

  • SMB has always been flaky on transfer speed compared to NFS or FTP - not just on Samba but on Windows itself. It's not even very consistent: the same file can run at different speeds when multiple tests are run.


    To benchmark your network, first use something like iperf; it has no disk-subsystem dependencies, running from memory to memory. Once you know the network is OK, test with FTP. You should get decent rates from FTP, and that will give you a point of comparison for your SMB numbers. Don't expect SMB to be as fast as FTP, though.
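    A sketch of that sequence, using iperf3 (the current iperf implementation); the hostname and file path are placeholders for your own setup:

```shell
# On the OMV server: start an iperf3 listener
# (memory-to-memory, so no disk subsystem is involved)
iperf3 -s

# On the client: run the default 10-second TCP test against the server
iperf3 -c omv-server.local

# If the network tests close to line rate, move on to a disk-backed
# protocol test, e.g. fetching a large file over FTP and discarding it:
curl -o /dev/null ftp://omv-server.local/share/bigfile.bin
```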

  • Should I be expecting more?

    Sure. Look at the transfer rates of this inexpensive ARM toy running OMV4: https://www.hardkernel.com/shop/odroid-hc2-home-cloud-two/


    Around 100 MB/s with Windows Explorer and Windows 10. What does your test setup look like?


    After testing with iperf I would suggest using Helios LanTest, since it's more standardized than copying random data around in Explorer.

  • Hijacking this thread since I have a similar issue, but a different setup. I have a ROCK64 with OMV installed, which I upgraded today to the latest:

    • Processor: ARMv8 Processor rev4 (v8l)
    • OMV Release: 4.1.19-1, Codename: Arrakis
    • Kernel: Linux 4.4.132-1075-rockchip-ayufan-ga83beded8524

    I get write speeds of around 50-60 MB/s with a USB3 hard drive, and when I measured with iperf3 I got only 450 Mbps on a gigabit line. I ruled out:

    • the router and cable, since anything else (e.g. a laptop) on this very cable gets 950 Mbps on iperf3
    • the hard drives - I disconnected them and measured only the Ethernet speed with nothing else connected to the ROCK64

    I am not sure which version of OMV was there before I upgraded, but I am pretty sure I was able to copy at 100 MB/s in the past, so the only thing I can think of is that the upgrade messed up something like the Ethernet drivers - I don't know. Any advice would be greatly appreciated, thanks.
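    If a driver regression from the upgrade is the suspicion, it can help to record what is actually in use now. A sketch (the interface name eth0 is an assumption - substitute yours):

```shell
# Kernel version currently booted
uname -r

# Driver and firmware behind the network interface
ethtool -i eth0

# Negotiated link speed and duplex
ethtool eth0 | grep -E 'Speed|Duplex'

# Any Ethernet- or link-related kernel messages
dmesg | grep -i -E 'eth0|link'
```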

  • I get write speeds of around 50-60 MB/s with a USB3 hard drive, and when I measured with iperf3 I got only 450 Mbps on a gigabit line. I ruled out:


    Is the HDD able to give more?
    Many disks dramatically lose performance as the percentage of used space increases - SSDs too when it comes to writing data, the famous 60-70%.
    But on the other hand your iperf result is low, so the problem is probably somewhere else...
    For example, I average 100 MB/s according to iperf, but with SMB and a 90%-full HDD I get only 55 MB/s. In my case, the HDD is the bottleneck.

  • measured with iperf3 I got only 450 Mbps on a gigabit line

    Can you provide iperf3 output for both directions (in the mode that shows retransmits too - too lazy to check whether that's with -R or without)? And do you get strange messages in the dmesg output? A good idea would be to provide additional output from armbianmonitor -u after conducting the new iperf3 test runs.
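    For reference, iperf3's client output includes the retransmit (Retr) column for TCP tests in either direction; the -R flag simply reverses the flow so the server sends and the client receives. A sketch (the server address is a placeholder):

```shell
# Client sends, server receives (default direction)
iperf3 -c rock64.local

# Server sends, client receives (reverse mode)
iperf3 -c rock64.local -R
```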

  • Can you provide iperf3 output for both directions (in the mode that shows retransmits too - too lazy to check whether that's with -R or without)? And do you get strange messages in the dmesg output? A good idea would be to provide additional output from armbianmonitor -u after conducting the new iperf3 test runs.

    client - Win10 PC, server ROCK64


    iperf3 normal mode (client sends, server receives):


    iperf3 reverse mode (client receives, server sends):




    dmesg doesn't show anything wrong. Output from armbianmonitor is here

  • Hmm... the ayufan images do not provide that much info when calling armbianmonitor.


    Can you check RX/TX offloading with ethtool? Try toggling it with ethtool --offload eth0 rx off tx off and then ethtool --offload eth0 rx on tx on, and compare the numbers.


    And checking cpufreq while running the test would also be interesting (maybe it's a problem with cpufreq scaling and the SoC remains at 408 MHz): armbianmonitor -m
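    Put together, the checks suggested above might look like this (eth0 is an assumption; run the iperf3 test in another terminal while monitoring):

```shell
# Show the current offload settings
ethtool --show-offload eth0

# Toggle RX/TX offloading off, retest with iperf3, then back on and retest
ethtool --offload eth0 rx off tx off
ethtool --offload eth0 rx on tx on

# Watch CPU clockspeed and load while iperf3 runs in another terminal
armbianmonitor -m
```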

  • Can you check RX/TX offloading with ethtool? Try toggling it with ethtool --offload eth0 rx off tx off and then ethtool --offload eth0 rx on tx on, and compare the numbers.


    And checking cpufreq while running the test would also be interesting (maybe it's a problem with cpufreq scaling and the SoC remains at 408 MHz): armbianmonitor -m

    Setting rx off tx off made no difference. Setting rx on tx on got me slightly higher upload speeds to the server, around 530 Mbps (was 461 before), but reverse mode shows almost the same values as before, around 670 Mbps. I monitored the CPU: it jumps between 408 MHz and 1296 MHz, and peak load was around 15% during the iperf3 tests. I checked - the governor is ondemand, which I guess is the default with this build.
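    Since the clock jumps between 408 and 1296 MHz under the ondemand governor, one quick experiment is to pin the governor to performance and rerun iperf3 to rule out cpufreq scaling. A sketch (run as root; sysfs paths can vary by kernel):

```shell
# Pin every core to the performance governor
for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
  echo performance > "$g"
done

# Confirm the current frequency now stays at the maximum
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
```

    If throughput improves with the governor pinned, the scaling behaviour rather than the driver is the likely culprit.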
