how to find bottlenecks and increase write/read speeds

    • I just put together my first OMV setup, and on my very first test I'm seeing very slow write and read speeds:
      *e.g.* during a big file copy (2 GB), my Linux computer, connected through SMB, showed ~500 KB/s write and ~600 KB/s read.

      I don't have much knowledge in all this stuff so I would like to know what you think could be a bottleneck and what I need to improve first.

      OMV is installed on an HP t5740 thin client, which has 1 GB of RAM and an Intel Atom N280 1.66 GHz CPU.

      I wired a 160 GB SATA hard disk (7200 rpm, 3 Gb/s) to the motherboard.

      In the configuration I set up SMB and a shared folder.

      My omv machine is wired through ethernet to the router and my Linux computer is connected wirelessly to this same router.

      Let me know what you think I should test. Thanks! :thumbsup:
      As a general approach, first check the health of your OMV device - personally, I use htop, iftop, iotop and nmon (you'll probably need to install them with apt-get install ...)

      Then, while you are doing the file transfers, watch how the system is coping... is the CPU maxing out, or the memory, etc.?

      That will at least give you an idea of where to start looking.
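      If you'd rather take a quick non-interactive reading, here is a rough sketch (assuming a standard Linux /proc filesystem) that snapshots the load average and available memory a few times while a transfer is running:

```shell
# Take 5 load/memory snapshots, 2 seconds apart, while a transfer runs.
# /proc/loadavg and /proc/meminfo are standard Linux interfaces.
for i in 1 2 3 4 5; do
    head -1 /proc/loadavg
    grep MemAvailable /proc/meminfo
    sleep 2
done
```

      If the load stays near zero and MemAvailable barely moves during a slow transfer, the bottleneck is probably not CPU or RAM.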

      If you want to test the raw network performance, you can use iperf. That will give you the raw achievable throughput between your Linux computer and OMV, so you can compare the numbers and see whether it is SMB that is slowing down the connection (you could test NFS shares too...)

      For iperf, run it in server mode on OMV, ie something like:

      Source Code

      iperf -s -p 12345

      Then, on your Linux computer (you can also do this on Windows with cygwin), run iperf in client mode:

      Source Code

      iperf -c <name/IP of OMV> -p 12345 -n 100M

      That will transfer 100MB over port 12345.
      You should then see something like:

      Source Code

      [ ID] Interval       Transfer     Bandwidth
      [  3] 0.0-48.6 sec   100 MBytes   17.3 Mbits/sec
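      As a sanity check on output like this: iperf counts MBytes in binary (100 x 2^20 bytes, about 838.9 megabits) but Mbits/sec in decimal, so at 17.3 Mbit/s the transfer should take about 838.9 / 17.3 = 48.5 seconds, which lines up with the 48.6 sec interval reported. The same arithmetic as an awk one-liner:

```shell
# iperf reports MBytes in binary (2^20 bytes) and Mbits/sec in decimal (10^6)
awk 'BEGIN { bytes = 100 * 2^20; mbps = 17.3; printf "%.1f sec\n", (bytes * 8) / (mbps * 1e6) }'
# -> 48.5 sec
```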

      Note that iperf only tests the network and won't touch the disks at all, so you should also use iftop to "see" what's happening on the interface.
      If you use iotop to monitor the disks, press "o" to show only the processes that are actually accessing the disk.
      For nmon, press "d" to see which disk(s) are being used and "l" for the CPU graph.

      Again, personally, I would run these inside screen or tmux, so that you can see multiple windows at the same time :)
    • In addition to iperf for testing the network, you can use dd to test the hard disk speed.

      dd if=/dev/zero of=/directory/file bs=64M count=16

      This will write zeroes to the file you name in "of".
      "bs" sets the size of the data blocks; 64M is a fine number for a speed test.
      "count" sets how many blocks will be written.
      I recommend running iotop at the same time to watch the speed in real time.
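      One caveat: without forcing a flush, dd can end up measuring the Linux write cache rather than the disk. A variant that writes a modest 64 MB and flushes before reporting (the /tmp path is just an example - point "of" at a file on the disk you actually want to test):

```shell
# conv=fdatasync makes dd flush the data to disk before reporting,
# so the speed reflects the disk itself and not the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
```

      Remember to delete the test file afterwards (rm /tmp/ddtest).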

      Take care when using dd: depending on the parameters you pass, it can also destroy all the data on any device in your system.
    • Thanks!

      I already did the iperf thing.

      Here is the result

      Source Code

      [ ID] Interval        Transfer     Bandwidth
      [  3] 0.0-191.7 sec   100 MBytes   4.38 Mbits/sec

      I'm investigating all the rest right now.

      I have done the same iperf test with htop open:
      RAM and CPU don't rise at all during the test:
      <7% for RAM
      <10% for CPU

      NMON d+l
      CPU is not rising; I see about 10% in the graph.
      DISK: I sometimes see writes on sda1 of about 10 to 25 KB, then 0, then ~10 KB again,
      but that continues after the iperf test is done.

      IOTOP o
      I don't see much there during iperf; some lines appear and disappear.
      When downloading a file from the SATA disk (sdb) I saw a line showing about 700-800 KB/s being read by SMB,
      but when I try that again I don't see anything anymore.

      IFTOP is hard to understand.


    • There are plenty of threads about samba optimization on this forum. Try some of the optimizations from those.
    • extnction wrote:

      From all this I guess we can rule out cpu and RAM.

      :) yep

      4.38 Mbits/sec seems very slow to me.

      As a comparison my result above (17.3 Mbits/sec) was using my laptop to my OMV box over my wifi...

      Since iperf isn't affected by Samba at all, I'd say that you have a basic network issue somewhere... the biggest suspect is the wifi.

      Just to rule out anything else, you could boot the NAS from another Linux distro (e.g. from a USB stick) and run iperf again... that would take OMV out of the equation. Similarly, if you can repeat the test with your Linux computer (a laptop?) wired directly to the router, that would be a good comparison.

      Check whether your wifi is overlapping with someone else's too... perhaps someone else is on the same channel, and moving your wifi to another channel might help.
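      To see how crowded each channel is, you can list the nearby networks (e.g. with "nmcli dev wifi list" on a NetworkManager system, or "iwlist scan") and tally how many sit on each channel. A sketch of the tallying step, with made-up channel numbers standing in for the channel column of a real scan:

```shell
# The printf stands in for the channel column of a real wifi scan;
# uniq -c counts the networks per channel, sorted busiest first.
printf '1\n1\n6\n6\n6\n11\n' | sort | uniq -c | sort -rn
```

      Here that would show three networks on channel 6, so channels 1 or 11 would be less crowded choices.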