Slow transfer speed. Where's the bottleneck?

  • I have OMV installed on an HP MicroServer N40L, but I haven't been able to obtain good transfer speeds, so I'm trying to find the bottleneck.


    I'm getting around 30-40 Mbps on transfers over my local network (everything is Gigabit, read on) on Samba, NFS, and FTP shares alike.


    I have been playing around and tweaking for a while, following the advice I could find on forums, but haven't been able to improve the speed.


    First off, I have two Hitachi 1 TB drives in software RAID1. These are working at a reasonable speed:


    Network cards, modem, cables: everything is OK at Gigabit speed, which I can achieve between my computer and the OMV server with iperf:


    Code
    root@openmediavault:~# iperf -s
    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------
    [  4] local 192.168.1.67 port 5001 connected with 192.168.1.40 port 58598
    [ ID] Interval       Transfer     Bandwidth
    [  4]  0.0-10.1 sec    132 MBytes    110 Mbits/sec
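A quick sanity check on that figure (just arithmetic on the iperf number above, not from the original post): 110 Mbit/s puts a hard ceiling of roughly 13 MB/s on any file transfer over this link, so Samba/NFS tuning can't help until the link itself is fixed.

```shell
# Convert the measured link rate (megabits) to a file-transfer ceiling
# (megabytes): 110 / 8 = 13.75, which integer shell arithmetic rounds
# down to 13 MB/s -- before any protocol overhead.
echo $((110 / 8))
```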


    On the server side, the CPU peaks at 30-50% during transfers and RAM at about 5-10%, so these shouldn't be the problem.


    I have tried to tweak the configurations of the various transfer protocols (especially Samba, but also NFS) to obtain better speeds, but I haven't succeeded.


    Does anybody have a clue of where the bottleneck might be?


    Has anyone gotten close to 100 Mbps? If so, which protocol did you use for the shares?


    Thanks!


    P.S.: The /dev/vdb and /dev/vdc entries are there because I am currently running OMV in a Proxmox virtual machine, but I was getting the same speeds in the tests below when I had OMV installed directly on the server. I actually tried the VM to get better performance, but the speeds look the same at every step of the tests I did with both setups.

  • A 400 MB disk read is not really representative; give me the results of the following commands:


    Code
    cd /media/UUIDofyourDATADRIVE/
    dd if=/dev/zero of=tempfile bs=1M count=30720
    dd if=tempfile of=/dev/zero bs=1M count=30720
    rm tempfile
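One caveat with this kind of test (a sketch, not part of the original advice): the read-back can be served from the page cache rather than the disk, and the write figure can be inflated the same way. GNU dd's `conv=fdatasync` makes it report the rate only after the data has actually hit the disk; demonstrated here on a small temp file.

```shell
# conv=fdatasync forces a flush before dd reports its rate, so the write
# number reflects the disk, not the page cache. Before the read-back pass,
# dropping caches (as root) avoids reading from RAM:
#   sync; echo 3 > /proc/sys/vm/drop_caches
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=8 conv=fdatasync 2>/dev/null
stat -c %s "$f"   # 8 MiB written: 8388608 bytes
rm "$f"
```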


    Yes, I get over 100 MB per second via SMB, but I have an i5 and a hardware controller for my RAID5.


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Hi David, and thanks for your reply.


    I have 28 GB of free space, so I ran your test with 20 GB instead of 30:


    Code
    root@openmediavault:/media/d7bf3c48-664c-4f78-b087-88a4f570ac73# dd if=/dev/zero of=tempfile bs=1M count=20480
    20480+0 records in
    20480+0 records out
    21474836480 bytes (21 GB) copied, 239.674 s, 89.6 MB/s
    
    
    root@openmediavault:/media/d7bf3c48-664c-4f78-b087-88a4f570ac73# dd if=tempfile of=/dev/zero bs=1M count=20480
    20480+0 records in
    20480+0 records out
    21474836480 bytes (21 GB) copied, 215.056 s, 99.9 MB/s
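As a cross-check (just arithmetic on the numbers dd printed above), the reported write rate is consistent with bytes divided by elapsed time:

```shell
# 21474836480 bytes over 239.674 s, expressed in MB/s (decimal megabytes,
# which is what dd itself reports) -- should reproduce the 89.6 figure.
awk 'BEGIN { printf "%.1f MB/s\n", 21474836480 / 239.674 / 1e6 }'
```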


    I think the speed could be lower than usual because of the size of the file relative to the free space left, plus fragmentation. With a 5 GB file I get better transfer rates.


    Code
    root@openmediavault:/media/d7bf3c48-664c-4f78-b087-88a4f570ac73# dd if=/dev/zero of=tempfile bs=1M count=5120
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB) copied, 49.9245 s, 108 MB/s
    
    
    root@openmediavault:/media/d7bf3c48-664c-4f78-b087-88a4f570ac73# dd of=/dev/zero if=tempfile bs=1M count=5120
    5120+0 records in
    5120+0 records out
    5368709120 bytes (5.4 GB) copied, 44.9963 s, 119 MB/s


    Regards,


    Arduino

    Your iperf output says 110 Mbits per second, which is not Gigabit. It's not 100 Mbits either, so it looks like you have collisions and/or retransmissions that reduce the effective bandwidth. Even on a good Gigabit link you cannot get more than 400-500 Mbits of net bandwidth because of fragmentation and protocol overhead. So check your wires and your interface error counters.
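For the counter check suggested here, something like the following works on Linux (a sketch; `eth0` is a placeholder interface name, substitute your own from `ip link`). Non-zero error or collision counters usually point at a bad cable or a duplex mismatch.

```shell
# On the real machine, read the counters and link settings with:
#   ip -s link show eth0        # RX/TX errors, dropped, collisions
#   ethtool eth0                # negotiated speed and duplex
# The decision logic, shown here with sample (healthy) values substituted:
rx_errors=0
collisions=0
if [ "$rx_errors" -gt 0 ] || [ "$collisions" -gt 0 ]; then
    echo "link problems: check cabling and duplex settings"
else
    echo "counters clean"
fi
```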

    • Quote from "fizze":

    Even on a good Gigabit link you cannot get more than 400-500Mbits net bandwidth because of fragmentation and protocol overhead.


    Really? I can saturate a gigabit link with my N40L let alone my more powerful servers.


  • fizze: you're right, I will look into what's going on with my link. Nevertheless, I would be happy to get 100 Mbps on Samba transfers, and looking at the iperf output there seems to be room to improve from the 30-40 Mbps I'm getting towards the 110 Mbps iperf reports.

  • The network could indeed be the issue, as I have found some strange network speed results.


    OMV is on 192.168.1.67
    My PC is 192.168.1.40
    Another server is 192.168.1.55


    I get poor iperf results from 192.168.1.40 connecting to both 192.168.1.67 and 192.168.1.55:


    Code
    iperf -c 192.168.1.67
    ------------------------------------------------------------
    Client connecting to 192.168.1.67, TCP port 5001
    TCP window size: 84.5 KByte (default)
    ------------------------------------------------------------
    [  3] local 192.168.1.40 port 36707 connected with 192.168.1.67 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.0 sec   145 MBytes   122 Mbits/sec


    Code
    iperf -c 192.168.1.55
    ------------------------------------------------------------
    Client connecting to 192.168.1.55, TCP port 5001
    TCP window size: 21.6 KByte (default)
    ------------------------------------------------------------
    [  3] local 192.168.1.40 port 56930 connected with 192.168.1.55 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.0 sec   265 MBytes   222 Mbits/sec


    But the results are a lot better the other way around:


    Code
    iperf -c 192.168.1.40
    ------------------------------------------------------------
    Client connecting to 192.168.1.40, TCP port 5001
    TCP window size: 16.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 192.168.1.67 port 48753 connected with 192.168.1.40 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.0 sec  1.08 GBytes    923 Mbits/sec


    Code
    iperf -c 192.168.1.40
    ------------------------------------------------------------
    Client connecting to 192.168.1.40, TCP port 5001
    TCP window size: 23.8 KByte (default)
    ------------------------------------------------------------
    [  3] local 192.168.1.55 port 48844 connected with 192.168.1.40 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-10.0 sec  1.07 GBytes   919 Mbits/sec
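That asymmetry is too large to be measurement noise (a quick ratio check below, just arithmetic on the runs above). One-way slowdowns like this classically come from a duplex mismatch or flow-control problem on the slow sender's NIC. As an aside, iperf2 can measure both directions in a single run with its tradeoff mode (`iperf -c <host> -r`), which makes this comparison easier to repeat.

```shell
# Ratio of the good direction (923 Mbit/s) to the bad one (122 Mbit/s):
awk 'BEGIN { printf "%.1f\n", 923 / 122 }'
# To repeat the measurement in both directions in one run (iperf2):
#   iperf -c 192.168.1.67 -r
```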
  • I got my hands on a Windows 7 machine and got 90-110 Mbps reading from the same Samba share and around 80 Mbps copying to it.


    So it is indeed a problem coming from my PC (Debian 7); I'll look into it and try to figure it out. OMV seems to be working great, which is reassuring, because I can stop wasting my time trying to tweak the server.


    So, as far as OMV is concerned, the issue is solved. Thanks for your advice, now I know where to look...


    @tekkbebe: my switch sees a Gigabit connection coming from my PC, but I'm going to investigate known issues related to my NIC.
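One way to double-check on the Debian side (a sketch; `eth0` is a placeholder interface, and the sample line below stands in for real `ethtool` output) is to confirm what the NIC itself negotiated, since the switch's view and the NIC's can disagree:

```shell
# On the real machine: ethtool eth0 | grep -E 'Speed|Duplex'
# Here we parse a captured sample line to show the expected healthy value;
# anything other than "1000Mb/s" (at full duplex) would match the slowdown.
sample="	Speed: 1000Mb/s"
speed=$(printf '%s\n' "$sample" | awk -F': ' '{print $2}')
echo "$speed"
```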

  • I agree with tekkbebe: if you want full network speed, it's best to use a server-grade Intel NIC. I only use Intel NICs and have a 24-port server-grade Netgear switch with Cat 6 cable, and I always saturate my full pipe. Transfers on Gigabit are pretty fast compared to 100 Megabit. I would say it's either bad wiring or your components are not server grade (i.e. NICs, switch). On software RAID I hit 250 to 270 MB per second; that's not over the network, but copying from one software RAID5 array to another.
