File transfer speed with multiple simultaneous transfers?

  • Hello everyone! I recently put together a system using OMV.


    Now I have a question regarding file transfer performance over Samba/CIFS. (I have not tried the tests below with NFS or other methods.)


    My system has two RAID arrays:
    7 x 2 TB drives in RAID 6 (5900 RPM drives)
    2 x 300 GB drives in RAID 1 (10,000 RPM drives)


    I have a quad-port gigabit network card configured to use LACP (my switch is set up for LACP as well); the rest of the network is gigabit. (A sample bond configuration is sketched after this post.)


    When I transfer large files from OMV to a single computer over the gigabit network, I get sustained transfers of 90 MB/s to 105 MB/s, which I am happy with.
    If I have two machines transferring files, one from the RAID 6 array and one from the RAID 1 array, they both transfer at 90 MB/s or more, so the bonded LACP interface is working and I get the higher aggregate bandwidth I should with four bonded interfaces.


    However, when I transfer files to both machines from the same RAID 6 array, performance drops significantly. It is as if I were transferring over a single gigabit connection rather than the bonded LACP interface: the combined transfer rate is around 120 MB/s, roughly the theoretical limit of a single gigabit link. If I add a third machine, the combined transfer rate stays the same, again as if it were limited to a single gigabit connection.


    Has anyone else run into this? Is this an issue with the speed of the RAID 6 when two or more machines pull large amounts of data from the same array?
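    For reference, an LACP bond on a Debian-based OMV install is typically defined along these lines in /etc/network/interfaces (a minimal sketch, assuming the ifenslave package; the address and interface names below are placeholders, not this system's actual values):

    auto bond0
    iface bond0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1 eth2 eth3
        # 802.3ad = LACP; requires a matching LAG on the switch
        bond-mode 802.3ad
        bond-miimon 100
        # Default policy: flows are hashed by MAC address only
        bond-xmit-hash-policy layer2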

    • Official Post

    You might be hitting a CPU limit. What CPU are you using?
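    One quick way to check, assuming the sysstat package is installed: watch per-core usage during a transfer, since a single saturated core (for example, one smbd process or interrupt handling) can cap throughput even when the average load graph looks light.

    # Per-core CPU usage, sampled every 2 seconds (part of sysstat)
    mpstat -P ALL 2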

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I have dual Intel Xeon E5606 CPUs (quad-core, 2.13 GHz each) and 24 GB of ECC DDR3-1600 RAM.
    The RAID 6 is connected to an LSI HBA: two SAS connections that fan out into eight SATA connections. The LSI board is plugged into one of the PCIe x16 slots.


    CPU load is light when I look at the graphs.

  • I ran the following commands, which I found in another post here:


    dd conv=fdatasync if=/dev/md0 of=/tmp/test.img bs=1G count=3
    = 3.2G copied, 8.80292 s, 366 MB/s


    hdparm -Tt /dev/md0 (ran this twice)
    Timing cached reads: 10932 MB in 2.00 seconds = 5471.20 MB/sec
    Timing buffered disk reads: 1506 MB in 3.00 seconds = 501.53 MB/sec


    dd if=/dev/md0 of=/dev/null bs=1G count=3
    (3.2 GB) copied, 1.44132 s, 2.2 GB/s
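    To separate the array from the network, one could also run two reads concurrently from different regions of the device, roughly simulating two clients pulling from the same array (a sketch; the skip offset is arbitrary, and iflag=direct bypasses the page cache so the results are not served from RAM):

    # Two concurrent sequential reads from different 3 GiB regions;
    # if the combined rate falls well below the ~500 MB/s buffered
    # figure above, the bottleneck is the array, not the network.
    dd if=/dev/md0 of=/dev/null bs=1M count=3072 iflag=direct &
    dd if=/dev/md0 of=/dev/null bs=1M count=3072 skip=102400 iflag=direct &
    wait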

    • Official Post

    That is fast enough. What NIC brand? Did you try the backports 3.16 kernel (there is a button to install it in omv-extras)? Did you try different cables or a different switch?


  • It is an Intel NIC, an I340-T4.
    The switch is a TP-Link TL-SG2216 16-port gigabit smart switch.
    I have tried multiple cables and even built some new ones.


    I do not think it is the switch or the NIC, because when two clients pull data from different arrays on the OMV box they both reach transfer speeds above 100 MB/s (over 200 MB/s total), which means the LACP bond is working. It is only when two or more clients pull data off the RAID 6 array (the main data array) that the transfer rate does not exceed a combined 120 MB/s (the rate of a single interface). I am going to run some tests to see whether the same thing happens when a pair of clients pulls data off the RAID 1 array. The RAID 1 array uses two 300 GB 10K RPM drives.


    The clients all use SSDs as their data storage disks.


    I have not tried the backports kernel.

  • Maybe @bonkersGER can add some of his experience with bonding.


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

    • Official Post

    The 3.16 kernel will have a newer driver, which may be more efficient and better optimized, especially with a newer Intel NIC. Try it.


  • So I installed the 3.16 kernel - no difference.


    I tested reading data from the OMV RAID 1 array to two clients at the same time, and everything worked like it should: both clients transferred files at about 100 MB/s.


    I also tried write operations to the RAID 1 array, and they work as expected, writing at about 100 MB/s.


    Finally, I tried writing to the RAID 6 array, and the write operation worked like it should: both clients transferred data to the OMV box at about 100 MB/s.


    However, no matter which clients I use, reading from the RAID 6 to two clients at the same time is limited to a total transfer rate of 125 MB/s.


    I have tested several different clients and logged-in users, but it does not seem to make a difference. Transfers off the RAID 6 stay capped.


    All the testing so far shows that the NIC LACP bonding is working and the switch is set up properly. It must be something within my RAID 6 array in OMV, but I am not sure what to look at next or why this would be happening.


    Writing to the array works like it should with the bonded interface and multiple clients, but reading data off it does not?
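    Two array-side settings worth checking for concurrent sequential reads are the device read-ahead and md's stripe cache (a diagnostic sketch; the values shown are common starting points, not tested recommendations for this box):

    # Current read-ahead on the array, in 512-byte sectors
    blockdev --getra /dev/md0
    # Raise it, e.g. to 16384 sectors = 8 MiB, then retest the dual read
    blockdev --setra 16384 /dev/md0
    # md stripe cache (RAID 5/6 only), in pages per device;
    # it mainly helps writes, but it is cheap to inspect
    cat /sys/block/md0/md/stripe_cache_size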

  • Check whether iftop (I think that's the name of it) shows that BOTH interfaces are being used; I slightly doubt it. @bonkersGER found some more stumbling blocks like this: LACP calculates which interface to use based on the MAC address, so it can happen that even multiple clients are all served via the same interface. (A verification sketch follows at the end of this post.)


    Greetings
    David
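
    A minimal way to verify this, assuming the standard Linux bonding driver (iftop is a separate package to install; eth0 and bond0 are placeholder names):

    # Watch traffic on one physical bond member during a dual-client read;
    # repeat for each slave NIC to see whether only one carries traffic
    iftop -i eth0
    # Bond status: mode, transmit hash policy, and slave state
    cat /proc/net/bonding/bond0
    # Hash on IP and port instead of MAC only, so flows from different
    # clients can spread across slaves (on some kernels the bond must be
    # taken down first; add bond-xmit-hash-policy layer3+4 to
    # /etc/network/interfaces to persist the change)
    echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy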

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!
