SMB GURUS PLEASE HELP (Is anyone actually getting 10GB speeds on OMV?)

• So I have several machines running OMV, and I was wondering whether I am being bottlenecked by my hardware or whether I am missing something else.


On my Lenovo EMC2 machine I am running 12x 4TB spinning IronWolf (5800rpm) drives in a RAID6 config and getting around 200MB/s-450MB/s, with an average of 250MB/s once it settles down.

That is roughly what I expected based on a couple of online RAID calculators: a little slower than hoped, but still within tolerance considering these aren't IronWolf Pro disks and the hardware is a bit dated.


On my Wiwynn SV315 SFF10 I am running 10x 2TB Lexar 2.5" SATA SSDs in a RAID6 config and getting around 300MB/s-600MB/s, with an average of 350MB/s once it settles down.


    I was quite surprised that the numbers were so low for the SSDs.


I have read that 8 of those drives sit on two different SFF8087 3GB/s backplanes and 2 of the drives are on a 6GB/s backplane (i.e., attached via SATA to the motherboard). Does the 3GB/s apply to each disk or to each set of 4 disks? Does anyone know? If it is per set, that would effectively bring me down to about 750MB/s per disk, which is still well above the SSDs' capability of around 500MB/s read/write. There is also the possibility that the backplane only supports 3GB/s total for all 8 disks, which would bring the speeds down to around 375MB/s; that is on par with the results I am getting, but then it makes me wonder how I would see spikes of 600MB/s at all.

So I am assuming I am getting 3GB/s through each SFF8087 cable, shared by 4 drives, and therefore about 750MB/s per disk (if the disks could perform that high).
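Rather than keep guessing, I plan to check what link speed each drive actually negotiated; a sketch (the device name is just an example, and the exact output may differ if the drives sit behind a SAS HBA):

Code
# prints e.g. "SATA Version is:  SATA 3.2, 6.0 Gb/s (current: 3.0 Gb/s)"
smartctl -i /dev/sda | grep -i 'sata version'
# the kernel also logs the negotiated speed for each port at boot
dmesg | grep -i 'sata link up'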


So, all that being said, I am running Cat6 1ft patch cables on my switch, Cat6 3ft cables from the front to the back of my server rack, and lastly flat Cat6 5ft extensions from the back of the rack to each machine. Could the flat Cat6 cables be the issue, since they aren't standard-construction cables? They say Cat6, they are just flat.
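Instead of guessing about the flat cables, I figure I can just ask the NIC what it actually negotiated; a sketch (the interface name is only an example):

Code
# "Speed: 10000Mb/s" means the link itself came up at 10Gb;
# a bad cable usually shows up as 1000Mb/s, a dropped link, or climbing error counters
ethtool enp3s0 | grep -E 'Speed|Duplex|Link detected'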


Other than that, I have 10Gb PCIe cards in my machines and 10Gb switches.

And I am transferring from an internal NVMe that reads at around 1.5GB/s-3GB/s, so that should not be the bottleneck.

And also, I am mostly transferring large files, MP4s to be exact, in the 1GB-5GB range, in folders of 10-20 files each, sometimes many more.

So folder sizes of 10GB-300GB; just files and folders full of MP4s.


I just expected to get close to 10Gb speeds (1250MB/s), or at least something like 800MB/s sustained, based on the online RAID calculators saying I should get close to 1500MB/s with a 10-disk RAID6 of SSDs.
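Before blaming the arrays or SMB any further, I also want to measure the raw TCP path by itself; a sketch, assuming iperf3 is installed on both ends (the hostname is an example, and iperf3 has Windows builds for the client side):

Code
# on the OMV box
iperf3 -s
# on the Windows client: 4 parallel streams for 30 seconds
iperf3 -c omv-server -P 4 -t 30
# ~9.4 Gbit/s here would mean the network is fine and the bottleneck is
# disks/RAID/SMB; much lower points at the NIC, cables, or switch instead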


Any other ideas or insight into why my transfer speeds are not as high as I would like?


    Thanks All.

Chenbro NR12000 32GB Ram - Xeon E3-1230 V2 @ 3.3GHz - OMV6

    Supermicro X9DRI-LN4F+ 128GB Ram - 2x Xeon E5-2670 V2 @ 2.5GHz - OMV6

    2x Wiwynn Lyra SV315 10SFF 32GB Ram - 2x Xeon E5-2630 v2 @ 2.60GHz - OMV6

    2x Lenovo EMC PX12-400R 32GB Ram - Core i3-3220 @ 3.3GHz - OMV6


• I'm hoping an admin can weigh in here and give some advice or a procedure that will help get my speeds up.


I spent all this money on 10Gb Ethernet equipment; I would like to get my money's worth out of it. ;( :P


  • @raulfg3 & @ryecoaaron , Hey friends,


A couple of things. First, I just wanted to know whether I should configure anything on the Windows 10 side of the house, where I will be transferring to/from.


    Next,


I was just wondering whether these settings are still up to date and best practice as of July 2023.


The following, from 4 Feb 2022:


    min receivefile size = 16384

    write cache size = 524288

    getwd cache = yes

    socket options = TCP_NODELAY IPTOS_LOWDELAY

    read raw = yes

    write raw = yes


    OR


The following, from 2016:


    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    read raw = yes 

    write raw = yes 

    max xmit = 65535 

    dead time = 15
    getwd cache = yes


    AND


I also saw the following in a few places:


    min protocol = smb2
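Related to that, it is probably worth confirming which dialect the Windows 10 client actually negotiates while a transfer is running; a sketch, run on the OMV box:

Code
# recent Samba releases show a "Protocol Version" column (e.g. SMB3_11)
# for each connection; Windows 10 should be negotiating SMB3 already
smbstatus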




YET I SAW THE FOLLOWING FROM ANOTHER SMB ROOKIE LIKE ME, WITH HIS TESTS:


    -----


So I see two somewhat different sets of settings, six years apart, plus this guy's failed-sounding attempts, going by what he said in his post.

But reading through those pages, it doesn't seem like much has changed.

Would that be a fair assumption?


So I am going to give both a try together and see what happens; I will post my results.


But @raulfg3 & @ryecoaaron, or anyone else, please weigh in here; maybe we can just have an up-to-date post where the first line is the options that need to be set for best performance, period.

Instead of trying to sift through 8 years of SMB threads.

I will happily edit this post's first message to include the most up-to-date / fastest settings available at the time of writing in July 2023.

    And please don't think I am coming across as a doofus or a jerk. I just legit think there may be a better way to present the info... IDK



Here goes nothing... my attempt #1 - a combination of everything I can decipher.


    min protocol = smb2

    min receivefile size = 16384

    write cache size = 524288

    getwd cache = yes

    socket options = TCP_NODELAY IPTOS_LOWDELAYSO_RCVBUF=65536 SO_SNDBUF=65536

    read raw = yes

    write raw = yes

    max xmit = 65536 

    dead time = 15


I took what appeared to be the best options at a glance and put them all in the above config to see what would happen.


For reference, I will be reading and writing a 58GB folder of MP4 movies, all from a Windows 10 Pro machine. It will be moved back and forth from a 2TB NVMe drive that sees typical local (PC-to-PC, NVMe-to-NVMe) transfers of around 3000MB/s-3500MB/s. I'm using this method since I know the NVMe should not be a bottleneck here.
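To rule the arrays themselves in or out, I may also benchmark them locally on each OMV box, with the network out of the picture entirely; a sketch (the mount path is just an example, and the test file gets deleted afterwards):

Code
# sequential write, bypassing the page cache
dd if=/dev/zero of=/srv/dev-disk-by-label-pool/ddtest bs=1M count=16384 oflag=direct
# sequential read of the same file, again bypassing the cache
dd if=/srv/dev-disk-by-label-pool/ddtest of=/dev/null bs=1M iflag=direct
rm /srv/dev-disk-by-label-pool/ddtest
# if these numbers are no higher than the SMB numbers, the array itself
# (not Samba or the network) is the ceiling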

Results for ATTEMPT#1 - Full config I came up with

WRITE SPEED - 300-400MB/s, with much steadier speeds (it isn't jumping around nearly as badly). I did hit a 20-30 second lull of around 300MB/s, but for the most part it stayed around 360MB/s-400MB/s.

READ SPEED - 210MB/s-260MB/s, again steady. It started around 210MB/s for about 20 seconds, then jumped to 250MB/s-270MB/s, where it stayed for most of the remainder of the 58GB transfer, except it finished strong: in the last 15-25 seconds it jumped again to around 285MB/s.


Still not the speed I am looking for. However, I did forget to put a space in line 5 after IPTOS_LOWDELAY, so I will test with that space in there, and also with IPTOS_NODELAY.


    Results for ATTEMPT#2 - Fixed the Space

WRITE SPEED - I added the space, and my speed plummeted to 160MB/s, which leads me to believe that either IPTOS_LOWDELAY or SO_RCVBUF=65536 is a problem. Not sure which, but I'll try IPTOS_NODELAY next. I will say, though, that the 160MB/s was very steady (very little delta in speed during the transfer).

READ SPEED - A minor increase to a fairly steady 260MB/s-290MB/s.


    Results for ATTEMPT#3 - Change IPTOS to NODELAY (Previously LOWDELAY)

WRITE SPEED - No significant change: around 160MB/s.

READ SPEED - No significant change: a fairly steady 260MB/s-290MB/s.


So at this point I was doing better with an "error" (the missing space) in my config, which again leads me to believe the IPTOS setting could be a problem. But it could still be SO_RCVBUF as well; the Feb 2022 settings didn't even use SO_RCVBUF, which makes me wonder about that too. So, in an effort to get this right, I am going to try three more things: first, remove the SO_RCVBUF and SO_SNDBUF options; second, try the 2016 settings straight up; third, try the Feb 2022 settings straight up. Here we go.


    Results for ATTEMPT#4 - Remove SO_RCV and SO_SND codes

    Looks like:

    Code
    min protocol = smb2
    min receivefile size = 16384
    write cache size = 524288
    getwd cache = yes
    socket options = TCP_NODELAY IPTOS_NODELAY
    read raw = yes
    write raw = yes
    max xmit = 65536
    dead time = 15

WRITE SPEED - Strong start at 400MB/s off the bat, but it quickly dropped and bounced between 300MB/s and 400MB/s, finally sort of leveling off between 330MB/s and 380MB/s.

READ SPEED - Significant improvement: it started around 400MB/s and climbed to a peak of around 450MB/s-470MB/s.


Still not the speeds I am looking for. However, it shows that the settings are taking effect and changing things quite drastically in some cases.

For now I think the socket options may be exhausted. Maybe. ;)
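One thing I've picked up while reading around (treat this as my understanding, not gospel): hard-coding SO_RCVBUF/SO_SNDBUF in smb.conf pins the socket buffers at that fixed size and disables the kernel's TCP autotuning, which is usually counterproductive on a 10Gb link. If buffer tuning is needed at all, it probably belongs at the kernel level instead; a sketch with commonly suggested starting values:

Code
# /etc/sysctl.d/99-10gbe.conf  (hypothetical file name)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
# apply with: sysctl --system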


Results for ATTEMPT#5 - 2016 settings straight up (I have low confidence in this one due to its age, but I want to give it a fair shot.)

    Looks like:

    Code
    socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
    read raw = yes 
    write raw = yes 
    max xmit = 65535 
    dead time = 15
    getwd cache = yes

WRITE SPEED - Started around 350MB/s and then bounced between 300MB/s and 400MB/s, but for the most part it stayed in the high 300s and even hit 425MB/s for a split second.

READ SPEED - Started around 350MB/s but quickly climbed to the mid 400s and finally settled around 500MB/s-530MB/s for the last 20GB or so.


My interpretation is that the read speed went up a little, but the write speed just cannot seem to break the 400MB/s barrier.

Hoping the 2022 settings will be the way forward.


Results for ATTEMPT#6 - 2022 settings straight up

    Looks like:

    Code
    min receivefile size = 16384
    write cache size = 524288
    getwd cache = yes
    socket options = TCP_NODELAY IPTOS_LOWDELAY
    read raw = yes
    write raw = yes

WRITE SPEED - Off the bat it went to 300MB/s and then up to 475MB/s pretty quickly, briefly hitting 525MB/s, but unfortunately it dropped back and hovered in the 360MB/s-380MB/s range, leveling out there for the rest of the transfer. So no major improvement, yet PROOF that the system can hit 525MB/s.

READ SPEED - Off the bat 300MB/s, climbing rapidly to 440MB/s, then slowly climbing and plateauing in the 470MB/s-500MB/s range.


At this point I DON'T KNOW WHAT ELSE TO DO. I have tried the recommendations and even tried to tweak them a little, but I am nowhere near 10Gb Ethernet speeds.










    SMB GURUS PLEASE REACH OUT!

    ;( ;( ;( ;( ;( ;( ;( ;(










    revise : problem #5

    AND Thanks buddy.


    Yep. I might even try min protocol = smb2

    AND Thanks to you also.



• MultiUser changed the title of the thread from “Speeds are slower than expected...” to “SMB GURUS PLEASE HELP (Is anyone actually getting 10GB speeds on OMV?)”.
• I have tinkered with and tested so many SMB configurations that I don't know what to do with myself.

Is there another way to get 10GB speed on OMV? Surely, 10GB is possible... right? :?:


    • Official Post

    Is there another way to get 10GB speed on OMV? Surely, 10GB is possible... right?

    I have no problem hitting 10GB speed on OMV with the default tuning parameters. I don't think there is anything OMV can do here. You added tuning parameters for samba but you might need to try newer/different firmware for the 10GB adapter. You might need to try different kernels (I use the proxmox kernel). I would try different cables too.
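To see which driver and firmware the adapter is currently running, something along these lines (the interface name is only an example):

Code
# driver name, driver version, and firmware-version of the 10Gb NIC
ethtool -i enp3s0
# error/drop counters; a climbing error count usually points at cables or modules
ethtool -S enp3s0 | grep -iE 'err|drop' | head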

    omv 7.4.14-1 sandworm | 64 bit | 6.11 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.15 | compose 7.2.16 | k8s 7.3.1-1 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.0.9


    omv-extras.org plugins source code and issue tracker - github - changelogs



  • (I use the proxmox kernel).

How do you do that?


  • omv6:omv6_plugins:kernel [omv-extras.org]

    Could you give a brief explanation of the benefits of doing the proxmox kernel?


    • Official Post

    Could you give a brief explanation of the benefits of doing the proxmox kernel?

It's an Ubuntu-based kernel. The advantage is that it is more modern: it has drivers that older kernels do not have, which could solve your problem.

The disadvantage is the same thing: being a more modern kernel, it may still contain some bugs.
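After installing it through the omv-extras kernel plugin and rebooting, you can confirm which kernel is actually running; a sketch (package naming varies between Proxmox releases):

Code
uname -r                      # should report a *-pve kernel version
dpkg -l | grep -i pve-kernel  # lists the installed Proxmox kernel packages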
