Very fast writes, but slow reads over SMB on a 2.5 GbE connection

  • Hi all,


    I've been running OMV6 on a PC over a 1 GbE link successfully for a while, but both my desktop PC and my NAS PC are capable of 2.5 GbE, so I decided to upgrade the network.


    Today I installed a new 2.5 GbE switch so I can get the full speed, but I've run into an unexpected issue:


    - Writing a large 6 GB file to OMV from a Win 10 PC: I get the full 2.5 Gbit/s

    - Reading a large 6 GB file from OMV to the same PC: the speed tops out at 1 Gbit/s


    1. The OMV hard disk is capable of sequential reads/writes of about 250 MB/s, so I'm confident the drive is not the issue (tested; a quick way to re-check this locally is sketched after this list)
    2. I tried different Ethernet cables
    3. My OMV PC is running the latest updates
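
    For reference, a quick way to measure raw sequential read speed on the NAS itself, bypassing the page cache (a sketch; /dev/sda is assumed to be the data disk, adjust to your device):

    Code
    # timed uncached read test (run as root)
    hdparm -t /dev/sda
    # or read 1 GiB straight off the device with the page cache bypassed
    dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct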

    I found it very strange that the reads are capped at exactly 1 Gbit/s while the writes reach full speed. Normally you would expect the opposite to happen... :rolleyes:


    This is so confusing. Any suggestions on what to check?

    Minisforum TH50 - CPU: i5-11320H | RAM: 16GB | SSD: 512GB | ETH: 2.5GBE

  • So, an update on this after running some tests:


    Info:

    ====

    NAS Boot SSD: 512 GB Kingston

    NAS Storage HDD: Seagate Expansion Desktop - External Hard Drive 14 TB 3.5 inches USB 3.0 Model Number: STKP14000402

    Using EXT4 on both drives


    Test

    ====

    - Midnight Commander: copying a large 17 GB file from the HDD to the internal SSD runs at around 190 MB/s

    - Reading the same 17 GB file with the HDD attached directly to the Windows PC: 235 MB/s

    - Reading the same 17 GB file over SMB (from the NAS to the Windows PC): around 135 MB/s

    - Writing the same file from the PC to the NAS is very fast at around 235 MB/s - amazing!
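
    To separate the network link from the disks, a raw throughput test is also useful; for example with iperf3 (not one of the runs above; nas.local is a placeholder for the NAS address):

    Code
    # on the NAS (server side)
    iperf3 -s
    # on the client PC: client sends, which mirrors an SMB write
    iperf3 -c nas.local
    # reverse mode: the NAS sends, which mirrors an SMB read
    iperf3 -c nas.local -R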


    Conclusion

    ==========

    Windows: HDD is fast at around 235 MB/s read speed

    Linux: HDD is 20% slower for local reads compared to Windows

    Linux: HDD is 40% slower for SMB reads compared to Windows

    Linux: HDD writes are always at maximum speed - amazing!


    I don't understand why the reads are slow but the writes are OK.


    • Official post

    What kernel?

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    It is possible the driver for the 2.5 GbE network interface is not fully optimized on Linux. A lot of manufacturers don't write Linux drivers.
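
    Checking which driver the interface is actually using is straightforward; eth0 below is a placeholder for your interface name:

    Code
    # driver name, version and firmware for the NIC
    ethtool -i eth0
    # negotiated link speed (should report 2500Mb/s on a working 2.5 GbE link)
    ethtool eth0 | grep -i speed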


  • The thing is, when I put files on the onboard SSD, they transfer fast over Ethernet, so I don't suspect the drivers.


    And when the data is already cached in RAM (i.e. copying the same file twice), I get full Ethernet speed; that's why this is so strange to me.
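
    For repeatable uncached tests, the page cache on the NAS can be flushed between runs via the standard kernel interface (as root):

    Code
    # flush dirty buffers to disk, then drop page cache, dentries and inodes
    sync
    echo 3 > /proc/sys/vm/drop_caches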


    I'm thinking maybe it's a weird Linux bug in the way data is cached into RAM before it becomes accessible to the application.


    I was reading this, which is very interesting; it describes how Linux first caches data before it even reaches the application: https://www.yugabyte.com/blog/…ce-tuning-memory-disk-io/


    I want to learn more about kernel read-ahead and see if there is something I can tweak (not feeling brave enough yet :P )
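
    The current per-device read-ahead can at least be inspected without changing anything (assuming sda is the data disk):

    Code
    # read-ahead in KiB; blockdev --getra reports the same value in 512-byte sectors
    cat /sys/block/sda/queue/read_ahead_kb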


    Maybe I should just leave it alone and ignore the slow speed; it's not such a big deal.


    • Official post

    I'm thinking maybe it's a weird Linux bug in the way data is cached into RAM before it becomes accessible to the application.

    Not a Linux bug. Still a driver issue for your system, just maybe not the network interface driver. My primary server has no problem saturating SSD speeds without cached data.


  • Wow, I think I may have solved it:

    I'm consistently reading at maximum speed now.



    Check the current read-ahead size (reported in 512-byte sectors); mine was 256:

    Code
    blockdev --getra /dev/sda


    Set a higher read-ahead size:

    Code
    blockdev --setra 16384 /dev/sda


    This means 16384 sectors of 512 bytes each, so 16384 × 512 B = 8 MiB of read-ahead.

    Now my reads reach the maximum speed of my drive.

    We can make the change permanent by re-running the above command at startup; one way to do that is sketched below.
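
    A sketch, assuming the disk is still enumerated as /dev/sda on a systemd-based system like OMV (the unit name readahead-sda.service is made up):

    Code
    # create a oneshot unit that re-applies the read-ahead setting at boot
    cat > /etc/systemd/system/readahead-sda.service <<'EOF'
    [Unit]
    Description=Set read-ahead for /dev/sda
    After=local-fs.target

    [Service]
    Type=oneshot
    ExecStart=/sbin/blockdev --setra 16384 /dev/sda

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload
    systemctl enable --now readahead-sda.service

    A udev rule keyed on the disk's serial number would survive device renaming, but the unit above keeps the exact command already shown.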

    I will run some more tests to confirm fully, but I'm 99% sure this was the solution.


    Edit: Confirmed solved - the read-ahead change made all the difference. I tested over 100 GB of data transfers and I'm reaching maximum sequential speed consistently now.


    Edited 2 times, last by jonnerino ()

  • jonnerino

    Added the label solved.
  • jonnerino

    Added the label OMV 6.x.
