File transfers vs. file size.

  • I have an example of two different files (movies) on the OMV. Could you please explain to me why the transfer speed is higher for the first file (700MB) and lower for the second (16GB)?


    - first file is 700MB


    - second file is 16GB


    OMV - 6.3.1-1 (Shaitan)
    Kernel - Linux 5.16.0-0.bpo.4-amd64

    CPU i5-3570
    RAM 8GB
    HDD OS - WD Blue 250GB
    HDD 2x Samsung EcoGreen F2 HD154UI 1500 GB (mirror) (max drive speed 107 MB/s)


    • Official Post

    I have an example of two different files (movies) on the OMV. Could you please explain to me why the transfer speed is higher for the first file (700MB) and lower for the second (16GB)?

    Measured in "bytes per second", large files usually transfer faster.


    However, there's a variety of potential reasons why smaller files might transfer faster: network speed and settings (100 Mb/s or 1 Gb/s?), MTU packet fragmentation at 100 Mb/s, jumbo frame size settings on a 1 Gb/s network, etc.
    (The speeds indicated suggest that you're running 1 Gb/s between source and destination.)
    Clients may be running at different speeds on a mixed network, with 100 Mb/s on one end and 1 Gb/s on the other, or one client running full duplex while the other runs half duplex. There may also be other traffic on the network competing with any given transfer (streaming data and other network-layer bottlenecks).
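    For reference, the ceiling you'd expect on a healthy 1 Gb/s link falls straight out of the arithmetic. A rough sketch (the ~5% deduction for Ethernet/IP/TCP framing overhead is my assumption, not a measured figure):

```shell
# line rate of gigabit Ethernet, converted from bits to bytes per second
line_rate=$((1000000000 / 8))            # 125,000,000 B/s
# subtract roughly 5% for Ethernet/IP/TCP framing overhead (assumed figure)
usable=$((line_rate * 95 / 100))
echo "theoretical max: $((line_rate / 1000000)) MB/s"   # 125 MB/s
echo "usable estimate: $((usable / 1000000)) MB/s"      # ~118 MB/s
```

    Sustained transfers well below ~110 MB/s on a gigabit link therefore point at the endpoints (disk, CPU, duplex mismatch) rather than the wire itself.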


    The source and destination (server/client) are also factors. If either end of the transfer is busy with other processing, the speed of a large, long transfer may be impacted.


    Bottom line: there are a lot of possibilities.

  • 700MB fits nicely in the FS buffer (DRAM), but once you exceed a certain size, smbd has to write the data out to disk, and that is the bottleneck.
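    You can see the page-cache effect for yourself with a quick dd comparison (a minimal sketch; the 64 MB size and /tmp paths are arbitrary choices I made for illustration, and for a real test the file would need to exceed your RAM):

```shell
# cached write: dd returns as soon as the data is in the page cache,
# so the reported speed reflects DRAM, not the disk
dd if=/dev/zero of=/tmp/cached.bin bs=1M count=64 2>&1 | tail -n1

# flushed write: conv=fdatasync makes dd wait until the data has
# actually reached the disk before reporting, so the number is honest
dd if=/dev/zero of=/tmp/flushed.bin bs=1M count=64 conv=fdatasync 2>&1 | tail -n1
```

    The second number is the one that matters for a 16GB movie: once the cache is exhausted, every transfer runs at flushed-write speed.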


    Your disks are old and slow, and I hope you know that sequential performance drops even further as the disks fill up. With HDDs from the last decade, sequential read/write performance always looks like this (the outer tracks are almost twice as fast, and then it gets slower and slower and slower):



    Unfortunately, most people think hdparm is a benchmark, but it's not (it always shows you the maximum performance, which you only get when your HDD is filled to no more than 10% of its capacity).

  • Bottom line: there are a lot of possibilities.

    Easy to test. On the OMV machine, do a 'sudo apt install htop iozone3 iperf3'. Then run htop in one shell while Explorer copies, to verify you're not bottlenecked by CPU. Next, check network performance with iperf3. Finally, confirm that storage is the bottleneck: not by relying on moronic tools like hdparm, but by doing a 'cd' to your share and letting 'iozone -a -g 8000m -s 8000m -i 0 -i 1 -r 1024K' confirm the sequential transfer speed the HDD can actually deliver, given how much disk capacity is already used (the 8000m is 'twice the DRAM', just in case).
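    The sequence above can be written out as one small script (a sketch: by default it only prints each step, so nothing is installed or run until you set DRY_RUN=0, and the share path is a placeholder you'd replace with your own mount point):

```shell
#!/bin/sh
# Bottleneck hunt in three steps, per the post above.
# DRY_RUN=1 (the default) only prints the commands; set DRY_RUN=0 to run them.
DRY_RUN="${DRY_RUN:-1}"
SHARE="${1:-/tmp}"     # placeholder: pass your SMB share's mount point as $1

run() {
    echo "+ $*"
    [ "$DRY_RUN" = "1" ] || "$@"
}

# 1. install the tools
run sudo apt install -y htop iozone3 iperf3
# 2. run 'htop' in another shell during the copy to rule out a CPU bottleneck
# 3. measure raw network throughput (server side; the client runs 'iperf3 -c <server-ip>')
run iperf3 -s -1
# 4. measure real sequential disk speed at the current fill level;
#    8000m is 'twice the DRAM' so the page cache can't inflate the numbers
cd "$SHARE" || exit 1
run iozone -a -g 8000m -s 8000m -i 0 -i 1 -r 1024K
```

    The iozone run writes and reads an 8GB file on the share itself, so it measures the disks where they're actually filled, not at the fast outer tracks hdparm happens to hit.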

    • Official Post

    Easy to test. On the OMV machine, do a 'sudo apt install htop iozone3 iperf3'. Then run htop in one shell while Explorer copies, to verify you're not bottlenecked by CPU. Next, check network performance with iperf3. Finally, confirm that storage is the bottleneck: not by relying on moronic tools like hdparm, but by doing a 'cd' to your share and letting 'iozone -a -g 8000m -s 8000m -i 0 -i 1 -r 1024K' confirm the sequential transfer speed the HDD can actually deliver, given how much disk capacity is already used (the 8000m is 'twice the DRAM', just in case).

    The speed difference between large and small file transfers in this instance is not enormous: around 25%. Also, there are unknown variables in the physical network in question.
    Noting that the source of a bottleneck could be on either end, or in the middle, I'm all for testing client/server HD performance (and it seems that you know exactly how to do that).
    But when it comes to transfer performance, client/server interface settings and the other particulars of the physical layer can't be ignored.

  • there are unknown variables in the physical network in question.

    But there's a graph:

    It starts at ~110 MB/s and drops down to HDD speed (minus 2-3 MB/s) after ~2 GB. Pretty obvious, IMO, where to start to test/confirm (and that's my main point: don't use crappy tools like hdparm, since they report irrelevant numbers; use a good tool that shows you actual disk performance, not the theoretical maximum).

    • Official Post

    But there's a graph:


    This is classic.


    Having worked as a network engineer (now retired) in a large datacenter, I've had conversations with the server jocks along these lines countless times before. The never-ending quest was always centered on performance, which distilled down to "transactional" speed in the upper levels of the OSI model (their end) and "throughput" of the pipes at the lower levels (my end).


    What we all learned together was: we don't know what we don't know. Therefore, testing must encompass and control the largest set of variables possible. (Of course, the real world has a way of trashing results from test environments, where still more variables are introduced, many of them "intermittent".)
    On average, and in the bottom line, neither side was completely right or completely wrong. It all works together.


    With that said, you may be right.


    In any case, it's apparent that you have real Linux command line expertise and testing "know-how". Accordingly, allow me to salute your willingness to step up and help out a forum user.
    ________________________________________________


    (Oh, and thanks for the memories. Now, to push those memories out of mind again...)

  • The second one is a porno, so it is being watched by the internet audience during the transfer, and is slower 8o


    SCNR

    --
    Get a Rose Tattoo...


    HP t5740 with Expansion and USB3, Inateck Case w/ 3TB WD-Green
    OMV 5.5.23-1 Usul i386|4.19.0-9-686-pae

    • Official Post

    The second one is a porno, so it is being watched by the internet audience during the transfer, and is slower 8o


    SCNR

    That's what I'd call a WAG, but I like it. ;)
    In any case, there's nothing going on here that can't be resolved with ->


    In fact, after a few (several?), I've found that minor network issues resolve themselves... for a while. (Maybe, overnight.)
