File transfers vs. file size



    • I have an example of two different files (movies) on the OMV. Could you please explain why the transfer speed is higher for the first file (700 MB) and lower for the second (16 GB)?

      - first file is 700 MB

      - second file is 16 GB

      OMV 2.2.x
      CPU: Pentium G620 @ 2.60 GHz
      RAM: 4 GB
      OS HDD: WD Blue 250 GB
      Data HDDs: 2x Samsung EcoGreen F2 HD154UI 1.5 TB (mirror, max drive speed 107 MB/s)


    • Stylishh wrote:

      I have an example of two different files (movies) on the OMV. Could you please explain why the transfer speed is higher for the first file (700 MB) and lower for the second (16 GB)?
      Measured in bytes per second, large files usually transfer faster.

      However, there are a variety of potential reasons why smaller files might transfer faster: network speed and settings (100 Mbit/s or 1 Gbit/s?), MTU packet fragmentation at 100 Mbit/s, jumbo frame settings on a gigabit network, etc.
      (The speeds indicated suggest that you're running 1 Gbit/s between source and destination.)
      Clients may be running at different speeds on a mixed network, with 100 Mbit/s on one end and 1 Gbit/s on the other, or with one client at full duplex and the other at half duplex. There's also other potential traffic on the network that may be competing with any given transfer (streaming data and other network-layer bottlenecks).
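
      Something like the following would rule the link parameters in or out. It's just a sketch; the interface name 'eth0' is an assumption, so substitute whatever 'ip link' lists on your machines:

        # On both client and server, check MTU, negotiated speed and duplex:
        ip link show eth0       # the 'mtu' value is printed on the first line
        sudo ethtool eth0       # look at the 'Speed:' and 'Duplex:' lines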

      The source and destination (server/client) are also factors. If either end of the transfer is busy processing, the speed of a large, long transfer may be impacted.

      The bottom line: there are a lot of possibilities.

    • 700 MB fits nicely in the filesystem buffer cache (DRAM), but once you exceed a certain size smbd has to write the data out to disk, and that becomes the bottleneck.
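
      If you want to watch this happening, one rough way (on the OMV box, while a copy is running) is to keep an eye on the kernel's dirty-page counters; they climb while the cache absorbs the copy and fall as the data is flushed to disk:

        # Dirty data waiting to be written, and data currently being written back:
        watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'
        # The thresholds (as a percentage of RAM) that trigger writeback:
        sysctl vm.dirty_background_ratio vm.dirty_ratio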

      Your disks are old and slow, and I hope you know that sequential performance can drop even further as the disks fill up. With HDDs from the last decade, sequential read/write performance always looks like this (the outer tracks are almost twice as fast, and then it gets slower and slower and slower):

      [graph: sequential transfer speed falling steadily from the outer tracks to the inner tracks]

      Unfortunately, most people think hdparm is a benchmark, but it isn't, since it always shows the maximum performance, which you only get when your HDD is filled to no more than ~10% of its capacity.
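
      One way to confirm both points at once is to sample raw read speed at a few positions on the disk; hdparm only ever reads near the start (the fast outer tracks). A sketch, assuming the data disk is /dev/sdb and roughly 1.5 TB; adjust the device and offsets to match your drive:

        # Read 1 GiB directly (bypassing the cache) at the start, the middle and
        # near the end of the disk; dd prints the throughput for each run:
        for off_gib in 0 700 1350; do
            sudo dd if=/dev/sdb of=/dev/null bs=1M count=1024 skip=$((off_gib * 1024)) iflag=direct
        done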
    • flmaxey wrote:

      The bottom line: there are a lot of possibilities.
      Easy to test. On the OMV machine do a 'sudo apt install htop iozone3 iperf3', then run htop in one shell while Explorer copies, to verify you're not bottlenecked by the CPU; then check network performance with iperf3; and finally confirm that storage is the bottleneck. Don't rely on moronic tools like hdparm for that; instead, 'cd' to your share and let 'iozone -a -g 8000m -s 8000m -i 0 -i 1 -r 1024K' confirm the sequential transfer speed the HDD can manage given how much disk capacity is already used (the 8000m is 'twice the DRAM', just in case).
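
      Spelled out, the whole sequence looks roughly like this ('<omv-ip>' and the share path are placeholders):

        # 1) Install the tools on the OMV machine:
        sudo apt install htop iozone3 iperf3
        # 2) Network check: start the server on OMV, then measure from the client;
        #    close to 940 Mbits/sec indicates a healthy gigabit link:
        iperf3 -s                    # on the OMV box
        iperf3 -c <omv-ip>           # on the client
        # 3) Storage check: run iozone inside the shared folder, with a test file
        #    twice the size of RAM so the page cache can't flatter the result:
        cd /media/<uuid>/<share>     # placeholder path; use your share's mount point
        iozone -a -g 8000m -s 8000m -i 0 -i 1 -r 1024K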
    • tkaiser wrote:

      flmaxey wrote:

      The bottom line: there are a lot of possibilities.
      Easy to test. On the OMV machine do a 'sudo apt install htop iozone3 iperf3', then run htop in one shell while Explorer copies, to verify you're not bottlenecked by the CPU; then check network performance with iperf3; and finally confirm that storage is the bottleneck. Don't rely on moronic tools like hdparm for that; instead, 'cd' to your share and let 'iozone -a -g 8000m -s 8000m -i 0 -i 1 -r 1024K' confirm the sequential transfer speed the HDD can manage given how much disk capacity is already used (the 8000m is 'twice the DRAM', just in case).
      The speed difference between large and small file transfers in this instance is not enormous, around 25%. Also, there are unknown variables in the physical network in question.
      Noting that the source of a bottleneck could be on either end, or in the middle, I'm all for testing client/server HD performance (and it seems that you know exactly how to do that).
      But when it comes to transfer performance, client/server interface settings and the other particulars of the physical layer can't be ignored.
    • flmaxey wrote:

      there are unknown variables in the physical network in question.
      But there's a graph:

      It starts at ~110 MB/s and drops down to HDD speed (give or take 2-3 MB/s) after ~2 GB. Pretty obvious, IMO, where to start testing/confirming. And that's my main point: don't use crappy tools like hdparm, since they report irrelevant numbers; use a good tool that shows you actual disk performance rather than a theoretical maximum.
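
      To put rough numbers on it: if the first ~2 GB are absorbed by the page cache at ~110 MB/s and the rest goes at disk speed, the long-run average lands in between. The 85 MB/s disk figure below is an assumption, picked to match the roughly 25% difference mentioned above:

        # Back-of-the-envelope average throughput for a 16 GB copy:
        awk 'BEGIN {
            t_cache = 2048 / 110            # seconds for the ~2 GB cached portion
            t_disk  = (16384 - 2048) / 85   # seconds for the disk-bound remainder
            printf "average: %.1f MB/s\n", 16384 / (t_cache + t_disk)
        }'
        # prints ~87.5 MB/s; a 700 MB file fits in the cache entirely, so it shows ~110 MB/s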
    • tkaiser wrote:

      But there's a graph:



      This is classic.

      Having worked as a network engineer (now retired) in a large datacenter, I've had conversations along these lines with the server jocks countless times. The never-ending quest always centered on performance, which distilled down to "transactional" speed at the upper levels of the OSI model (their end) and "throughput" of the pipes at the lower levels (my end).

      What we all learned together was: we don't know what we don't know. Therefore, testing must encompass and control the largest set of variables possible. (Of course, the real world has a way of trashing results from test environments, where still more variables are introduced, many of them "intermittent".)
      On average, and in the bottom line, neither side was completely right or completely wrong. It all works together.

      With that said, you may be right.

      In any case, it's apparent that you have real Linux command-line expertise and testing know-how. Accordingly, allow me to salute your willingness to step up and help out a forum user.
      ________________________________________________

      (Oh, and thanks for the memories. Now, to push those memories out of mind again...)

    • Dropkick Murphy wrote:

      The second one is a porno, so it's being watched by the internet audience during the transfer, which makes it slower 8o

      SCNR
      That's what I'd call a WAG, but I like it. ;)
      In any case, there's nothing going on here that can't be resolved with ->

      In fact, after a few (several?), I've found that minor network issues resolve themselves... for a while. (Maybe overnight.)