File transfers to OMV drop in speed after ~1GB

  • Setting up OMV on a Rock64 1GB board using the precompiled image. Setup went fine and I configured a USB drive with shares, users, and access. Connecting to the share from a Win 10 PC works as expected. When I move a 2GB file to the share, write speeds start at around 75MB/s, but after about 1GB of the file has transferred the speed drops to about 15MB/s for the rest of the file. Reading files from the drive gives me a steady 75MB/s the whole way through. I thought it might be a Samba issue, so I tried the suggestions in the Samba tuning threads, but that did not help. I tried moving the file over FTP and saw the same issue: the drop-off was more gradual, but after about 1GB the transfer got slower and slower until it ended up at around 20MB/s when the file finished. Any thoughts on what I should be looking for?

  • Maybe it's a macOS issue

    It's not. To nail the problem down, the procedure is always the same: test storage and network individually. If you search the forum for 'iperf iozone' or 'iperf3 iozone3' you will most probably find some posts by me elaborating on this.


    If the storage is the bottleneck, then writes over the network will be fast until the filesystem caches/buffers fill up, after which the storage performance becomes the limit.


    On the ARM images both tools are installed by default; no idea about x86. And on macOS, to get iperf3 all you need to do is install Homebrew and then run brew install iperf3.
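    A side note on units: iperf3 reports Mbits/sec while file-copy dialogs usually show MB/s, so the two differ by a factor of 8. A small sketch of the conversion (the result line below is a made-up example for Gigabit Ethernet, not real output):

```shell
# Hypothetical iperf3 summary line -- real output will differ.
line='[  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec'
# Pull out the Mbits/sec value and divide by 8 to get MB/s,
# which is what SMB/AFP file copy dialogs usually display.
mbits=$(printf '%s\n' "$line" | awk '{for (i = 1; i < NF; i++) if ($(i+1) == "Mbits/sec") print $i}')
mbs=$(awk -v m="$mbits" 'BEGIN{printf "%.1f", m / 8}')
echo "$mbs MB/s"
```

    Anything close to ~117 MB/s is the practical ceiling of Gigabit Ethernet, so 75MB/s at the start of a copy already suggests the network itself is not the bottleneck.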

  • htop
    htop.png



    iperf test to and from



    Still trying to figure out how to use iozone, though. I found a command in another thread but I'm not sure this is what you need. It's supposed to be writing to a USB 3.0 thumb drive.


    EDIT: Found another thread where you said to do cd $share first and then run iozone. Did that and here are the results. Still not sure if this is what you need, though; I'm not sure I'm using iozone correctly.



  • Thanks @tkaiser!


    I'm not sure how to interpret htop though...
    htop_on_server2.png
    Connected via AFP as user rocky while copying 7 GB files.

  • Did I understand correctly that the target medium is a thumb drive? Then these would be totally expected speeds for long copies.
    Your network seems fine.
    You can test the drive using dd:
    dd if=/dev/zero of=path/inside/thumbdrive/test.img bs=100M count=20 oflag=direct
    I am confident this will report roughly the write speed you saturated at during long copies. It will create a 2 GB file test.img, which you can delete afterwards.

  • It's supposed to be writing to a USB 3.0 thumb drive

    These things can start to throttle based on the access pattern. Your two iozone tests both seem to have tested a USB drive, since the reported values are a bit too high for an SD card or eMMC (or maybe you run off a really good high-capacity eMMC module)?


    Anyway: the mode to test with iozone is


    Code
    cd /path/to/mountpoint
    iozone -e -I -a -s 1000M -r 128k -r 1024k -r 16384k -i 0 -i 1

    This will test sequential transfer speeds with three different block sizes (128KB, 1MB, 16MB) and will also show throttling effects if a USB thumb drive starts to heat up and then slows down after some time. That's why we test with several block sizes of increasing size: normally, the larger the block size, the higher the transfer speed. If performance decreases instead, you know the thumb drive throttles to prevent overheating.
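    If iozone is not at hand, a rough stand-in for the same idea can be sketched with plain dd: write the same amount of data with the three block sizes and compare the rates. The /tmp path is a placeholder; point it at a file on the mounted USB drive instead.

```shell
# Write the same 64 MiB total with three block sizes and compare the rates.
# conv=fdatasync makes dd flush to the device so the page cache
# doesn't inflate the numbers.
# /tmp/ddtest.img is a placeholder path -- use a file on the USB drive instead.
for spec in '128k 512' '1024k 64' '16384k 4'; do
  set -- $spec
  echo "block size $1:"
  dd if=/dev/zero of=/tmp/ddtest.img bs="$1" count="$2" conv=fdatasync 2>&1 | tail -n 1
done
rm -f /tmp/ddtest.img
```

    If the rate drops on later runs rather than rising with block size, that's consistent with thermal throttling.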

  • I'm not sure how to interpret htop though...

    CPU utilization of the afpd task seems OK to me, but on average every CPU core is utilized at 40%, which seems to indicate that there's a lot of kernel background activity. As suggested in your other thread, I would run iostat 10 in the background to get the bigger picture (armbianmonitor -m 10 should also suffice, but that's not available on x86 installations).
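    iostat is part of the sysstat package; if it isn't installed, the kernel's raw counters in /proc/diskstats give a similar picture. A minimal sketch (the device-name pattern is an assumption; adjust it to your disks):

```shell
# Snapshot the write counters from /proc/diskstats.
# Field 3 is the device name, field 10 the total sectors written
# (512 bytes each) since boot.
awk '$3 ~ /^(sd[a-z]+|mmcblk[0-9]+)$/ {
       printf "%-10s %.1f MB written since boot\n", $3, $10 * 512 / 1e6
     }' /proc/diskstats
# Run it twice with a sleep in between and diff the numbers to get a rate.
```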

  • Did I understand correctly that the target medium is a thumb drive? Then these would be totally expected speeds for long copies.
    Your network seems fine.
    You can test the drive using dd:
    dd if=/dev/zero of=path/inside/thumbdrive/test.img bs=100M count=20 oflag=direct
    I am confident this will report roughly the write speed you saturated at during long copies. It will create a 2 GB file test.img, which you can delete afterwards.

    Code
    root@rock64:~# dd if=/dev/zero of=/srv/dev-disk-by-label-HD01/HD01/test.img bs=100M count=20 oflag=direct
    20+0 records in
    20+0 records out
    2097152000 bytes (2.1 GB, 2.0 GiB) copied, 20.4388 s, 103 MB/s
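    As a sanity check, dd's reported rate can be recomputed from the byte count and elapsed time in the output above:

```shell
# 2097152000 bytes in 20.4388 s, taken from the dd output above.
rate=$(awk 'BEGIN{printf "%.0f", 2097152000 / 20.4388 / 1e6}')
echo "$rate MB/s"
```

    Since oflag=direct bypasses the page cache, ~103 MB/s is close to the raw sequential write speed of the device itself.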
  • Oh well, those unreliable external USB3 disk graveyards. Are you able to access all your disks via SMART? (It works via eSATA, but with USB3 it's always a gamble.)

    When I tried to activate SMART monitoring on devices /dev/sda and /dev/sdb, I got "communication failure" for each.
    Screenshot 2019-05-05 at 19.01.09.png
    However at the next attempt it suddenly worked:
    Screenshot 2019-05-05 at 19.04.51.png


    FANTEC has an eSATA port.
    Screenshot 2019-05-05 at 19.10.07.png
    Do you think an eSATA to USB cable would make any sense?

  • These things can start to throttle based on the access pattern. Your two iozone tests both seem to have tested a USB drive, since the reported values are a bit too high for an SD card or eMMC (or maybe you run off a really good high-capacity eMMC module)?
    Anyway: the mode to test with iozone is


    Code
    cd /path/to/mountpoint
    iozone -e -I -a -s 1000M -r 128k -r 1024k -r 16384k -i 0 -i 1

    This will test sequential transfer speeds with three different block sizes (128KB, 1MB, 16MB) and will also show throttling effects if a USB thumb drive starts to heat up and then slows down after some time. That's why we test with several block sizes of increasing size: normally, the larger the block size, the higher the transfer speed. If performance decreases instead, you know the thumb drive throttles to prevent overheating.

    cd /srv/dev-disk-by-label-HD01

  • Code
         kB  reclen   write  rewrite    read  reread
    1024000     128   33658    36342   36771   37017
    1024000    1024   38906    38972   38676   38955
    1024000   16384   39763    39877   39832   39767

    That smells like USB 2.0 (less than 40 MB/s). You could provide the output of armbianmonitor -u, but since you're using an ayufan image and not an Armbian-based OMV image the amount of information may be limited. At the very least you should check lsusb and lsusb -t.
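    iozone reports its numbers in kB/s; converting the rows above to MB/s and flagging the ceiling mechanically (the ~45 MB/s threshold is a rule of thumb for USB 2.0 mass storage, not an exact spec limit):

```shell
# iozone result rows from above: kB reclen write rewrite read reread (kB/s).
printf '%s\n' \
  '1024000   128 33658 36342 36771 37017' \
  '1024000  1024 38906 38972 38676 38955' \
  '1024000 16384 39763 39877 39832 39767' |
awk '{ printf "reclen %5s kB: write %.1f MB/s, read %.1f MB/s\n", $2, $3/1024, $5/1024 }
     $5/1024 < 45 { slow++ }
     END { if (slow == NR) print "=> under ~45 MB/s at every block size: typical of a USB 2.0 link" }'
```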


    Do you think an eSATA to USB cable would make any sense?

    Nope. In those eSATA/USB drive enclosures there's one SATA port multiplier (with USB3 usually a JMicron JMB575) combined with one USB3-to-SATA bridge (most probably a JMS567). So all you would do is exchange one USB3-to-SATA bridge for another, which usually makes things even worse.

  • Do you think an eSATA to USB cable would make any sense?

    No! You need an eSATA port, or at least a SATA port, on the other side. And if you want to address more than one disk in the enclosure, the SATA/eSATA port on the motherboard must support port multipliers. It's a game of luck to find a combination that works.


    Edit: time overlap ;)

    OMV 3.0.99 (Gray style)
    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1)- Fractal Design Node 304

  • Code
    lsusb
    Bus 005 Device 002: ID 152d:0551 JMicron Technology Corp. / JMicron USA Technology Corp.
    Bus 005 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
    Bus 004 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub


    Code
    lsusb -t
    /: Bus 05.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 5000M
    |__ Port 1: Dev 2, If 0, Class=Mass Storage, Driver=usb-storage, 5000M
    /: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci-hcd/1p, 480M
    /: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=ohci-platform/1p, 12M
    /: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci-platform/1p, 480M
    /: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=dwc2/1p, 480M
