Extremely slow HD speeds

  • Hi, OMV and Linux noob here (but with long IT nerdery experience). I put together an OMV box using the latest stable OMV 2 on an HP ProLiant server. I have two identical WD Red NAS hard drives mounted and pooled with Greyhole (disk A is set up as the landing zone for the shares). Performance from disk B is perfectly adequate, but reads and writes from disk A are extremely poor. I first noticed very slow writes to the Samba shares, then intermittent issues playing movies in Plex: some movies (which I later worked out Greyhole had stored on disk A) won't play more than a few seconds, while others (stored on disk B) play fine.


    I have followed the diagnostics from the sticky post at the top of this forum and everything looks ok until I get to the performance tests:


    Code
    root@Beast:~# dd conv=fdatasync if=/dev/mapper/sda1-sda1 of=/tmp/test.img bs=1G count=3
    3+0 records in
    3+0 records out
    3221225472 bytes (3.2 GB) copied, 825.229 s, 3.9 MB/s
    
    
    root@Beast:~# dd if=/dev/mapper/sda1-sda1 of=/dev/null bs=1G count=3
    3+0 records in
    3+0 records out
    3221225472 bytes (3.2 GB) copied, 650.054 s, 5.0 MB/s


    The same tests on disk B look fine:


    Code
    root@Beast:~# dd conv=fdatasync if=/dev/mapper/sdb1-sdb1 of=/tmp/test.img bs=1G count=3
    3+0 records in
    3+0 records out
    3221225472 bytes (3.2 GB) copied, 29.2664 s, 110 MB/s
    root@Beast:~# dd if=/dev/mapper/sdb1-sdb1 of=/dev/null bs=1G count=3
    3+0 records in
    3+0 records out
    3221225472 bytes (3.2 GB) copied, 21.3882 s, 151 MB/s
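
    A side note on the dd invocations above: both read from the device-mapper node, so even the conv=fdatasync run is effectively a read test. A conventional sequential write test points dd the other way, streaming /dev/zero into a file on the filesystem under test. A minimal sketch (the scratch path and the 128 MB size are illustrative, not from the thread):

```shell
#!/bin/sh
# Sequential write test sketch: stream zeros into a scratch file on the
# filesystem under test; conv=fdatasync forces a flush to disk before dd
# reports throughput. TESTDIR is an assumption -- point it at a mount on
# the slow disk (e.g. the Greyhole landing zone on disk A).
TESTDIR=${TESTDIR:-.}
dd if=/dev/zero of="$TESTDIR/ddtest.img" bs=1M count=128 conv=fdatasync
rm -f "$TESTDIR/ddtest.img"   # clean up the scratch file afterwards
```

    On a healthy WD Red this should land roughly in the same ballpark as disk B's 110 MB/s figure above.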


    I also ran the following:


    Code
    root@Beast:~# sudo hdparm -Tt /dev/mapper/sda1-sda1
    
    
    /dev/mapper/sda1-sda1:
     Timing cached reads:     2 MB in 18.79 seconds = 108.99 kB/sec
     Timing buffered disk reads:   2 MB in 15.69 seconds = 130.50 kB/sec


    As you can see, not very good numbers. For the record, the identical drive B shows the following:


    Code
    root@Beast:~# sudo hdparm -Tt /dev/mapper/sdb1-sdb1
    
    
    /dev/mapper/sdb1-sdb1:
     Timing cached reads:   24812 MB in  2.00 seconds = 12421.07 MB/sec
     Timing buffered disk reads: 456 MB in  3.01 seconds = 151.69 MB/sec


    As I said, I'm a Linux noob, so I don't really know where to start investigating further to sort this out; any help would be very welcome. Thanks in advance.

  • I'm guessing that's bad?

  • Doh, wrong disk. That'll teach me to just copy/paste.

  • Code
    iozone -e -I -a -s 100M -r 4k -r 1024k -i 0 -i 1 -i 2

    Please do a 'sudo apt install iozone3', then chdir to the disk in question, run the above call and post the results.


    Well, attribute 199 looks good here, but IIRC those WD drives never increment that counter even when errors occur. Anyway: if performance is that bad, I would check the cabling first and then test again with the above call. This iozone call also tests random IO, and that's where it gets interesting, since the numbers allow some conclusions about what's happening and where the bottleneck is (if it's cabling, most transactions are dropped and retransmits happen due to CRC errors).
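
    For what it's worth, attribute 199 (UDMA_CRC_Error_Count) can be pulled out of smartctl output mechanically, which makes it easy to compare before and after reseating cables. A small sketch; the sample line below is fabricated for illustration, and on the real box you would pipe `smartctl -A /dev/sda` in instead:

```shell
#!/bin/sh
# Extract the raw value of SMART attribute 199 (UDMA_CRC_Error_Count) from
# `smartctl -A`-style output. The sample line is made up for illustration;
# replace the printf with `smartctl -A /dev/sda` on a live system.
sample='199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0'
crc=$(printf '%s\n' "$sample" | awk '$1 == 199 { print $NF }')
echo "UDMA CRC errors: $crc"   # a value that climbs between runs points at cabling
```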

  • Oops, I wasn't aware that OMV2 is based on such an old Debian version. The only idea left (besides checking the cabling) is the above (why the hell does code always get inserted there?). At least that's how this guy, who does a lot of useful SD card benchmarks, does it according to https://raw.githubusercontent.…rks/microsd-benchmarks.sh

  • Sorry, I managed to get iozone installed by manually specifying the URL to the correct stable version. Anyway, running your previous command gives the following:

    Code
    ignore this, I ran it on the wrong drive. Just waiting for the correct drive to finish (which is taking ages)

    Not really sure what I'm looking at to be honest! But your help so far has been appreciated.

  • I've been struggling to get the test to complete. It seems to hang after the first reread test. I have left it multiple times for > 1 hour. Is this normal?


    Here's what I get anyway:


  • I've been struggling to get the test to complete. It seems to hang after the first reread test. I have left it multiple times for > 1 hour. Is this normal?

    No, that's more or less an indication of something seriously wrong with head positioning. The random IO tests keep the drive busy moving the heads between the various tracks all the time, and that seems to be where it goes wrong. At least it doesn't look like a cabling issue, since then sequential and random IO performance would have resulted in similarly bad numbers.


    Those WD Reds implement SMART self-tests. You could try to execute 'smartctl -t short /dev/sda' (it will tell you how long it needs, so you can check with 'smartctl -a' later) and, if that works, execute 'smartctl -t long /dev/sda' (might take hours/ages; you can always check the status in the meantime with 'smartctl -a /dev/sda').
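
    While a self-test runs, `smartctl -a` prints a "Self-test routine in progress... N% of test remaining." status block, and that percentage can be scraped out so you don't have to read the whole dump each time. A sketch with a fabricated sample; on the live box you would feed it `smartctl -a /dev/sda` instead, e.g. inside a sleep loop:

```shell
#!/bin/sh
# Scrape the "% of test remaining" figure from `smartctl -a` output while a
# SMART self-test is running. The sample text below is fabricated; substitute
# `smartctl -a /dev/sda` on a real system.
sample='Self-test execution status:      ( 249) Self-test routine in progress...
                                        90% of test remaining.'
remaining=$(printf '%s\n' "$sample" | grep -o '[0-9]\+% of test remaining' | cut -d% -f1)
echo "self-test remaining: ${remaining}%"
```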

  • Ok, these are the results of running smartctl -a after five minutes. It doesn't look to me like the test has worked, but I'm obviously no expert!

