Poor NFS performance

    • OMV 4.x

      Hello guys,

      I've built up my own NAS system based on:
      - ASRock J4205-ITX
      - 8GB DDR3 RAM
      - 64GB Samsung 470 as system disk
      - 2TB WD Red
      - 3TB WD Red
      - Debian Stretch 9.3
      - OMV Arrakis 4.0.16-1

      Now I'd like to share some folders with some Linux machines (Arch, ubuntu mate & xubuntu). As I have no Windows machines and therefore no real need for SMB shares I'd favour to share it via NFS. But unfortunately the NFS performance seems quite poor according to some benchmarks I've made:

      Protocol                                  Speed [MB/s]
      NFS4, sync                                72.08
      NFS4, async                               66.63
      NFS4, several options, see code below     67.99

      While the tests were running, neither the server nor the client was under any significant load.

      The client was a Lenovo T61 running Arch with a 256GB Samsung 750 SSD, which may be close to its limit, as the T61 only offers SATA1 (1.5 Gbit/s) interfaces.
      All software on server and client was up-to-date.
      NFS ran with 8 threads and the export options were: rw,subtree_check,secure
      SMB/CIFS was shared with OMV default options

      Are there options that could speed up NFS transfers, or is SMB really faster?


      michael@t61:~$ sudo mount -t cifs -o rw,uid=1026,username=michael //server.home./share /mnt/test/
      Password for michael@//server.home./share:
      michael@t61:~$ rsync -av --progress Projekte/aqemu/ubuntu_mate_HDA.img /mnt/test
      sending incremental file list
      ubuntu_mate_HDA.img
        8,296,333,312 100% 115.27MB/s 0:01:08 (xfr#1, to-chk=0/1)
      sent 8,298,358,898 bytes received 35 bytes 112,902,842.63 bytes/sec
      total size is 8,296,333,312 speedup is 1.00
      michael@t61:~$ rm /mnt/test/ubuntu_mate_HDA.img
      michael@t61:~$ sudo umount /mnt/test
      michael@t61:~$ sudo mount -t nfs -o rw server.home.:/export/share /mnt/test/
      michael@t61:~$ rsync -av --progress Projekte/aqemu/ubuntu_mate_HDA.img /mnt/test
      sending incremental file list
      ubuntu_mate_HDA.img
        8,296,333,312 100% 71.38MB/s 0:01:50 (xfr#1, to-chk=0/1)
      sent 8,298,358,898 bytes received 35 bytes 72,474,750.51 bytes/sec
      total size is 8,296,333,312 speedup is 1.00
      michael@t61:~$ rm /mnt/test/ubuntu_mate_HDA.img
      michael@t61:~$ sudo umount /mnt/test
      michael@t61:~$ sudo mount -t nfs -o rw server.home.:/share /mnt/test/
      michael@t61:~$ rsync -av --progress Projekte/aqemu/ubuntu_mate_HDA.img /mnt/test
      sending incremental file list
      ubuntu_mate_HDA.img
        8,296,333,312 100% 72.08MB/s 0:01:49 (xfr#1, to-chk=0/1)
      sent 8,298,358,898 bytes received 35 bytes 70,624,331.34 bytes/sec
      total size is 8,296,333,312 speedup is 1.00
      michael@t61:~$ rm /mnt/test/ubuntu_mate_HDA.img
      michael@t61:~$ sudo umount /mnt/test
      michael@t61:~$ sudo mount -t nfs -o rw,async server.home.:/share /mnt/test/
      michael@t61:~$ rsync -av --progress Projekte/aqemu/ubuntu_mate_HDA.img /mnt/test
      sending incremental file list
      ubuntu_mate_HDA.img
        8,296,333,312 100% 66.63MB/s 0:01:58 (xfr#1, to-chk=0/1)
      sent 8,298,358,898 bytes received 35 bytes 67,193,189.74 bytes/sec
      total size is 8,296,333,312 speedup is 1.00
      michael@t61:~$ rm /mnt/test/ubuntu_mate_HDA.img
      michael@t61:~$ sudo umount /mnt/test
      michael@t61:~$ sudo mount -t nfs -o rw,bg,intr,soft,users,noauto,_netdev,proto=tcp,retry=3,timeo=10 server.home.:/share /mnt/test/
      michael@t61:~$ rsync -av --progress Projekte/aqemu/ubuntu_mate_HDA.img /mnt/test
      sending incremental file list
      ubuntu_mate_HDA.img
        8,296,333,312 100% 173.73MB/s 0:00:45 (xfr#1, to-chk=0/1)
      rsync: write failed on "/mnt/test/ubuntu_mate_HDA.img": Input/output error (5)
      rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.2]
      michael@t61:~$ sudo umount /mnt/test
      michael@t61:~$ sudo mount -t nfs -o rw,bg,intr,soft,users,noauto,_netdev,proto=tcp server.home.:/share /mnt/test/
      michael@t61:~$ rsync -av --progress Projekte/aqemu/ubuntu_mate_HDA.img /mnt/test
      sending incremental file list
      ubuntu_mate_HDA.img
        8,296,333,312 100% 67.99MB/s 0:01:56 (xfr#1, to-chk=0/1)
      sent 8,298,358,898 bytes received 35 bytes 68,866,049.24 bytes/sec
      total size is 8,296,333,312 speedup is 1.00
      michael@t61:~$ rm /mnt/test/ubuntu_mate_HDA.img
      michael@t61:~$ sudo umount /mnt/test
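      A side note on the async tests above: the sync/async trade-off for writes is decided by the export options in /etc/exports on the server, not by the client's mount options, so mounting with -o async does not by itself enable asynchronous writes on the server. A hypothetical export line (the subnet 192.168.178.0/24 is an assumption; substitute your own LAN range):

      ```shell
      # /etc/exports -- hypothetical example; the subnet is an assumption
      /export/share 192.168.178.0/24(rw,async,no_subtree_check,secure)
      ```

      After editing the file, running "exportfs -ra" as root re-reads the export table without restarting the NFS server. Keep in mind that async trades crash safety for speed: the server acknowledges writes before they have reached disk.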
    • By watching the variable 'sockets-enqueued' in /proc/fs/nfsd/pool_stats (as described in knfsd-stats.txt) I found out that 8 threads are not enough: they resulted in tens of thousands of enqueued sockets. When I increased the number of threads to 64, the number of enqueued sockets remained constant.
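      For anyone wanting to reproduce this, a sketch of the tuning steps on Debian/OMV (the value 64 is simply what worked here, not a general recommendation):

      ```shell
      # A growing 'sockets-enqueued' column means requests are queueing
      # because all nfsd threads are busy:
      cat /proc/fs/nfsd/pool_stats

      # Change the number of server threads at runtime:
      sudo rpc.nfsd 64

      # To make it persistent on Debian, set RPCNFSDCOUNT=64 in
      # /etc/default/nfs-kernel-server, then restart the service:
      sudo systemctl restart nfs-kernel-server
      ```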

      Then I ran more benchmarks, but I still don't know why NFS is so much slower.
      protocol        rate [MB/s]        export options

      The test file was about 3.5 GB in size; each test was run three times and the results were averaged. The file was copied using rsync, as dd is not able to work over SMB/CIFS. You can find the test script below.

      The NFS client options were always the same (rw,bg,intr,soft,proto=tcp, as in the script below), where most values are defaults on my machine (Arch Linux).
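      Because options like rsize and wsize are negotiated with the server, the values actually in effect can differ from what was requested at mount time. They can be checked on the client while the share is mounted (the commands assume the /mnt/test mount point used above):

      ```shell
      # Show the effective options of all NFS mounts (part of nfs-utils):
      nfsstat -m

      # Or read them straight from the kernel's mount table:
      grep /mnt/test /proc/mounts
      ```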


      #!/bin/sh

      # Note: the original used the bash-only "function" keyword and the
      # deprecated $[...] arithmetic; both are fixed for POSIX /bin/sh here.
      perform_copy() {
          echo "$(mount | grep /mnt/test)"
          echo "copying data"
          loop=3
          while [ $loop -ne 0 ]; do
              rm /mnt/test/Deadpool.ts
              # dd if=/home/me/Deadpool.ts of=/mnt/test/Deadpool.ts bs=1M
              rsync -av Deadpool.ts /mnt/test/
              loop=$((loop - 1))
              sleep 1
          done
          sudo umount /mnt/test
          sleep $1
      }

      echo "--------------------------------------------------"
      #sudo umount /mnt/test
      echo "cifs share"
      #sudo mount -t cifs -o rw //omv.local./test0 /mnt/test/
      perform_copy 3
      echo "--------------------------------------------------"
      echo "export options: rw,async,no_subtree_check,all_squash,anonuid=1028,anongid=100"
      sudo mount -t nfs -o rw,bg,intr,soft,proto=tcp omv.local.:/test1 /mnt/test/
      perform_copy 3
      echo "--------------------------------------------------"
      echo "export options: rw,subtree_check,secure,async"
      sudo mount -t nfs -o rw,bg,intr,soft,proto=tcp omv.local.:/test2 /mnt/test/
      perform_copy 3
      echo "--------------------------------------------------"
      echo "export options: rw,subtree_check,secure"
      sudo mount -t nfs -o rw,bg,intr,soft,proto=tcp omv.local.:/test3 /mnt/test/
      perform_copy 0
    • Had exactly the same problem when accessing OMV4 NFS shares from OS X clients: very slow reads and writes. That's why I went back to OMV3. I would really like to switch to OMV4, but that NFS issue is keeping me away from it. Is there a way to fix it, please?

      The post was edited 1 time, last by jaydee99 ().

    • I had an external RAID5 array attached via USB 3 to a Windows 10 PC and I could get 220 MB/s read and write performance.

      I attached this same RAID5 array to a computer (ROCK64 = 4 x Core ARM chip with 4GB RAM) running OMV3 via USB 3. This OMV3 computer is connected to my other computers via Gigabit Ethernet.

      Using SMB/CIFS I get 90 MB/s read and write out of my RAID5 array (40% of the performance).

      Using NFS I get 12 MB/s read and write out of my RAID5 array (5% of the performance); unfortunately, I have a Linux application that only works over NFS.

      I expected to lose a little performance going over a network and through another computer, but nothing like this...

      To summarise, I get the same performance issue as you, but with OMV3.

      The post was edited 3 times, last by AceVentura ().