Poor NFS performance

  • Hello guys,


    I've built up my own NAS system based on:
    - ASRock J4205-ITX
    - 8GB DDR3 RAM
    - 64GB Samsung 470 as system disk
    - 2TB WD Red
    - 3TB WD Red
    - Debian Stretch 9.3
    - OMV Arrakis 4.0.16-1


    Now I'd like to share some folders with some Linux machines (Arch, Ubuntu MATE & Xubuntu). As I have no Windows machines and therefore no real need for SMB shares, I'd prefer to share them via NFS. Unfortunately, NFS performance seems quite poor according to some benchmarks I've made:


    Protocol                                  Speed [MB/s]
    SMB/CIFS                                  115.27
    NFS3                                      71.38
    NFS4, sync                                72.08
    NFS4, async                               66.63
    NFS4, several options (see code below)    67.99


    While performing the test, neither the server nor the client was doing anything else demanding.


    The client was a Lenovo T61 with Arch on a 256GB Samsung 750 disk, which may be nearly at its limit, as the T61 only offers SATA I interfaces.
    All software on server and client was up-to-date.
    NFS ran with 8 threads and the export options were: rw,subtree_check,secure
    SMB/CIFS was shared with OMV default options


    Are there options that could speed up NFS transmission, or is SMB really faster?


  • By watching the variable 'sockets-enqueued' in /proc/fs/nfsd/pool_stats (as described in knfsd-stats.txt) I found out that 8 threads are not enough and result in tens of thousands of enqueued sockets. When I increased the number of threads to 64, the number of enqueued sockets remained constant.
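    In case someone wants to reproduce this, the steps were roughly the following. This is a sketch using the standard Debian paths; adjust the thread count to your own hardware:

```shell
# Watch for a growing 'sockets-enqueued' counter
# (field meanings are documented in knfsd-stats.txt)
cat /proc/fs/nfsd/pool_stats

# Raise the number of nfsd threads at runtime
sudo rpc.nfsd 64

# Make it persistent across reboots on Debian:
# set RPCNFSDCOUNT=64 in /etc/default/nfs-kernel-server,
# then restart the server
sudo systemctl restart nfs-kernel-server
```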


    Then I ran more benchmarks, but I still don't know why NFS is so much slower.

    Protocol    Rate [MB/s]    Export options
    SMB/CIFS    108.97
    NFS         84.51          rw,async,no_subtree_check,all_squash,anonuid=1028,anongid=100
    NFS         82.06          rw,subtree_check,secure,async
    NFS         72.16          rw,subtree_check,secure
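    For reference, the fastest NFS row above corresponds to an /etc/exports entry of roughly this shape (the share path and client subnet here are placeholders, not my real ones):

```shell
# /etc/exports -- hypothetical share path and client subnet
/export/media  192.168.178.0/24(rw,async,no_subtree_check,all_squash,anonuid=1028,anongid=100)
```

    After editing the file, `exportfs -ra` reloads the export table without restarting the server.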



    The test file was about 3.5 GB in size; each test was run 3 times and the results were averaged. The file was copied using rsync, as dd cannot operate over SMB/CIFS. You can find the test script below.


    The NFS client options were always the same:
    rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,soft,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.178.35,local_lock=none,addr=192.168.178.2
    where most values are defaults on my machine (Arch Linux).
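    The idea behind the benchmark, as a simplified sketch (NOT the exact test script; SRC and DST are placeholder paths):

```shell
#!/bin/sh
# Benchmark sketch: copy a large file to the mounted share with rsync,
# average the wall time over several runs, and print the rate in MB/s.
SRC=${SRC:-/tmp/testfile}        # local test file (~3.5 GB in the runs above)
DST=${DST:-/srv/nfs/testfile}    # destination on the mounted share
RUNS=3

# rate_mb_s BYTES SECONDS -> MB/s, two decimals
rate_mb_s() {
    awk -v b="$1" -v s="$2" 'BEGIN { printf "%.2f\n", b / s / 1000000 }'
}

run_benchmark() {
    size=$(stat -c %s "$SRC")
    total=0
    i=1
    while [ "$i" -le "$RUNS" ]; do
        start=$(date +%s.%N)
        rsync --whole-file "$SRC" "$DST"
        end=$(date +%s.%N)
        total=$(awk -v t="$total" -v a="$start" -v b="$end" 'BEGIN { print t + (b - a) }')
        rm -f "$DST"        # drop the copy so each run starts fresh
        i=$((i + 1))
    done
    avg=$(awk -v t="$total" -v n="$RUNS" 'BEGIN { print t / n }')
    rate_mb_s "$size" "$avg"
}

# Call run_benchmark once SRC exists and the share is mounted at DST's parent.
```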

  • Had exactly the same problem when accessing OMV4 NFS shares from OS X clients: very slow reads/writes. That's why I went back to OMV3... I would really like to switch to OMV4, but that NFS issue is keeping me away from it... Is there a way to fix that, please?

  • I had an external RAID5 array attached via USB 3 to a Windows 10 PC and I could get 220 MB/s read and write performance.


    I attached this same RAID5 array to a computer (ROCK64 = 4 x Core ARM chip with 4GB RAM) running OMV3 via USB 3. This OMV3 computer is connected to my other computers via Gigabit Ethernet.


    Using SMB/CIFS I get 90 MB/s read and write out of my RAID5 array (40% of the performance).


    Using NFS I get 12 MB/s read and write out of my RAID5 array (5% of the performance). Unfortunately, I have a Linux application that only works over NFS.


    I expected to lose a little performance going over a network and through another computer, but nothing like this...


    To summarise, I get the same performance issue as you, but with OMV3.
