NFS Speeds sub-par

    • OMV 3.x

      Newbie to OMV here, looking for some help or insight. I've done a lot of searching, reading, and observing of different threads on NFS setup, speeds, etc. It seems that with NFS v3 or v4, modern gigabit NICs, and gigabit switches, I should be seeing NFS throughput of at least 40-60 MB/s. I'd imagine even higher than that, but I've seen many people getting at least 50-60 MB/s. I'm running fairly new hardware (only about 3 years old) with gigabit NICs, and I have a new Netgear gigabit switch. Interestingly, my old setup (I'm moving from Synology - XPenology, actually - to OMV) wasn't getting optimal speeds either, via NFS or CIFS.

      I'm hoping some folks here can be kind enough to help illuminate the areas I need to dig into to get those speeds. Some details, to rule out the usual first-line items I've already checked:
      • Validated that all drives support fast write speeds (all new WD Red drives attached via SATA ports rated at 4/6 Gb/s) - dd reports ~90 MB/s. I assume part of this comes from using MergerFS to pool my drives, though it's still surprising, as the slowest link should be 4 Gb/s (~500 MB/s)
      • Validated the NFS rsize/wsize parameters on the mount - 1048576 and 131072 give slightly different peak speeds, but only a few MB/s apart (auto-negotiated by client/server)
      • Tried async vs sync vs omitting the parameter entirely; it makes a negligible difference (1-2 MB/s)
      • Heard about jumbo frames, but haven't gone through that process, as from my reading it should only have a slight effect - many say it's not the golden solution for a discrepancy this large
      • Tried v3 vs v4; not really finding any difference in the speeds
      • Confirmed the NICs are registering as 1 Gb/s to the local OS (Linux)
      • Transferring from OMV to a Linux host (Ubuntu 16.04) that has mounted the NFS share
      • Confirmed I'm using TCP, since I want reliable transmission rather than retransmits; I don't believe TCP has enough overhead to cause this problem, so I haven't bothered testing UDP
      • rsync is the tool being used and giving me the speed readings - I'm invoking it to copy from /mnt/foldera to /media/folderb, where /mnt is the remotely mounted NFS share and /media is the local folder
        • As a side note, I believe I was seeing worse speeds when rsyncing over SSH instead of over the NFS mount, which I believe should be expected, as SSH encryption hampers the speed
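One detail worth noting on the sync/async point: for NFS, async vs sync is primarily a server-side export option, not a client mount option, so it belongs in /etc/exports on the OMV box. A hypothetical pair of entries showing where each knob lives (the server name omvbox, export path /export/media, and subnet are placeholders; /mnt/foldera is the mount point from above):

```text
# Server side (/etc/exports on the OMV box) - async vs sync goes here:
/export/media  192.168.1.0/24(rw,async,no_subtree_check)

# Client side (/etc/fstab on the Ubuntu host) - version, protocol, rsize/wsize:
omvbox:/export/media  /mnt/foldera  nfs  vers=4,proto=tcp,rsize=131072,wsize=131072,hard  0  0
```

After editing /etc/exports, `exportfs -ra` on the server re-reads it without a remount.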
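On the dd number: by default dd can end up reporting page-cache speed rather than disk speed, and the 4/6 Gb/s SATA figure is the interface ceiling, not the drive's - spinning WD Reds typically sustain roughly 150-180 MB/s sequential. A minimal sketch of a cache-honest write test (TESTFILE is a placeholder; point it at the MergerFS pool for a meaningful result):

```shell
# TESTFILE is a placeholder - point it at a directory on the
# MergerFS pool to measure the pool itself.
TESTFILE="${TESTFILE:-$(mktemp)}"

# conv=fdatasync makes dd flush to disk before reporting a rate,
# so the page cache doesn't inflate the number. Larger counts
# give steadier readings.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fdatasync

# For a read test, drop the page cache first (as root):
#   sync && echo 3 > /proc/sys/vm/drop_caches
#   dd if="$TESTFILE" of=/dev/null bs=1M
rm -f "$TESTFILE"
```

If the fdatasync number is well below 90 MB/s, the pool itself is part of the bottleneck.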
      Again, I appreciate any insight you can lend into how I can narrow down where my bottleneck is. I checked briefly on the network side and don't see anything that should be limiting transmissions - no QoS functionality or anything to throttle traffic. I'll be setting that up later to limit user transfer speeds, but for now it's wide open. And I do apologize if there is a definitive thread for this; I've searched The Google :rolleyes: and mostly focused on the local forum results for OMV-specific troubleshooting.