10Gbe / NFS Tuning

  • Hello,


    I'm curious what speeds other people are seeing when OMV serves a datastore to ESXi via NFS over a single 10Gbe link?


    My setup seems to hit a limit somewhere around 500 MB/s write and 700 MB/s read (left picture).
    If I remove the 10Gbe limit (right picture) I see writes around 700 MB/s and reads hitting more than 1 GB/s.
    OMV 5.x, ConnectX 10Gbe cards, Dell Switch with Jumbo Frame Support, MTU 9000 - everything else default.
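    Before chasing tunables, it's worth confirming that jumbo frames actually survive the whole path between host and storage. A quick check (the target address below is a placeholder, not from this post):

    ```shell
    # Jumbo-frame sanity check: with MTU 9000 the largest unfragmented ICMP
    # payload is 9000 - 20 (IP header) - 8 (ICMP header) = 8972 bytes.
    MTU=9000
    PAYLOAD=$((MTU - 28))

    # -M do forbids fragmentation, so the ping fails at any hop with a
    # smaller MTU. 10.0.10.20 stands in for the storage host's address.
    echo "ping -M do -s ${PAYLOAD} -c 3 10.0.10.20"
    ```

    If that ping fails while a plain ping works, some device in the path is still at MTU 1500.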


    Many Thanks
    Mic

  • I think I figured things out.... mostly
    I was able to get local speeds peaking around 1500 MB/s with SSDs in RAID 10 :-)
    Over-the-wire speeds to another host in the same cluster have been close to 1100 MB/s, and I think that's about all there is to get from 10Gbe.
    Happy camper - this is definitely good enough for home use....
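    That ~1100 MB/s lines up with the simple arithmetic for a 10Gbe link:

    ```shell
    # 10 Gbit/s is 10 * 1000 / 8 = 1250 MB/s of raw line rate (decimal units).
    RAW_MBPS=$((10 * 1000 / 8))
    # Ethernet/IP/TCP/NFS framing typically costs roughly 5-10%:
    USABLE_MBPS=$((RAW_MBPS * 90 / 100))
    echo "${RAW_MBPS} MB/s raw, ~${USABLE_MBPS} MB/s usable"
    ```

    So ~1100 MB/s over the wire really is close to the practical ceiling of a single 10Gbe link.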

  • I think I figured things out.... mostly

    The changes you made might be helpful for other users :)


    this is definitely good enough for home use....

    Only for home use?? lol. That is good enough for enterprise :)

    omv 5.6.13 usul | 64 bit | 5.11 proxmox kernel | omvextrasorg 5.6.2 | kvm plugin 5.1.6
    omv-extras.org plugins source code and issue tracker - github



  • Happy to share - maybe it helps someone squeeze out more performance.


    I'm using a small SSD (120 GB) to boot an ESXi host and to store a VM called NAS01 running the latest version of OMV. This virtual machine has native access (PCI passthrough) to two AHCI cards presenting 10 physical disks and 4 SSDs. Everything else is pretty normal - OMV has an NFS export that is mounted by the ESXi hosts as a datastore. To tune this for optimum performance I did the following:
    - Dedicated virtual switch (backbone)
    - Dedicated 10Gbe network cards for storage
    - Dedicated physical 10Gbe Switch


    I had been running a similar setup for almost 8 years - I just rebuilt and tuned things after freeing up more HDDs to throw into the mix.


    Parameters:
    - MTU 9000 on all 10Gbe interfaces
    - Parameters for NFS export: "async,no_subtree_check,insecure"
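    For reference, the resulting /etc/exports entry looks roughly like this (export path and client subnet are placeholders, not taken from the post; on OMV the options go into the share's extra-options field):

    ```
    /export/datastore  10.0.10.0/24(rw,async,no_subtree_check,insecure)
    ```

    Note that async trades crash safety for write speed, and insecure permits client source ports above 1023, which NFS clients like ESXi's may use.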


    On the OMV side I applied the following fine-tuning options, basically treating the VMXNET3 adapter like a physical Mellanox adapter:


    # Disable TCP timestamps
    sysctl -w net.ipv4.tcp_timestamps=0


    # Selective acks
    sysctl -w net.ipv4.tcp_sack=1


    # Increase maximum length of processor input queues
    sysctl -w net.core.netdev_max_backlog=250000


    # Increase the TCP maximum and default buffer sizes
    sysctl -w net.core.rmem_max=4194304
    sysctl -w net.core.wmem_max=4194304
    sysctl -w net.core.rmem_default=4194304
    sysctl -w net.core.wmem_default=4194304
    sysctl -w net.core.optmem_max=4194304


    # Increase memory thresholds to prevent packet dropping:
    sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
    sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"


    # Enable low latency mode for TCP (a no-op since kernel 4.14, kept for completeness):
    sysctl -w net.ipv4.tcp_low_latency=1


    # Buffer split evenly between TCP Window and Applications
    sysctl -w net.ipv4.tcp_adv_win_scale=1
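    The sysctl -w calls above take effect immediately but don't survive a reboot. To persist them, the same values can go into a drop-in file under /etc/sysctl.d/ (the filename below is an arbitrary choice, not from the post) and be loaded with sysctl --system:

    ```
    # /etc/sysctl.d/99-10gbe-tuning.conf
    net.ipv4.tcp_timestamps = 0
    net.ipv4.tcp_sack = 1
    net.core.netdev_max_backlog = 250000
    net.core.rmem_max = 4194304
    net.core.wmem_max = 4194304
    net.core.rmem_default = 4194304
    net.core.wmem_default = 4194304
    net.core.optmem_max = 4194304
    net.ipv4.tcp_rmem = 4096 87380 4194304
    net.ipv4.tcp_wmem = 4096 65536 4194304
    net.ipv4.tcp_low_latency = 1
    net.ipv4.tcp_adv_win_scale = 1
    ```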


    I have also attached a pic of the setup and the final test results with 10 x 1 TB 7200 RPM disks in RAID 10 (mixed vendors, all more than 5 years old).
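    To reproduce numbers like these, a crude sequential write test can be run with dd (the target path and sizes below are illustrative, not the exact test from the post):

    ```shell
    # Crude sequential write benchmark. conv=fsync forces the data to
    # storage before dd reports a rate, so the page cache doesn't inflate it.
    # TARGET is a placeholder; point it at a file on the NFS datastore
    # (it falls back to a temp file here just so the snippet runs anywhere).
    TARGET="${TARGET:-$(mktemp)}"
    dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync
    rm -f "$TARGET"
    ```

    For more controlled results (queue depth, block size, random vs. sequential), fio is the better tool if it's installed.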


    Cheers
    M

  • Hello, I just installed a 10gbe nic (Asus xg-c100c) in my own openmediavault build and I'd like to try out those settings in my own build. Which config file are you editing for those settings?

  • Hello, I just installed a 10gbe nic (Asus xg-c100c) in my own openmediavault build and I'd like to try out those settings in my own build. Which config file are you editing for those settings?

    I think they are being run on the OMV box from a shell / SSH session :)
