Happy to share - maybe it helps someone squeeze out a bit more performance.
I'm using a small SSD (120 GB) to boot an ESXi host and to store a VM called NAS01 running the latest version of OMV. This virtual machine has native access (PCI passthrough) to two AHCI cards presenting 10 physical disks and 4 SSDs. Everything else is pretty normal - OMV has an NFS export that is mounted by the ESXi hosts as a datastore (a rough sketch of the export and mount commands follows the list below). I've been running a similar setup for almost 8 years - I just rebuilt and re-tuned things after freeing up more HDDs to throw into the mix. To tune this for optimum performance I did the following:
- Dedicated virtual switch (backbone)
- Dedicated 10GbE network cards for storage
- Dedicated physical 10GbE switch
- MTU 9000 on all 10GbE interfaces (see the jumbo-frame check after this list)
- Parameters for NFS export: "async,no_subtree_check,insecure"
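As a rough sketch, the resulting /etc/exports entry could look something like this - the share path and client subnet are placeholders and rw is assumed; depending on the setup you may also need no_root_squash so the ESXi hosts (which mount as root) can write:
# example /etc/exports entry (path and subnet are placeholders)
/export/vmstore 10.0.10.0/24(rw,async,no_subtree_check,insecure)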
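On the ESXi side the export can then be mounted as a datastore through the GUI or with something along these lines (hostname, share path and datastore name are just examples):
# mount the NFS export as a datastore on the ESXi host
esxcli storage nfs add --host=nas01-storage --share=/export/vmstore --volume-name=NAS01-NFS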
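To verify that jumbo frames really make it end to end, a quick check is a do-not-fragment ping with a payload that just fits a 9000-byte MTU (8972 bytes of payload + 28 bytes of IP/ICMP headers); the addresses below are placeholders:
# from the OMV VM towards the ESXi storage vmkernel port
ping -M do -s 8972 10.0.10.10
# from the ESXi shell towards the NAS
vmkping -d -s 8972 10.0.10.20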
On the OMV side I applied the following tuning options, basically treating the VMXNET adapter like a physical Mellanox adapter:
# Disable TCP timestamps
sysctl -w net.ipv4.tcp_timestamps=0
# Enable selective acknowledgements (SACK)
sysctl -w net.ipv4.tcp_sack=1
# Increase maximum length of processor input queues
sysctl -w net.core.netdev_max_backlog=250000
# Increase the TCP maximum and default buffer sizes
sysctl -w net.core.rmem_max=4194304
sysctl -w net.core.wmem_max=4194304
sysctl -w net.core.rmem_default=4194304
sysctl -w net.core.wmem_default=4194304
sysctl -w net.core.optmem_max=4194304
# Increase memory thresholds to prevent packet dropping:
sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
sysctl -w net.ipv4.tcp_wmem="4096 65536 4194304"
# Enable low latency mode for TCP:
sysctl -w net.ipv4.tcp_low_latency=1
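# (note: on recent kernels tcp_low_latency is a legacy option and no longer has any effect)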
# Buffer split evenly between TCP Window and Applications
sysctl -w net.ipv4.tcp_adv_win_scale=1
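One thing to keep in mind: sysctl -w only changes the running kernel, so the values are gone after a reboot. To make them persistent I'd put them into a drop-in file under /etc/sysctl.d/ (the filename is just an example):
# /etc/sysctl.d/99-storage-tuning.conf
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.core.netdev_max_backlog = 250000
net.core.rmem_max = 4194304
net.core.wmem_max = 4194304
net.core.rmem_default = 4194304
net.core.wmem_default = 4194304
net.core.optmem_max = 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 65536 4194304
net.ipv4.tcp_low_latency = 1
net.ipv4.tcp_adv_win_scale = 1
# load it without rebooting:
sysctl --system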
I have also attached a pic of the setup and the final test results with 10 x 1 TB 7200 RPM disks in RAID 10 (mixed vendors, all more than 5 years old).