Low IOPS on SMB Share despite good read/write speeds

  • So I have an OMV server running on an old desktop (i5-3570K, 16 GB RAM, OMV running off a USB 3.0 drive). On this server I have ten 7200 RPM 3 TB HDDs currently running in RAID10 (previously RAID6, with the same issue), and they are getting very low IOPS: around 15-20 when I run diskspd. I have a second server running three of these same drives in RAID5, and its IOPS are 200-300. Both servers are connected to my other machines via 10 GbE. If I run CrystalDiskMark on both servers, both show similar read and write speeds that fully saturate the 10 GbE connection. Does anyone know why my IOPS are so low on the larger server? I built it hoping to use it as the main hub to host my VMs for Proxmox, but with this issue it isn't really an option at the moment.
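
    (One way to separate the array itself from the SMB/network path is to run a random 4K test locally on the server. A minimal sketch, assuming fio is installed on the OMV box and the RAID10 array is mounted at /srv/raid10, which is a placeholder path:)

    # random 4K reads, queue depth 32, bypassing the page cache
    fio --name=randread-test --filename=/srv/raid10/fio-test.bin --size=4G \
        --rw=randread --bs=4k --iodepth=32 --ioengine=libaio --direct=1 \
        --runtime=60 --time_based --group_reporting

    If the local IOPS figure is also low, the problem sits in the array or controller; if it looks fine locally, the SMB/network layer becomes the more likely suspect.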

  • Maybe we could help, but only if you reveal all the necessary hardware details.

    omv 6.9.6-2 (Shaitan) on RPi CM4/4GB with 64bit Kernel 6.1.21-v8+

    2x 6TB 3.5'' HDDs (CMR) formatted with ext4, via a 2-port PCIe SATA card with ASM1061R chipset providing hardware-supported RAID1


    omv 6.9.3-1 (Shaitan) on RPi4/4GB with 32bit Kernel 5.10.63 and WittyPi 3 V2 RTC HAT

    2x 3TB 3.5'' HDDs (CMR) formatted with ext4, in an Icy Box IB-RD3662-C31 / hardware-supported RAID1

    For read/write performance of SMB shares hosted on this hardware, see the forum thread here.

    • Official post

    Sounds like the PCIe bus may be overloaded. Servers have better buses than desktops. Not sure how to test that, but it might be in the spec sheet; lspci -v should show your hardware.
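
    For example, to see the negotiated PCIe link width and speed of the SATA/HBA controller (the slot address 01:00.0 below is a placeholder; find the real one with a plain lspci first):

    sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'

    If LnkSta reports fewer lanes or a lower speed than LnkCap, the card is not getting the bandwidth it was designed for.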


    The Proxmox kernel and any BIOS updates may help, too.


    On https://forums.untangle.com there are many posts about 10Gb throughput. It is not as easy as plug and pray. Untangle is Debian-based, same as OMV.
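
    One way to rule the network in or out is a raw throughput test with iperf3 between the two machines (a sketch; iperf3 has to be installed on both ends, and 192.168.1.10 is a placeholder address):

    # on the OMV server
    iperf3 -s
    # on the client
    iperf3 -c 192.168.1.10 -P 4 -t 30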

    If you make it idiot proof, somebody will build a better idiot.

