SSD RAID array performs worse than HDD array when moving large files over the network
Clients: Windows 10 VMs running inside Proxmox hosts, jumbo frames enabled, connected through 10G SFP+
OMV: dual Xeon X5670 (2 x 6 cores, 2.93 GHz, 24 threads), 48 GB RAM, connected over 10G SFP+
- HDD setup: 12 x 4TB drives in RAID 50 under mdadm, with an XFS filesystem
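For anyone wanting to reproduce the layout: a 12-disk RAID 50 under mdadm is typically built as two 6-disk RAID 5 legs striped together. A sketch (not my exact commands; device names are examples):

```
# two 6-disk RAID 5 legs (device names are examples)
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[b-g]
mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[h-m]

# stripe the two legs into a RAID 0 to form the RAID 50
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1

# XFS on top
mkfs.xfs /dev/md2
```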
Moving files from the Windows clients to the OMV HDD array over CIFS/SMB works great, with speeds around 800+ MB/s; even small random file transfers work well.
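(For anyone wanting to double-check the network leg: the 10G path and jumbo frames can be verified independently of storage. IPs below are placeholders.)

```
# from a Windows client: full-size frame with don't-fragment set
# (8972 = 9000 MTU minus 28 bytes of IP/ICMP headers)
ping -f -l 8972 192.168.1.10

# raw TCP throughput, 4 parallel streams
# (run "iperf3 -s" on the OMV box first)
iperf3 -c 192.168.1.10 -P 4 -t 30
```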
I also have 8 x 250GB Samsung 850 Pro SSDs, which I have been experimenting with.
However, the performance of the SSDs is worse than the HDDs.
I have set up single drives, RAID 0, RAID 5, and RAID 50, all with XFS, and I keep seeing the following:
- Files around 6-9 GB move without issue, at the same speeds as the HDD array
- However, as files grow beyond 9 GB, the transfer rate drops like a rock: a 4-disk RAID 0 falls to 130 MB/s, and a single disk to 30 MB/s (see the fio sketch after this list)
- IOWAIT and system load also climb compared to the HDD array: system load reaches 8+ and IOWAIT runs 4% to 12% (moving files to the HDD array barely changes load or IOWAIT)
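One thing worth noting: with 48 GB of RAM, a transfer up to roughly that 9 GB mark can be absorbed by the Linux page cache before writeback throttling kicks in, so SMB numbers below that size may not reflect the disks at all. One way to take SMB and the page cache out of the picture is a direct-I/O fio run against the mounted array (the path and job values below are examples, not my actual mount point):

```
# sequential 1M writes with direct I/O (bypasses the page cache),
# sized past the ~9 GB point where transfers fall off
fio --name=seqwrite --filename=/srv/ssd-array/fio.test \
    --rw=write --bs=1M --size=20G --direct=1 \
    --ioengine=libaio --iodepth=16 --numjobs=1

# watch per-device utilization and latency while it runs
iostat -x 2
```

If fio shows the same cliff, the problem is in the disks/array; if not, it points back at SMB or caching.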
I have even tried ZFS, and the performance is worse still.
I also swapped the SAS controller, but that made no difference. The controller is an LSI 9211-8i HBA, fully updated and flashed to IT mode.
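In case it points anywhere, two quick checks on the SSD side are whether each drive actually negotiated 6.0 Gb/s behind the 9211-8i, and whether TRIM/discard survives the md layer (device name is a placeholder):

```
# negotiated SATA link speed per SSD (should report 6.0 Gb/s current)
smartctl -a /dev/sdb | grep -i 'sata version'

# discard (TRIM) support as seen through the md stack; all-zero
# DISC-GRAN/DISC-MAX columns mean discards are not passed down
lsblk -D
```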
Has anyone run into anything like this or know what else I should look at?