Hi all,
I just finished building a brand new machine for my OMV server. It's pretty beefy, with the intent (someday) of using ZFS/Btrfs once it's baked into OMV.
Anyway, doing some very basic tests, I'm seeing write speeds that just don't make sense. I'm used to a single drive being able to do (conservatively) about 50MB/sec or so on SATA. However, here's what I'm getting on OMV:
omv ~ $ dd if=/dev/urandom of=./test.dd bs=1048576 count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 155.324 s, 13.8 MB/s
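One thing I haven't ruled out is /dev/urandom itself — on some kernels the RNG tops out in the low tens of MB/s, which would make it (not the array) the bottleneck in that test. A quick sanity check that takes the disks out of the picture entirely:

```shell
# Measure how fast /dev/urandom alone can produce data -- no disk involved.
# If this also reports ~13 MB/s, the write test above never stressed the array.
dd if=/dev/urandom of=/dev/null bs=1M count=256
```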
That's, IMHO, exceptionally slow for a 4x2TB WD Red RAID5 array when the CPU is not even close to pegged.
Read speeds are at least in a range that seems plausible:
omv ~ $ dd if=./test.dd of=/dev/null bs=1048576
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 23.5908 s, 91.0 MB/s
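To measure the array itself rather than the random-number generator, I figure I could redo the write test from /dev/zero, with a sync at the end so the page cache doesn't inflate the number (the scratch filename is arbitrary):

```shell
# Write test that takes the RNG out of the equation: /dev/zero costs nothing
# to read, and conv=fdatasync forces the data to disk before dd reports a
# rate, so the page cache can't make the result look better than it is.
dd if=/dev/zero of=./test-zero.dd bs=1M count=1024 conv=fdatasync
rm ./test-zero.dd
```

If that comes back at a sane rate, the array is fine and the original test was bottlenecked upstream of the disks.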
Any ideas how the hell brand new hardware is getting a measly 13MB/sec write speed? What can I do to bring this up to something usable for running VMs/etc.? Are my expectations for software RAID in the wrong ballpark? I'd think that modern hardware wouldn't even break a sweat calculating parity at 5x that rate...
Anyway, I'm not sure what might be wrong, so I'm unsure what to post - please request info if I don't provide it.
Build:
- ASRock E3C224D2I server mobo
- Intel Xeon E3-1231 v3 CPU
- Crucial 16GB ECC (2x8GB) CT2KIT102472BD160B
- 4x2TB WD Red drives (RAID5 in OMV) connected to the four SATA3 ports
- 1x320GB Hitachi 2.5" drive for OMV boot/install drive
- OMV 1.9 fresh install as of last night
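Since the install is only a night old, it also occurs to me the array might still be doing its initial RAID5 sync, which would drag writes down until it finishes. I can check like this (guarded in case md isn't what's in play):

```shell
# A freshly created RAID5 array resyncs in the background; /proc/mdstat
# shows a progress bar while that's running. Guard the read in case the
# md driver isn't loaded.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "md driver not loaded"
fi
```

If it's mid-resync, `mdadm --detail /dev/md0` (assuming that's what OMV named the array) should show the same thing, and I'd expect write numbers to recover once the sync completes.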
Thanks!
Mike