Write speed fluctuations
-
- OMV 5.x
- someuser08
-
-
Please share hardware specifications of your box. I am using a different setup but my SMB speeds are always rock solid at about 112 MB/s.
-
It looks as if you have introduced some bottleneck(s) into your configuration. And there are not enough bits pouring out of the bottle to fill the cable. Or out of the cable into the bottle.
I would start by testing without any form of RAID.
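A plain dd write gives a rough sequential figure to compare against; the path below is a placeholder for wherever you mount the single disk under test:

```shell
# Placeholder path: point TARGET at the mount point of the single-disk
# filesystem you are testing.
TARGET=${TARGET:-/tmp}
# conv=fdatasync flushes the data to disk before dd reports its rate,
# so the number reflects the drive rather than the RAM cache.
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=256 conv=fdatasync
rm "$TARGET/ddtest.bin"
```

On a healthy single CMR disk you would expect a reasonably steady result here; big dips on a bare disk would point away from RAID/btrfs as the cause.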
-
-
It's an ASRock J4105-ITX / 8 GB RAM / 256 GB SSD plus the drives. Nothing special.
I wonder if that is a sign of an SMR drive (which would explain the drops, but that would be weird for a surveillance HDD with optimized writes...).
If that is the case, I have a spare SSD - would caching help here?
-
After trying without RAID I would try using some other non-checksumming filesystem. Perhaps ext4.
I would avoid introducing more complexity. Instead try to reduce complexity.
Also test writing from some other machine, from SSD, to establish where the bottleneck really is.
Purple drives are great for writing, especially sequentially. I suspect that using purple drives with RAID may stripe the data and make it appear more like small random writes. Thrashing performance. Only a suspicion...
SMR is great for surveillance. Sequential writes. But a disaster for RAID not designed for SMR.
Add btrfs and checksums and it might make things even worse.
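On a reasonably recent kernel you can at least check whether the drives advertise zoned (SMR) behaviour via sysfs. This is only a partial check: drive-managed SMR disks, which are common in the consumer space, still report "none":

```shell
# Prints the zoned model for each block device: "none" (conventional or
# drive-managed SMR), "host-aware", or "host-managed".
for z in /sys/block/*/queue/zoned; do
  echo "$z: $(cat "$z")"
done
```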
-
Is there a way to see the real-time speed when doing writes from the command line? I tried dd locally to the array and got about 100 MB/s, which is less than expected, but it's difficult to confirm the spikes...
-
-
Does performance improve and the instability go away if you don't use RAID and use ext4 instead of btrfs? Should be simple enough to test?
I often use iftop or iotop to check performance...
iftop - network
iotop - disks
You may need to install and run them as root.
sudo apt install iftop
sudo iftop
-
great, thanks
iotop shows the same fluctuations while I'm running dd: from 250 MB/s down to kilobytes. So it is the array.
Unfortunately I have spent 2 days copying the data from the old NAS (which was much slower, 30-50 MB/s, which is why I didn't notice this problem initially), so I'm not keen to destroy the volume just yet without knowing what I'm going to replace it with (or knowing all the options that need testing). I'm assuming it's the btrfs/RAID combination that is the culprit, so now the question is what I should replace it with...
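For the record, dd can also print a live rate itself via status=progress, which makes the dips visible as they happen; /tmp below is a placeholder for the array's mount point:

```shell
# status=progress prints the running throughput about once per second;
# conv=fdatasync makes the final figure reflect the disks, not the cache.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 status=progress conv=fdatasync
rm /tmp/ddtest.bin
```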
-
And FYI, why I chose btrfs in the first place: I wanted RAID1 with 2 disks that can be grown to a maximum of 3 (with usable capacity still 50%). Are there any other options out there that can do the same?
-
-
Revise your SMB cache; I changed mine to 128K and it works better.
Try SO_RCVBUF=131072 SO_SNDBUF=131072 in the extra options in the webGUI, and check 64K, 128K and 256K to see if something changes.
PS: you need to stop and then start SMB (restart SMB so the changes are applied) before you start the performance test.
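For reference, the suggestion above corresponds to this smb.conf setting (shown with the 128 KiB buffers; in OMV it goes into the SMB/CIFS extra options field rather than a hand-edited file):

```ini
socket options = SO_RCVBUF=131072 SO_SNDBUF=131072
```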
-
As I figured out, it's not SMB related; it's the HDD/btrfs/RAID combination...
-
Is there an alternative that can do something RAID1-like with 3 disks?
-
-
With 3 disks you are stuck with RAID5 or SnapRAID (two drives filled with data, one with parity).
If you want RAID1-like features, take a look at RAID 10, but it requires 4 disks.
-
Is there an alternative that can do something RAID1-like with 3 disks?
With 3 disks you can test ZFS RAIDZ1.
https://docs.oracle.com/cd/E19…819-5461/gcvjg/index.html
more info: [HOWTO] Instal ZFS-Plugin & use ZFS on OMV
-
But it's not possible to move from 2 disks to 3 without data loss, right?
-
-
But it's not possible to move from 2 disks to 3 without data loss, right?
Yes, that's not possible.