Hi,
I am currently testing OMV for my new Aoostar WTR MAX NAS build.
Before throwing any critical data on it, I wanted to test some features and disk configurations.
(OMV 8.0.6-1, only OMV-Extras with MergerFS and Snapraid added)
I currently have 6x4TB WD RED drives installed: 3 old ones, 2 newer ones, and 1 almost unused drive.
I formatted all drives with BTRFS in single mode and created a MergerFS pool from five folders, MergerFS1-5, one on each drive.
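For context, the pool the plugin builds should be equivalent to a mergerfs fstab entry roughly like this (paths, policy, and minfreespace are illustrative guesses, not copied from my system):

```
# hypothetical example: one branch glob covering the MergerFS1-5 folders on each disk
/srv/dev-disk-by-uuid-*/MergerFS* /srv/mergerfs/pool fuse.mergerfs defaults,allow_other,category.create=mfs,minfreespace=4G 0 0
```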
So far everything looked good: the disks fill up according to the policy, and SMART reports fine on all drives.
I already tested the 32GB RAM I have installed, no errors.
At some point 2, 3, or even 4 disks lock up at the same time and get remounted read-only.
No error in the GUI, no information about why this happens, nothing.
I am pretty lost as to what to look for here...
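In case it helps anyone point me in the right direction, this is the kind of filter I've been running over the kernel log (the sample lines below are invented just to show what the filter catches, they are not my actual log):

```shell
# On the real system (assumes systemd) I'd run:
#   journalctl -k -b | grep -iE 'btrfs.*(error|warning|readonly)'
# Invented sample lines, piped through the same filter:
printf '%s\n' \
  'BTRFS error (device sdb): bad tree block start, want 30408704 have 0' \
  'BTRFS info (device sdb): forced readonly' \
  'ata3.00: configured for UDMA/133' \
| grep -iE 'btrfs.*(error|readonly)'
```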
Almost all search results for this behaviour tell the user: BTRFS is the superior filesystem, your drive is broken, no other filesystem could have detected this before!
User: OK thanks, I will buy a new drive tomorrow.
Sorry, but I don't think that 4 drives of different ages fail at the same time, in the very same second even...
I also reinstalled OMV yesterday and reconfigured the drive pool; same result: 4 out of 6 disks went into read-only mode overnight during a file copy...
Is there some kind of BTRFS service running in the background that cannot work correctly while the drives are constantly in use?
The result this morning was a lot of errors, with the single still-usable disk in the MergerFS pool filled up to the last remaining GB.
(The 6th drive is currently filled with SnapRAID parity data from a previous test that failed because the drives locked the content files; SnapRAID is currently not active.)
In the log I could find some errors, mainly for sda, ironically the drive that was still functioning. To my eyes it looks like the "bad tree block" error caused multiple disks to lock up at the same time, but I have no clue why...
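For the next test run I plan to watch the per-device error counters as well. `btrfs device stats <mountpoint>` prints counters in roughly the format below (the values here are made up for illustration), and a small awk filter pulls out only the non-zero ones:

```shell
# Real invocation would be, per data disk:
#   btrfs device stats /srv/dev-disk-by-uuid-<uuid>
# Made-up sample output, reduced to the counters that are non-zero:
printf '%s\n' \
  '[/dev/sdb].write_io_errs    0' \
  '[/dev/sdb].read_io_errs     0' \
  '[/dev/sdb].flush_io_errs    0' \
  '[/dev/sdb].corruption_errs  12' \
  '[/dev/sdb].generation_errs  3' \
| awk '$2 != 0'
```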
The file copy runs over the 10G fiber connection, copying from a 4x10TB RAID that reads at up to 600 MB/s, so all drives are under constant write load and the network is not a bottleneck. The system is basically always waiting for the 4TB drives to finish writing.