First time on this forum; I'm new to OMV but quite used to Linux and servers in general, so I decided to give OMV a try on a new server array.
I'm a long-time Synology user, but since they will have to comply with the upcoming "legal Australian backdoor", I preferred to move to an OS that doesn't have anything to sell! That way there is no reason for OMV to comply with these crazy laws. And let's face it, I like new challenges!
I figure Synology RAID has been using BTRFS in their production machines for quite some time, and they advertise it heavily. I had absolutely no problems with the RAID5-type arrays I've been stress-testing over the past two years: no hiccups, and the monthly integrity reports and build checks showed no errors at all. And anyway, I don't think Synology would advertise this way without doing some heavy testing to back up their claims.
However, the BTRFS kernel project lists RAID5 and 6 as unstable, leaving their implementation to software integrators...
This contradiction made me wonder why there is such a mismatch in the perceived maturity of this filesystem.
I started by looking into how Synology RAID differs from classic RAID. Then I read on the Level1 forum that Synology RAID sits on top of LVM!!! I came to the conclusion that Synology sets up LVM on each disk right away, then uses the resulting logical volume from each disk to build the array.
So, why not try this approach with OMV?!?
I have a new system, nothing stored on it yet... Let's do it.
Last detail, THIS IS A TEST, it is not meant to be used on a production machine. Don't do it!
Enough for the introduction: the newly assembled system is built around an Intel C236 chipset with a low-power E3-1260L v5 processor and 16GB of ECC UDIMM. There is a RAID controller embedded in the system, but obviously I won't use it here.
One major constraint I have is the data migration: I have to migrate from a Synology RAID array with a limited number of hard drives. So I figured I would create a degraded RAID5 array. As a side note, / is on a separate SSD.
To set this up I had to install OMV (v4.1.21), then install the LVM plugin (omv-lvm2 v4.0.7-1 at the time of writing) from the main repository.
Then I used the following two links to help me with the setup and syntax of mdadm:
Creating the LVM for each disk is easily done through the web UI. Very simple: take one disk, for instance /dev/sda, create an sda-LVM volume group, then an HDD0-LVM logical volume. This way it matches the numbering I'm used to putting on the physical hard drives, a good thing to avoid touching the fan.
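For reference, the same per-disk LVM setup can be done from the command line. This is only a sketch of what the web UI does under the hood, assuming /dev/sda is the disk and using my naming scheme:

```shell
# Mark the whole disk as an LVM physical volume
pvcreate /dev/sda

# Create a volume group named after the disk
vgcreate sda-LVM /dev/sda

# Create one logical volume spanning the whole group
lvcreate -l 100%FREE -n HDD0-LVM sda-LVM
```

Repeat with /dev/sdb and HDD1-LVM for the second disk.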
BTW, I am soooo pleased with this project. OMV rocks! Seriously, a new era of open-source-self-hosting is upon us. Thanks to OpenMediaVault, its active community and Volker!
After that, I had to identify the volumes that were created, so I could pass them to mdadm:
The two logical volumes created are /dev/mapper/sd$--LVM-HDD#--LVM
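If you want to double-check the device-mapper names on your own system, these standard LVM/coreutils commands will show them (nothing here is specific to my setup):

```shell
# List all logical volumes with their volume group and size
lvs

# The device-mapper nodes that mdadm will use
ls -l /dev/mapper/
```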
Let's go ahead and create a new array. BTW, I first tried to make a RAID6, but it failed, requesting a minimum of 4 disks.
I could probably have forced it somehow, but I figured migrating a RAID5 to a RAID6 later is not complicated.
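For the record, the later RAID5-to-RAID6 migration should look roughly like this with mdadm. A sketch only: it assumes the array is /dev/md0 and that two additional disks have already been prepared as logical volumes (the device names here are hypothetical, following my naming scheme):

```shell
# Add the new devices to the array as spares first
mdadm --add /dev/md0 \
    /dev/mapper/sdc--LVM-HDD2--LVM \
    /dev/mapper/sdd--LVM-HDD3--LVM

# Reshape from RAID5 to RAID6 across 4 devices;
# the backup file protects the critical section during the reshape
mdadm --grow /dev/md0 --level=6 --raid-devices=4 \
    --backup-file=/root/md0-grow.backup
```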
Anyhow, this is the syntax and the output:
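The key trick for a degraded array is mdadm's literal keyword `missing`, which reserves a slot for a member that isn't present yet. A sketch of the creation command, assuming /dev/md0 and my two logical volumes:

```shell
# Create a 3-device RAID5 with only 2 members present;
# "missing" holds the slot for the drive still in the Synology
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/mapper/sda--LVM-HDD0--LVM \
    /dev/mapper/sdb--LVM-HDD1--LVM \
    missing
```

Once the old array is emptied, the freed drive can be added with `mdadm --add /dev/md0 <device>` and the array will rebuild to a clean state.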
Then I left it alone while the array was building.
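Build progress can be watched from /proc/mdstat; these are standard commands, nothing specific to this setup:

```shell
# One-shot status of all md arrays
cat /proc/mdstat

# Refresh every 2 seconds until the build finishes
watch cat /proc/mdstat

# More detail, including array state and rebuild progress
mdadm --detail /dev/md0
```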
Then I formatted the volume as BTRFS and created a shared folder.
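If you prefer the shell over the web UI, the formatting is a one-liner (the label is my own choice, pick whatever you like):

```shell
# Format the md array as BTRFS with a human-readable label
mkfs.btrfs -L nas-data /dev/md0
```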
I'm copying some data at the moment; then I will stress these two new disks a little. I'm confident about them, as they were tested a bit beforehand.
I hope this will help.
Thanks for reading this!
Updates coming soon.
Feel free to tear the hell out of my wrongdoings.