Hello,
First time on this forum. I'm new to OMV but quite used to Linux and servers in general, and I decided to give OMV a try on a new server.
I've been a long-time Synology user, but since they will have to comply with the upcoming "legal Australian backdoor", I preferred to move to an OS that doesn't have anything to sell! That way there is no reason for OMV to comply with these crazy laws. And let's face it, I like new challenges!
I figure Synology RAID has been using BTRFS for quite some time in their production machines, and they advertise it heavily. I had absolutely no problems with the RAID5-type arrays I've been stress-testing over the past two years: no hiccups, and the monthly integrity reports and checks showed no errors at all. And anyway, I don't think Synology would advertise it that way without heavy testing to back up their claims.
However, the BTRFS kernel project lists RAID5 and 6 as unstable, leaving their implementation to software integrators...
This contradiction made me wonder why there is such a mismatch in the perceived maturity of this filesystem.
I started by looking at how Synology RAID differs from classic RAID. Then I read on the Level1 forum that Synology RAID sits on top of LVM!!! I came to the conclusion that Synology configures an LVM volume group on each disk right away, then uses the logical volume from that group to build the array.
So, why not try this approach with OMV?!?
I have a new system, nothing stored on it yet... Let's do it.
Last detail, THIS IS A TEST, it is not meant to be used on a production machine. Don't do it!
Enough for the introduction. The newly assembled system is built around an Intel C236 chipset with a low-power E3-1260L v5 processor and 16GB of ECC UDIMM. There is a RAID controller embedded in the system, but obviously I won't use it here.
One major constraint is the data migration: I have to migrate from a Synology RAID array with only a limited number of hard drives available, so I figured I would start with a degraded RAID5 array and add disks later. As a side note, / is on a separate SSD.
To set this up I installed OMV (v4.1.21), then the LVM plugin (omv-lvm2 v4.0.7-1 at the time of writing) from the main repository.
Then I used the following two links to help me with the setup and the mdadm syntax:
http://blog.mycroes.nl/2009/02…ingle-disk-to-3-disk.html
https://translate.google.com/t…raid-1-un-raid-5-sin.html
Creating the LVM for each disk is easily done through the webUI. Very simple: take one disk, for instance /dev/sda, create an sda-LVM volume group, then an HDD0-LVM logical volume inside it. This way it matches the numbering I'm used to putting on the physical hard drives, a good thing to avoid touching the fan.
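For reference, the webUI steps boil down to standard LVM commands. For anyone who prefers the shell, something along these lines should give the same result (I went through the webUI myself, so take this as a sketch; the names just match my scheme above):
pvcreate /dev/sda                          # mark the whole disk as an LVM physical volume
vgcreate sda-LVM /dev/sda                  # one volume group per disk
lvcreate -l 100%FREE -n HDD0-LVM sda-LVM   # one logical volume spanning the whole group
# repeat for /dev/sdb with sdb-LVM / HDD1-LVM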
BTW, I am soooo pleased with this project. OMV rocks! Seriously, a new era of open-source self-hosting is upon us. Thanks to OpenMediaVault, its active community and Volker!
After that, I had to identify the device names of the volumes I had just created, so I could pass them to mdadm:
root@openmediavault:~# fdisk -l
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdc: 111.8 GiB, 120034123776 bytes, 234441648 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc72f6d50
Device Boot Start End Sectors Size Id Type
/dev/sdc1 * 2048 217806847 217804800 103.9G 83 Linux
/dev/sdc2 217808894 234440703 16631810 8G 5 Extended
/dev/sdc5 217808896 234440703 16631808 8G 82 Linux swap / Solaris
Disk /dev/mapper/sda--LVM-HDD0--LVM: 3.7 TiB, 4000783007744 bytes, 7814029312 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/sdb--LVM-HDD1--LVM: 3.7 TiB, 4000783007744 bytes, 7814029312 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
The two logical volumes created are /dev/mapper/sd$--LVM-HDD#--LVM
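By the way, instead of digging through the fdisk output, lvs or lsblk shows the same thing more directly:
lvs -o lv_name,vg_name,lv_path,lv_size   # lists HDD0-LVM / HDD1-LVM with their /dev paths
lsblk /dev/sda /dev/sdb                  # shows the dm device sitting on top of each disk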
Let's go ahead and create a new array. BTW, I tried to make a RAID6 first, but it failed because mdadm requires a minimum of 4 disks for that level.
Maybe I could have forced it somehow, but I figured it is not complicated to migrate from RAID5 to RAID6 later.
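For the record, once two more disks are available, mdadm can reshape the array in place. I haven't tested this yet, so take it as a sketch with hypothetical device and LV names:
# Assumption: HDD2/HDD3 logical volumes have been created the same way as HDD0/HDD1.
mdadm --add /dev/md0 /dev/mapper/sdd--LVM-HDD2--LVM /dev/mapper/sde--LVM-HDD3--LVM
mdadm --grow /dev/md0 --level=6 --raid-devices=4 --backup-file=/root/md0-reshape.bak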
Anyhow, this is the syntax and the output:
root@openmediavault:~# mdadm --create /dev/md0 --level=6 --raid-devices=2 /dev/mapper/sda--LVM-HDD0--LVM /dev/mapper/sdb--LVM-HDD1--LVM
mdadm: at least 4 raid-devices needed for level 6
root@openmediavault:~# mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/mapper/sda--LVM-HDD0--LVM /dev/mapper/sdb--LVM-HDD1--LVM
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
root@openmediavault:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 dm-1[2] dm-0[0]
3906883584 blocks super 1.2 level 5, 512k chunk, algorithm 2 [2/1] [U_]
[>....................] recovery = 0.3% (12684256/3906883584) finish=430.1min speed=150866K/sec
bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices: <none>
root@openmediavault:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Mon Apr 15 15:12:28 2019
Raid Level : raid5
Array Size : 3906883584 (3725.89 GiB 4000.65 GB)
Used Dev Size : 3906883584 (3725.89 GiB 4000.65 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Mon Apr 15 15:14:38 2019
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Layout : left-symmetric
Chunk Size : 512K
Rebuild Status : 0% complete
Name : openmediavault:0 (local to host openmediavault)
UUID : 00000000:11111111:22222222:33333333
Events : 29
Number Major Minor RaidDevice State
0 253 0 0 active sync /dev/dm-0
2 253 1 1 spare rebuilding /dev/dm-1
Then I left the machine alone while the array was building/rebuilding.
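If you want to keep an eye on the rebuild, these do the job:
watch -n 30 cat /proc/mdstat                         # refreshes the progress line every 30 seconds
mdadm --detail /dev/md0 | grep -E 'State|Rebuild'    # quick status summary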
Then I formatted the volume with BTRFS and created a shared folder.
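For completeness, the command-line equivalent of that step would roughly be the following (the mount point is just an example here, OMV mounts filesystems under its own paths):
mkfs.btrfs -L data /dev/md0        # single-device btrfs on top of the md array
mkdir -p /srv/data
mount /dev/md0 /srv/data
btrfs filesystem show /dev/md0     # sanity check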
I'm copying some data over at the moment, and then I will stress these two new disks a little. I am fairly confident about them; they were tested a bit beforehand.
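The next step, once the Synology array is empty, will be to wrap one of its freed disks in LVM the same way and grow md0 from 2 to 3 devices. Not done yet, so treat this as a sketch with hypothetical device names:
# Assumption: the freed disk shows up as /dev/sdd and gets the same LVM treatment as the others.
pvcreate /dev/sdd
vgcreate sdd-LVM /dev/sdd
lvcreate -l 100%FREE -n HDD2-LVM sdd-LVM
mdadm --add /dev/md0 /dev/mapper/sdd--LVM-HDD2--LVM
mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-grow.bak
# once the reshape finishes, the btrfs filesystem still has to be grown to use the new space:
btrfs filesystem resize max /srv/data    # use your actual mount point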
I hope this will help.
Thanks for reading this!
Updates coming soon.
Feel free to tear apart whatever I got wrong.