Is there btrfs support planned for omv6?

  • I had read that btrfs was going to be the flagship filesystem for omv6, so I was excited to spin up a copy of omv6 today. I got to playing around with it in a VM, and it seemed btrfs was still a second-class citizen.

    The raid options are all mdadm; I can't see a way to do btrfs raid arrays in the GUI.
    OK, so I manually created the array on the CLI.
    Mounting the btrfs volume through the GUI mounts it with none of the optimizations like noatime or compression (it just uses the defaults).
    Changing the mount options through config.xml and remounting it results in the filesystem going missing after every reboot.
    The GUI mount options only seem to pick up one disk from the array, and that disk seems to be random.
    The space usage and capacity numbers are not correct for the array.

    I was eyeballing using this for a NAS for my family, but it is in such a state that no one else would be able to fix anything when something went wrong.
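
    For reference, the kind of manual setup I mean looks roughly like this (device names, label, and mount point are just examples):

    # create a two-disk btrfs raid1 (data and metadata mirrored)
    mkfs.btrfs -L data -d raid1 -m raid1 /dev/sdb /dev/sdc

    # mount with the options I would expect a NAS to use by default
    mkdir -p /srv/data
    mount -o noatime,compress=zstd /dev/sdb /srv/data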

  • Have you ever considered that exactly the problems you are facing now with BTRFS are the reason that this file system is not fully integrated?

    Don't talk to me like I'm dumb.

    I am merely asking if this is going to be sorted out for omv6 or if I should just give up and move on.

    • Official Post

    Don't talk to me like I'm dumb.

    He wasn't. Since he is the developer of OMV and has only implemented basic support of btrfs, he is telling you that the problems you are having are why he hasn't implemented more advanced support. English isn't his native language and you are reading too much into the response.

    I am merely asking if this is going to be sorted out for omv6 or if I should just give up and move on.

    Unless someone else writes a more advanced plugin, I don't think you are going to see more advanced support. If you have a requirement on btrfs, then something else might be a better option.

  • He wasn't. Since he is the developer of OMV and has only implemented basic support of btrfs, he is telling you that the problems you are having are why he hasn't implemented more advanced support. English isn't his native language and you are reading too much into the response.


    Unless someone else writes a more advanced plugin, I don't think you are going to see more advanced support. If you have a requirement on btrfs, then something else might be a better option.

    Fair enough, maybe it's just the language barrier.

    I want something that is going to have the self-healing and error-correction capabilities of zfs or btrfs. You can put btrfs on an mdadm array, but it pretty much defeats the purpose of btrfs, because there is an opaque layer of abstraction between the disks and the filesystem, at least as far as btrfs knows. The error correction and self-healing cease to function.

    I much prefer the flexibility of btrfs arrays, where you can just sort of throw whatever disks at them you want. It really is the perfect filesystem for a home NAS. The problem is I can't find a btrfs-based NAS OS. TrueNAS/FreeNAS is zfs, but zfs is inflexible: you have to buy all the capacity upfront or destroy and recreate the pool to expand a vdev. You can only add redundancy to a vdev after creation, not capacity.

    I could build a server out to do exactly what I want it to do, but then there would be no easy GUI for other people to use to administer it when I'm not there.

    btrfs on omv would be the best of all worlds.

    • Official Post

    btrfs on omv would be the best of all worlds.

    I will see if I can recreate your issue in a VM. What features would you need in a btrfs plugin to allow others to administer it? And there is a zfs plugin (although not perfect), and with zfs 2.1 you will be able to expand pools by a disk at a time.


  • I will see if I can recreate your issue in a VM. What features would you need in a btrfs plugin to allow others to administer it? And there is a zfs plugin (although not perfect), and with zfs 2.1 you will be able to expand pools by a disk at a time.

    It would need to be able to create btrfs raid arrays (creation really only needs to be done once, so it's not critical, but it's probably easier than some of the others), mount them with optimized options (noatime, compress=, etc.) and keep them mounted, mount them in degraded mode when a disk fails, run btrfs replace on the failed disk, and run btrfs balance; after those three steps the array can just be remounted normally. It would also need btrfs scrub cron jobs to check and heal the array from corruption. Adding extra disks can be done on the fly, so that wouldn't be hard to add, and disk resize can also be done on the fly. Finally, it would need to show accurate btrfs filesystem size/availability.
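
    To put that in concrete terms, the underlying commands look roughly like this (the mount point, devid, and device names are only examples):

    # mount degraded when a member disk has died
    mount -o degraded,noatime /dev/sdb /srv/data

    # find the missing devid, then rebuild onto the replacement disk
    btrfs filesystem show /srv/data
    btrfs replace start 3 /dev/sdd /srv/data
    btrfs replace status /srv/data

    # rebalance, and scrub periodically (e.g. from cron) to detect and heal corruption
    btrfs balance start /srv/data
    btrfs scrub start /srv/data
    btrfs scrub status /srv/data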

    You have to offline them to expand partitions when replacing with a larger disk, but that could be done manually. Just getting the disk and array up and running at the original capacity should be enough to administer through a GUI. For example, if a 2 TB disk fails and you replace it with a 4 TB one, it will just use 2 TB until you expand the partition. But in the end you can add capacity to btrfs arrays pretty easily with new disks and/or replacement disks.
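
    Growing into a bigger replacement disk is then a one-liner once the underlying device or partition has the extra space; for example (devid 3 is just a placeholder):

    btrfs filesystem resize 3:max /srv/data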

    btrfs can also switch raid modes pretty easily, but that would be an advanced feature most people wouldn't need to use.

    btrfs also has good snapshot capabilities, but those are less important for a NAS than the ability to throw random hard disks at the array while using its built-in redundancy and error correction.


    I didn't know zfs was adding the ability to add capacity to a vdev. That has been its weak point for a while.


    As it is now, I think you can create a btrfs filesystem on top of an mdadm array, but I don't know why you would ever want to do that. When I was doing my testing I just made five 3 GB virtual SATA drives and made a raid6 out of them.

    • Official Post

    Ok, so on a fresh install of OMV 6 from ISO, I added four 8G disks and created a raid 0 array. In the web interface, I picked the option to mount. My only choice was /dev/sdb, but that isn't wrong, since I could've used any of the four disks to mount. It mounted and showed the correct size and free space. After a reboot, the array was mounted but the filesystem was still showing as missing. The fsname and dir entries in the mntent were correct and existed. The problem is that /proc/mounts shows:


    /dev/vdb on /srv/dev-disk-by-path-pci-0000-07-00.0 type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)


    The devicefile is /dev/vdb, but when the code gets the UUID of the filesystem and uses findfs to get the associated devicefile, it gets /dev/vdc, which explains why it shows as missing.


    root@btrfstest1:~# findfs UUID=da196a6c-67a6-40ea-9de2-c035cf34fb8c

    /dev/vdc


    I will have to think about a different way to associate these.
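
    For what it's worth, every member device of a multi-device btrfs carries the same filesystem UUID, so findfs just returns one of them more or less arbitrarily. One way to list all of them instead would be something like (using the UUID from above):

    btrfs filesystem show da196a6c-67a6-40ea-9de2-c035cf34fb8c
    lsblk -o NAME,FSTYPE,UUID | grep da196a6c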


    • Official Post

    It would need to be able to create btrfs raid arrays (creation really only needs to be done once, so it's not critical, but it's probably easier than some of the others), mount them with optimized options (noatime, compress=, etc.) and keep them mounted, mount them in degraded mode when a disk fails, run btrfs replace on the failed disk, and run btrfs balance; after those three steps the array can just be remounted normally. It would also need btrfs scrub cron jobs to check and heal the array from corruption. Adding extra disks can be done on the fly, so that wouldn't be hard to add, and disk resize can also be done on the fly. Finally, it would need to show accurate btrfs filesystem size/availability.

    That's a big list. It would be a substantial plugin. Not going to say it can't happen but I haven't even ported most of the omv-extras to OMV 6.x yet and the zfs plugin is going to be a huge re-write.


    You have to offline them to expand partitions when replacing with a larger disk, but that could be done manually. Just getting the disk and array up and running at the original capacity should be enough to administer through a GUI. For example, if a 2 TB disk fails and you replace it with a 4 TB one, it will just use 2 TB until you expand the partition. But in the end you can add capacity to btrfs arrays pretty easily with new disks and/or replacement disks.

    Taking disk(s)/array(s) offline is something OMV doesn't like.


  • That's a big list. It would be a substantial plugin. Not going to say it can't happen but I haven't even ported most of the omv-extras to OMV 6.x yet and the zfs plugin is going to be a huge re-write.


    Taking disk(s)/array(s) offline is something OMV doesn't like.

    Yeah, I know, but this is what real btrfs support would mean, at least on the GUI end. I had heard it was coming for omv6, even that it was going to be the central thing, but I guess that's just redditors.

    Yeah, but in order to repair the degraded array it has to be mounted with mount -o degraded. You can delete one of your virtual disks and replace it with one of another size to simulate a disk failure. I think it's btrfs filesystem show that shows you the devid of the missing disk; then btrfs replace start $missingdisknumber /dev/$replacementdisk fixes the array with the new one.

    • Official Post

    I had heard it was coming for omv6, even that it was going to be the central thing, but I guess that's just redditors.

    It was talked about for OMV 6 but was pushed off. But even then, it was never going to support raid.


  • I am also quite disappointed that, even though the initial plan was to adopt BTRFS as the standard, not even basic functionality is left in the GUI. But it is also hard to assess how relevant that is at all. BTRFS doesn't seem to be that popular. It's understandable, since RAID 5 is still not declared stable after all these years.

    • Official Post

    BTRFS doesn't seem to be that popular. It's understandable, since RAID 5 is still not declared stable after all these years.

    BTRFS has improved, in my opinion (anecdotally), but BTRFS RAID options, as you say, are still unstable, and we're talking several years at this point (BTRFS 1.0 was introduced in 2008). It's the slow pace of development that may be causing the loss of interest. Similar filesystem projects have died on the vine.

    I'm of the opinion that XFS (stable and very mature) will have COW, sub-volumes, snapshot abilities, etc., added before long. -> Article. If that's the case, XFS + LVM or, maybe, XFS layered onto traditional RAID might be a usable alternative to ZFS.

  • btrfs is actively developed (check the links on https://btrfs.wiki.kernel.org/index.php/Main_Page ).


    My desktop system is Tumbleweed. With 3 to 7 updates a week, I appreciate the ability to roll back after the odd breaking change. I really appreciate it when I install something and manage to break things beyond repair; a rollback is much quicker than a complete reinstall. For the base Debian system, while btrfs would provide some nice features, until Debian adopts it as a standard it would be best to stick with Debian's default.


    As a data system, btrfs has a lot of pluses:

    - can be used with non-raid data and raid metadata, providing the ability to detect bitrot

    - raid allows the automatic repair of bitrot

    - the file system can be converted between non-raid and any type of raid on the fly, quickly (a rough example follows after this list)

    - setting up raid on large drives is mercifully quick.

    - mixed drive sizes are not a problem, even with raid.

    - drives can be added to or removed from raid (or non-raid) configurations on the fly.
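
    A rough example of that on-the-fly flexibility (device names and mount point are just illustrations):

    # add a disk and convert data + metadata to raid1 while the filesystem stays mounted
    btrfs device add /dev/sdd /srv/data
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /srv/data

    # remove a disk later; its data migrates to the remaining devices
    btrfs device remove /dev/sdb /srv/data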


    The disadvantages (with OMV):

    - command line is required to set up or convert


    The disadvantages (with raid):

    - raid5 / raid6 can have failure issues when there is a simultaneous power failure and drive failure (however, this is not exclusive to btrfs)


    The raid issue can be resolved using a UPS, or as noted somewhere else in the forum, the metadata raid settings.
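
    The metadata approach mentioned above would look roughly like this (device names and mount point are examples): keep data at raid5/6 but keep metadata at raid1 (or raid1c3 on newer kernels).

    # at creation time
    mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd

    # or convert the metadata profile of an existing filesystem
    btrfs balance start -mconvert=raid1 /srv/data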


    The command-line btrfs configuration works well with OMV (or is it the other way around?). A plugin would make life simpler, allowing more people to use a rock-solid, versatile file system with the simplicity of the OMV interface.


    I encourage anyone thinking about btrfs, or anyone fearing btrfs because of purported issues, to read through the wiki link above and check out some of the video presentations (particularly the Facebook one).

    • Official Post

    I encourage anyone thinking about btrfs, or anyone fearing btrfs because of purported issues, to read through the wiki link above and check out some of the video presentations (particularly the Facebook one).


    Balance the above out with the -> BTRFS status page which has been the same for years, give or take a couple of issues.

    And there's this account of BTRFS performance by an experienced enterprise admin.

    The CLI utilities for repair left a few things to be desired. (On the other hand, that's what backup is for.)


    Lastly, BTRFS performance is poor, if connected by USB. -> Performance (NAS users are very likely to do this.)
    _______________________________

    Don't get me wrong here. Filesystem / Kernel integration is a big deal. I could overlook BTRFS' performance, if the project could get just a few more nit-picks ironed out.

    I looked at your link and would note that I'd like to see the project focus on the features they advertised that don't work well, versus adding new features. Still, your point is taken.

  • BTRFS's documentation hasn't been updated in forever. They really need to update it. They need some PR people as well.

    The raid5/6 stuff is the write hole problem, which exists for every raid solution everywhere unless it does extensive logging the way zfs does. Extensive logging, a UPS, or a literal battery on a raid card is the only way around it. This problem is not exclusive to btrfs in any way, but for some reason people like to talk about it with that particular implementation of raid. ZFS's is the only widely used non-battery solution I know of. mdadm has one, but no one uses it: mdadm has this exact same write hole problem, which you can plug by adding a separate flash device and creating the mdadm raid with --write-journal pointing at that device. The journal writes the parity and data ahead of the real writes, so if you ever hit the write hole the array can read back from the journal to make sense of things.
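
    For anyone curious, the mdadm side of that would look roughly like this (device names are examples; the journal should live on a fast SSD/NVMe):

    mdadm --create /dev/md0 --level=5 --raid-devices=4 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde \
          --write-journal /dev/nvme0n1p1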

    Maybe this is the fault of the btrfs devs for focusing on this universal raid problem when no other dev teams for other raid solutions do. It's just a given that the write hole problem exists with raid. It's a raid problem, not just a btrfs problem. Same thing with ECC and zfs: zfs isn't any more vulnerable to non-ECC RAM causing issues than any other filesystem, but somehow people view that as a zfs thing. Whatever is bad in RAM will be bad on disk, regardless of your filesystem. That being said, mdadm, the default raid solution on OMV, has the exact same write hole problem that btrfs has, and I'm willing to bet 99.9% of people running it aren't running a write journal device and aren't worried about it. These caricatures of filesystems are more myth/misunderstanding than reality. If you pick raid5/6 in mdadm and run it without a journaling device, you are in no better a situation than you would be with raid5/6 on btrfs.

    I will say that in Linux, if you want to see where things are going to end up, watch what Fedora does. They are the trendsetters of the Linux world, and they have defaulted to btrfs. SUSE already runs btrfs, and now Fedora does, which means eventually RHEL will, which means even more development and support once the entirety of the enterprise Linux world runs it. BTRFS is more widely used and more popular than it has ever been; it just hasn't hit the mainstream desktop world of Linux yet. It's on the cusp.

    I'm not sure what to make of the USB performance numbers, but there are optimizations to be made when mounting any filesystem for a particular use. I wouldn't put it past someone to just leave it at mount -o defaults (OMV does this) and be surprised when the performance isn't optimal across different buses.

    • Official Post

    I'm using ZFS with ECC RAM and I'm using a UPS. (I don't see a UPS or whole-house surge protection as optional.) I'm giving thought to a small SSD for a ZIL drive but, since I'm working with the typical 1Gb network bottleneck, I wouldn't expect much benefit.

    • Official Post

    So the write hole is not even BTRFS-specific?! OK, that's kind of weird. I really never heard about that. 🤔

    Traditional hardware and mdadm RAID have had the write hole since the beginning. Lose power during a write and it meant corruption. As previously mentioned, the fix was either to use a UPS or, in some cases, a RAID adapter with a battery or a type of capacitor. I had an old Adaptec HBA with a battery.
