How best to use BTRFS for both OS and Data in OMV6?

  • I made brief use of btrfs circa 2015, but not since. Now I’m thinking of moving a desktop/laptop install to btrfs and I’m wondering how btrfs might best be used on OMV6 for both OS and data drives. I know forum members macom and Soma do this and others probably do the same.


    Obviously this means resorting to the CLI and making use of snapper (with snap-sync?) and/or btrbk. Rolling back the OS is made convenient by using snapper with grub-btrfs. Limitations I’m aware of when using btrfs for both OS and data include:


    1. There seems little point in adopting a multi-subvolume layout for the OS as the system and various plugins write/read from many different files/dirs on the root filesystem. Hence OS snapshots can waste space.


    2. For the data drives, which are likely to be in a raid0, 1 or 10 profile, OMV6 controls the relevant “/etc/fstab” entries. You cannot selectively mount btrfs subvolumes. The btrfs mount options are controlled by the environment variable OMV_FSTAB_MNTOPS_BTRFS, which is a global setting.


    3. Using snapper to snapshot subvols on the data drive(s) is not ideal, as the snapshots are created on the same subvol. So while you can access individual files and dirs in a snapshot on the server, or via SMB/CIFS using shadow copies, you cannot easily roll back to a previous version of the subvol. Without the limit imposed by 2 above you could create dummy directories for snapper to use and mount subvolumes at these points. This limitation does not exist if you use btrbk, as subvol snapshots are placed in a directory outside of the subvol.


    4. Part of configuring snapper can mean the creation of ACLs on snapper directories. I don’t know if this can cause problems elsewhere in OMV6.


    5. The use of docker and/or virtual machines on any btrfs filesystem can result in poor performance. What is the impact and what can you do to reduce it? Turning off CoW and checksums by using “nodatacow” on selected subvols on anything with a raid profile sounds like a recipe for problems.


    Feedback on any of the above, especially point 5, would be appreciated.
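
    A minimal sketch of the usual mitigation for point 5 (the path here is hypothetical; note that chattr +C only takes effect for files created after it is set, and that disabling CoW also disables checksumming for those files):

    ```shell
    # Hypothetical location for VM images / docker volumes; adjust to your layout.
    mkdir -p /srv/btrfs-data/vm-images

    # Set the No_COW attribute on the still-empty directory so that
    # files created inside it inherit it; existing files are unaffected.
    chattr +C /srv/btrfs-data/vm-images

    # Verify: the directory should show the 'C' flag.
    lsattr -d /srv/btrfs-data/vm-images
    ```

    Since No_COW files are not checksummed, they lose self-healing on a raid profile, which is exactly the trade-off point 5 worries about.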

  • votdev Thanks for the links. The possibility of creating BTRFS raid profiles, shared folders as subvols and snapshots via the webui is really a significant enhancement. When is it likely to go live?


    With the separation of OS and data in OMV, there's no reason why you can't still deploy snapper + grub-btrfs on the OS for convenient rollback. Something to perform btrfs send/receive of OS and/or data would be a real bonus.
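
    For the snapper side, a rough sketch assuming a Debian-based OMV6 install with the root filesystem already on btrfs (grub-btrfs is not in the Debian repos, so it typically has to be installed from its upstream project):

    ```shell
    apt-get install snapper

    # Create a snapper config for the root filesystem.
    snapper -c root create-config /

    # Manual snapshot before a risky change.
    snapper -c root create --description "before-omv-upgrade"

    # List existing snapshots.
    snapper -c root list
    ```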

  • 5. The use of docker and/or virtual machines on any btrfs filesystem can result in poor performance. What is the impact and what can you do to reduce it? Turning off CoW and checksums by using “nodatacow” on selected subvols on anything with a raid profile sounds like a recipe for problems.

    Had this in draft, sorry.


    What I did to mitigate this was to create the folders I needed and give them chattr +C before having them populated with files/folders.

    If the folder already had files in it, the attribute would only be set on new files/folders created afterwards.


    As for the OS, remember that I'm running Pis, no GRUB (thank the Gods).


    1 -

    I can't say much on performance other than, it's fast for me on what I use it for.

    My boot time is around the 45/50 second mark, even running a BTRFS snapshot on boot (with 2 drives as DATA)

    OS runs on SSD and snapper takes care of / subvol snapshots (/dev/sda3) and @docker && @appdata (/dev/sda4) where docker root and ALL my docker-configs live.
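
    For reference, a layout like that might be mounted with fstab entries along these lines (an illustrative sketch only; the subvolume names mirror the post, but the mount options are assumptions):

    ```
    # /etc/fstab (sketch)
    /dev/sda3  /                btrfs  defaults,noatime,subvol=@         0  0
    /dev/sda4  /var/lib/docker  btrfs  defaults,noatime,subvol=@docker   0  0
    /dev/sda4  /srv/appdata     btrfs  defaults,noatime,subvol=@appdata  0  0
    ```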


    The other Pi OS runs on an ext4 SD card and takes the same amount of time to boot, with different services running (but with 4 drives as DATA)


    On both Pis’ DATA drives, there’s no snapshots folder created (at the time, I didn’t have much knowledge on what to use it for)


    I can show some outputs later on tonight (ATM I have some maintenance on the compound and power is ON and OFF).



    Regarding 2, 3 & 4

    I'm sorry but that is way over my head, :)

    Now that it's available via GUI, I need to start testing on the VMs and see what to do with it, ;)

  • 8| I got lost at snapper, I thought that was a fish :)

    Funny thing is: IT IS A FISH, :D



  • Soma Thanks for the feedback here & on the other thread. TBH, I'm not convinced of the use case for installing OMV6 on a BTRFS filesystem. Sure, you gain snapshots of the root fs and the ability to boot into a previous working OMV6 after a serious config failure, but I don't see that segregating the various parts of the root fs into subvols is of much use given the way OMV6 works. Backing up such an install is not so straightforward either.


    So it boils down to whether you are persuaded by the argument that data expansion is easy if you use BTRFS, that snapshots give you access to previous versions of data via SMB/CIFS shares where shadow copies are configured, and that checksums can offer self-healing for certain raid profiles. But BTRFS RAID is not like traditional RAID, as redundancy is at the chunk level, not the device level. For BTRFS RAID1 you're probably better off starting with three drives instead of two, to avoid having to mount in degraded mode if one out of two drives fails. Ongoing maintenance means occasional scrubs and the possible use of balance.
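
    As a concrete sketch of that maintenance routine (device names and mount point are placeholders):

    ```shell
    # Three-device RAID1: each data and metadata chunk is mirrored
    # across two of the three devices.
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd

    # Occasional scrub: read everything and repair from the good copy
    # where a checksum mismatches.
    btrfs scrub start /mnt/data
    btrfs scrub status /mnt/data

    # Check allocation and, if chunks are unevenly spread, run a
    # filtered balance rather than a full one.
    btrfs filesystem usage /mnt/data
    btrfs balance start -dusage=50 -musage=50 /mnt/data
    ```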

  • I'm not convinced of the use case for installing OMV6 on a BTRFS filesystem. Sure, you gain snapshots of the root fs and the ability to boot into a previous working OMV6 after a serious config failure, but I don't see that segregating the various parts of the root fs into subvols is of much use given the way OMV6 works. Backing up such an install is not so straightforward either.

    In my defense I only did it on the Pi for the fun of following the procedure (and learning a lot with it).

    Now, honestly, it doesn't make much sense to me.

    Well, you only learn through test-and-trial, :)


    The other Pi runs on a SDcard and behaves in the same way as the one with BTRFS SSD.


    Cloning both Pis' OS drives is done the same way:

    Power down, make clone on PC, flash to a same size SSD/SDcard, plug clone and boot.
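
    The clone step can be done with plain dd on the PC (a generic sketch; sdX/sdY are placeholders, so double-check device names with lsblk first, as dd will happily overwrite the wrong disk):

    ```shell
    sudo dd if=/dev/sdX of=pi-os.img bs=4M status=progress conv=fsync
    sudo dd if=pi-os.img of=/dev/sdY bs=4M status=progress conv=fsync
    ```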


    No need for the added complexity of having to configure what and whatnot to run the OS.

    Since it's running without issues, I've left it until I migrate it and decide if I will do it in this or that way.


    Ongoing maintenance means occasional scrubs and the possible use of balance.

    Will shorten to only this, for now:

    You mean balance in case of a 3 drive RAID1, correct?


    On a 2-drive RAID1 with -m raid1 -d raid1, the balance is automatic.
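
    For the three-drive case, the usual sequence after adding a device is an explicit balance so existing chunks get spread across all three (paths are placeholders):

    ```shell
    # Add a third device to an existing two-device RAID1.
    btrfs device add /dev/sdd /mnt/data

    # Full balance so previously written chunks are redistributed.
    btrfs balance start /mnt/data
    ```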

  • For those of us that already had BTRFS RAID volumes created manually, is it best to delete and re-create these using the new GUI tools? I'm wondering if I might run into issues later with hand-crafted volumes.

  • Haven't read it with clear eyes yet, but did votdev add support for BTRFS RAID???


    I was convinced it was only for BTRFS volumes (single drive).


    [EDIT]

    Should have read PRIOR to post:

    • Enhance the BTRFS file system creation page by adding support of profiles to create various RAID levels.

    OK getting back on track on a side note from tweets, snapper and CoW's -> this got my attention purely from the fact that it looked interesting, but was about as clear as mud


    EDIT: There is also some information on btrfs.readthedocs

  • it looked interesting

    Thank you for the link, some good reading.

    Really loved the btrfs disk usage calculator pic at the bottom, ;)


    but was about as clear as mud

    You already have an advantage due to your knowledge of how MDADM works.


    IIRC:

    BTRFS will pump up the profiles to give even more safety on the DATA, since it can make copies of the blocks containing the DATA and metaDATA, not only on other disks of the RAID but also on the same disk, depending on what profile is selected.


    On a real basic newbie level, the RAID1 profile is a 1:1 copy from one disk to the other (same as the profile used on MDADM).

    This will survive 1x disk failure (again, same as MDADM)


    Now adding other features:

    You want to have bitrot protection on a FS.

    You can do a pseudo "RAID1" with just 1x disk using the DUP profile. This will create 2x copies of every block ON the same disk, reducing its usable size to half.
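
    That DUP setup can be created at mkfs time (the device name is a placeholder); it protects against bitrot and bad sectors, but not against the whole disk failing:

    ```shell
    # Two copies of every data and metadata block on the same device,
    # so usable space is roughly halved.
    mkfs.btrfs -d dup -m dup /dev/sdb
    ```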


    This is just to name a few examples.

    Of course, testing and trying will give better knowledge of what is best.


    The added bonus of this is that you can always change the profiles whenever you want.

    Add to this the ability to use different disk sizes or types, and to add or remove disks whenever you want without needing to fail a disk and resilver.
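
    As a sketch of the profile change and disk add/remove just described (device names and mount point are placeholders):

    ```shell
    # Convert a single-device filesystem to RAID1 after adding a
    # second disk - no rebuild/resilver step, just a balance.
    btrfs device add /dev/sdc /mnt/data
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/data

    # Removing a device later migrates its chunks off first
    # (only possible while enough devices remain for the profile).
    btrfs device remove /dev/sdc /mnt/data
    ```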

    Again, IIRC, :)


    As for the (still) existing issues with RAID5/6, I think that's the profile I will try first (VMs of course).


    The tryouts begin, :D

  • Sorry, I edited the post immediately after.


    I saw it already, thank you, ;)


    The added bonus of this is that you can always change the profiles whenever you want.

    Add to this the ability to use different disk sizes or types, and to add or remove disks whenever you want without needing to fail a disk and resilver

    You should write this stuff up; this is beginning to make some sort of sense. You have a better way of explaining it than the stuff I've read
