• BTRFS, I know the raid implementation is still somewhat suspect and the recommendation is not to use it in production.

    Is it so? My most recent info was that the danger in using btrfs raid5 only exists without power redundancy, which is an actual issue for any RAID solution. Thus, a power cut would be potentially dangerous on btrfs just like on any other RAID technology. On the other hand, as long as there is no power cut, btrfs would be as safe as any other solution. Did I miss something?

  • I'm the same, but one of the things I have found out is that btrfs cannot fix bitrot it detects when sitting on top of mdadm, as there is only one copy of the data.

    Hence the RAID is created only via BTRFS and not MDADM.

    Thing is, there are a lot of ways to make the RAIDs, with different nuances in the metadata and data profiles (duplication), that will require some reading to understand what they can/can't do and which one is best per use case.

    Using Btrfs with Multiple Devices - btrfs Wiki (kernel.org)


    For example:

    You make a RAID1 on 2x disks with snapshotted subvolumes and run a scrub every week.

    BTRFS will detect errors on scrub and will fix them with the previous copies from the snapshots (a minimal command sketch follows below).



    This is the theory behind this:

    filesystems - Btrfs automatically bitrot correction with snapshots? - Unix & Linux Stack Exchange
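
    A minimal sketch of that kind of setup (the device names, mount point and cron schedule below are placeholder assumptions, not taken from this thread):

    Code
    # Create the 2-disk RAID1 (mirrored data AND metadata), then mount it
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/pool

    # Weekly scrub via /etc/crontab: reads every block, verifies checksums and,
    # on a raid1 profile, repairs bad blocks from the good mirror copy
    0 3 * * 0 root btrfs scrub start -Bq /mnt/pool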


    I still haven't had a single error, but my system hasn't been UP that long. (my main Pi has an uptime of maybe 2 and a half to 3 years)

  • Yeah, but as I learned here, the RAID 5 write hole on btrfs is not btrfs-specific but exists in every RAID solution. It is just that the devs of other solutions no longer warn about it.

    HannesJo The problems with BTRFS raid5/6 go much deeper than just the write hole problem. Not much has changed re: this https://www.reddit.com/r/btrfs…s_read_before_you_create/


    @Some Does your "scrub on NC DATA HDDS" do anything? Doesn't setting nodatacow also turn off checksums? If some of the data sitting on a btrfs raid1 profile has checksums turned off doesn't that give you potential problems with data consistency?

  • Does your "scrub on NC DATA HDDS" do anything?

    It checks for errors on it. Same goes with the rest of the scrubs.

    Until now, I have always had 0 errors, so it's difficult to say if all is OK. (I want to assume yes)

    It's a RAID1 made by BTRFS

    I can show the output later on.
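
    In the meantime, a clean run looks roughly like this (illustrative excerpt only; the mount point and numbers are made up):

    Code
    # btrfs scrub status /srv/dev-disk-by-label-nc_data
    Status:           finished
    Duration:         2:41:07
    Total to scrub:   1.64TiB
    Rate:             180.21MiB/s
    Error summary:    no errors found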


    The nodatacow is set only on another one, and only on the mariadb folder, NOT on the whole drive: @sd_configs.


    Nonetheless, I also regularly push backups of all these filesystems to a secondary device.

    I don't recall exactly, but I think you can also use btrfs send to make it recognize the copies from there.

    Need to find more info on this.
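
    For reference, the usual send/receive pattern looks roughly like this (assuming the data folder is itself a subvolume; all paths and snapshot names here are placeholders):

    Code
    # Read-only snapshot (required for btrfs send)
    btrfs subvolume snapshot -r /srv/nc-data /srv/nc-data/.snap-week1

    # Full copy to the backup device mounted at /mnt/backup
    btrfs send /srv/nc-data/.snap-week1 | btrfs receive /mnt/backup

    # Next week: incremental send against the previous snapshot
    btrfs subvolume snapshot -r /srv/nc-data /srv/nc-data/.snap-week2
    btrfs send -p /srv/nc-data/.snap-week1 /srv/nc-data/.snap-week2 | btrfs receive /mnt/backup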

  • The problems with BTRFS raid5/6 go much deeper than just the write hole problem. Not much has changed re: this https://www.reddit.com/r/btrfs…s_read_before_you_create/


    That’s interesting, thx. So some say it is as safe as any other solution and some say it is not, while the official notice from the btrfs devs only mentions the write hole issue, which seems to be an actual problem of all RAID solutions. Topic remains unclear 🤔

  • That’s interesting, thx. So some say it is as safe as any other solution and some say it is not, while the official notice from the btrfs devs only mentions the write hole issue, which seems to be an actual problem of all RAID solutions. Topic remains unclear 🤔

    votdev has not implemented a raid5/6 btrfs profile in the latest changes to OMV6, which seems entirely wise to me. If you want to get into the nitty-gritty, maybe this will satisfy your curiosity:


    How to use btrfs raid5 successfully(ish) - Zygo Blaxell

  • but using nodatacow on a raid1 profile concerns me

    Ok, I'll try to explain it in a way that even I can understand, since this is still a grey area to me and what was done was some time ago, so the steps are somewhat forgotten, ^^


    The only RAID1 I'm using is, indeed, on the NC DATA HDDs.

    This RAID holds only the DATA folder used on the NC.

    If memory doesn't fail me, it was done with this command:

    mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdc1


    Yes, on partitions, because I didn't know better at the time, so I did it Windows style:

    Initialized both disks on the Pi with fdisk.

    Created the partitions and formatted them in BTRFS.


    Only after that did I make the RAID with the above command.


    This RAID is a pure CoW system.


    As for the other disk (even though there are 2x FS on it), it's an SSD with 4 partitions but no RAID.

    Partition 1 == the /boot in FAT32 that is required on the Pis (maybe on all SBCs)


    Partition 2 == an ext4 that holds the initial / before I cloned it into partition 3. This was done to have the BTRFS kernel module included in the initramfs so that BTRFS would be recognized at boot time.

    This was needed at the time since the Pi's kernel didn't have it included (different story).
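
    On a Debian-based image, the usual way to get the same effect is to force the module into the initramfs instead of keeping a spare ext4 root (a sketch, not necessarily what was done here):

    Code
    echo btrfs >> /etc/initramfs-tools/modules
    update-initramfs -u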


    Partition 3 == the actual root partition in BTRFS, which has a subvolume with snapshots and is where the boot partition points to.

    This is the cmdline.txt on the /boot partition that points to the above:

    Code
    pi@tacho:~ $ cat /boot/cmdline.txt
    console=serial0,115200 console=tty1 root=PARTUUID=8f4dbd00-03 rootfstype=btrfs rootflags=subvol=@root rootwait

    See the rootflags?


    In fstab, you can also add more mount options to it:

    LABEL=sd_btrfs / btrfs noatime,nodiratime,defaults,ssd,subvol=@root,compress=zstd 0 0


    With this in place, SNAPPER then makes timely snapshots of the above subvol on every boot, on every apt run (pre & post) and on the specified timeline in its default config, where you can set how many you want per day, week, year.

    Code
    pi@tacho:~ $ snapper list-configs
    Config  | Subvolume
    --------+-------------------------------------------
    appdata | /srv/dev-disk-by-label-sd_configs/@appdata
    docker  | /srv/dev-disk-by-label-sd_configs/@docker
    root    | /
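
    The per-day/week/year limits mentioned above live in the per-config files under /etc/snapper/configs/; a trimmed sketch of the relevant keys (the values here are just example numbers):

    Code
    # /etc/snapper/configs/root (excerpt)
    TIMELINE_CREATE="yes"
    TIMELINE_LIMIT_HOURLY="5"
    TIMELINE_LIMIT_DAILY="7"
    TIMELINE_LIMIT_WEEKLY="4"
    TIMELINE_LIMIT_MONTHLY="6"
    TIMELINE_LIMIT_YEARLY="0"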


    Partition 4 == the rest of the SSD, with 80GB, where I created 2 subvolumes: @appdata && @docker.

    @appdata is where the docker-configs live and where the mariadb folder with the chattr +C lives (I thought it was R).

    To achieve this, I first launched the stack with the folders non-existent so they were created.

    Deleted the content INSIDE the mariadb folder and set chattr +C on it. Copied back the backup I had inside the folder and all files were then carrying the +C attribute.

    The rest of the subvol stayed untouched.
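
    The same steps as a quick command sketch (using the paths from this setup; the backup location is hypothetical):

    Code
    # +C only works reliably on empty files/dirs, so empty the folder first
    rm -rf /srv/dev-disk-by-label-sd_configs/@appdata/mariadb/*
    chattr +C /srv/dev-disk-by-label-sd_configs/@appdata/mariadb
    # New files created/copied in afterwards inherit the No_COW attribute
    cp -a /path/to/mariadb-backup/. /srv/dev-disk-by-label-sd_configs/@appdata/mariadb/
    lsattr -d /srv/dev-disk-by-label-sd_configs/@appdata/mariadb    # shows the C flag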


    @docker is the docker root path and this is untouched.


    Is this what CoW is? Well, I want to think it is, :)

    Unless CoW only applies when you have RAID, but that's not what I understood when I first read about BTRFS.


    I think it's doable to test this on a VM:

    Create a BTRFS RAID1 and set a folder with the chattr +C. Populate it with files and see what happens.
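
    Something like this should do as a throwaway test, with loop devices instead of real disks (everything below is scratch/made up):

    Code
    truncate -s 2G /tmp/d1.img /tmp/d2.img
    losetup -fP --show /tmp/d1.img    # e.g. /dev/loop0
    losetup -fP --show /tmp/d2.img    # e.g. /dev/loop1

    mkfs.btrfs -m raid1 -d raid1 /dev/loop0 /dev/loop1
    btrfs device scan
    mkdir -p /mnt/test && mount /dev/loop0 /mnt/test

    mkdir /mnt/test/nocow && chattr +C /mnt/test/nocow
    cp /etc/services /mnt/test/nocow/
    lsattr /mnt/test/nocow/services    # new file inherits the C attribute

    btrfs scrub start -B /mnt/test && btrfs scrub status /mnt/test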


    Am I making any sense with the above?

    Sorry if I can't explain better.

  • Soma Many thanks for the detailed explanation. Using nodatacow on a non-raid btrfs profile is supposedly OK, assuming the filesystem in your partition has a "single" profile. Nothing wrong with using a partition rather than the whole disk when creating a btrfs raid profile.

  • assuming the filesystem in your partition has a "single" profile.

    ?!? How can I check this?

    The format was done via mkfs.btrfs command.

    Nothing wrong with using a partition rather than the whole disk when creating a btrfs raid profile.

    I know. I only mentioned it because it could have been done on the block device instead.

    But I didn't know then, :D


    irvan.hendrik

    Correct.

  • 8| We've now got snappers and cows, no wonder it's confusing :D

    As long as the audience is kept entertained, all is good 🤣

  • btrfs fi df ... or btrfs filesystem usage ... or btrfs device usage


    e.g. for a fs mounted at /:
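
    Illustrative output (the numbers are invented); the word after each block group type is the profile you are looking for:

    Code
    # btrfs filesystem df /
    Data, single: total=24.00GiB, used=18.31GiB
    System, DUP: total=32.00MiB, used=16.00KiB
    Metadata, DUP: total=1.00GiB, used=642.11MiB
    GlobalReserve, single: total=78.48MiB, used=0.00B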


  • BTRFS on root drive (no RAID)
    BTRFS on @appdata subvolume
    BTRFS on @docker subvolume
    BTRFS on 2x HDD (RAID1)
