Workaround to use RAID5 with BTRFS?

  • Hello,


    Well, first of all, it's a question, not an assumption, so please don't shoot


    Instead of creating the "RAID5" the proper way with mkfs.btrfs to get the benefits of BTRFS + RAID5/6 (which is not recommended), is there any benefit in creating a RAID5/6 via mdadm and THEN formatting it with mkfs.btrfs as if it were a "single" BTRFS volume (leaving the RAID5/6 stuff to mdadm)?


    I mean, it can't be worse than just formatting an mdadm array as ext4; there must be some benefit in formatting it as a single btrfs volume (roughly the setup sketched below).
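
    A minimal sketch of the layout I'm asking about, assuming three hypothetical member disks /dev/sda, /dev/sdb and /dev/sdc (substitute your own devices):

        # Build the RAID5 array with mdadm; all redundancy is handled by MD
        mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc

        # Format the array as a "single" BTRFS volume (duplicated metadata is still worthwhile)
        mkfs.btrfs -d single -m dup /dev/md0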

  • A BTRFS filesystem on an MD RAID5/6 gives you a filesystem that can detect corruption, but as it is a BTRFS "single" profile there is no self-healing via BTRFS scrub. A downside can be that the filesystem does not perform as well as EXT4 for some workloads, e.g. VMs, databases, Docker.
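
    For example, a scrub on such a setup will still report checksum errors, it just cannot repair the data from a second copy, because the "single" profile only has one (mount point is a hypothetical example):

        # Kick off a scrub on the mounted filesystem, then check the result
        btrfs scrub start /mnt/data
        btrfs scrub status /mnt/data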


    How many devices, and of what size/type, do you intend to use in the MD RAID?


    P.S. I should have mentioned that BTRFS also gives you snapshots of individual "shared folders", which are created as subvolumes. When shared via SMB the snapshots are viewable as "previous versions" of a "shared folder" and so provide a "rollback" mechanism. There is no separate "rollback" command in the WebUI for BTRFS subvolumes.
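
    As a rough illustration of the mechanism the WebUI drives for you (paths and the snapshot name are hypothetical):

        # Take a read-only snapshot of a shared folder (itself a subvolume)
        btrfs subvolume snapshot -r /srv/data/share /srv/data/.snapshots/share-2024-01-01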

  • I just got 4x 12TB WD Ultrastar disks to replace my 3x 6TB WD Red set, set up as MD RAID5 + Btrfs on top.


    Well, now that I'm experimenting with BTRFS RAID5 (metadata set as RAID1) on the 3 old disks (after backing up my stuff to one of the new drives), I just figured out that performance was really crap with my old setup on the exact same disks, due to "younger me" using BTRFS the wrong way.


    I think I'll end up using BTRFS RAID5 + metadata as RAID1 as most people recommend.
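
    For reference, the profile split I mean would be created roughly like this (device names hypothetical):

        # RAID5 for data, RAID1 for metadata, across the three disks
        mkfs.btrfs -d raid5 -m raid1 /dev/sda /dev/sdb /dev/sdc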


    Also, I did a quick test with TrueNAS to get a feel for ZFS, but CPU/RAM performance was crap; thankfully I used a different SSD to boot from.

    • Official Post

    I think I'll end up using BTRFS RAID5 + metadata as RAID1 as most people recommend.

    As I understand the problem, BTRFS in RAID5 has the same problem that mdadm RAID5 has - "the write hole" problem. Both issues are mitigated with a UPS and, in the event of an extended power outage, a clean shutdown.
    (Back in the day, some of the higher end hardware RAID controllers addressed the write hole issue with a battery backup on the controller itself.)
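
    For completeness: mdadm can also close the write hole in software with a write journal on a separate fast device, at some performance cost. A sketch, assuming a hypothetical /dev/nvme0n1 as the journal device:

        # RAID5 with a write journal; writes land in the journal first, so an
        # unclean shutdown can no longer leave parity inconsistent with data
        mdadm --create /dev/md0 --level=5 --raid-devices=3 \
              --write-journal /dev/nvme0n1 /dev/sda /dev/sdb /dev/sdc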

  • BTRFS RAID5 + metadata as RAID1 - Horrible. As much as I've tried to like BTRFS, BTRFS RAID5 still has major problems. See how many of these problems still exist: https://lore.kernel.org/linux-…GX10769@hungrycats.org/#r . Have a look at the various threads on r/btrfs as well.


    I'd guess you used/tested TrueNAS SCALE; it is resource hungry compared to ZFS on OMV, especially if you start using their apps, and less so if you use jailmaker to run native docker apps. But ZFS per se does not have to be resource hungry. With 4x 12TB drives, either RAIDZ2 or a pool of two vdevs, each one a mirror, is the best config. So that's approx. 24TB of usable space either way.
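
    Roughly, the two layouts I mean (pool name and device names are hypothetical):

        # Option 1: a single RAIDZ2 vdev - any two disks may fail
        zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

        # Option 2: a stripe of two mirrors - faster, one disk per mirror may fail
        zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd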


    But what CPU/RAM combo are you talking about?
