BTRFS system

    • OMV 4.x
    • On the CLI:

      (Use drives that have been wiped in the GUI. Device names below are examples only.)

mkfs.btrfs -f -d raid5 /dev/sdb /dev/sdc /dev/sdd (For the options used above, see the mkfs.btrfs man page.)

btrfs filesystem label /dev/sdb BTRFS1 (Where BTRFS1 is the label for the array; substitute your own.)

Then, in the GUI:

Under Storage, File Systems, the new BTRFS array will appear. Mount it.
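The new array can also be checked from the CLI. (The BTRFS1 label matches the example above; the mount path assumes OMV's usual /srv/dev-disk-by-label-* convention — adjust to your system.)

```shell
# Show the multi-device filesystem just created, by label
btrfs filesystem show BTRFS1

# Once mounted, report how space is allocated per profile
btrfs filesystem df /srv/dev-disk-by-label-BTRFS1
```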


      Of course there's much more involved in maintaining the array and configuring features like sub-volumes. Most of this must be done on the CLI, but the above is a start.
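As a starting point for the sub-volume work mentioned above, a minimal sketch (the mount path and sub-volume names are examples only):

```shell
# Create a sub-volume inside the mounted array
btrfs subvolume create /srv/dev-disk-by-label-BTRFS1/shares

# List sub-volumes to confirm it exists
btrfs subvolume list /srv/dev-disk-by-label-BTRFS1

# Take a read-only snapshot of it (useful before risky changes)
btrfs subvolume snapshot -r /srv/dev-disk-by-label-BTRFS1/shares \
    /srv/dev-disk-by-label-BTRFS1/shares-snap
```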

Note that the following is not an opinion; it came from a warning on the command line and is recorded in dmesg.

      Obviously, backup is recommended.


    • crashtest wrote:

      mkfs.btrfs -f -d raid5 /dev/sdb /dev/sdc /dev/sdd
      BS advice as almost always. I explained to you (flmaxey) and others already over at 'this thread' that adding -m raid1 is the way to go since this will protect metadata or 'the filesystem itself'. You should really start to learn at least the basics.
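For reference, the variant being suggested looks like this (same example devices as above; it mirrors metadata across two devices while striping data across all three):

```shell
# RAID5 for data, RAID1 for metadata: protects the filesystem structures
# from the RAID5 write hole, though data itself remains exposed to it
mkfs.btrfs -f -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd
```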
    • tkaiser wrote:

      BS advice as almost always. I explained to you (flmaxey) and others already over at 'this thread' that adding -m raid1
      You explain next to nothing to me because, as I've told you numerous times (and will yet again), I ignore you. :) Considering what I've observed (a decidedly dual nature), you might try it some time.

Since you forced this thread into a PM to get it done, you didn't learn the specifics of what the user was trying to do. (As if that would have made any difference.) Primary storage is to be on a RAID5/6 array, or a COW equivalent. Since @savellm is in a test-build phase, he wanted to look at the BTRFS possibility. That's it, short and simple.

So, the above was not bad advice - just a demo. It simply underscores certain realities of trying to use a filesystem (BTRFS) for RAID5/6 when it's not ready. The warning is pretty clear. One might wonder why they put that in there.....

    • ness1602 wrote:

By default, metadata will be mirrored across two devices and data will be striped across all of the devices present. This is equivalent to mkfs.btrfs -m raid1 -d raid0. (…rfs_with_Multiple_Devices)
Nope, that's about btrfs behavior with multiple devices without RAID. I linked to …01#issuecomment-467000104 above for a reason, since everything there is explained by a kernel developer in the linked mail: …org/msg79574.html

Quoted from that mailing-list thread:

> > > > If the filesystem is -draid5 -mraid1 then the metadata is not vulnerable
> > > > to the write hole, but data is. In this configuration you can determine
> > > > with high confidence which files you need to restore from backup, and
> > > > the filesystem will remain writable to replace the restored data,
> > > > because raid1 does not have the write hole bug.
>
> In regards to my earlier questions, what would change if I do -draid5 -mraid1?

Metadata would be using RAID1, which is not subject to the RAID5 write hole issue. It is much more tolerant of unclean shutdowns, especially in degraded mode. Data in RAID5 may be damaged when the array is in degraded mode and a write hole occurs (in either order, as long as both occur). Due to RAID1 metadata, the filesystem will continue to operate properly, allowing the damaged data to be overwritten or deleted.
      This is the link @crashtest should have read long ago since it would prevent him continuing with one of his moronic approaches to 'spread the news about btrfs'. He barely understands anything and feels encouraged to continue spreading BS.
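Whichever profiles were chosen at mkfs time, they can be confirmed after mounting (the mount path is an example only):

```shell
# Shows the data/metadata/system profiles in use, e.g. "Data, RAID5" and
# "Metadata, RAID1" for a filesystem created with -d raid5 -m raid1
btrfs filesystem df /srv/dev-disk-by-label-BTRFS1

# More detailed per-device usage breakdown
btrfs filesystem usage /srv/dev-disk-by-label-BTRFS1
```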
More external links, "explaining things" and deflecting from the simple reality of the "experimental" message in RED in dmesg. How typical, and what a strange world to live in. (On the other hand, maybe it's a colorful place with pink elephants and unicorns. :) )

@savellm: this is a prime example of what would have happened to your ZFS thread if it hadn't gone into a PM (a conversation). It would have become so polluted with irrelevant opinions, numerous links to external pages, and other nonsense that the core issue would probably still be undiscovered. (Docker containers creating issues in ZFS filesystems.)

Now, since it's pretty obvious that OCD is firmly in the driver's seat, feel free to go on and on about the virtues of BTRFS.
      I'm done here.