BTRFS from Rockstor


    And if you fear the downsides of immature btrfs kernel/userland code due to relying on a Debian 'oldstable' system, choosing a zmirror instead is always an option.

    After doing an hour or two of research and dinking around with the ZFS plugin in a VM, I think I'm going to go with a zmirror.


    As for the essentials I want right now, ZFS provides them without the need to wait on Linux kernel development. The ZFS utilities are similar to BTRFS's (actually, there appear to be more features), reporting is excellent, and the OMV plugin simplifies basic tasks.


    The only downside I could complain about is that provisioning a ZFS vdev seems somewhat rigid compared to BTRFS, which seems to be much more "on-the-fly". On the other hand, I'll be running a straight mirror for bitrot self-healing, so such considerations won't matter.


    I sincerely appreciate the nudge in this direction.
    Thanks

  • You have scrub with mdadm, zfs, btrfs, so you do that weekly and if something arises, get those backups up.

    Not sufficient unfortunately. The main difference between those three variants is that the anachronistic one from last century doesn't give a sh*t about data integrity.


    RAID-1 done by mdraid tries to write the same data to two chunks on two different devices. When it reads from a RAID-1 array it chooses one of the devices (you can partially influence which one, e.g. with the --write-mostly argument at array creation) and blindly trusts the data being correct. When you run a scrub the same happens: 'If the array is a mirror, as it can't calculate the correct data, it will take the data from the first (available) drive and write it back to the dodgy drive'. That's the main disadvantage: since this RAID-1 mode doesn't know which chunk might be the correct one if a mismatch occurs, it simply doesn't care about this at all.
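
    You can see this yourself via md's standard sysfs interface (md0 is just an example array name):

        # request a consistency check of the array
        echo check > /sys/block/md0/md/sync_action

        # afterwards: count of mismatched sectors found -- md can tell you
        # *that* the copies differ, but not *which* copy is the correct one
        cat /sys/block/md0/md/mismatch_cnt

        # 'repair' simply overwrites one copy with the other, blindly
        echo repair > /sys/block/md0/md/sync_action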


    So mdraid's RAID-1 protects just from 'bad blocks' and a full hard drive failure, as it did already 30 years ago. In the meantime technology has improved, and with both ZFS and btrfs we have way better options due to checksumming: the scrub recalculates checksums of both chunks, compares them with the stored metadata and therefore allows for self-healing if the reason for data corruption is located at the disk or between disk and host.
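
    For comparison, the checksummed equivalents (pool name and mount point below are just examples):

        # ZFS: scrub the pool, then see how many errors were found and repaired
        zpool scrub tank
        zpool status -v tank

        # btrfs: same idea, per mounted filesystem
        btrfs scrub start /srv
        btrfs scrub status /srv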


    Add to this snapshots and the ability to send them easily to another disk or even host (in another room/building/country) and you get so much more 'protection' that an anachronistic mdraid mirror just looks like a joke, or at least an insane waste of disk space :)


    A zmirror with regular scrubs and regular snapshots being sent to another location (damn easy when relying on znapzend, for example) means drive failure protection + data integrity + backup + disaster recovery preparation. Compared to that, an anachronistic mdraid attempt is just wasted redundancy to potentially recover from disk failures.
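
    Setting up such a zmirror is basically a one-liner (the disk names are placeholders; using /dev/disk/by-id paths is the usual recommendation so the pool survives device renumbering):

        # create a two-disk mirror pool called 'tank'
        zpool create tank mirror \
            /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B

        zpool status tank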

  • As for backups, I usually compress data before backing it up the chain.


    I believe we're talking about different things. I was referring to ZFS/btrfs features that already allow for 'good enough backup' capabilities simply by using snapshots and sending them to a separate device (which can be a disk or an array on the local host, or even better on another machine that provides physical isolation).
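
    In plain ZFS commands that's roughly this (dataset and host names are made up for the example):

        # create a read-only point-in-time snapshot
        zfs snapshot tank/data@monday

        # replicate it to a pool on another machine over ssh
        zfs send tank/data@monday | ssh backuphost zfs receive backup/data

        # later runs only need to transfer the delta between two snapshots
        zfs send -i tank/data@monday tank/data@tuesday | \
            ssh backuphost zfs receive backup/data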


    Creating snapshots with both ZFS and btrfs is easy and free. Once you've created a snapshot your data is already safe from being destroyed (either by accident or intention -- ransomware for example). Creating those snapshots regularly is again easy and free. And getting them sent to another device for data safety purposes is again easy and free (with ZFS all you need is znapzend, as already said).
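
    With znapzend the whole create/retain/send cycle is configured once per dataset; something along these lines (retention plans and target are examples, check the znapzend docs for the exact syntax):

        # keep hourly snapshots for a week and daily ones for a month
        # locally, plus daily ones for 90 days on the backup host
        znapzendzetup create \
            SRC '7d=>1h,30d=>1d' tank/data \
            DST:remote '90d=>1d' root@backuphost:backup/data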


    And with both ZFS and btrfs we also get transparent compression (and on beefy machines with a lot of DRAM also dedup). When transferring snapshots, compression can be switched on/off just as we like (you might want to use gzip-9 on your backup datasets, for example, which causes a noticeable performance drop even on beefy filers but also saves you a lot of disk space at the backup target).
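
    Compression is a per-dataset property in ZFS, so the backup target can use a stronger setting than the live data (dataset names again just examples):

        # fast compression on the live dataset
        zfs set compression=lz4 tank/data

        # trade CPU for space on the backup target
        zfs set compression=gzip-9 backup/data

        # check what the setting actually buys you
        zfs get compressratio tank/data backup/data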


    Illustrating the difference is pretty easy: use an anachronistic mdraid RAID-1 to store your data below /srv/, or a zmirror with regular snapshots enabled that are sent to another host (on the same network or, even better, somewhere else through a VPN connection). Now do a simple 'rm -rf /srv/*'. With the concept from last century (mdraid RAID-1) your data is simply gone and you would need a real backup somewhere to restore from; with the zmirror you have a short laugh and revert to the latest snapshot.
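
    The 'short laugh' then looks like this (the snapshot name is hypothetical; plain rollback reverts to the most recent snapshot):

        # see which snapshots exist for the dataset
        zfs list -t snapshot -r tank/srv

        # throw away everything that happened since the last snapshot...
        zfs rollback tank/srv@latest

        # ...or just copy single files back out of the hidden snapshot dir
        cp -a /srv/.zfs/snapshot/latest/some/file /srv/some/file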


    Same when your NAS box gets stolen or goes up in flames. Simply grab the disk/array at the remote location and start over :)
