MDADM RAID6 Alternatives

  • I want to replace my RAID6 MDADM setup with something more flexible. I will have to buy new hard drives anyway, so I might as well do it right.


    My concern with the current setup is that I have six old 2TB drives, and I know I have to start replacing them (since SMART will likely not pick up errors early enough), but MDADM is not very flexible. I have already replaced a few drives with 4TB ones and will likely replace the next ones with even bigger drives... but MDADM is terrible at swapping drives: it doesn't make use of mixed sizes, and the array has to rebuild every time, which is limiting.


    What is the best solution for pooling drives that makes it painless to add and remove drives as they degrade, has two-drive redundancy, and offers some performance gain over a single mechanical hard drive?


    I will look into SnapRAID, but I'm not sure it is the best solution. I'd like to hear if there are better ones out there.


    TL;DR: Any tips for the easiest/most painless hard drive pooling solution with two-drive redundancy? Much appreciated!

    • Official Post

    btrfs is the flexible option. Unfortunately, raid56 in btrfs requires a 4.x kernel; raid1 and raid10 are considered stable.
    zfs is another option, but it is not as flexible as btrfs in terms of expansion.
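
    To give an idea of what that btrfs flexibility looks like, here is a minimal sketch (device names and mount point are placeholders, not from your setup):

    ```
    # two-disk btrfs raid1 (data and metadata mirrored); device names are examples
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /srv/pool

    # later: add a disk of any size, then rebalance so data spreads across all devices
    btrfs device add /dev/sdd /srv/pool
    btrfs balance start /srv/pool

    # remove a failing disk; btrfs migrates its data to the remaining devices first
    btrfs device delete /dev/sdb /srv/pool
    ```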


    I think SnapRAID is your best option at the moment. When OMV reaches version 3 with Jessie, you will be able to use btrfs with a backport kernel. Don't ask me when 3.0 is coming, because I don't know.
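
    If you do go with SnapRAID, two-drive redundancy is just a second parity file. A rough /etc/snapraid.conf sketch (all paths here are examples, not a recommendation for your layout):

    ```
    # parity files, one per redundancy level, each on its own disk
    parity   /mnt/parity1/snapraid.parity
    2-parity /mnt/parity2/snapraid.2-parity

    # content files (metadata), kept in more than one place
    content  /var/snapraid/snapraid.content
    content  /mnt/disk1/snapraid.content

    # data disks; sizes can all differ
    disk d1 /mnt/disk1/
    disk d2 /mnt/disk2/
    disk d3 /mnt/disk3/
    ```

    After that you run `snapraid sync` when data changes and `snapraid scrub` periodically.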

  • ZFS is complicated but can have a good amount of flexibility. ZFS pools have an "autoexpand" property: once all disks in a single vdev (virtual device) have been replaced with larger disks, the filesystem grows automatically. The nice thing about ZFS (and likely btrfs) is that the RAID and the filesystem are connected, so you don't have to grow the FS separately from the RAID.
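
    For example (the pool name and disk ID below are placeholders), autoexpand is just a pool property:

    ```
    # let the pool grow automatically once every disk in a vdev has been replaced with a larger one
    zpool set autoexpand=on tank
    zpool get autoexpand tank

    # if the swaps happened with autoexpand off, expansion can be triggered per device afterwards
    zpool online -e tank ata-WDC_WD40EFRX-EXAMPLE_SERIAL
    ```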


    In addition, ZFS pools are basically groups of virtual devices, so you could achieve a RAID 10 style setup by having several mirrored vdevs in a pool. As you add more vdevs the size of the pool grows, but you can't add more space to a single vdev without replacing all of the disks in that vdev. So if I had a single RAIDz2 (RAID 6 analog) with 6 disks, I would be stuck at that size until all disks were replaced with larger ones (of the same size). However, if I had 3 vdevs, each with 2 disks, in one pool, I could add another vdev of two disks at any time, but I could not add a 3rd disk to one of those vdevs. I would also have to replace both disks in a single vdev with larger ones if I wanted to increase the size of an existing vdev.
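
    A sketch of that mirrored-pairs layout (pool and disk names are made up):

    ```
    # pool of three two-disk mirrors (RAID 10 style)
    zpool create tank \
      mirror ata-disk1 ata-disk2 \
      mirror ata-disk3 ata-disk4 \
      mirror ata-disk5 ata-disk6

    # later: grow the pool by adding a fourth mirrored pair
    zpool add tank mirror ata-disk7 ata-disk8
    ```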


    In terms of degraded disks, ZFS handles them quite well. I'm in the process of swapping out my crappy WD Greens with WD REs and have replaced two so far. I simply yank out one of the disks, slide in the new one and run `zpool replace Data ${old disk ID} ${new disk ID}`. If you add drives by name rather than by ID, it can auto-replace them. The reason I don't is that I'd prefer to be able to plug all these disks into another machine in an emergency and have the FS come up as easily as possible.
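
    In practice the swap is just this (the pool name is mine, the disk IDs below are placeholders):

    ```
    # see which disk is degraded/faulted
    zpool status Data

    # replace it by its stable /dev/disk/by-id name
    zpool replace Data ata-WDC_WD20EARX-OLDSERIAL ata-WDC_WD2000FYYZ-NEWSERIAL

    # watch the resilver progress
    zpool status -v Data
    ```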


    It then takes about 9 hours to resilver the new disk on my 6TB pool of 5 disks and 1 hot spare. One of the issues with ZFS, though, is that it effectively requires ECC RAM, and a nontrivial amount of it. The suggestion is 1GB of RAM for every 1TB of space, with a minimum of 4GB for the filesystem alone. ZFS uses RAM heavily for all of its operations and yields some incredible performance numbers. It supports transparent compression (fast) and deduplication (though that gobbles RAM at a huge rate).
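
    Compression and dedup are just properties you can flip per pool or per dataset (the names below are examples from my setup style, not required):

    ```
    # lz4 is cheap and usually worth enabling everywhere
    zfs set compression=lz4 Data

    # dedup is a per-dataset property too, but its table has to fit in RAM, so enable it selectively
    zfs set dedup=on Data/backups
    ```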


    I don't know too much about btrfs, as I stayed away from it until the on-disk layout was stable (which it now is). I may try it again at a later date, once some of the other features make it into kernels I don't have to backport or install manually.


    I got myself an external USB/eSATA enclosure with space for 4 disks; I plan to put my old WD Greens into it so I can more easily shift data off and back onto my RAID when switching filesystems.
