Best way to use drives with mismatched sizes

  • I'm looking at upgrading from JBOD to some form of RAID (or raidz2) and I'd like some options on how to proceed. After reading what I hope are recent enough write-ups about BTRFS (this being the scariest) I'm curious to know whether either of these options is possible with ZFS, though from a few hours of research it looks like the answer is no.


    I currently have:


    2 x 3TB

    3 x 6TB


    Options:

    1. Can I set up a 5 drive wide raidz2 using 3TB from each of the 5 drives, and then a 3 drive wide raidz1 using the remaining 3TB from each of the 6TB drives?

    2. Can I create a raidz2 and treat the two 3TB drives as if they were a single 6TB drive? I'd (eventually) replace both of those with another 6TB drive.


    If neither is feasible, could I create a degraded raidz2 with 3x6TB and then pop in a fourth 6TB at some point in the not-too-distant future?


    I'm leaning towards something like ZFS because of capabilities like scrubbing. I've been hit by data loss in the past, and the idea of proactively detecting and fixing issues *before* you're in the middle of rebuilding a failed drive sounds ideal. I'm also leaning towards raidz2 for the extra safety.

  • Mixing drive sizes in a RAID is not recommended, even if it is possible to do. You will end up either creating a setup that nests different raid levels in a non-standard configuration, resulting in an overly complex arrangement that will cause all kinds of trouble if you ever have a problem that needs a rebuild/recovery, or you will only be able to use the capacity of the smallest drive across all drives, meaning 3TB per drive in your case. You would be much better off setting up two different Z1 arrays for now, one with the 3TB drives and one with the 6TB drives. If you get another 6TB, you can try reconfiguring the 6TB array to add the extra drive and then migrate the data off the array using the 3TB drives, assuming you can convert a Z1 to a Z2 by then (last I read you couldn't).


    Personally though, even if you can convert Z1 to Z2, I have had mdadm raid reshapes go bad before, so a safer method is to back up all the data first, then destroy the old config, create the new one, and restore the data.
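
    A rough sketch of that backup-and-restore migration using ZFS replication, assuming hypothetical pool/dataset names (oldpool, newpool, data); treat it as an illustration rather than a recipe:

    Code
    # take a recursive snapshot of everything under the old dataset (names are placeholders)
    zfs snapshot -r oldpool/data@migrate

    # stream the snapshot, and all descendant datasets, into the new pool
    zfs send -R oldpool/data@migrate | zfs recv -F newpool/data

    # once the copy has been verified, the old pool can be destroyed
    zpool destroy oldpool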


    The other option is mergerfs with snapraid on top. I personally don't run it, as I don't like the way it works, but many do use it, particularly if they have mismatched drive sizes.

  • mutant_fruit

    The answer to your ZFS questions is no, no, and no.


    BTRFS does give you the flexibility to use all your current drives with a raid1 profile at 50% space efficiency, giving 12TB of usable storage:

    Btrfs disk usage calculator
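
    For illustration only (device names like /dev/sda are placeholders, not taken from the thread), creating such a BTRFS filesystem across all five drives might look roughly like this:

    Code
    # raid1 profile for both data and metadata across all five drives;
    # every block is stored on two devices, so usable space is ~50% of raw capacity
    mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde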


    There is no in-place conversion between raidz1 and raidz2 ZFS pools. But things you can do now are:


    1. As stated by BernH, create two pools: A, a single-vdev mirror with 2 x 3TB, and B, a single-vdev raidz1 with 3 x 6TB. Total usable storage is 15TB, but only single-disk redundancy in each zfs pool (see the command sketch after this list).


    2. Create a single pool with two mirror vdevs - one from a 3TB + a 6TB and the other from two 6TB drives. This will be lopsided until the 3TB is replaced by another 6TB drive. Then zfs pool auto-expansion will give two mirror vdevs of 6TB usable each, for a total of 12TB. The pool can survive the loss of two drives as long as they are not in the same mirror. Future pool expansion is done by adding additional mirrors. Space efficiency is 50%, but IOPS are higher than with a raidz pool. You have the option to use the remaining 3TB drive in a second zfs pool.


    3. Create a single zfs pool with a single raidz2 vdev using 4 or all 5 drives. The usable space will be as if all drives were 3TB in size until each 3TB drive is replaced by a 6TB drive, so you'd start with 6TB or 9TB of usable space. When both 3TB drives have been replaced you'd have 18TB usable and two-disk redundancy at 60% space efficiency.
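
    A rough sketch of what those three layouts could look like on the command line (my own illustration; pool names like tank/small and device names like /dev/sda are placeholders):

    Code
    # Option 1: two pools, each with single-disk redundancy
    zpool create small mirror /dev/sda /dev/sdb            # 2 x 3TB -> ~3TB usable
    zpool create tank  raidz1 /dev/sdc /dev/sdd /dev/sde   # 3 x 6TB -> ~12TB usable

    # Option 2: one pool built from two mirror vdevs (3TB+6TB and 6TB+6TB)
    zpool create tank mirror /dev/sda /dev/sdc mirror /dev/sdd /dev/sde

    # Option 3: one pool with a single 5-wide raidz2 vdev
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde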


    The much anticipated ability to add single drives to an existing raidz1/2/3 pool is just around the corner in the forthcoming version 2.3 of OpenZFS.

    So option 3 above could later be expanded from a 5-drive raidz2 pool to a 6-drive one, increasing usable storage to 24TB at 66% space efficiency.
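
    Once raidz expansion lands, attaching a sixth disk to the existing raidz2 vdev is expected to work roughly like the following (the vdev label and device name are placeholders, not confirmed syntax from this thread):

    Code
    # attach a new disk to the existing raidz2 vdev (OpenZFS 2.3+ raidz expansion)
    zpool attach tank raidz2-0 /dev/sdf

    # monitor the expansion as data is redistributed across the wider vdev
    zpool status tank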

  • What you could do is build a hybrid RAID from equal-sized partitions (slices), similar to Synology Hybrid RAID (SHR).

    See here.


    But as was already mentioned: this is considered a complex setup. You need to know what you are doing, and you have to do it manually.

    (Much the same as you would have to do with BTRFS)

    Qnap TS-853A

    Syno DS-1618+

    Qnap TS-451+

    Qnap TS-259 Pro+

  • Hi folks!


    Thanks for the help! I didn't realize that adding extra drives to a raidz will be possible soon - it just needs the next big release of OpenZFS. That makes my path forward easier :) In the long term I'd like a 5x6TB raidz2, but I can get there incrementally!


    I've used 3x6TB drives to create a degraded 4x6TB raidz2, and will buy another 6TB in a few weeks (months?) to bring the raid to full health. My data is currently transferring across from my older, smaller drives :) In a year or two I'll upgrade OpenZFS and then grow the raid by one drive, giving me a 5x6TB raidz2.


    As of now the degraded array is effectively operating as a 3x6TB raidz1, and I will have all my data on the older drives, so I'm reasonably well protected against data loss from drive failure.


    Knowing I can add an extra drive to the raid in the future meant I didn't have to mess around with LVM'ing two 3TB drives together to mimic a 6TB drive (or create a 5x3TB raidz2 and 'lose' the extra space on the larger drives, or equivalent) just to avoid having an undersized array right from the start :)


    This has been super helpful!

  • Quote

    I did say "no in-place conversion between raidz1 and raidz2 ZFS pools".


    Yes, I'm aware. That's why I said I created a degraded raidz2. I did not create a raidz1 as I'd be unable to convert it to raidz2 later.


    It's a degraded 4x6TB raidz2 as I ran "zpool create raidz2" using 3 drives and 1 sparse 6TB file, and then marked the 'file' as offline. This means I can immediately begin copying data into the raidz2 and still benefit from having 1 parity disk of redundancy. When I pop in the fourth drive later I'll have 2 disks of redundancy. This is effectively the same as starting a raidz2 with 4 disks and having one fail immediately.
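
    For anyone wanting to do the same, a rough sketch of the trick (pool name, file path and device names are placeholders, not the exact commands used here):

    Code
    # create a sparse file the same size as the real 6TB drives
    truncate -s 6T /root/fake-disk.img

    # build the 4-wide raidz2 from three real disks plus the sparse file
    zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /root/fake-disk.img

    # take the fake member offline so nothing is ever written to it
    zpool offline tank /root/fake-disk.img
    rm /root/fake-disk.img

    # later, swap the offlined fake member for the real fourth drive
    zpool replace tank /root/fake-disk.img /dev/sdd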


  • Quote

    As of now the degraded array is effectively operating as a 3x6TB raidz1, and I will have all my data on the older drives, so I'm reasonably well protected against data loss from drive failure.


    In my hurry earlier today, I read that as a 3x6TB raidz1. Let's hope raidz expansion does materialise soon. There's a caveat regarding the final usable storage when adding single disks to raidz pools, which is explained as:


    Code
    After the expansion completes, old blocks remain with their old data-to-parity ratio (e.g. 5-wide RAIDZ2, has 3 data to 2 parity), but distributed among the larger set of disks. New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide RAIDZ2 which has been expanded once to 6-wide, has 4 data to 2 parity). However, the RAIDZ vdev's "assumed parity ratio" does not change, so slightly less space than is expected may be reported for newly-written blocks, according to zfs list, df, ls -s, and similar tools.
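
    To put rough numbers on that (my own arithmetic, not part of the quoted documentation): blocks written before the expansion keep the 3-data : 2-parity layout, so 3TB of data occupies roughly 5TB of raw space, while blocks written afterwards use 4 data : 2 parity, so the same 3TB occupies roughly 4.5TB of raw space. Reporting tools still assume the old ratio, which is why slightly less space than expected is reported for newly written data.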
