RAID6 from 2TB drives to 4TB drives & other RAID questions

  • Hi,


    I have a few RAID6/mdadm-related questions about OMV that I've been meaning to clarify:


    1) Will 4TB drives work in my OMV RAID array out of the box, or will I run into some weird 2.2TB limits to work around?
    2) All my drives are currently 2TB, and I would like to eventually replace them all with 4TB ones. I understand RAID6 will treat every member as the size of the smallest drive (if there is a mix of 2TB and 4TB, all will be limited to 2TB each). Once I replace the last drive with a 4TB one, will the array automatically expand and use each drive's full 4TB?
    3) Once a drive has been unplugged (I have a wonky SATA controller) and becomes available again, do I have to rebuild it every time? Is there no way for RAID to simply continue using the already-built drive?


    Thank you! :thumbup:

  • 1) I am pretty sure you won't run into any 2TB problems. Many people here use drives larger than 2TB; I use 5TB drives.


    2) You will have to extend the RAID manually and then resize the filesystem on top - since you use mdadm I don't know the exact procedure, but I am sure others will help.
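
    From what I have read it is something like this, though I have not tested it myself (a rough sketch only - /dev/md0 and ext4 are assumptions, check your own setup):

        # Tell mdadm to use all the space on the (now larger) members
        mdadm --grow /dev/md0 --size=max
        # Then grow the filesystem on top to match (resize2fs is for ext3/ext4)
        resize2fs /dev/md0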


    3) Bad starting point for a RAID rebuild. I am pretty sure that every time a drive is kicked out of the array, for whatever reason, the rebuild starts from scratch.
    With a wonky SATA controller that sounds like risky business even with RAID6, since, if I understand it correctly, you never know when a drive will get kicked out.
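
    One thing that might help with drives dropping out, if I understand mdadm right (again untested by me - /dev/md0 and /dev/sdf are placeholders): a write-intent bitmap lets a briefly-missing drive be re-added with only the changed blocks resynced, instead of a full rebuild:

        # Add a write-intent bitmap to the array (a one-time step)
        mdadm --grow /dev/md0 --bitmap=internal
        # After the drive comes back, re-add it; with the bitmap in place
        # only the blocks written while it was gone get resynced
        mdadm /dev/md0 --re-add /dev/sdf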

  • Back to question #2: going from 2TB drives to 4TB drives...


    As I understand it, each drive will only contribute as much as the smallest drive in the array. So as long as there is a mix of 2TB and 4TB drives, each drive will effectively be used as 2TB. Only once the last drive is replaced with a 4TB one will the size expand.


    Here is my question:
    Since I plan to go from seven 2TB drives to five 4TB drives... once I have five drives replaced with 4TB ones, will I be able to simply remove the remaining two drives and have the size automatically expand? I am not sure RAID will like removing two drives at once and then effectively expanding the available size of the 4TB drives. I am trying to work out the logistics, in case there is a way to do this without losing the data.

  • The simple answer is "no". RAID doesn't work like that. The array will not automatically expand, and pulling two drives out of a RAID6 leaves it degraded with zero redundancy - one more failure and everything is gone - while still using only 2TB per member.


    To guarantee no data loss, you should build a new, separate array out of the five 4TB drives and then copy all the data from the original array to the new array.
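
    For the copy itself, rsync is the usual tool (the paths below are just placeholders):

        # -a preserves permissions, times and links; -H keeps hard links
        rsync -aH --info=progress2 /srv/old-array/ /srv/new-array/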


    A better solution would be to build a snapraid with the five 4TB drives, copy all the data from the array to the snapraid, then format the 2TB drives and add them to the snapraid. You'll never have to go through this again, because you can keep adding disks to snapraid without any hassle.
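
    For reference, a minimal snapraid.conf sketch with two parity drives (every mount point here is a made-up example - adjust to your own disks):

        # Two parity files on two dedicated drives = survives two failures
        parity /mnt/parity1/snapraid.parity
        2-parity /mnt/parity2/snapraid.2-parity
        # Content lists (keep copies on more than one disk)
        content /var/snapraid/snapraid.content
        content /mnt/disk1/snapraid.content
        # Data disks - just add another 'data' line whenever you add a disk
        data d1 /mnt/disk1/
        data d2 /mnt/disk2/
        data d3 /mnt/disk3/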

  • That is correct - you cannot go from seven drives down to five in a RAID while it is online.




    What you can do:



    Usually, when you want to expand a RAID, you do the following (rough commands are sketched after the steps):


    Exchange all drives for the larger (identical size) type, one after another.
    After every exchanged drive, a full and successful rebuild has to complete - so 7 drives means 7 rebuilds!


    After that, the RAID can be expanded to the new size - this is again a lengthy process which should not be interrupted.


    The final step would be to grow the filesystem on top to the new maximum RAID size.



    Since such a procedure requires 7 rebuilds + 1 expansion (with 7 drives), it is a risky process, so a backup is a must-have! One power outage during the RAID expansion and the data is most likely toast!
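
    In mdadm terms the whole thing looks roughly like this (a sketch - /dev/md0, the device names and ext4 are assumptions; waiting for each rebuild to finish is the critical part):

        # Repeat once per drive, strictly one at a time:
        mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb   # retire one 2TB drive
        # ...physically swap in the 4TB drive...
        mdadm /dev/md0 --add /dev/sdb                      # rebuild starts onto the new drive
        cat /proc/mdstat                                   # wait until the rebuild reaches 100%

        # Only after ALL drives have been replaced and rebuilt:
        mdadm --grow /dev/md0 --size=max                   # expand the array itself
        resize2fs /dev/md0                                 # then grow the filesystem (ext4 here)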

  • Thanks guys! Sounds like the more realistic way to go is to simply back up the content and start a fresh RAID array for either scenario (since I don't plan to replace all 7 drives immediately).
    I plan to slowly replace my 2TB drives with 4TB ones as they fail... and once I have four 4TB ones I might just deal with making a new array consisting of just 4TB drives.


    I know there are a ton of RAID solutions out there, and SnapRAID was mentioned, but are there any other really good ones?
    I originally chose mdadm RAID6 because I wanted the ability to survive up to two drive failures, and I wanted my data striped so that it can be read quickly over the gigabit network.
    Of course the limitations are that all drives have to be the same size, and any change to the system is really intensive, with rebuilds etc. (not very dynamic). Ideally there'd be a solution that expands on these features, such as allowing migration to larger drives while maintaining redundancy.

  • Sounds like the biggest benefit of RAID for you is the striping which = high read speeds. Before considering other solutions, I would first determine if those high read speeds are actually necessary for what you are doing. I'm trying hard, but cannot imagine a "home" scenario which would require saturating Gig E.


    Just to put this in perspective, I can stream 9GB .mkv files from my lowly OMV server over the network to my Android TV stick, without a hiccup. The files are stored on single (not striped) drives which attain a read speed of about 40MB/s (wow!) and the server is on a Gig E wired network. The funny part, however, is the Android TV stick is using 802.11n and only connects at ~70 Megabits/s. My point is, I would gain nothing from RAID striping.
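
    Rough numbers, just to illustrate (assuming a ~2 hour runtime for that 9GB file): 9GB is about 72 gigabits, and spread over 7200 seconds that averages roughly 10 Megabits/s - well under the stick's ~70 Megabits/s link, and the single drive's 40MB/s (~320 Megabits/s) is nowhere near the bottleneck.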


    I'm not implying that you don't need the extra read speed, I'm just curious what it is that you are doing in a home network that requires you to saturate Gig E. That is a shedload of data. :)


    If it turns out that you don't really need the extra speed, then that opens up other, much more convenient and flexible, options for storage redundancy. Snapraid, for me, is "the best of all worlds" so far.

  • You bring up a valid point. I really am just looking for something that gives me more redundancy (surviving two drive failures) and pools all the data into one partition.


    I have an Atom server, with the most intensive scenario being running all the services (such as downloads) while potentially streaming HD content to two people in the house. RAID6 solved this problem, but if there are easier-to-use solutions, as mentioned - why not.
    I am not too familiar with snapraid, and not entirely sure how it deals with having 2 parity disks...

  • Snapraid has full support for 2 (and more) parity disks. No worries there. Think of snapraid as a hybrid of RAID and snapshot-style backups, with the benefits of both and the flexibility of JBOD.
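
    Day-to-day it boils down to a couple of commands (names as per the snapraid manual; the disk name 'd1' is an example):

        snapraid sync       # update the parity to match the current data
        snapraid scrub      # periodically verify data/parity against silent errors
        snapraid fix -d d1  # recover the contents of the disk named 'd1' after a failure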


    Snapraid has a built-in pooling function but I have not tried it. I immediately went with mhddfs once I learned about it.


    snapraid and mhddfs are mutually compatible but completely independent of one another, and both of them are quite mature and stable - they "just work". The only possible negative of mhddfs is that it can be heavy on the CPU compared to alternatives such as aufs, but I run a single-core 1.8GHz Athlon 64 and it keeps up just fine. 15-minute load averages are around 0.02-0.05 when I am simultaneously torrenting stuff and streaming HD content.
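
    If you want to try mhddfs, the whole setup is basically one fstab line (a sketch - the mount points are placeholders, and mlimit is the free-space threshold at which it moves on to the next disk):

        # /etc/fstab: pool two disks into a single mount point
        mhddfs#/mnt/disk1,/mnt/disk2 /mnt/pool fuse defaults,allow_other,mlimit=4G 0 0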

    • Official post

    I don't recommend using the snapraid pooling function: it is read-only, and mhddfs and aufs are better.
