Sanity check for my mdadm configuration update plan

  • Hello,


    Over the years, my NAS has grown and disks of different sizes have been added. I would like to update my current mdadm setup to make use of all the hard drives I currently have. Before committing to these changes, I would be grateful if someone familiar with mdadm could give feedback.


    I am now quite familiar with mdadm for daily operations (adding a drive, growing an array, failing and replacing a drive), but I have never tried unusual setups like the one I have in mind. I don't mind having to rebuild the array multiple times (because of the disk changes), but I would like to avoid copying the data around manually.


    Here is the plan with schemas. I tried to make it as simple and visual as possible.


    Current setup


    I have 4× 6 TB and 2× 12 TB drives. One 12 TB is unused at the moment. All the other drives are in a RAID 5 array, with a total usable capacity of 24 TB (half of the 12 TB is "wasted" here).

    The situation looks like this:
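    For reference, this is how I check the current state from the command line. A quick sketch, assuming the array is /dev/md0 (device names here and below are placeholders for my actual ones):

```shell
# Summary of all md arrays and any running resync/rebuild
cat /proc/mdstat
# Per-array view: level, member devices, state, degraded or clean
mdadm --detail /dev/md0
# Drive sizes and how they map to the array
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
```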


    Step 1 - Swap one 6 TB for the spare 12 TB


    This is a no-risk step that leads to the following situation.
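    A sketch of the commands I have in mind for this step, assuming the spare 12 TB is /dev/sdg and the outgoing 6 TB is /dev/sdb (placeholder names):

```shell
# Add the spare 12 TB to the array, then clone the 6 TB onto it.
# --replace keeps full redundancy during the copy, unlike a
# fail-then-rebuild, which is why this step carries no risk.
mdadm /dev/md0 --add /dev/sdg
mdadm /dev/md0 --replace /dev/sdb --with /dev/sdg
# When the copy completes, the old drive is marked faulty; remove it:
mdadm /dev/md0 --remove /dev/sdb
```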


    Step 2 - Fail one 6 TB from the RAID 5 and create a RAID 0


    Remove one 6 TB from the RAID 5 and create a new RAID 0 with the spare 6 TB plus the one I just removed. This will leave the setup with a degraded RAID 5 and a 12 TB RAID 0.
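    Sketched with placeholder device names (/dev/sdc is the 6 TB still in the RAID 5, /dev/sdb the spare 6 TB from step 1):

```shell
# Fail and pull one 6 TB out of the RAID 5 (array becomes degraded)
mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
# Clear the old superblocks before reusing both 6 TB drives
mdadm --zero-superblock /dev/sdb /dev/sdc
# Stripe the two 6 TB drives into a single 12 TB RAID 0
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
```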


    Step 3 - Add the RAID 0 as a member of the RAID 5


    The 12 TB volume created with the RAID 0 is added to the degraded RAID 5. The RAID 5 is rebuilt to a clean state.
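    As I understand it, the RAID 0 device itself simply becomes a member of the RAID 5. A sketch, again with my placeholder names:

```shell
# Add the 12 TB RAID 0 (/dev/md1) as a member of the RAID 5 (/dev/md0)
mdadm /dev/md0 --add /dev/md1
# The degraded RAID 5 now resyncs onto the new member
watch cat /proc/mdstat
# Record both arrays so they assemble at boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```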


    Step 4 - Buy a new 12 TB and swap one 6 TB for the new 12 TB


    This is the same as step 1 and results in the following situation. The new 12 TB drive is shown in red here.


    Step 5 - Create a new RAID 0 with the remaining 2x 6 TB


    Basically, repeat steps 2 and 3 to reach this final configuration. The array can then be grown, as all the members will be 12 TB in size.
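    The final grow step, sketched with assumed names (second RAID 0 is /dev/md2, filesystem is ext4 directly on /dev/md0):

```shell
# After the last 6 TB pair becomes /dev/md2 and has joined /dev/md0:
# tell md to use the full 12 TB of every member...
mdadm --grow /dev/md0 --size=max
# ...then grow the filesystem into the new space
resize2fs /dev/md0        # ext4; use xfs_growfs for XFS
```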



    Rationale / Things I already know

    1. I want to make use of all the drives I have, without buying many additional drives (one 12 TB is acceptable). The final setup seems optimal in that sense
    2. RAID is not backup: I have backups of important data somewhere else, and I won't come crying here if I mess up :)
    3. Each step will require the array to rebuild parity, which takes time
    4. This setup is not possible via the OMV web UI, only via the mdadm command line, according to this post (I am fine with that)
    5. I heard about LVM (Logical Volume Manager) but it does not seem to be a good fit here: it adds complexity for no good reason
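    To double-check point 1, here is the capacity arithmetic I used, as a small sketch (sizes in TB; `raid5_usable` is just a helper name I made up):

```python
def raid5_usable(member_sizes):
    """RAID 5 usable capacity: every member is truncated to the
    smallest one, and one member's worth of space goes to parity."""
    return (len(member_sizes) - 1) * min(member_sizes)

# Current array: 4x 6 TB plus one 12 TB (half of it unusable)
print(raid5_usable([6, 6, 6, 6, 12]))      # -> 24 (TB)

# Final plan: 3x 12 TB drives + 2 RAID 0 pairs of 6 TB (12 TB each)
print(raid5_usable([12, 12, 12, 12, 12]))  # -> 48 (TB)
```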


    Questions I still have

    1. Are the plan and final setup sane? Having written it out and explained it, it doesn't look like a stupid idea to me (but maybe I missed something)
    2. Are there common pitfalls I should be aware of? This kind of setup is not well documented, from my research, or maybe I don't have the right keywords
    3. Are there resources that I should read before proceeding?

    Any feedback is appreciated and if you had a similar experience I would like to hear about it!

  • macom

    Approved the thread.
  • I personally would not mix and match drive sizes the way your plan lays out, creating a very complicated hybrid type of array. It would be much safer in the long run to use the 6 TB drives in one array and the 12 TB drives in another. This would make drive replacement much simpler, and consequently much safer, than your proposed configuration. While the configuration is theoretically possible, the hybrid complexity is something to consider.


    The rule of thumb with a RAID is to have all drives the same size, preferably all NAS or RAID rated, and if possible with the same firmware on all the drives. (I personally currently run a 6 × 4 TB RAID 5 and a 2 × 10 TB RAID 1.) With that said, I would not have even added the 12 TB drive to the array knowing I would not be able to use half of it; doing so goes against that rule of thumb.


    Reshaping a RAID is also a risky venture, and I would never try it without a full backup of the data. I have had it fail on me before, so this is not said without experience. If you are going to do a full backup before making any changes, it is also a lot faster to do a complete restructure and then just restore the backup: each step in your proposed configuration would, as you said, require a full rebuild, which would take significantly longer than a single restore.

    Asrock B450M, AMD 5600G, 64GB RAM, 6 x 4TB RAID 5 array, 2 x 10TB RAID 1 array, 100GB SSD for OS, 1TB SSD for docker and VMs, 1TB external SSD for fsarchiver OS and docker data daily backups

  • I agree with BernH; I would aim for 4 × 6 TB RAID 5 + 3 × 12 TB RAID 5, with 42 TB usable. That means buying the additional 12 TB now, degrading your existing RAID 5 by removing the 12 TB, copying all the data to a newly built 3 × 12 TB RAID 5, and then destroying and rebuilding the old array from the 6 TB drives.
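    A rough sketch of that migration, with placeholder device names (/dev/sd{e,f,g} for the three 12 TB drives, /dev/sd[a-d] for the 6 TB ones, and made-up mount points):

```shell
# Build the new 3x 12 TB RAID 5 from the bought drive, the spare,
# and the 12 TB pulled out of the old (now degraded) array
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sde /dev/sdf /dev/sdg
mkfs.ext4 /dev/md1
# Copy everything over (preserving hardlinks, ACLs and xattrs),
# then tear down the old array and re-create it as 4x 6 TB RAID 5
rsync -aHAX /srv/old-array/ /srv/new-array/
mdadm --stop /dev/md0
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[a-d]
```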

  • juliusone

    Added the Label resolved
