How to extend 2x1TB (RAID 1) with 2x4TB?

  • Hey all,


    I think I might have made a mistake here: A couple of years ago I set up a RAID 1 OMV for my mom's photos (my first ever use of RAID and installation of OMV - so complete newb here). I used two 1TB WD Reds back then. In the meantime the 1TB has filled up, so I bought two 4TB WD Reds (the old ones) and thought I might just extend the partition of the RAID 1 so that I end up with 5TB each. I just added them in the RAID Management, so that now my /dev/md0 shows up with a capacity of 931.39 GiB, but with all four physical hard drives (sda, sdb, sdc, sdd). When going to the file system, I cannot "resize" the md0, though, as it will stick with the smaller hard disks, i.e. the 1TB ones.


    What is the way to go for me now? Should I go with another RAID config, e.g. RAID 10? What is probably the easiest way to achieve that? Do I have to back up all the data externally and remount all drives?


    Thanks for your help!

    • Official Post

    I have now read this 3 times and I'm still not understanding what you have written ?(


    thought I might just extend the partition of the RAID 1, so that I end up with 5TB each.

    No, not possible

    I just added them in the RAID Management, so that now my /dev/md0 shows up with a capacity of 931.39 GiB, but all four physical harddrives (sda, sdb, sdc, sdd).

    Even more confusing, again not possible.


    You cannot extend/expand a Raid1 from the original two drives. Using mismatched drive sizes in a Raid configuration will result in the Raid being configured based upon the smallest drive size.
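    As a quick sanity check of that 931.39 GiB figure: a RAID1's usable size is the size of its smallest member, and the /proc/mdstat output quoted later in this thread reports the array as 976631512 1-KiB blocks, which is exactly one 1TB drive:

```shell
# RAID1 usable size = size of the smallest member. The array reports
# 976631512 KiB-blocks (from the /proc/mdstat output later in the thread).
BLOCKS_KIB=976631512
GIB=$((BLOCKS_KIB / 1024 / 1024))    # KiB -> MiB -> GiB (integer division)
echo "md0 usable size: ~${GIB} GiB"  # ~931 GiB, i.e. one 1TB drive
```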

    What is now the way to go for me? Should I go with another RAID config, e.g. RAID 10

    Again not possible, mismatched drive sizes.


    Your only option is to create another Raid1 with the 2x4TB drives, or use mergerfs and snapraid; that way you would get approx. 6TB of storage (1+1+4) and the other 4TB as a parity drive for snapraid.
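    If you go the mergerfs/snapraid route, a minimal snapraid.conf sketch could look like this (the mount paths and labels are placeholders, yours will differ):

```
# Hypothetical /etc/snapraid.conf sketch: the 4TB drive holds parity,
# the two 1TB drives and the other 4TB hold data (pooled via mergerfs).
parity /srv/dev-disk-by-label-parity/snapraid.parity
content /var/snapraid.content
content /srv/dev-disk-by-label-data1/snapraid.content
disk d1 /srv/dev-disk-by-label-data1
disk d2 /srv/dev-disk-by-label-data2
disk d3 /srv/dev-disk-by-label-data3
```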

  • Thanks for your reply and sorry about the confusing text. In short: I added 2x4TB to a 2x1TB RAID 1 configuration, because I thought that this would work. I realized that it does not: even though it was possible to add the two 4TB disks to my RAID 1 in "Raid Management", it was not possible to "extend" the system's capacity under "File System".


    I have also understood that I cannot add the 4TB drives to the 1TB drives. But it is possible to just create a second Raid 1 based on the 4TB drives. So I might go with that solution.


    How should I now proceed with my two 4TB drives as they are listed in my RAID 1 under "Raid Management". Just deleting them out of the raid won't be sufficient, will it?

    • Official Post

    How should I now proceed with my two 4TB drives as they are listed in my RAID 1 under "Raid Management". Just deleting them out of the raid won't be sufficient, will it

    No, at this moment do not touch it!!!!!


    I have just completed a test on a VM, but how did you add those 4TB drives to that Raid1?

    • Official Post

    I have posted a query based upon testing in a VM; the thread is here. TBH, I would not have assumed what you have done to be possible, but it is.

    I can't see a logical way round this: your data has been written across all four drives. You can remove one drive, but you could not remove another, as the raid would then be clean/degraded.


    The only option I can suggest is to attach another drive and copy the data from the raid to the other drive, once you have done that you can sort the raid out.

  • Thanks for your efforts, geaves! And thanks for creating that query. The safest way to do it now is probably to copy all the data to another drive and then recreate the whole raid, isn't it? I would then go with one Raid with the 2x4TB and another with the 2x1TB.

    • Official Post

    Thanks for your efforts, geaves! And thanks for creating that query. The safest way to do it now is probably to copy all the data to another drive and then recreate the whole raid, isn't it? I would then go with one Raid with the 2x4TB and another with the 2x1TB.

    In answer to the first part of the above, yes; but I think there could be another way. Does the data exceed the size of the 2x1TB drives?

  • No, so far we only have around 800GB on the drives. I wanted to backup some other external drives, but the entire data exceeded the limit. That is why I bought the two 4TBs in the first place.

    • Official Post

    No, so far we only have around 800GB on the drives.

    :thumbup: that's good because it makes this easier, and you can do it all from your Windows or Linux PC.


    You need to make a note of the drive reference for one of the 4TB drives, so in Storage -> Disks make a note of /dev/sd[?] the ? will be either a, b, c, d, etc.


    Raid Management -> select the raid and click Remove; in the pop-up the drives in the raid will display. Select the one you made a note of and click OK; this will remove that drive from the array, and the array will show as clean/degraded.


    At this point check to make sure you can access the shares you have set up. If you can, proceed; if you can't, come back.


    Assuming shares are OK and accessible: Storage -> Disks, select the drive you have removed from the array, on the menu click Wipe, and from the pop-up click Short then Start; this will wipe the drive.


    When finished, go to File Systems, click Create on the menu, from the pop-up select the drive you have just wiped, give it a name (backup would be good), ensure that ext4 is the selected file system, and click OK; the drive will now format. Due to the size of the drive both the wipe and the format will take some time, the latter being the longest.
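    For reference, this is roughly what the web UI does under the hood for the remove/wipe/format steps. The commands below are only printed, not executed, and /dev/sdd is an assumption from this thread; double-check against Storage -> Disks before running anything:

```shell
# Sketch of the CLI behind the UI steps above. /dev/sdd is assumed to be
# the 4TB drive noted in Storage -> Disks; verify with `cat /proc/mdstat`.
# Commands are only printed here, not run.
DISK=/dev/sdd
REMOVE_CMDS="mdadm /dev/md0 --fail $DISK
mdadm /dev/md0 --remove $DISK"
WIPE_CMD="wipefs --all $DISK"            # the 'short' wipe: clear signatures
FORMAT_CMD="mkfs.ext4 -L backup $DISK"   # same as File Systems -> Create
printf '%s\n' "$REMOVE_CMDS" "$WIPE_CMD" "$FORMAT_CMD"
```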


    Once the format is finished, Shared Folders -> Add from the menu, add a folder, you know how to do this :) then add that to SMB.


    Now from your Windows or Linux machine open that folder and create sub-folders inside that match the folder structure on your raid, then simply copy the data from your raid to that 4TB drive.


    If this works come back and we'll start the rest, good luck :thumbup:

  • Thanks for the explanation! I already read it during the weekend but I wanted to make sure that I will have some time to do it so that I do not mess things up. Unfortunately the week has been pretty busy so far. But I hope I will manage to try it in the following days. I will keep you updated. Thanks again!

  • So, I just started the process, i.e. I removed one of the 4TB drives from the array under "RAID Management":

    However, I received an email from the server at the same time, stating the following:


    "This is an automatically generated mail message from mdadm running on server

    A FailSpare event had been detected on md device /dev/md0.

    It could be related to component device /dev/sdd.

    Faithfully yours, etc.

    P.S. The /proc/mdstat file currently contains the following:

    Personalities : [raid1]
    md0 : active raid1 sdb[0] sdc[2](S) sda[1]
          976631512 blocks super 1.2 [2/2] [UU]
          bitmap: 0/8 pages [0KB], 65536KB chunk

    unused devices: <none>"


    Should this happen? Should I now go on with wiping sdd?


    Thanks again for your help!


    edit: Ah, and so far I can access my network drive and all the data.

    • Official Post

    Should this happen?

    Yes and no; that was not what I was expecting. What I'm guessing has happened is that OMV/mdadm added those drives as spares, rather than actually adding them as active members ?(

    If you look at your image, the raid is in a clean state after removing that drive; but when I did this on my VM and removed a drive, the raid was clean/degraded and I couldn't remove another.


    I could suggest that you remove the second 4TB, but I have no idea now how this will behave. Looking at this output:

    md0 : active raid1 sdb[0] sdc[2](S) sda[1], the (S) suffix would suggest that sdc is the second 4TB and it's a spare, i.e. not physically active in the array.
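    The (S) flag in mdstat is what marks a spare; a tiny check against the line quoted from the mail above:

```shell
# In /proc/mdstat, a device listed as name[n](S) is a spare, not an
# active mirror member. Checking the line quoted from the mail:
LINE="md0 : active raid1 sdb[0] sdc[2](S) sda[1]"
case "$LINE" in
  *"sdc[2](S)"*) SDC_ROLE="spare" ;;
  *)             SDC_ROLE="active" ;;
esac
echo "sdc is: $SDC_ROLE"
```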


    The last thing I would want to happen is for this to blow away your data, so I would go with what I was assuming to be the best option; that way you know you can get the data off, then remove that second 4TB. If the raid stays clean and the data is still accessible, you can start by wiping the two 4TB drives and creating a second raid. If it blows up it doesn't matter, as you've got the data on that 4TB.


    Sounds confusing, it does to me as I wasn't expecting that outcome.

  • The sdc is indeed the second 4TB drive. I will now wipe the first 4TB that I removed from the array (sdd) and back up the data on it. Then I will remove the second 4TB (sdc) from the array, too, and will check whether the 2x1TB RAID is still working ('clean' has to show up). If that is the case, then I can create a second RAID 1 out of those two 4TB, right?


    If it works that way, won't it be sufficient just to create a second RAID 1 out of sdd and sdc? Won't that automatically copy all the data that I backed up before on sdd to sdc? This would be preferable for me, as I would then link the RAID with the 2x4TBs as the primary network drive for my mom and leave both 1TB drives (sda and sdb) untouched for the near future, so that I could be sure that what I have done actually works out.

  • I just restarted the server and now all the four drives sda, sdb, sdc and sdd appear again together under RAID Management, i.e. in the same field I screenshotted above. Is this normal behaviour? Should I just delete sdd again and then wipe it without restarting the server?


    • Official Post

    I was going to answer your first post, but the above is odd behaviour. When you removed it, did you wipe and format it? Removing it from the array and wiping it should remove the signatures and not, in essence, add it back automatically.


    But yes, remove it again, wipe it, create a file system, then reboot and see what it throws.

  • I was going to answer your first post, but the above is odd behaviour. When you removed it, did you wipe and format it? Removing it from the array and wiping it should remove the signatures and not, in essence, add it back automatically.


    But yes, remove it again, wipe it, create a file system, then reboot and see what it throws.

    Seems to have worked out. I am now locally duplicating the files from the RAID to the sdd drive. Should I, once finished, go ahead with the above, i.e. removing sdc from the array, wiping it as well, and creating a new RAID 1 out of sdc and sdd?

    • Official Post

    Should I - once finished - go ahead with the above written, i.e. deleting the sdc from the array, wiping it as well and creating a new RAID 1 out of sdc and sdd

    Only if the existing Raid 1 is intact. The idea of getting the data onto one of those drives was to ensure you had a backup. If the existing Raid 1 stays up after removing the second 4TB, then create the second array, but that means wiping both those drives again.


    If this were me, I would then copy the data from the existing array to the new 4TB array before rebooting.

  • Only if the existing Raid 1 is intact. The idea of getting the data onto one of those drives was to ensure you had a backup. If the existing Raid 1 stays up after removing the second 4TB, then create the second array, but that means wiping both those drives again.


    If this were me, I would then copy the data from the existing array to the new 4TB array before rebooting.

    Yes, I understood that this is now my backup in case there is a failure when I remove the second 4TB drive (sdc) from my Raid 1 array. And I am glad that it works and that I do not have to get another drive for the backup!


    So I will now do the following:

    1. Finish with the local backup on the removed 4TB drive (sdd).

    2. Remove and wipe the other 4TB drive (sdc) from the array.

    3. See whether the two 1TB drives are still working cleanly in the Raid 1 array.

    4. If this is the case, then I might create a new Raid 1 array out of the two 4TB drives. By doing this both drives will be wiped again.

    5. Recopy all files again from the first array to the new array based on the 4TB drives.
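    Sketching steps 2-5 as the underlying commands (printed only, not executed; the device names are taken from this thread, while /dev/md1 and the labels/paths are assumptions):

```shell
# Steps 2-5 above as the underlying commands, printed rather than run.
# /dev/sdc and /dev/sdd are from this thread; /dev/md1, the 'photos'
# label and the mount paths are assumptions for illustration.
PLAN="mdadm /dev/md0 --remove /dev/sdc
wipefs --all /dev/sdc
wipefs --all /dev/sdd
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mkfs.ext4 -L photos /dev/md1
rsync -a /srv/dev-disk-by-label-raid/ /srv/dev-disk-by-label-photos/"
printf '%s\n' "$PLAN"
```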


    Might there be the option to just create an array out of the already existing data on sdd by adding the sdc drive? Or does this not work, or at least not reliably?

    • Official Post

    Might there be the option to just create an array out of the already existing data on sdd by adding the sdc drive

    No, the drives need to be clean to create the second array. Your points 1-5 look OK; this seems 'around the houses', but it's the only way I can suggest to ensure you don't lose any existing data.


    But it will keep you busy :) good luck

  • No, the drives need to be clean to create the second array. Your points 1-5 look OK; this seems 'around the houses', but it's the only way I can suggest to ensure you don't lose any existing data.


    But it will keep you busy :) good luck

    Okay, then I will just follow that approach. And if there is a problem with the first array, I will just recreate a new Raid 1 out of the two 1TB drives, recopy all the data from my 4TB backup drive, and then follow steps 4 and 5.


    Another question: I am atm copying all the files via SCP, as I only want to duplicate them locally. I am doing this via WinSCP, which times out while copying bigger folders because the server stops responding during the transfer. This is okay as long as I am sitting at my PC working on other stuff, because then I can keep the program alive every now and then and prevent it from disconnecting from my server. But when I step away from the PC, it will re-establish the connection after a while and thereby interrupt the copying process. Is there another way to copy all the files with one click/command overnight?
