Adding disks to RAID 1 and can't grow the space available

  • Good afternoon,


    I am posting this in the RAID section, but as this is my first post on the forum, this might not be the right place, in which case I apologize in advance.


    I am encountering an issue with extending my existing RAID array and adding space to my filesystem.


I am running OMV 6.0.27-1 (Shaitan) on an x64 system, currently with 2 x 4 TB disks set up as RAID 1. OMV runs from an NVMe drive. The NAS is NOT connected to the Internet. There are 3 SMB shares running.

    This setup has been running without issues for more than a year now.


I have decided to add another pair of 4 TB disks in order to double the available space. They are the same model as the original disks.


    However, I can't get the RAID to expand, and subsequently I can't get the filesystem to expand either.

I have looked into the OMV documentation and on the Internet, but I cannot find any help related to this issue.


    Everything happens through the GUI.

Here is what happened:

- I have added the disks to the system, and they are recognized. I wiped them, and added them to the array through the GUI and the "Grow" option.

    > I select the array, and click on "Show details". The new disks are recognized as "spares"

- I go to File Systems, and click on "Resize"
    > Nothing happens. I get a popup where I have to confirm my decision, but it leads to nothing.


    - I then remove them from the array, wipe them, and "Grow" the array again.

    > I select the array, and click on "Show details". This time, the disks are recognized as active.

- I go to File Systems, and click on "Resize"

    > Nothing happens. I get a popup where I have to confirm my decision, but it leads to nothing.


- I remove 1 disk from the array; the status shows the array as "clean, degraded".

- I cannot remove the other disk, and am left with a 3-disk RAID 1.

- I wipe the disk I previously removed, and I am now in the process of growing the array back with this disk, with another 4 hours to go until it is completed.


Could you tell me what I am doing wrong? What should I be doing? What should I try?

Should my new disks be recognized as "Spares" or "Active"? Is everything working as intended, and am I just missing something very obvious?
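In case it helps, I understand the array state can also be checked from the CLI with something like this (/dev/md0 is an assumption on my part):

Code
# show which member disks are active and which are spares
# (assuming the array device is /dev/md0)
mdadm --detail /dev/md0
# quick overview of all arrays and their sync state
cat /proc/mdstat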


    I have (very) limited experience with Linux and CLI, but I am willing to try.

    I will try my best to give you the additional information you may need.


Thanks in advance for your help!

  • macom

Approved the thread.
That's correct, as it's Raid1. Raid1 is about mirroring, so you can have as many drives as you like; they will just mirror each other.

My understanding of RAID1 (and of the OMV documentation) was that as long as you added disks to the array in multiples of 2, you would gain available storage space equal to a single disk's capacity (i.e. you add two 4 TB disks, you gain 4 TB of space).


    Your answer makes me think that it's not how RAID1 works, and that I will be stuck with 1 available disk and 3 for redundancy.


In which case, how should I go about adding capacity to my file system?
Will I have to break the existing RAID array, or is there an upgrade path that keeps the shares and data as-is while adding capacity?



Edit: Should I just create a new RAID1 with the new disks, and add it to the existing file system? Would that work? Would the file system see the new capacity and make use of it?

• Official post

    Will I have to break the existing RAID array

Yes. If you want to increase the capacity you would have to move to Raid5. That is doable, but it would require removing the smb shares and shared folders and just recreating them; the data would still be intact.

    Your answer makes me think that it's not how RAID1 works, and that I will be stuck with 1 available disk and 3 for redundancy.

As I said, Raid1 is about mirroring: 2 drives are exact copies of each other; add another drive and it mirrors the other two, and so on.

Yes. If you want to increase the capacity you would have to move to Raid5. That is doable, but it would require removing the smb shares and shared folders and just recreating them; the data would still be intact.

    You are giving me some hope, thanks for that :)

I want to be extra clear on what you just said, to make sure I understood:

    - Any disks added to an existing RAID1 array will add redundancy, but nothing else
    - Adding capacity to an existing RAID1 is not possible

- Converting the RAID1 to RAID5 will add capacity, while keeping redundancy (I guess OMV will tell me, but I'm guessing the redundancy will be spread over 2 disks)

    - Conversion to RAID5 will not erase the existing data

- Once I have removed the SMB shares and shared folders in the GUI, will I have the option to upgrade to RAID5 in the "RAID Management" section? Or do I have to follow another procedure? If so, could you kindly direct me to a relevant guide for this?

• Official post

    - Any disks added to an existing RAID1 array will add redundancy, but nothing else
    - Adding capacity to an existing RAID1 is not possible

    Correct

    Converting the RAID1 to RAID5 will add capacity

    Conversion to RAID5 will not erase the existing data

You're not converting, you're moving to Raid5 :)

    while keeping redundancy

    This is the debatable point, Raid is about availability, not redundancy

    ___________________________________________________________________________________


This is what you're going to have to do; leave aside deleting the smb shares and the shared folders they're pointing to.


Remove 3 of the drives from the existing Raid1 (I'm assuming you have all 4 in a Raid1), create a Raid5 with the 3 drives, move the data from the Raid1 to the newly created Raid5, delete the Raid1, add the remaining drive from the Raid1 to the Raid5 to grow it, then grow the file system.


The above is the very, very, very basic concept; it does not require the user to reboot or pass go, and most of it can be done in the GUI.
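For reference, the same concept sketched as raw CLI steps; every device name below is an assumption (check yours with cat /proc/mdstat first), and the last command assumes an ext4 file system:

Code
# pull three members out of the Raid1 (repeat for sdc and sdd)
mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
# create a 3-disk Raid5 from the freed drives
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
# ...create a file system on md1 and copy the data across, then retire the Raid1...
mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sda
# add the last drive, reshape to 4 disks, then grow the file system
mdadm --grow /dev/md1 --add /dev/sda --raid-devices=4
resize2fs /dev/md1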

  • Correct

You're not converting, you're moving to Raid5 :)

    This is the debatable point, Raid is about availability, not redundancy

Thanks a lot :)
Small comment about the last point: I think I get what you mean.
Redundancy would be another (distant) storage that could be activated in case of (limited or total) failure (with data sync in real time or whatever is acceptable); here, RAID is a mechanism that improves the (local) availability of data by ensuring a way of maintaining access to the data even in case of (limited) failure.

Quote

This is what you're going to have to do; leave aside deleting the smb shares and the shared folders they're pointing to.

Remove 3 of the drives from the existing Raid1 (I'm assuming you have all 4 in a Raid1), create a Raid5 with the 3 drives, move the data from the Raid1 to the newly created Raid5, delete the Raid1, add the remaining drive from the Raid1 to the Raid5 to grow it, then grow the file system.

The above is the very, very, very basic concept; it does not require the user to reboot or pass go, and most of it can be done in the GUI.

Ohh, I see!
If I'm not mistaken, I can do the above, create a new shared folder on the RAID5, and do the moving from my main computer. I will then recreate the shared folders with the shares etc. from the GUI.

That's... plain awesome, thanks a lot!
    I was expecting a process that would be quite a bit more annoying :)

So I will be able to remove 3 disks straight from the RAID1; that's good news. I was a bit worried that this would not be possible.



Thanks a lot for your help and your explanations!
    I will update the thread once I am able to proceed with all this :)

• Official post

    I will update the thread once I am able to proceed with all this

    Come back when you're ready to proceed, as my explanation was very basic and you can do the data transfer via smb if that is more comfortable for you.

Redundancy would be another (distant) storage that could be activated in case of (limited or total) failure (with data sync in real time or whatever is acceptable); here, RAID is a mechanism that improves the (local) availability of data by ensuring a way of maintaining access to the data even in case of (limited) failure.

    Exactly

  • Come back when you're ready to proceed, as my explanation was very basic and you can do the data transfer via smb if that is more comfortable for you.

    Alright, the recovery is done, and all seems correct.

Here is what is shown under "More details" for the RAID1 (/dev/md0) array.
    As you can guess, sda & sdb are the original disks, with sdc & sdd being the new ones.


    Following your "very basic" (I'm quoting you there) explanation, in the GUI, should I go ahead and select "Remove" for this array, and select 3 (sdb, sdc, sdd) of the 4 drives?

    Following that, I should wipe those 3 disks; and once that is done, I should create a new array in RAID5. And then a new file system.


From your latest post, I understand there would be a way to do the copying entirely on the server...? In that case, I would very much like to do so instead of relying on Windows. If you can point me in the right direction, I would really appreciate it!

    FYI, I have installed WeTTY, so I should be able to proceed with some CLI.


Then do the copy, and finally remove the RAID1, wipe its disk, and grow the RAID5 array?



    Are those the main steps to follow? Am I missing something important?


Thanks again for your patience and your help!

If you have backed up to any other drives which are not in the Raid5 array, then (after the Raid5 has synced) you can make the ext4 (or whatever you want) partition on the raid. Then, after that's done, move the files over.


Noted, thanks!

As my backup is not on the server, that means I have to create the shared folders after the ext4 file system is in place, in order to move the files back from the backup.

Correct, but please make the partition only after the Raid5 has fully synced.
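If you want to check from the CLI, the sync progress is visible with a plain:

Code
# a recovery/resync progress bar is shown while the array is still syncing;
# it disappears once the array is fully synced
cat /proc/mdstat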


• Official post

    Alright, the recovery is done, and all seems correct

Sorry, I'd logged off.


Reading your #9, you seem to understand what to do. By removing 3 of the drives from the Raid1, the array will display as clean/degraded; that's fine.


There have been some issues creating an array using repurposed drives: for some reason a 'ghost' file system remains on the drives, especially if the drives were wiped using quick rather than secure. Once your Raid5 has synced, it may appear in File Systems; this is the 'ghost' file system. If this happens, ssh into omv and try wipefs -a /dev/md? (replace the ? with the md reference of the new array); this should wipe and clear any residual file system, and a new file system can be created. If this doesn't happen, then proceed as normal and create a file system.
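For example, assuming the new array came up as /dev/md127 (check the actual name in the output of cat /proc/mdstat):

Code
cat /proc/mdstat      # find the md name of the new array
wipefs /dev/md127     # list any residual file-system signatures
wipefs -a /dev/md127  # erase them all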


Moving/copying files: you could do this via smb or using wetty, the choice is yours. To use smb, create a shared folder called data, then an smb share pointing to that folder. Then, under that folder, create sub folders (either using Windows or Linux) for the shares you have on the Raid1, then copy the files from the Raid1 to the Raid5.

Once that's done, delete the smb shares, then the shared folders on the Raid1; then you can delete the Raid1, then wipe that drive and grow the Raid5, then its file system.


Finally, create the shared folders and smb shares the same as you had on the Raid1, then move the files from the sub folders under data to the new root folders; then you can delete the smb share and the shared folder called data.


You can do the above via wetty, but you'll have to use the full path (/srv/uuid etc.) to copy the directories.
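Something like this, as a sketch; both uuid paths below are placeholders (the real ones are visible under /srv or on the File Systems page):

Code
# copy everything, preserving ownership, permissions and timestamps
cp -a /srv/dev-disk-by-uuid-OLD-RAID1/. /srv/dev-disk-by-uuid-NEW-RAID5/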

Thanks for the pointers!

    I will proceed with those elements in mind.

  • Good evening,


Just a quick update:

RAID5 has been built, data has been transferred, and OMV is currently "reshaping" the RAID5 to accommodate the 4th disk (the ex-RAID1 one).

I will wait for the end of the reshaping to extend the file system (which, considering the time it took to build the empty RAID5, should take me well into tomorrow evening... well, OK, probably later than that).


Some comments about the process:

- I had some issues removing the 3 disks from the RAID1; after looking on the Internet, I used the commands "mdadm --fail" and "mdadm --remove" to do so (see the sketch after this list). The GUI would not let me remove more than 1 disk from the RAID1 array.

    - The "wipefs -a /dev/md?" command came in handy, as I couldn't create the file system once I built the RAID5. Thanks for mentioning it !

- RAID5 really takes a long time to build... and that was without any data. But I made sure to create the file system only after the RAID5 had been in the synced state for a while.

- After looking up how to do it, I decided that the data transfer would happen locally, so I did "cp -a" with the UUID paths, from one array to the other.

- As I am quite lazy, I just changed the mount points for the shares to the new locations on the RAID5 array. I am aware this might not be among best practices, but well... it works.
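For reference, the removal commands I used looked roughly like this (my actual device letters may have differed):

Code
# mark a member as failed, then pull it from the array; repeated per disk
mdadm /dev/md0 --fail /dev/sdb
mdadm /dev/md0 --remove /dev/sdb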


    And that's it :)

So thank you again for your help and advice throughout this process!
First time I've gone through this, and I learned a lot :)

I wish you a great time!

  • viggen

Added the "solved" label.
I know it does not help any longer, because you now have a RAID5.


RAID 1 is maximum protection, as you have a full mirror, but at the cost of space efficiency: only 50% of the capacity is usable.

RAID5 is better efficiency (75% usable with 4 disks), but at the expense of more CPU overhead.


What you asked for would be a combination of RAID 0 (no protection, just a stripe set for more performance over a set of disks) and RAID 1 (mirroring).


    And that is exactly what RAID 10 is.


Just for anyone else who ends up here, this would have been the procedure:


1. Convert your existing RAID 1 into a RAID10. This is a nondisruptive task, as nothing actually happens to the data.

2. Then add two disks to your RAID 10, where a disk is added to each of your mirrors, forming a stripe set (now two disks long); a command sketch follows below.
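Roughly, and assuming an mdadm/kernel recent enough to support the RAID1-to-RAID10 level change and RAID10 reshape (all device names are assumptions):

Code
# step 1: turn the 2-disk mirror into a 2-disk RAID10 (metadata-only change)
mdadm --grow /dev/md0 --level=10
# step 2: add the two new disks and reshape to a 4-disk RAID10
mdadm --add /dev/md0 /dev/sdc /dev/sdd
mdadm --grow /dev/md0 --raid-devices=4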


    So that would be what you asked for.


However, RAID5 is much more efficient and you get more space out of the 4 disks (12TB instead of 8TB).


There is also a procedure to convert a RAID1 into a (degraded) RAID5 online, then add the third drive, and then grow it further to 4 drives.

    A guide to mdadm - Linux Raid Wiki


    Upgrading a mirror raid to a parity raid

    The following commands will convert a two-disk mirror into a degraded two-disk raid5, and then add the third disk for a fully functional raid5 array. Note that the first command will fail if run on the array we've just grown in the previous section if you changed raid-devices to three. If you already have a 3-disk mirror configured as two active, and one spare, you will need to run the first command then just change raid-devices to three. The code will reject any attempt to grow an array with more than two active devices.

    Code
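# convert the two-disk mirror into a degraded two-disk raid5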
    mdadm --grow /dev/md/mirror --level=5
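# add the third disk for a fully functional three-disk raid5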
    mdadm --grow /dev/md/mirror --add /dev/sdc1 --raid-devices=3

    Everything is possible, sometimes it requires Google to find out how.
