Changing to bigger data disks - Was 'How to Fail a Raid 5 drive in the GUI'

  • Hardware is HP N54 microserver


    I am facing the perennial problem of replacing the disks in my Raid 5 array because I'm running out of capacity.


    The existing disks are 4 x 2TB Western Digital drives that have been installed for 5 years. The array is at 80% utilisation. I want to replace them with 4 x 8TB Toshibas.


    I see in various threads discussing RAID 5 the need to 'fail' one of the discs, causing the array to be degraded. Other references suggest that this can be done by selecting a disk and failing it in the GUI, but I can't see how that can be done.


    The only other way I can see of doing this is to power down the server and remove one of the discs. The array would then, on power-up, presumably be there in a degraded state, hopefully without errors (clean). If I then fit the shiny new disc, will it start to rebuild itself, or is there something else I need to do? Can I fit the new disk immediately after removing the old one, or is it necessary for the system to reconfigure itself in the degraded mode before fitment?


    I understand that the method would have to be repeated for the remaining old disks after recovery and finally to grow the array to take advantage of the new space.


    I recognise the possibility that I'm over thinking this and would welcome some guidance.


    Doug

    • Official Post

    It would be easier and safer to replace the four hard drives with the new ones, create a new array, and copy the data from the backup to the new array. Do you have a backup?

    • Official Post

    Hardware is HP N54 microserver

    Same hardware


    I see in various threads that discuss Raid5 the need to 'fail' one of the discs causing the array to be degraded. Other references suggest that this can be done by selecting a disk and failing it under the GUI. I can't see how that can be done

    There should be an option in the GUI to remove/delete a drive from an existing array. The backend command will fail and remove the drive from the array, so you can shut down, remove and replace the drive, and the array will display as clean/degraded in the GUI.
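    If the control isn't visible in your version of the GUI, the same thing can be done from the command line. A minimal sketch, assuming the array is /dev/md0 and the member to replace is /dev/sdd (hypothetical device names; check yours with cat /proc/mdstat):

    ```
    # Hypothetical device names; verify with: cat /proc/mdstat
    mdadm --manage /dev/md0 --fail /dev/sdd     # mark the member as faulty
    mdadm --manage /dev/md0 --remove /dev/sdd   # detach it from the array
    mdadm --detail /dev/md0                     # should now report "clean, degraded"
    ```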


    The only other way I can see of doing this is to power down the server and remove one of the discs. The array would then, on power up, presumably be there in a degraded state hopefully without errors (clean). If I then fit the shiny new disc will it start to rebuild itself or is there something else I need to do

    No, mdadm is not hot-swap like a hardware RAID. Do the above and the array will disappear from the GUI as it's now inactive; you'll have to reinitialise it (from the CLI) before adding a new drive.
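    Reinitialising from the CLI might look something like this (a sketch, assuming /dev/md0 with surviving members sda/sdb/sdc and a new drive sdd; your device names will differ):

    ```
    mdadm --assemble /dev/md0 /dev/sda /dev/sdb /dev/sdc --run   # start the degraded array
    mdadm --manage /dev/md0 --add /dev/sdd                       # add the new drive; rebuild starts
    cat /proc/mdstat                                             # watch rebuild progress
    ```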

    Each new drive will need to be wiped before adding it to the array. The array can only be grown/expanded once all 4 x 8TB drives are installed; once that's done, expand the file system.
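    As a sketch of that last stage, once all four 8TB drives are in place (device names hypothetical, and this assumes an ext4 file system sitting directly on /dev/md0):

    ```
    wipefs -a /dev/sdd                 # clear old signatures on each new drive before adding it
    mdadm --grow /dev/md0 --size=max   # let the array use the full capacity of the new members
    resize2fs /dev/md0                 # then grow the ext4 file system to match (works online)
    ```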

  • Hi,

    Thanks for responding.


    (1) The whole reason for the post is that I can't find such a control in the GUI for Ver 7.x. Hence the thought about removing a disk. I have every intention of doing so in a controlled manner that doesn't damage anything. Where is the control in your installation?


    (2) I also realise that the HP is not hot-swap and I'm not going to risk damaging anything. I was hoping, however, that by powering off and then rebooting, it would detect the missing disk and realise that the new one was a replacement. I can wipe the new disk from the GUI. Will that let me designate the new disk as part of the array?

  • On consideration, I'm wondering whether it might be better to clone the disks in the array using Clonezilla or some other variant. It would be a complete pain and take forever, but might be less of a risk.


    I would still prefer Plan A (I think).

    • Official Post

    From what I see in this conversation, I assume you don't have a backup.


    The procedure you're going to follow is risky. You'll be synchronizing the array several times in a row without any redundancy, since each time you'll be removing a drive from the existing array. You'll be stressing the old hard drives multiple times in a row. If something goes wrong, you could lose your data.


    So I suggest a "safety buffer":


    1 - Connect one of the four new 8TB drives to any PC on your local network. Format it with any file system and back up all your data to it over the network. Since the array's current capacity is 6TB, all your current data will fit on a single hard drive.


    2 - Once you've done that, begin replacing the drives one by one. Leave the drive where you copied the data until last. If something goes wrong during the first three synchronizations, you'll always have that backup.


    3 - After performing the first three synchronizations, erase the disk containing the backup, remove the last old disk, and replace it with the disk containing the backup to add it to the RAID array. At that point, the array will consist of new disks, which at least provides some assurance that the final synchronization won't fail.
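    Step 1 above could be done over the network with rsync, for example (a sketch only; the hostname and both paths are illustrative, not real):

    ```
    # On the PC holding the 8TB buffer disk, mounted at /mnt/buffer (illustrative paths)
    rsync -aHAX --progress root@nas:/srv/raid/ /mnt/buffer/
    ```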


    This minimizes the risk of any of the old disks failing. In any case, remember that RAID is not a backup. --> https://www.raidisnotabackup.com/

    • Official Post

    Plan B


    - Once you have the first backup, remove all the old drives from the server.

    - Insert the three remaining 8TB drives and create a new RAID array.

    - Copy the data from the backup drive to the new RAID array.

    - Once copied, connect this new drive to the server and expand the RAID from 3 to 4 drives.
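    The whole of Plan B sketched as commands (device names are hypothetical, and this assumes ext4 directly on the array):

    ```
    # Three new drives form the initial array (hypothetical device names)
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    mkfs.ext4 /dev/md0                        # new file system on the new array
    # ...copy the data back from the backup drive, then add it as the fourth member:
    wipefs -a /dev/sde                        # clear the backup drive's old signatures
    mdadm --manage /dev/md0 --add /dev/sde
    mdadm --grow /dev/md0 --raid-devices=4    # reshape from 3 to 4 drives
    resize2fs /dev/md0                        # grow the file system into the new space
    ```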


    You'll always have the old drives stored away, ready to be returned to the server with the original RAID array intact. This is even more secure than the previous plan.

    • Official Post

    I'll shut the f* up then :D :D :D :D


    Actually that's a very good approach, I was also betting there was no backup

  • Hi,


    Thanks also for your help.


    No, I made a backup of all the data before I started since I realised that it could all go wrong very easily.


    I also upgraded to 7.x, so there was even more incentive to have a method of retrieving the situation if necessary. I was, and am, not confident in reconfiguring the system disk such that everything continues to work, but if necessary I could do it all over. I naively thought that it would be simpler to maintain the existing config rather than start again.


    I take your point over keeping the old disks safe.


    I have decided to take the low-risk approach of cloning the old disks to the new disks offline, then adjusting the partitions and growing them.


    Doug

    • Official Post

    I made a backup of all the data before I started

    So there's something I don't understand. If you already have a backup, why not simply create a new RAID array with the new hard drives and copy the data to that new array from the backup? That would be the simplest solution.

    • Official Post

    All you need to do next is edit the existing shared folders and point them to the new location. Everything will be working again in 5 minutes.

  • That's a good idea.


    I had not thought of that because I haven't had the experience.


    I'll give that a try. At worst I may still have to clone but if it is as simple as you say then it will save a lot of time and effort.


    I'll let you know what happens.


    Doug

  • If you already have a backup, it is as simple as chente says. I have done that process every time I have swapped drives. It is also way safer and faster than doing multiple resyncs. If you think about it, you will actually have 2 backups, since the old array is not touched and can be reinstalled if required, plus you have the actual backup.


    For the first swap I was able to copy from the old array to the new one directly, because I had enough ports to have both active at the same time. After that, ports were not available because the first swap made a new array from more drives, so now it's an rsync backup that is run one final time just before shutting down to replace the drives. Then, when the new array is made, rsync the data back.
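    That final sync before shutdown is just an ordinary rsync mirror. A self-contained demonstration with throwaway directories (on the real system the two paths would be the array and backup-disk mount points):

    ```shell
    # Throwaway source and destination stand in for the array and backup mounts
    SRC=$(mktemp -d)
    DST=$(mktemp -d)
    echo "important data" > "$SRC/file.txt"

    # -a preserves permissions, times and ownership; --delete mirrors removals too
    rsync -a --delete "$SRC/" "$DST/"

    cat "$DST/file.txt"
    ```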


    As chente said, fix your shares and any docker compose files to reflect the new uuid of the array and you are up and running.
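    Finding the new array's UUID for those edits can be done with blkid (a sketch; the path pattern below is how OMV names its mounts, but your UUID will obviously differ):

    ```
    blkid /dev/md0          # prints the new file system's UUID
    ls /srv/                # OMV mounts it under /srv/dev-disk-by-uuid-<UUID>
    # repoint shared folders in the GUI, and any docker-compose volume paths, to the new path
    ```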

    Asrock B450M, AMD 5600G, 64GB RAM, 6 x 4TB RAID 5 array, 2 x 10TB RAID 1 array, 100GB SSD for OS, 1TB SSD for docker and VMs, 1TB external SSD for fsarchiver OS and docker data daily backups

  • I thought I would provide an update on this because, although I'm making progress, I'm not there yet. Partly because of the problem and partly because I had a hardware failure of the machine I was using to access the server. I had a PSU failure and yes, of course, that was the machine that I had the server backups on.


    Anyway, I installed the new disks in the server then wiped each of them and formatted them to EXT4. That took about 1.5 hours for each disk.


    I then discovered that, after previously upgrading to 7.x, the Multiple Device (multi-disk) plugin was missing. I reinstalled it and set it off creating the array using all four of the disks. That then took about 15 hours to resync and left me with this message in Multiple Device:


    /dev/md0 (clean, RAID 5, 16.37 TiB): /dev/sda, /dev/sdb, /dev/sdc, /dev/sdd


    I then mounted md0 in File Systems and it now shows as mounted but not referenced. In addition, I have an entry for an unnamed device that is showing as referenced but not mounted. I can't see how to remove that either, but I may be able to get rid of it when I have the shared folders pointing at the correct location.


    I did then change the shared folders and pointed them at the new /dev/md0 .


    On checking, I find that the shares are available on the network, so it must have worked.


    Thanks again to those that gave their time to helping. It really is very much appreciated. For myself, I'm finding that I just don't try things frequently enough to be able to do this without the hand-holding that people on the forum generously provide.


    Doug

  • dougdee123

    Added the Label resolved
  • dougdee123

    Changed the title of the thread from “How to Fail a Raid 5 drive in the GUI” to “Changing to bigger data disks - Was 'How to Fail a Raid 5 drive in the GUI'”.
