Re-add a drive that was removed from a RAID6 array

  • Hi all,
    I need help. I'm a newbie at RAID management.
    I created a RAID6 array with 5 disks. Today I inadvertently cut the power to one disk, and mdadm removed that drive from the array.
    Now I have a degraded RAID6 array with 4 disks.
    The removed disk is fine, so I want to re-add it to the array.
    I can't use the recover function of the OMV web UI, as I already explained in the last comment of issue http://bugtracker.openmediavault.org/view.php?id=672 (where we discussed another problem that caused disks not to appear in the list of devices available to create/extend/recover a RAID array).
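
    For reference, the degraded state can also be confirmed from the command line (the array is /dev/md127 in my case):


    Code
    # /proc/mdstat reports the array as [5/4] with a missing slot, e.g. [UUU_U]
    cat /proc/mdstat
    # Detailed view of the array, including which slot is marked as removed
    mdadm --detail /dev/md127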


    I did a search on the net and found that the --re-add parameter to mdadm should do the trick. However, when I try to issue the following command:


    Code
    mdadm /dev/md127 --re-add /dev/sde


    (/dev/sde is the device that was removed from the RAID array)
    I get the following error message: "mdadm: Cannot open /dev/sde: Device or resource busy"


    Why does this happen? /dev/sde isn't mounted (obviously). Moreover, it's not in the RAID array any more, so I would be surprised if it were still locked by mdadm.
    Do I have to unmount /dev/md127? Do I have to stop the array before re-adding the missing drive?
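
    For reference, a few commands that can show what is currently claiming a disk (/dev/sde in my case):


    Code
    # All md arrays the kernel knows about, including inactive ones
    cat /proc/mdstat
    # Read the md superblock on the disk to see which array it belongs to
    mdadm --examine /dev/sde
    # Check whether the disk or one of its partitions is mounted or otherwise in use
    lsblk /dev/sde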


    Thanks in advance for any help.

  • I found the solution myself. If anyone encounters a problem like this, here it is. I first checked the status of the md devices.
    (Please note that after the reboot, the /dev/sde that was missing from my RAID6 array had become /dev/sdb.)
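
    A plain /proc/mdstat listing is enough for this check, since it also reports inactive arrays:


    Code
    cat /proc/mdstat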



    This revealed that the /dev/sdb (formerly /dev/sde) device had been added to another, inactive RAID array called /dev/md126. So the solution was this:


    Code
    root@openmediavault:~# mdadm --stop /dev/md126
    mdadm: stopped /dev/md126
    root@openmediavault:~# mdadm /dev/md127 --re-add /dev/sdb
    mdadm: re-added /dev/sdb


    Now /dev/md127 is recovering:


    Code
    root@openmediavault:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid6 sdb[3] sdd[0] sde[4] sdc[2] sdf[1]
          175842816 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/4] [UUU_U]
          [=>...................]  recovery =  6.9% (4086448/58614272) finish=65.4min speed=13875K/sec
    
    
    unused devices: <none>


    Please note that the web UI of OMV didn't say anything about /dev/md126: it listed only /dev/md127. This is why I didn't realize what was happening before today.
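
    Besides /proc/mdstat, mdadm itself can list every assembled array together with its member devices, independent of what the web UI shows:


    Code
    # One ARRAY line per assembled md device; --verbose adds the member devices
    mdadm --detail --scan --verbose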

  • Thank you for your findings.


    For some reason my OMV removed one of the drives from the RAID6 pool.

    Using the commands you listed, I was able to re-add the drive to the pool, and it is now recovering.
