Newb: Did I destroy my RAID 5...??

  • Hi,


    I'm hoping I haven't done something irretrievable...


    I had a 4x2TB RAID 5 setup running fine on OMV5 (5.5.3-1). A disk started to fail and the RAID became "Degraded".


    I determined which serial number belonged to the bad disk (see the note after this list).

    Shut the system down.

    Replaced the disk.

    Started up again.

    Did a "Quick wipe" on the new disk

    Expected to be able to rebuild the RAID and carry on.
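
    (For the record, this is roughly how a serial number can be matched to a device node. The smartctl variant assumes the smartmontools package is installed, and /dev/sda here is only a placeholder, not taken from my actual setup.)

    Code
    # List every block device with its size, model and serial number
    lsblk -o NAME,SIZE,MODEL,SERIAL

    # Or query one drive directly (requires the smartmontools package;
    # /dev/sda is a placeholder for the suspect drive)
    smartctl -i /dev/sda | grep -i serial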


    The RAID 5 I had set up is no longer on the list in the "Raid Management" section of OMV.

    In "File systems", the file system I had set up is listed as "Missing"


    If I try to set up a new RAID5, there are no disks available to select.


    Can I rescue my original RAID array, Zulu? Looking at the output below, it seems to still be there...?


    Code
    cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
    md0 : inactive sdd[3](S) sdb[1](S) sdc[2](S)
          5860147464 blocks super 1.2
           
    unused devices: <none>
    Code
    blkid
    /dev/sdb: UUID="e722afd9-5803-5460-ce00-e63017883000" UUID_SUB="5667c0d5-4cec-a644-36a3-e641ec176a46" LABEL="openmediavault.local:zulu" TYPE="linux_raid_member"
    /dev/sdc: UUID="e722afd9-5803-5460-ce00-e63017883000" UUID_SUB="d49ed9a5-6400-f405-ea4d-0601f2e60642" LABEL="openmediavault.local:zulu" TYPE="linux_raid_member"
    /dev/sde1: UUID="173d1141-65e9-4ee1-ae31-b73d34f7b2cf" TYPE="ext4" PARTUUID="9d8e1096-01"
    /dev/sde5: UUID="6947d5ca-f259-4fe7-be54-5b945620213c" TYPE="swap" PARTUUID="9d8e1096-05"
    /dev/sdd: UUID="e722afd9-5803-5460-ce00-e63017883000" UUID_SUB="21ba9f1c-88ed-e1ea-944d-71ba862a1252" LABEL="openmediavault.local:zulu" TYPE="linux_raid_member"
    /dev/sda1: PARTUUID="47bc5bbc-b468-49e3-a4ef-608d2bd5d20b"
    Code
    mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md0 num-devices=3 metadata=1.2 name=openmediavault.local:zulu UUID=e722afd9:58035460:ce00e630:17883000
       devices=/dev/sdb,/dev/sdc,/dev/sdd
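
    (Side note: before touching anything, the RAID superblock on each member disk can be inspected directly. A minimal check, assuming the members are still sdb, sdc and sdd as shown above:)

    Code
    # Print the mdadm superblock stored on each member disk;
    # the Array UUID, RAID level and event counts should match across members
    mdadm --examine /dev/sd[bcd]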
    • Official post

    mdadm --stop /dev/md0


    mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcd]


    The RAID should display as clean/degraded in Raid Management; if it shows as rebuilding, wait until it's finished.


    Raid Management -> select the RAID, click Recover on the menu; a dialog should pop up showing the new drive. Select it, click OK, and the RAID should start rebuilding.
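
    (For reference, a rough command-line equivalent of that Recover step, assuming /dev/sda is the new disk and /dev/md0 is the array; this is generic mdadm usage, not necessarily what the OMV WebUI runs behind the scenes.)

    Code
    # Add the replacement disk to the degraded array;
    # mdadm starts rebuilding onto it automatically
    mdadm --manage /dev/md0 --add /dev/sda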

  • Geaves,


    Really appreciate your help...


    This is what I did:

    But when I try to "Recover" the RAID, I don't see any options in the popup window. See attachment. Should my new disk be in there?

    • Official post

    Yes, as far as I can remember...

    :) If it gave this output, it doesn't make sense:


    Code
    cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdd[3](S) sdb[1](S) sdc[2](S)
          5860147464 blocks super 1.2

    unused devices: <none>


    Inactive should not have been the output. mdadm --detail /dev/md0 would have given more detail on the array; it would have shown all the drives, including sda. Inactive occurs when a drive is 'pulled' from an array, as happened when you shut down and removed the offending drive: mdadm didn't know what had happened to the drive, so it throws the array into an inactive state.
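
    (For anyone reading later, the command looks like this; the lines worth checking are State, Failed Devices and the device table at the bottom, which lists each slot as active sync, removed, spare or faulty.)

    Code
    # Per-array view: array state, device counts and one row per slot
    mdadm --detail /dev/md0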


    :thumbup: to you for reading the raid section and posting the information.

  • I must have seen it somewhere else... I'm still not sure what I should have done to replace the drive cleanly though...?


    (I try really hard to find solutions to my issues in forums... Usually I find them immediately after I've posted my question! :) I did stumble over your (?) post requiring that info though...)


    S

    • Official post

    I'm still not sure what I should have done to replace the drive cleanly though

    If the drive was showing errors and needed replacing: Raid Management -> select the raid, click Delete on the menu, select the drive, click OK and the drive is removed; then proceed to add the new drive. All of this can be done via the WebUI, no command line necessary.
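
    (For completeness, a sketch of the same replacement from the command line, assuming /dev/sda is the failing member of /dev/md0; this is the generic mdadm procedure, not necessarily what the WebUI runs underneath.)

    Code
    # Mark the failing disk as faulty, then remove it from the array
    mdadm --manage /dev/md0 --fail /dev/sda
    mdadm --manage /dev/md0 --remove /dev/sda

    # Shut down, swap the physical disk, boot, then add the replacement
    mdadm --manage /dev/md0 --add /dev/sda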

    My Zulu file system is Unmounted. Is it safe to mount it whilst it's rebuilding?

    :/ I would wait until it has finished.
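
    (An easy way to see when the rebuild has finished, if you want to check before mounting:)

    Code
    # Shows a progress bar and ETA while recovery is running;
    # once finished, a healthy 4-disk array shows [4/4] [UUUU]
    cat /proc/mdstat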

    If the drive was showing errors and needed replacing: Raid Management -> select the raid, click Delete on the menu, select the drive, click OK and the drive is removed; then proceed to add the new drive. All of this can be done via the WebUI, no command line necessary.

    Blimey... I'd never have had the stones to select the RAID and then hit Delete if you hadn't told me! :) Many thanks! I'll know for next time :)


    S
