RAID array missing after a failed drive; trying to rebuild

  • Hi,


    I recently had a drive fail in my RAID 5 array. I had hoped I would still be able to use the array with a missing drive until the replacement arrived, but OMV seems to have lost the array. I decided not to worry about that for the time being, as the replacement drive was due soon.


    My new drive has now arrived and I have put it in the NAS box.


    I am now trying to rebuild the array, but I am not having much luck.


    Here is the Degraded Array Info from the Support Info tab in OMV:


    I have tried running

    Code
    mdadm --assemble /dev/md127 /dev/sd[bcd] --verbose --force

    with the following output

    Code
    mdadm: looking for devices for /dev/md127
    mdadm: no RAID superblock on /dev/sdc
    mdadm: /dev/sdc has no superblock - assembly aborted


    Does anyone know the best way to get this array back up and running without losing any data?


    Many thanks in advance
    Chewie

    • Official post

    You forgot the most important output: cat /proc/mdstat

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Sorry about that. Here you go:

    Code
    Personalities : [raid6] [raid5] [raid4]
    md127 : inactive sdb[0] sdd[2]
          5860531120 blocks super 1.2
    
    
    unused devices: <none>
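    (As an aside: the mdstat dump above can be read mechanically. The following is a minimal Python sketch, not anything from the thread; the helper name parse_mdstat is made up, and real mdstat output has more variants than this handles.)

    ```python
    import re

    def parse_mdstat(text):
        """Parse a /proc/mdstat dump into {array: (state, [member disks])}.

        Sketch only: covers simple "mdN : state dev[n] dev[n]" lines like
        the one above, not bitmap/resync lines or exotic layouts.
        """
        arrays = {}
        for line in text.splitlines():
            m = re.match(r"^(md\d+)\s*:\s*(\S+)\s+(.*)$", line)
            if m:
                name, state, rest = m.groups()
                members = re.findall(r"([a-z]+\d*)\[\d+\]", rest)
                arrays[name] = (state, members)
        return arrays

    sample = """Personalities : [raid6] [raid5] [raid4]
    md127 : inactive sdb[0] sdd[2]
          5860531120 blocks super 1.2

    unused devices: <none>"""

    print(parse_mdstat(sample))
    # {'md127': ('inactive', ['sdb', 'sdd'])}
    ```

    Applied to the output above, it confirms the two symptoms at a glance: the array is inactive, and only sdb and sdd are listed as members.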


    Many thanks for the reply
    Chewie

    • Official post

    Did you stop the array before trying the assemble command? What is the output of cat /etc/mdadm/mdadm.conf? Are you sure /dev/sdc is the new drive?


  • Yes, I did stop the array. It was giving me messages saying that /dev/sdb was busy when I tried to assemble with the array running.


    mdadm.conf:


    I should point out that when I first started looking into this, the last line was not in mdadm.conf. I added it by running:

    Code
    mdadm --examine --scan >> /etc/mdadm/mdadm.conf
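    (For reference, the line that --examine --scan appends has this general shape; the UUID and name below are invented for illustration, not taken from this thread:)

    ```
    ARRAY /dev/md/127 metadata=1.2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd name=nas:127
    ```

    On Debian-based systems such as OMV it is usually worth running update-initramfs -u after editing mdadm.conf, so the copy of the file inside the initramfs stays in sync.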


    Cheers

  • Oh, and I forgot to say: I am quite sure that sdc is the new drive, as I can see in the webgui that it is from a different manufacturer than the other disks.


    Chewie

    • Official post

    Stop the array again (mdadm --stop /dev/mdX) and try mdadm --assemble --scan to see if you can fix it, or at least start it as a degraded array.


    • Official post

    So, what is the output of: cat /proc/mdstat (looking at the output of that command is helpful after every mdadm step).


    • Official post

    mdadm --stop /dev/md127
    mdadm --assemble --force /dev/md127 /dev/sdb /dev/sdd
    (you might have to add the word missing after sdd)
    cat /proc/mdstat
    If that starts it as a degraded array, then
    mdadm --add /dev/md127 /dev/sdc
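    (As an illustration of the "started as a degraded array" check in the step above: the "[3/2] [U_U]" fields that mdstat prints for a running array can be decoded as follows. This is a made-up helper, not an mdadm tool.)

    ```python
    import re

    def raid_health(status_line):
        """Decode the "[3/2] [U_U]" part of a /proc/mdstat status line.

        [total/active] counts member devices; in the bracketed map,
        'U' marks an up device and '_' a missing one.  Sketch only.
        """
        m = re.search(r"\[(\d+)/(\d+)\]\s+\[([U_]+)\]", status_line)
        if not m:
            return None
        total, active, disk_map = int(m.group(1)), int(m.group(2)), m.group(3)
        return {
            "total": total,
            "active": active,
            "degraded": active < total,
            "missing_slots": [i for i, c in enumerate(disk_map) if c == "_"],
        }

    line = "5860530176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]"
    print(raid_health(line))
    # {'total': 3, 'active': 2, 'degraded': True, 'missing_slots': [1]}
    ```

    A result of 2 active out of 3 with one "_" slot means the array is running degraded, which is exactly the state you want to see before re-adding the replacement drive.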


  • Thanks ever so much, ryecoaaron, that seems to have sorted it.


    After running the assemble (didn't need the word missing) I got this:

    Code
    mdadm: Marking array /dev/md127 as 'clean'
    mdadm: /dev/md127 has been started with 2 drives (out of 3).


    cat /proc/mdstat gave me:

    Code
    Personalities : [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sdb[0] sdd[2]
          5860530176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
    
    
    unused devices: <none>


    I can now see the array in the webgui.


    I added the new drive and now I can see in the webgui that it is recovering and cat /proc/mdstat gives me this:


    Code
    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdc[3] sdb[0] sdd[2]
          5860530176 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
          [>....................]  recovery =  1.9% (58056836/2930265088) finish=290.4min speed=164804K/sec
    
    
    unused devices: <none>
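    (The recovery line above also lets you sanity-check mdadm's own finish estimate by hand: remaining blocks divided by the reported speed. A quick sketch using the figures from that output; the variable names are mine, and I am assuming the positions are in 1K blocks as the K/sec speed suggests.)

    ```python
    # Figures taken from the mdstat recovery line above.
    done_kb = 58056836       # blocks recovered so far
    total_kb = 2930265088    # blocks per member to recover
    speed_kb_s = 164804      # reported speed in K/sec

    remaining_s = (total_kb - done_kb) / speed_kb_s
    print(f"estimated finish: {remaining_s / 60:.1f} min")
    # estimated finish: 290.5 min
    ```

    That lands within a tenth of a minute of the finish=290.4min mdstat reported, so the estimate is internally consistent.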


    Once the recovery is finished I shall be backing up the important stuff :)


    Thank you so much for your help
    Chewie

    • Official post

    Glad it worked :) If that last post didn't work, I would have had some bad news for you...


  • Yeah, I was getting a bit nervous.


    I notice that the webgui is showing the array is not mounted. I assume it is best to leave it like that until the recovery is finished?


    Chewie

    • Official post

    Yep, I wouldn't do anything until it is done syncing. Then mount the filesystem.

