Raid + Filesystem missing after failed drive

  • Hi all,


    After a drive failure my RAID configuration is gone and my filesystem has the status 'Missing'. I've come across this thread from a user experiencing something similar; however, I would like some advice on how to proceed.


    Some additional information as requested here: Degraded or missing raid array questions


    cat /proc/mdstat

    Code
    root@blackhole:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdb[1](S) sde[2](S) sda[0](S)
    2929891464 blocks super 1.2
    
    
    unused devices: <none>
    root@blackhole:~#



    blkid

    Code
    root@blackhole:~# blkid
    /dev/sda: UUID="f3d673d4-6ec1-22df-2817-7bcfbabef850" UUID_SUB="9dde3400-8912-4b48-4cae-fbc5615c0639" LABEL="blackhole:blackdatafive" TYPE="linux_raid_member"
    /dev/sde: UUID="f3d673d4-6ec1-22df-2817-7bcfbabef850" UUID_SUB="ede58e22-3571-eff1-cea6-bf408092cc7f" LABEL="blackhole:blackdatafive" TYPE="linux_raid_member"
    /dev/sdb: UUID="f3d673d4-6ec1-22df-2817-7bcfbabef850" UUID_SUB="ac59d1b4-66c2-fb9e-bd77-a292bf33e996" LABEL="blackhole:blackdatafive" TYPE="linux_raid_member"
    /dev/sdc1: UUID="a5c3c40c-49ec-485a-902e-daed29e0c8ca" TYPE="ext4" PARTUUID="283b6343-01"
    /dev/sdc5: UUID="d8d2ee6f-9711-4b1b-ab64-aa899c710392" TYPE="swap" PARTUUID="283b6343-05"


    fdisk -l | grep "Disk "



    cat /etc/mdadm/mdadm.conf



    mdadm --detail --scan --verbose

    Code
    root@blackhole:~# mdadm --detail --scan --verbose
    INACTIVE-ARRAY /dev/md0 num-devices=3 metadata=1.2 name=blackhole:blackdatafive UUID=f3d673d4:6ec122df:28177bcf:babef850   devices=/dev/sda,/dev/sdb,/dev/sde



    I am running OMV 5.6.26-1 (Usul)


    For replacement of the drive that has currently failed (still connected), I added /dev/sdd.

    The configuration failed while the system was running; there was no loss of power as far as I know.


    How can I recover this RAID configuration and filesystem?

    • Official post

    The output makes sense but this doesn't -> For replacement of the drive that has currently failed (still connected), I added /dev/sdd.


    The other problem is the spares=1 entry in the mdadm.conf file.


    So;


    mdadm --stop /dev/md0


    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abe]


    That should restart the array in a clean/degraded state. When it's up and running, post the output of the following:


    mdadm --detail /dev/md0
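
    Regarding the stale spares=1 entry: once the array is running again, the ARRAY line in /etc/mdadm/mdadm.conf can be refreshed from the live array. A rough sketch (assuming the array is /dev/md0 on a standard OMV/Debian setup; back up the file before editing):

    Code
    # Back up the current config before touching it
    cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
    # Print the ARRAY line as mdadm sees the running array
    mdadm --detail --scan
    # Replace the old ARRAY line for /dev/md0 in /etc/mdadm/mdadm.conf with that output,
    # then rebuild the initramfs so the system boots with the corrected definition
    update-initramfs -u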

  • The output makes sense but this doesn't -> For replacement of the drive that has currently failed (still connected), I added /dev/sdd.


    I meant that the drive I added to replace the broken drive (the one not showing up) is listed as /dev/sdd.
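
    For reference, here is a rough sketch of how that replacement disk could be attached from the command line once /dev/md0 is up and degraded. This assumes the replacement really is /dev/sdd and holds no data you need, since the wipe is destructive:

    Code
    # Double-check which disk is the replacement before wiping anything
    fdisk -l | grep "Disk "
    # Clear any leftover signatures on the replacement disk (destructive!)
    wipefs -a /dev/sdd
    # Add it to the degraded array; the rebuild starts automatically
    mdadm --add /dev/md0 /dev/sdd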





    I ran the stop/assemble commands and now my RAID configuration is showing again. It's currently recovering, so that's great! Thanks for the help and explanation. ^^
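
    For anyone following along, the rebuild progress can be watched with something like this (just a sketch, assuming the array is /dev/md0):

    Code
    # Live view of the resync progress
    watch -d cat /proc/mdstat
    # Or check the array state and rebuild percentage directly
    mdadm --detail /dev/md0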

  • Hi, I need help. I have the same issue, but with only two disks. One has completely vanished from the system.


    When I run

    mdadm --assemble --force --verbose /dev/md0 /dev/sda

    I get the error: device /dev/sda is busy - skipping


    Does anyone have an idea?
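
    A sketch based on the steps earlier in this thread, assuming the array is /dev/md0 and the surviving member is /dev/sda: the "busy" error usually means the disk is still claimed by the inactive array, so the array has to be stopped before it can be reassembled.

    Code
    # See which (inactive) array currently claims the disk
    cat /proc/mdstat
    # Release the member by stopping the array, then force-assemble from the surviving disk
    mdadm --stop /dev/md0
    mdadm --assemble --force --verbose /dev/md0 /dev/sda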
