Missing Raid

  • After a recent power outage I noticed my RAID5 filesystem is missing. On boot I saw an "mdadm: failed to add /dev/sd* to /dev/md0: Invalid argument" error. I had a RAID5 configuration utilizing four 1 TB drives. Any guidance would be appreciated.



    Here are my outputs:

    Code
    root@falco:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    unused devices: <none>

    Code
    root@falco:~# blkid
    /dev/sdc: UUID="d1d88067-1e3d-ec20-2933-c553a9e2a9c9" UUID_SUB="04eb6c15-c297-b926-a05b-bb284e599bf8" LABEL="falco:NAS1" TYPE="linux_raid_member"
    /dev/sdb: UUID="d1d88067-1e3d-ec20-2933-c553a9e2a9c9" UUID_SUB="ee5a8d89-7f55-8562-73a1-11fe1c4b2794" LABEL="falco:NAS1" TYPE="linux_raid_member"
    /dev/sda: UUID="d1d88067-1e3d-ec20-2933-c553a9e2a9c9" UUID_SUB="556f3075-06b6-571e-46e0-988aa5f550ee" LABEL="falco:NAS1" TYPE="linux_raid_member"
    /dev/sdd: UUID="d1d88067-1e3d-ec20-2933-c553a9e2a9c9" UUID_SUB="95d90508-886a-acc3-3f97-790059e832f6" LABEL="falco:NAS1" TYPE="linux_raid_member"
    /dev/sde1: UUID="5c2ec262-d826-4d30-b71e-0bba99b12cfd" TYPE="ext4" PARTUUID="275cdeef-01"
    /dev/sde5: UUID="fde9232e-e4bf-4fd4-ba0b-e2c5451963ad" TYPE="swap" PARTUUID="275cdeef-05"


    Code
    root@falco:~# fdisk -l | grep "Disk "
    Disk /dev/sdc: 931.5 GiB, 1000203804160 bytes, 1953523055 sectors
    Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
    Disk /dev/sda: 931.5 GiB, 1000203804160 bytes, 1953523055 sectors
    Disk /dev/sdd: 931.5 GiB, 1000203804160 bytes, 1953523055 sectors
    Disk /dev/sde: 7.5 GiB, 8004304896 bytes, 15633408 sectors
    Disk identifier: 0x275cdeef


    Code
    root@falco:~# mdadm --detail --scan --verbose
    root@falco:~#
  • Here is some additional info if it helps.


    Code
    root@falco:~# mdadm --assemble --run --force /dev/md0 /dev/sd[bdc]
    mdadm: failed to add /dev/sdc to /dev/md0: Invalid argument
    mdadm: failed to add /dev/sdd to /dev/md0: Invalid argument
    mdadm: failed to RUN_ARRAY /dev/md0: Input/output error
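A useful next diagnostic step here (not shown in the thread itself) is to dump the md superblock from each member with `mdadm --examine`. On a healthy array all four members report the same "Array UUID" and closely matching "Events" counts; a member whose superblock is damaged errors out instead, which narrows down exactly which drives mdadm is rejecting:

```shell
# Dump the v1.2 md superblock from each RAID member (device names
# assumed to match the thread). Healthy members show matching
# "Array UUID" values and consistent event counts; a member with a
# damaged superblock reports an error instead of metadata.
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    echo "== $d =="
    mdadm --examine "$d"
done
```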



  • @ness1602
    That is what I'm afraid of, but looking at the smartctl data it doesn't show any failures unless I missed something. Here is the output for each of the raid disks.

  • @geaves


    Thanks for catching my typo. Here is the output from the request:


    Code
    root@falco:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    unused devices: <none>

    Thanks for looking at this and trying to help me out!

    I attached the dmesg output due to the character limit.



    As a side note, using smartctl I ran extended offline tests against all the raid disks and they all completed without error.
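For reference, the extended tests mentioned above can be run and checked along these lines (device names assumed to match the thread):

```shell
# Start a long (extended) offline self-test on each RAID member.
# The test runs on the drive itself and does not block the shell.
for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
    smartctl -t long "$d"
done

# A 1 TB drive typically needs a couple of hours. Afterwards, read each
# drive's self-test log and look for "Completed without error":
smartctl -l selftest /dev/sda
```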

    OK, here's the output from dmesg showing what's stopping the RAID from assembling:


    Code
    [    2.449916] md: md0 stopped.
    [    2.452127] md: bind<sdb>
    [    2.453790] md: sdc does not have a valid v1.2 superblock, not importing!
    [    2.453805] md: md_import_device returned -22
    [    2.454074] md: sdd does not have a valid v1.2 superblock, not importing!
    [    2.454090] md: md_import_device returned -22
    [    2.454422] md: bind<sda>

    As you can see, the problem is that sdc and sdd are missing their superblocks. There should be a way of correcting it, but that will need some research -> I'm in the middle of cooking dinner, sorry, but I will come back :thumbup:
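For context, the commonly cited last resort for members with destroyed superblocks is to recreate the array in place with `--assume-clean`, which rewrites only the metadata and leaves the data blocks alone. This is a generic sketch, not necessarily the exact fix used later in the thread, and it is dangerous: the level, metadata version, chunk size (512K here is an assumption), and above all the device order must match the original array exactly (confirm them with `mdadm --examine` on a surviving member) or the data will be scrambled:

```shell
# DANGEROUS last-resort sketch: recreate the array metadata in place.
# --assume-clean skips the initial resync so existing data blocks are
# not touched, but every parameter below must match the original array.
# Level/device count are from the thread; chunk size and device order
# are assumptions that must be verified against a surviving superblock.
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      --metadata=1.2 --chunk=512 --assume-clean \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd
```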


    No worries geaves, I appreciate any time you can give. I'll start digging around as well and see if I can find anything now that I have something to pivot on.

    This could be the end of your RAID. I found this, and this one is worse than yours 8o but they seemed to solve the problem by cloning :?:


    This one also looked useful but tbh I got lost in the information supplied.

  • Well, I took a look at those posts and did a bit of digging as well. I found this post and it seemed almost too quick and to the point to work, but I figured I'd give it a shot since it seemed all doom and gloom at this point.



    I about fell out of my chair when the raid came back up clean. I've been able to read/write to it with no issues, so I'll keep an eye on it and report back if anything changes. Thanks for the help!
