Posts by StaticNoise

    Thanks! It looks like the drive failed (probably dropped offline) and then, during the reboot, the auto-assembly generated a goofy setup?


    I was a little hesitant about commands that would overwrite the RAID configuration. But for future reference, this is what I ran:


    mdadm --stop /dev/md0


    mdadm --assemble /dev/md0 /dev/sd[bde]


    mdadm --run /dev/md0

    (I would note that I probably wouldn't have needed this command if I had run the assemble command with the '--force' option.)
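
    For anyone finding this later, I think those steps collapse into a single assemble call, something like this (untested on my end, and assuming the same array and member names as above):

    Code: # untested sketch, assumes md0 and the same members sdb, sdd, sde
    mdadm --stop /dev/md0
    mdadm --assemble --run --force /dev/md0 /dev/sd[bde]
    # --run starts the array even though a member is missing; --force tolerates slightly stale metadata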


    This got the RAID array back up in a "clean, degraded" state. After mounting it in the webUI, I was able to access the shared file systems again. The RAID array and file system persist through reboots. (I'm going to assume the persistence is because the failed hard drive is no longer part of the array.)
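
    In case it's useful to anyone, the state can also be confirmed from the command line (same device name as above):

    Code: cat /proc/mdstat                      # md0 should show as active with 3 of its 4 members
    mdadm --detail /dev/md0 | grep -i state     # reports "clean, degraded" here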


    I'm waiting on a replacement drive, but I'm glad I don't have to go back and restore the data from the original locations. I think there are plenty of resources on adding a replacement drive to a degraded RAID array. Again, thanks for the help!
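
    Once the new drive arrives, I expect the re-add to look roughly like this (the device name is a guess until the disk is actually installed):

    Code: # hypothetical, the replacement may enumerate under a different name than /dev/sdc
    mdadm --manage /dev/md0 --add /dev/sdc      # add the replacement; the rebuild starts automatically
    cat /proc/mdstat                            # watch the recovery progress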

    Code: cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdb[0](S) sdd[2](S) sde[3](S)
          11720659464 blocks super 1.2

    I don't understand why the Raid Level for md0 shows as raid0 when I know it was set up as raid5. The three functioning drives all show Raid Level raid5.
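
    (The per-drive level is read from the member superblocks, e.g.:)

    Code: mdadm --examine /dev/sdb | grep -i 'raid level'    # shows "Raid Level : raid5" on the surviving members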

    I set up a RAID 5 array using 4 disks on a NAS I use for manual backups. The disks included are sdb, sdc, sdd, and sde. I noticed the NAS was not working, and debugging in the webUI showed the RAID array to be missing and sdc to have bad sectors.
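
    (For context, a 4-disk RAID 5 like this corresponds to roughly the following mdadm create call. I'm including it for reference only; the exact options used originally may have differed, and it should not be re-run against an existing array:)

    Code: # reference only, do NOT run this against an existing array, it writes new member superblocks
    mdadm --create /dev/md/mainarray --level=5 --raid-devices=4 /dev/sd[bcde]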


    Digging a little further, I've noted the following:


    /etc/mdadm/mdadm.conf references /dev/md0

    ARRAY /dev/md0 metadata=1.2 name=SPOT.local:mainarray UUID=53d92414:cab02b2d:ac3cc183:72749370


    and mdadm examination of any of the disks shows

    ARRAY /dev/md/mainarray metadata=1.2 UUID=53d92414:cab02b2d:ac3cc183:72749370 name=SPOT.local:mainarray


    /dev/md/mainarray does not exist


    /dev/md0 references a RAID 0 array with only sdb, sdd, and sde
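
    (For anyone wanting to reproduce these checks, something along these lines shows the mismatch:)

    Code: cat /etc/mdadm/mdadm.conf    # the configured ARRAY line (references /dev/md0)
    mdadm --examine --scan             # the ARRAY line derived from the on-disk superblocks (/dev/md/mainarray)
    cat /proc/mdstat                   # what the kernel actually assembled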


    I'm not super familiar with mdadm and Linux software RAID arrays. Did the failure of sdc lead to the configuration file using /dev/md0 in place of /dev/md/mainarray?


    How do I restart the array using the remaining disks?


    attached:

    raid.status (mdadm --examine /dev/sd[a-e] >> raid.status)

    dmesg.log

    mdadm.conf