I set up a RAID 5 array using 4 disks on a NAS I use for manual backups. The disks are sdb, sdc, sdd, and sde. I noticed the NAS was not working, and debugging in the web UI showed the RAID array as missing and sdc as having bad sectors.
Digging a little further, I've noted the following:
/etc/mdadm/mdadm.conf references /dev/md0:
ARRAY /dev/md0 metadata=1.2 name=SPOT.local:mainarray UUID=53d92414:cab02b2d:ac3cc183:72749370
and mdadm examination of any of the member disks shows:
ARRAY /dev/md/mainarray metadata=1.2 UUID=53d92414:cab02b2d:ac3cc183:72749370 name=SPOT.local:mainarray
/dev/md/mainarray does not exist
/dev/md0 comes up as a RAID 0 array containing only sdb, sdd, and sde (see the commands just below for how I'm checking this)
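In case it helps, here's a minimal way to reproduce what I'm seeing, assuming I'm using these commands correctly:

    mdadm --examine --scan      # prints the ARRAY /dev/md/mainarray line quoted above
    cat /proc/mdstat            # md0 shows up here with only sdb, sdd, and sde
    mdadm --detail /dev/md0     # reports the level as RAID 0 rather than RAID 5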
I'm not super familiar with mdadm and Linux RAID arrays. Did the failure of sdc lead to the configuration file using /dev/md0 in place of /dev/md/mainarray?
How do I restart the array using the remaining disks?
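My best guess from the mdadm man page is something like the sketch below, but I haven't run anything yet because I don't want to make the situation worse. The --run flag (to start the array degraded, with sdc missing) is my assumption:

    mdadm --stop /dev/md0                                        # stop the wrongly-assembled array
    mdadm --assemble --run /dev/md0 /dev/sdb /dev/sdd /dev/sde   # reassemble degraded from the 3 good disks
    cat /proc/mdstat                                             # confirm md0 comes back as raid5

Is that the right approach, or do I need --force here (and is either safe for the data)?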
Attached:
raid.status (mdadm --examine /dev/sd[a-e] >> raid.status)
dmesg.log
mdadm.conf