RAID Array Missing

  • I set up a RAID 5 array using 4 disks on a NAS I use for manual backups. The disks included are sdb, sdc, sdd, and sde. I noticed the NAS was not working, and debugging in the webUI showed the RAID array as missing and sdc as having bad sectors.

    Digging a little further I've noted the following:

    /etc/mdadm/mdadm.conf references /dev/md0

    ARRAY /dev/md0 metadata=1.2 name=SPOT.local:mainarray UUID=53d92414:cab02b2d:ac3cc183:72749370

    and mdadm examination of any of the disks shows

    ARRAY /dev/md/mainarray metadata=1.2 UUID=53d92414:cab02b2d:ac3cc183:72749370 name=SPOT.local:mainarray

    /dev/md/mainarray does not exist

    /dev/md0 references a RAID 0 array with only sdb, sdd, and sde

    I'm not super familiar with mdadm and linux arrays. Did the failure of sdc lead to the configuration file using /dev/md0 in place of /dev/md/mainarray?

    How do I restart the array using the remaining disks?


    raid.status (mdadm --examine /dev/sd[a-e] >> raid.status)



  • You'll need to SSH into OMV as root and run the following two commands, then copy and paste the output using </> on the menu:

    cat /proc/mdstat

    mdadm --detail /dev/md0
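
    As a side note on reading that output: a minimal sketch of how to spot the two warning signs in /proc/mdstat — an "inactive" array and members flagged as spares "(S)". The sample text in the heredoc mirrors the output posted later in this thread; on a live system you would read /proc/mdstat directly instead of a saved copy.

    ```shell
    #!/bin/sh
    # Sketch: scan a saved copy of /proc/mdstat for inactive arrays and
    # spare-flagged members. The sample below mirrors the output posted
    # in this thread; on a live system, read /proc/mdstat directly.
    cat > mdstat.sample <<'EOF'
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdb[0](S) sdd[2](S) sde[3](S)
          11720659464 blocks super 1.2
    EOF

    # Any "inactive" line means the array has not been started.
    grep -n 'inactive' mdstat.sample

    # "(S)" after a member means mdadm currently treats it as a spare,
    # which is typical for members of an array that failed to assemble.
    grep -o '[a-z]*\[[0-9]*\](S)' mdstat.sample
    ```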

    Raid is not a backup! Would you go skydiving without a parachute?

  • Code: cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : inactive sdb[0](S) sdd[2](S) sde[3](S)
          11720659464 blocks super 1.2

    I don't understand why the Raid Level for md0 is showing as raid0, since I know it was set up as raid5. The three functioning drives are showing Raid Level raid5.

  • I don't understand why the Raid Level for md0 is showing as raid0 as I know it was set up as raid5

    I've seen that statement on here before :)

    Looking at the output you've posted;

    cat /proc/mdstat

    This shows the raid as inactive; that usually results from one of the two most common problems:

    a) A power failure

    b) A drive being physically removed from the array (mdadm is not hot swap)

    If a drive fails in a Raid5 whilst in use, mdadm will remove it and mark the array as clean/degraded; it will do nothing else.

    If you look at mdadm --detail and mdadm --examine, the UUID and the name of the array are identical.

    A Raid5 will allow one drive failure; a Raid0 does not.

    The norm for starting an inactive array is the following:

    mdadm --stop /dev/md0

    mdadm --assemble --force --verbose /dev/md0 /dev/sd[bde]

    I have only included the 3 drives from the output of cat /proc/mdstat
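
    For reference, the two steps above could be scripted along these lines. This is a sketch using the device names from this thread (/dev/md0 and the three surviving members); the DRY_RUN flag is my own addition so the commands are only printed until you are sure, since nothing here should touch the array by accident.

    ```shell
    #!/bin/sh
    # Sketch of the stop-then-force-assemble recovery from this thread.
    # Device names are the ones from this thread; adjust to your system.
    # With DRY_RUN=1 (the default here), commands are only printed.
    DRY_RUN=${DRY_RUN:-1}
    ARRAY=/dev/md0
    MEMBERS="/dev/sdb /dev/sdd /dev/sde"   # only the drives listed in /proc/mdstat

    run() {
        if [ "$DRY_RUN" = 1 ]; then
            echo "would run: $*"
        else
            "$@"
        fi
    }

    # Stop the half-assembled array, then force-assemble it from the
    # surviving members.
    run mdadm --stop "$ARRAY"
    run mdadm --assemble --force --verbose "$ARRAY" $MEMBERS
    ```

    Run it once with DRY_RUN=1 to review the exact commands, then re-run with DRY_RUN=0 as root to execute them.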


  • Thanks! It looks like the drive failed (probably offline) and then during reboot the assembly generated a goofy setup?

    I was a little hesitant about commands that would overwrite the raid configuration. But for future note I did run:

    mdadm --stop /dev/md0

    mdadm --assemble /dev/md0 /dev/sd[bde]

    mdadm --run /dev/md0

    (I probably wouldn't have needed this command if I had run the assembly command with the '--force' option.)

    This got the RAID array back up in a "clean, degraded" state. After mounting it in the webUI, I was able to access the shared file systems again. The RAID and file system are persistent through reboot. (I'm going to assume the persistence is due to the failed hard drive no longer being in the array.)

    Waiting on a replacement drive, but glad I don't have to go through and restore the data from the original locations. I think there are plenty of resources on adding a replacement drive to a degraded RAID. Again, thanks for the help!
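
    When the replacement arrives, re-adding it typically looks like the sketch below. This assumes the new disk appears as /dev/sdc like the failed one did (verify with lsblk first, since device names can shift); the same DRY_RUN guard only prints the commands until you flip it.

    ```shell
    #!/bin/sh
    # Sketch: adding a replacement drive to the degraded array.
    # Assumes the new disk shows up as /dev/sdc; verify with lsblk first.
    # With DRY_RUN=1 (the default here), commands are only printed.
    DRY_RUN=${DRY_RUN:-1}

    run() {
        if [ "$DRY_RUN" = 1 ]; then
            echo "would run: $*"
        else
            "$@"
        fi
    }

    # Add the replacement; mdadm starts rebuilding parity automatically.
    run mdadm --add /dev/md0 /dev/sdc

    # Check rebuild progress afterwards.
    run cat /proc/mdstat
    ```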
