RAID 5 array missing after failed rebuild

  • My RAID 5 array had a drive failure. I installed a new hard drive and attempted a rebuild. The rebuild failed, and I let the server sit for a day because I had run out of time to work on it. When I came back the next day, the array was missing from the webGUI. I am using 3 WD Red 3TB drives and 3 Seagate IronWolf 3TB drives. Thanks in advance!


    Here is the info requested:


  • That worked! Here's the output now.


  • I don't like the output from that fdisk :/ Also, the mdadm definitions give no information regarding the number of drives, just spares=1 :( And it shows /dev/sde as a Linux RAID member, but there's no reference to it in mdstat.


    mdadm --stop /dev/md0


    mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcdfg]


    'If' those work, it should come back up as clean/degraded. Have you a backup?
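
    The two commands above can be sketched as a small script. The DRY_RUN guard is a hypothetical addition of mine so the commands are only printed until you're ready to run them for real (as root, on the live box):

    ```shell
    #!/bin/sh
    # Sketch of the recovery steps above. With DRY_RUN=1 the commands are
    # only echoed; set DRY_RUN=0 and run as root to execute them for real.
    DRY_RUN=1
    run() {
      if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
      else
        "$@"
      fi
    }
    run mdadm --stop /dev/md0
    # Devices listed explicitly instead of the /dev/sd[bcdfg] glob so the
    # dry run prints deterministically; adjust to your own drive letters.
    run mdadm --assemble --force --verbose /dev/md0 /dev/sdb /dev/sdc /dev/sdd /dev/sdf /dev/sdg
    run cat /proc/mdstat   # should afterwards show md0 active, clean/degraded
    ```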

  • I had a backup but the backup drive died about a week ago and I haven't gotten around to replacing it. It looks like 1 drive isn't cooperating. I have another drive I can replace it with if that would help.


  • mdadm: /dev/md0 assembled from 4 drives and 1 spare - not enough to start the array.

    That's why it won't start the array: your RAID 5 has 6 drives, you replaced one, and during the rebuild something went wrong, so you only have 5 drives supposedly working within that array.

    As RAID 5 only allows for one drive failure, you've already used that option: the new drive you added failed to join the array, although it's marked as a RAID member.
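
    For context, the arithmetic behind that: RAID 5 across n drives gives (n - 1) drives' worth of usable space, with one drive's worth of parity spread across the set, and it can only ever tolerate a single failed drive at a time. A quick sketch for this 6 x 3TB array:

    ```shell
    #!/bin/sh
    # RAID 5: one drive's worth of capacity goes to parity, and only a
    # single drive failure can be tolerated at any one time.
    n=6        # drives in the array
    size_tb=3  # capacity of each drive, TB
    usable=$(( (n - 1) * size_tb ))
    echo "usable: ${usable} TB, parity overhead: ${size_tb} TB, failures tolerated: 1"
    # prints: usable: 15 TB, parity overhead: 3 TB, failures tolerated: 1
    ```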


    However, I've never come across the 'slot -1' output before; for whatever reason it's unable to read slot 0 or slot 5. This could be an issue with each slot, which would suggest the motherboard or the SATA cable/s.

  • Sounds like I'm SOL then. Kinda had a feeling when I checked the SMART status of the drives after the rebuild failed and saw a drive had recently developed bad sectors. Also I can see all 6 drives in the webGUI under physical disks.

  • Doh!! I forgot that needs the array working. This is what I was looking for;

    This is from a VM running OMV4; it tells me the three drives in the array and which ones they are. If there was a spare, it would give that information as well.


    That's no help to you; what I'm trying to ascertain is which drive is the spare and why it's being registered as a spare. Will mdadm --examine /dev/md0 output anything?
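
    A side note, hedged since I can't test on your box: mdadm --examine reads the md superblock on the member disks themselves, while mdadm --detail queries an assembled array, so with md0 down it's the members you'd want to examine. A dry-run sketch (the run wrapper is just a hypothetical guard that prints the commands instead of executing them):

    ```shell
    #!/bin/sh
    # --examine reads the md superblock on each member disk, so it works
    # even when the array itself won't start; --detail needs a running md0.
    DRY_RUN=1
    run() {
      if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
      else
        "$@"
      fi
    }
    for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg; do
      run mdadm --examine "$d"   # look at 'Events', 'Device Role', 'State'
    done
    ```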

  • root@NAS:~# mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcdefg]

    mdadm: looking for devices for /dev/md0

    mdadm: /dev/sdb is busy - skipping

    mdadm: /dev/sdc is busy - skipping

    mdadm: /dev/sdd is busy - skipping

    mdadm: /dev/sde is busy - skipping

    mdadm: /dev/sdg is busy - skipping

    mdadm: /dev/md0 is already in use.

  • Here's that:


    root@NAS:~# cat /proc/mdstat

    Personalities : [raid6] [raid5] [raid4]

    md0 : inactive sdc[1] sdb[7](S) sde[5] sdg[6] sdd[2]

    14650677560 blocks super 1.2


    I tried mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcdefg] again and got this. Also, sdb is the new drive that I replaced.


    root@NAS:~# mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcdefg]

    mdadm: looking for devices for /dev/md0

    mdadm: /dev/sdb is identified as a member of /dev/md0, slot -1.

    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.

    mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.

    mdadm: /dev/sde is identified as a member of /dev/md0, slot 4.

    mdadm: /dev/sdf is identified as a member of /dev/md0, slot 5.

    mdadm: /dev/sdg is identified as a member of /dev/md0, slot 3.

    mdadm: forcing event count in /dev/sdf(5) from 22765 upto 23272

    mdadm: clearing FAULTY flag for device 4 in /dev/md0 for /dev/sdf

    mdadm: Marking array /dev/md0 as 'clean'

    mdadm: no uptodate device for slot 0 of /dev/md0

    mdadm: added /dev/sdd to /dev/md0 as 2

    mdadm: added /dev/sdg to /dev/md0 as 3

    mdadm: added /dev/sde to /dev/md0 as 4

    mdadm: added /dev/sdf to /dev/md0 as 5

    mdadm: added /dev/sdb to /dev/md0 as -1

    mdadm: added /dev/sdc to /dev/md0 as 1

    mdadm: /dev/md0 assembled from 5 drives and 1 spare - not enough to start the array.

    root@NAS:~# cat /proc/mdstat

    Personalities : [raid6] [raid5] [raid4]

    md0 : inactive sdc[1](S) sdb[7](S) sdf[4](S) sde[5](S) sdg[6](S) sdd[2](S)

    17580813072 blocks super 1.2

  • :) This is making no sense. The line mdadm: Marking array /dev/md0 as 'clean' suggests the array has started, but then you get mdadm: /dev/md0 assembled from 5 drives and 1 spare - not enough to start the array, followed by inactive in mdstat.


    It's the 'no uptodate device for slot 0' that matters; I wonder if -1 is slot 0, which is /dev/sdb.
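
    One way to see why mdstat then leaves the array inactive: every member in that last listing is flagged (S), i.e. spare. A throwaway parse of that line (copied in here as a sample string, not read from a live /proc/mdstat):

    ```shell
    #!/bin/sh
    # Sample line copied from the mdstat output above; (S) marks a spare.
    line='md0 : inactive sdc[1](S) sdb[7](S) sdf[4](S) sde[5](S) sdg[6](S) sdd[2](S)'
    spares=""
    for tok in $line; do
      case $tok in
        *'(S)') spares="$spares ${tok%%\[*}" ;;   # strip the [n](S) suffix
      esac
    done
    echo "spares:$spares"   # all six devices, hence 'not enough to start'
    ```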


    OK, try mdadm --stop /dev/md0, then mdadm --assemble --force --verbose /dev/md0 /dev/sd[cdefg]. Also, it looks as if there could be a problem with /dev/sdf.

  • root@NAS:~# mdadm --assemble --force --verbose /dev/md0 /dev/sd[cdefg]

    mdadm: looking for devices for /dev/md0

    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.

    mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.

    mdadm: /dev/sde is identified as a member of /dev/md0, slot 4.

    mdadm: /dev/sdf is identified as a member of /dev/md0, slot 5.

    mdadm: /dev/sdg is identified as a member of /dev/md0, slot 3.

    mdadm: no uptodate device for slot 0 of /dev/md0

    mdadm: added /dev/sdd to /dev/md0 as 2

    mdadm: added /dev/sdg to /dev/md0 as 3

    mdadm: added /dev/sde to /dev/md0 as 4

    mdadm: added /dev/sdf to /dev/md0 as 5

    mdadm: added /dev/sdc to /dev/md0 as 1

    mdadm: /dev/md0 has been started with 5 drives (out of 6).


    You, sir, are a genius! It's now showing in the webGUI as clean, degraded.

  • OK, at this moment the drive /dev/sdb is the issue, so slot -1 should be slot 0. This is either a failing SATA port on the motherboard or a bad cable.


    What concerns me: we can attempt to use the WebUI to wipe that drive and then add it to the array, but that could also put you back at square one. Plus, I think /dev/sdf may be failing, or at least beginning to.
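
    For reference, the CLI equivalent of that wipe-and-re-add would be roughly the following. This is a hedged sketch, not something to run blind: zeroing the superblock on the wrong device is destructive, so the DRY_RUN guard (my addition) only prints the commands until you flip it off.

    ```shell
    #!/bin/sh
    # CAUTION: --zero-superblock is destructive; triple-check the device
    # name first. With DRY_RUN=1 nothing is executed, only printed.
    DRY_RUN=1
    run() {
      if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
      else
        "$@"
      fi
    }
    run mdadm --zero-superblock /dev/sdb   # wipe the stale member metadata
    run mdadm --add /dev/md0 /dev/sdb      # re-add; a rebuild then starts
    run cat /proc/mdstat                   # watch the recovery progress
    ```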
