RAID5 disk failure.

    • OMV 4.x
    • Ok, unplugged a drive while powered down and tried the command that ness1602 suggested. I changed it a bit to only include /dev/sdb, /dev/sdc, and /dev/sdd, as those are the remaining 3 in the array.

      Source Code

      root@openmediavault4:~# mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd
      mdadm: /dev/sdb is busy - skipping
      mdadm: /dev/sdc is busy - skipping
      mdadm: /dev/sdd is busy - skipping
      No joy there.
      I then went to the GUI under file systems where the RAID array shows up as a device and unmounted it. It changed to unmounted.
      I tried the command again with the same results.
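
      A minimal diagnostic sketch (same device names as above): the "busy" messages usually mean the inactive md0 is still claiming its member disks, which can be confirmed before trying to stop the array.

      # Check whether md0 shows up as inactive and still lists sdb/sdc/sdd
      cat /proc/mdstat
      # Read-only look at the RAID superblock on one member
      mdadm --examine /dev/sdb
      # If md0 is inactive, it has to be stopped before --assemble can grab the disks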
    • JimT wrote:

      Actually clicking on a folder to open it throws an error message that says "The share is inaccessible because a device has been removed"
      It would, because as far as Windows is concerned it's still there on the network.

      JimT wrote:

      No joy there.
      I then went to the GUI under file systems where the RAID array shows up as a device and unmounted it. It changed to unmounted.
      I tried the command again with the same results.
      Interesting that you were able to unmount it from the GUI; that must be possible due to the raid being inactive. Does a 'save configuration' prompt come up?

      What happens if you do mdadm --stop /dev/md0 and then mdadm --assemble --force /dev/md0 /dev/sd[bcd]? If that works, what's the output of cat /proc/mdstat?
      Raid is not a backup! Would you go skydiving without a parachute?
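
      Spelled out as a shell sketch (same array and devices as in this thread), the suggested sequence is:

      # Stop the inactive array so it releases the member disks
      mdadm --stop /dev/md0
      # Force-assemble from the three remaining members; sd[bcd] expands to sdb, sdc and sdd
      mdadm --assemble --force /dev/md0 /dev/sd[bcd]
      # Confirm the result; a degraded but running array shows 3 of 4 devices, e.g. [4/3]
      cat /proc/mdstat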
    • ness1602 wrote:

      When you fail one disk (mdadm based), the RAID should be active/degraded. It shouldn't be inactive at any time.
      Yes and no. If you fail a drive using mdadm, its output would be clean/degraded; if you pull a drive whilst the machine is powered down, it will come up as inactive, as @ryecoaaron confirmed yesterday. Simply stopping the raid and reassembling will bring it back up as clean/degraded.
      This can also occur if there is a power outage: one drive would fail to initialise and this 'could' also result in the array being inactive rather than clean/degraded. I've tested one and experienced the other.
      Raid is not a backup! Would you go skydiving without a parachute?
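
      For reference, "failing a drive using mdadm" as described above is done through the --manage options; the member name used here (/dev/sdd) is only an example:

      # Mark one member as faulty, then remove it from the array
      mdadm --manage /dev/md0 --fail /dev/sdd
      mdadm --manage /dev/md0 --remove /dev/sdd
      # The array state reported by --detail should now read clean, degraded
      mdadm --detail /dev/md0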
    • geaves wrote:

      ness1602 wrote:

      When you fail one disk (mdadm based), the RAID should be active/degraded. It shouldn't be inactive at any time.
      Yes and no. If you fail a drive using mdadm, its output would be clean/degraded; if you pull a drive whilst the machine is powered down, it will come up as inactive, as @ryecoaaron confirmed yesterday. Simply stopping the raid and reassembling will bring it back up as clean/degraded. This can also occur if there is a power outage: one drive would fail to initialise and this 'could' also result in the array being inactive rather than clean/degraded. I've tested one and experienced the other.
      This is consistent with what I'm seeing happen.

      geaves wrote:

      Interesting that you were able to unmount it from the GUI; that must be possible due to the raid being inactive. Does a 'save configuration' prompt come up?
      What happens if you do mdadm --stop /dev/md0 and then mdadm --assemble --force /dev/md0 /dev/sd[bcd]? If that works, what's the output of cat /proc/mdstat?
      Yes, I got a save configuration prompt.
      Found it really weird, though, that even after unmounting, when I powered down and plugged the unplugged drive back in, everything returned to normal with the RAID clean and all drives included.
      I intend to try your suggested commands this evening.

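      A quick way to confirm the array really did come back with all four members (rather than quietly running degraded) is to check its detail output:

      # State should read "clean" and all four devices should be listed as active sync
      mdadm --detail /dev/md0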

    • Powered down the machine and unplugged a drive.
      On boot, the entire RAID array was gone in the GUI, as had been the case before.
      Ran the commands below that geaves and ness1602 helped me with.
      It appears that after a forced stop command and then an assemble command, I'm good to go.
      The array showed back up in the GUI in a degraded state, which allowed me to add another drive and recover from there.

      Source Code

      root@openmediavault4:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : inactive sdb[0](S) sdd[3](S) sdc[2](S)
            556247409 blocks super 1.2
      unused devices: <none>
      root@openmediavault4:~# mdadm --stop /dev/md0
      mdadm: stopped /dev/md0
      root@openmediavault4:~# mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd
      mdadm: /dev/md0 has been started with 3 drives (out of 4).
      root@openmediavault4:~#
      This is a great learning experience and I thank you gentlemen greatly.
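
      For completeness, the "add another drive and recover" step mentioned above can also be done from the shell; the replacement device name (/dev/sde) is an assumption, not taken from this thread:

      # Add the replacement disk; mdadm starts rebuilding onto it automatically
      mdadm --add /dev/md0 /dev/sde
      # Watch the rebuild progress
      cat /proc/mdstat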