RAID 5 lost after power supply failure

    • OMV 3.x


    • Hello everyone,

      I've been using Openmediavault for nearly two years now with a RAID 5 array (3x WD Red 4 TB).

      Recently, my NAS stopped working, and I figured out the cause was the power supply (an external power supply with a PicoPSU). During troubleshooting, I also removed the mainboard and the SATA cables. I think I reconnected the SATA cables in the same order as before, but I'm not 100 % sure.

      After replacing the power supply, the system works again and OMV shows all three drives as physical drives, but no RAID.

      I've read many of the threads about problems like this, but I'm not 100 % sure what I can do without destroying something (my last backup doesn't contain all the data).

      I retrieved the following information:

      1. cat /proc/mdstat


      Shell-Script

      Personalities : [raid6] [raid5] [raid4]
      unused devices: <none>

      2. blkid

      Shell-Script

      /dev/sda: UUID="7a3a0eca-7615-3cc2-f086-e85a5cba2017" UUID_SUB="f082f604-46c3-9868-87d2-3a812ba76a6d" LABEL="openmediavault:Meins" TYPE="linux_raid_member"
      /dev/sdc: UUID="7a3a0eca-7615-3cc2-f086-e85a5cba2017" UUID_SUB="21a38f7b-9b5b-d2e1-758a-68d1d8d823c1" LABEL="openmediavault:Meins" TYPE="linux_raid_member"
      /dev/sdb1: UUID="8fcb8f05-0087-4943-849b-29809705ae97" TYPE="ext4" PARTUUID="db38b868-01"
      /dev/sdb3: UUID="06b87404-2eba-4acf-8079-24812f838995" TYPE="ext4" PARTUUID="db38b868-03"
      /dev/sdb5: UUID="2fa71961-def1-415d-83ab-c13c2def7e37" TYPE="swap" PARTUUID="db38b868-05"
      /dev/sdd: UUID="7a3a0eca-7615-3cc2-f086-e85a5cba2017" UUID_SUB="81d2fe2f-8f12-8980-6e83-c79a8d0e57ee" LABEL="openmediavault:Meins" TYPE="linux_raid_member"

      3. fdisk -l | grep "Disk "


      Shell-Script

      Disk /dev/sda: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdc: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdb: 28 GiB, 30016659456 bytes, 58626288 sectors
      Disk identifier: 0xdb38b868
      Disk /dev/sdd: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors


      4. cat /etc/mdadm/mdadm.conf


      Shell-Script

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=openmediavault:Meins UUID=7a3a0eca:76153cc2:f086e85a:5cba2017
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR [my mail address]
      MAILFROM root


      5. mdadm --detail --scan --verbose

      This command outputs nothing at all; it just drops back to an empty prompt.
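
      This is expected while no array is assembled: --detail queries active md devices, whereas --examine reads the RAID superblocks on the member disks themselves. As a read-only sketch (assuming the members are still /dev/sda, /dev/sdc and /dev/sdd, as in the blkid output above):

      Shell-Script

      # Dump each member's superblock; --examine only reads, it never writes
      mdadm --examine /dev/sda /dev/sdc /dev/sdd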

      As I don't want to destroy anything, it would be great if someone could tell me whether there's a chance to restore the RAID array and what I can try to do so. Thank you! :)


      - Edit: Additionally, OMV sent me the following mails just before the system stopped working (one for each drive):

      Source Code

      This is an automatically generated mail message from mdadm
      running on openmediavault
      A Fail event had been detected on md device /dev/md0.
      It could be related to component device /dev/sdd.
      Faithfully yours, etc.
      P.S. The /proc/mdstat file currently contains the following:
      Personalities : [raid6] [raid5] [raid4]
      md0 : active raid5 sda[0](F) sdd[2](F) sdc[1](F)
      7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/0] [___]
      bitmap: 3/30 pages [12KB], 65536KB chunk
      unused devices: <none>

      Source Code

      This is an automatically generated mail message from mdadm
      running on openmediavault
      A Fail event had been detected on md device /dev/md0.
      It could be related to component device /dev/sda.
      Faithfully yours, etc.
      P.S. The /proc/mdstat file currently contains the following:
      Personalities : [raid6] [raid5] [raid4]
      md0 : active raid5 sda[0](F) sdd[2] sdc[1](F)
      7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/1] [__U]
      bitmap: 1/30 pages [4KB], 65536KB chunk

      Source Code

      This is an automatically generated mail message from mdadm
      running on openmediavault
      A Fail event had been detected on md device /dev/md0.
      It could be related to component device /dev/sdc.
      Faithfully yours, etc.
      P.S. The /proc/mdstat file currently contains the following:
      Personalities : [raid6] [raid5] [raid4]
      md0 : active raid5 sda[0] sdd[2] sdc[1](F)
      7813774336 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
      bitmap: 1/30 pages [4KB], 65536KB chunk
      unused devices: <none>


    • Hi there,

      no answer since 10th Oct.? Uhm ...

      Did you try an "assemble" on your drives?

      mdadm --assemble /dev/md0 /dev/sda /dev/sdc /dev/sdd (tries to re-assemble the drives into a RAID array)
      - that may fail because of mismatched event counts on the drives (see the event-count check below) -> please paste the error message here
      - you can extend the command on your own once you understand the possible error message(s):
      mdadm --assemble --run /dev/md0 /dev/sda /dev/sdc /dev/sdd
      mdadm --assemble --run --force /dev/md0 /dev/sda /dev/sdc /dev/sdd

      - use with absolute caution, and only if you have backed up your data:
      mdadm --assemble --run --force --update=resync /dev/md0 /dev/sda /dev/sdc /dev/sdd
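
      A sketch of the event-count check mentioned above (read-only; device names as in the first post) - the member whose Events counter lags furthest behind is the one that dropped out of the array first, and --force essentially tells mdadm to accept that gap:

      Shell-Script

      # Compare the per-member event counters before forcing anything
      mdadm --examine /dev/sda /dev/sdc /dev/sdd | grep -E '^/dev/|Events|Update Time'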

      Sc0rp
    • Why I posted this:

      1// you answered several RAID questions within a few minutes
      2// seems like members tend to avoid RAID questions
      3// you helped me on my own RAID question
      Odroid HC2 - armbian - Seagate ST4000DM004 - OMV4.x
      Asrock Q1900DC-ITX - 16GB - 2x Seagate ST3000VN000 - Intenso SSD 120GB - OMV4.x
      :!: Backup - Solutions to common problems - OMV setup videos - OMV4 Documentation - user guide :!:
    • Re,

      macom wrote:

      1// you answered several RAID questions within a few minutes
      I'm RAID-addicted ... I've worked with md-RAID for ages :P


      macom wrote:

      2// seems like members tend to avoid RAID questions
      Since they are not responsible for the Linux software RAID stuff ... so I'll jump in (if I have time).
      Btw. RAID questions (incl. SnapRAID) are common in the German technikaffe.de forum too ;)

      macom wrote:

      3// you helped me on my own RAID question
      I'm always glad I can help, since I don't earn money with this (it's purely a hobby) :D

      Btw.: no answer from the thread creator so far ... I think this thread is dead.

      Sc0rp
    • Hey,

      thanks for your response :) No, the thread isn't dead. Sorry for not answering earlier; I just didn't find the time to try your suggestion, as I'm currently away from home most of the time and the data on the NAS isn't that time-critical (and I had almost given up on the thread ;) ). So I'm just happy that someone answered my question at all :)
      I haven't tried assembling so far, as I wasn't sure which commands I could use without risking data loss (as you said about the --update=resync command).

      I just tried
      mdadm --assemble /dev/md0 /dev/sda /dev/sdc /dev/sdd

      which results in
      mdadm: /dev/md0 assembled from 1 drive - not enough to start the array.

      Can I just try the other commands like --run or --run --force without destroying anything?
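
      One note on that error: "assembled from 1 drive" can leave an inactive /dev/md0 behind that blocks further attempts. As a sketch - stopping an array only deactivates the device node and does not write to the member disks - it can be cleared and the assembly re-run verbosely to see why each drive is rejected:

      Shell-Script

      # Deactivate the half-assembled array if /proc/mdstat shows an inactive md0
      mdadm --stop /dev/md0

      # Retry without --force; --verbose prints the per-device decisions
      mdadm --assemble --verbose /dev/md0 /dev/sda /dev/sdc /dev/sdd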
    • Re,

      Micha-el wrote:

      Can I just try the other commands like --run or --run --force without destroying anything?
      You can try, but it will fail too - I wouldn't recommend it, because your array is not in an "in sync, but not started" state ...

      What do these give:
      cat /proc/mdstat
      mdadm -D /dev/mdX (take the number (X) from the output above)?
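
      As a sketch of what to look for in those (assuming the array shows up as /dev/md0, as in your mdadm.conf):

      Shell-Script

      cat /proc/mdstat

      # -D prints state, device counts and the event counter, even for an inactive array
      mdadm -D /dev/md0 | grep -E 'State|Devices|Events'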

      Micha-el wrote:

      I figured out the cause was the power supply (an external power supply with a PicoPSU).
      Uhm, I didn't read that part ... thanks to @tkaiser for pointing me to that.

      REALLY? PicoPSU? Which one in detail? What was the power source (primary side)?

      Sc0rp