RAID 5 Missing - need help for rebuild

    • OMV 1.0
    • Resolved
    • Hi,

      Source Code

      mdadm --assemble /dev/md127 /dev/sd[bcd] --verbose --force

      I just want to say thank you for this command!
      I modified it to keep bcd only, since sda wasn't part of my RAID5.
      I don't know why, but after I moved one of the two RAID5s I had in my machine to a new build, OMV wouldn't detect my 3 HDDs as a RAID5 anymore.
      I will keep this one in my book in case it happens to me again. :)
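      For anyone reading later: the `sd[bcd]` part is ordinary shell glob expansion, so you can preview which device nodes a pattern matches before handing it to mdadm. A minimal sketch using throwaway files instead of real `/dev` nodes:

```shell
# Simulate device nodes with plain files to see how the bracket glob expands
mkdir -p /tmp/globdemo
for l in a b c d e; do touch "/tmp/globdemo/sd$l"; done

# Only sdb, sdc and sdd match the pattern; sda and sde are excluded
ls /tmp/globdemo/sd[bcd]
```

      Running `echo /dev/sd[bcd]` first is a cheap sanity check that the glob names exactly the drives you intend to assemble.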
    • Hi everybody,

      I'm so sorry for my English (I'm French), and actually I'm panicking!!!!! I've lost my RAID5 with all my pictures, so my wife is panicking too.

      I wanted to add a disk to my OMV, nothing to do with RAID5.
      I stopped OMV, then I restarted.

      The first messages were "softreset failed", then the prompt to enter maintenance mode or press Ctrl+D to continue.

      OK, OMV starts, but I can't find my RAID5.

      I ran this command: "blkid"

      Source Code

      /dev/sdc1: UUID="bc7c4f2a-667c-47ad-9aae-7877bb724ae1" TYPE="ext4"
      /dev/sdc5: UUID="ed3b823d-6149-4a1e-beeb-ba0885771be1" TYPE="swap"
      /dev/sda: UUID="f7093f38-a653-0b7f-2f4b-306f818ce6cf" LABEL="NAS:raid5" TYPE="linux_raid_member" UUID_SUB="5a0d1f21-4ca4-789f-08e7-1964e635b935"
      /dev/sdb: UUID="f7093f38-a653-0b7f-2f4b-306f818ce6cf" LABEL="NAS:raid5" TYPE="linux_raid_member" UUID_SUB="d12f04d9-f580-d30c-8395-9f693ee17b19"
      /dev/sde: UUID="f7093f38-a653-0b7f-2f4b-306f818ce6cf" LABEL="NAS:raid5" TYPE="linux_raid_member" UUID_SUB="24e72a99-3c5b-f0e5-76c1-218999a4d3d3"
      /dev/sdd1: LABEL="DD1500" UUID="40cfb129-6726-4146-8eb4-201f517c3918" TYPE="ext4 "
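      A quick way to double-check which drives belong to the array is to filter that blkid output for RAID members; all three share the same array UUID, which confirms they are pieces of one RAID5. A sketch (not from the original post), using the pasted output as input:

```shell
# Save the blkid output from above so we can filter it
cat > /tmp/blkid.out <<'EOF'
/dev/sdc1: UUID="bc7c4f2a-667c-47ad-9aae-7877bb724ae1" TYPE="ext4"
/dev/sdc5: UUID="ed3b823d-6149-4a1e-beeb-ba0885771be1" TYPE="swap"
/dev/sda: UUID="f7093f38-a653-0b7f-2f4b-306f818ce6cf" LABEL="NAS:raid5" TYPE="linux_raid_member" UUID_SUB="5a0d1f21-4ca4-789f-08e7-1964e635b935"
/dev/sdb: UUID="f7093f38-a653-0b7f-2f4b-306f818ce6cf" LABEL="NAS:raid5" TYPE="linux_raid_member" UUID_SUB="d12f04d9-f580-d30c-8395-9f693ee17b19"
/dev/sde: UUID="f7093f38-a653-0b7f-2f4b-306f818ce6cf" LABEL="NAS:raid5" TYPE="linux_raid_member" UUID_SUB="24e72a99-3c5b-f0e5-76c1-218999a4d3d3"
EOF

# List only the RAID member devices (split on ": " so LABEL="NAS:raid5" doesn't confuse the field split)
awk -F': ' '/linux_raid_member/ {print $1}' /tmp/blkid.out
```

      That prints /dev/sda, /dev/sdb and /dev/sde, so those are the three members any assemble command needs to name.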


      and this "cat /proc/mdstat"


      Source Code

      Personalities : [raid6] [raid5] [raid4]
      md126 : inactive sdb[1](S)
            1953513560 blocks super 1.2
      md127 : inactive sda[0] sde[2]
            3907027120 blocks super 1.2
      unused devices: <none>
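      That mdstat output can be summarized with a small awk one-liner (a sketch, not from the original thread) to make the problem visible: sdb ended up alone in md126 as a spare, while sda and sde sit in md127, so neither array has enough members to start:

```shell
# Save the /proc/mdstat output from above
cat > /tmp/mdstat.out <<'EOF'
Personalities : [raid6] [raid5] [raid4]
md126 : inactive sdb[1](S)
      1953513560 blocks super 1.2
md127 : inactive sda[0] sde[2]
      3907027120 blocks super 1.2
unused devices: <none>
EOF

# Show each array, its state, and the members it currently holds
awk '/^md/ {printf "%s %s:", $1, $3; for (i=4; i<=NF; i++) printf " %s", $i; print ""}' /tmp/mdstat.out
```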

      I ran this command: "mdadm --assemble /dev/md127 /dev/sd[abe] --verbose --force"


      Source Code

      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is busy - skipping
      mdadm: /dev/sdb is busy - skipping
      mdadm: /dev/sde is busy - skipping

      But it doesn't work...
      Please help me
      I found that I had to stop md126 and md127

      Source Code

      root@NAS:~# mdadm --stop /dev/md126
      mdadm: stopped /dev/md126
      root@NAS:~# mdadm --stop /dev/md127
      mdadm: stopped /dev/md127


      and then I tried to assemble them:

      Source Code

      root@NAS:~# mdadm --assemble /dev/md127 /dev/sd[ae] --verbose --force
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sde is identified as a member of /dev/md127, slot 2.
      mdadm: Marking array /dev/md127 as 'clean'
      mdadm: no uptodate device for slot 1 of /dev/md127
      mdadm: added /dev/sde to /dev/md127 as 2
      mdadm: added /dev/sda to /dev/md127 as 0
      mdadm: /dev/md127 has been started with 2 drives (out of 3).
      root@NAS:~# mdadm --assemble /dev/md126 /dev/sd[b] --verbose --force
      mdadm: looking for devices for /dev/md126
      mdadm: /dev/sdb is identified as a member of /dev/md126, slot 1.
      mdadm: Marking array /dev/md126 as 'clean'
      mdadm: no uptodate device for slot 0 of /dev/md126
      mdadm: no uptodate device for slot 2 of /dev/md126
      mdadm: added /dev/sdb to /dev/md126 as 1
      mdadm: /dev/md126 assembled from 1 drive - not enough to start the array.

      but only 2 disks are in my RAID. I can't find the right command...
    • After stopping, you need to include all three drives.

      Remember raid is not backup...
      Thanks a lot for your answers, but it doesn't work

      Source Code

      root@NAS:~# mdadm --stop /dev/md126
      mdadm: stopped /dev/md126
      root@NAS:~# mdadm --stop /dev/md127
      mdadm: stopped /dev/md127
      root@NAS:~# mdadm --assemble /dev/md127 /dev/sd[abe] --verbose --force
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sdb is identified as a member of /dev/md127, slot 1.
      mdadm: /dev/sde is identified as a member of /dev/md127, slot 2.
      mdadm: added /dev/sdb to /dev/md127 as 1 (possibly out of date)
      mdadm: added /dev/sde to /dev/md127 as 2
      mdadm: added /dev/sda to /dev/md127 as 0
      mdadm: /dev/md127 has been started with 2 drives (out of 3).
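      The key line in that log is the "(possibly out of date)" marker: sdb's event counter lagged behind the other two drives, so mdadm left it out and started the array degraded. A small sketch (assuming the log is saved to a file) that pulls out the stale member:

```shell
# Save the relevant assemble output from above
cat > /tmp/assemble.log <<'EOF'
mdadm: added /dev/sdb to /dev/md127 as 1 (possibly out of date)
mdadm: added /dev/sde to /dev/md127 as 2
mdadm: added /dev/sda to /dev/md127 as 0
mdadm: /dev/md127 has been started with 2 drives (out of 3).
EOF

# Extract the device mdadm considered stale
grep 'possibly out of date' /tmp/assemble.log | awk '{print $3}'
```

      With md127 running degraded on two drives, the usual next step (verify the superblocks with mdadm --examine first) is to re-add the stale drive so it resyncs from parity: mdadm /dev/md127 --add /dev/sdb.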
    • Hi,

      I read this thread, but I cannot understand why your RAID5 broke up - one drive was sorted out into /dev/md126, the other two stayed on md127 ... and originally all three should be on md0 :P

      But anyway, you have to:
      - remove the drive from md126
      should be [sudo] mdadm /dev/md126 --remove /dev/sdb (sudo only if you are not root ...) - per your mdstat, sdb is the drive that landed in md126

      - add that drive as a hot spare to md127 (this starts the rebuild immediately)
      should be [sudo] mdadm /dev/md127 --add /dev/sdb

      A working backup is always necessary! Beg and hope that no more drives fail while rebuilding ...

      Sc0rp


    • Same issue here.

      OMV runs as a VM in a Proxmox node.
      It has 4x4T disks passed through from the node, with a RAID5 on them.

      Suddenly, after a reboot, I can't see any RAID; the disks and the filesystem are still there in the list, but referred to as missing.

      /proc/mdstat shows no RAID.

      root@omvbackup:~/# mdadm --assemble /dev/md127 /dev/vd[abcd] --verbose --force
      mdadm: looking for devices for /dev/md127
      mdadm: Cannot assemble mbr metadata on /dev/vda
      mdadm: /dev/vda has no superblock - assembly aborted

      Then I went to RAID management, recreated the RAID from scratch with the same disks and name, and voila... the filesystem is intact with all the data there...
