Raid only 3 of 4 active HELP! State: Clean, degraded

    • OMV 1.0
    • Resolved
    • Raid only 3 of 4 active HELP! State: Clean, degraded

      Hello guys,

      I had to change my mainboard because of a failure.
      Now I have a problem with my RAID: only 3 of 4 drives are active.


      Source Code

      root@NAS:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      md126 : inactive sdb[2](S)
            2930265560 blocks super 1.2

      md127 : active raid5 sda[0] sdd[3] sde[1]
            8790795264 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UU_U]

      unused devices: <none>


      Source Code

      root@NAS:~# mdadm --detail /dev/md127
      /dev/md127:
              Version : 1.2
        Creation Time : Mon Dec 30 01:17:09 2013
           Raid Level : raid5
           Array Size : 8790795264 (8383.56 GiB 9001.77 GB)
        Used Dev Size : 2930265088 (2794.52 GiB 3000.59 GB)
         Raid Devices : 4
        Total Devices : 3
          Persistence : Superblock is persistent

          Update Time : Mon Feb 1 07:43:27 2010
                State : clean, degraded
       Active Devices : 3
      Working Devices : 3
       Failed Devices : 0
        Spare Devices : 0

               Layout : left-symmetric
           Chunk Size : 512K

                 Name : NAS:Raid5 (local to host NAS)
                 UUID : d4c16b96:4a1865bd:44b73cec:362a790e
               Events : 1816

          Number   Major   Minor   RaidDevice State
             0       8        0        0      active sync   /dev/sda
             1       8       64        1      active sync   /dev/sde
             2       0        0        2      removed
             3       8       48        3      active sync   /dev/sdd


      How can I fix it?
    • mdadm --stop /dev/md126
      mdadm --stop /dev/md127
      mdadm --assemble /dev/md127 /dev/sd[aebd] --verbose --force
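      After the forced assembly, it's worth checking whether all four members actually came back; a quick check could be (assuming the array keeps the name /dev/md127):

      cat /proc/mdstat
      mdadm --detail /dev/md127 | grep -E 'State|Devices'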
    • It is no better.

      Look at: mdadm: added /dev/sdb to /dev/md127 as 2 (possibly out of date)

      Source Code

      root@NAS:/# mdadm --assemble /dev/md127 /dev/sd[aebd] --verbose --force
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sdb is identified as a member of /dev/md127, slot 2.
      mdadm: /dev/sdd is identified as a member of /dev/md127, slot 3.
      mdadm: /dev/sde is identified as a member of /dev/md127, slot 1.
      mdadm: added /dev/sde to /dev/md127 as 1
      mdadm: added /dev/sdb to /dev/md127 as 2 (possibly out of date)
      mdadm: added /dev/sdd to /dev/md127 as 3
      mdadm: added /dev/sda to /dev/md127 as 0
      mdadm: /dev/md127 has been started with 3 drives (out of 4).
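      The "(possibly out of date)" message means the event counter in /dev/sdb's superblock is behind the other three members, so mdadm refuses to use it. One way to confirm this is to compare the Events value on each member (a quick check, assuming the members are still sda, sdb, sdd and sde):

      mdadm --examine /dev/sd[abde] | grep -E '^/dev/|Events'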
    • I would wipe /dev/sdb and then try again. I hope you have a backup. You could mount the array with 3 out of 4 drives and try to recover the data first.

      mdadm --stop /dev/md127
      dd if=/dev/zero of=/dev/sdb bs=512 count=20000
      mdadm --assemble /dev/md127 /dev/sd[aebd] --verbose --force
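      Once the array is running again on three drives, the wiped /dev/sdb would still need to be added back so the array can rebuild onto it; a minimal sketch, assuming the disk is still enumerated as /dev/sdb:

      mdadm --manage /dev/md127 --add /dev/sdb
      cat /proc/mdstat   # watch the recovery/rebuild progress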