RAID5 disappeared


    • RAID5 disappeared

      Hello,

      Sorry for my English, I'm French.

      I am running OMV 4.1.21-1 and have several disks, including a RAID 5 array of three 2 TB disks.
      One of the disks failed and I replaced it, but I cannot add the new disk in the RAID management page because the array has disappeared.

      Could you help me, please?

      Source Code

      root@NAS:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sdb[4](S) sdc[3](S)
      3907027120 blocks super 1.2
      unused devices: <none>
      root@NAS:~# mdadm --detail /dev/md127
      /dev/md127:
      Version : 1.2
      Raid Level : raid0
      Total Devices : 2
      Persistence : Superblock is persistent
      State : inactive
      Name : NAS:raid5 (local to host NAS)
      UUID : f7093f38:a6530b7f:2f4b306f:818ce6cf
      Events : 5974
      Number Major Minor RaidDevice
      - 8 32 - /dev/sdc
      - 8 16 - /dev/sdb
      root@NAS:~# mdadm --stop /dev/md127 mdadm --assemble /dev/md127 /dev/sdb /dev/sdc --run
      mdadm: --assemble would set mdadm mode to "assemble", but it is already set to "misc".
      root@NAS:~# mdadm --assemble /dev/md127 /dev/sd[bcf]
      mdadm: /dev/sdb is busy - skipping
      mdadm: /dev/sdc is busy - skipping
      mdadm: no recogniseable superblock on /dev/sdf
      mdadm: /dev/sdf has no superblock - assembly aborted
      root@NAS:~# mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcf]
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sdb is busy - skipping
      mdadm: /dev/sdc is busy - skipping
      mdadm: no recogniseable superblock on /dev/sdf
      mdadm: /dev/sdf has no superblock - assembly aborted
      Thanks in advance.
      Michael
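For anyone hitting the same errors: /dev/sdf here is the brand-new replacement disk, so it carries no md superblock yet, which is why including it makes the assemble abort. A quick way to check which devices actually carry RAID metadata before assembling (a sketch using the device names from this thread):

```shell
# Print the md superblock of each candidate device. The two old
# members show the array details; a blank replacement disk reports
# "No md superblock detected".
mdadm --examine /dev/sd[bcf]

# blkid tags array members with TYPE="linux_raid_member"; a
# factory-new disk usually prints nothing at all.
blkid /dev/sd[bcf]
```

These commands are read-only, so they are safe to run before deciding which devices to pass to --assemble.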
    • Thanks for your answers.

      Source Code

      root@NAS:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sdc[3](S) sdb[4](S)
      3907027120 blocks super 1.2
      unused devices: <none>



      Source Code

      root@NAS:~# blkid
      /dev/sda1: LABEL="Surveillance" UUID="477137df-7f7e-44b8-893b-5f2ed6c8fbd9" TYPE="ext4" PARTUUID="2049d577-639c-4b3a-9a5a-4d3034bf7b60"
      /dev/sdb: UUID="f7093f38-a653-0b7f-2f4b-306f818ce6cf" UUID_SUB="d860f486-e04a-bf13-f05a-30082711bc72" LABEL="NAS:raid5" TYPE="linux_raid_member"
      /dev/sdc: UUID="f7093f38-a653-0b7f-2f4b-306f818ce6cf" UUID_SUB="90a1f1a1-a119-3715-f728-1c6808ecbf91" LABEL="NAS:raid5" TYPE="linux_raid_member"
      /dev/sdd1: UUID="8d47b635-3f13-4a48-81b2-c7086cf50599" TYPE="ext4" PARTUUID="0007700b-01"
      /dev/sdd5: UUID="da8466fc-068e-4e77-9589-714d0ade56ee" TYPE="swap" PARTUUID="0007700b-05"
      /dev/sde1: LABEL="DD1500" UUID="40cfb129-6726-4146-8eb4-201f517c3918" TYPE="ext4" PARTUUID="782958b0-31d8-4daa-89d9-c6dbfcb247cc"

      Source Code

      root@NAS:~# fdisk -l | grep "Disk "
      Disk /dev/sda: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk identifier: 9FEA1A4A-3675-42D7-BC63-88CBB5F823C6
      Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/sdc: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/sdd: 149,1 GiB, 160041885696 bytes, 312581808 sectors
      Disk identifier: 0x0007700b
      Disk /dev/sdf: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/sde: 1,4 TiB, 1500301910016 bytes, 2930277168 sectors
      Disk identifier: BE1BC9E8-BBE0-446B-8CD4-EFC3592FF49A

      Source Code

      root@NAS:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md/raid5 metadata=1.2 name=NAS:raid5 UUID=f7093f38:a6530b7f:2f4b306f:818ce6cf

      Source Code

      root@NAS:~# mdadm --detail --scan --verbose
      INACTIVE-ARRAY /dev/md127 num-devices=2 metadata=1.2 name=NAS:raid5 UUID=f7093f38:a6530b7f:2f4b306f:818ce6cf
      devices=/dev/sdb,/dev/sdc
    • Hello.
      Thank you for your message.
      I just tried the link you gave me, but I still cannot find my RAID.

      Please, could you help me?

      Source Code

      root@NAS:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sdc[3](S) sdb[4](S)
      3907027120 blocks super 1.2
      unused devices: <none>
      root@NAS:~# $ omv-mkconf mdadm
      -bash: $ : commande introuvable
      root@NAS:~# mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcf]
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sdb is busy - skipping
      mdadm: /dev/sdc is busy - skipping
      mdadm: no recogniseable superblock on /dev/sdf
      mdadm: /dev/sdf has no superblock - assembly aborted
      root@NAS:~# mdadm --stop /dev/md127
      mdadm: stopped /dev/md127
      root@NAS:~# mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcf]
      mdadm: looking for devices for /dev/md127
      mdadm: No super block found on /dev/sdf (Expected magic a92b4efc, got 000004ea)
      mdadm: no RAID superblock on /dev/sdf
      mdadm: /dev/sdf has no superblock - assembly aborted
    • Thank you for your reply.
      Here is the output. I finally found my RAID, with 2 disks, in the openmediavault interface.
      I will try to rebuild the array with my new disk.
      Thank you very much for your answer; I'll let you know the result.

      Source Code

      root@NAS:~# mdadm --assemble --verbose --force /dev/md127 /dev/sd[bc]
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sdb is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sdc is identified as a member of /dev/md127, slot 1.
      mdadm: Marking array /dev/md127 as 'clean'
      mdadm: added /dev/sdc to /dev/md127 as 1
      mdadm: no uptodate device for slot 2 of /dev/md127
      mdadm: added /dev/sdb to /dev/md127 as 0
      mdadm: /dev/md127 has been started with 2 drives (out of 3).
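With the array now running degraded (2 of 3 drives), the usual way to rebuild onto the replacement disk is to add it to the running array so md resyncs automatically. A sketch, assuming the replacement really is /dev/sdf and holds nothing you want to keep:

```shell
# Double-check the device name first; wiping the wrong disk
# destroys its contents.
lsblk -o NAME,SIZE,TYPE,FSTYPE

# Clear any leftover signatures on the new disk (destructive).
wipefs -a /dev/sdf

# Add it to the degraded array; recovery starts automatically.
mdadm --add /dev/md127 /dev/sdf

# Follow the resync progress.
cat /proc/mdstat
```

Rebuilding a 2 TB member can take several hours; the array stays usable (but unprotected) until the resync finishes.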
    • Thank you all for your messages.

      From the beginning I almost had the solution, but in fact I was trying the command

      Source Code

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[bcf]
      when I should have run

      Source Code

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[bc]
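Once the two-disk assemble works, it can also help to refresh the mdadm configuration and the initramfs so the array comes up cleanly at boot. A sketch; on OMV 4 the config file is normally regenerated with omv-mkconf (note there is no leading "$" to type, which is what caused the "commande introuvable" error earlier in the thread):

```shell
# Regenerate /etc/mdadm/mdadm.conf the OMV 4 way.
omv-mkconf mdadm

# On plain Debian, the equivalent would be appending the scan output:
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# Rebuild the initramfs so the array is known at early boot.
update-initramfs -u
```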
      Nevertheless, I have had a big problem since then, and I have opened a new thread:
      OMV does not start anymore

      I am disgusted...


      Sincerely
      Michael