Linear RAID suddenly disappeared

    • OMV 4.x
    • Resolved
    • Linear RAID suddenly disappeared

      Hi,

      I rebooted my NAS and... I've lost my RAID. It was a linear one. All my disks seem OK. My filesystem still exists, but I can't find the RAID. When I go to the RAID management section and try to create a new one, I can't, because it doesn't find any hard drive. Any chance to get back my data and RAID? (I've got a backup, but I hope not to have to use it...)

      Thanks a lot for the help, I'm totally lost :(
    • I've found this on another topic and tried it:

      Source Code

      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : inactive sda[0] sdh[5](S) sdg[4] sdf[3] sde[2] sdc[1]
            23441325072 blocks super 1.2
      and I've launched:


      Source Code

      root@bart-nas:/sharedfolders# mdadm --assemble --scan -v
      mdadm: looking for devices for /dev/md0
      mdadm: /dev/sdh is busy - skipping
      mdadm: /dev/sdg is busy - skipping
      mdadm: /dev/sdf is busy - skipping
      mdadm: /dev/sde is busy - skipping
      mdadm: No super block found on /dev/sdd1 (Expected magic a92b4efc, got 000004ea)
      mdadm: no RAID superblock on /dev/sdd1
      mdadm: No super block found on /dev/sdd (Expected magic a92b4efc, got 00000000)
      mdadm: no RAID superblock on /dev/sdd
      mdadm: /dev/sdc is busy - skipping
      mdadm: No super block found on /dev/sdb5 (Expected magic a92b4efc, got 4617b000)
      mdadm: no RAID superblock on /dev/sdb5
      mdadm: /dev/sdb2 is too small for md: size is 2 sectors.
      mdadm: no RAID superblock on /dev/sdb2
      mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got 00000426)
      mdadm: no RAID superblock on /dev/sdb1
      mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got 2953b73d)
      mdadm: no RAID superblock on /dev/sdb
      mdadm: /dev/sda is busy - skipping
      Seems my RAID is still there... but inactive. How can I activate it again?
      And I got an error on sdb2?

      Thanks
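
      (Looking at it again, sdb seems to be my system disk, since it has partitions sdb1/sdb2/sdb5, so maybe those "no RAID superblock" messages on it are normal, and the "2 sectors" message is probably just the extended-partition entry? A quick check I could run to be sure:)

      # show sdb's partitions, sizes and mountpoints to see whether it holds the OS
      lsblk -o NAME,SIZE,TYPE,MOUNTPOINT /dev/sdb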
    • Source Code

      root@bart-nas:/sharedfolders# mdadm --examine /dev/sdc
      /dev/sdc:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x0
      Array UUID : 8efca549:86714fd7:d3d0ebc2:67a522cc
      Name : bart-nas:stockage (local to host bart-nas)
      Creation Time : Sun Sep 16 23:22:47 2018
      Raid Level : linear
      Raid Devices : 5
      Avail Dev Size : 7813775024 (3725.90 GiB 4000.65 GB)
      Used Dev Size : 0
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      Unused Space : before=262056 sectors, after=0 sectors
      State : clean
      Device UUID : 5c30d4e3:b8bca622:6fca6cd3:4c3e2663
      Update Time : Sun Sep 16 23:22:47 2018
      Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : ddf02a8f - correct
      Events : 0
      Rounding : 0K
      Device Role : Active device 1
      Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
      root@bart-nas:/sharedfolders# mdadm --examine /dev/sde
      /dev/sde:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x0
      Array UUID : 8efca549:86714fd7:d3d0ebc2:67a522cc
      Name : bart-nas:stockage (local to host bart-nas)
      Creation Time : Sun Sep 16 23:22:47 2018
      Raid Level : linear
      Raid Devices : 5
      Avail Dev Size : 11720783024 (5588.90 GiB 6001.04 GB)
      Used Dev Size : 0
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      Unused Space : before=262056 sectors, after=0 sectors
      State : clean
      Device UUID : 9138bfb0:3e339857:6a4f680a:6877b21c
      Update Time : Sun Sep 16 23:22:47 2018
      Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : b9349d62 - correct
      Events : 0
      Rounding : 0K
      Device Role : Active device 2
      Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
      root@bart-nas:/sharedfolders# mdadm --examine /dev/sdf
      /dev/sdf:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x0
      Array UUID : 8efca549:86714fd7:d3d0ebc2:67a522cc
      Name : bart-nas:stockage (local to host bart-nas)
      Creation Time : Sun Sep 16 23:22:47 2018
      Raid Level : linear
      Raid Devices : 5
      Avail Dev Size : 7813775024 (3725.90 GiB 4000.65 GB)
      Used Dev Size : 0
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      Unused Space : before=262056 sectors, after=0 sectors
      State : clean
      Device UUID : cecc2e01:c1c7212f:f68f5ef9:9ae3708b
      Update Time : Sun Sep 16 23:22:47 2018
      Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : 56023ce1 - correct
      Events : 0
      Rounding : 0K
      Device Role : Active device 3
      Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
      root@bart-nas:/sharedfolders# mdadm --examine /dev/sda
      /dev/sda:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x0
      Array UUID : 8efca549:86714fd7:d3d0ebc2:67a522cc
      Name : bart-nas:stockage (local to host bart-nas)
      Creation Time : Sun Sep 16 23:22:47 2018
      Raid Level : linear
      Raid Devices : 5
      Avail Dev Size : 7813775024 (3725.90 GiB 4000.65 GB)
      Used Dev Size : 0
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      Unused Space : before=262056 sectors, after=0 sectors
      State : clean
      Device UUID : 2c6412a4:e1069c36:d7229b73:5951bcc4
      Update Time : Sun Sep 16 23:22:47 2018
      Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : b3e813fc - correct
      Events : 0
      Rounding : 0K
      Device Role : Active device 0
      Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
      root@bart-nas:/sharedfolders# mdadm --examine /dev/sdh
      /dev/sdh:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x0
      Array UUID : 8efca549:86714fd7:d3d0ebc2:67a522cc
      Name : bart-nas:stockage (local to host bart-nas)
      Creation Time : Sun Sep 16 23:22:47 2018
      Raid Level : linear
      Raid Devices : 5
      Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 0
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      Unused Space : before=262056 sectors, after=0 sectors
      State : clean
      Device UUID : dac0067c:74771a27:64c78c0c:1801a707
      Update Time : Sun Sep 16 23:22:47 2018
      Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : 6f56ff8c - correct
      Events : 0
      Rounding : 0K
      Device Role : spare
      Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)
      root@bart-nas:/sharedfolders# mdadm --examine /dev/sdg
      /dev/sdg:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x0
      Array UUID : 8efca549:86714fd7:d3d0ebc2:67a522cc
      Name : bart-nas:stockage (local to host bart-nas)
      Creation Time : Sun Sep 16 23:22:47 2018
      Raid Level : linear
      Raid Devices : 5
      Avail Dev Size : 7813775024 (3725.90 GiB 4000.65 GB)
      Used Dev Size : 0
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      Unused Space : before=262056 sectors, after=0 sectors
      State : clean
      Device UUID : 7b92929a:90dcd1d1:42ebcccd:777bcd88
      Update Time : Sun Sep 16 23:22:47 2018
      Bad Block Log : 512 entries available at offset 72 sectors
      Checksum : 63e10a88 - correct
      Events : 0
      Rounding : 0K
      Device Role : Active device 4
      Array State : AAAAA ('A' == active, '.' == missing, 'R' == replacing)

      Source Code

      mdadm: looking for devices for /dev/md0
      mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
      mdadm: /dev/sde is identified as a member of /dev/md0, slot 2.
      mdadm: /dev/sdf is identified as a member of /dev/md0, slot 3.
      mdadm: /dev/sda is identified as a member of /dev/md0, slot 0.
      mdadm: /dev/sdg is identified as a member of /dev/md0, slot 4.
      mdadm: /dev/sdh is identified as a member of /dev/md0, slot -1.
      mdadm: added /dev/sdc to /dev/md0 as 1
      mdadm: added /dev/sde to /dev/md0 as 2
      mdadm: added /dev/sdf to /dev/md0 as 3
      mdadm: added /dev/sdg to /dev/md0 as 4
      mdadm: added /dev/sdh to /dev/md0 as -1
      mdadm: added /dev/sda to /dev/md0 as 0
      mdadm: failed to RUN_ARRAY /dev/md0: No such device or address
      root@bart-nas:/sharedfolders# mdadm --detail /dev/md0


    • bart70 wrote:

      Seems my RAID is still there... but inactive. How can I activate it again?
      And I got an error on sdb2?
      Inactive doesn't mean you will be able to get it back. Hope you have a backup. You need to stop the old array first.

      mdadm --stop /dev/md0
      mdadm --assemble --verbose --force /dev/md0 /dev/sd[ahgfec]
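
      If the assemble works, a quick sanity check (assuming the array still comes up as /dev/md0) is:

      # the array should now show as active, with its members and size listed
      cat /proc/mdstat
      mdadm --detail /dev/md0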
    • Thanks for your message. Yes, I guess it's still possible to lose it... I hope I won't, but I've got a backup; if I can avoid the restore time, that's cool ;)
      The commands gave me these results:

      root@bart-nas:/sharedfolders# mdadm --stop /dev/md0
      mdadm: stopped /dev/md0

      root@bart-nas:/sharedfolders# mdadm --assemble --verbose --force /dev/md0 /dev/sd[ahgfec]
      mdadm: looking for devices for /dev/md0
      mdadm: /dev/sda is identified as a member of /dev/md0, slot 0.
      mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
      mdadm: /dev/sde is identified as a member of /dev/md0, slot 2.
      mdadm: /dev/sdf is identified as a member of /dev/md0, slot 3.
      mdadm: /dev/sdg is identified as a member of /dev/md0, slot 4.
      mdadm: /dev/sdh is identified as a member of /dev/md0, slot -1.
      mdadm: added /dev/sdc to /dev/md0 as 1
      mdadm: added /dev/sde to /dev/md0 as 2
      mdadm: added /dev/sdf to /dev/md0 as 3
      mdadm: added /dev/sdg to /dev/md0 as 4
      mdadm: added /dev/sdh to /dev/md0 as -1
      mdadm: added /dev/sda to /dev/md0 as 0
      mdadm: failed to RUN_ARRAY /dev/md0: No such device or address
    • What is the output of: ls -al /dev/md*

      I haven't seen this error before. If there is a different node in the output from above, use that node. Otherwise, try:

      mdadm --assemble --verbose --force --update=summaries /dev/md0 /dev/sd[ahgfec]
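
      For instance, if ls shows the array under a different node such as /dev/md127 (just a hypothetical example, not taken from your output), the same stop/assemble sequence would simply target that node:

      mdadm --stop /dev/md127
      mdadm --assemble --verbose --force /dev/md127 /dev/sd[ahgfec]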
    • root@bart-nas:/sharedfolders# mdadm --stop /dev/md0
      mdadm: stopped /dev/md0


      root@bart-nas:/sharedfolders# mdadm --assemble --verbose --force /dev/md0 /dev/sd[ahgfec]
      mdadm: looking for devices for /dev/md0
      mdadm: /dev/sda is identified as a member of /dev/md0, slot 0.
      mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
      mdadm: /dev/sde is identified as a member of /dev/md0, slot 2.
      mdadm: /dev/sdf is identified as a member of /dev/md0, slot 3.
      mdadm: /dev/sdg is identified as a member of /dev/md0, slot 4.
      mdadm: /dev/sdh is identified as a member of /dev/md0, slot -1.
      mdadm: added /dev/sdc to /dev/md0 as 1
      mdadm: added /dev/sde to /dev/md0 as 2
      mdadm: added /dev/sdf to /dev/md0 as 3
      mdadm: added /dev/sdg to /dev/md0 as 4
      mdadm: added /dev/sdh to /dev/md0 as -1
      mdadm: added /dev/sda to /dev/md0 as 0
      mdadm: failed to RUN_ARRAY /dev/md0: No such device or address


      It's the same... isn't the -1 weird?
    • bart70 wrote:

      It's the same... isn't the -1 weird?
      Strange. Maybe it was a spare? How many drives are in the array? Try the commands again, but remove the h from the brackets in the assemble command.
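
      A quick way to confirm what role sdh has recorded in its superblock (your --examine output above already reports it as "spare") is:

      # print only the Device Role line from the superblock
      mdadm --examine /dev/sdh | grep -i 'device role'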
    • Looks better, I guess:

      root@bart-nas:/sharedfolders# mdadm --assemble --verbose --force /dev/md0 /dev/sd[agfec]
      mdadm: looking for devices for /dev/md0
      mdadm: /dev/sda is identified as a member of /dev/md0, slot 0.
      mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
      mdadm: /dev/sde is identified as a member of /dev/md0, slot 2.
      mdadm: /dev/sdf is identified as a member of /dev/md0, slot 3.
      mdadm: /dev/sdg is identified as a member of /dev/md0, slot 4.
      mdadm: added /dev/sdc to /dev/md0 as 1
      mdadm: added /dev/sde to /dev/md0 as 2
      mdadm: added /dev/sdf to /dev/md0 as 3
      mdadm: added /dev/sdg to /dev/md0 as 4
      mdadm: added /dev/sda to /dev/md0 as 0
      mdadm: /dev/md0 has been started with 5 drives.


      But... still no access to my data.

      Is the spare disk a free disk that's not in the RAID? (It had an (S) flag I saw in another command.)
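
      From what I can tell, the (S) flag in /proc/mdstat marks a spare, i.e. a disk attached to the array but not holding array data. Now that the array is running, I could check how mdadm counts it with something like:

      # show the device counts and spare information for the running array
      mdadm --detail /dev/md0 | grep -iE 'raid devices|spare'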
    • bart70 wrote:

      I see it in the web GUI... should I mount it? (I prefer to ask rather than break everything.)
      No, that will create a new entry. Running mount -a from the command line should mount everything.
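
      After that, a quick check that the filesystem actually mounted (assuming it sits directly on /dev/md0) would be:

      # mount everything in /etc/fstab, then confirm the array's filesystem shows up
      mount -a
      df -h | grep md0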
    • bart70 wrote:

      Can I ask you one more thing: do we know why this crashed?
      I wish I knew that answer. Sometimes it is because the system was not shut down properly. Other times, it is because the hardware spins up the drives too late. I never found the reason for a lot of these issues. While I have never had issues with RAID, I don't use it on most of my systems now (mergerfs and rsnapshot are good enough for most of my uses).
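
      One thing that sometimes helps with arrays not assembling at boot (just a suggestion, not something we did in this thread) is making sure the array definition is recorded in /etc/mdadm/mdadm.conf and the initramfs is refreshed:

      # append the current array definition and rebuild the initramfs so the array is known at boot
      mdadm --detail --scan >> /etc/mdadm/mdadm.conf
      update-initramfs -u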
    • bart70 wrote:

      I guess I can't switch to mergerfs?
      Nope. You need to format each drive to use it in this case.

      bart70 wrote:

      Should I look at rsnapshot or Greyhole?
      I use rsync to sync two OMV systems. Then I use rsnapshot on both systems. This does use a lot of space, but it's very reliable and I have lots of safety factors involved. Greyhole isn't supported as a plugin anymore.
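
      As a rough illustration (hostnames and paths below are just placeholders, not my actual setup), the sync between the two boxes can be as simple as a one-way rsync, with rsnapshot then keeping versioned snapshots on each side:

      # one-way mirror of a data share to the second OMV box; --delete keeps the copy exact
      rsync -aH --delete /srv/dev-disk-by-label-data/ backup-nas:/srv/dev-disk-by-label-backup/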