Raid5 Missing, File System Missing

    • Raid5 Missing, File System Missing

      Hello, all.

      I added a new drive (6 x 1 TB drives total now), expanded my array, and then resized the file system. I lost network connectivity during the resize and had to reboot. Now it appears I've lost my array. Below is the output requested in ryecoaaron's sticky thread:

      Source Code

      1. root@omv:~# cat /proc/mdstat
      2. Personalities : [raid6] [raid5] [raid4]
      3. md127 : inactive sdd[0] sdb[5] sda[3] sdf[2] sde[1]
      4. 4883157560 blocks super 1.2
      5. unused devices: <none>

      Source Code

      1. root@omv:~# blkid
      2. /dev/sdd: UUID="c628d2a9-0132-5816-afb8-3a2e73031b40" UUID_SUB="e072462a-09bb-7769-b0cd-88f58f9bb5ca" LABEL="omv:R5Array" TYPE="linux_raid_member"
      3. /dev/sdb: UUID="c628d2a9-0132-5816-afb8-3a2e73031b40" UUID_SUB="b0bd798e-80b4-db90-4e35-5d832793fe30" LABEL="omv:R5Array" TYPE="linux_raid_member"
      4. /dev/sde: UUID="c628d2a9-0132-5816-afb8-3a2e73031b40" UUID_SUB="ea552547-e7ab-f516-5b71-5b6a5234da89" LABEL="omv:R5Array" TYPE="linux_raid_member"
      5. /dev/sdg1: UUID="1d42ce8e-7dca-4183-8723-926eafc7c182" TYPE="ext4"
      6. /dev/sdg5: UUID="7bbc0b57-05ef-4b1b-8d19-2e1a6c71d8b5" TYPE="swap"
      7. /dev/sdf: UUID="c628d2a9-0132-5816-afb8-3a2e73031b40" UUID_SUB="bbf79cbf-98e4-2acb-37f9-2301d9fe9c1d" LABEL="omv:R5Array" TYPE="linux_raid_member"
      8. /dev/sda: UUID="c628d2a9-0132-5816-afb8-3a2e73031b40" UUID_SUB="f092b520-1794-41f6-5ca7-33251544ddd9" LABEL="omv:R5Array" TYPE="linux_raid_member"

      Source Code

      1. root@omv:~# fdisk -l | grep "Disk "
      2. Disk /dev/sda doesn't contain a valid partition table
      3. WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
      4. Disk /dev/sdd doesn't contain a valid partition table
      5. Disk /dev/sdb doesn't contain a valid partition table
      6. Disk /dev/sde doesn't contain a valid partition table
      7. Disk /dev/sdf doesn't contain a valid partition table
      8. Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
      9. Disk identifier: 0x00000000
      10. Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
      11. Disk identifier: 0x00000000
      12. Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
      13. Disk identifier: 0x00000000
      14. Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
      15. Disk identifier: 0x00000000
      16. Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
      17. Disk identifier: 0x00000000
      18. Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
      19. Disk identifier: 0x00000000
      20. Disk /dev/sdg: 500.1 GB, 500107862016 bytes
      21. Disk identifier: 0x000ab530

      Source Code

      1. root@omv:~# cat /etc/mdadm/mdadm.conf
      2. # mdadm.conf
      3. #
      4. # Please refer to mdadm.conf(5) for information about this file.
      5. #
      6. # by default, scan all partitions (/proc/partitions) for MD superblocks.
      7. # alternatively, specify devices to scan, using wildcards if desired.
      8. # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      9. # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      10. # used if no RAID devices are configured.
      11. DEVICE partitions
      12. # auto-create devices with Debian standard permissions
      13. CREATE owner=root group=disk mode=0660 auto=yes
      14. # automatically tag new arrays as belonging to the local system
      15. HOMEHOST <system>
      16. # definitions of existing MD arrays
      17. ARRAY /dev/md/R5Array metadata=1.2 name=omv:R5Array UUID=c628d2a9:01325816:afb83a2e:73031b40

      Source Code

      1. root@omv:~# mdadm --detail --scan --verbose
      2. mdadm: cannot open /dev/md/R5Array: No such file or directory

      I've rebooted a few times but do not see the array in the GUI. See the attached Physical Disks, RAID Management, and File System screenshots for more details.
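
      For context, the grow sequence I had run before losing connectivity amounts to roughly the following (from memory; the md device and disk names here are illustrative, not necessarily what my system uses):

      Source Code

      # add the new disk to the array, then reshape from 5 to 6 devices
      mdadm --add /dev/md0 /dev/sdX
      mdadm --grow /dev/md0 --raid-devices=6
      # once the reshape completes, grow the ext4 file system to fill the array
      resize2fs /dev/md0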

      Any help is GREATLY appreciated.
    • Well, a little more digging and I came up with this:

      Source Code

      1. root@omv:/# mdadm --stop /dev/md127
      2. mdadm: stopped /dev/md127

      Source Code

      1. root@omv:/# mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdef]
      2. mdadm: looking for devices for /dev/md0
      3. mdadm: no RAID superblock on /dev/sdc
      4. mdadm: /dev/sdc has no superblock - assembly aborted

      Source Code

      1. root@omv:/# mdadm --assemble --force --verbose /dev/md0 /dev/sd[abdef]
      2. mdadm: looking for devices for /dev/md0
      3. mdadm: /dev/sda is identified as a member of /dev/md0, slot 3.
      4. mdadm: /dev/sdb is identified as a member of /dev/md0, slot 4.
      5. mdadm: /dev/sdd is identified as a member of /dev/md0, slot 0.
      6. mdadm: /dev/sde is identified as a member of /dev/md0, slot 1.
      7. mdadm: /dev/sdf is identified as a member of /dev/md0, slot 2.
      8. mdadm: Marking array /dev/md0 as 'clean'
      9. mdadm: /dev/md0 has an active reshape - checking if critical section needs to be restored
      10. mdadm: added /dev/sde to /dev/md0 as 1
      11. mdadm: added /dev/sdf to /dev/md0 as 2
      12. mdadm: added /dev/sda to /dev/md0 as 3
      13. mdadm: added /dev/sdb to /dev/md0 as 4
      14. mdadm: no uptodate device for slot 5 of /dev/md0
      15. mdadm: added /dev/sdd to /dev/md0 as 0
      16. mdadm: /dev/md0 has been started with 5 drives (out of 6).

      My array showed up and I tried to mount the file system. The system has now locked up and cat /proc/mdstat doesn't return anything.

      I'm going to leave it overnight and check it again in the morning.
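
      If it's still hung in the morning, the first thing I'll grab is the kernel log, something like:

      Source Code

      # dump the tail of the kernel log to look for disk/md errors
      dmesg | tail -n 100
      dmesg > /root/dmesg.txt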
    • I restarted the box and went through the process in post #2. I now see the array in the GUI and the file system shows unmounted.

      If I click mount, how long should it take? I tried it again and it locks up: "cat /proc/mdstat" doesn't show anything, and clicking on RAID Management or File System in the GUI shows a loading button and nothing else.

      What would be the best way to proceed from here?
      Images
      • unmounted.png

      • degradedarray.png

      That is not a good sign that cat /proc/mdstat doesn't return anything. You went through the right steps to fix it. Sounds like the system isn't stable or a drive is failing. Anything in dmesg?
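
      Something along these lines should show whether one of the disks is throwing errors (smartctl needs the smartmontools package; adjust the device letters to your disks):

      Source Code

      # look for ATA/IO errors in the kernel log
      dmesg | grep -iE 'ata|error|fail|reset'
      # check SMART status of each array member
      smartctl -a /dev/sda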

      I would ask yourself two questions:
      1 - Do you really need raid? Raid isn't backup and there are other ways to pool drives.
      2 - Can you back up the array? Raid isn't backup.
      omv 4.0.11 arrakis | 64 bit | 4.13 backports kernel | omvextrasorg 4.1.0
      omv-extras.org plugins source code and issue tracker - github.com/OpenMediaVault-Plugin-Developers

      Please don't PM for support... Too many PMs!
    • ryecoaaron wrote:

      That is not a good sign that cat /proc/mdstat doesn't return anything. You went through the right steps to fix it. Sounds like the system isn't stable or a drive is failing. Anything in dmesg?

      I would ask yourself two questions:
      1 - Do you really need raid? Raid isn't backup and there are other ways to pool drives.
      2 - Can you back up the array? Raid isn't backup.
      Thanks for the reply!

      1) Going forward, I don't care what the pooling method is, but for now I need to recover the array/data if possible.
      2) I'm aware a RAID doesn't provide backups but I was in the process of consolidating everything so I could take a backup. At this point, I have none, unfortunately.

      I've made some progress in troubleshooting, I think.

      I put back in the 2 drives that were in the system when I expanded and reshaped the array. The following information is with those 2 drives installed:

      Source Code

      1. #############################################################################################
      2. root@omv:~# mdadm --examine /dev/sd*
      3. /dev/sda:
      4. Magic : a92b4efc
      5. Version : 1.2
      6. Feature Map : 0x4
      7. Array UUID : c628d2a9:01325816:afb83a2e:73031b40
      8. Name : omv:R5Array (local to host omv)
      9. Creation Time : Tue Jan 6 12:00:20 2015
      10. Raid Level : raid5
      11. Raid Devices : 6
      12. Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
      13. Array Size : 4883156480 (4656.94 GiB 5000.35 GB)
      14. Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
      15. Data Offset : 262144 sectors
      16. Super Offset : 8 sectors
      17. State : active
      18. Device UUID : f092b520:179441f6:5ca73325:1544ddd9
      19. Reshape pos'n : 577431040 (550.68 GiB 591.29 GB)
      20. Delta Devices : 1 (5->6)
      21. Update Time : Thu Aug 17 01:44:30 2017
      22. Checksum : 76b3597c - correct
      23. Events : 970956
      24. Layout : left-symmetric
      25. Chunk Size : 512K
      26. Device Role : Active device 3
      27. Array State : AAAAA. ('A' == active, '.' == missing)
      28. /dev/sdb:
      29. MBR Magic : aa55
      30. Partition[0] : 1953525167 sectors at 1 (type ee)
      31. mdadm: No md superblock detected on /dev/sdb1.
      32. mdadm: No md superblock detected on /dev/sdb2.
      33. /dev/sdc:
      34. Magic : a92b4efc
      35. Version : 1.2
      36. Feature Map : 0x4
      37. Array UUID : c628d2a9:01325816:afb83a2e:73031b40
      38. Name : omv:R5Array (local to host omv)
      39. Creation Time : Tue Jan 6 12:00:20 2015
      40. Raid Level : raid5
      41. Raid Devices : 6
      42. Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
      43. Array Size : 4883156480 (4656.94 GiB 5000.35 GB)
      44. Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
      45. Data Offset : 262144 sectors
      46. Super Offset : 8 sectors
      47. State : active
      48. Device UUID : b0bd798e:80b4db90:4e355d83:2793fe30
      49. Reshape pos'n : 577431040 (550.68 GiB 591.29 GB)
      50. Delta Devices : 1 (5->6)
      51. Update Time : Thu Aug 17 01:44:30 2017
      52. Checksum : 345c81ab - correct
      53. Events : 970956
      54. Layout : left-symmetric
      55. Chunk Size : 512K
      56. Device Role : Active device 4
      57. Array State : AAAAA. ('A' == active, '.' == missing)
      58. /dev/sdd:
      59. Magic : a92b4efc
      60. Version : 1.2
      61. Feature Map : 0x4
      62. Array UUID : c628d2a9:01325816:afb83a2e:73031b40
      63. Name : omv:R5Array (local to host omv)
      64. Creation Time : Tue Jan 6 12:00:20 2015
      65. Raid Level : raid5
      66. Raid Devices : 6
      67. Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
      68. Array Size : 4883156480 (4656.94 GiB 5000.35 GB)
      69. Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
      70. Data Offset : 262144 sectors
      71. Super Offset : 8 sectors
      72. State : active
      73. Device UUID : e072462a:09bb7769:b0cd88f5:8f9bb5ca
      74. Reshape pos'n : 577431040 (550.68 GiB 591.29 GB)
      75. Delta Devices : 1 (5->6)
      76. Update Time : Thu Aug 17 01:44:30 2017
      77. Checksum : b4a7de29 - correct
      78. Events : 970956
      79. Layout : left-symmetric
      80. Chunk Size : 512K
      81. Device Role : Active device 0
      82. Array State : AAAAA. ('A' == active, '.' == missing)
      83. /dev/sde:
      84. Magic : a92b4efc
      85. Version : 1.2
      86. Feature Map : 0x4
      87. Array UUID : c628d2a9:01325816:afb83a2e:73031b40
      88. Name : omv:R5Array (local to host omv)
      89. Creation Time : Tue Jan 6 12:00:20 2015
      90. Raid Level : raid5
      91. Raid Devices : 6
      92. Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
      93. Array Size : 4883156480 (4656.94 GiB 5000.35 GB)
      94. Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
      95. Data Offset : 262144 sectors
      96. Super Offset : 8 sectors
      97. State : active
      98. Device UUID : ea552547:e7abf516:5b715b6a:5234da89
      99. Reshape pos'n : 577431040 (550.68 GiB 591.29 GB)
      100. Delta Devices : 1 (5->6)
      101. Update Time : Thu Aug 17 01:44:30 2017
      102. Checksum : b2fbee7f - correct
      103. Events : 970956
      104. Layout : left-symmetric
      105. Chunk Size : 512K
      106. Device Role : Active device 1
      107. Array State : AAAAA. ('A' == active, '.' == missing)
      108. /dev/sdf:
      109. Magic : a92b4efc
      110. Version : 1.2
      111. Feature Map : 0x4
      112. Array UUID : c628d2a9:01325816:afb83a2e:73031b40
      113. Name : omv:R5Array (local to host omv)
      114. Creation Time : Tue Jan 6 12:00:20 2015
      115. Raid Level : raid5
      116. Raid Devices : 6
      117. Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
      118. Array Size : 4883156480 (4656.94 GiB 5000.35 GB)
      119. Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
      120. Data Offset : 262144 sectors
      121. Super Offset : 8 sectors
      122. State : active
      123. Device UUID : bbf79cbf:98e42acb:37f92301:d9fe9c1d
      124. Reshape pos'n : 577431040 (550.68 GiB 591.29 GB)
      125. Delta Devices : 1 (5->6)
      126. Update Time : Thu Aug 17 01:44:30 2017
      127. Checksum : a3422e6 - correct
      128. Events : 970956
      129. Layout : left-symmetric
      130. Chunk Size : 512K
      131. Device Role : Active device 2
      132. Array State : AAAAA. ('A' == active, '.' == missing)
      133. /dev/sdg:
      134. MBR Magic : aa55
      135. Partition[0] : 962410496 sectors at 2048 (type 83)
      136. Partition[1] : 14356482 sectors at 962414590 (type 05)
      137. mdadm: No md superblock detected on /dev/sdg1.
      138. /dev/sdg2:
      139. MBR Magic : aa55
      140. Partition[0] : 14356480 sectors at 2 (type 82)
      141. mdadm: No md superblock detected on /dev/sdg5.
      142. #############################################################################################
      143. #############################################################################################
      144. /etc/mdadm/mdadm.conf
      145. root@omv:~# cat /etc/mdadm/mdadm.conf
      146. # mdadm.conf
      147. #
      148. # Please refer to mdadm.conf(5) for information about this file.
      149. #
      150. # by default, scan all partitions (/proc/partitions) for MD superblocks.
      151. # alternatively, specify devices to scan, using wildcards if desired.
      152. # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      153. # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      154. # used if no RAID devices are configured.
      155. DEVICE partitions
      156. # auto-create devices with Debian standard permissions
      157. CREATE owner=root group=disk mode=0660 auto=yes
      158. # automatically tag new arrays as belonging to the local system
      159. HOMEHOST <system>
      160. # definitions of existing MD arrays
      161. ARRAY /dev/md/R5Array metadata=1.2 name=omv:R5Array UUID=c628d2a9:01325816:afb83a2e:73031b40
      162. #############################################################################################
      163. #############################################################################################
      164. cat /proc/mdstat
      165. root@omv:~# cat /proc/mdstat
      166. Personalities : [raid6] [raid5] [raid4]
      167. md127 : inactive sdd[0] sdc[5] sda[3] sdf[2] sde[1]
      168. 4883157560 blocks super 1.2
      169. unused devices: <none>
      170. root@omv:~#
      171. #############################################################################################
      172. #############################################################################################
      173. mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdef]
      174. root@omv:~# mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdef]
      175. mdadm: looking for devices for /dev/md0
      176. mdadm: /dev/sda is busy - skipping
      177. mdadm: Cannot assemble mbr metadata on /dev/sdb
      178. mdadm: /dev/sdb has no superblock - assembly aborted
      179. root@omv:~#
      180. #############################################################################################
      I bought a new 1 TB drive and put it in along with the original drives and the previously added "new" drive. This info is from that setup:

      Source Code

      1. #############################################################################################
      2. mdadm --examine /dev/sd*
      3. Chunk Size : 512K
      4. Device Role : Active device 1
      5. Array State : AAAAA. ('A' == active, '.' == missing)
      6. /dev/sdd:
      7. Magic : a92b4efc
      8. Version : 1.2
      9. Feature Map : 0x4
      10. Array UUID : c628d2a9:01325816:afb83a2e:73031b40
      11. Name : omv:R5Array (local to host omv)
      12. Creation Time : Tue Jan 6 12:00:20 2015
      13. Raid Level : raid5
      14. Raid Devices : 6
      15. Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
      16. Array Size : 4883156480 (4656.94 GiB 5000.35 GB)
      17. Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
      18. Data Offset : 262144 sectors
      19. Super Offset : 8 sectors
      20. State : active
      21. Device UUID : bbf79cbf:98e42acb:37f92301:d9fe9c1d
      22. Reshape pos'n : 577431040 (550.68 GiB 591.29 GB)
      23. Delta Devices : 1 (5->6)
      24. Update Time : Thu Aug 17 01:44:30 2017
      25. Checksum : a3422e6 - correct
      26. Events : 970956
      27. Layout : left-symmetric
      28. Chunk Size : 512K
      29. Device Role : Active device 2
      30. Array State : AAAAA. ('A' == active, '.' == missing)
      31. /dev/sde:
      32. Magic : a92b4efc
      33. Version : 1.2
      34. Feature Map : 0x4
      35. Array UUID : c628d2a9:01325816:afb83a2e:73031b40
      36. Name : omv:R5Array (local to host omv)
      37. Creation Time : Tue Jan 6 12:00:20 2015
      38. Raid Level : raid5
      39. Raid Devices : 6
      40. Avail Dev Size : 1953263024 (931.39 GiB 1000.07 GB)
      41. Array Size : 4883156480 (4656.94 GiB 5000.35 GB)
      42. Used Dev Size : 1953262592 (931.39 GiB 1000.07 GB)
      43. Data Offset : 262144 sectors
      44. Super Offset : 8 sectors
      45. State : active
      46. Device UUID : f092b520:179441f6:5ca73325:1544ddd9
      47. Reshape pos'n : 577431040 (550.68 GiB 591.29 GB)
      48. Delta Devices : 1 (5->6)
      49. Update Time : Thu Aug 17 01:44:30 2017
      50. Checksum : 76b3597c - correct
      51. Events : 970956
      52. Layout : left-symmetric
      53. Chunk Size : 512K
      54. Device Role : Active device 3
      55. Array State : AAAAA. ('A' == active, '.' == missing)
      56. /dev/sdf:
      57. MBR Magic : aa55
      58. Partition[0] : 962410496 sectors at 2048 (type 83)
      59. Partition[1] : 14356482 sectors at 962414590 (type 05)
      60. mdadm: No md superblock detected on /dev/sdf1.
      61. /dev/sdf2:
      62. MBR Magic : aa55
      63. Partition[0] : 14356480 sectors at 2 (type 82)
      64. mdadm: No md superblock detected on /dev/sdf5.
      65. #############################################################################################
      66. #############################################################################################
      67. /etc/mdadm/mdadm.conf
      68. # mdadm.conf
      69. #
      70. # Please refer to mdadm.conf(5) for information about this file.
      71. #
      72. # by default, scan all partitions (/proc/partitions) for MD superblocks.
      73. # alternatively, specify devices to scan, using wildcards if desired.
      74. # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      75. # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      76. # used if no RAID devices are configured.
      77. DEVICE partitions
      78. # auto-create devices with Debian standard permissions
      79. CREATE owner=root group=disk mode=0660 auto=yes
      80. # automatically tag new arrays as belonging to the local system
      81. HOMEHOST <system>
      82. # definitions of existing MD arrays
      83. ARRAY /dev/md/R5Array metadata=1.2 name=omv:R5Array UUID=c628d2a9:01325816:afb83a2e:73031b40
      84. #############################################################################################
      85. #############################################################################################
      86. cat /proc/mdstat
      87. Personalities : [raid6] [raid5] [raid4]
      88. md127 : inactive sdb[0] sda[5] sde[3] sdd[2] sdc[1]
      89. 4883157560 blocks super 1.2
      90. unused devices: <none>
      91. #############################################################################################
      92. #############################################################################################
      93. mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdef]
      94. root@omv:~# mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcdef]
      95. mdadm: looking for devices for /dev/md0
      96. mdadm: /dev/sda is busy - skipping
      97. mdadm: /dev/sdb is busy - skipping
      98. mdadm: /dev/sdc is busy - skipping
      99. mdadm: /dev/sdd is busy - skipping
      100. mdadm: /dev/sde is busy - skipping
      101. mdadm: Cannot assemble mbr metadata on /dev/sdf
      102. mdadm: /dev/sdf has no superblock - assembly aborted
      103. #############################################################################################

      I think my array is still intact. I'm a bit confused about the naming, though: is it /dev/md0, /dev/md127, or /dev/md/R5Array?
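
      (My assumption is that /dev/md/R5Array is just a named symlink to whichever kernel node the array gets assembled as, md0 or md127. Something like this should show the mapping:)

      Source Code

      # list the named md symlinks and what they point to
      ls -l /dev/md/
      # show how mdadm currently identifies the assembled array
      mdadm --detail --scan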

      At this point, I'm getting desperate. I'm willing to work with someone remotely and pay for their time.
      This is probably my last update for the night, as I think I've made progress and the array might be reshaping right now.

      I removed the 6th disk I added and ran the following:

      Source Code

      1. root@omv:~# mdadm --stop /dev/md127
      2. mdadm: stopped /dev/md127
      3. root@omv:~# mdadm --assemble --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
      4. mdadm: Marking array /dev/md127 as 'clean'
      5. mdadm: /dev/md127 has been started with 5 drives (out of 6).
      6. root@omv:~# sudo mdadm --detail /dev/md127
      7. /dev/md127:
      8. Version : 1.2
      9. Creation Time : Tue Jan 6 12:00:20 2015
      10. Raid Level : raid5
      11. Array Size : 3906525184 (3725.55 GiB 4000.28 GB)
      12. Used Dev Size : 976631296 (931.39 GiB 1000.07 GB)
      13. Raid Devices : 6
      14. Total Devices : 5
      15. Persistence : Superblock is persistent
      16. Update Time : Thu Aug 17 01:44:30 2017
      17. State : clean, degraded
      18. Active Devices : 5
      19. Working Devices : 5
      20. Failed Devices : 0
      21. Spare Devices : 0
      22. Layout : left-symmetric
      23. Chunk Size : 512K
      24. Delta Devices : 1, (5->6)
      25. Name : omv:R5Array (local to host omv)
      26. UUID : c628d2a9:01325816:afb83a2e:73031b40
      27. Events : 970956
      28. Number Major Minor RaidDevice State
      29. 0 8 32 0 active sync /dev/sdc
      30. 1 8 48 1 active sync /dev/sdd
      31. 2 8 64 2 active sync /dev/sde
      32. 3 8 0 3 active sync /dev/sda
      33. 5 8 16 4 active sync /dev/sdb
      34. 5 0 0 5 removed

      Following that, I did this:

      Source Code

      1. mkdir -p /mnt/md127
      2. mount /dev/md127 /mnt/md127

      At this point my console just has a blinking cursor as if it was doing something.

      I logged in to another session and ran cat /proc/mdstat but got the blinking cursor again.

      Before I tried to mount the array, I saw it in the GUI when I clicked on RAID Management and it showed "clean, degraded".

      After the mount attempt, I get "Loading..." when I click on RAID Management and File System.

      I'm hoping the array is reshaping and will just take some time, but I'm not sure how to confirm that other than waiting it out.
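
      If it is reshaping, I'd guess the md sysfs counters would show progress without poking the array itself, assuming they don't hang the same way /proc/mdstat does:

      Source Code

      # current sync/reshape activity and how far along it is
      cat /sys/block/md127/md/sync_action
      cat /sys/block/md127/md/sync_completed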

      Attached is the latest dmesg.
      Files
      • dmesg2.txt

    • Ok, maybe it wasn't my last post - sorry. Just thought of something...

      Is it possible that the reshape never finished [Reshape pos'n : 577431040 (550.68 GiB 591.29 GB)]? I'm thinking that if I power down, plug in the 6th drive, boot the machine, stop md127, and then run mdadm --assemble --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde, that might do the trick?
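
      In other words, roughly this (the 6th drive's letter will depend on how it enumerates after the reboot, so sdf here is just a guess):

      Source Code

      mdadm --stop /dev/md127
      mdadm --assemble --force /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde
      # re-add the 6th disk so the interrupted grow can pick up where it left off
      mdadm --add /dev/md127 /dev/sdf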
    • Well, I did just that. Once I added the new drive and added it to the array it started to resync. Once the resync finished it started to recover. After about 18 hours total, my array is now online and I have access to all my data!

      First order of business, transfer everything to backblaze and a 6TB drive!

      After that, I'll start my research into other alternatives.

      Hope this helps someone in the future!
    • QPlus7 wrote:

      First order of business, transfer everything to backblaze and a 6TB drive!
      In a word, "smart".

      I've noticed that home and small business RAID users usually fall into one of two camps.
      1. Those who have never had a problem, or minor problems, that they easily recovered from (using, perhaps, an online hot spare). They love RAID and promote it.
      2. Those who had a major problem, where they lost their entire array. These folks, almost without fail, "had an array" in the past.

      (There's a 3rd group who actually backup their arrays and are ready for a full array failure, but among home and small business users, they seem to be exceedingly rare.)

      With the sizes of drives that are available these days (up to 8TB), I don't understand why NAS users would feel the need to pool disks. Does administering a NAS become easier, somehow, with a common mount point? Even if it does, things become a bit more complicated and inconvenient when the inevitable physical disk problem crops up in the pool.
      ________________________________

      In a JBOD config - it's easy enough to divide up data folders, in a logical manner, over different physical drives. When a NAS puts shares out to the network, the source physical drive is irrelevant.

      Further, it's easy enough to Rsync network shares / folders to a local destination or a remote server - or even entire drives - without risking the quirks of running a RAID1 broken mirror. Rsync provides true backup, versus the false sense of security that users believe they're getting with RAID.
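
      For example, something along these lines keeps a share mirrored to a second local disk or to a remote box (paths and hostnames are placeholders, not my actual setup):

      Source Code

      # mirror a data folder to another local drive
      rsync -aAXv --delete /srv/dev-disk-by-label-data/share/ /srv/dev-disk-by-label-backup/share/
      # or push it to a remote backup server over SSH
      rsync -aAXv --delete /srv/dev-disk-by-label-data/share/ backupserver:/backups/share/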
      Good backup takes the "drama" out of computing
      ____________________________________
      OMV 3.0.90 Erasmus
      ThinkServer TS140, 12GB ECC / 32GB USB3.0
      4TB SG+4TB TS ZFS mirror/ 3TB TS

      OMV 3.0.81 Erasmus - Rsync'ed Backup Server
      R-PI 2 $29 / 16GB SD Card $8 / Real Time Clock $1.86
      4TB WD My Passport $119