RAID5 degraded, but SMART shows all drives 'passed' test

    • OMV 3.x
    • RAID5 degraded, but SMART shows all drives 'passed' test

      Had a bad thunderstorm in my area last night that resulted in a power outage. Once power was restored and OMV restarted, I noticed one of my two RAID 5 arrays was offline. After some fiddling, I got it back online, but with only 2 of 3 drives. SMART shows all 3 drives as 'passed'. I'm running OMV 3.0.99 with kernel 4.9.0.

      Here is the mdadm --misc --query --examine output for each drive in the array. /dev/sde is the drive excluded from the array.

      Source Code

      root@openmediavault:~# mdadm --misc --query --examine /dev/sda
      /dev/sda:
                Magic : a92b4efc
              Version : 1.2
          Feature Map : 0x0
           Array UUID : be9d6a6e:3d6f3e23:fee9a740:47a5fefb
                 Name : omvnas:Server
        Creation Time : Sun May 31 03:02:34 2015
           Raid Level : raid5
         Raid Devices : 3
       Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
           Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
        Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
          Data Offset : 262144 sectors
         Super Offset : 8 sectors
         Unused Space : before=262064 sectors, after=1200 sectors
                State : clean
          Device UUID : c61e594b:4faee165:7b4ebbcb:91866b91
          Update Time : Sat Jul 21 21:04:06 2018
             Checksum : 2f47c0cf - correct
               Events : 329308
               Layout : left-symmetric
           Chunk Size : 512K
          Device Role : Active device 2
          Array State : A.A ('A' == active, '.' == missing, 'R' == replacing)
      root@openmediavault:~# mdadm --misc --query --examine /dev/sdb
      /dev/sdb:
                Magic : a92b4efc
              Version : 1.2
          Feature Map : 0x0
           Array UUID : be9d6a6e:3d6f3e23:fee9a740:47a5fefb
                 Name : omvnas:Server
        Creation Time : Sun May 31 03:02:34 2015
           Raid Level : raid5
         Raid Devices : 3
       Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
           Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
        Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
          Data Offset : 262144 sectors
         Super Offset : 8 sectors
         Unused Space : before=262064 sectors, after=1200 sectors
                State : clean
          Device UUID : 84b9412f:42afe2e6:9f1434c4:60d18caf
          Update Time : Sat Jul 21 21:04:06 2018
             Checksum : aacb6760 - correct
               Events : 329308
               Layout : left-symmetric
           Chunk Size : 512K
          Device Role : Active device 0
          Array State : A.A ('A' == active, '.' == missing, 'R' == replacing)
      root@openmediavault:~# mdadm --misc --query --examine /dev/sde
      /dev/sde:
                Magic : a92b4efc
              Version : 1.2
          Feature Map : 0x0
           Array UUID : be9d6a6e:3d6f3e23:fee9a740:47a5fefb
                 Name : omvnas:Server
        Creation Time : Sun May 31 03:02:34 2015
           Raid Level : raid5
         Raid Devices : 3
       Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
           Array Size : 3906765824 (3725.78 GiB 4000.53 GB)
        Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
          Data Offset : 262144 sectors
         Super Offset : 8 sectors
         Unused Space : before=262064 sectors, after=1200 sectors
                State : active
          Device UUID : efb3b1e8:17e74d29:786fab7c:1fc09375
          Update Time : Wed Jul 11 17:20:58 2018
             Checksum : 2517312b - correct
               Events : 112539
               Layout : left-symmetric
           Chunk Size : 512K
          Device Role : Active device 1
          Array State : AAA ('A' == active, '.' == missing, 'R' == replacing)


      Here's the cat /proc/mdstat output:

      Source Code

      md126 : active raid5 sdd[0] sdg[2] sdf[1]
            5860270080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
            [============>........]  resync = 63.9% (1875104952/2930135040) finish=167.8min speed=104764K/sec
      md127 : active raid5 sdb[0] sda[3]
            3906765824 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
      unused devices: <none>

      blkid

      Source Code

      root@openmediavault:~# blkid
      /dev/sdb: UUID="be9d6a6e-3d6f-3e23-fee9-a74047a5fefb" UUID_SUB="84b9412f-42af-e2e6-9f14-34c460d18caf" LABEL="omvnas:Server" TYPE="linux_raid_member"
      /dev/sda: UUID="be9d6a6e-3d6f-3e23-fee9-a74047a5fefb" UUID_SUB="c61e594b-4fae-e165-7b4e-bbcb91866b91" LABEL="omvnas:Server" TYPE="linux_raid_member"
      /dev/sdd: UUID="bbd69834-ff0c-e5ad-7ae1-5eba53319b53" UUID_SUB="bda84ee5-2b28-4037-9957-61fb75a8c344" LABEL="omvnas:Media" TYPE="linux_raid_member"
      /dev/sdf: UUID="bbd69834-ff0c-e5ad-7ae1-5eba53319b53" UUID_SUB="3e1288a4-3bdb-5442-ee98-e178b5afabf1" LABEL="omvnas:Media" TYPE="linux_raid_member"
      /dev/sde: UUID="be9d6a6e-3d6f-3e23-fee9-a74047a5fefb" UUID_SUB="efb3b1e8-17e7-4d29-786f-ab7c1fc09375" LABEL="omvnas:Server" TYPE="linux_raid_member"
      /dev/sdc1: UUID="36a4a1dd-606a-4b4a-b410-4bf096d16f4c" TYPE="ext4" PARTUUID="21d0f6b2-01"
      /dev/sdc5: UUID="3a278f7f-8800-4cdb-83f8-5fd8b86d0634" TYPE="swap" PARTUUID="21d0f6b2-05"
      /dev/sdg: UUID="bbd69834-ff0c-e5ad-7ae1-5eba53319b53" UUID_SUB="b8082027-e0b5-f031-3d23-be8edf8af761" LABEL="omvnas:Media" TYPE="linux_raid_member"
      /dev/md127: LABEL="Server" UUID="9d8702ea-a1a8-4a5e-9029-40c3106dd9e9" TYPE="ext4"
      /dev/md126: LABEL="Media" UUID="8164bc97-8e41-49f5-8584-742d2e2cbbc9" TYPE="ext4"

      Finally, here is the mdadm --detail --scan --verbose output:

      Source Code

      root@openmediavault:~# mdadm --detail --scan --verbose
      ARRAY /dev/md127 level=raid5 num-devices=3 metadata=1.2 name=omvnas:Server UUID=be9d6a6e:3d6f3e23:fee9a740:47a5fefb
         devices=/dev/sda,/dev/sdb
      ARRAY /dev/md126 level=raid5 num-devices=3 metadata=1.2 name=omvnas:Media UUID=bbd69834:ff0ce5ad:7ae15eba:53319b53
         devices=/dev/sdd,/dev/sdf,/dev/sdg


      I have no idea if the drive is actually good or bad, or how to fix this issue.
    • If you believe the drive is still good, you can overwrite the first blocks of that drive and re-add it to the RAID. Your --examine output shows why it was dropped: /dev/sde has a much lower event count (112539 vs. 329308) and an older update time than the other two members, so its metadata is stale.

      The drive is currently in failed mode but still carries the old RAID header.

      So you can run dd if=/dev/zero of=/dev/sde count=10 bs=1M

      That will overwrite the first 10 MiB of the drive with zeros, wiping the stale superblock.

      After that you can re-add the disk to your array md127 and the resync will start.
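      Putting the steps together, a minimal sketch (this assumes /dev/sde is still the stale member; device letters can change between boots, so verify with blkid before wiping anything):

      ```shell
      # CAUTION: this destroys data on /dev/sde -- double-check the device name first.

      # 1. Wipe the stale RAID superblock by zeroing the first 10 MiB
      #    (mdadm --zero-superblock /dev/sde is a more targeted alternative):
      dd if=/dev/zero of=/dev/sde count=10 bs=1M

      # 2. Re-add the disk to the degraded array; the resync starts automatically:
      mdadm /dev/md127 --add /dev/sde

      # 3. Watch the rebuild progress:
      watch cat /proc/mdstat
      ```

      Expect the rebuild of a 2 TB member to take several hours, as the resync speed in your mdstat output suggests.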

      Hope that helps.
      Everything is possible, sometimes it requires Google to find out how.