Clean, degraded: missing disk that still shows up under Disks

    • Clean, degraded: missing disk that still shows up under Disks

      RAID says a drive is disconnected, but it still shows up under Disks, so I'm confused. I'm not really sure when this happened; it was storming last night, but the server is on battery backup and that never faltered. I have seven 4 TB WD Red drives.


      Source Code

      root@openmediavault:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md127 : active raid5 sdb[6] sde[7] sdc[8] sdg[3] sda[4] sdf[1]
            23441323008 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/6] [UU_UUUU]
      unused devices: <none>




      Source Code

      root@openmediavault:~# blkid
      /dev/sdh1: UUID="00891412-6ed1-43cb-8460-761ebfaf9786" TYPE="ext4" PARTUUID="f9c061ab-01"
      /dev/sdh3: LABEL="SSD Data" UUID="7a981dd3-4c7d-4f1c-9002-050b1a7a57e0" TYPE="ext4" PARTUUID="f9c061ab-03"
      /dev/sdh5: UUID="fbfec248-cf4c-4069-9fa1-49aac24c64fa" TYPE="swap" PARTUUID="f9c061ab-05"
      /dev/sdc: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="83372978-ed2d-7122-5ffd-550b979ceefc" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sdb: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="8d952e8f-dc6c-b800-3454-9ffd5769255c" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sde: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="6cf55581-47cd-73d4-a16f-48ea65219772" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sda: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="75bdbcd6-6039-80d9-d718-adce02a77dd1" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sdd: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="1e0af5dd-a4c5-8732-677e-1f167ea38a4b" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sdg: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="2ee77675-4553-2fae-7469-bf712a401962" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/md127: LABEL="Media" UUID="89bc21f5-e775-476e-8a79-13667f2f6beb" TYPE="ext4"
      /dev/sdf: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="f5a1474f-cb09-a4a8-72c1-87b2655d31e4" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      root@openmediavault:~#

      Source Code

      root@openmediavault:~# fdisk -l | grep "Disk "
      Disk /dev/sdh: 223.6 GiB, 240057409536 bytes, 468862128 sectors
      Disk identifier: 0xf9c061ab
      Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdg: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/md127: 21.9 TiB, 24003914760192 bytes, 46882646016 sectors
      root@openmediavault:~#

      Source Code

      root@openmediavault:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md127 metadata=1.2 name=openmediavault:Media UUID=7a39528f:fab66321:dc1ed267:566f9251
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR droptopfox90@gmail.com
      MAILFROM root
      root@openmediavault:~#

      Source Code

      root@openmediavault:~# mdadm --detail --scan --verbose
      ARRAY /dev/md127 level=raid5 num-devices=7 metadata=1.2 name=openmediavault:Media UUID=7a39528f:fab66321:dc1ed267:566f9251
         devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sde,/dev/sdf,/dev/sdg
      root@openmediavault:~#
      Images
      • disks.png
      • raid info.png
      • smart.png
    • According to the raid info.png image, /dev/sdd has been removed from the array, so you should be able to recover it from the GUI using Raid Management.

      Select the raid array in Raid Management and choose Remove from the menu; does /dev/sdd show in the list? If not, go to Storage -> Disks and wipe /dev/sdd (you may have to format the drive as well). Back in Raid Management, select Recover from the menu; /dev/sdd should be listed. Select it, hit OK, and you should see the raid recovering (a rough CLI equivalent is sketched below).
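
      For reference, a rough command-line equivalent of that Recover step, assuming /dev/sdd really is the dropped member and holds nothing you want to keep (device letters can change between boots, so confirm with blkid first):

      wipefs -a /dev/sdd               # clear any stale signatures on the dropped disk (destructive)
      mdadm /dev/md127 --add /dev/sdd  # re-add it; mdadm starts rebuilding onto it automatically
      cat /proc/mdstat                 # watch the resync progress
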
      Raid is not a backup! Would you go skydiving without a parachute?
    • GTvert90 wrote:

      Any idea why it would randomly be removed?
      No, not really, unless there was a problem while the raid was being accessed. If it recovers, fine, but if you get any errors then it could be anything hardware related. Running a Raid5 with 7 drives is somewhat of a risk: if 2 drives had done that, the Raid would be dead :( with no way of recovery unless you have a backup.
      Raid is not a backup! Would you go skydiving without a parachute?
    • geaves wrote:

      GTvert90 wrote:

      Any idea why it would randomly be removed?
      No, not really, unless there was a problem while the raid was being accessed. If it recovers, fine, but if you get any errors then it could be anything hardware related. Running a Raid5 with 7 drives is somewhat of a risk: if 2 drives had done that, the Raid would be dead :( with no way of recovery unless you have a backup.
      I do have another drive lying around. I can't go from RAID5 to RAID6, though, without blowing it all away and restoring from a backup, right?
    • GTvert90 wrote:

      I can't go from RAID5 to RAID6, though, without blowing it all away and restoring from a backup, right?
      You can, but I wouldn't recommend it; if it goes wrong you lose everything. Use the existing drive /dev/sdd to get the raid back to a clean state.

      If you have an additional spare drive (though that would give you 8 in the array) then you can grow the array by adding the drive; the additional drive would appear as a spare. Whilst this should work from the GUI, I'm sure I read on the forum about someone having a problem growing their raid, but that could be overcome by using the cli (see the sketch below).
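
      A minimal sketch of what the cli route would look like, assuming the extra disk turns up as /dev/sdi (a hypothetical letter) and has already been wiped:

      mdadm /dev/md127 --add /dev/sdi            # the new disk joins the array as a spare
      mdadm --grow /dev/md127 --raid-devices=8   # reshape onto 8 active devices (older mdadm may ask for a --backup-file)
      resize2fs /dev/md127                       # once the reshape finishes, grow the ext4 filesystem into the new space
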

      Personally, if this was me, I would look at reconfiguring from scratch and restoring from backup; a PIA, yes, but at least you could go down the raid 6 route.
      Raid is not a backup! Would you go skydiving without a parachute?
    • So I keep getting this email from my server

      Source Code

      This is an automatically generated mail message from mdadm
      running on openmediavault
      A DegradedArray event had been detected on md device /dev/md127.
      Faithfully yours, etc.
      P.S. The /proc/mdstat file currently contains the following:
      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md127 : active raid5 sdb[6] sde[7] sdc[8] sdg[3] sda[4] sdf[1]
            23441323008 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/6] [UU_UUUU]
      unused devices: <none>


      and this one.

      Source Code

      This is an automatically generated mail message from mdadm
      running on openmediavault
      A SparesMissing event had been detected on md device /dev/md127.
      Faithfully yours, etc.
      P.S. The /proc/mdstat file currently contains the following:
      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md127 : active raid5 sdd[9] sdb[6] sde[7] sdc[8] sdg[3] sda[4] sdf[1]
            23441323008 blocks super 1.2 level 5, 512k chunk, algorithm 2 [7/7] [UUUUUUU]
      unused devices: <none>



      Yet my RAID info shows this.
      What's going on???

      Source Code

      Version : 1.2
      Creation Time : Mon Jul 6 21:13:27 2015
      Raid Level : raid5
      Array Size : 23441323008 (22355.39 GiB 24003.91 GB)
      Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
      Raid Devices : 7
      Total Devices : 7
      Persistence : Superblock is persistent
      Update Time : Fri Jul 5 09:45:59 2019
      State : clean
      Active Devices : 7
      Working Devices : 7
      Failed Devices : 0
      Spare Devices : 0
      Layout : left-symmetric
      Chunk Size : 512K
      Name : openmediavault:Media (local to host openmediavault)
      UUID : 7a39528f:fab66321:dc1ed267:566f9251
      Events : 209912

      Number   Major   Minor   RaidDevice   State
         4       8        0        0        active sync   /dev/sda
         1       8       80        1        active sync   /dev/sdf
         9       8       48        2        active sync   /dev/sdd
         3       8       96        3        active sync   /dev/sdg
         7       8       64        4        active sync   /dev/sde
         6       8       16        5        active sync   /dev/sdb
         8       8       32        6        active sync   /dev/sdc
    • GTvert90 wrote:

      Can drives be hot swapped?
      No, not with mdadm. There are a few threads on the forum where users have 'tested' a failure by simply pulling a drive; on reboot the raid comes back as inactive and has to be reassembled, after which it appears as clean/degraded, and they have to go through the process that you have just done.

      Sorry, yes, the info looks OK; the raid is back to normal.
      Raid is not a backup! Would you go skydiving without a parachute?
    • geaves wrote:

      GTvert90 wrote:

      Can drives be hot swapped?
      No, not with mdadm. There are a few threads on the forum where users have 'tested' a failure by simply pulling a drive; on reboot the raid comes back as inactive and has to be reassembled, after which it appears as clean/degraded, and they have to go through the process that you have just done.
      Sorry, yes, the info looks OK; the raid is back to normal.
      So I shut down, removed the bad drive, put a good drive in, and powered it on. It shows all my disks, but I have nothing under RAID. It doesn't show any arrays. Ideas?
    • Source Code

      root@openmediavault:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sde[7](S) sdb[6](S) sdf[1](S) sda[4](S) sdc[9](S) sdd[8](S)
            23441343504 blocks super 1.2
      unused devices: <none>
      root@openmediavault:~# blkid
      /dev/sdh1: UUID="00891412-6ed1-43cb-8460-761ebfaf9786" TYPE="ext4" PARTUUID="f9c061ab-01"
      /dev/sdh3: LABEL="SSD Data" UUID="7a981dd3-4c7d-4f1c-9002-050b1a7a57e0" TYPE="ext4" PARTUUID="f9c061ab-03"
      /dev/sdh5: UUID="fbfec248-cf4c-4069-9fa1-49aac24c64fa" TYPE="swap" PARTUUID="f9c061ab-05"
      /dev/sdb: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="8d952e8f-dc6c-b800-3454-9ffd5769255c" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sda: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="75bdbcd6-6039-80d9-d718-adce02a77dd1" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sdd: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="83372978-ed2d-7122-5ffd-550b979ceefc" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sde: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="6cf55581-47cd-73d4-a16f-48ea65219772" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sdf: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="f5a1474f-cb09-a4a8-72c1-87b2655d31e4" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sdg2: LABEL="Media" UUID="381CCEC11CCE7A00" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="1c3cd474-4d84-4d90-99e7-ef3414d9e3ef"
      /dev/sdc: UUID="7a39528f-fab6-6321-dc1e-d267566f9251" UUID_SUB="161974ea-c3a4-5990-9357-f1e19ee4834f" LABEL="openmediavault:Media" TYPE="linux_raid_member"
      /dev/sdg1: PARTLABEL="Microsoft reserved partition" PARTUUID="e547ec55-66dc-458a-b5a6-f0e1f495511f"
      root@openmediavault:~# fdisk -l | grep "Disk "
      Partition 1 does not start on physical sector boundary.
      Disk /dev/sdh: 223.6 GiB, 240057409536 bytes, 468862128 sectors
      Disk identifier: 0xf9c061ab
      Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sde: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdf: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdg: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk identifier: CEFD4E7C-3CBB-4C51-9BFE-04C2E0F38E82
      Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      root@openmediavault:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md127 metadata=1.2 name=openmediavault:Media UUID=7a39528f:fab66321:dc1ed267:566f9251
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR droptopfox90@gmail.com
      MAILFROM root
      root@openmediavault:~# mdam --detail --scan --verbose
      -bash: mdam: command not found
      root@openmediavault:~# mdadm --detail --scan --verbose
      INACTIVE-ARRAY /dev/md127 num-devices=6 metadata=1.2 name=openmediavault:Media UUID=7a39528f:fab66321:dc1ed267:566f9251
         devices=/dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
      root@openmediavault:~#
      Here's all this info for you. Should I make a new thread? I feel like I'm a simple command away from it working. lol
    • GTvert90 wrote:

      So I shut down, removed the bad drive, put a good drive in, and powered it on. It shows all my disks, but I have nothing under RAID
      :?: Why? I said in post 11 that you cannot hot swap the drives!!

      GTvert90 wrote:

      got it up and going with this
      forum.openmediavault.org/index…er-upgrade-to-OMV-4-1-22/
      When you say you've got it going, can you explain? That thread is different from what you have done.
      Raid is not a backup! Would you go skydiving without a parachute?
    • geaves wrote:

      GTvert90 wrote:

      So I shut down, removed the bad drive, put a good drive in, and powered it on. It shows all my disks, but I have nothing under RAID
      :?: Why? I said in post 11 that you cannot hot swap the drives!!

      GTvert90 wrote:

      got it up and going with this
      forum.openmediavault.org/index…er-upgrade-to-OMV-4-1-22/
      When you say you've got it going, can you explain? That thread is different from what you have done.
      I think we were using the term hotswap differently, and I was probably using it wrong. I meant it as: can I pull the drive out and insert a new one without shutting OMV off? So it didn't even occur to me, until it was too late, that I might need to remove the disk from the raid before I shut down and pulled the disk.

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdef] is what got the array to show for me. I then wiped the new drive and rebuilt the array.
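
      For anyone following along, the whole recovery roughly amounts to the sequence below, assuming the replacement disk is the one that came up as /dev/sdg with leftover NTFS partitions in the blkid output above:

      mdadm --stop /dev/md127                                         # the array assembled as inactive, so stop it first
      mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdef]   # force-assemble from the six surviving members
      wipefs -a /dev/sdg                                              # wipe the leftover partition signatures on the replacement disk
      mdadm /dev/md127 --add /dev/sdg                                 # add it so the rebuild can start
      cat /proc/mdstat                                                # rebuild progress shows up here
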
    • GTvert90 wrote:

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdef] is what got the array to show for me. I then wiped the new drive and rebuilt the array.
      Ok, so is the raid now showing clean with all 7 drives?

      GTvert90 wrote:

      I think we were using the term hotswap. I was probably using it wrong.
      :) :thumbup: You can use the GUI to remove a drive from an array, then add a new drive and rebuild it.
      Raid is not a backup! Would you go skydiving without a parachute?