Need help with a DegradedArray notification


      Hello everyone,

      A few days ago I got an email notification about a degraded array that I'm a bit confused about:

      OMV wrote:

      This is an automatically generated mail message from mdadm
      running on openmediavault

      A DegradedArray event had been detected on md device /dev/md0.

      Faithfully yours, etc.

      P.S. The /proc/mdstat file currently contains the following:

      Personalities : [raid1]
      md0 : active raid1 sdb[0]
      1953383488 blocks super 1.2 [2/1] [U_]
      bitmap: 15/15 pages [60KB], 65536KB chunk

      unused devices: <none>
      This prompted me to log into OMV and check the state of the drives.
      They are all accessible and their SMART status seems to be OK:



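      For reference, this is roughly how I can also check the SMART health of both RAID members from the shell (a minimal sketch, assuming smartmontools is installed):

      # quick SMART health check of both mirror halves
      smartctl -H /dev/sdb
      smartctl -H /dev/sdc
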
      The RAID was configured across /dev/sdb and /dev/sdc, but somehow it is now in this confusing state:


      OMV wrote:

      Version : 1.2
      Creation Time : Fri Jan 27 02:40:34 2017
      Raid Level : raid1
      Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
      Total Devices : 1
      Persistence : Superblock is persistent

      Intent Bitmap : Internal

      Update Time : Mon Feb 4 22:24:12 2019
      State : clean, degraded
      Active Devices : 1
      Working Devices : 1
      Failed Devices : 0
      Spare Devices : 0

      Name : openmediavault:RAID (local to host openmediavault)
      UUID : 5439acfd:992cf538:fc8d08f4:fa4f8fd7
      Events : 595312

      Number   Major   Minor   RaidDevice   State
         0        8      16        0        active sync   /dev/sdb
         2        0       0        2        removed
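
      If it helps with the diagnosis, I could also dump the md superblock of the dropped disk directly; a sketch I haven't actually run yet:

      # show the RAID superblock of the missing mirror half
      mdadm --examine /dev/sdc

      That should list its Events counter and Update Time, which can be compared against /dev/sdb.
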
      And if I click on recover, I can't select any device:



      Any help appreciated!

      Here's the required information about my setup:
      • OMV 3.0.99 Erasmus with Kernel 3.16.0-4-amd64
      • 4 disks in total (one 500 GB for the OMV host, 2x 2 TB in RAID1, 1x 8 TB)
      Here are the log outputs:

      root@openmediavault:~# cat /proc/mdstat
      Personalities : [raid1]
      md0 : active raid1 sdb[0]
      1953383488 blocks super 1.2 [2/1] [U_]
      bitmap: 15/15 pages [60KB], 65536KB chunk


      unused devices: <none>


      root@openmediavault:~# blkid
      /dev/sda1: UUID="b279d58d-a670-4db5-a4a2-a70bbd0c1f10" TYPE="ext4" PARTUUID="56d063f1-01"
      /dev/sda5: UUID="8feb4be8-3916-4d2c-acd7-d76fe2089a47" TYPE="swap" PARTUUID="56d063f1-05"
      /dev/sdb: UUID="5439acfd-992c-f538-fc8d-08f4fa4f8fd7" UUID_SUB="de544b7f-38e8-9789-c0a0-48064749c10b" LABEL="openmediavault:RAID" TYPE="linux_raid_member"
      /dev/sdd1: UUID="1eaa2aa6-3acf-4b46-bd11-1954749d8470" TYPE="ext4" PARTUUID="6a6bdcf6-2744-4a33-9d43-c3b09c167dd5"
      /dev/md0: UUID="4559837c-667d-46d0-9ec5-053f67eed5fa" TYPE="ext4"
      /dev/sdc: UUID="5439acfd-992c-f538-fc8d-08f4fa4f8fd7" UUID_SUB="2ded4613-7409-0a9c-1958-2730c4b2e0c7" LABEL="openmediavault:RAID" TYPE="linux_raid_member"


      root@openmediavault:~# fdisk -l | grep "Disk "
      Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
      Disk identifier: 0x56d063f1
      Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk identifier: 849C94FC-17E3-4AE3-8440-05852118C172
      Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
      Disk /dev/md0: 1.8 TiB, 2000264691712 bytes, 3906766976 sectors


      root@openmediavault:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #


      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions


      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes


      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>


      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=openmediavault:RAID UUID=5439acfd:992cf538:fc8d08f4:fa4f8fd7


      root@openmediavault:~# mdadm --detail --scan --verbose
      ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=openmediavault:RAID UUID=5439acfd:992cf538:fc8d08f4:fa4f8fd7
      devices=/dev/sdb
      root@openmediavault:~#


    • Sorry that I created a second thread with the same issue.
      When submitting this thread, I got an error message at the end saying the thread creation was unsuccessful, so I thought it hadn't worked (but it did anyway).


      Editing also doesn't work:



      I am using Safari on a MacBook Pro (latest updates installed, macOS Mojave).


    • Have a look at this thread; the symptoms appear to be similar (please ignore the fact that I got lost with what the OP was doing). If you run mdadm --detail /dev/md0, the output should confirm that /dev/sdc is listed as removed.

      What you could try, and I emphasise could, is mdadm --manage /dev/md0 --add /dev/sdc. The only reason I suggest this is that the drive is visible in Storage -> Disks and the OP from the other thread did the same. If it errors, then it's a different approach.
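
      If you want to be cautious, you could compare the two superblocks before re-adding; a rough sketch, adjust the device names if they differ on your box:

      # compare event counters and last update times of both mirror halves
      mdadm --examine /dev/sdb /dev/sdc | grep -E 'Events|Update Time|Device Role'

      Since your array has an internal write-intent bitmap, a successful re-add should only need to resync the blocks that changed while the disk was out.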

      BTW, if you get a forum error like the one above, ignore it; go to the top of the page, click the forum tab, and you'll find your post has been added.
      Raid is not a backup! Would you go skydiving without a parachute?
    • geaves wrote:

      Have a look at this thread; the symptoms appear to be similar (please ignore the fact that I got lost with what the OP was doing). If you run mdadm --detail /dev/md0, the output should confirm that /dev/sdc is listed as removed.

      What you could try, and I emphasise could, is mdadm --manage /dev/md0 --add /dev/sdc. The only reason I suggest this is that the drive is visible in Storage -> Disks and the OP from the other thread did the same. If it errors, then it's a different approach.

      BTW, if you get a forum error like the one above, ignore it; go to the top of the page, click the forum tab, and you'll find your post has been added.
      Thanks! I'll give it a try!
    • That's the output; it seems to be rebuilding now.
      But why did it happen in the first place?

      root@openmediavault:~# mdadm --detail /dev/md0
      /dev/md0:
      Version : 1.2
      Creation Time : Fri Jan 27 02:40:34 2017
      Raid Level : raid1
      Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
      Total Devices : 1
      Persistence : Superblock is persistent


      Intent Bitmap : Internal


      Update Time : Tue Feb 5 00:39:04 2019
      State : clean, degraded
      Active Devices : 1
      Working Devices : 1
      Failed Devices : 0
      Spare Devices : 0


      Name : openmediavault:RAID (local to host openmediavault)
      UUID : 5439acfd:992cf538:fc8d08f4:fa4f8fd7
      Events : 597622


      Number   Major   Minor   RaidDevice   State
         0        8      16        0        active sync   /dev/sdb
         2        0       0        2        removed

      root@openmediavault:~# mdadm --manage /dev/md0 --add /dev/sdc
      mdadm: re-added /dev/sdc

      root@openmediavault:~# mdadm --detail /dev/md0

      /dev/md0:
      Version : 1.2
      Creation Time : Fri Jan 27 02:40:34 2017
      Raid Level : raid1
      Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
      Total Devices : 2
      Persistence : Superblock is persistent


      Intent Bitmap : Internal


      Update Time : Tue Feb 5 00:39:38 2019
      State : clean, degraded, recovering
      Active Devices : 1
      Working Devices : 2
      Failed Devices : 0
      Spare Devices : 1


      Rebuild Status : 0% complete


      Name : openmediavault:RAID (local to host openmediavault)
      UUID : 5439acfd:992cf538:fc8d08f4:fa4f8fd7
      Events : 597628


      Number   Major   Minor   RaidDevice   State
         0        8      16        0        active sync        /dev/sdb
         1        8      32        1        spare rebuilding   /dev/sdc
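
      I'm keeping an eye on the rebuild with something simple like this (just a sketch):

      # refresh the md status every 30 seconds until the resync finishes
      watch -n 30 cat /proc/mdstat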

    • geaves wrote:

      stefan1983 wrote:

      As to why?
      Because OMV 3 is EOL, no more updates, but the choice is yours :)
      Looks as if the RAID is rebuilding :thumbup:
      Thanks for your feedback; the rebuild is complete now.

      But I still don't get why it happened in the first place.


      OMV wrote:

      Version : 1.2
      Creation Time : Fri Jan 27 02:40:34 2017
      Raid Level : raid1
      Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
      Total Devices : 2
      Persistence : Superblock is persistent

      Intent Bitmap : Internal

      Update Time : Tue Feb 5 08:15:31 2019
      State : clean
      Active Devices : 2
      Working Devices : 2
      Failed Devices : 0
      Spare Devices : 0

      Name : openmediavault:RAID (local to host openmediavault)
      UUID : 5439acfd:992cf538:fc8d08f4:fa4f8fd7
      Events : 604841

      Number   Major   Minor   RaidDevice   State
         0        8      16        0        active sync   /dev/sdb
         1        8      32        1        active sync   /dev/sdc
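
      To find out why /dev/sdc was dropped, my next step will be to search the logs around the time of the event; a rough sketch, assuming the default Debian log locations:

      # look for md, ata or sdc related errors in the kernel and system logs
      dmesg | grep -iE 'md0|sdc|ata'
      grep -iE 'md0|sdc|ata' /var/log/syslog /var/log/syslog.1

      If nothing obvious shows up there, a flaky SATA cable or a brief disk timeout would be my first guess.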