Need help with a DegradedArray notification

  • Hello everyone,


    A few days ago I got an email notification about a degraded array, which I am a bit confused about:


    This led me to log into OMV and check the state of the drives.
    They are all accessible and their SMART status seems to be OK:
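
    For completeness, the same check can be done from the shell. A minimal sketch, assuming smartmontools is installed and the two RAID disks are /dev/sdb and /dev/sdc (as the output further down shows):

    smartctl -H /dev/sdb    # overall health self-assessment
    smartctl -H /dev/sdc
    smartctl -a /dev/sdc    # full report; Reallocated_Sector_Ct and Current_Pending_Sector are worth a look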



    The RAID was configured across /dev/sdb and /dev/sdc, but somehow it is shown in a state that I don't understand:


    And if I click on recover, I can't select any device:



    Any help appreciated!


    Here's the required information about my setup:

    • OMV 3.0.99 Erasmus with Kernel 3.16.0-4-amd64
    • 4 disks in total (one 500 GB disk for the OMV host, 2x 2 TB in RAID1, 1x 8 TB)

    Here are the log outputs:


    root@openmediavault:~# cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdb[0]
    1953383488 blocks super 1.2 [2/1] [U_]
    bitmap: 15/15 pages [60KB], 65536KB chunk


    unused devices: <none>


    root@openmediavault:~# blkid
    /dev/sda1: UUID="b279d58d-a670-4db5-a4a2-a70bbd0c1f10" TYPE="ext4" PARTUUID="56d063f1-01"
    /dev/sda5: UUID="8feb4be8-3916-4d2c-acd7-d76fe2089a47" TYPE="swap" PARTUUID="56d063f1-05"
    /dev/sdb: UUID="5439acfd-992c-f538-fc8d-08f4fa4f8fd7" UUID_SUB="de544b7f-38e8-9789-c0a0-48064749c10b" LABEL="openmediavault:RAID" TYPE="linux_raid_member"
    /dev/sdd1: UUID="1eaa2aa6-3acf-4b46-bd11-1954749d8470" TYPE="ext4" PARTUUID="6a6bdcf6-2744-4a33-9d43-c3b09c167dd5"
    /dev/md0: UUID="4559837c-667d-46d0-9ec5-053f67eed5fa" TYPE="ext4"
    /dev/sdc: UUID="5439acfd-992c-f538-fc8d-08f4fa4f8fd7" UUID_SUB="2ded4613-7409-0a9c-1958-2730c4b2e0c7" LABEL="openmediavault:RAID" TYPE="linux_raid_member"





    root@openmediavault:~# fdisk -l | grep "Disk "
    Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
    Disk identifier: 0x56d063f1
    Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
    Disk identifier: 849C94FC-17E3-4AE3-8440-05852118C172
    Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
    Disk /dev/md0: 1.8 TiB, 2000264691712 bytes, 3906766976 sectors


    root@openmediavault:~# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md0 metadata=1.2 name=openmediavault:RAID UUID=5439acfd:992cf538:fc8d08f4:fa4f8fd7


    root@openmediavault:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=openmediavault:RAID UUID=5439acfd:992cf538:fc8d08f4:fa4f8fd7
    devices=/dev/sdb
    root@openmediavault:~#
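
    Since blkid above still reports /dev/sdc as a linux_raid_member, the superblock on the dropped disk can also be inspected directly (a sketch, not part of the requested info):

    mdadm --examine /dev/sdc    # prints the md superblock; the Array UUID should match the one in mdadm.conf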

  • Sorry that I created a second thread with the same issue.
    When submitting this thread I got an error message saying the thread creation was unsuccessful, so I thought it hadn't worked (but it did anyway).



    Editing also doesn't work:



    I am using Safari on a MacBook Pro (latest updates installed, macOS Mojave).

    • Official post

    Have a look at this thread; the symptoms appear to be similar. Please ignore the fact that I got lost with what the OP was doing, but if you run mdadm --detail /dev/md0, the output should show /dev/sdc as removed.


    What you could try, and I emphasise could, is mdadm --manage /dev/md0 --add /dev/sdc. The only reason I suggest this is that the drive is visible in Storage -> Disks and the OP from the other thread did the same. If it errors, then a different approach is needed (see the sketch at the end of this post).


    BTW, if you get a forum error like the one above, ignore it; go to the top of the page, click the forum tab, and you'll find your post has been added.
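
    Putting it together, a minimal sketch of the sequence (device names taken from your output; only proceed if --detail really lists /dev/sdc as removed):

    mdadm --detail /dev/md0                    # /dev/sdc should appear as "removed"
    mdadm --manage /dev/md0 --add /dev/sdc     # re-add the dropped member
    cat /proc/mdstat                           # the rebuild should start right away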


  • Thanks! I'll give it a try!

  • That's the output; it seems to be rebuilding now.
    But why did it happen in the first place?


    root@openmediavault:~# mdadm --detail /dev/md0
    /dev/md0:
    Version : 1.2
    Creation Time : Fri Jan 27 02:40:34 2017
    Raid Level : raid1
    Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
    Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
    Raid Devices : 2
    Total Devices : 1
    Persistence : Superblock is persistent


    Intent Bitmap : Internal


    Update Time : Tue Feb 5 00:39:04 2019
    State : clean, degraded
    Active Devices : 1
    Working Devices : 1
    Failed Devices : 0
    Spare Devices : 0


    Name : openmediavault:RAID (local to host openmediavault)
    UUID : 5439acfd:992cf538:fc8d08f4:fa4f8fd7
    Events : 597622


    Number Major Minor RaidDevice State
    0 8 16 0 active sync /dev/sdb
    2 0 0 2 removed

    root@openmediavault:~# mdadm --manage /dev/md0 --add /dev/sdc
    mdadm: re-added /dev/sdc

    root@openmediavault:~# mdadm --detail /dev/md0

    /dev/md0:
    Version : 1.2
    Creation Time : Fri Jan 27 02:40:34 2017
    Raid Level : raid1
    Array Size : 1953383488 (1862.89 GiB 2000.26 GB)
    Used Dev Size : 1953383488 (1862.89 GiB 2000.26 GB)
    Raid Devices : 2
    Total Devices : 2
    Persistence : Superblock is persistent


    Intent Bitmap : Internal


    Update Time : Tue Feb 5 00:39:38 2019
    State : clean, degraded, recovering
    Active Devices : 1
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 1


    Rebuild Status : 0% complete


    Name : openmediavault:RAID (local to host openmediavault)
    UUID : 5439acfd:992cf538:fc8d08f4:fa4f8fd7
    Events : 597628


    Number Major Minor RaidDevice State
    0 8 16 0 active sync /dev/sdb
    1 8 32 1 spare rebuilding /dev/sdc
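

    A small sketch for keeping an eye on the rebuild (the interval is arbitrary):

    watch -n 30 cat /proc/mdstat                         # refresh the resync progress every 30 seconds
    mdadm --detail /dev/md0 | grep -E 'State|Rebuild'    # or just the state and rebuild lines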





  • Because OMV 3 is EOL there are no more updates, but the choice is yours :)
    Looks as if the RAID is rebuilding :thumbup:

  • Thanks for your feedback, the rebuild is complete now.


    But I still don't get why it happened in the first place.
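
    As for the cause, the kernel log usually records why a member is kicked out of an array (I/O errors, link resets, an unclean shutdown). A sketch of where to look, assuming the standard Debian log locations on OMV 3:

    grep -iE 'md0|md/raid1|sdc' /var/log/syslog    # md events and messages mentioning the dropped disk
    dmesg | grep -iE 'ata|sdc' | tail -n 50        # recent kernel messages about the controller/disk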


