Degraded array event - disabling false warning

  • Hi guys


    First post here as I am new to OMV, though not to network admin and Linux servers. I am getting warnings about a degraded RAID. Now I know why this is happening: about 5 years back I moved a QNAP drive onto a Debian server with SMB shares. This worked fine. There were a number of file systems on the QNAP drive, as you would expect, but given the amount of storage used on the drive I decided not to copy it all off and back onto a freshly partitioned ext4 drive.


    So OMV is seeing that the drive was at one time in a QNAP and still carries some sort of RAID info, though it was only ever a single drive on its own.


    Is there any way to stop OMV warning me about the degraded RAID?


    Thanks guys

    Andy

    • Official Post

    One way might be to zero the superblock, but as this is from a QNAP I don't know if that would work. Do you get any output from mdadm --examine /dev/sd? (where ? is the drive reference)?
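
    Something along these lines, assuming the disk shows up as /dev/sda; adjust the drive reference and repeat for each partition that might carry metadata:

    mdadm --examine /dev/sda      # metadata on the whole disk, if any
    mdadm --examine /dev/sda1     # repeat per partition (sda1, sda4, ...)
    cat /proc/mdstat              # which md arrays are currently assembled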

  • Hi Geaves


    Thanks for your reply and very sorry I missed it!


    mdadm --examine /dev/sda

    gives:


    /dev/sda:

    MBR Magic : aa55

    Partition[0] : 4294967295 sectors at 1 (type ee)

  • I get 3 notifications each day at 0805, which is the same time the automatic system updates download. The messages are the same apart from the variation I have listed at the bottom here:


    ---

    Subject: DegradedArray event on /dev/md/2_0:omv1

    This is an automatically generated mail message from mdadm running on omv1


    A DegradedArray event had been detected on md device /dev/md/2_0.


    Faithfully yours, etc.


    P.S. The /proc/mdstat file currently contains the following:


    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md9 : active (auto-read-only) raid1 sda1[0]

    530112 blocks super 1.0 [2/1] [U_]

    bitmap: 9/9 pages [36KB], 32KB chunk


    md127 : active (auto-read-only) raid1 sda4[0]

    458880 blocks super 1.0 [2/1] [U_]

    bitmap: 7/8 pages [28KB], 32KB chunk

    ---


    A DegradedArray event had been detected on md device /dev/md/2_0.


    A DegradedArray event had been detected on md device /dev/md/sda4_0.


    A DegradedArray event had been detected on md device /dev/md/9_0.

  • mdadm --examine /dev/sda1

    /dev/sda1:

    Magic : a92b4efc

    Version : 1.0

    Feature Map : 0x1

    Array UUID : d58b0ecd:e3a465d7:90c2ea97:3e0f6398

    Name : 9

    Creation Time : Sat Jan 7 04:55:57 2017

    Raid Level : raid1

    Raid Devices : 2


    Avail Dev Size : 1060224 (517.69 MiB 542.83 MB)

    Array Size : 530112 (517.69 MiB 542.83 MB)

    Super Offset : 1060232 sectors

    Unused Space : before=0 sectors, after=8 sectors

    State : clean

    Device UUID : a9d08707:163d47f8:1ced6977:fe66c04b


    Internal Bitmap : 2 sectors from superblock

    Update Time : Sun Dec 24 14:03:07 2017

    Checksum : f3074a77 - correct

    Events : 152245


    Device Role : Active device 0

    Array State : A. ('A' == active, '.' == missing, 'R' == replacing)



    mdadm --examine /dev/sda4

    /dev/sda4:

    Magic : a92b4efc

    Version : 1.0

    Feature Map : 0x1

    Array UUID : cf828cbb:9656ee63:ccc9b03a:cdd6c91d

    Name : sda4

    Creation Time : Sat Jan 7 04:56:06 2017

    Raid Level : raid1

    Raid Devices : 2


    Avail Dev Size : 996000 (486.33 MiB 509.95 MB)

    Array Size : 458880 (448.13 MiB 469.89 MB)

    Used Dev Size : 917760 (448.13 MiB 469.89 MB)

    Super Offset : 996008 sectors

    Unused Space : before=0 sectors, after=78248 sectors

    State : clean

    Device UUID : bffaa7cd:70de1f9c:df243dd0:32bc97aa


    Internal Bitmap : 2 sectors from superblock

    Update Time : Sun Dec 24 14:03:11 2017

    Checksum : ec39a778 - correct

    Events : 40400



    Device Role : Active device 0

    Array State : A. ('A' == active, '.' == missing, 'R' == replacing)

    • Official Post

    :/ There are two ways you could possibly do this:


    1) Stop each array that mdadm detects and delete the superblock. For the first one that would be mdadm --stop /dev/md9 followed by mdadm --zero-superblock /dev/sda1; after doing both arrays you would have to run update-initramfs -u (see the sketch after this list).


    2) Boot with a GParted Live CD and select the drive; it will display the partition information. If there is no data on those partitions, delete them, then extend the working partition with the recovered space.
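
    A minimal sketch of option 1, assuming the leftover arrays are the md9 and md127 shown in /proc/mdstat above and that their members are /dev/sda1 and /dev/sda4; double-check the device names on your own system first, because zeroing a superblock cannot be undone:

    mdadm --stop /dev/md9                 # stop the first leftover array
    mdadm --zero-superblock /dev/sda1     # remove its mdraid metadata
    mdadm --stop /dev/md127               # stop the second leftover array
    mdadm --zero-superblock /dev/sda4     # remove its mdraid metadata
    update-initramfs -u                   # rebuild the initramfs so the arrays are not assembled at boot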


    Will any of this work? :) Well, how long's a piece of string.


    If this were me, presented with the above I would go with option 2; at least you can see how the drive is partitioned and go from there. OR: back up, remove all references to the shares (in reverse order), wipe the drive, then recreate the file system and the shares.
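
    If you took the wipe-and-recreate route from the shell rather than the OMV UI, a rough sketch might look like this, assuming the disk is /dev/sda and everything on it has already been backed up, as this destroys all data on the drive:

    wipefs -a /dev/sda                                        # remove all filesystem/RAID signatures
    parted /dev/sda mklabel gpt                               # write a fresh GPT partition table
    parted -a optimal /dev/sda mkpart primary ext4 0% 100%    # one partition spanning the disk
    mkfs.ext4 /dev/sda1                                       # create the new ext4 file system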


    BTW all suggestions come with the usual health warnings :)

  • Many thanks, Geaves. I will think it over. I agree the best option is backup, format, partition, etc., and copy back the data. But being ~6.5 TB it would take some time. It is something to plan for at a future date.


    I might just accept the warnings for the moment as an easier though less elegant solution.


    Thanks again.
