RAID1 - Status: Clean, Degraded

  • Dear OMV experts,


    I have an issue with my RAID1, whose status is "Clean, Degraded".
    I have tried to read as much as possible about similar issues in this forum.


    Below is the information I believe is needed from my NAS so that the issue can be assessed:


    Code
    root@helios4:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sdb[1]
          3906886464 blocks super 1.2 [2/1] [_U]
          bitmap: 7/30 pages [28KB], 65536KB chunk

    unused devices: <none>


    Code
    root@helios4:~# blkid
    /dev/mmcblk0p1: UUID="8892867a-c9b1-41ae-89b5-19305e0a0bb4" TYPE="ext4" PARTUUID="3759f4dd-01"
    /dev/sda: UUID="bca2fb0e-cea2-a5f7-1018-c1ddbfb580de" UUID_SUB="3b3baa37-fa9f-9b23-3629-11a3ce48333c" LABEL="helios4:raid1" TYPE="linux_raid_member"
    /dev/md0: LABEL="data" UUID="2b2aa904-2b11-49a0-ae10-8f030c524cfa" TYPE="ext4"
    /dev/sdb: UUID="bca2fb0e-cea2-a5f7-1018-c1ddbfb580de" UUID_SUB="5bbb90ae-998a-6e95-f225-404affa2282a" LABEL="helios4:raid1" TYPE="linux_raid_member"
    /dev/mmcblk0: PTUUID="3759f4dd" PTTYPE="dos"


    Code
    root@helios4:~# fdisk -l | grep "Disk "
    Disk /dev/mmcblk0: 29.7 GiB, 31914983424 bytes, 62333952 sectors
    Disk identifier: 0x3759f4dd
    Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk model: WDC WD40EFRX-68N
    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk model: WDC WD40EFRX-68N
    Disk /dev/md0: 3.7 TiB, 4000651739136 bytes, 7813772928 sectors


    Code
    root@helios4:~# cat /etc/mdadm/mdadm.conf
    # This file is auto-generated by openmediavault (https://www.openmediavault.org)
    # WARNING: Do not edit this file, your changes will get lost.

    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #

    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions

    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # definitions of existing MD arrays
    ARRAY /dev/md0 metadata=1.2 name=helios4:raid1 UUID=bca2fb0e:cea2a5f7:1018c1dd:bfb580de


    Code
    root@helios4:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=helios4:raid1 UUID=bca2fb0e:cea2a5f7:1018c1dd:bfb580de
       devices=/dev/sdb


    Type of Drives and Qty being used:

    The two hard disks used in the RAID1 are Western Digital Red 4 TB (CMR).


    What might have caused the array to stop working:

    The NAS is not on 24/7; most of the time it is actually off (fully shut down).

    I only turn it on once or twice a week, for around 4 to 6 hours at most each time, when I use it (I don't have a UPS and am afraid that a power outage causing an improper shutdown would do more damage, though I have also heard that frequently turning a NAS on and off will "wear" the hard disks).

    I'm not sure whether this power-cycling is what caused the issue.

    The hard disks have been used in this way for around 3 years (from brand new).
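
    (Side note on diagnosis, added here as a hedged suggestion rather than part of the original report: whether the disk itself logged problems can be checked from the command line. A minimal sketch, assuming the missing member is /dev/sda and that the smartmontools package is installed:)

    Code
    # SMART health summary of the dropped disk (device name is an assumption)
    smartctl -H /dev/sda
    # Full SMART attributes; watch Reallocated_Sector_Ct and Current_Pending_Sector
    smartctl -a /dev/sda
    # Kernel messages mentioning the disk or the md layer
    dmesg | grep -Ei 'sda|md0'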


    ------------------------------------------------------------------------

    The "Detail" button in RAID Management:



    shows the below info:




    which trying to understand the information above seems to say that I somehow lost the drive /dev/sda ?
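
    (For reference, the same information can be read from the command line; a minimal sketch, assuming the array is /dev/md0 as in the outputs above:)

    Code
    # List the array members and their state; a dropped slot is shown as "removed"
    mdadm --detail /dev/md0
    # In /proc/mdstat, "[2/1] [_U]" means 2 devices expected, 1 active;
    # the underscore marks the missing member (the slot that held /dev/sda)
    cat /proc/mdstat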


    Also, I'm wondering: what does the "Recover" button do?



    Thank you in advance to all the administrators and experts in this forum.

  • chente

    Approved the thread.
    • Official post

    You are still running OMV5, which is EOL and no longer supported.

    Quote: "Also, I'm wondering: what does the 'Recover' button do?"

    Just that: if you replace a drive, first wiping it to prepare it for use with OMV, the new drive will be displayed when you select Recover and will be added to the existing array.


    For some reason that drive has been removed from the array, but it still shows in blkid and fdisk. You could try:


    1) Storage -> Disks: wipe the drive, and then use the Recover option in RAID Management

    2) ssh into OMV as root and run mdadm --add /dev/md0 /dev/sda (see the sketch below)
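
    A minimal sketch of option 2, assuming /dev/sda is the dropped disk and that any data still on it can be discarded (the --zero-superblock step is an extra precaution, not part of the instruction above):

    Code
    # Check the current state of the array; the missing slot shows as "removed"
    mdadm --detail /dev/md0
    # Clear any stale RAID metadata on the disk being re-added
    # (WARNING: destroys the md superblock on /dev/sda)
    mdadm --zero-superblock /dev/sda
    # Add the disk back; mdadm starts rebuilding (resyncing) automatically
    mdadm --add /dev/md0 /dev/sda
    # Follow the rebuild
    cat /proc/mdstat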

  • Updating OMV5 to OMV6 has been added to my to-do list.

    I will need to tread slowly, as I'm not familiar with OMV and Linux.


    I'm trying your solution 1.

    I went to Storage -> Disks, then Wipe.

    In Wipe there are two options: Secure or Quick.

    I clicked on Secure.

    At the moment (after about 1.5 hours), the wipe is still in progress (screenshot attached).

    It may take a while before it finishes.

    I will post the result once done.


    To learn further: what does a Secure Wipe actually do?
    Is it overwriting the hard disk with dummy data?
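
    (For context, and as an assumption rather than a statement of exactly what OMV runs: a secure wipe amounts to overwriting the entire device, which is why it takes so long on a 4 TB disk, while a quick wipe only clears the partition table and filesystem signatures. Conceptually it is similar to:)

    Code
    # Rough equivalent of a full secure wipe (NOT necessarily the exact command OMV uses)
    # Overwrite the whole device once with zeros ...
    dd if=/dev/zero of=/dev/sda bs=1M status=progress
    # ... or with pseudo-random data
    shred -v -n 1 /dev/sda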


    Thanks

  • The Secure Wipe took around 8 hours, but it finally completed (screenshot attached).



    I then went to RAID Management -> Recover and picked the drive /dev/sda (screenshot attached).
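
    (While the rebuild is running, its progress can also be followed from the CLI; a minimal sketch, assuming the array is /dev/md0:)

    Code
    # Refresh the mdstat view every 60 seconds; the resync line shows percent done and an ETA
    watch -n 60 cat /proc/mdstat
    # Or query the array directly; during a rebuild it reports a "Rebuild Status" line
    mdadm --detail /dev/md0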



    The recovery took around another 8 hours for the 4 TB drive (screenshot attached).



    And finally, I think it's fixed (screenshots attached).





    I ran the CLI commands from my first post again so that the experts can confirm what I'm seeing in the GUI.

    Code
    root@helios4:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid1 sda[2] sdb[1]
          3906886464 blocks super 1.2 [2/2] [UU]
          bitmap: 0/30 pages [0KB], 65536KB chunk
    
    unused devices: <none>
    Code
    root@helios4:~# blkid
    /dev/mmcblk0p1: UUID="8892867a-c9b1-41ae-89b5-19305e0a0bb4" TYPE="ext4" PARTUUID="3759f4dd-01"
    /dev/md0: LABEL="data" UUID="2b2aa904-2b11-49a0-ae10-8f030c524cfa" TYPE="ext4"
    /dev/sda: UUID="bca2fb0e-cea2-a5f7-1018-c1ddbfb580de" UUID_SUB="fb9ddbcd-edf9-4795-0e80-fd14ddba4405" LABEL="helios4:raid1" TYPE="linux_raid_member"
    /dev/sdb: UUID="bca2fb0e-cea2-a5f7-1018-c1ddbfb580de" UUID_SUB="5bbb90ae-998a-6e95-f225-404affa2282a" LABEL="helios4:raid1" TYPE="linux_raid_member"
    /dev/mmcblk0: PTUUID="3759f4dd" PTTYPE="dos"
    Code
    root@helios4:~# fdisk -l | grep "Disk "
    Disk /dev/mmcblk0: 29.7 GiB, 31914983424 bytes, 62333952 sectors
    Disk identifier: 0x3759f4dd
    Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk model: WDC WD40EFRX-68N
    Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
    Disk model: WDC WD40EFRX-68N
    Disk /dev/md0: 3.7 TiB, 4000651739136 bytes, 7813772928 sectors
    Code
    root@helios4:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=helios4:raid1 UUID=bca2fb0e:cea2a5f7:1018c1dd:bfb580de
       devices=/dev/sda,/dev/sdb
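
    (For completeness, the rebuilt array can also be double-checked with mdadm; a minimal sketch:)

    Code
    # Expect State "clean", Active Devices 2, Failed Devices 0,
    # and both /dev/sda and /dev/sdb listed as "active sync"
    mdadm --detail /dev/md0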
  • geaves

    Added the label OMV 5.x.
  • geaves

    Added the label "solved".
