RAID showing clean, degraded

  • I would really appreciate some help with my RAID problem, please. I know a little Linux but have reached the limit of my knowledge and Googling skills.


    My NAS box (64-bit) had a motherboard failure. The replacement computer only supports 32-bit, so I did a fresh install using the instructions from the thread:

    "Installing OMV5 on Raspberry PI's, Armbian SBC's, & i386 32-bit platforms"


    Components:

    1 x IDE 80GB system drive

    2 x SATA 8TB drives in RAID 1 mirror


    After installation, I experienced some issues with the mobo intermittently not seeing one of the SATA drives; a replacement SATA lead resolved this.

    Other times it failed to see all of the drives; a replacement RAM module resolved this.


    I was able to see the original RAID array / file system and recreate the shared folders, with all the data intact.


    At some point I saw a message (can't remember where) that the array was only running on one drive.


    The other drive was still showing as a drive but was no longer part of the array, and it wasn't available in the Recover options, so I used:


    Storage > Disks > /dev/xxx > Wipe


    RAID Management > recover,


    and it let me add the /dev/xxx to the /dev/mdxxx RAID array.
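    For reference, the WebUI steps above correspond roughly to the following commands (a sketch only; /dev/sdc and /dev/md127 are placeholders, substitute your actual disk and array names):

```shell
# Clear any old filesystem/RAID signatures from the removed disk.
# This is destructive -- double-check the device name first.
sudo wipefs -a /dev/sdc

# Add the wiped disk back into the mirror; mdadm then starts a rebuild.
sudo mdadm --manage /dev/md127 --add /dev/sdc
```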


    It showed 'clean, degraded, recovering (x% ...)' but was stuck at 0.1% for ages.


    It now shows "clean, degraded" with no % or time remaining, and the Array Details show 'Spare Devices = 1':


    State : clean, degraded

    Active Devices : 1

    Working Devices : 2

    Failed Devices : 0

    Spare Devices : 1


    Thanks in advance.


    P.S.: I am also seeing multiple errors on the console screen and in the system log; a typical block is:

    Code
    Feb 22 13:24:09 NAS1 kernel: [ 6875.356499] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
    Feb 22 13:24:09 NAS1 kernel: [ 6875.357161] ata3.00: BMDMA stat 0x26
    Feb 22 13:24:09 NAS1 kernel: [ 6875.357799] ata3.00: failed command: READ DMA EXT
    Feb 22 13:24:09 NAS1 kernel: [ 6875.358440] ata3.00: cmd 25/00:00:72:c8:81/00:02:72:00:00/e0 tag 0 dma 262144 in
    Feb 22 13:24:09 NAS1 kernel: [ 6875.358440] res 51/84:e0:92:c8:81/84:01:72:00:00/e0 Emask 0x30 (host bus error)
    Feb 22 13:24:09 NAS1 kernel: [ 6875.359735] ata3.00: status: { DRDY ERR }
    Feb 22 13:24:09 NAS1 kernel: [ 6875.360379] ata3.00: error: { ICRC ABRT }
    Feb 22 13:24:09 NAS1 kernel: [ 6875.361068] ata3: soft resetting link
    Feb 22 13:24:09 NAS1 kernel: [ 6875.635123] ata3.00: configured for UDMA/33
    Feb 22 13:24:09 NAS1 kernel: [ 6875.635144] ata3: EH complete
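    Those ICRC/ABRT errors usually point at the link (cable, connector, or controller) rather than the platters, which would fit the earlier cable trouble. One way to sanity-check this is the drive's SMART CRC counter (a sketch; /dev/sdc is an assumption, use whichever disk sits on ata3):

```shell
# Attribute 199 (UDMA_CRC_Error_Count) increments on cable/signaling
# errors; a rising raw value implicates the link, not the disk surface.
sudo smartctl -A /dev/sdc | grep -i crc

# Overall health verdict plus the full attribute table:
sudo smartctl -H -A /dev/sdc
```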

    Output of the required commands:

    Code
    cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active (auto-read-only) raid1 sdc[2] sdb[0]
    7813894464 blocks super 1.2 [2/1] [U_]
    bitmap: 2/59 pages [8KB], 65536KB chunk
    unused devices: <none>


    Code
    blkid
    /dev/sda1: UUID="678b8982-463f-4ae6-b484-195bbe76617b" TYPE="ext4" PARTUUID="132f9f46-01"
    /dev/sda5: UUID="86420abd-1fd7-425d-bc81-977c0b20cadf" TYPE="swap" PARTUUID="132f9f46-05"
    /dev/sdb: UUID="41e45e20-f7b2-cbf7-0498-1a0dcf71ce99" UUID_SUB="3cdfc28d-aca8-06e1-b0bd-19a7e97a679c" LABEL="NAS1:OMVStore" TYPE="linux_raid_member"
    /dev/md127: LABEL="NTFS" UUID="3416FC8116FC4580" TYPE="ntfs" PTTYPE="atari"
    /dev/sdc: UUID="41e45e20-f7b2-cbf7-0498-1a0dcf71ce99" UUID_SUB="0cdd921d-2079-318f-b77b-250907fd5bab" LABEL="NAS1:OMVStore" TYPE="linux_raid_member"

    Code
    mdadm --detail --scan --verbose
    ARRAY /dev/md/OMVStore level=raid1 num-devices=2 metadata=1.2 spares=1 name=NAS1:OMVStore UUID=41e45e20:f7b2cbf7:04981a0d:cf71ce99
    devices=/dev/sdb,/dev/sdc
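    One detail worth noting in the mdstat output above: the array is 'active (auto-read-only)', and md will not progress a resync while the array is read-only. That would explain the rebuild sitting at 0.1% and the second disk staying listed as a spare. A possible next step (a sketch; verify the array name is really /dev/md127 first):

```shell
# Switch the array to read-write so the pending resync can resume.
sudo mdadm --readwrite /dev/md127

# Then watch the rebuild progress.
watch -n 5 cat /proc/mdstat
sudo mdadm --detail /dev/md127
```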
  • Normally I would happily help, but the above output from the various commands is just too weird. My suggestion: back up your data while one drive is still accessible and start again. Those kernel outputs do not instill confidence in the hardware, let alone the output from the rest.

    Raid is not a backup! Would you go skydiving without a parachute?

  • Thanks for the reply, I appreciate your candour.

    It's a newish array with only about 600GB used, so I'm currently taking a full backup to an external USB drive, then I'll reformat the drives and start a new array.

    I'm also looking for a newer mobo/computer so I can use 64-bit OMV 😃

  • I'm also looking for a newer mobo/ computer so I can use 64bit OMV

    :thumbup: If you do a reinstall, wipe the drives before you do anything with them. You might also want to rethink the RAID 1 and use one drive for data and the second as an rsync backup; have a look at this guide, there is information in there about rsync.
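    For anyone finding this later, the one-drive-data / one-drive-backup setup can be as simple as a scheduled rsync mirror (a sketch; the /srv/data and /srv/backup mount points are hypothetical examples, adjust to your shared folder paths):

```shell
# Mirror the data drive onto the backup drive: -a preserves
# permissions and timestamps, --delete makes the backup an exact copy.
rsync -a --delete /srv/data/ /srv/backup/
```

    OMV's built-in rsync job scheduler can run something like this on a timer instead of by hand.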


    :thumbup: If you do a reinstall, wipe the drives before you do anything with them. You might also want to rethink the RAID 1 and use one drive for data and the second as an rsync backup; have a look at this guide, there is information in there about rsync.

    Thanks for the suggestion, it got me thinking about new ways to tackle my requirements.

    I've ended up ditching the idea of a NAS unit altogether, as I don't actually need network access to my data at the moment.


    I now have each drive (in its original external USB 3 enclosure) plugged into my computer and am using a 'live' replicator program to copy changes from Drive #1 to Drive #2.

    It gives me availability in that I can always access data from either drive.

    It gives me back-up in that I can access data from the second drive if the first fails.

    I can plug the drives into any other computer should my main one fail.


    Maybe in the future I will explore NAS again, but with newer technology!
