Clean, Degraded Array

  • Hi all,


    I think I have either a failed or failing drive. For the past week my server has been failing to boot (it's headless, turned on by WOL): it powers up but just sits there part-way through booting. After a few hard power-offs, and my general laziness about unplugging it and plugging a monitor into it, I finally did that tonight.


    The boot-up sequence showed that 4 out of 5 devices were being used in 'md127', which I assume is the RAID device.


    The OMV GUI shows all my disks, but the RAID tab only shows 4 devices. It did show a missing one, so I unplugged that device and plugged it back in, and it appeared as /dev/sdg. After moving the machine back downstairs so I could use my laptop instead of my phone, it appeared as /dev/sdd again.


    I'm unsure whether this disk is failing or not. I've seen others mention zeroing out the superblock and re-adding the drive to the array to rebuild it, but I'm not sure if that's the best course of action.


    The drives were cheap and second-hand from a work colleague, so I'm not too fussed if one is dying, but I wanted to make sure it actually is dying before buying a replacement.


    Any tips / steps?


    This is my first failure of a RAID array in Linux, so I'm a little cautious.


    Cheers


    M.

  • Did you check the SMART values of the drive? Another thing to check is the SATA cable.
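

    A quick way to do that, assuming smartmontools is installed and the suspect disk is currently /dev/sdd, is something like:


    Code
    # overall health verdict plus the full attribute table
    smartctl -a /dev/sdd
    # optionally start a short self-test and re-read the results a few minutes later
    smartctl -t short /dev/sdd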


    Edit: Is the RAID now synced again after the re-detection of the hard drive?


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!

  • Hi,


    Nope, the RAID was still marked as degraded. I ran


    Code
    cat /proc/mdstat


    and it showed that I had a device called 'md126', which was the single 3TB drive on its own. So I stopped it via


    Code
    mdadm --stop /dev/md126


    Then


    Code
    mdadm --zero-superblock /dev/sdd


    I could then recover the RAID via the GUI. It is currently rebuilding, but I think the drive is marked as a spare, whereas before I had all 5 drives as active with no spares. If this works I think I'll just leave it as a spare, if it's a dodgy drive anyway.
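

    (For reference, a CLI equivalent of that GUI recovery would presumably be re-adding the wiped disk by hand; the device names here are the ones from earlier in this thread:)


    Code
    # add the wiped disk back so the array rebuilds onto it
    mdadm --add /dev/md127 /dev/sdd
    # watch the rebuild progress
    watch cat /proc/mdstat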


    With regard to SMART, I'm not sure what I'm looking at. Can you give me any pointers on which values are bad?
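

    (A rough pointer rather than gospel: the attributes most people check are the reallocated, pending and offline-uncorrectable sector counts; non-zero raw values there are a bad sign. Assuming smartmontools and /dev/sdd again:)


    Code
    smartctl -A /dev/sdd | grep -Ei 'reallocated|pending|uncorrectable'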


    Cheers

  • There is still a bug in the repair option in the GUI: it rebuilds your RAID by adding the disk as a spare.


    That leaves a spare entry in /etc/mdadm/mdadm.conf, so you will also get false alarms about an absent spare afterwards.
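

    A sketch of the cleanup, assuming a stock Debian/OMV layout where the array is /dev/md127 and the config lives in /etc/mdadm/mdadm.conf:


    Code
    # print a fresh ARRAY line for the array as it stands now
    mdadm --detail --scan
    # compare it with the stale line (the one carrying the spares= entry) in the config
    grep ARRAY /etc/mdadm/mdadm.conf


    Once the rebuild has finished, replacing the stale ARRAY line with the freshly scanned one (and running update-initramfs -u so the copy inside the initramfs matches) should stop the false alarms.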


    The question is: what happened to your RAID before that reconstruction?


    Do you find anything in /var/log/messages or /var/log/syslog?
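

    Something like this narrows it down quickly (assuming the suspect disk is still /dev/sdd):


    Code
    grep -i 'sdd\|md12' /var/log/syslog | tail -n 50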


    Also, after a certain amount of time (or number of mounts), Debian forces a check of the filesystems, and that will take quite some time. You should open a console (via a physical monitor or iLO) to find out what the system is doing when it is not reachable via the web frontend.
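

    For an ext3/ext4 filesystem, the mount count and interval that trigger those forced checks can be inspected, and tuned, with tune2fs; here assuming the filesystem sits directly on /dev/md127:


    Code
    # show the current mount count and check interval settings
    tune2fs -l /dev/md127 | grep -i 'mount count\|check'
    # disable the mount-count based check (use with care)
    tune2fs -c 0 /dev/md127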

