Disk shows status = red in Smart Menu

  • I only noticed it this morning, and openmediavault didn't notify me of a problem (I have email notifications turned on).


    I have no redundancy; it is a 'linear' array. So what does this red icon actually mean? What do I do next? Have I lost all my data?



    I have attached the output requested in the sticky post

  • Check the SMART values of the drive that is showing the red status.


    Greetings
    David

    "Well... lately this forum has become support for everything except omv" [...] "And is like someone is banning Google from their browsers"


    Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.

    Upload Logfile via WebGUI/CLI
    #openmediavault on freenode IRC | German & English | GMT+1
    Absolutely no Support via PM!
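For reference, the SMART values can also be read on the command line. A minimal sketch; the device name `/dev/sda` is a placeholder, and the attribute lines below are hypothetical sample data standing in for real `smartctl` output:

```shell
# Read all SMART attributes from the drive (placeholder device /dev/sda):
#   smartctl -A /dev/sda
# Hypothetical excerpt of that output, used here as sample data:
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       8
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       2'

# The raw value is the last column; attribute 5 counts reallocated sectors.
realloc=$(printf '%s\n' "$sample" | awk '$1 == 5 {print $NF}')
echo "Reallocated sectors: $realloc"
```

A nonzero raw value for attribute 5 is typically what turns the status icon red.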

  • Your drive has started to reallocate sectors, so it is beginning to wear out. Replace it sooner rather than later.



  • I don't know right now, maybe @ryecoaaron can point you to it.



    • Official Post

    Using a linear array is dangerous. I'm not sure what the best way to replace it would be. You could try cloning it with Clonezilla, but I have never tried that. dd should work too.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks ryecoaaron,


    I have OMV set up to rsync to a Raspberry Pi nightly, so I wasn't too worried about redundancy in the array. I'll try dd'ing the whole drive to my replacement and see how it goes.


    I have already re-run the rsync, so I can just blow away the array and start again if it comes to it.

    • Official Post

    Glad to hear you have a backup :) dd'ing may transfer some weird problems, but I would think an fsck would fix them since the new drive won't have bad sectors of its own.

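The clone-and-check approach could look roughly like this. `/dev/sdX`, `/dev/sdY`, and the partition name are placeholders (double-check with `lsblk` first; dd to the wrong target is destructive), and the executable part below only exercises the same dd invocation harmlessly on throwaway files:

```shell
# Real-disk sketch (do NOT run blindly; verify device names with lsblk first):
#   dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync status=progress
#   fsck -f /dev/sdY1   # then check the copied filesystem on the new drive
# conv=noerror,sync keeps dd going past unreadable sectors, writing zeros instead.

# Harmless dry run of the same dd mechanics on temp files:
src=$(mktemp); dst=$(mktemp)
printf 'example payload' > "$src"
dd if="$src" of="$dst" bs=1M status=none
ok=$(cmp -s "$src" "$dst" && echo yes || echo no)
rm -f "$src" "$dst"
echo "copy verified: $ok"
```

For a drive that already has many unreadable sectors, GNU ddrescue is a common alternative to plain dd, since it retries and logs bad regions.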

  • Change the drive

    Thanks for the suggestion. I'm quite embarrassed, as the drives are not that old and I was hoping they would last me several years, but I will make sure to schedule this and eventually exchange them.


    Out of curiosity, which of the values above actually indicate a hardware failure?

    • Official Post

    The drive is indicating bad sectors, but it could still last for quite some time. You could leave it in place and keep an eye on attributes 197 and 198; if those begin to increase, or you see errors in attribute 5, then replace the drive. Worst case scenario, the drive fails and the RAID will display as degraded, but it remains usable until the drive is replaced.
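The watch-these-attributes advice can be turned into a quick check. A sketch; the attribute lines are hypothetical sample data, and on a real system you would feed in `smartctl -A` output for your drive instead:

```shell
# Hypothetical `smartctl -A` excerpt; on a real system pipe smartctl output in.
sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       2'

# Flag the drive if attribute 5 (reallocated), 197 (pending), or
# 198 (offline uncorrectable) has a nonzero raw value (last column).
verdict=$(printf '%s\n' "$sample" | awk '
    $1 == 5   && $NF > 0 {bad = 1}
    $1 == 197 && $NF > 0 {bad = 1}
    $1 == 198 && $NF > 0 {bad = 1}
    END {print (bad ? "consider replacing" : "ok")}')
echo "$verdict"
```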

  • The drive is indicating bad sectors, but it could still last for quite some time. You could leave it in place and keep an eye on attributes 197 and 198; if those begin to increase, or you see errors in attribute 5, then replace the drive. Worst case scenario, the drive fails and the RAID will display as degraded, but it remains usable until the drive is replaced.

    Great info. Thanks a bunch!

  • But isn't it possible to repair these bad sectors?


    I have a quite new SSD (bought in May) that has not seen much use, but it also shows a red dot... and when looking at the chart I have no clue what the problem really is... I am somewhat reluctant to believe it is already worn out...

    I don't think bad sectors are actually repaired. Rather, the drive will attempt to copy the data in them to spare sectors and prevent the bad sectors from being used again in the future. For this reason the drive will forever show that it has bad sectors, and OMV will reflect this in the status.

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • I don't think bad sectors are actually repaired. Rather, the drive will attempt to copy the data in them to spare sectors and prevent the bad sectors from being used again in the future. For this reason the drive will forever show that it has bad sectors, and OMV will reflect this in the status.

    That means I could leave it as it is, correct? I am just a bit astonished that this message occurred so quickly... I will check whether I can replace my SanDisk...

  • I had a hard disk that suddenly displayed that red dot. Looking into it, I saw that eight bad sectors had been discovered and the data successfully reallocated to spare sectors. If that had been the end of it I would have been happy, but the number of reallocated sectors kept increasing. When it got to over 300 I requested an advance replacement drive under warranty. I got the new drive a few days later, and by then the number of bad sectors had increased to over 800, an increase of more than 500 new bad sectors in just a few days.


    There are two things to keep in mind here. Has the data in the bad sectors been successfully reallocated to spare sectors, and, if the problem continues, how many spare sectors are left? Even if all the data is successfully reallocated, you will run out of spare sectors sooner or later, and once that happens data will begin to be lost. The drive may also stop being able to read any data at all, without any warning.


    My position is that once any reallocated sectors are reported, I plan on replacing the drive as soon as I can.


    • Official Post

    My position is that once any reallocated sectors are reported, I plan on replacing the drive as soon as I can.

    +1


  • Thank you for the +1. It's still under warranty, so I will probably send it in right away to get it either reimbursed or exchanged for a new one.


    Meanwhile I also checked the disk with smartmontools, and it does not show any error... Isn't OMV also using smartctl to get all the SMART details out of the HDD? I am a bit confused now.
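One likely source of the confusion: `smartctl -H` only reports the drive's overall self-assessment, which can stay PASSED while individual attributes are already degrading, and OMV's red dot keys off those attributes. A sketch with hypothetical sample strings standing in for real smartctl output:

```shell
# Hypothetical output of `smartctl -H /dev/sda` (overall self-assessment):
health='SMART overall-health self-assessment test result: PASSED'
# Hypothetical attribute line from `smartctl -A /dev/sda`:
attrs='197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       2'

# Overall health can be PASSED even while pending sectors are nonzero.
pending=$(printf '%s\n' "$attrs" | awk '$1 == 197 {print $NF}')
case "$health" in *PASSED*) overall=PASSED ;; *) overall=FAILED ;; esac
echo "overall=$overall pending=$pending"
```

So both tools can be right at the same time: the drive passes its own health test, while OMV flags the nonzero pending/reallocated sector counts.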
