Wrong free inode count keeps setting the filesystem to read-only

  • Hello mates, about a week ago I started having problems with the filesystem reporting wrong free inode counts. I have to run fsck.ext4 -fy on the partition to fix the problems and then reboot. This is the first time I've had this problem.


    This SSD is two years old, a WD Green 240 GB SSD. Is the problem related to the SSD?


    This is the S.M.A.R.T. extended information; I don't see anything concerning in it.

    https://pastebin.com/A6CxEH72
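
    For reference, a report like the one linked above can be generated with smartmontools. A minimal sketch, assuming the package is installed and the SSD shows up as /dev/sda (adjust the device name for your system):

    # full extended report: attributes, error log, self-test log
    sudo smartctl -x /dev/sda

    # or just the vendor attribute table
    sudo smartctl -A /dev/sda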

    • Official post

    This isn't good... More than 0 is bad.


    ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
    169 Total_Bad_Blocks        -O--CK   100   100   ---    -    156


    And I don't know how the 80 TBW life is calculated for your SSD, but it looks like your drive is about halfway through its life (rough numbers below the values). It might be failing early.

    ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
    233 NAND_GB_Written_TLC     -O--CK   100   100   ---    -    7657
    234 NAND_GB_Written_SLC     -O--CK   100   100   000    -    24853
    241 Host_Writes_GiB         ----CK   100   100   000    -    10634
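
    Rough arithmetic, assuming the 80 TBW rating is meant to be compared against total NAND writes (SLC cache plus TLC) rather than host writes, which is a guess on my part:

    7657 GB (TLC) + 24853 GB (SLC) = 32510 GB ≈ 32.5 TB written to NAND
    32.5 TB / 80 TBW ≈ 40% of the rated endurance used in about two years

    The host itself only wrote 10634 GiB ≈ 11.4 TB, so the SLC caching is amplifying the wear by roughly 3x.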

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • OK, thanks for the answer. I'm buying a new one; I had my suspicions since I paid much less than for other options for this WD Green, but I thought it would last at least some 5 years. Any recommendations on brand? I was initially looking at the WD Blue.


    Is an HP 5100 PRO a better option? Or should I buy a 2.5" HDD for the OMV OS instead? I can't pay for a WD Red NAS SSD right now.

    • Official post

    Any recommendations on brand?

    Most of my SSDs are Samsung.


    Is an HP 5100 PRO a better option?

    No idea.


    Or should I buy a 2.5" HDD for the OMV OS instead?

    OMV didn't do all of those writes. So, if you are using the SSD ONLY for the OMV OS, an SSD is fine. If you are using it for data as well, then that is the problem. I have run OMV on USB flash drives for years (with the flashmemory plugin installed), but they aren't used for data.
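
    If you want to see where the writes are actually coming from, something like iotop shows accumulated I/O per process. A minimal sketch, assuming the iotop package is installed:

    # -a accumulate totals since start, -o only active processes, -P group by process
    sudo iotop -aoP

    # or check the I/O counters of a single process since it started (the PID is just an example)
    cat /proc/1234/io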


  • OMV didn't do all of those writes. So, if you are using the SSD ONLY for the OMV OS, an SSD is fine. If you are using it for data as well, then that is the problem. I have run OMV on USB flash drives for years (with the flashmemory plugin installed), but they aren't used for data.

    My Docker container config files were on the SSD, so I guess they pushed a lot of writes and reads to it (Deluge, Plex, …). I also had swap on the SSD, although it wasn't used often.


    I'll move the container configs to an HDD.
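
    For what it's worth, the move itself can be done with rsync while the containers are stopped. A minimal sketch, assuming the configs currently live under /srv/ssd/appdata and the HDD is mounted at /srv/hdd (both paths are made up, adjust to your setup):

    # stop the stack so nothing writes during the copy
    docker compose down

    # copy preserving permissions, ownership, hard links, ACLs and xattrs
    sudo rsync -aHAX /srv/ssd/appdata/ /srv/hdd/appdata/

    # update the bind-mount paths in the compose files, then bring it back up
    docker compose up -d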

    • Official post

    It is very possible that fsck isn't doing a good job if you are running it on a filesystem that is mounted read-write, but as you said, I would still dump the SSD.


  • It is very possible that fsck isn't doing a good job if you are running it on a filesystem that is mounted read-write, but as you said, I would still dump the SSD.

    Yep, I was doing it wrong by running fsck on a mounted system. After running fsck from a live Ubuntu, the problem hasn't come back for a week.
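
    For anyone else hitting this, the check has to run on an unmounted filesystem, e.g. from a live/rescue session. A minimal sketch of that kind of run, assuming the affected partition is /dev/sda1 (adjust to your layout):

    # confirm what is mounted where
    lsblk -f

    # unmount the partition, then force a full check and fix problems automatically
    sudo umount /dev/sda1
    sudo fsck.ext4 -f -y /dev/sda1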


    At least I'll have a few weeks while I wait for an SSD replacement and do a proper migration.

  • Hello, I just want to add a follow-up in case anyone else runs into this problem.


    A few weeks after this thread I began receiving kernel messages about "failed command: READ FPDMA QUEUED", which is apparently related to a communication error with the disk, like a cable problem. I made an image of the disk with ddrescue and it came up with no errors on all of its passes, so I'm speculating that the WD Green SSD's problem was in the connection to the motherboard; the filesystem itself was OK.
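
    In case it helps anyone, that kind of image can be made with GNU ddrescue. A minimal sketch, assuming the failing SSD is /dev/sdb and the image goes to a mounted backup disk (device and file names are examples):

    # first pass: copy what it can, keep a map file so the run can be resumed
    sudo ddrescue -d /dev/sdb /mnt/backup/wdgreen.img /mnt/backup/wdgreen.map

    # then retry any bad areas a few more times
    sudo ddrescue -d -r3 /dev/sdb /mnt/backup/wdgreen.img /mnt/backup/wdgreen.map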


    I replaced the disk with a Samsung 860 EVO, same size but double the price compared to the WD Green, and I haven't had any problems since.

  • Agricola

    Added the "solved" label.
