Weird situation - root filesystem filled - please help!

  • Folks,
    I am on Stoneburner for my main media system. It's a long story why I cannot yet upgrade this one, including the need for a TFTP server, iSCSI targets, etc.


    Anyway, this system has been going great and has been rock solid for quite a number of years. However, within the last few months the root filesystem has filled to 100% and I cannot seem to locate what is causing it.


    I run Docker with three containers, including MythTV and Emby, plus a media plugin. The system also has NFS exports.


    Anyway, as mentioned earlier, it had been running great for quite some time, but it appears something (perhaps a package update) may have caused this.


    If you can suggest what I should try next, please let me know.


    Thank you.





  • Login via console or ssh and cd /


    Then as root run: du -sh *


    Examine the directory sizes to see which one got filled up.
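
    If the plain "du -sh *" output is hard to read, a variation along these lines (just a sketch, assuming GNU coreutils) stays on the root filesystem and sorts the results, so mounted data disks and NFS shares don't drown out the culprit:

    Code
    # -x keeps du on the root filesystem; other mounts are skipped
    du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 15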

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Login via console or ssh and cd /


    Then as root run: du -sh *


    Examine the directory sizes to see which one got filled up.

    This is what I am getting. It doesn't really help me pinpoint the problematic files:


  • This isn't making sense:


  • Try unmounting each of your storage disks and then look in their mount point directories.


    These look suspicious:

    Code
    /dev/disk/by-uuid/bcb 7.8G 7.8G 0 100% /var/folder2ram/var/tmp
    /dev/disk/by-uuid/bcb 7.8G 7.8G 0 100% /var/folder2ram/var/spool
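
    If unmounting is awkward, another way to peek underneath active mount points (a sketch; /mnt/rootfs is just an arbitrary temporary directory) is to bind-mount the root filesystem somewhere else. The bind view shows only what is stored on the root disk itself, without the submounts on top of it:

    Code
    mkdir -p /mnt/rootfs
    mount --bind / /mnt/rootfs
    # anything showing up under the usual mount point directories here
    # (e.g. /media or /srv, depending on the OMV version) lives on the root disk
    du -sh /mnt/rootfs/media/* /mnt/rootfs/srv/* 2>/dev/null
    umount /mnt/rootfs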

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Try unmounting each of your storage disks and then look in their mount point directories.


    These look suspicious:

    Code
    /dev/disk/by-uuid/bcb 7.8G 7.8G 0 100% /var/folder2ram/var/tmp
    /dev/disk/by-uuid/bcb 7.8G 7.8G 0 100% /var/folder2ram/var/spool

    I did the Docker one and it didn't show anything different after unmounting that filesystem. I too am beginning to suspect something is up with the flash memory plugin (the mounts that show up under folder2ram).

    I did the Docker one and it didn't show anything different after unmounting that filesystem. I too am beginning to suspect something is up with the flash memory plugin (the mounts that show up under folder2ram).

    Did you unmount /dev/disk/by-uuid/bcb and look there or not?

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Wanted to update this. It appears omv-mkconf backup had made a backup underneath the mount point. Not sure how and why it did that, or why it was allowed to write to the root area instead of the mount point of sde1. Anyway, it is solved now. Will keep an eye on it. Perhaps sde1 got unmounted at some point, and when omv-mkconf backup ran it wrote to the root disk. Bizarre!
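
    For anyone hitting the same thing later, the cleanup boils down to something like this (a sketch; the device path is a placeholder for your own setup, and the drive needs its normal fstab entry for the final remount): unmount the data drive, delete whatever is left sitting in the now-empty mount point directory on the root disk, then remount:

    Code
    umount /srv/dev-disk-by-uuid-XXXX
    # whatever is still visible here is stored on the root disk
    ls -la /srv/dev-disk-by-uuid-XXXX
    rm -rf /srv/dev-disk-by-uuid-XXXX/*   # double-check the path before running this!
    mount /srv/dev-disk-by-uuid-XXXX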

    • Official post

    Perhaps sde1 got unmounted at some point, and when omv-mkconf backup ran it wrote to the root disk.

    That is definitely what happened. The backup plugin itself doesn't do anything that would write to the root disk; with the drive unmounted, the backup simply landed in the empty mount point directory, which lives on the root filesystem.
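
    For anyone running their own backup job outside the plugin, a small guard along these lines (a sketch; the destination path and the backup command are placeholders) makes the job refuse to run when the disk isn't mounted:

    Code
    #!/bin/sh
    DEST=/srv/dev-disk-by-uuid-XXXX   # placeholder: the backup disk's mount point
    # mountpoint -q returns non-zero if DEST is just a plain directory,
    # i.e. the disk is not actually mounted there
    if ! mountpoint -q "$DEST"; then
        echo "Backup destination $DEST is not mounted, aborting." >&2
        exit 1
    fi
    # ... run the real backup command against "$DEST" here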

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I'm having the same issue. Somehow my backup drive gets "lost" / unmounted, and then the backup process (rsnapshot) fills the root drive. How did you resolve this (which files did you delete, or what exactly did you do), and how do you prevent it from happening again?
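
    For the rsnapshot side, it looks like there is a setting for exactly this case: with no_create_root enabled, rsnapshot refuses to run when the snapshot root directory doesn't exist, so an unmounted backup drive should make the job fail instead of filling the root disk. Roughly like this (a sketch; the path is a placeholder, and rsnapshot.conf wants tabs between keyword and value):

    Code
    # /etc/rsnapshot.conf (fields are separated by tabs, not spaces)
    snapshot_root	/srv/dev-disk-by-uuid-XXXX/rsnapshot/
    # do not create snapshot_root if it is missing - fail instead of
    # silently writing into the empty mount point on the root disk
    no_create_root	1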
