rootfs matches resource limit

  • For a couple of weeks now I have been getting this email about rootfs space usage.



    and



    How can I find out where my rootfs space has gone?

    • Official Post

    Use this instead: du -d1 -h -x /var/ | sort -h (the -x keeps du from descending into other filesystems). A quick look at your other output tells me it is mostly in /var though. Probably logs, docker, or plex.

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks.
    I have Plex installed, and I created a symlink for the Logs folder to place the logs on an external USB HDD.
    Could this still affect rootfs space usage despite the symlink?


    • Official Post

    You need to keep drilling down. Now run du -d1 -h -x /var/lib/ | sort -h. It shouldn't count space used through a symlink.


  • Many thanks!

    • Official Post

    Plex is your problem. Not sure how you made your symlink, but it doesn't appear to be working.


  • I used "ln -s" to create the symlink.



    I only did it for the logs folder. Maybe I should also move the media folder to the USB HDD and keep the database on the SD card for quick access (even so, I get a lot of slow-query warning messages).
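    For reference, a relocation like this usually means moving the folder's contents first and then leaving a symlink at the old path. A minimal sketch, demonstrated here on throwaway temp paths (in practice SRC would be the Plex Logs folder and DEST a folder on the USB drive's mount point — both names are placeholders, not the actual paths from this setup):

    ```shell
    #!/usr/bin/env bash
    set -e
    # SRC stands in for the original Plex Logs folder on the SD card;
    # DEST stands in for the destination folder on the USB drive.
    SRC="$(mktemp -d)/Logs"
    DEST="$(mktemp -d)/usb/Logs"

    mkdir -p "$SRC" "$DEST"
    echo "old entry" > "$SRC/plex.log"   # a pretend existing log file

    mv "$SRC"/* "$DEST"/                 # move the existing files to the new drive
    rmdir "$SRC"                         # remove the now-empty original folder
    ln -s "$DEST" "$SRC"                 # leave a symlink at the old path
    ```

    After this, anything writing to the old path lands on the new drive — which is why du on the SD card should no longer count it.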

    • Official Post

    The logs are not the big user of space. The database is. That is why it is filling your drive. An 8GB root drive is not helping you either.


  • OK, in fact the media and metadata folders were the fat ones.
    I moved them to the external HDD using symlinks and now I'm fine.
    Thanks to your hints I could find out what was taking up so much space ;)


    Thanks

  • Sorry to come back with the same problem, but I still have a full mmcblk02p filesystem


    I see root (/) usage of 4.7GB, but OMV says 29?

    • Official Post

    Move the card to another Linux system (perhaps one booted from a thumb drive) and examine the contents there.


    It is likely that there is stuff hidden "behind" mount points. The easiest way to see this is to examine the filesystem when nothing is running on it and nothing is mounted inside it.

  • I mounted the microSD card in a live Linux system using VirtualBox.
    This is the Disk Usage report:



    But GParted says:







    Update:
    I probably found it.




    The /srv folder is full of files, 25GB...
    Now I have to understand why. I have a USB drive plugged into the router, and each night I make an rsync backup to it.
    I'm using a /srv/xxxx path as the rsync destination, and if the drive isn't available due to a router issue, rsync writes to the SD card instead :(
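    One way to prevent this is to check that the destination is a real mounted filesystem before letting rsync run, so a missing router drive can no longer fill the SD card. A sketch of such a guard — the paths in the example call are placeholders, not the actual ones from this setup:

    ```shell
    #!/usr/bin/env bash
    # Only rsync when the destination is a real mount point; otherwise rsync
    # would silently write into the empty directory on the SD card.
    safe_backup() {
        local src="$1" dest="$2"
        if mountpoint -q "$dest"; then      # true only if dest is a mounted filesystem
            rsync -a "$src"/ "$dest"/
        else
            echo "not mounted: $dest - skipping backup" >&2
            return 1
        fi
    }

    # Example nightly cron command (placeholder paths):
    # safe_backup /srv/data /srv/remote-backup
    ```

    `mountpoint -q` comes with util-linux and exits non-zero for a plain directory, so an unmounted destination simply skips the backup instead of filling the root filesystem.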

    • Official Post

    Weird.


    Make sure to look as the root user. It is possible that the user you are using is not allowed to see the junk files.


    I would suspect the /srv tree and the /var/lib tree to be the most likely places to find junk.


    Edit: Yes! You found it!


    Sometimes drives don't spin up and mount as they are supposed to do. And then this can happen. Is it a USB drive?


    I cheat.


    I have my rsync scripts stored on the actual backup destination drive, next to the backup destination folder. And then I have a scheduled cron job run the script. Actually the cron job runs bash with the script as a parameter; this avoids noexec problems.


    This way backups can ONLY run if the backup drive is spun up and correctly mounted. So no danger of trying to save a backup to a drive that is not mounted.


    Also I like to have the backup script next to the backup folder. That makes checking and changing backups easy.


    Also, I use autofs. That will automatically mount a network drive when it is accessed, if it isn't already mounted.
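    As a rough illustration of that autofs setup — hostnames, share names, and mount paths below are made-up examples, not taken from this thread:

    ```
    # /etc/auto.master - register an indirect map for on-demand mounts
    /mnt/auto  /etc/auto.backup  --timeout=300

    # /etc/auto.backup - mount the router's NFS share on first access,
    # so /mnt/auto/backup appears only when something touches it
    backup  -fstype=nfs  router.lan:/shares/usb
    ```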


    I typically execute the backup script on the destination NAS. That also protects against the NAS being shut down. My backups are mostly rsync pulls.

  • I removed the backup folder that shouldn't have been on the SD card.
    I checked the rsync script, but I'm not directly addressing anything in /srv/
    I have to check the remote shared folder settings.

    • Official Post

    Store the backup script on the destination server, i.e. the USB drive connected to the router.


    Run the backup script on the source server using a scheduled job or cron.


    Run it by launching bash with the full path to the script as a parameter. Then the script can ONLY run if the destination is available.

    • Official Post

    Here is a line from /etc/crontab on nas1.



    2 3 * * * adoby cd /sharedfolders/nas1/snapshots && /bin/bash all.sh


    So I don't actually specify the path to the script. Instead I cd into the folder where it is.


    And here is all.sh



    Code
    #!/usr/bin/env bash
    for f in *_snapshot.sh; do
        bash "$f"
    done

    Then I also have a couple of my *_snapshot.sh scripts there.


    https://github.com/WikiBox/snapshot.sh

  • I have now changed the command to run a script stored on the router HDD via its full path (so if the router isn't connected, nothing executes).


    Do you see any advantages in your approach (cd into the folder and then run the script)?
