Systemdisk/OS disk almost full

    • OMV 4.x
    • Resolved
    • Systemdisk/OS disk almost full

      Hi.
      I have now spent two days searching for an answer in this and other forums, but have not managed to find a solution. Now I am desperate. :S

      I have a system with a 120 GB SSD drive where OMV has been allocated 40 GB of space (/dev/sde1).
      The system also contains 6 drives in a SnapRAID configuration.

      After installing Docker and setting up a few containers, I got a notification a few days ago that the OS disk was getting full (89%).
      To me it seems that one or a few of the containers filled up the OS drive.
      Also, it seems I might have set up one or more folders erroneously, not pointing to the SnapRAID pool of disks, which might have contributed to data getting stored on the OS drive.
      I decided to delete the containers, but this did not solve the problem. I then deleted the shared folders that might have been set up erroneously, with no change in disk space.
      I followed up by uninstalling Docker. Still no solution. I ran rm -r on the suspected folder; the files were removed, but still no solution. Restarting OMV did not help either.

      So, I am stuck. Running df and du has not helped me figure out which folder/file is still taking up space on my OS drive after the previous efforts.

      My conclusion: I suspect that the containers did take up space, and that after deleting the containers and files/folders, the space on the drive is still occupied but not visible to the system (or me). Is this at all possible?

      I need help, please. The alternative is a full re-installation of OMV, which is not my preferred solution.

      Here follows the output of the df and du commands:


      Source Code

      root@OMV:~# df -h
      Filesystem Size Used Avail Use% Mounted on
      udev 32G 0 32G 0% /dev
      tmpfs 6.3G 9.5M 6.3G 1% /run
      /dev/sde1 47G 40G 5.0G 89% /
      tmpfs 32G 0 32G 0% /dev/shm
      tmpfs 5.0M 0 5.0M 0% /run/lock
      tmpfs 32G 0 32G 0% /sys/fs/cgroup
      tmpfs 32G 0 32G 0% /tmp
      1:2:3:4:5:6 20T 11T 9.6T 52% /srv/e9dd6d01-bdd8-4814-a132-e8ee274fadba
      /dev/sdf1 3.6T 3.1T 498G 87% /srv/dev-disk-by-label-Disk2
      /dev/sdc1 3.6T 2.1T 1.6T 58% /srv/dev-disk-by-label-Disk3
      /dev/sdg1 3.6T 708G 2.9T 20% /srv/dev-disk-by-label-Disk6
      /dev/sdb1 2.7T 610G 2.1T 23% /srv/dev-disk-by-label-Disk4
      /dev/sda1 3.6T 3.2T 489G 87% /srv/dev-disk-by-label-Disk1
      /dev/sdd1 2.7T 609G 2.1T 23% /srv/dev-disk-by-label-Disk5
      folder2ram 32G 30M 32G 1% /var/log
      folder2ram 32G 0 32G 0% /var/tmp
      folder2ram 32G 2.9M 32G 1% /var/lib/openmediavault/rrd
      folder2ram 32G 16K 32G 1% /var/spool
      folder2ram 32G 67M 32G 1% /var/lib/rrdcached
      folder2ram 32G 12K 32G 1% /var/lib/monit
      folder2ram 32G 4.0K 32G 1% /var/lib/php
      folder2ram 32G 0 32G 0% /var/lib/netatalk/CNID
      folder2ram 32G 420K 32G 1% /var/cache/samba



      Source Code

      root@OMV:~# du -h --max-depth=1 --exclude="*/proc/*" /
      0 /proc
      9.5M /run
      385M /var
      0 /sys
      0 /tmp
      4.0K /lib64
      4.0K /home
      8.0K /media
      644M /usr
      11T /sharedfolders
      7.0M /etc
      4.0K /mnt
      44K /root
      13M /bin
      0 /dev
      4.0K /opt
      4.0K /export
      15M /sbin
      21T /srv
      713M /lib
      16K /lost+found
      86M /boot
      31T /


      Or this alternative du ...

      Source Code

      root@OMV:/# du -hxd1
      295M ./var
      4.0K ./lib64
      4.0K ./home
      8.0K ./media
      644M ./usr
      4.0K ./sharedfolders
      7.0M ./etc
      4.0K ./mnt
      44K ./root
      13M ./bin
      4.0K ./opt
      4.0K ./export
      15M ./sbin
      12K ./srv
      713M ./lib
      16K ./lost+found
      86M ./boot
      1.8G .
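      The mismatch between what df reports as used and what du can actually see can be measured directly. A quick sketch of that comparison (run as root; a large unaccounted remainder points at data hidden under a mount point or held open by a process after deletion):

```shell
# Compare what the filesystem reports as used (df) against what the visible
# files add up to (du, staying on the root filesystem with -x).
used_df=$(df -B1 --output=used / | tail -n 1 | tr -d ' ')
used_du=$(du -sxB1 / 2>/dev/null | cut -f1)
echo "df reports:  $used_df bytes"
echo "du finds:    $used_du bytes"
echo "unaccounted: $((used_df - used_du)) bytes"
```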
    • See what you can find under /var/lib/docker The sub-dir /containers might have something interesting.
      Unfortunately, Docker uses a merged directory structure to create containers, not unlike UnionFS. Basically, the components of a container consist of overlays that are compiled from more than one folder on the boot drive.

      The following is only speculation:
      What would be more interesting, and much more likely to contain a lot of data, would be the host-side paths of your deleted containers' volumes and bind mounts. If a path to a data drive was not specifically set, and you used a path that might have been part of a container's default setup, the destination was probably on your boot drive. Depending on what that container is/was (a downloader?), a lot of data might have ended up on your boot drive.

      Another Possibility:
      If the metadata location is not specifically changed, media servers like Plex are known for automatically collecting and dropping several gigabytes of metadata for media files on the boot drive.

      As you've noted, Linux has hidden folders and files. While I don't think this will be productive, the following will list all hidden folders.

      find / -type d -iname ".*" -ls
      The following is for all hidden files.
      find / -type f -iname ".*" -ls


      *If you don't have it installed already, WinSCP might make it easier to visualize and navigate your boot drive.* It installs on a Windows client and connects to OMV by SSH.
      ____________________________________________

      A potential patch, if you don't want to rebuild right now:
      Since you have a 120 GB SSD with only 40 GB of it partitioned, GParted can expand OMV's root partition.

      The next time around, give some consideration to backing up your boot drive to an image file once in a while. As you might imagine, it's nice to have a punt option if something goes wrong.

      Video Guides :!: New User Guide :!: Docker Guides :!: Pi-hole in Docker
      Good backup takes the "drama" out of computing.
      ____________________________________
      Primary: OMV 3.0.99, ThinkServer TS140, 12GB ECC, 32GB USB boot, 4TB+4TB zmirror, 3TB client backup.
      Backup: OMV 4.1.9, Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB zmirror, 4TB Rsync'ed disk
    • flmaxey wrote:

      See what you can find under /var/lib/docker The sub-dir /containers might have something interesting.
      I have uninstalled Docker, so there is no docker folder.


      flmaxey wrote:

      As you've noted, Linux has hidden folders and files. While I don't think this will be productive, the following will list all hidden folders.
      find / -type d -iname ".*" -ls
      The following is for all hidden files.
      find / -type f -iname ".*" -ls

      After running du -hxd1, which from my understanding includes hidden files, I still cannot identify files or folders occupying 39 GB of data. See my first post.

      Here is the output of the suggested find command:


      Source Code

      root@OMV:/# find / -type d -iname ".*" -ls
      find: ‘/proc/26978/task/26978/net’: Invalid argument
      find: ‘/proc/26978/net’: Invalid argument
      find: ‘/proc/26979/task/26979/net’: Invalid argument
      find: ‘/proc/26979/net’: Invalid argument
      find: ‘/proc/26980/task/26980/net’: Invalid argument
      find: ‘/proc/26980/net’: Invalid argument
      find: ‘/proc/26981/task/26981/net’: Invalid argument
      find: ‘/proc/26981/net’: Invalid argument
      find: ‘/proc/26982/task/26982/net’: Invalid argument
      find: ‘/proc/26982/net’: Invalid argument
      find: ‘/proc/26983/task/26983/net’: Invalid argument
      find: ‘/proc/26983/net’: Invalid argument
      find: ‘/proc/26984/task/26984/net’: Invalid argument
      find: ‘/proc/26984/net’: Invalid argument
      2883654 4 drwx------ 2 shellinabox shellinabox 4096 Jul 28 09:46 /var/lib/shellinabox/.ssh
      23761 0 drwxrwxrwt 2 root root 40 Oct 17 16:44 /tmp/.Test-unix
      23760 0 drwxrwxrwt 2 root root 40 Oct 17 16:44 /tmp/.font-unix
      23759 0 drwxrwxrwt 2 root root 40 Oct 17 16:44 /tmp/.XIM-unix
      23758 0 drwxrwxrwt 2 root root 40 Oct 17 16:44 /tmp/.ICE-unix
      23757 0 drwxrwxrwt 2 root root 40 Oct 17 16:44 /tmp/.X11-unix
      9212828131330 4 drwxrwxrw- 4 jim users 4096 Oct 13 18:56 /sharedfolders/media/.filebot
      9212828131329 4 drwxrwxrw- 2 jim users 4096 Oct 13 18:56 /sharedfolders/media/.oracle_jre_usage
      262153 4 drwxr-xr-x 2 root root 4096 Aug 12 08:00 /root/.nano
      262148 4 drwx------ 2 root root 4096 Jul 28 08:17 /root/.ssh
      123281410 4 drwxrwxrw- 4 jim users 4096 Oct 13 18:56 /srv/dev-disk-by-label-Disk6/media/.filebot
      123281409 4 drwxrwxrw- 2 jim users 4096 Oct 13 18:56 /srv/dev-disk-by-label-Disk6/media/.oracle_jre_usage
      9212828131330 4 drwxrwxrw- 4 jim users 4096 Oct 13 18:56 /srv/e9dd6d01-bdd8-4814-a132-e8ee274fadba/media/.filebot
      9212828131329 4 drwxrwxrw- 2 jim users 4096 Oct 13 18:56 /srv/e9dd6d01-bdd8-4814-a132-e8ee274fadba/media/.oracle_jre_usage
    • I still have not managed to identify the files that are filling up my OS-disk.

      As a result I have had to boot to GParted and adjust my partitions twice now. If this continues I will run out of space on the disk.

      Does anyone have an idea of what I can do next? Would it be advisable to address this issue on the Debian forum, since OMV is based on Debian?
      You can install ncdu; it will show you a nice list of folders and the space they use (including hidden folders):
      apt install ncdu

      Then start it with:

      ncdu -x

      The -x prevents ncdu from descending into other filesystems, so your data drives will not be included.

      Quit ncdu with q.
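      If you prefer a plain listing over the interactive view, du can give a rough equivalent of what ncdu -x shows (just an alternative, nothing ncdu itself requires):

```shell
# Non-interactive rough equivalent of `ncdu -x`: sizes of top-level folders
# on the root filesystem only (-x stops du crossing into other filesystems),
# sorted smallest to largest.
du -xh --max-depth=1 / 2>/dev/null | sort -h
```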
      Odroid HC2 - armbian - Seagate ST4000DM004 - OMV4.x
      Asrock Q1900DC-ITX - 16GB - 2x Seagate ST3000VN000 - Intenso SSD 120GB - OMV4.x
      :!: Backup - Solutions to common problems - OMV setup videos - OMV4 Documentation - user guide :!:
      If an rsync job runs while the target drive is not available, rsync creates a folder under /media/ or /srv/ and puts the data there. The data is then stored on the OS drive instead of the data drive.

      So check those two folders for subfolders that are not mount points for drives.
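      A quick way to run that check from the shell (a sketch; it relies on the mountpoint utility from util-linux, which Debian ships by default):

```shell
# Flag entries under /srv and /media that are NOT mount points. Anything
# listed here lives on the root filesystem and may be holding stray data.
for d in /srv/* /media/*; do
  [ -d "$d" ] || continue
  mountpoint -q "$d" || echo "not a mount point: $d"
done
```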
      By chance, do you have Plex or Emby installed? If you do, and you have a lot of media files, the folder where these managers store metadata can grow to enormous sizes. The more media files added (or as these media managers search out media-file-related data), the larger the metadata folder grows.

      I also had a problem, early on, with UrBackup. UrBackup was trying and failing to back up a UEFI client that had a problem. That resulted in a growing collection of temp files. Once the backup was successful, UrBackup erased the temp files.

    • Source Code

      root@OMV:/# df -h /dev/sdg1
      Filesystem Size Used Avail Use% Mounted on
      /dev/sdg1 91G 49G 38G 57% /

      ncdu -x

      Source Code

      5.4 GiB [##########] /home
      3.6 GiB [###### ] /var
      949.5 MiB [# ] /lib
      935.1 MiB [# ] /usr
      130.7 MiB [ ] /boot
      14.0 MiB [ ] /sbin
      12.6 MiB [ ] /bin
      7.3 MiB [ ] /etc
      40.0 KiB [ ] /root
      e 16.0 KiB [ ] /lost+found
      12.0 KiB [ ] /srv
      8.0 KiB [ ] /media
      4.0 KiB [ ] /lib64
      4.0 KiB [ ] /sharedfolders
      e 4.0 KiB [ ] /opt
      e 4.0 KiB [ ] /mnt
      e 4.0 KiB [ ] /export
      @ 0.0 B [ ] initrd.img.old
      @ 0.0 B [ ] initrd.img
      @ 0.0 B [ ] vmlinuz.old
      @ 0.0 B [ ] vmlinuz
      > 0.0 B [ ] /tmp
      > 0.0 B [ ] /sys
      > 0.0 B [ ] /run
      > 0.0 B [ ] /proc
      > 0.0 B [ ] /dev

      So, using ncdu, what is making up 49GB on my drive?
    • macom wrote:

      If an rsync job runs while the target drive is not available, rsync creates a folder under /media/ or /srv/ and puts the data there. The data is then stored on the OS drive instead of the data drive.

      So check those two folders for subfolders that are not mount points for drives.
      The more likely scenario is something writing to a mount point with no drive mounted to that mountpoint. When the drive is eventually mounted, the contents of the drive will appear in the mount point directory and what was previously written in the directory will not be visible but is still present and taking up space on the rootfs.

      So, it is best to check by first unmounting the drive, then looking in the mount point directory. It should be empty.

      Reply being delayed because it is wrongly being held for moderation.
      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
    • gderf wrote:

      The more likely scenario is something writing to a mount point with no drive mounted to that mountpoint. When the drive is eventually mounted, the contents of the drive will appear in the mount point directory and what was previously written in the directory will not be visible but is still present and taking up space on the rootfs.
      So, it is best to check by first unmounting the drive, then looking in the mount point directory. It should be empty.

      Reply being delayed because it is wrongly being held for moderation.
      This seems like a potential cause.
      I did at some point see my Docker container Sabnzbd storing data on my OS disk. This was caused by the data path being set erroneously in the Docker container (to a mount point/folder in my MergerFS pool). I corrected the path, but it did not make a difference. I uninstalled Docker and all the containers, but that did not make a difference either. I thought this was not the cause; maybe it was.
      Could it be that the data stored by the containers was not deleted from the disk, and is therefore taking up space even though it cannot be identified by the du or ncdu commands?

      If data from "ghost" containers still takes up space on the rootfs, how can it be removed?
    • I might be being dumb in my observation here, but from the first post:

      This /dev/sde1 47G 40G 5.0G 89% / I assume is the SSD in question.

      This /dev/sdg1 3.6T 708G 2.9T 20% /srv/dev-disk-by-label-Disk6 is part of the MergerFS pool.

      If the above is correct, what is this:
      root@OMV:/# df -h /dev/sdg1
      Filesystem Size Used Avail Use% Mounted on
      /dev/sdg1 91G 49G 38G 57% /
      Or am I missing something? I understand what I see (unless I need new glasses), but I can't understand/decipher it.
      Raid is not a backup! Would you go skydiving without a parachute?
    • Typically, Docker containers write data to two places. One possibility is a volume or bind mount that connects a directory inside the container to a directory outside the container, somewhere on the system. That "somewhere" could be anywhere the container has permission to write, including the rootfs. The other possibility is that the data is written to a location inside the container. This is what happens if a volume or bind mount is not specified and the container needs to write data. If the container is located on the system drive, it could fill it up.

      Data written to locations inside a container will be deleted when the container is deleted. But if the data was aimed at a volume or bind mount outside the container, it will not be deleted when the container itself is deleted.

      Data written onto the rootfs can be deleted. The problem is finding it.
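      For anyone who still has Docker installed when they hit this, the host-side mounts can be listed before deleting a container (a sketch; /var/lib/docker is Docker's default data root on Debian):

```shell
# Show where each container's volumes and bind mounts point on the host.
docker ps -aq 2>/dev/null | while read -r id; do
  docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} -> {{.Destination}}  {{end}}' "$id"
done
# And how much space Docker's data root is using overall.
du -sh /var/lib/docker 2>/dev/null || echo "/var/lib/docker not present"
```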
    • Source Code

      root@OMV:/# lsof +L1
      COMMAND PID USER FD TYPE DEVICE SIZE/OFF NLINK NODE NAME
      php-fpm7. 17624 root 3u REG 0,40 0 0 918917 /tmp/.ZendSem.mUQiM7 (deleted)
      php-fpm7. 17625 www-data 3u REG 0,40 0 0 918917 /tmp/.ZendSem.mUQiM7 (deleted)
      php-fpm7. 17626 www-data 3u REG 0,40 0 0 918917 /tmp/.ZendSem.mUQiM7 (deleted)
      Found some deleted files still being kept open. These might be a problem. Not sure how I will tackle this further, though. I have tried to perform kill -9 PID (e.g. kill -9 17624), but this did not help.
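      To understand the effect, I reproduced it in isolation: the space of an unlinked file is only released when the last open file descriptor to it is closed, which is why killing a single worker that gets respawned does not free anything (a sketch using a throwaway file in /tmp):

```shell
# A deleted file keeps its space allocated while any process holds it open.
exec 3> /tmp/held.bin              # open a file on descriptor 3
head -c 1048576 /dev/zero >&3      # write 1 MiB into it
rm /tmp/held.bin                   # unlink: the name is gone...
ls -l "/proc/$$/fd/3"              # ...but this shell still holds the inode
exec 3>&-                          # closing the descriptor finally frees it
```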
    • geaves wrote:

      I might be being dumb in my observation here, but from the first post:

      This /dev/sde1 47G 40G 5.0G 89% / I assume is the SSD in question.

      This /dev/sdg1 3.6T 708G 2.9T 20% /srv/dev-disk-by-label-Disk6 is part of the MergerFS pool.

      If the above is correct, what is this:
      root@OMV:/# df -h /dev/sdg1
      Filesystem Size Used Avail Use% Mounted on
      /dev/sdg1 91G 49G 38G 57% /
      Or am I missing something? I understand what I see (unless I need new glasses), but I can't understand/decipher it.
      I have added two more disks and the disk "names" have changed from the first post.

      sdg1 is now the SSD. I understand that was confusing.
    • I have managed to solve the problem. It seems that some hidden files and folders were present. I posted the problem on the Debian forum and got an answer. I wanted to post the solution here so that other users can troubleshoot the same problem using the same approach.

      You need to bind mount the root (/) to another mount point. First create the folder /mnt:

      Source Code

      mkdir /mnt


      Then bind mount the root to /mnt:

      Source Code

      mount --bind / /mnt
      Then perform:

      Source Code

      du -h --max-depth=1 /mnt

      I then found that there was another folder called sharedfolders with a lot of files and folders, only visible after doing the bind mount.
      After removing them at the new mount point, I performed:

      Source Code

      umount /mnt

      The files and folders are now gone, and the space on the disk is back!

      Thanks to everyone who provided help and suggestions!
    • All's well that ends well, but one has to wonder: what happened such that "sharedfolders" became hidden, and what was adding files to it?

      Was there anything on the Debian forum that might explain or suggest how it happened?
      (Just curious.)
