No, I did not get any explanation - and I am just as curious. My hypothesis is that my Docker container was given a wrong path, so it created a path on / instead of on the mount point in my MergerFS pool - this filled up my drive. When I deleted the files/folders, removed the containers/Docker images and uninstalled Docker using the plugin, something was still being held open by a service and the space allocated on the disk remained occupied ...
lsof did not help me, so if there was a service keeping this "shadow" folder active, I could not find it. Weird.
Posts by SonOfThor
-
I have managed to solve the problem. It seems that some hidden files and folders were present. I posted the problem on the Debian forum and got an answer. I wanted to post the solution here so that other users can troubleshoot the same problem using the same approach.
You need to bind-mount the root (/) to another mount point. First create the folder mnt:
Then mount bind the root to /mnt
Then perform:
I then found that there was another folder called sharedfolders with a lot of files and folders, visible only via the bind mount.
After removing them on the new mount point, I performed:
Files and folders are now gone and the space on the disk is back! Thanks to everyone who provided help and suggestions!
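For reference, the procedure described above can be sketched as follows (run as root; the rm path is just an example from this thread - adjust it to whatever the bind mount reveals):

```shell
# Make sure the temporary mount point exists
mkdir -p /mnt

# Bind-mount the root filesystem to /mnt. The bind mount shows the root
# filesystem as it is on disk, including files hidden underneath other
# mounts (e.g. a stray /sharedfolders full of data)
mount --bind / /mnt

# Look for unexpected space usage on the bind mount
du -hxd1 /mnt

# Remove the hidden data (example path; verify before deleting!)
rm -rf /mnt/sharedfolders

# Detach the bind mount again
umount /mnt
```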
-
I might be being dumb in my observation here but from the first post:
This /dev/sde1 47G 40G 5.0G 89% / I assume is the SSD in question
This /dev/sdg1 3.6T 708G 2.9T 20% /srv/dev-disk-by-label-Disk6 is part of MergerFS
If the above is correct what is this:
Code
root@OMV:/# df -h /dev/sdg1
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdg1        91G   49G   38G  57% /
or am I missing something, I understand what I see (unless I need new glasses) but I can't understand/decipher it.
I have added two more disks and the disk "names" have changed from the first post.
sdg1 is now the SSD. I understand that was confusing.
-
Code
root@OMV:/# lsof +L1
COMMAND     PID     USER   FD TYPE DEVICE SIZE/OFF NLINK   NODE NAME
php-fpm7. 17624     root   3u  REG   0,40        0     0 918917 /tmp/.ZendSem.mUQiM7 (deleted)
php-fpm7. 17625 www-data   3u  REG   0,40        0     0 918917 /tmp/.ZendSem.mUQiM7 (deleted)
php-fpm7. 17626 www-data   3u  REG   0,40        0     0 918917 /tmp/.ZendSem.mUQiM7 (deleted)
Found some deleted files still being kept "open". These might be a problem. Not sure how I will tackle this further, though. I have tried kill -9 PID (e.g. kill -9 17624), but this did not help.
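Since kill -9 on a single worker often just makes the service master spawn a replacement, restarting the owning service is the usual way to release deleted-but-open files. A sketch (the unit name php7.0-fpm is an assumption based on the truncated process name "php-fpm7." in the lsof output above):

```shell
# Show open files whose on-disk link count is zero: deleted, but still
# held open by a process, so their space is not freed
lsof +L1

# Restart the owning service so all its file handles are closed and the
# space is released. The unit name is an assumption; verify it with:
#   systemctl list-units 'php*'
systemctl restart php7.0-fpm
```

A reboot also releases every deleted-but-open file, if restarting individual services is not enough.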
-
The more likely scenario is something writing to a mount point with no drive mounted at that mount point. When the drive is eventually mounted, the contents of the drive appear in the mount point directory, and what was previously written there is no longer visible but is still present and taking up space on the rootfs.
So, it is best to check by first unmounting the drive, then looking in the mount point directory. It should be empty.
(Reply delayed because it was wrongly held for moderation.)
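That check can be sketched like this (device and mount point taken from the df output earlier in this thread):

```shell
# Unmount the data drive; this will refuse if files on it are in use
umount /srv/dev-disk-by-label-Disk6

# With the drive unmounted, the mount point directory should be empty.
# Anything listed here lives on the root filesystem underneath the mount
ls -la /srv/dev-disk-by-label-Disk6

# Remount everything listed in /etc/fstab when done
mount -a
```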
This seems like a potential cause.
I did at some point see my Docker container Sabnzbd store data on my OS disk. This was caused by the data path in the Docker container being set erroneously (it should have pointed to a mount point/folder in my MergerFS pool). I corrected the path, but it did not make a difference. I uninstalled Docker and all the containers, but that did not make a difference either. I thought this was not the cause - maybe it was.
Could it be that the data stored by the containers was not deleted from the disk and is therefore taking up space, even though you cannot identify it with du or ncdu? If data from "ghost" containers still takes up space on the rootfs, how can it be removed?
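A sketch of how leftover Docker data can be found and removed (paths are the Docker defaults; if Docker has already been uninstalled, only the directory check applies):

```shell
# While Docker is still installed: reclaim space from stopped containers,
# unused images, networks and volumes
docker system prune -a --volumes

# After uninstalling: image and container layers may remain on disk under
# the default data directory. Check its size first
du -sh /var/lib/docker

# Remove the leftovers once you are sure nothing needs them
rm -rf /var/lib/docker
```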
-
Code
root@OMV:/# df -h /dev/sdg1
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdg1        91G   49G   38G  57% /
ncdu -x
Code
  5.4 GiB [##########] /home
  3.6 GiB [######    ] /var
949.5 MiB [#         ] /lib
935.1 MiB [#         ] /usr
130.7 MiB [          ] /boot
 14.0 MiB [          ] /sbin
 12.6 MiB [          ] /bin
  7.3 MiB [          ] /etc
 40.0 KiB [          ] /root
e 16.0 KiB [          ] /lost+found
 12.0 KiB [          ] /srv
  8.0 KiB [          ] /media
  4.0 KiB [          ] /lib64
  4.0 KiB [          ] /sharedfolders
e  4.0 KiB [          ] /opt
e  4.0 KiB [          ] /mnt
e  4.0 KiB [          ] /export
@  0.0   B [          ]  initrd.img.old
@  0.0   B [          ]  initrd.img
@  0.0   B [          ]  vmlinuz.old
@  0.0   B [          ]  vmlinuz
>  0.0   B [          ] /tmp
>  0.0   B [          ] /sys
>  0.0   B [          ] /run
>  0.0   B [          ] /proc
>  0.0   B [          ] /dev
So, using ncdu, what is making up 49 GB on my drive?
-
I still have not managed to identify the files that are filling up my OS-disk.
As a result I have had to boot to GParted and adjust my partitions twice now. If this continues I will run out of space on the disk.
Does anyone have an idea of what I can do next? Would it be advisable to address this issue on the Debian forum, since OMV is based on Debian?
-
See what you can find under /var/lib/docker. The sub-dir /containers might have something interesting.
I have un-installed docker, so there is no docker folder.
As you've noted, Linux has hidden folders and files. While I don't think this will be productive, the following will list all hidden folders.
find / -type d -iname ".*" -ls
The following is for all hidden files.
find / -type f -iname ".*" -ls
After running du -hxd1, which from my understanding includes hidden files, I still cannot identify the files or folders occupying 39 GB of data. See my first post. Here is the output of the suggested find command:
Code
root@OMV:/# find / -type d -iname ".*" -ls
find: ‘/proc/26978/task/26978/net’: Invalid argument
find: ‘/proc/26978/net’: Invalid argument
find: ‘/proc/26979/task/26979/net’: Invalid argument
find: ‘/proc/26979/net’: Invalid argument
find: ‘/proc/26980/task/26980/net’: Invalid argument
find: ‘/proc/26980/net’: Invalid argument
find: ‘/proc/26981/task/26981/net’: Invalid argument
find: ‘/proc/26981/net’: Invalid argument
find: ‘/proc/26982/task/26982/net’: Invalid argument
find: ‘/proc/26982/net’: Invalid argument
find: ‘/proc/26983/task/26983/net’: Invalid argument
find: ‘/proc/26983/net’: Invalid argument
find: ‘/proc/26984/task/26984/net’: Invalid argument
find: ‘/proc/26984/net’: Invalid argument
      2883654    4 drwx------   2 shellinabox shellinabox 4096 Jul 28 09:46 /var/lib/shellinabox/.ssh
        23761    0 drwxrwxrwt   2 root        root          40 Oct 17 16:44 /tmp/.Test-unix
        23760    0 drwxrwxrwt   2 root        root          40 Oct 17 16:44 /tmp/.font-unix
        23759    0 drwxrwxrwt   2 root        root          40 Oct 17 16:44 /tmp/.XIM-unix
        23758    0 drwxrwxrwt   2 root        root          40 Oct 17 16:44 /tmp/.ICE-unix
        23757    0 drwxrwxrwt   2 root        root          40 Oct 17 16:44 /tmp/.X11-unix
9212828131330    4 drwxrwxrw-   4 jim         users       4096 Oct 13 18:56 /sharedfolders/media/.filebot
9212828131329    4 drwxrwxrw-   2 jim         users       4096 Oct 13 18:56 /sharedfolders/media/.oracle_jre_usage
       262153    4 drwxr-xr-x   2 root        root        4096 Aug 12 08:00 /root/.nano
       262148    4 drwx------   2 root        root        4096 Jul 28 08:17 /root/.ssh
    123281410    4 drwxrwxrw-   4 jim         users       4096 Oct 13 18:56 /srv/dev-disk-by-label-Disk6/media/.filebot
    123281409    4 drwxrwxrw-   2 jim         users       4096 Oct 13 18:56 /srv/dev-disk-by-label-Disk6/media/.oracle_jre_usage
9212828131330    4 drwxrwxrw-   4 jim         users       4096 Oct 13 18:56 /srv/e9dd6d01-bdd8-4814-a132-e8ee274fadba/media/.filebot
9212828131329    4 drwxrwxrw-   2 jim         users       4096 Oct 13 18:56 /srv/e9dd6d01-bdd8-4814-a132-e8ee274fadba/media/.oracle_jre_usage
-
Hi.
I have now spent two days searching for an answer in this and other forums, but have not managed to find a solution. Now I am desperate. I have a system with a 120 GB SSD drive where OMV has allocated 40 GB of space (/dev/sde1).
The system also contains six drives in a SnapRAID configuration. After installing Docker and setting up a few containers, I got a notification a few days ago that the OS disk was getting full, 89%.
To me it seems that one or a few of the containers filled up the OS drive.
Also, it seems I might have set up one or more folders erroneously, not pointing to the Snapraid pool of disks, which might have contributed to the problem that data got stored on the OS drive.
I decided to delete the containers, but this did not solve the problem. I deleted the shared folders that might have been set up erroneously with no changes in disk space.
I followed up by uninstalling Docker. Still no solution. I ran rm -r on the suspected folder; the files were removed, but still no solution. Restarting OMV did not help either. So, I am stuck. Running df and du has not helped me figure out which folder/file is taking up space on my OS drive after the previous efforts.
My conclusion: I suspect that the containers somehow did take up space, and after deleting the containers and files/folders, the space on the drive is still occupied but not visible to the system (or me). Is this at all possible?
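One way to make such invisible usage measurable is to compare what the filesystem reports against what is visible as files (a sketch; the awk column index assumes the default lsof output layout, where SIZE/OFF is the seventh column):

```shell
# Usage as reported by the filesystem itself
df -h /

# Usage visible as files on the root filesystem only (-x: one filesystem)
du -shx /

# Space held by files that are deleted but still open somewhere; sums
# column 7 (SIZE/OFF), assuming the default lsof output layout
lsof +L1 | awk 'NR>1 { sum += $7 } END { printf "%.1f MiB in open deleted files\n", sum/1048576 }'
```

If df reports far more used space than du can account for, the difference is usually deleted-but-open files or data hidden underneath a mount point.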
I need help, please. The alternative is a full re-installation of OMV, which is not my preferred solution.
Here follow the outputs of the df and du commands:
Code
root@OMV:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev             32G     0   32G   0% /dev
tmpfs           6.3G  9.5M  6.3G   1% /run
/dev/sde1        47G   40G  5.0G  89% /
tmpfs            32G     0   32G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs            32G     0   32G   0% /sys/fs/cgroup
tmpfs            32G     0   32G   0% /tmp
1:2:3:4:5:6      20T   11T  9.6T  52% /srv/e9dd6d01-bdd8-4814-a132-e8ee274fadba
/dev/sdf1       3.6T  3.1T  498G  87% /srv/dev-disk-by-label-Disk2
/dev/sdc1       3.6T  2.1T  1.6T  58% /srv/dev-disk-by-label-Disk3
/dev/sdg1       3.6T  708G  2.9T  20% /srv/dev-disk-by-label-Disk6
/dev/sdb1       2.7T  610G  2.1T  23% /srv/dev-disk-by-label-Disk4
/dev/sda1       3.6T  3.2T  489G  87% /srv/dev-disk-by-label-Disk1
/dev/sdd1       2.7T  609G  2.1T  23% /srv/dev-disk-by-label-Disk5
folder2ram       32G   30M   32G   1% /var/log
folder2ram       32G     0   32G   0% /var/tmp
folder2ram       32G  2.9M   32G   1% /var/lib/openmediavault/rrd
folder2ram       32G   16K   32G   1% /var/spool
folder2ram       32G   67M   32G   1% /var/lib/rrdcached
folder2ram       32G   12K   32G   1% /var/lib/monit
folder2ram       32G  4.0K   32G   1% /var/lib/php
folder2ram       32G     0   32G   0% /var/lib/netatalk/CNID
folder2ram       32G  420K   32G   1% /var/cache/samba
Code
root@OMV:~# du -h --max-depth=1 --exclude="*/proc/*" /
0       /proc
9.5M    /run
385M    /var
0       /sys
0       /tmp
4.0K    /lib64
4.0K    /home
8.0K    /media
644M    /usr
11T     /sharedfolders
7.0M    /etc
4.0K    /mnt
44K     /root
13M     /bin
0       /dev
4.0K    /opt
4.0K    /export
15M     /sbin
21T     /srv
713M    /lib
16K     /lost+found
86M     /boot
31T     /
Or this alternative du ...