There were some giant json.log files within the Docker containers. Clearing them freed up 10 GB... but I'm still in the same situation.
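In case it helps anyone else hunting these down: Docker's default json-file logging keeps one `*-json.log` per container (by default under `/var/lib/docker/containers/`), and they grow without bound unless rotation is configured. A rough sketch of finding and emptying them — demonstrated on a throwaway directory so it's safe to run as-is; on a real host you'd point it at the Docker path and run as root:

```shell
# Sketch: find and empty oversized Docker json logs. The real path is
# assumed to be the default /var/lib/docker/containers -- adjust to taste.
# Demonstrated here on a throwaway directory standing in for it.
LOGDIR=$(mktemp -d)
mkdir -p "$LOGDIR/abc123"
dd if=/dev/zero of="$LOGDIR/abc123/abc123-json.log" bs=1M count=5 2>/dev/null

# List any json logs over 1M, then truncate them to zero bytes in place:
find "$LOGDIR" -name '*-json.log' -size +1M -exec ls -lh {} \;
find "$LOGDIR" -name '*-json.log' -size +1M -exec truncate -s 0 {} \;

ls -lh "$LOGDIR/abc123/abc123-json.log"  # now 0 bytes
rm -rf "$LOGDIR"
```

Truncating in place (rather than `rm`) matters because the daemon keeps the log open; deleting it would leave the blocks allocated until the container restarts. To stop them regrowing, json-file rotation can be set via `"log-opts": {"max-size": "10m"}` in `/etc/docker/daemon.json`.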
Posts by Goobs
-
My install didn't have ncdu pre-installed, so I grabbed the tar. Still, nothing obvious is hogging 900+ GB. I've pasted the initial ncdu results below. The largest item is media in /srv, which is part of a RAID 5 array; anything in /export is also a symlink to the array. /proc, /sys and /tmp at 0.0 B seem strangely low... those should report something, since there is data within these directories.
ncdu is a great tool though...it'll come in handy in the future.
--- / ------------------------------------------------------------------------------------------------------------------------------------------------------------------
12.1TiB [##########] /srv
23.6GiB [ ] /export
20.3GiB [ ] /var
3.0GiB [ ] /home
1.5GiB [ ] /usr
1.0GiB [ ] /boot
556.3MiB [ ] /lib
256.1MiB [ ] /opt
18.0MiB [ ] /run
15.6MiB [ ] /sbin
9.6MiB [ ] /bin
8.5MiB [ ] /etc
44.0KiB [ ] /root
e 16.0KiB [ ] /lost+found
8.0KiB [ ] /dev
8.0KiB [ ] /media
4.0KiB [ ] /lib64
e 4.0KiB [ ] /mnt
. 0.0 B [ ] /proc
0.0 B [ ] /sys
0.0 B [ ] /tmp
@ 0.0 B [ ] initrd.img.old
@ 0.0 B [ ] initrd.img
@ 0.0 B [ ] vmlinuz.old
@ 0.0 B [ ] vmlinuz

Total disk usage: 12.1TiB  Apparent size: 140.1TiB  Items: 1432822
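Side note on that last line: a big gap between "Total disk usage" (12.1 TiB) and "Apparent size" (140.1 TiB) usually points to sparse files, and the snapraid.parity file further down this thread is a likely candidate. A quick, safe demo of the difference, using nothing but a throwaway file:

```shell
# Demo: a sparse file has a large apparent size but occupies almost no
# disk blocks, which is what ncdu's two totals are distinguishing.
tmp=$(mktemp -d)
truncate -s 1G "$tmp/sparse.img"          # 1 GiB apparent, no data written

du -h "$tmp/sparse.img"                   # actual blocks on disk: ~0
du -h --apparent-size "$tmp/sparse.img"   # apparent size: 1.0G

rm -rf "$tmp"
```

So the 140 TiB number is nothing to worry about on its own; df and the "disk usage" column are what count against the filesystem.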
-
It's been three years on OMV, and this is the first time I've had to create my own thread; Google has failed me.
Running OMV 3.0 on a 1 TB HDD. I'm unable to log in to the web GUI, but SSH is fine. All services are working without any noticeable lag (Plex, Shell In A Box, Radarr, SABnzbd). I'm certain there isn't a huge log, or any large file for that matter, that's the culprit. All in all, I'm not using more than 10-20 GB on the 1 TB HDD, yet the root directory, "/", is at 100% capacity.
du -hx --max-depth=1 --exclude=proc
du: cannot access ‘./proc/10373/fd/4’: No such file or directory
du: cannot access ‘./proc/10373/fdinfo/4’: No such file or directory
du: cannot access ‘./proc/18467’: No such file or directory
13027695972 .
12975465008 ./srv
5604787056 ./srv/dev-disk-by-label-8tbXeonVault
3167284320 ./srv/dev-disk-by-label-8tbXeonVault/MOVIES
2236724512 ./srv/dev-disk-by-label-8tbXeonVault/TELEVISION
1917482908 ./srv/dev-disk-by-label-PARITYdisk1
1917482888 ./srv/dev-disk-by-label-PARITYdisk1/snapraid.parity
1839466948 ./srv/dev-disk-by-label-BUdisk2
1796074224 ./srv/dev-disk-by-label-BUdisk1
1767773268 ./srv/dev-disk-by-label-BUdisk3
1741663196 ./srv/dev-disk-by-label-BUdisk2/BACKUP_TELEVISION
1704735924 ./srv/dev-disk-by-label-BUdisk3/UFS_TEST_BACKUP_MOVIES
1365065088 ./srv/dev-disk-by-label-BUdisk1/UFS_TEST_BACKUP_MOVIES
430690492 ./srv/dev-disk-by-label-BUdisk1/BACKUP_TELEVISION

du -hx --max-depth=1
8.0K ./media
16M ./sbin
9.7M ./bin
557M ./lib
257M ./opt
8.6M ./etc
713M ./var
4.0K ./mnt
44K ./root
998M ./boot
20K ./srv
3.1G ./home
1.5G ./usr
16K ./lost+found
4.0K ./lib64
4.0K ./export
7.0G .
I've seen Ryan mention failed rsync jobs as a possible culprit, but I'm unable to get much further than that. Rsync seems likely to be involved: every time I reboot, I see my backup array spinning up without executing a job.
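For anyone else hitting the same df-says-full-but-du-finds-nothing symptom: one classic cause is a process (a stuck rsync, a log writer) holding a *deleted* file open. du and ncdu can't see it, but the blocks stay allocated until the process closes the descriptor. On a live box, `lsof +L1` (if lsof is installed) lists such files; the snippet below just demonstrates the mechanism with a throwaway file:

```shell
# Demo: a deleted-but-still-open file keeps its disk blocks allocated.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/stuck.log" bs=1M count=10 2>/dev/null

exec 3<"$tmp/stuck.log"   # a "process" (this shell) holds the file open
rm "$tmp/stuck.log"       # unlinked: du/ncdu can no longer see it

du -sh "$tmp"             # reports ~0, but df would still count the 10M
ls -l /proc/$$/fd/3       # the descriptor still shows "stuck.log (deleted)"

exec 3<&-                 # only closing the fd (or killing the process)
rm -rf "$tmp"             # actually releases the space
```

If `lsof +L1` turns up a huge "(deleted)" entry, restarting the owning service frees the space immediately, no reboot needed.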
Any help would be greatly appreciated. Thanks
-
Had the same problem; manually changing the host and port worked great. Thanks.