Unable to log into web GUI after update

  • Just an update: I went through all the files located in /var/lib/docker/overlay2 and there were no files larger than 620M (3.2G), certainly not adding up to anywhere near 100GB. I then stopped the Plex docker, removed it, and ran another docker prune just to make sure that wasn't the problem. I am still in the same position and there is still no space on the 'Boot Drive' even though Plex has been removed from the system. What could possibly be taking up the space on the boot drive?
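
    For anyone retracing the steps above, a rough sketch of the commands involved (standard Docker CLI; the exact prune flags are an assumption, since the post only says "docker prune"):

    docker system df                                # Docker's own summary of image/container/volume usage
    du -sh /var/lib/docker/overlay2/* | sort -h     # per-directory totals under overlay2, largest last
    docker system prune -a                          # remove unused containers, networks and images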

  • The output of sudo du -d1 -x -h / | sort -h


    root@lh-omv-nas:~# sudo du -d1 -x -h / | sort -h
    4.0K /export
    4.0K /home
    4.0K /lib64
    4.0K /mnt
    4.0K /sharedfolders
    8.0K /media
    12K /srv
    16K /lost+found
    16K /opt
    76K /root
    7.3M /etc
    13M /bin
    15M /sbin
    173M /boot
    950M /usr
    1.2G /lib
    1.4G /var
    3.7G /

    • Official Post

    Is the filesystem still full?
    df -h /


    Also, is this btrfs, with the filesystem filled up by subvolumes or snapshots? What is the output of: blkid
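
    If the root filesystem turns out to be btrfs, a minimal way to check for space eaten by subvolumes or snapshots would be the standard btrfs-progs commands (shown only as a sketch; the boot drive later turns out to be ext4):

    btrfs subvolume list /      # list subvolumes and snapshots on /
    btrfs filesystem usage /    # how allocated space splits into data/metadata/system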

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.6 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • The output of df -h /


    root@lh-omv-nas:~# df -h /
    Filesystem Size Used Avail Use% Mounted on
    /dev/sdc1 110G 108G 0 100% /


    The drive appears to have changed to sdc1 as opposed to sdb1 for some reason.
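
    (For what it's worth, device letters are assigned at boot and can shuffle between boots; OMV mounts by label/UUID precisely so this doesn't matter. The persistent names can be checked with:)

    ls -l /dev/disk/by-uuid/ /dev/disk/by-label/    # persistent identifiers per block device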


    The full output of df -h:


    root@lh-omv-nas:~# df -h
    Filesystem Size Used Avail Use% Mounted on
    udev 3.9G 0 3.9G 0% /dev
    tmpfs 794M 9.2M 784M 2% /run
    /dev/sdc1 110G 108G 0 100% /
    tmpfs 3.9G 0 3.9G 0% /dev/shm
    tmpfs 5.0M 0 5.0M 0% /run/lock
    tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
    tmpfs 3.9G 0 3.9G 0% /tmp
    /dev/sdb1 2.8T 12G 2.8T 1% /srv/dev-disk-by-label-Nextcloud
    /dev/sde1 7.3T 234G 7.1T 4% /srv/dev-disk-by-label-Rsnapshot
    /dev/sda1 2.8T 208G 2.6T 8% /srv/dev-disk-by-label-Docker
    //192.168.10.246/RemoteBackup 7.3T 349G 7.0T 5% /srv/2b1385da-c097-4af7-8d5f-83da773d4b29
    overlay 110G 108G 0 100% /var/lib/docker/overlay2/1bb93d795d93790555cade4c27f2b28765673b594ba15e4d5b7786044f450e4a/merged
    overlay 110G 108G 0 100% /var/lib/docker/overlay2/28f5e6285cb6404702a61090a5f75eb3a05798888a2aa5d514e08f36a3a7dc1b/merged
    overlay 110G 108G 0 100% /var/lib/docker/overlay2/dacfcb5422d6922a081bdf59ff3c5807375859a408da7be74762430cb28ee1a9/merged
    overlay 110G 108G 0 100% /var/lib/docker/overlay2/2be799da1cd914fa2265839de5cc636d09462af966573a9d2a233aa5b7d64a84/merged
    overlay 110G 108G 0 100% /var/lib/docker/overlay2/e0595ffea34c748296043761a2424a712c24450d1ed9258967f3ad64445e5947/merged
    overlay 110G 108G 0 100% /var/lib/docker/overlay2/cdec3ba174225596ad44ba8215f81a9919ef7f042b23a5609a42be81bed26ffe/merged
    overlay 110G 108G 0 100% /var/lib/docker/overlay2/33b3d671e34abc938e64383c4007b5cf38c8781811b427a88f14bba0ab83d2d2/merged
    shm 64M 0 64M 0% /var/lib/docker/containers/5a8410a44e40c83d085bb8f81f0f0d86d93c434e3cf048862249c37d74a75103/mounts/shm
    shm 64M 0 64M 0% /var/lib/docker/containers/35f81b915c9e6d75ba42f56510305d34d564b71f3d71200e3f925d5c2d31b2d8/mounts/shm
    shm 64M 0 64M 0% /var/lib/docker/containers/83679895de479f3b7ae17ccc7e413d4572782d0b1be38830c182fb4469a3fd38/mounts/shm
    shm 64M 0 64M 0% /var/lib/docker/containers/071aff54b9075a0af3a6980d10d0e3a0699eb61869341bdac7a5f61713e20ae8/mounts/shm
    shm 64M 0 64M 0% /var/lib/docker/containers/c5b1c11399dcbf41804aa98c685ea3c6ae17cdee314b0c4ca642449bb9be60e7/mounts/shm
    shm 64M 0 64M 0% /var/lib/docker/containers/e69a6ee3537c48c7d721119c7d67e2eee440caaa25db40687f6cdd3f2d042852/mounts/shm
    shm 64M 0 64M 0% /var/lib/docker/containers/f04dfe06d693b0a67796294f791402aec59b3696b72e5ced9027d2c0725d06c6/mounts/shm
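
    (Note that the overlay lines all repeat 110G/108G because an overlayfs mount reports the free space of whatever filesystem backs /var/lib/docker, here the root disk; they are not seven separate 110G consumers. Which device backs them can be confirmed with:)

    findmnt --target /var/lib/docker/overlay2    # resolves the real filesystem behind a path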

    • Official Post

    The drive appears to have changed to sdc1 as opposed to sdb1 for some reason.

    That doesn't cause any problems.


    I didn't need the full output of df, but blkid is important for answering the btrfs question.

  • blkid output:


    root@lh-omv-nas:~# blkid
    /dev/sda1: LABEL="Docker" UUID="e1563c34-77b8-4332-bf39-e64bb930389c" UUID_SUB="1c425899-894d-478d-8f8f-19cf6c7ccb55" TYPE="btrfs" PARTUUID="b8ae89ee-24d2-4c1a-8258-04bc0317d137"
    /dev/sdb1: LABEL="Nextcloud" UUID="579d4d87-6c71-47bd-96f5-4241e4ab5d74" UUID_SUB="951f9b65-18f1-41b3-9f0d-4eb0a3e71342" TYPE="btrfs" PARTUUID="b3cc046c-eda0-4b02-a440-dc3467d36170"
    /dev/sde1: LABEL="Rsnapshot" UUID="a1fab608-3891-44e4-9437-74c7de457d17" UUID_SUB="2354a522-96b4-4269-b9a4-595534fe85f8" TYPE="btrfs" PARTUUID="8e5200ec-1ad7-429a-ae66-e1c13c6a2620"
    /dev/sdc1: UUID="6cab88f3-a43d-4489-b0ca-30d48f16e623" TYPE="ext4" PARTUUID="dee338c9-01"
    /dev/sdc5: UUID="fb892647-b833-40b0-92af-d601a036bf2c" TYPE="swap" PARTUUID="dee338c9-05"


    edit: btrfs is only on the shared drives, not on the boot drive.

    • Official Post

    Have you rebooted? It almost seems like a large file (or files) was deleted but is still held open by a process, so the space isn't being freed.
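
    A quick way to check for that situation (lsof may need to be installed first):

    lsof +L1                                         # open files whose link count is 0, i.e. deleted
    find /proc/*/fd -ls 2>/dev/null | grep deleted   # same idea via /proc, if lsof isn't available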

    • Official Post

    What type of media is the OS installed on?

    • Official Post

    What is the output of:


    tune2fs -l /dev/sdc1

  • The output of tune2fs -l /dev/sdc1


    root@lh-omv-nas:~# tune2fs -l /dev/sdc1
    tune2fs 1.44.5 (15-Dec-2018)
    Filesystem volume name: <none>
    Last mounted on: /
    Filesystem UUID: 6cab88f3-a43d-4489-b0ca-30d48f16e623
    Filesystem magic number: 0xEF53
    Filesystem revision #: 1 (dynamic)
    Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
    Filesystem flags: signed_directory_hash
    Default mount options: user_xattr acl
    Filesystem state: clean
    Errors behavior: Continue
    Filesystem OS type: Linux
    Inode count: 7290880
    Block count: 29163075
    Reserved block count: 1458153
    Free blocks: 405302
    Free inodes: 7154689
    First block: 0
    Block size: 4096
    Fragment size: 4096
    Group descriptor size: 64
    Reserved GDT blocks: 1024
    Blocks per group: 32768
    Fragments per group: 32768
    Inodes per group: 8192
    Inode blocks per group: 512
    RAID stripe width: 8191
    Flex block group size: 16
    Filesystem created: Sun Oct 7 10:27:51 2018
    Last mount time: Wed Mar 27 18:01:27 2019
    Last write time: Wed Mar 27 18:01:23 2019
    Mount count: 92
    Maximum mount count: -1
    Last checked: Sun Oct 7 10:27:51 2018
    Check interval: 0 (<none>)
    Lifetime writes: 467 GB
    Reserved blocks uid: 0 (user root)
    Reserved blocks gid: 0 (group root)
    First inode: 11
    Inode size: 256
    Required extra isize: 32
    Desired extra isize: 32
    Journal inode: 8
    First orphan inode: 6952315
    Default directory hash: half_md4
    Directory Hash Seed: 30e707c8-a770-4147-95f8-4ad0ef7f4b8a
    Journal backup: inode blocks
    Checksum type: crc32c
    Checksum: 0xbbf3b2fb

    • Official Post

    Well, it isn't out of inodes, and the reservation isn't set unusually high. I would boot systemrescuecd on the system and see if the disk is still full.
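
    The tune2fs numbers above bear that out (4096-byte blocks):

    echo $(( 405302 * 4096 / 1024 / 1024 ))    # free blocks     ≈ 1583 MiB
    echo $(( 1458153 * 4096 / 1024 / 1024 ))   # reserved blocks ≈ 5695 MiB

    With fewer free blocks than reserved blocks, df reports 0 available to non-root users, which matches the 100%-full line seen earlier.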

  • Thanks @ryecoaaron, I will give this a try when I get back later tonight.


    edit: P.S. I have found a file that appears to be 131GB located at /proc/kcore. Is this normal? There must be over 200 new files in /proc, all created within 5 seconds of each other.
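
    (For what it's worth: /proc is a virtual filesystem and occupies no space on the boot disk; /proc/kcore in particular presents the kernel's virtual address space as a file, which is why its apparent size is enormous. du -x, as used above, already skips it by staying on one filesystem. To confirm:)

    findmnt /proc    # should show FSTYPE "proc", not a real block device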

    • Official Post

    Isn't there an easy way to stop all OMV processes at once, unmount the data disks, and then see which of the /srv mountpoints contains the data?

    Nope. Especially if you have docker running. Booting systemrescuecd would be much easier.

    Just thinking out loud: if I removed the storage drives, would that be of any benefit?

    OMV gets angry if you remove disks that are mounted by OMV. Why do you not want to boot systemrescuecd?
