Posts by wolf69

    Thanks to you both,

    I confirm that if you run into missing space on your system disk and can't figure out where the files are, the solution in the link mentioned by macom works perfectly!

    (Making this final post in case it pops up for other users searching for a similar issue.)

    Thanks again for your time!

    Sorry, and thanks; I was doing it in the meantime. The step-by-step explanation was well written, I should have looked more closely at the last post.

    So I found the 70 GB and deleted the files. Now I'm stuck, as I can't unmount the folder that was created; the CLI says it doesn't recognize the unmount command.
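    In case it trips up anyone else: the utility is spelled umount (no first "n"), which is probably why the CLI did not recognize the command. A minimal example (the mount point path is hypothetical):

```shell
# "unmount" is not a command; the utility is spelled "umount".
command -v umount        # should print /usr/bin/umount or similar
umount /mnt/example      # unmount by mount point (hypothetical path)
# If the target is reported busy, see what is holding it before forcing anything:
# fuser -vm /mnt/example
```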

    Thanks too for bearing with me.

    I agree with your assumption that the disk was not mounted at the time, but it's weird that I can't figure out where the files are. Here is the root search:

    Yes I did, but I can't find the folder.

    root@NAS-JC:/var# du -ahxd1 | sort -hr | head -n 10

    12G .

    11G ./lib

    985M ./log

    39M ./cache

    6,7M ./www

    1,8M ./backups

    260K ./spool

    28K ./tmp

    4,0K ./opt

    4,0K ./mail

    I would assume it would be in the docker folder, but the Portainer UI only shows 17 GB used…

    Could it be the overlay entries above? But I read they don't really take up that much space and are just like a cache.
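    For anyone landing here with the same doubt: the overlay "merged" lines in df repeat the size and usage of the root filesystem they live on, so they are not extra consumption. A sketch to measure what Docker itself actually occupies (needs root; docker system df needs the daemon running):

```shell
# df shows each overlay2 "merged" mount with the root filesystem's totals;
# the real on-disk footprint is the directory itself:
du -sh /var/lib/docker/overlay2

# Breakdown by images, containers and volumes (daemon must be running):
docker system df
```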


    Thanks for answering.

    So yes, my system disk was full. After cleaning the package cache (apt-get clean), I'm now able to log in to the OMV UI.

    I think I understand where the issue is coming from. Earlier this week there was a problem with sabnzbd: it downloaded some files that I couldn't retrieve in my usual "complete" folder. I rebooted and it worked as usual again, but it seems those files were downloaded onto the OS SSD instead of the usual other HDD.

    I confirmed it by looking at the historical usage of the OS SSD, which went from 20 GB to 100 GB earlier this week…

    I did a search, but I can't find the files on my OS SSD:

    root@NAS-JC:/var# df -h

    Filesystem Size Used Avail Use% Mounted on

    udev 7,8G 0 7,8G 0% /dev

    tmpfs 1,6G 19M 1,6G 2% /run

    /dev/sdn1 102G 95G 1,2G 99% /

    tmpfs 7,8G 0 7,8G 0% /dev/shm

    tmpfs 5,0M 0 5,0M 0% /run/lock

    tmpfs 7,8G 0 7,8G 0% /sys/fs/cgroup

    tmpfs 7,8G 0 7,8G 0% /tmp

    overlay 102G 95G 1,2G 99% /var/lib/docker/overlay2/73cec07480eb97405e88b50056f841ee35dabcec3cfe0daaed7663c7fe7e316b/merged

    overlay 102G 95G 1,2G 99% /var/lib/docker/overlay2/111cea32ca8b5e413318bf525533845090d36e04828b2aaf376c953e974601e7/merged

    overlay 102G 95G 1,2G 99% /var/lib/docker/overlay2/ae30d6a33635d0da98bb5f2c2e2beff8e08cc9492966f832b953b6a952b69773/merged

    overlay 102G 95G 1,2G 99% /var/lib/docker/overlay2/83167294355cb0592bb96e9baf8da9f81d1ad9ac4fca5640b6172d871ddf8fa5/merged

    overlay 102G 95G 1,2G 99% /var/lib/docker/overlay2/dd7a4155bfadcc8c8dc518b86786ca495f688dc561fb0dff7030a1a805877abf/merged

    overlay 102G 95G 1,2G 99% /var/lib/docker/overlay2/e49f19236750998ee45bf91ad9c75736c78b998eed5e54a3ab7025025bd8aa14/merged

    overlay 102G 95G 1,2G 99% /var/lib/docker/overlay2/0251c98dfb58efa283190ca094c9a34fdb55ab77d95453d58272666474f88e24/merged

    overlay 102G 95G 1,2G 99% /var/lib/docker/overlay2/636dddd7f52dc4cdd83caecf5dc29ac3be653220ad8824b481c28c6b05f48279/merged

    overlay 102G 95G 1,2G 99% /var/lib/docker/overlay2/6ee94ec482117109551a73521ba4b3b60ab3288f944b7072dc6f328a33d8bbd0/merged

    shm 64M 0 64M 0% /var/lib/docker/containers/4e587a57d3770c91cee09244c4f97fab238c2b36c3cb6b22a83f662bace0f0ad/mounts/shm

    shm 64M 0 64M 0% /var/lib/docker/containers/d253d5fc409054f4eb5e0993d9b280c2a70a0853de8718ec8aa913c967393eb3/mounts/shm
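    Since the data HDDs are mounted under the same tree, a search limited to the root filesystem may help locate the stray downloads; a sketch (the 1 GB size threshold is just an example to adjust):

```shell
# -xdev keeps find on the root filesystem only, so the data HDDs
# and the Docker overlay mounts are skipped.
find / -xdev -type f -size +1G -exec ls -lh {} \; 2>/dev/null
```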



    I have been using OMV for years, and today my Radarr docker failed with a "disk i/o error"; I thought it was the docker container.

    I restarted the docker and it then became unavailable.

    Then I looked at Sonarr: same issue…

    Then I restarted sabnzbd and got a lot of errors in the UI that I had never seen before.

    So I wanted to check the OMV UI to see if there was an issue with my system hard drive, and now I can't even access the UI: I log in and it just stays on the login page.

    The weird thing is that some apps like sabnzbd, and access to the different disks via FTP and SMB, work just fine.

    Would you have any idea? Any recommendation on how to fix or debug this?



    I just bought and installed an IP camera which stores its video files on a specific HDD via FTP.

    The IP camera (Reolink) app is good enough that I don't need to use an NVR app on OMV like Shinobi.

    However, the only issue I have is that the HDD will run out of space. Is there any way to run a script that deletes the oldest files when disk space runs low, with a threshold for free space that I can define?
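    In case someone searches for the same thing later, here is a minimal cron-able sketch; the mount point and threshold are hypothetical values to adapt to your setup:

```shell
#!/bin/sh
# Delete the oldest camera recordings while disk usage is above a threshold.
# CAM_DIR and THRESHOLD are hypothetical; adjust for your setup.
CAM_DIR=/srv/dev-disk-by-label-Camera
THRESHOLD=90   # start deleting when the disk is more than 90% full

usage() { df --output=pcent "$CAM_DIR" | tail -1 | tr -dc '0-9'; }

while [ "$(usage)" -gt "$THRESHOLD" ]; do
    # Oldest file first (mtime in seconds, then path)
    oldest=$(find "$CAM_DIR" -type f -printf '%T@ %p\n' | sort -n | head -1 | cut -d' ' -f2-)
    [ -n "$oldest" ] || break
    rm -f -- "$oldest"
done
```

    Run it from cron (for example hourly) so the disk never quite fills up.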

    I don't think it's the label; it appears to be the file system. Look at the output of blkid. You could run fsck /dev/sdb

    I already tried to fix it, without success.


    I was doing some tests on the transfer speed of my HDDs, and the ones I was testing seem to have lost the mapping to their labels. That's really weird; I hope I didn't erase them by accident...

    By running:

    blkid -o full

    I have the results:

    /dev/sdg1: LABEL="Autre" UUID="380b65e9-771c-4996-abbe-3efbe4c64981" TYPE="ext4" PARTUUID="44672ec2-e421-4871-911d-a4489debb0f5"
    /dev/sdb1: PARTUUID="fc2dc338-bce2-4760-9b90-816cd3980796"
    /dev/sdc1: PARTUUID="b6a51930-60e2-493a-a901-f9f8f0fc199a"

    and by running:

     ls -la /srv

    I have:

    drwxrwxrwx  3 root root    4096 févr. 14 14:58 dev-disk-by-label-Films
    drwxrwxrwx  3 root root    4096 févr. 14 14:58 dev-disk-by-label-Telechargement
    drwxr-xr-x  4 root root    4096 oct.  23 18:51 dev-disk-by-label-Autre

    As you can see, sdb1 and sdc1 are no longer mapped to their file systems "Films" and "Telechargement".

    sdb      8:16   0  10,9T  0 disk
    └─sdb1   8:17   0  10,9T  0 part
    sdc      8:32   0   1,8T  0 disk
    └─sdc1   8:33   0   1,8T  0 part
    sdg      8:96   0   3,7T  0 disk
    └─sdg1   8:97   0   3,7T  0 part /srv/dev-disk-by-label-Autre

    How can I remap them without losing my data (if it is still there)?
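    For anyone hitting the same symptom: when blkid shows only a PARTUUID and no TYPE or LABEL, the filesystem signature at the start of the partition is missing or damaged. Before writing anything to the disks, it is worth checking read-only what is actually there (device names below match the blkid output above):

```shell
# Read-only checks; neither command writes to the disk.
wipefs --no-act /dev/sdb1   # lists any filesystem/RAID signatures still present
file -s /dev/sdb1           # identifies the data at the start of the partition

# If an ext4 signature is found, a dry-run check can be attempted:
# fsck.ext4 -n /dev/sdb1    # -n = no changes, report only
```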

    Thank you