Hard disk full!?

  • Hello,


    I installed OMV 5.0.5 on a TerraMaster F2-421 as a replacement for my old Synology NAS. OMV is installed on a 120 GB Patriot Burst SSD. The TerraMaster NAS contains one Seagate IronWolf 8 TB drive, formatted with btrfs. OMV-Extras and the flash plugin are installed.


    I also installed Docker and Portainer from OMV-Extras in a directory on the hard disk. I use Docker for Syncthing, which now runs in a container.


    Currently I am copying data from the old NAS to OMV via CIFS shares and also via Syncthing. Today I got error messages saying that the hard disk is full. I connected to OMV via SSH and saw:

    But OMV tells me under Filesystems that only 423.36 GiB is used and 6.86 TiB is free.


    Can someone give me a hint as to what is going wrong and how I can solve the problem?


    Thanks in advance


    Matthias

  • This is very strange...


    I deleted the two directories that were copied last, and everything behaved normally again, with plenty of free space:


    I restarted the Syncthing container, which I had paused earlier because of the missing space, and it ran normally for several minutes. Then I again got a message that the target directory is full. Now the available space is 0 again:

    I stopped the Syncthing container again. Does anyone have an idea what the cause of this is? Maybe the hard disk is defective?


    Matthias

  • Here is some additional information:
    I tested the hard disk with f3probe and it found no problems:


    And here is the output of fdisk:

    Matthias

    • Official post

    You might be out of inodes. What is the output of: df -i
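A quick way to check (the path below uses `/` as a stand-in; substitute the data drive's mount point):

```shell
# Show inode usage per filesystem; IUse% at 100% means the filesystem
# is out of inodes even though `df -h` may still report free space.
# On btrfs, inodes are allocated dynamically, so the columns show 0.
df -i /
```

On btrfs the inode columns read 0, which is why the inode count alone cannot settle the question there.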

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Yes, this seems to be the cause:

    Can you point me to a solution? I have heard about such problems, but don't know how to fix them.


    Matthias

    • Official post

    Can you point me to a solution? I have heard about such problems, but don't know how to fix them.

    You have no inodes in that list. I just noticed this is btrfs, too. This should help - http://www.nrtm.org/index.php/…on-device/comment-page-1/
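The fix described in articles of that kind amounts to a filtered btrfs balance; a hedged sketch, assuming the mount path from earlier in the thread and root privileges:

```shell
# Rewrite data chunks that are under 5% full so their space returns
# to the unallocated pool; if nothing is relocated, retry with a
# higher percentage (10, 25, 50, ...).
btrfs balance start -dusage=5 /srv/dev-disk-by-label-IronWolf8TB1

# Verify that unallocated space is available again.
btrfs filesystem usage /srv/dev-disk-by-label-IronWolf8TB1
```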


    • Official post

    So this has been a problem for 8 years, based on the article. It seems like the default settings should be a little less aggressive.

    I just checked and I have almost 2 million files on my 8TB drive with btrfs filesystem. So, I don't think the average user is going to have a problem very often.


    • Official post

    As I understand it, in the article's case snapper was creating many snapshots, which caused the issue. We do not know whether the OP is using snapper. Unlikely, since otherwise he would be aware of the config file, having installed snapper on purpose.


    By the way, in the meantime it has become possible to delete a range of snapshots with a single command.
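For example, with snapper a numeric range can be removed in one call (the config name and snapshot numbers here are made up for illustration):

```shell
# List snapshots of the "root" config, then delete numbers 100-200
# in a single command; snapper accepts number ranges.
snapper -c root list
snapper -c root delete 100-200
```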

    • Official post

    For reference, I have no snapshots on mine.


  • I also think that snapshots are not the problem.


    I don't have snapper installed. I used the standard installation of OpenMediaVault 5.0.5.


    In https://wiki.debianforum.de/Btrfs#Grundlagen_3 I found a way to list snapshots:


    These snapshots are not very big:

    Code
    du -sh /srv/dev-disk-by-label-IronWolf8TB1/docker/btrfs/subvolumes
    795M    subvolumes
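Docker's btrfs storage driver keeps every image layer and container in a subvolume of its own, so they can also be enumerated directly; a sketch, assuming the same mount point and root privileges:

```shell
# List every subvolume on the filesystem; btrfs snapshots are
# subvolumes too, so stray snapshots would show up here as well.
btrfs subvolume list /srv/dev-disk-by-label-IronWolf8TB1
```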

    I searched for how to get more information about my filesystem and found:

    Here I see that my hard disk has a size of 7.28 TiB (it's an IronWolf 8 TB drive), 421.02 GiB is allocated and 419.96 GiB is used.


    Then I read about allocation in btrfs and found that unallocated space is supposed to be allocated automatically when needed. But on my disk this does not seem to happen. To me it looks as if the 421.02 GiB is the maximum space that can be used. I don't understand this and am wondering whether it was a big mistake to use btrfs for my hard disk.


    But maybe someone can find a solution for this?


    Thanks for any hint that can help. If you need more information, please tell me.


    Matthias

  • I found that a balance can help with free-space problems on btrfs. But not on my disk:

    Code
    btrfs balance start -dusage=25 -dlimit=10 -musage=25 -mlimit=10 /srv/dev-disk-by-label-IronWolf8TB1/
    Done, had to relocate 0 out of 422 chunks
  • A complete balance ended with an error:


    The last lines of dmesg say:


    Matthias

  • Maybe this is also interesting:

    Code
    btrfs filesystem df /srv/dev-disk-by-label-IronWolf8TB1
    Data, single: total=419.00GiB, used=418.85GiB
    System, DUP: total=32.00MiB, used=80.00KiB
    Metadata, DUP: total=2.00GiB, used=742.78MiB
    GlobalReserve, single: total=508.09MiB, used=0.00B

    There is nothing to be seen of the unallocated space here. It looks as if the hard disk only has 419 GiB.
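This is expected behaviour: `btrfs filesystem df` only reports the space inside already-allocated chunks. The unallocated remainder of the disk shows up in a different command; a sketch, assuming the same mount point:

```shell
# Shows Device size, Device allocated and, crucially, Device
# unallocated: the pool new chunks are carved from when the
# existing data chunks fill up.
btrfs filesystem usage /srv/dev-disk-by-label-IronWolf8TB1
```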


    Matthias

  • It seems my problem is solved...


    I updated manually:


    After the update, I rebooted.


    The system has now been running for about 30 minutes, Syncthing is writing new files to the hard disk, and there are no errors...


    The allocated space is growing, as it should:

    Currently 435 GiB is allocated. Before the update and reboot it stayed at 419 GiB. Hopefully this is not just a short-term fix... I will report back.


    Matthias

  • It happened again...


    I was copying more files from the old NAS to OMV and started a Windows VirtualBox VM in which Syncthing runs. Then again "disk full".


    A restart of OMV didn't help.


    Matthias

  • A complete balance has solved the problem for now:

    But I still don't know why this occurs. Does anyone have an idea? I don't want this error every few days...
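One common workaround when chunk allocation keeps getting stuck is to run a filtered balance on a schedule; a hedged sketch as a weekly cron script (path, threshold, and filename are assumptions):

```shell
#!/bin/sh
# /etc/cron.weekly/btrfs-balance  (make executable with chmod +x)
# Compact data chunks that are under 25% full so unallocated space
# stays available for new chunk allocations.
btrfs balance start -dusage=25 /srv/dev-disk-by-label-IronWolf8TB1
```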


    Matthias
