Hard disk full!?

    • Hard disk full!?

      Hello,

      I installed OMV 5.0.5 on a TerraMaster F2-421 as a replacement for my old Synology NAS. OMV is installed on a 120 GB Patriot Burst SSD. In the TerraMaster NAS, one Seagate IronWolf 8 TB is installed and formatted with btrfs. OMV-Extras and the flash memory plugin are installed.

      I also installed Docker and Portainer from OMV-Extras, with their data in a directory on the hard disk. I use Docker for Syncthing, which now runs in a container.

      Currently I am copying data from the old NAS to OMV via CIFS shares and also via Syncthing. Today I got error messages that the hard disk is full. I connected to OMV via SSH and saw:

      Source Code

      df
      Filesystem  1K-blocks      Used  Available Use% Mounted on
      udev          3873040         0    3873040   0% /dev
      tmpfs          779144     87816     691328  12% /run
      /dev/sda1   106962540   2884372   98601664   3% /
      tmpfs         3895708       100    3895608   1% /dev/shm
      tmpfs            5120         0       5120   0% /run/lock
      tmpfs         3895708         0    3895708   0% /sys/fs/cgroup
      tmpfs         3895708         0    3895708   0% /tmp
      folder2ram    3895708    614812    3280896  16% /var/log
      folder2ram    3895708         0    3895708   0% /var/tmp
      folder2ram    3895708       692    3895016   1% /var/lib/openmediavault/rrd
      folder2ram    3895708      1396    3894312   1% /var/spool
      folder2ram    3895708     16460    3879248   1% /var/lib/rrdcached
      folder2ram    3895708         4    3895704   1% /var/lib/monit
      folder2ram    3895708         4    3895704   1% /var/lib/php
      folder2ram    3895708      1260    3894448   1% /var/cache/samba
      /dev/sdb1  7814025540 445237928          0 100% /srv/dev-disk-by-label-IronWolf8TB1
      But under Filesystems, OMV tells me that only 423.36 GiB is used and 6.86 TiB is free.
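      (On btrfs, plain df can be misleading, because btrfs hands out space in large chunks and df only sees the POSIX statfs view. A minimal read-only check with the filesystem's own tools, assuming the mount path from this thread:)

      Source Code

      # df reports the statfs view; btrfs reports chunk-level allocation.
      # Both commands are read-only and safe on a mounted filesystem.
      btrfs filesystem df /srv/dev-disk-by-label-IronWolf8TB1
      btrfs filesystem usage /srv/dev-disk-by-label-IronWolf8TB1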

      Can someone give me a hint what's going wrong and how I can solve the problem?

      Thanks in advance

      Matthias
    • This is very strange...

      I deleted the two directories that had been copied last, and behavior returned to normal, with plenty of free space:

      Source Code

      df
      Filesystem  1K-blocks      Used  Available Use% Mounted on
      udev          3873040         0    3873040   0% /dev
      tmpfs          779144     88612     690532  12% /run
      /dev/sda1   106962540   2885508   98600528   3% /
      tmpfs         3895708       160    3895548   1% /dev/shm
      tmpfs            5120         0       5120   0% /run/lock
      tmpfs         3895708         0    3895708   0% /sys/fs/cgroup
      tmpfs         3895708        12    3895696   1% /tmp
      folder2ram    3895708    630492    3265216  17% /var/log
      folder2ram    3895708         0    3895708   0% /var/tmp
      folder2ram    3895708       664    3895044   1% /var/lib/openmediavault/rrd
      folder2ram    3895708      1396    3894312   1% /var/spool
      folder2ram    3895708     17400    3878308   1% /var/lib/rrdcached
      folder2ram    3895708         4    3895704   1% /var/lib/monit
      folder2ram    3895708         4    3895704   1% /var/lib/php
      folder2ram    3895708      1260    3894448   1% /var/cache/samba
      /dev/sdb1  7814025540 436122020 7377876124   6% /srv/dev-disk-by-label-IronWolf8TB1

      I restarted the Syncthing container, which I had paused earlier because of the missing space, and it ran normally for several minutes. Then I again got a message that the target directory is full. Now I have 0 available space again:

      Source Code

      df
      Filesystem  1K-blocks      Used  Available Use% Mounted on
      udev          3873040         0    3873040   0% /dev
      tmpfs          779144     88612     690532  12% /run
      /dev/sda1   106962540   2885508   98600528   3% /
      tmpfs         3895708       160    3895548   1% /dev/shm
      tmpfs            5120         0       5120   0% /run/lock
      tmpfs         3895708         0    3895708   0% /sys/fs/cgroup
      tmpfs         3895708         4    3895704   1% /tmp
      folder2ram    3895708    630532    3265176  17% /var/log
      folder2ram    3895708         0    3895708   0% /var/tmp
      folder2ram    3895708       664    3895044   1% /var/lib/openmediavault/rrd
      folder2ram    3895708      1396    3894312   1% /var/spool
      folder2ram    3895708     17620    3878088   1% /var/lib/rrdcached
      folder2ram    3895708         4    3895704   1% /var/lib/monit
      folder2ram    3895708         4    3895704   1% /var/lib/php
      folder2ram    3895708      1260    3894448   1% /var/cache/samba
      /dev/sdb1  7814025540 437798356          0 100% /srv/dev-disk-by-label-IronWolf8TB1
      I stopped the Syncthing container again. Does anyone have an idea what the cause of this is? Maybe the hard disk has a defect?

      Matthias
    • Here is some additional information:
      I tested the hard disk with f3probe, and it found no problems:

      Source Code

      f3probe /dev/sdb
      F3 probe 7.1
      Copyright (C) 2010 Digirati Internet LTDA.
      This is free software; see the source for copying conditions.
      WARNING: Probing normally takes from a few seconds to 15 minutes, but
               it can take longer. Please be patient.
      Probe finished, recovering blocks... Done
      Good news: The device `/dev/sdb' is the real thing
      Device geometry:
                   *Usable* size: 7.28 TB (15628053168 blocks)
                  Announced size: 7.28 TB (15628053168 blocks)
                          Module: 8.00 TB (2^43 Bytes)
          Approximate cache size: 0.00 Byte (0 blocks), need-reset=no
             Physical block size: 512.00 Byte (2^9 Bytes)
      Probe time: 1'11"
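      (SMART data would complement f3probe here; a quick sketch, assuming smartmontools is installed:)

      Source Code

      # Overall health verdict, then full attributes and error logs
      smartctl -H /dev/sdb
      smartctl -a /dev/sdb
      # Optionally run a short self-test in the background
      smartctl -t short /dev/sdb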

      And here is the output from fdisk:

      Source Code

      fdisk -l
      Disk /dev/sdb: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk model: ST8000VN004-2M21
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: C89B0361-1027-4CEF-91D0-B8B50D573E44

      Device     Start         End     Sectors Size Type
      /dev/sdb1   2048 15628053134 15628051087 7.3T Linux filesystem

      Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors
      Disk model: 2115
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 33553920 bytes
      Disklabel type: dos
      Disk identifier: 0x27e60320

      Device     Boot     Start       End   Sectors   Size Id Type
      /dev/sda1  *          2048 218406911 218404864 104.1G 83 Linux
      /dev/sda2       218408958 234440703  16031746   7.7G  5 Extended
      /dev/sda5       218408960 234440703  16031744   7.7G 82 Linux swap / Solaris
      Matthias
    • Yes, this seems to be the cause:

      Source Code

      df -i
      Filesystem  Inodes IUsed   IFree IUse% Mounted on
      udev        968260   378  967882    1% /dev
      tmpfs       973927   695  973232    1% /run
      /dev/sda1  6832128 59229 6772899    1% /
      tmpfs       973927    56  973871    1% /dev/shm
      tmpfs       973927    11  973916    1% /run/lock
      tmpfs       973927    17  973910    1% /sys/fs/cgroup
      tmpfs       973927    12  973915    1% /tmp
      folder2ram  973927   100  973827    1% /var/log
      folder2ram  973927     5  973922    1% /var/tmp
      folder2ram  973927    36  973891    1% /var/lib/openmediavault/rrd
      folder2ram  973927   488  973439    1% /var/spool
      folder2ram  973927    96  973831    1% /var/lib/rrdcached
      folder2ram  973927     4  973923    1% /var/lib/monit
      folder2ram  973927   133  973794    1% /var/lib/php
      folder2ram  973927     3  973924    1% /var/cache/samba
      /dev/sdb1        0     0       0     - /srv/dev-disk-by-label-IronWolf8TB1
      Can you point me to a solution? I have heard about such problems, but I don't know a fix for them.

      Matthias
    • matkoh wrote:

      Can you point me to a solution? I have heard about such problems, but I don't know a fix for them.
      Your filesystem reports no inodes in that list. I just noticed this is btrfs, too. This should help - nrtm.org/index.php/2012/03/13/…on-device/comment-page-1/
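      (The fix described in articles like that one is typically a filtered balance, which compacts mostly-empty chunks instead of rewriting the whole filesystem; a sketch using the mount point from this thread:)

      Source Code

      # Compact data chunks that are at most 5% full; far cheaper than a full balance
      btrfs balance start -dusage=5 /srv/dev-disk-by-label-IronWolf8TB1
      # Check progress from another shell if it runs long
      btrfs balance status /srv/dev-disk-by-label-IronWolf8TB1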
    • TechnoDadLife wrote:

      So this has been a problem for 8 years, based on the article. It seems like the default settings should be a little less aggressive.
      I just checked, and I have almost 2 million files on my 8 TB drive with a btrfs filesystem. So I don't think the average user is going to run into this very often.
    • As I understand it, in the article snapper was creating many snapshots, which caused the issue. We don't know whether the OP is using snapper. That's unlikely; otherwise he would be aware of the config file, since he would have installed snapper on purpose.

      BTW, in the meantime it is possible to delete a range of snapshots with one command, as sketched below.
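      (A sketch of that, assuming a snapper config named "root"; the snapshot numbers are only placeholders:)

      Source Code

      # List snapshots for the config, then delete a contiguous range in one command
      snapper -c root list
      snapper -c root delete 100-150   # 100-150 is an example range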
    • I also think that snapshots are not the problem.

      I don't have snapper installed. I used the standard OpenMediaVault 5.0.5 installation.

      At wiki.debianforum.de/Btrfs#Grundlagen_3 I found a way to list snapshots:

      Source Code

      /usr/bin/btrfs subvolume list /srv/dev-disk-by-label-IronWolf8TB1
      ID 257 gen 15041 top level 5 path docker/btrfs/subvolumes/ebf6713eecd5be7f0f1ce83223a43fbd70d95b4c8da4ba70926698f76cec6290
      ID 258 gen 15041 top level 5 path docker/btrfs/subvolumes/6333ab9f8d91f0a31e859ca1fc9535d8310976a94b2eaf5c5fbdd41ec0b7da8c
      ID 260 gen 15041 top level 5 path docker/btrfs/subvolumes/a35723f3b2f25ff1754d0696018af252b951d95779e60657b30d53546fd05fe6-init
      ID 261 gen 159340 top level 5 path docker/btrfs/subvolumes/a35723f3b2f25ff1754d0696018af252b951d95779e60657b30d53546fd05fe6
      ID 262 gen 15041 top level 5 path docker/btrfs/subvolumes/04880c0813ee9b59f8bb103ba3ca271589cc1d59331337f4ab2dec4ebfa48f8b
      ID 263 gen 15041 top level 5 path docker/btrfs/subvolumes/ae4a3b1884f14fa3ddc06a80f1e50a713c6b950f9e9b80a3e2059cb818a44914
      ID 264 gen 15041 top level 5 path docker/btrfs/subvolumes/dd93047ca1813fa8c07f54f4de11eb87a49a24809327fa72aa2280008b9925b7
      ID 265 gen 15041 top level 5 path docker/btrfs/subvolumes/e92a0a5578c95448ef4def797b1184c06b36b209b78f075900c895d7e2d4b931
      ID 266 gen 15041 top level 5 path docker/btrfs/subvolumes/7ed8709ad68702d7a30a93a10711d4266f3ebdc6d48c0519dceb8f482df62bc8
      ID 267 gen 20738 top level 5 path docker/btrfs/subvolumes/5db32d6ba2df5d9f282dbb11b7a3f92aacfbb84b826b5dc6933802e11f13af7b
      ID 712 gen 151093 top level 5 path docker/btrfs/subvolumes/f3c31fdb089d2a2cb1a92c6bb55274bba74e14e72ce01ff0d678f2829fd538be
      ID 713 gen 151111 top level 5 path docker/btrfs/subvolumes/f7a78db6be55747065c557dc8eae23f2bffcb97edd03712283eedcb1e5b4425b
      ID 714 gen 151120 top level 5 path docker/btrfs/subvolumes/fcd459cb578600648ec81b8d2f5b4b0ea234a78bec0ec61c8fe26490f1fbbb0f
      ID 715 gen 151131 top level 5 path docker/btrfs/subvolumes/596f05ca1787a6d302da5ada05c73f19b0ec4af68db12f28085a136f14b07629
      ID 716 gen 151147 top level 5 path docker/btrfs/subvolumes/b49ef20515214e084ae37bd02d2cfacf11f2007dd1377f6d172db4464cb09a1d
      ID 717 gen 151165 top level 5 path docker/btrfs/subvolumes/df564ced2fddd7bffe9f9a2bfe390933ad73bde1d6546a34dc3178b32972d046
      ID 719 gen 151166 top level 5 path docker/btrfs/subvolumes/224783c3b8b6b4ca2afd86d83ee97874510963fc71b21926d0d35a70f8ebeec1-init
      ID 720 gen 159320 top level 5 path docker/btrfs/subvolumes/224783c3b8b6b4ca2afd86d83ee97874510963fc71b21926d0d35a70f8ebeec1

      These snapshots are not very big:

      Source Code

      du -sh /srv/dev-disk-by-label-IronWolf8TB1/docker/btrfs/subvolumes
      795M    subvolumes
      I searched for how to get more information about my filesystem and found:

      Source Code

      /usr/bin/btrfs fi usage /srv/dev-disk-by-label-IronWolf8TB1
      Overall:
          Device size:                   7.28TiB
          Device allocated:            421.02GiB
          Device unallocated:            6.87TiB
          Device missing:                  0.00B
          Used:                        419.96GiB
          Free (estimated):              6.87TiB      (min: 3.43TiB)
          Data ratio:                       1.00
          Metadata ratio:                   2.00
          Global reserve:              512.00MiB      (used: 0.00B)

      Data,single: Size:419.01GiB, Used:418.45GiB
         /dev/sdb1     419.01GiB

      Metadata,DUP: Size:1.00GiB, Used:773.81MiB
         /dev/sdb1       2.00GiB

      System,DUP: Size:8.00MiB, Used:64.00KiB
         /dev/sdb1      16.00MiB

      Unallocated:
         /dev/sdb1       6.87TiB
      Here I see that my hard disk has a size of 7.28 TiB (it's an IronWolf 8 TB drive), 421.02 GiB is allocated, and 419.96 GiB is used.

      Then I read about allocation in btrfs and found that unallocated space is supposed to be allocated automatically when needed. But on my disk this does not seem to happen. It looks to me as if the 421.02 GiB were the maximum space that can be used. I don't understand this, and I am wondering whether it was a big mistake to use btrfs for my hard disk.
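      (A low-risk experiment at this point would be to drop completely empty data chunks, which costs almost nothing and can unstick the allocator; a sketch, not a guaranteed fix:)

      Source Code

      # -dusage=0 only touches data chunks that contain no live data at all
      btrfs balance start -dusage=0 /srv/dev-disk-by-label-IronWolf8TB1
      # Then re-check the allocation picture
      btrfs filesystem usage /srv/dev-disk-by-label-IronWolf8TB1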

      But maybe someone can find a solution for this?

      Thanks for any hint that can help. If you need more information, please ask.

      Matthias
    • A complete balance ends with an error:

      Source Code

      btrfs balance start /srv/dev-disk-by-label-IronWolf8TB1/
      WARNING:
      Full balance without filters requested. This operation is very
      intense and takes potentially very long. It is recommended to
      use the balance filters to narrow down the scope of balance.
      Use 'btrfs balance start --full-balance' option to skip this
      warning. The operation will start in 10 seconds.
      Use Ctrl-C to stop it.
      10 9 8 7 6 5 4 3 2 1
      Starting balance without any filters.
      ERROR: error during balancing '/srv/dev-disk-by-label-IronWolf8TB1/': No space left on device
      There may be more info in syslog - try dmesg | tail

      The last lines of dmesg say:

      Source Code

      [109822.888749] BTRFS info (device sdb1): relocating block group 1104150528 flags data
      [109834.843236] BTRFS info (device sdb1): found 4111 extents
      [109837.550816] BTRFS info (device sdb1): found 4111 extents
      [109839.844608] BTRFS info (device sdb1): relocating block group 22020096 flags system|dup
      [109842.011210] BTRFS info (device sdb1): found 5 extents
      [109843.931808] BTRFS info (device sdb1): relocating block group 13631488 flags data
      [109845.657598] BTRFS info (device sdb1): found 128 extents
      [109847.551417] BTRFS info (device sdb1): found 128 extents
      [109848.951345] BTRFS info (device sdb1): 1 enospc errors during balance
      [109848.951352] BTRFS info (device sdb1): balance: ended with status: -28
      [141196.929489] perf: interrupt took too long (6195 > 6193), lowering kernel.perf_event_max_sample_rate to 32250
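      (When a full balance itself dies with ENOSPC, a common workaround is to step the usage filter up gradually, so each cheap pass frees room for the next; a sketch under that assumption:)

      Source Code

      # Relocate progressively fuller chunks; early passes make space for later ones
      for pct in 0 5 10 25 50; do
          btrfs balance start -dusage=$pct -musage=$pct /srv/dev-disk-by-label-IronWolf8TB1/
      done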

      Matthias
    • Maybe this is also interesting:

      Source Code

      btrfs filesystem df /srv/dev-disk-by-label-IronWolf8TB1
      Data, single: total=419.00GiB, used=418.85GiB
      System, DUP: total=32.00MiB, used=80.00KiB
      Metadata, DUP: total=2.00GiB, used=742.78MiB
      GlobalReserve, single: total=508.09MiB, used=0.00B
      There is nothing here about the unallocated space; it looks as if the hard disk only had 419 GiB.

      Matthias
    • It seems my problem is solved...

      I updated manually:

      Source Code

      apt update
      apt upgrade
      ...
      The following packages will be upgraded:
        e2fsprogs libext2fs2 logsave openmediavault openmediavault-omvextrasorg
      ...
      Get:1 http://packages.openmediavault.org/public usul/main amd64 openmediavault all 5.2.4-1 [1670 kB]
      Get:2 https://dl.bintray.com/openmediavault-plugin-developers/usul buster/main amd64 openmediavault-omvextrasorg all 5.2.2 [69.3 kB]
      Get:3 http://httpredir.debian.org/debian buster-backports/main amd64 logsave amd64 1.45.5-2~bpo10+1 [72.2 kB]
      Get:4 http://httpredir.debian.org/debian buster-backports/main amd64 libext2fs2 amd64 1.45.5-2~bpo10+1 [248 kB]
      Get:5 http://httpredir.debian.org/debian buster-backports/main amd64 e2fsprogs amd64 1.45.5-2~bpo10+1 [593 kB]
      ...
      omv-confdbadm populate

      After the update, I rebooted.

      The system has now been running for about 30 minutes, Syncthing is writing new files to the hard disk, and there are no errors...

      The allocated space is growing, as it should:

      Source Code

      btrfs filesystem usage /srv/dev-disk-by-label-IronWolf8TB1/
      Overall:
          Device size:                   7.28TiB
          Device allocated:            435.06GiB
          Device unallocated:            6.85TiB
          Device missing:                  0.00B
          Used:                        431.62GiB
          Free (estimated):              6.85TiB      (min: 3.43TiB)
          Data ratio:                       1.00
          Metadata ratio:                   2.00
          Global reserve:              512.00MiB      (used: 0.00B)

      Data,single: Size:431.00GiB, Used:430.10GiB
         /dev/sdb1     431.00GiB

      Metadata,DUP: Size:2.00GiB, Used:776.31MiB
         /dev/sdb1       4.00GiB

      System,DUP: Size:32.00MiB, Used:80.00KiB
         /dev/sdb1      64.00MiB

      Unallocated:
         /dev/sdb1       6.85TiB
      Currently 435 GiB is allocated. Before the update and reboot it stayed at 419 GiB. Hopefully this is not just a short-term fix... I will report back.
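      (A simple way to watch whether allocation keeps tracking usage is a periodic read-only check; a sketch, assuming watch is available:)

      Source Code

      # Refresh the allocation summary every 5 minutes; Ctrl-C to stop
      watch -n 300 btrfs filesystem usage /srv/dev-disk-by-label-IronWolf8TB1/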

      Matthias
    • It happened again...

      I was copying more files from the old NAS to OMV and started a Windows VirtualBox VM in which Syncthing runs. Then again: "disk full".

      Source Code

      df
      Filesystem  1K-blocks      Used  Available Use% Mounted on
      udev          3873036         0    3873036   0% /dev
      tmpfs          779144     93548     685596  13% /run
      /dev/sda1   106962540   3187104   98298932   4% /
      tmpfs         3895704       200    3895504   1% /dev/shm
      tmpfs            5120         0       5120   0% /run/lock
      tmpfs         3895704         0    3895704   0% /sys/fs/cgroup
      tmpfs         3895704         0    3895704   0% /tmp
      folder2ram    3895704   3895704          0 100% /var/log
      /dev/sdb1  7814025540 613660876          0 100% /srv/dev-disk-by-label-IronWolf8TB1
      folder2ram    3895704         0    3895704   0% /var/tmp
      folder2ram    3895704       676    3895028   1% /var/lib/openmediavault/rrd
      folder2ram    3895704      1396    3894308   1% /var/spool
      folder2ram    3895704     17456    3878248   1% /var/lib/rrdcached
      folder2ram    3895704         4    3895700   1% /var/lib/monit
      folder2ram    3895704         0    3895704   0% /var/lib/php
      folder2ram    3895704      1260    3894444   1% /var/cache/samba
      root@TerraMaster-OMV:~# Connection to 192.168.178.91 closed by remote host.
      Connection to 192.168.178.91 closed.


      Source Code

      btrfs filesystem usage /srv/dev-disk-by-label-IronWolf8TB1/
      Overall:
          Device size:                   7.28TiB
          Device allocated:            586.06GiB
          Device unallocated:            6.70TiB
          Device missing:                  0.00B
          Used:                        584.73GiB
          Free (estimated):              6.71TiB      (min: 3.35TiB)
          Data ratio:                       1.00
          Metadata ratio:                   2.00
          Global reserve:              512.00MiB      (used: 0.00B)

      Data,single: Size:582.00GiB, Used:581.23GiB
         /dev/sdb1     582.00GiB

      Metadata,DUP: Size:2.00GiB, Used:1.75GiB
         /dev/sdb1       4.00GiB

      System,DUP: Size:32.00MiB, Used:96.00KiB
         /dev/sdb1      64.00MiB

      Unallocated:
         /dev/sdb1       6.70TiB
      A restart of OMV did not help.

      Matthias
    • A complete balance solved the problem for now:

      Source Code

      btrfs balance start /srv/dev-disk-by-label-IronWolf8TB1/
      WARNING:
      Full balance without filters requested. This operation is very
      intense and takes potentially very long. It is recommended to
      use the balance filters to narrow down the scope of balance.
      Use 'btrfs balance start --full-balance' option to skip this
      warning. The operation will start in 10 seconds.
      Use Ctrl-C to stop it.
      10 9 8 7 6 5 4 3 2 1
      Starting balance without any filters.
      Done, had to relocate 587 out of 587 chunks
      But I still don't know why this occurs. Does anyone have an idea? I don't want to run into this error every few days...
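      (Until the root cause is found, a scheduled filtered balance is a common mitigation, so chunk allocation never runs out between manual fixes; a sketch of a weekly cron entry, with a hypothetical file name:)

      Source Code

      # /etc/cron.d/btrfs-balance (hypothetical file name)
      # Every Sunday at 03:00, compact data/metadata chunks that are at most 50% full
      0 3 * * 0 root /usr/bin/btrfs balance start -dusage=50 -musage=50 /srv/dev-disk-by-label-IronWolf8TB1/ >/dev/null 2>&1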

      Matthias