I am still experimenting with btrfs and have run into something like the known "no disk space available" behavior, though my case is slightly different: I am not getting any errors, but since today OMV is simply unable to detect the correct amount of total storage space. The current setup is a simple stripe (RAID0) of two 16 TB disks. For the past weeks, OMV's disk usage graphs were showing correct data, but they stopped doing so a few hours ago.
I checked the output of the btrfs disk usage tools and I think it looks fine. Any ideas?
Edit: Or is this just the common "no disk space" bug and I misunderstood its behavior? I was assuming that unallocated space is allocated automatically when needed.
Code
root@NAS:~# btrfs filesystem df -h /srv/dev-disk-by-label-Storage
Data, RAID0: total=14.28TiB, used=14.28TiB
System, RAID1: total=8.00MiB, used=1.00MiB
Metadata, RAID1: total=16.00GiB, used=15.92GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
root@NAS:~# btrfs fi usage /srv/dev-disk-by-label-Storage
Overall:
    Device size:                  29.11TiB
    Device allocated:             14.31TiB
    Device unallocated:           14.79TiB
    Device missing:                  0.00B
    Used:                         14.31TiB
    Free (estimated):             14.80TiB      (min: 7.40TiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB      (used: 0.00B)

Data,RAID0: Size:14.28TiB, Used:14.28TiB
   /dev/sdb        7.14TiB
   /dev/sdc        7.14TiB

Metadata,RAID1: Size:16.00GiB, Used:15.92GiB
   /dev/sdb       16.00GiB
   /dev/sdc       16.00GiB

System,RAID1: Size:8.00MiB, Used:1.00MiB
   /dev/sdb        8.00MiB
   /dev/sdc        8.00MiB

Unallocated:
   /dev/sdb        7.39TiB
   /dev/sdc        7.39TiB
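For what it's worth, the numbers above do seem internally consistent. As a sanity check, here is a minimal sketch (my own arithmetic, not an official btrfs formula) of how the "Free (estimated)" figure appears to follow from the reported values, assuming new data chunks keep the RAID0 data profile (ratio 1.0):

```python
# Sanity check of btrfs's "Free (estimated)" using the values
# reported above. All values in TiB, copied from the output.
device_size      = 29.11   # total capacity across both disks
device_allocated = 14.31   # space already carved into chunks
unallocated      = device_size - device_allocated

data_total = 14.28         # size of allocated data chunks (RAID0)
data_used  = 14.28         # data actually written into them

# Free space = unused room inside existing data chunks plus all
# unallocated space (RAID0 data ratio is 1.0, so no halving).
free_estimated = (data_total - data_used) + unallocated

print(f"{free_estimated:.2f} TiB")
```

The result comes out at about 14.80 TiB, matching the report, so the filesystem itself does not look short on space; the data chunks are merely full while plenty of unallocated space remains for new chunks.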