Your BTRFS filesystem looks OK. The "details" view concatenates the output of three separate BTRFS commands:
btrfs fi sh mountpoint
btrfs fi df mountpoint
btrfs dev stats mountpoint
The output of the last command should show dev stats for all four individual devices, but it is curtailed.
1.) Yes, you're right. I tested it, and those three commands you mentioned, including the mount point, show exactly the same data:
btrfs fi sh /srv/dev-disk-by-uuid-73180add-31d0-4cac-b6ce-8a218144cd63
btrfs fi df /srv/dev-disk-by-uuid-73180add-31d0-4cac-b6ce-8a218144cd63
btrfs dev stats /srv/dev-disk-by-uuid-73180add-31d0-4cac-b6ce-8a218144cd63
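For reference, when run directly the last command prints five error counters for each of the four devices, in this format (the values shown here are just placeholders to illustrate the layout, not necessarily my actual output, and I'm only showing one disk):

[/dev/sda].write_io_errs    0
[/dev/sda].read_io_errs     0
[/dev/sda].flush_io_errs    0
[/dev/sda].corruption_errs  0
[/dev/sda].generation_errs  0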
If you want a full picture of the way data and metadata have been allocated, etc., use: btrfs fi us mountpoint.
2.) Yes, the command you mentioned, including the mount point, shows more data, as follows. It also shows that the metadata is running in RAID1:
root@nas:~# btrfs fi us /srv/dev-disk-by-uuid-73180add-31d0-4cac-b6ce-8a218144cd63
Overall:
    Device size:         10.92TiB
    Device allocated:     6.34TiB
    Device unallocated:   4.57TiB
    Device missing:         0.00B
    Device slack:           0.00B
    Used:                 6.27TiB
    Free (estimated):     2.32TiB (min: 2.32TiB)
    Free (statfs, df):    1.91TiB
    Data ratio:              2.00
    Metadata ratio:          2.00
    Global reserve:     512.00MiB (used: 0.00B)
    Multiple profiles:         no

Data,RAID1: Size:3.17TiB, Used:3.13TiB (98.89%)
   /dev/sda      2.21TiB
   /dev/sdb      2.21TiB
   /dev/sde      1.36TiB
   /dev/sdc    553.00GiB

Metadata,RAID1: Size:5.00GiB, Used:4.41GiB (88.17%)
   /dev/sda      3.00GiB
   /dev/sdb      4.00GiB
   /dev/sde      2.00GiB
   /dev/sdc      1.00GiB

System,RAID1: Size:8.00MiB, Used:480.00KiB (5.86%)
   /dev/sda      8.00MiB
   /dev/sdb      8.00MiB

Unallocated:
   /dev/sda    523.51GiB
   /dev/sdb    522.51GiB
   /dev/sde      1.36TiB
   /dev/sdc      2.19TiB
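If I read this output correctly, the "Free (estimated)" value roughly works out as the unallocated space divided by the data ratio, plus the unused part of the already allocated data chunks: 4.57TiB / 2 + (3.17TiB - 3.13TiB) ≈ 2.32TiB. That's just my own back-of-the-envelope check of the numbers above, not an official formula.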
As you built and copied data to your 4-device BTRFS RAID1 filesystem in stages, I'd do one full balance now. It will take a couple of hours to complete. You should not need a full balance in the future unless disks are replaced. A regular "filtered" balance of data only at, say, 5%, in step with your backup schedule, would be useful. See: https://wiki.tnonline.net/w/Btrfs/Balance.
Stick with RAID1 for data and metadata.
Running into a "no space left" error (ENOSPC) on BTRFS can be a PITA: a kind of catch-22, where you can't delete stuff because there isn't enough unallocated space for the additional metadata chunks the deletions need. It's a consequence of BTRFS first allocating space in 1GB chunks, before filling those with extents. You can be bitten by ENOSPC during a balance too, because of the "working space" it needs. See: https://wiki.tnonline.net/w/Btrfs/ENOSPC
So, enable email notifications and "edit" your filesystem in the WebUI to set a usage warning threshold at, say, 85%.
3.) Okay, I will stick with RAID1 for data and metadata and not switch to RAID1 for data and RAID1C3 for metadata.
I have enabled email notifications via the OMV GUI ("System" -> "Notification" -> "Settings"), entered the respective data from my email provider, and sending the test email worked fine.
I have also checked via the OMV GUI that such a usage warning is set for my filesystem ("Storage" -> "File Systems" -> "Edit" -> "Usage Warning Threshold" set to "85%").
However, this was already selected and activated by default.
As you built and copied data to your 4-device BTRFS RAID1 filesystem in stages, I'd do one full balance now. It will take a couple of hours to complete. You should not need a full balance in the future unless disks are replaced. A regular "filtered" balance of data only at, say, 5%, in step with your backup schedule, would be useful. See: https://wiki.tnonline.net/w/Btrfs/Balance.
4.) With "full balance" you mean the following command according to the link you've posted, right?:
btrfs balance start mountpoint
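(If I understand correctly, newer btrfs-progs versions print a warning and a short countdown for an unfiltered balance unless you add --full-balance, so presumably: btrfs balance start --full-balance mountpoint. Please correct me if that's wrong.)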
Okay, I will try to schedule a regular "filtered" balance of data only at 5% with my backup schedule.
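If I read the linked wiki correctly, that filtered, data-only balance at 5% would look something like this with my mount point:

btrfs balance start -dusage=5 /srv/dev-disk-by-uuid-73180add-31d0-4cac-b6ce-8a218144cd63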
Is there a way to see the progress while such a balance is running, or to tell when it has completed successfully?
To be honest, the last time I just left it running for a couple of hours until I could no longer hear any read/write activity from the HDDs.
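(My guess is that btrfs balance status /srv/dev-disk-by-uuid-73180add-31d0-4cac-b6ce-8a218144cd63 would show the progress of a running balance, but please correct me if there is a better way to check.)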