encrypt the data in the destination
Why do you need encryption? Encryption only serves its purpose if your physical hardware is stolen.
BTRFS, I know the raid implementation is still somewhat suspect and the recommendation is not to use it in production.
Is it so? My most recent info was that the danger in using btrfs raid5 exists only without power redundancy, which is an actual issue on any RAID solution. Thus, a power cut would be potentially dangerous on btrfs just as on any other RAID technology. On the other hand, as long as there is no power cut, btrfs would be as safe as any other solution. Did I miss something?
My most recent info was that the danger in using btrfs raid5 exists only without power redundancy
I found that on the btrfs Read the Docs page; I have no idea how old that is.
Yeah, but as I learned here, the RAID 5 write hole on btrfs is not btrfs-specific but exists in every RAID solution. It's just that the devs of other solutions no longer warn about it.
I'm the same, but one of the things I have found out is that btrfs sitting on top of mdadm cannot fix bitrot when it is detected, as there is only one copy of the data.
Hence the RAID being created only via BTRFS and not mdadm.
Thing is, there are a lot of ways to create the RAID, with different nuances regarding metadata and data duplication, that will require some reading to understand what they can and can't do and which one is best per use case.
Using Btrfs with Multiple Devices - btrfs Wiki (kernel.org)
For example:
You make a RAID1 on 2x disks with snapshotted subvolumes and run a scrub every week.
BTRFS will detect errors on scrub and will fix them with previous copies from the snapshots.
This is the theory behind this:
filesystems - Btrfs automatically bitrot correction with snapshots? - Unix & Linux Stack Exchange
I still haven't had a single error, but my system hasn't been up that long. (My main Pi has an uptime of maybe two and a half to three years.)
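Scheduling that weekly scrub is simple to automate; a minimal sketch as a cron script, assuming the RAID1 is mounted at the hypothetical path /mnt/raid1 and btrfs-progs is installed:

```shell
#!/bin/sh
# /etc/cron.weekly/btrfs-scrub -- hypothetical location; must be executable.
# Start a scrub on the RAID1 filesystem, wait for it to finish (-B),
# then append the per-device statistics to a log.
MOUNT=/mnt/raid1   # adjust to your actual mount point
btrfs scrub start -B "$MOUNT"
btrfs scrub status "$MOUNT" >> /var/log/btrfs-scrub.log
```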
Yeah but as I learned here the RAID 5 write hole on btrfs is not btrfs specific but exists on every raid solution. It is just the fact that devs of other solutions do not warn about that anymore.
HannesJo The problems with BTRFS raid5/6 go much deeper than just the write-hole problem. Not much has changed regarding this: https://www.reddit.com/r/btrfs…s_read_before_you_create/
@Soma Does your "scrub on NC DATA HDDS" do anything? Doesn't setting nodatacow also turn off checksums? If some of the data sitting on a btrfs raid1 profile has checksums turned off, doesn't that give you potential problems with data consistency?
Does your "scrub on NC DATA HDDS" do anything?
It checks for errors on it. Same goes with the rest of the scrubs.
Until now, I have always had 0 errors, so it's difficult to say if all is OK. (I want to assume yes.)
It's a RAID1 made by BTRFS
I can show the output later on.
The nodatacow is set only on another filesystem, and only on the mariadb folder, NOT on the whole drive: @sd_configs.
Nonetheless, I also do a timely push of all these filesystems to a backup on a secondary device.
I don't recall exactly, but I think you can also use btrfs send to make it recognize copies from there.
Need to find more info on this.
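For what it's worth, incremental btrfs send/receive between read-only snapshots generally looks something like the sketch below; all paths are hypothetical and both sides must be btrfs:

```shell
# Take a read-only snapshot (send requires read-only sources).
btrfs subvolume snapshot -r /mnt/data /mnt/data/snap-new

# First run: full transfer of a snapshot to the backup filesystem.
btrfs send /mnt/data/snap-old | btrfs receive /mnt/backup

# Later runs: send only the difference between the old and new snapshots,
# using the old one (already present on the backup side) as the parent.
btrfs send -p /mnt/data/snap-old /mnt/data/snap-new | btrfs receive /mnt/backup
```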
Soma Perhaps I misunderstood your setup, but using nodatacow on a raid1 profile concerns me. See, for example: https://www.reddit.com/r/btrfs…s/m2bge7/comment/gqkvdpp/
The problems with BTRFS raid5/6 go much deeper than just the write hole problem. Not much has changed re: this https://www.reddit.com/r/btrfs…s_read_before_you_create/
That's interesting, thanks. So some say it is as safe as any solution and some say it is not, while the official notice from the btrfs devs only mentions the write-hole issue, which seems to be an actual problem of all RAID solutions. The topic remains unclear 🤔
votdev has not implemented a raid5/6 btrfs profile in the latest changes to OMV6, which seems entirely wise to me. If you want to get into the nitty-gritty, maybe this will satisfy your curiosity:
but using nodatacow on a raid1 profile concerns me
Ok, I'll try to explain in a way that even I can understand, since this is still a grey area to me. What was done was some time ago, so the steps are somewhat forgotten.
The only RAID1 I'm using is, indeed on the NC DATA HDDs.
This RAID holds only the DATA folder used on the NC.
If memory doesn't fail me, it was done with this command:
mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdc1
Yes, on partitions, because I didn't know better at the time, so I did it Windows style:
Initialized both disks on the Pi with fdisk.
Created the partitions and formatted them in BTRFS.
Only after that did I make the RAID with the above command.
This RAID is a pure CoW system.
As for the other disk (even though there are 2 filesystems on it), it's an SSD with 4 partitions but no RAID.
Partition 1 == the /boot in FAT32 that is required on the Pis (maybe all SBCs).
Partition 2 == an ext4 that holds the initial / before I cloned it into partition 3. This was done to have the BTRFS kernel module included in the initramfs so that BTRFS would be recognized at boot time.
This was needed at the time, since the Pi's kernel didn't have it included (different story).
Partition 3 == the actual root partition in BTRFS, which has a subvolume with snapshots and is where the boot partition points to.
This is the cmdline.txt on the /boot partition that points to the above:
pi@tacho:~ $ cat /boot/cmdline.txt
console=serial0,115200 console=tty1 root=PARTUUID=8f4dbd00-03 rootfstype=btrfs rootflags=subvol=@root rootwait
See the rootflags?
In fstab, you can also add more options:
LABEL=sd_btrfs / btrfs noatime,nodiratime,defaults,ssd,subvol=@root,compress=zstd 0 0
With this in place, SNAPPER then makes timely snapshots of the above subvol: on every boot, on every apt run (pre & post), and with the specified timeline in its default config, where you can set how many you want per day, week, year.
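For reference, setting up such a config from scratch looks roughly like the sketch below (the timeline values mirror the limits described above; assumes snapper is installed and / is a btrfs subvolume):

```shell
# Create a snapper config for the root btrfs subvolume.
snapper -c root create-config /

# Tune the timeline: 2 hourly, 2 daily, 1 weekly, no monthly/yearly snapshots.
snapper -c root set-config \
  TIMELINE_CREATE=yes \
  TIMELINE_LIMIT_HOURLY=2 \
  TIMELINE_LIMIT_DAILY=2 \
  TIMELINE_LIMIT_WEEKLY=1 \
  TIMELINE_LIMIT_MONTHLY=0 \
  TIMELINE_LIMIT_YEARLY=0
```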
pi@tacho:~ $ snapper list-configs
Config | Subvolume
--------+-------------------------------------------
appdata | /srv/dev-disk-by-label-sd_configs/@appdata
docker | /srv/dev-disk-by-label-sd_configs/@docker
root | /
pi@tacho:~ $ sudo snapper list
# | Type | Pre # | Date | User | Cleanup | Description | Userdata
-------+--------+-------+--------------------------+------+----------+-----------------+--------------
0 | single | | | root | | current |
756 | single | | Thu Apr 29 20:56:49 2021 | root | number | rollback backup | important=yes
757+ | single | | Thu Apr 29 20:56:49 2021 | root | | |
14406 | pre | | Wed Dec 7 14:15:09 2022 | root | number | apt |
14407 | post | 14406 | Wed Dec 7 14:16:30 2022 | root | number | apt |
--------
14660 | single | | Sat Dec 17 18:11:35 2022 | root | number | boot |
--------
15240 | pre | | Tue Jan 10 11:55:23 2023 | root | number | apt |
15241 | post | 15240 | Tue Jan 10 11:55:47 2023 | root | number | apt |
15271 | single | | Wed Jan 11 16:49:41 2023 | root | number | boot |
15344 | pre | | Sat Jan 14 16:06:45 2023 | root | number | apt |
15345 | post | 15344 | Sat Jan 14 16:07:41 2023 | root | number | apt |
--------
16111 | single | | Tue Feb 14 13:00:07 2023 | root | timeline | timeline |
16112 | single | | Tue Feb 14 14:00:02 2023 | root | timeline | timeline |
16113 | single | | Tue Feb 14 15:00:08 2023 | root | timeline | timeline |
16114 | single | | Tue Feb 14 16:00:08 2023 | root | timeline | timeline |
16115 | single | | Tue Feb 14 17:00:05 2023 | root | timeline | timeline |
pi@tacho:~ $ sudo snapper get-config
Key | Value
-----------------------+-------
ALLOW_GROUPS |
ALLOW_USERS | xxxxxxxx
BACKGROUND_COMPARISON | yes
EMPTY_PRE_POST_CLEANUP | yes
EMPTY_PRE_POST_MIN_AGE | 1800
FREE_LIMIT | 0.2
FSTYPE | btrfs
NUMBER_CLEANUP | yes
NUMBER_LIMIT | 50
NUMBER_LIMIT_IMPORTANT | 10
NUMBER_MIN_AGE | 1800
QGROUP |
SPACE_LIMIT | 0.5
SUBVOLUME | /
SYNC_ACL | no
TIMELINE_CLEANUP | yes
TIMELINE_CREATE | yes
TIMELINE_LIMIT_DAILY | 2
TIMELINE_LIMIT_HOURLY | 2
TIMELINE_LIMIT_MONTHLY | 0
TIMELINE_LIMIT_WEEKLY | 1
TIMELINE_LIMIT_YEARLY | 0
TIMELINE_MIN_AGE | 1800
Partition 4 == the rest of the SSD, with 80GB, where I created 2 subvolumes: @appdata && @docker.
@appdata is where the docker configs live, and where the mariadb folder lives with chattr +C set (I thought it was R).
To achieve this, I first launched the stack with the folders non-existent so they were created.
Then I deleted the content INSIDE the mariadb folder and set chattr +C on it. I copied the backup I had back inside the folder, and all the files were given the +C attribute.
The rest of the subvol stayed untouched.
@docker is the docker root path and this is untouched.
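The mariadb steps described above can be sketched roughly like this; the service name is an assumption, and the key point is that chattr +C only affects files created after the flag is set on the directory:

```shell
APP=/srv/dev-disk-by-label-sd_configs/@appdata

# Stop whatever writes into the folder first (service name is hypothetical).
docker stop mariadb

# Existing files keep CoW, so move them out, set +C, then copy them back in;
# the copies are new files and inherit the No_COW attribute from the directory.
mv "$APP/mariadb" "$APP/mariadb.bak"
mkdir "$APP/mariadb"
chattr +C "$APP/mariadb"
cp -a "$APP/mariadb.bak/." "$APP/mariadb/"

# Verify: the C attribute should show on the directory.
lsattr -d "$APP/mariadb"
```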
pi@tacho:~ $ lsattr /srv/dev-disk-by-label-sd_configs/@appdata/mariadb
---------------C------ /srv/dev-disk-by-label-sd_configs/@appdata/mariadb/config
pi@tacho:~ $ lsattr /srv/dev-disk-by-label-sd_configs/@appdata
---------------------- /srv/dev-disk-by-label-sd_configs/@appdata/redis
---------------C------ /srv/dev-disk-by-label-sd_configs/@appdata/mariadb
---------------------- /srv/dev-disk-by-label-sd_configs/@appdata/swag
---------------------- /srv/dev-disk-by-label-sd_configs/@appdata/nextcloud
---------------------- /srv/dev-disk-by-label-sd_configs/@appdata/adguard
---------------------- /srv/dev-disk-by-label-sd_configs/@appdata/wireguard
---------------------- /srv/dev-disk-by-label-sd_configs/@appdata/jellyfin
---------------------- /srv/dev-disk-by-label-sd_configs/@appdata/pihole
---------------------- /srv/dev-disk-by-label-sd_configs/@appdata/scrutiny
pi@tacho:~ $ sudo lsattr /srv/dev-disk-by-label-sd_configs/@docker/
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/btrfs
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/containers
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/plugins
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/image
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/volumes
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/trust
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/network
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/swarm
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/buildkit
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/engine-id
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/tmp
---------------------- /srv/dev-disk-by-label-sd_configs/@docker/runtimes
Is this what CoW is? Well, I want to think it is.
Unless CoW only applies when you have RAID, but that's not what I understood when I first read about BTRFS.
I think it's doable to test this in a VM:
Create a BTRFS RAID1, set a folder with chattr +C, populate it with files and see what happens.
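You don't even need a full VM; the test can be sketched with loop devices (run as root, assumes btrfs-progs; one caveat: files under +C lose their checksums, so scrub cannot verify their contents):

```shell
# Two backing files act as the "disks".
truncate -s 1G disk1.img disk2.img
DEV1=$(losetup --find --show disk1.img)
DEV2=$(losetup --find --show disk2.img)

# RAID1 for data and metadata across both loop devices.
mkfs.btrfs -f -m raid1 -d raid1 "$DEV1" "$DEV2"
mkdir -p /mnt/test
mount "$DEV1" /mnt/test

# NoCoW folder: new files created inside it get the C attribute.
mkdir /mnt/test/nocow
chattr +C /mnt/test/nocow
dd if=/dev/urandom of=/mnt/test/nocow/file bs=1M count=10
lsattr /mnt/test/nocow

# Scrub still runs, but the nodatacow file has no checksums to check.
btrfs scrub start -B /mnt/test
```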
Am I making any sense with the above?
Sorry if I can't explain it better.
assuming the filesystem in your partition has a "single" profile.
?!? How can I check this?
The format was done via mkfs.btrfs command.
Nothing wrong with using a partition rather than the whole device when creating a btrfs raid profile.
I know. I only mentioned it because it could have been done on the block device.
But I didn't know that then.
Correct.
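For completeness, the whole-block-device variant would look roughly like this (label and device names are hypothetical; this wipes both disks):

```shell
# Create the RAID1 directly on the bare devices; no partition table needed.
mkfs.btrfs -f -L wolf1 -m raid1 -d raid1 /dev/sdb /dev/sdc

# Mounting either member device brings up the whole multi-device filesystem.
mount /dev/sdb /mnt/raid1
```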
Is this what CoW is?
We've now got snappers and cows, no wonder it's confusing
As long as the audience is kept entertained, all is good 🤣
?!? How can I check this?
The format was done via mkfs.btrfs command.
I know. I only mentioned it because it could have been done on the block device.
But I didn't know that then.
Correct.
btrfs fi df ..., btrfs fi usage ..., or btrfs dev usage ...
e.g. for a filesystem mounted at /:
root@spiralvm:~# btrfs fi us /
Overall:
Device size: 19.70GiB
Device allocated: 9.02GiB
Device unallocated: 10.67GiB
Device missing: 0.00B
Used: 8.23GiB
Free (estimated): 10.97GiB (min: 5.63GiB)
Free (statfs, df): 10.96GiB
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 22.78MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:8.01GiB, Used:7.72GiB (96.36%)
/dev/vda2 8.01GiB
Metadata,DUP: Size:512.00MiB, Used:265.06MiB (51.77%)
/dev/vda2 1.00GiB
System,DUP: Size:8.00MiB, Used:16.00KiB (0.20%)
/dev/vda2 16.00MiB
Unallocated:
/dev/vda2 10.67GiB
root@spiralvm:~# btrfs dev us /
/dev/vda2, ID: 1
Device size: 19.70GiB
Device slack: 1.00KiB
Data,single: 8.01GiB
Metadata,DUP: 1.00GiB
System,DUP: 16.00MiB
Unallocated: 10.67GiB
root@spiralvm:~# btrfs fi sh /
Label: none uuid: f93cf12e-2289-471a-9a96-974db77d776e
Total devices 1 FS bytes used 7.98GiB
devid 1 size 19.70GiB used 9.02GiB path /dev/vda2
root@spiralvm:~# btrfs fi df /
Data, single: total=8.01GiB, used=7.72GiB
System, DUP: total=8.00MiB, used=16.00KiB
Metadata, DUP: total=512.00MiB, used=265.08MiB
GlobalReserve, single: total=22.78MiB, used=0.00B
root@spiralvm:~#
As long as the audience is kept entertained, all is good
Well, after reading your #31, I might as well be on another planet.
BTRFS on root drive (no RAID)
pi@tacho:~ $ sudo btrfs fi us /
Overall:
Device size: 16.00GiB
Device allocated: 9.02GiB
Device unallocated: 6.98GiB
Device missing: 0.00B
Used: 6.88GiB
Free (estimated): 8.39GiB (min: 8.39GiB)
Free (statfs, df): 8.39GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 26.33MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:8.01GiB, Used:6.60GiB (82.42%)
/dev/sdc3 8.01GiB
Metadata,single: Size:1.01GiB, Used:290.20MiB (28.12%)
/dev/sdc3 1.01GiB
System,single: Size:4.00MiB, Used:16.00KiB (0.39%)
/dev/sdc3 4.00MiB
Unallocated:
/dev/sdc3 6.98GiB
pi@tacho:~ $ sudo btrfs dev us /
/dev/sdc3, ID: 1
Device size: 16.00GiB
Device slack: 0.00B
Data,single: 8.01GiB
Metadata,single: 1.01GiB
System,single: 4.00MiB
Unallocated: 6.98GiB
pi@tacho:~ $ sudo btrfs fi df /
Data, single: total=8.01GiB, used=6.60GiB
System, single: total=4.00MiB, used=16.00KiB
Metadata, single: total=1.01GiB, used=290.23MiB
GlobalReserve, single: total=26.33MiB, used=0.00B
BTRFS on @appdata subvolume
pi@tacho:~ $ sudo btrfs fi us /srv/dev-disk-by-label-sd_configs/@appdata
Overall:
Device size: 79.99GiB
Device allocated: 15.96GiB
Device unallocated: 64.04GiB
Device missing: 0.00B
Used: 11.46GiB
Free (estimated): 67.50GiB (min: 67.50GiB)
Free (statfs, df): 67.49GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 51.22MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:14.70GiB, Used:11.24GiB (76.46%)
/dev/sdc4 14.70GiB
Metadata,single: Size:1.26GiB, Used:229.14MiB (17.79%)
/dev/sdc4 1.26GiB
System,single: Size:4.00MiB, Used:16.00KiB (0.39%)
/dev/sdc4 4.00MiB
Unallocated:
/dev/sdc4 64.04GiB
pi@tacho:~ $ sudo btrfs dev us /srv/dev-disk-by-label-sd_configs/@appdata
/dev/sdc4, ID: 1
Device size: 79.99GiB
Device slack: 0.00B
Data,single: 14.70GiB
Metadata,single: 1.26GiB
System,single: 4.00MiB
Unallocated: 64.04GiB
pi@tacho:~ $ sudo btrfs fi df /srv/dev-disk-by-label-sd_configs/@appdata
Data, single: total=14.70GiB, used=11.24GiB
System, single: total=4.00MiB, used=16.00KiB
Metadata, single: total=1.26GiB, used=229.14MiB
GlobalReserve, single: total=51.22MiB, used=0.00B
BTRFS on @docker subvolume
pi@tacho:~ $ sudo btrfs fi us /srv/dev-disk-by-label-sd_configs/@docker
Overall:
Device size: 79.99GiB
Device allocated: 15.96GiB
Device unallocated: 64.04GiB
Device missing: 0.00B
Used: 11.46GiB
Free (estimated): 67.49GiB (min: 67.49GiB)
Free (statfs, df): 67.49GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 51.22MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:14.70GiB, Used:11.24GiB (76.48%)
/dev/sdc4 14.70GiB
Metadata,single: Size:1.26GiB, Used:229.17MiB (17.79%)
/dev/sdc4 1.26GiB
System,single: Size:4.00MiB, Used:16.00KiB (0.39%)
/dev/sdc4 4.00MiB
Unallocated:
/dev/sdc4 64.04GiB
pi@tacho:~ $ sudo btrfs dev us /srv/dev-disk-by-label-sd_configs/@docker
/dev/sdc4, ID: 1
Device size: 79.99GiB
Device slack: 0.00B
Data,single: 14.70GiB
Metadata,single: 1.26GiB
System,single: 4.00MiB
Unallocated: 64.04GiB
pi@tacho:~ $ sudo btrfs fi df /srv/dev-disk-by-label-sd_configs/@docker
Data, single: total=14.70GiB, used=11.24GiB
System, single: total=4.00MiB, used=16.00KiB
Metadata, single: total=1.26GiB, used=229.20MiB
GlobalReserve, single: total=51.22MiB, used=0.00B
BTRFS on 2x HDD (RAID1)
pi@tacho:~ $ sudo btrfs fi us /srv/dev-disk-by-label-wolf1
Overall:
Device size: 7.28TiB
Device allocated: 414.06GiB
Device unallocated: 6.87TiB
Device missing: 0.00B
Used: 313.89GiB
Free (estimated): 3.48TiB (min: 3.48TiB)
Free (statfs, df): 3.48TiB
Data ratio: 2.00
Metadata ratio: 2.00
Global reserve: 242.69MiB (used: 0.00B)
Multiple profiles: no
Data,RAID1: Size:205.00GiB, Used:156.17GiB (76.18%)
/dev/sda1 205.00GiB
/dev/sdb1 205.00GiB
Metadata,RAID1: Size:2.00GiB, Used:790.39MiB (38.59%)
/dev/sda1 2.00GiB
/dev/sdb1 2.00GiB
System,RAID1: Size:32.00MiB, Used:64.00KiB (0.20%)
/dev/sda1 32.00MiB
/dev/sdb1 32.00MiB
Unallocated:
/dev/sda1 3.44TiB
/dev/sdb1 3.44TiB
pi@tacho:~ $ sudo btrfs dev us /srv/dev-disk-by-label-wolf1
/dev/sda1, ID: 1
Device size: 3.64TiB
Device slack: 3.50KiB
Data,RAID1: 205.00GiB
Metadata,RAID1: 2.00GiB
System,RAID1: 32.00MiB
Unallocated: 3.44TiB
/dev/sdb1, ID: 2
Device size: 3.64TiB
Device slack: 3.50KiB
Data,RAID1: 205.00GiB
Metadata,RAID1: 2.00GiB
System,RAID1: 32.00MiB
Unallocated: 3.44TiB
pi@tacho:~ $ sudo btrfs fi df /srv/dev-disk-by-label-wolf1
Data, RAID1: total=205.00GiB, used=156.18GiB
System, RAID1: total=32.00MiB, used=64.00KiB
Metadata, RAID1: total=2.00GiB, used=790.39MiB
GlobalReserve, single: total=242.69MiB, used=0.00B