BTRFS space cache v1 will be deprecated - Best way to change to v2?

  • Asking for the best way to make this transition.


    Background: some of my disks were created some time ago with btrfs-progs <= 5.14, where the default was "space cache v1".


    On boot, there's some warnings regarding it:

    Code
    USER@HOST:~ $ sudo dmesg | grep "space cach"
    [    8.686123] BTRFS info (device nvme1n1): disk space caching is enabled
    [    8.686312] BTRFS warning (device nvme1n1): space cache v1 is being deprecated and will be removed in a future release, please use -o space_cache=v2
    [    8.799068] BTRFS info (device sda1): disk space caching is enabled
    [    8.799273] BTRFS warning (device sda1): space cache v1 is being deprecated and will be removed in a future release, please use -o space_cache=v2
    [    9.047055] BTRFS info (device sdc1): disk space caching is enabled
    [    9.047290] BTRFS warning (device sdc1): space cache v1 is being deprecated and will be removed in a future release, please use -o space_cache=v2
    [    9.834824] BTRFS info (device sda2): disk space caching is enabled
    [    9.834826] BTRFS warning (device sda2): space cache v1 is being deprecated and will be removed in a future release, please use -o space_cache=v2


    While the others, created more recently with btrfs-progs >= 5.15, use the new default, space cache v2 (free-space-tree):

    Code
    USER@HOST:~ $ sudo dmesg | grep "space-tree"
    [    7.809409] BTRFS info (device sdf): using free-space-tree
    [    8.702583] BTRFS info (device sdb): using free-space-tree
    [    9.413756] BTRFS info (device sdd): using free-space-tree


    Reading about it, the advice is to unmount the drives first and then run several commands:

    Btrfs's Space Cache and Free Space Tree | Forza's Ramblings


    Since unmounting my drives will require some (a lot of) work to unreference everything and also to redo or stop some parts (mergerfs pool, the RAID1 that holds Nextcloud, among other things), I ask:

    ryecoaaron  votdev

    1 - Is it viable to just use the mount editor plugin and add the option space_cache=v2 to the drives in question, even though the instructions above say to remove v1 first?


    2 - Or is it better to run a Live Distro and do it with the drives unmounted, as long as that Distro has btrfs-progs >= 5.15?


    Reading further, it seems removing v1 and assigning v2 can take quite some time, depending on the size of the drive, so maybe it's better to stand fast and wait a while longer until the transition is really needed?

    Quote

    IMPORTANT! On very large filesystems, the first mount after changing Space Cache can take a long time. Usually several minutes, but there are reports of an hour or more for extreme cases with massive filesystems.

  • Me? I've not had to do this myself, but I'd favour method 2 and would probably create an up-to-date bootable USB stick with SystemRescue for the task.

    That is also a possibility.


    I also thought it would be simple to just use the Kernel plugin to boot to SystemRescue and do it that way.


    Unfortunately this causes an issue.

    The plugin uses the /boot drive to save the ISOs, and they can be BIG (according to the wiki):

    Quote

    Install SystemRescue

    • The space used by the ISO in the /boot directory on the OS drive: 687M


    My /boot is 512 MB, so I have to go the Live Distro way.



    ryecoaaron

    Is there any way to use a different folder (with more space, even if on the same drive, maybe /isos) to hold the ISOs for the kernel plugin?

  • What's the objection to using a SystemRescue USB stick? Lack of machine access? You don't need a monitor & keyboard attached to the machine. Setting the correct boot params for SystemRescue lets you ssh in and/or use VNC for remote access (see: https://www.system-rescue.org/manual/Booting_SystemRescue/).


    OMV does this. SystemRescue is downloaded to the local /boot directory (it is now 958 MB in size) and a custom entry is created via /etc/grub.d/42_sysresccd:


    Code
    ### BEGIN /etc/grub.d/42_sysresccd ###
    menuentry 'SystemRescue 11.02' {
      probe -u $root --set=rootuuid
      set imgdevpath="/dev/disk/by-uuid/$rootuuid"
      set isofile='/boot/systemrescue-11.02-amd64.iso'
      loopback loop "$isofile"
      linux (loop)/sysresccd/boot/x86_64/vmlinuz rootpass=openmediavault nofirewall archisobasedir=sysresccd copytoram dovnc vncpass=openmediavault setkmap=us img_dev="$imgdevpath" img_loop="$isofile" earlymodules=loop
      initrd (loop)/sysresccd/boot/intel_ucode.img (loop)/sysresccd/boot/amd_ucode.img (loop)/sysresccd/boot/x86_64/sysresccd.img
    }
    ### END /etc/grub.d/42_sysresccd ###

    If you then select "Boot to Systemrescue Once" in the WEBUI, AFAIK in the background /etc/default/grub is altered to use "GRUB_DEFAULT=saved", an "update-grub" is executed, followed by a "grub-reboot [num]", where num depends on the number of existing grub entries.
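    A sketch of that sequence done by hand, but against a scratch file so nothing on the real system is touched (the plugin's exact internals are an assumption based on the description above):

```shell
# Non-destructive sketch: make the same edit the plugin reportedly makes,
# but on a scratch stand-in for /etc/default/grub.
printf 'GRUB_DEFAULT=0\nGRUB_TIMEOUT=5\n' > /tmp/grub.test
sed -i 's/^GRUB_DEFAULT=.*/GRUB_DEFAULT=saved/' /tmp/grub.test
grep '^GRUB_DEFAULT=' /tmp/grub.test   # prints: GRUB_DEFAULT=saved

# On the real system (edit /etc/default/grub instead), the rest of the
# sequence would be:
#   sudo update-grub
#   sudo grub-reboot <menu-entry-number>   # one-shot choice
#   sudo reboot
```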


    I suppose you could do all this manually, pointing at other paths for the iso image, but I've never tested this myself. The big snag, of course, is the system boot sequence and whether the alternative iso path is available early enough in the boot process.
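    If you do try it, a hypothetical variant of the generated entry above, with only the ISO path changed to an assumed /isos directory (untested; it still depends on that path being readable by GRUB at boot, and you'd need to run update-grub afterwards):

```
### hypothetical fragment for /etc/grub.d/40_custom
menuentry 'SystemRescue 11.02 (custom ISO path)' {
  probe -u $root --set=rootuuid
  set imgdevpath="/dev/disk/by-uuid/$rootuuid"
  set isofile='/isos/systemrescue-11.02-amd64.iso'
  loopback loop "$isofile"
  linux (loop)/sysresccd/boot/x86_64/vmlinuz rootpass=openmediavault nofirewall archisobasedir=sysresccd copytoram dovnc vncpass=openmediavault setkmap=us img_dev="$imgdevpath" img_loop="$isofile" earlymodules=loop
  initrd (loop)/sysresccd/boot/intel_ucode.img (loop)/sysresccd/boot/amd_ucode.img (loop)/sysresccd/boot/x86_64/sysresccd.img
}
```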


    P.S. The Finnix iso is 498 MB - maybe just enough space on your /boot. You ssh into it with a new IP given by DHCP. It has btrfs-progs 6.6.3.


    I assume you have a fixed size /boot partition or btrfs sub-vol where the size cannot be increased.

  • What's the objection to using a systemrescue USB stick? Lack of machine access?

    I have access to the machine. Just wanted to find an easier solution so I don't have to touch it.

    For that, I can just launch the Live Debian and do it from there.


    Systemrescue is downloaded to local /boot directory and is now 958 MB in size and creates a custom entry in /etc/grub.cfg

    That's the issue with using SystemRescue from the OMV GUI.

    My boot partition (EFI) only has 512 MB and can't be increased.

    And I'm almost sure that most systems have the same (or smaller) size when installed with UEFI.

    I could change it, but that would involve starting a new install, making the partitions by hand with a 1 GB EFI partition, and then using omv-regen to move the system to the new stick.


    Too much hassle for such a simple task.

    I'll just go with Debian and sort it that way.


    Just a thought and maybe I'm seeing this wrong:

    I'm assuming that when OMV refers to the /boot folder, it means the 1st partition of the OS drive.


    Or is /boot mounted on the 2nd partition, with only /boot/efi mounted on the 1st partition?

    • Official Post

    /boot and /boot/efi are different partitions. /boot is typically on / on amd64 systems. What is the output of: df -h /boot

    omv 8.1.1-1 synchrony | 6.17 proxmox kernel

    plugins :: omvextrasorg 8.0.2 | kvm 8.0.7 | compose 8.1.5 | cterm 8.0 | borgbackup 8.1.7 | cputemp 8.0 | mergerfs 8.0 | scripts 8.0.1 | writecache 8.1.1


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • /boot and /boot/efi are different partitions. /boot is typically on / on amd64 systems. What is the output of: df -h /boot

    Code
    root@HOST:~# df -h /boot
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdg2        27G  4.6G   21G  18% /
    
    root@HOST:~# df -h /boot/efi/
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdg1       511M  5.9M  506M   2% /boot/efi

    OK, I got my answer.

    My /boot is on the rootfs partition, so there's plenty of space to use the ISOs from the kernel plugin.


    Sorry for all the above.

    I'll just boot once to SystemRescue and make the changes that need to be done.


    Thank you all

  • It seems that something isn't right with my system.

    When setting the kernel plugin to boot once, either to SystemRescue or Finnix, it just fails with:




    Maybe this is because I only have the proxmox kernel installed?

    • Official Post

    Maybe this is because I only have the proxmox kernel installed?

    There is a bug in grub 2.06 that causes this. 2.12 from backports fixes it. 2.06 from the proxmox repo works on some systems.


  • The iso is for amd64. Are you running on aarch64?

    It's pure amd64.

    there is a bug in grub 2.06 that causes this. 2.12 from backports fixes it. 2.06 from the proxmox repo works on some systems.

    Ok, I'll try to use the backports kernel (why oh why did I remove all the kernels except pve 6.11!!! :D ) and see how it goes.


    Thanks

    • Official Post

    I'll try to use the backports kernel

    You don't have to use the backports kernel. You just need grub from backports.


    there is a bug in grub 2.06 that causes this. 2.12 from backports fixes it. 2.06 from the proxmox repo works on some systems.

    That's my test situation:


  • You don't have to use the backports kernel. Just need grub from backports.

    I already have backports activated for the KVM plugin.

    The only kernel available is the proxmox one (I removed all the rest):

    Code
    USER@HOST:~ $ uname -a
    Linux panela 6.11.11-1-pve #1 SMP PREEMPT_DYNAMIC PMX 6.11.11-1 (2025-01-17T15:44Z) x86_64 GNU/Linux


    How do I install grub from backports? (sorry for piggy-backing)


    My system has this:

    Code
    USER@HOST:~ $ dpkg -l | grep grub
    ii  grub-common                 2.06-13+pmx2      amd64        GRand Unified Bootloader (common files)
    ii  grub-efi-amd64              2.06-13+pmx2      amd64        GRand Unified Bootloader, version 2 (EFI-AMD64 version)
    ii  grub-efi-amd64-bin          2.06-13+pmx2      amd64        GRand Unified Bootloader, version 2 (EFI-AMD64 modules)
    ii  grub-efi-amd64-signed     1+2.06+13+pmx2      amd64        GRand Unified Bootloader, version 2 (amd64 UEFI signed by Debian)
    ii  grub2-common                2.06-13+pmx2      amd64        GRand Unified Bootloader (common files for version 2)
  • Precisely: apt install -t bookworm-backports grub2-common

    Yep, it's that simple (mental note for next time).



    Will make an OS image first, just in case.

    Finally managed to boot to SystemRescue (thank you ryeco and krisbee for the tips regarding grub) and made the needed changes via ssh for each of the drives that had space cache v1.


    After rebooting back to OMV, no more complaints from dmesg, and ALL drives are now using the default v2:


    All working good.

  • Soma

    Added the Label resolved
  • Soma

    Hi, after reading your thread I checked and saw the same warning, which I would also like to remove. Would you mind sharing your steps?

    omv 7.7.0-1 (Sandworm) | x86_64 | Linux 6.12.9+1~bpo12+1 kernel

    Plugins: kernel 7.1.4 | compose 7.3.3 | flashmemory 7.0.1 | cputemp 7.0.2 | apttool 7.1.1 | sharerootfs 7.0-1 | omvextrasorg 7.0.1

  • Would you mind sharing your steps?

    The info is all in the thread.


    All you need is to boot to a Live distro (with the kernel plugin, you can use SystemRescue) and run the commands described in the link in post #1.


    Basically:


    Take note of which drives need to be changed:

    You can do this from the OMV CLI with the following (the output will look something like this):

    findmnt -t btrfs | grep ",space_cache,"


    Code
    ~ $ findmnt -t btrfs | grep ",space_cache,"
    /srv/dev-disk-by-uuid-425c5c15-b46e-4893-bbbe-24b743c25f1f /dev/nvme0n1 btrfs  rw,relatime,ssd,discard=async,space_cache,subvolid=5,subvol=/
    /srv/dev-disk-by-uuid-60fcc509-8d91-4963-8a89-8be1249b9f2d /dev/sda1    btrfs  rw,relatime,space_cache,subvolid=5,subvol=/
    /srv/dev-disk-by-uuid-7e674a01-d0b4-40c6-a27e-3aa187bc9ad7 /dev/sdc1    btrfs  rw,relatime,discard=async,space_cache,subvolid=5,subvol=/
    /srv/dev-disk-by-uuid-e2d03e5f-5d48-4fd2-bbce-68e8ebeab41b /dev/sda2    btrfs  rw,relatime,space_cache,subvolid=5,subvol=/

    The device/source column is the device name that needs to be used in the commands inside the Live distro.

    In my case above, that was /dev/nvme0n1 ; /dev/sda1 ; /dev/sda2 ; /dev/sdc1
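    As an aside, if there are many filesystems, the device column can be pulled out mechanically. A sketch on an inline sample (the sample lines and device names are made up; pipe the real findmnt output through the same awk instead):

```shell
# Filter a findmnt-style listing down to the devices still using the plain
# "space_cache" (v1) option; "space_cache=v2" lines are skipped.
sample='/srv/a /dev/sda1 btrfs rw,relatime,space_cache,subvolid=5,subvol=/
/srv/b /dev/sdb1 btrfs rw,relatime,space_cache=v2,subvolid=5,subvol=/'

# Column 2 is the device, column 4 the mount options.
printf '%s\n' "$sample" | awk '$4 ~ /,space_cache,/ {print $2}'   # prints /dev/sda1
```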


    Launch a Live distro and run locally (or via SSH, if available; SystemRescue provides this):


    btrfs check --clear-space-cache v1 /dev/sdXN (replace X with the device letter, and N with the partition number if the device is partitioned).


    If the above errors out due to a deprecated/unknown argument, use

    btrfs rescue clear-space-cache v1 /dev/sdXN instead (this is due to recent changes in the latest versions of btrfs-progs).


    Do it for all devices that need it.


    Afterwards, mount the devices to apply v2 (I made a folder for each one):

    mkdir /mnt/btrfs_1

    mkdir /mnt/btrfs_2

    etc...


    Mount each device with:

    mount /dev/sdXN /mnt/btrfs_1 -o space_cache=v2

    Wait for it to finish


    mount /dev/sdXN /mnt/btrfs_2 -o space_cache=v2

    Wait for it to finish

    etc...


    After all devices are done, unmount them:

    umount /mnt/btrfs_1

    umount /mnt/btrfs_2

    etc...


    Reboot the Live system and boot back to OMV.

    All drives will now use space cache v2 (free-space-tree) by default.

    Check it with:

    findmnt -t btrfs | grep "space_cache=v2"


    The output will confirm it:

    Code
    ~ $ findmnt -t btrfs | grep "space_cache=v2"
    /srv/dev-disk-by-uuid-425c5c15-b46e-4893-bbbe-24b743c25f1f /dev/nvme1n1 btrfs  rw,relatime,ssd,discard=async,space_cache=v2,subvolid=5,subvol=/
    /srv/dev-disk-by-uuid-87961ac3-5cc6-47ad-a289-0fa0f357b83b /dev/sdd     btrfs  rw,relatime,ssd,discard=async,space_cache=v2,subvolid=5,subvol=/
    /srv/dev-disk-by-uuid-60fcc509-8d91-4963-8a89-8be1249b9f2d /dev/sda1    btrfs  rw,relatime,space_cache=v2,subvolid=5,subvol=/
    /srv/dev-disk-by-uuid-2cc04ef0-6019-4a4a-9bfb-fcdc2e4beb59 /dev/sdf     btrfs  rw,relatime,space_cache=v2,subvolid=5,subvol=/
    /srv/dev-disk-by-uuid-7e674a01-d0b4-40c6-a27e-3aa187bc9ad7 /dev/sdc1    btrfs  rw,relatime,discard=async,space_cache=v2,subvolid=5,subvol=/
    /srv/dev-disk-by-uuid-541a4acb-b6c4-497f-b19d-86e478a37773 /dev/sdb     btrfs  rw,relatime,space_cache=v2,subvolid=5,subvol=/
    /srv/dev-disk-by-uuid-e2d03e5f-5d48-4fd2-bbce-68e8ebeab41b /dev/sda2    btrfs  rw,relatime,space_cache=v2,subvolid=5,subvol=/
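
    For reference, the whole per-device sequence above can be wrapped in a dry-run helper that only prints the commands it would run, so they can be reviewed before executing for real (device and mountpoint names below are placeholders):

```shell
# Dry-run sketch of the per-device conversion; remove the "echo"s to run it.
convert_space_cache() {
    dev="$1"; mnt="$2"
    # Clear the old v1 cache (the filesystem must be unmounted).
    # On newer btrfs-progs: btrfs rescue clear-space-cache v1 "$dev"
    echo btrfs check --clear-space-cache v1 "$dev"
    # The first mount with space_cache=v2 builds the free-space tree
    # (can take a while on large filesystems).
    echo mount -o space_cache=v2 "$dev" "$mnt"
    echo umount "$mnt"
}

convert_space_cache /dev/sdX1 /mnt/btrfs_1   # placeholders, not real devices
```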


    Hope this helps.
