Posts by prtigger1
-
Hi Aaron
Thank you, I just realized that these settings
have existed for a long time and aren't new with
OMV 8.
I never noticed them before, because I was focused
on /etc/sysctl.conf...

But it's not too late to keep an eye on the
default changes...
I guess that will not fix the performance
problem with my Btrfs storage...
I've decided to stay with the very well-performing
6.12.xx stable kernel until a newer backports
kernel comes up. 6.17.13 is EOL right now.
Best regards
Prtigger
-
The scrub is only a little bit slower than with 6.17.13 bpo:
Code
Linux pr-srv-01 6.17.4-2-pve #1 SMP PREEMPT_DYNAMIC PMX 6.17.4-2 (2025-12-19T07:49Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Jan 10 20:08:13 2026 from 192.168.150.100
root@pr-srv-01:~# btrfs scrub start -B -d /srv/dev-disk-by-uuid-a8a06053-0cc4-491d-adf2-01bb8a02fb44
Starting scrub on devid 1
Starting scrub on devid 2
Starting scrub on devid 3
Scrub device /dev/sdc (id 1) done
Scrub started:    Sat Jan 10 20:17:48 2026
Status:           finished
Duration:         1:08:31
Total to scrub:   457.41GiB
Rate:             113.94MiB/s
Error summary:    no errors found
Scrub device /dev/sdb (id 2) done
Scrub started:    Sat Jan 10 20:17:48 2026
Status:           finished
Duration:         1:08:29
Total to scrub:   457.66GiB
Rate:             114.05MiB/s
Error summary:    no errors found
Scrub device /dev/sdd (id 3) done
Scrub started:    Sat Jan 10 20:17:48 2026
Status:           finished
Duration:         1:08:27
Total to scrub:   456.39GiB
Rate:             113.79MiB/s
Error summary:    no errors found
root@pr-srv-01:~#
Best regards
prtigger
-
Exactly this was my idea today!
I disabled backports, cleaned up the kernel packages and repos, and installed the latest 6.12.63 kernel meta package...
After that, I installed the 6.17.4-2-pve kernel... I have both kernels available to boot.
First test (writing a 1.8 GB file to the Samba share): the PVE kernel is slower than the Debian bpo 6.17.13 kernel was yesterday.
Compared to the current 6.12.63, the write performance is more than 20% slower... Worse!!
The Btrfs scrub has been running for 41 minutes now...
By the way, whether the settings in
/etc/sysctl.d/99-openmediavault-nonrot.conf
are useful for my hardware, I haven't fully checked yet... (what does -nonrot stand for?)
I've got an extra conf file, optimized for my network stack.
At the moment I override laptop_mode = 0 (changed yesterday, after I posted the information here).
But both kernels run with these settings...
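To check whether the drop-in values suit the hardware, one can list the keys the file sets and then query the value currently in effect for each. This is only a sketch: a sample file stands in for the real /etc/sysctl.d/99-openmediavault-nonrot.conf, and its two keys are invented for the demo. (The "-nonrot" suffix presumably refers to non-rotational drives, i.e. SSDs.)

```shell
# sketch: extract the keys a sysctl drop-in sets; a sample file
# stands in for /etc/sysctl.d/99-openmediavault-nonrot.conf
# (the two keys below are examples, not the file's real contents)
conf=/tmp/99-demo-nonrot.conf
cat > "$conf" <<'EOF'
vm.laptop_mode = 0
vm.dirty_ratio = 20
EOF
grep -Eo '^[a-z][a-z0-9._]*' "$conf"
# on the real system, follow up with "sysctl <key>" for each listed key
# to see the value currently in effect
```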
Best regards
prtigger
-
Hi Aaron
That's a possible explanation...
The load is higher and the cache usage is
lower during the scrub. That's what I can see.
The scrub time difference is massive...
Normally my system performs pretty
well with the older kernel...
Whether there is any way to optimize this bad behavior, I don't know at this time.
Switching to another file system may be the
last resort...
That's the current status...
Thanks and best regards
prtigger
-
Hi folks
I've found fundamentally degraded Btrfs performance (scrub + filesystem writes) with the bpo kernel 6.17.13, compared with the stable 6.12.57 kernel
on my Supermicro server. No change to the system, just the boot kernel selected in GRUB. The filesystem is a Btrfs RAID 1 over 3 Samsung SSDs.
Scrub with 6.17.13:
Code
root@pr-srv-01:~# btrfs scrub start -B -d /srv/dev-disk-by-uuid-a8a06053-0cc4-491d-adf2-01bb8a02fb44
Starting scrub on devid 1
Starting scrub on devid 2
Starting scrub on devid 3
Scrub device /dev/sdc (id 1) done
Scrub started:    Thu Jan  8 21:26:19 2026
Status:           finished
Duration:         1:07:10
Total to scrub:   456.26GiB
Rate:             115.93MiB/s
Error summary:    no errors found
Scrub device /dev/sdb (id 2) done
Scrub started:    Thu Jan  8 21:26:19 2026
Status:           finished
Duration:         1:07:15
Total to scrub:   457.12GiB
Rate:             116.01MiB/s
Error summary:    no errors found
Scrub device /dev/sdd (id 3) done
Scrub started:    Thu Jan  8 21:26:19 2026
Status:           finished
Duration:         1:07:10
Total to scrub:   455.78GiB
Rate:             115.81MiB/s
Error summary:    no errors found
root@pr-srv-01:~#

Scrub with 6.12.57:
Code
root@pr-srv-01:~# btrfs scrub start -B -d /srv/dev-disk-by-uuid-a8a06053-0cc4-491d-adf2-01bb8a02fb44
Starting scrub on devid 1
Starting scrub on devid 2
Starting scrub on devid 3
Scrub device /dev/sdc (id 1) done
Scrub started:    Fri Jan  9 07:29:23 2026
Status:           finished
Duration:         0:33:00
Total to scrub:   457.41GiB
Rate:             236.56MiB/s
Error summary:    no errors found
Scrub device /dev/sda (id 2) done
Scrub started:    Fri Jan  9 07:29:23 2026
Status:           finished
Duration:         0:32:42
Total to scrub:   457.66GiB
Rate:             238.86MiB/s
Error summary:    no errors found
Scrub device /dev/sdd (id 3) done
Scrub started:    Fri Jan  9 07:29:23 2026
Status:           finished
Duration:         0:33:46
Total to scrub:   456.39GiB
Rate:             230.67MiB/s
Error summary:    no errors found
root@pr-srv-01:~#

That's double the time for exactly the same Btrfs data filesystem!
Copying a large file from a Windows 11 client to the Samba share is slower, too, but not by that much.
At this time I have no idea what changed in the Btrfs kernel driver, but on my old dual-core Intel Atom D510 server, this is bad news with 6.17.13.
If anybody has noticed identical performance issues, please keep me informed.
Ideas welcome!
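To put one number on a run for comparing kernels, the per-device "Rate:" lines of a saved scrub log can be averaged with a short awk one-liner. A sketch follows; the sample log file is a stand-in built from the 6.17.13 rates above.

```shell
# average the per-device "Rate:" lines of a saved btrfs scrub log;
# the sample log reuses the 6.17.13 rates from the post
cat > /tmp/scrub-6.17.log <<'EOF'
Rate: 115.93MiB/s
Rate: 116.01MiB/s
Rate: 115.81MiB/s
EOF
awk '/Rate:/ { gsub(/MiB\/s/, "", $2); sum += $2; n++ }
     END { printf "avg %.1f MiB/s over %d devices\n", sum/n, n }' /tmp/scrub-6.17.log
# prints: avg 115.9 MiB/s over 3 devices
```

The same command over a log from the other kernel makes the roughly 2x gap immediately visible.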
Best regards
prtigger
-
Thanks for your info.
And all this because of a change in Debian 13?... Because this never happened before.
It's not a big thing, but not very nice...

Best regards
prtigger
-
Thanks for your info!
In addition, I have no idea where the /etc/issue file is generated after an OMV
update is installed... If I get more information, I would try to implement a 'clear screen'
before the login text output...
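One way to get the 'clear screen' effect would be to prepend the ANSI clear-and-home escape sequence to the banner file. The block below is only a dry-run sketch on temp files; whether the OMV console getty honors raw ANSI escapes in /etc/issue, and whether OMV regenerates the file on updates, are assumptions to verify first.

```shell
# build a new banner whose first bytes clear the screen and home the
# cursor; temp files stand in for /etc/issue in this dry run
issue_src=$(mktemp)
printf 'OpenMediaVault banner text\n' > "$issue_src"   # stand-in content
issue_new=$(mktemp)
printf '\033[2J\033[H' > "$issue_new"   # ESC[2J = clear screen, ESC[H = cursor home
cat "$issue_src" >> "$issue_new"
# after checking the result one would: mv "$issue_new" /etc/issue
# (note: OMV may regenerate /etc/issue on updates, overwriting this)
```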
Best regards
prtigger
-
You may execute systemctl list-units | grep getty and post the output here.
Hi
here it is:
Code
root@pr-srv-01:~# systemctl list-units | grep getty
getty@tty1.service    loaded active running Getty on tty1
system-getty.slice    loaded active active  Slice /system/getty
getty.target          loaded active active  Login Prompts
root@pr-srv-01:~#

Another question:
Where does the 'beep' signal get generated (systemd?)?
Best regards
prtigger
-
Here is the photo right after the update to the latest version and reboot (login coming up on the console):
Here is the photo 10 seconds later, when the 'beep' signal comes up:
With the older OMV versions, the new output always overwrote the existing bootup output...
That's what I mean...
Best regards
prtigger
-
Should I execute the command
omv-salt deploy run issue
right after I've done an update with
omv-upgrade?
To me it looks like only a 'clear'
command is missing before the updated
login information is displayed (after boot, at the beep signal).
The result is that the old information is not
overwritten like before...
With the next update, I'll take a picture of this...
Best regards
prtigger
-
Thank you for your information!
If I've got time, I'll take a look at how to
migrate my sysctl.conf to the new scheme.
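One migration path is to move the custom settings into their own drop-in under /etc/sysctl.d/, which upgrades leave alone. A minimal sketch; the filename and values below are only examples, not my real tuning:

```
# /etc/sysctl.d/90-local-network.conf  (hypothetical name)
# custom network-stack tuning kept out of /etc/sysctl.conf,
# so an OMV upgrade cannot overwrite it
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
```

After creating the file, `sysctl --system` reloads all drop-ins without a reboot.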
Best regards
prtigger
-
In addition:
Another small problem I found after the upgrade:
the upgrade procedure 'killed' my modified /etc/sysctl.conf.
Best regards
prtigger
-
Hi folks
After updating from 7.7.24-4 to 8.x I noticed that on the server console screen, the login screen is not cleared again when it is
refreshed with the new OMV version number (after an OMV version update), at the beep signal.
Now with OMV 8, all new login information is appended right after the bootup login information, on the same page.
With 7.x and 6.x this never happened.
Is this a feature?
Best regards
prtigger
-
Hi Aaron
Thank you, enabling backports in omv-extras helped!
Best regards
prtigger
-
Hi folks
Is there something to do to get the Trixie
bpo kernel? I assumed that backports were
enabled by default, but I only got the stable 6.12
kernel...
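For reference, enabling backports boils down to an APT source like the following plus installing the kernel from it; on OMV the toggle in omv-extras is the supported way, and the filename here is hypothetical:

```
# /etc/apt/sources.list.d/backports.list  (hypothetical name)
deb http://deb.debian.org/debian trixie-backports main contrib non-free-firmware
```

followed by `apt update && apt install -t trixie-backports linux-image-amd64`.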
Best regards
prtigger
-
Hi Aaron
Thank you for patching...
Best regards
prtigger
-
Hi Aaron
No, it's not breaking my system!
Sorry, I don't want to bother you with these warnings...
If I know this is a known problem and
I don't have to think about it, everything is
fine for me...
Thanks and best regards
prtigger