nope, i don't think so. i was just wondering why this happened, since it was working on 7.x, and i saw a similar thread, so i thought it might be related
Posts by molnart
-
i forgot to mention i am on ZFS. so maybe this is somehow related to how the ZFS datasets are served to NFS?
if you add it back now, do they work?
no, they stop working again. i also tried removing the NFS shares and re-adding them (had to do it in the past a few times) but did not help until i removed the subtree_check param
i also realized showmount is not working even when executed locally on the OMV host.
This is my /etc/exports file. Not sure where those last 3 lines are coming from
Code
/export/Images 192.168.50.0/24(fsid=949dfd02-f8d3-4218-9718-bb0b12f83354,rw,subtree_check,insecure,sync,crossmnt,no_root_squash)
/export/Camera 192.168.50.0/24(fsid=66488731-3e06-4764-8f88-2429c77d3193,rw,insecure,crossmnt)
/export/Backups 192.168.50.1/24(fsid=a4030be9-ace3-46cc-a885-524df59276d0,rw,async,no_wdelay,crossmnt,insecure,no_root_squash,insecure_locks)
/export/downloads 192.168.50.60/24(fsid=124d3cce-0e98-4d05-bd04-148fcb43047b,rw,insecure,crossmnt,insecure_locks,no_root_squash)
/export/Movies 192.168.50.60/24(fsid=69037c28-3b71-4df8-9ec9-b21a97c22d31,rw,insecure,crossmnt)
/export/Music 192.168.50.60/24(fsid=3bdd7c74-3e29-436e-91a1-fe3a9e5cee22,rw,insecure,crossmnt)
/export/Documents 192.168.50.60/24(fsid=001c6e95-38bb-442f-bf29-2f57bf094729,rw,subtree_check,insecure,crossmnt)
/export/Photos 192.168.50.60/24(fsid=1b43221a-8b63-4079-b8cb-a49f47f03d3c,rw,insecure,crossmnt)
/export/various 192.168.50.0/24(fsid=170572e0-d5c9-4160-a0f1-559a43560ce7,rw,subtree_check,insecure,crossmnt)
/export 192.168.50.0/24(ro,fsid=0,root_squash,subtree_check,insecure)
/export 192.168.50.1/24(ro,fsid=0,root_squash,subtree_check,insecure)
/export 192.168.50.60/24(ro,fsid=0,root_squash,subtree_check,insecure)
-
i have noticed the same issue after updating to OMV 8 - my NFS shares were empty on all my clients.
i realized i had to remove subtree_check from all the shares for them to show up on the clients.
Also not sure if related, but the showmount -e command does not seem to work since OMV 8. i am just getting a clnt_create: RPC: Program not registered error.
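Regarding the subtree_check removal: a minimal sketch of stripping that option from export lines. This only transforms a sample string, so nothing on the system is touched; on a real host you would edit /etc/exports and then reload with `sudo exportfs -ra` (the path and sample line match the exports file shown elsewhere in this thread).

```shell
# Sample export line (taken from the /etc/exports posted in this thread,
# fsid shortened for readability):
line='/export/Documents 192.168.50.60/24(rw,subtree_check,insecure,crossmnt)'

# Drop "subtree_check" from the option list, whether it appears
# mid-list (with a trailing comma) or at the end (with a leading one).
cleaned=$(printf '%s\n' "$line" | sed -e 's/subtree_check,//' -e 's/,subtree_check//')

echo "$cleaned"
# On a live system, after editing /etc/exports the same way:
#   sudo exportfs -ra
```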
-
What is installed at this point?
just the 6.14 kernel. no proxmox-kernel or proxmox-headers meta package - because that installs the 6.17 kernel as far as i understood
Code
$ dpkg -l | grep proxmox
ii proxmox-headers-6.14 6.14.11-4 all Latest Proxmox Kernel Headers
ii proxmox-headers-6.14.11-4-pve 6.14.11-4 amd64 Proxmox Kernel Headers
rc proxmox-kernel-6.11.11-1-pve-signed 6.11.11-1 amd64 Proxmox Kernel Image (signed)
ii proxmox-kernel-6.14 6.14.11-4 all Latest Proxmox Kernel Image
ii proxmox-kernel-6.14.11-4-pve-signed 6.14.11-4 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.14.8-3-bpo12-pve-signed 6.14.8-3~bpo12+1 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.5.13-3-pve-signed 6.5.13-3 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.5.13-5-pve-signed 6.5.13-5 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-1-pve-signed 6.8.12-1 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-10-pve-signed 6.8.12-10 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-11-pve-signed 6.8.12-11 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-12-pve-signed 6.8.12-12 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-13-pve-signed 6.8.12-13 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-14-pve-signed 6.8.12-14 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-15-pve-signed 6.8.12-15 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-16-pve-signed 6.8.12-16 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-2-pve-signed 6.8.12-2 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-3-pve-signed 6.8.12-3 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-4-pve-signed 6.8.12-4 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-5-pve-signed 6.8.12-5 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-6-pve-signed 6.8.12-6 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-7-pve-signed 6.8.12-7 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-8-pve-signed 6.8.12-8 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-9-pve-signed 6.8.12-9 amd64 Proxmox Kernel Image (signed)
You download from github.
Thanks, all fixed now
-
thanks a lot for your help, i really appreciate it.
so i did this:
- sudo dpkg -P proxmox-kernel-6.17
- ran omv-upgrade to see how things were working, but still got an attempt to build the dkms module for 6.17
- so i removed the nvidia drivers by running apt remove nvidia-kernel-dkms
- ran omv-upgrade again, this removed a bunch of unused nvidia dependencies
- rebooted
- ran sudo apt install nvidia-driver but this has again attempted (and failed) to build the module for 6.17
looking at dpkg -l there are still some remnants of the 6.17 kernel
Code
$ dpkg -l | grep 6.17
pi proxmox-headers-6.17 6.17.4-1 all Latest Proxmox Kernel Headers
ii proxmox-headers-6.17.4-1-pve 6.17.4-1 amd64 Proxmox Kernel Headers
ic proxmox-kernel-6.17.4-1-pve-signed 6.17.4-1 amd64 Proxmox Kernel Image (signed)
$ sudo dpkg -P proxmox-headers-6.17
dpkg: dependency problems prevent removal of proxmox-headers-6.17:
 proxmox-default-headers depends on proxmox-headers-6.17.
dpkg: error processing package proxmox-headers-6.17 (--purge):
 dependency problems - not removing
Errors were encountered while processing:
 proxmox-headers-6.17
molnart@omv6:~$ sudo dpkg -P proxmox-headers-6.17.4-1-pve
dpkg: dependency problems prevent removal of proxmox-headers-6.17.4-1-pve:
 proxmox-headers-6.17 depends on proxmox-headers-6.17.4-1-pve.
dpkg: error processing package proxmox-headers-6.17.4-1-pve (--purge):
 dependency problems - not removing
Errors were encountered while processing:
 proxmox-headers-6.17.4-1-pve
i assume these dependencies are the proxmox meta packages.
- so i did a sudo apt remove proxmox-headers-6.17 followed by a sudo apt autoremove. this time there were no 6.17 packages listed by dpkg anymore.
- rebooted again
- sudo apt install nvidia-driver - this time modules only built for 6.14
- rebooted
- success
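For reference, the two-letter prefixes seen in the dpkg -l output above (pi, ii, ic, iU, iF, rc) encode the desired state (first letter: i=install, r=remove, p=purge) plus the current state (second letter: i=installed, c=config files only, U=unpacked, F=half-configured). A small sketch of a decoder for the combinations that appear in this thread:

```shell
# Decode the two-letter status prefix shown by `dpkg -l`.
# First letter = desired state (i=install, r=remove, p=purge),
# second letter = current state (i=installed, c=config-files only,
# n=not installed, U=unpacked, F=half-configured).
decode_dpkg_state() {
  case "$1" in
    ii) echo "installed" ;;
    rc) echo "removed, config files remain" ;;
    pi) echo "marked for purge, but still installed" ;;
    ic) echo "marked for install, only config files present" ;;
    iU) echo "marked for install, unpacked but not configured" ;;
    iF) echo "marked for install, half-configured" ;;
    *)  echo "other: $1" ;;
  esac
}

decode_dpkg_state pi
decode_dpkg_state iF
```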
so now everything seems to be OK, but i don't have the proxmox kernel meta packages installed. i wonder if this will bite me in the future.
this is what the fix7to8upgrade script fixes
where do i find this script? does not seem to be installed on the system
now i should update my secondary server as well. this one is running bare-metal, so no easy rollback. but also no docker, zfs or nvidia drivers, so hopefully it will go smoothly.
-
Try: sudo omv-salt deploy run nginx phpfpm
Thanks, that was it! The web UI now works.
But in the UI all the plugins look to be stuck at their 7.x versions
Not sure if this is a bug or something related to my install. According to dpkg -l the plugin versions are correct:
Code
$ dpkg -l | grep openmediavault
ii openmediavault 8.0.3-1 all openmediavault - The open network attached storage solution
ii openmediavault-backup 8.0.1 all backup plugin for OpenMediaVault.
ii openmediavault-compose 8.0.6 all OpenMediaVault compose plugin
ii openmediavault-cterm 8.0 all openmediavault container exec terminal plugin
ii openmediavault-diskclone 7.0 all disk clone plugin for OpenMediaVault.
ii openmediavault-diskstats 8.0-4 all openmediavault disk monitoring plugin
ii openmediavault-ftp 8.0-8 all openmediavault FTP-Server plugin
ii openmediavault-kernel 8.0.4 all kernel package
ii openmediavault-keyring 1.0.2-2 all GnuPG archive keys of the openmediavault archive
ii openmediavault-mounteditor 8.0 all openmediavault mount editor plugin
ii openmediavault-omvextrasorg 8.0.2 all OMV-Extras.org Package Repositories for OpenMediaVault
ii openmediavault-onedrive 8.0-5 all openmediavault OneDrive plugin
ii openmediavault-resetperms 8.0 all Reset Permissions
ii openmediavault-rsnapshot 8.0 all OpenMediaVault rsnapshot backup plugin.
ii openmediavault-salt 8.0 amd64 Extra Python packages required by Salt on openmediavault
ii openmediavault-sftp 8.0 all sftp server
ii openmediavault-sharerootfs 8.0-1 all openmediavault share root filesystem plugin
ii openmediavault-snmp 8.0-4 all openmediavault SNMP (Simple Network Management Protocol) plugin
ii openmediavault-zfs 8.0.1 amd64 OpenMediaVault plugin for ZFS
The kernel plugin allows installing the 6.14 proxmox kernel on omv8 which should allow compiling the nvidia module.
yep, that is the one i am running, and it's working fine, incl. the nvidia drivers.
but i still have the 6.17 kernel lingering around somewhere, as every apt update tries to build the nvidia drivers for it, basically blocking me from installing or removing anything.
the 6.17 kernel is not visible from the UI, but at the same time 6.14 is NOT marked as the default (so i assume 6.17 is)
for a moment it appeared (after trying to remove non-proxmox kernels), and when i tried to uninstall it from the UI i got this - basically it looks like the unbuilt dkms module is blocking the removal of the kernel
Code
Reading package lists...
Building dependency tree...
Reading state information...
Package 'proxmox-kernel-6.17.4-1-pve' is not installed, so not removed
The following packages will be REMOVED:
 proxmox-kernel-6.17.4-1-pve-signed*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
2 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up nvidia-kernel-dkms (550.163.01-2) ...
Removing old nvidia-current/550.163.01 DKMS files...
Module nvidia-current/550.163.01 for kernel 6.14.11-4-pve (x86_64):
Before uninstall, this module version was ACTIVE on this kernel.
Deleting /lib/modules/6.14.11-4-pve/updates/dkms/nvidia-current.ko
Deleting /lib/modules/6.14.11-4-pve/updates/dkms/nvidia-current-modeset.ko
Deleting /lib/modules/6.14.11-4-pve/updates/dkms/nvidia-current-drm.ko
Deleting /lib/modules/6.14.11-4-pve/updates/dkms/nvidia-current-uvm.ko
Deleting /lib/modules/6.14.11-4-pve/updates/dkms/nvidia-current-peermem.ko
Running depmod... done.
Deleting module nvidia-current/550.163.01 completely from the DKMS tree.
Loading new nvidia-current/550.163.01 DKMS files...
Building for 6.14.11-4-pve and 6.17.4-1-pve
Building initial module nvidia-current/550.163.01 for 6.14.11-4-pve
Sign command: /lib/modules/6.14.11-4-pve/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Building module(s)... done.
Signing module /var/lib/dkms/nvidia-current/550.163.01/build/nvidia.ko
Signing module /var/lib/dkms/nvidia-current/550.163.01/build/nvidia-modeset.ko
Signing module /var/lib/dkms/nvidia-current/550.163.01/build/nvidia-drm.ko
Signing module /var/lib/dkms/nvidia-current/550.163.01/build/nvidia-uvm.ko
Signing module /var/lib/dkms/nvidia-current/550.163.01/build/nvidia-peermem.ko
Installing /lib/modules/6.14.11-4-pve/updates/dkms/nvidia-current.ko
Installing /lib/modules/6.14.11-4-pve/updates/dkms/nvidia-current-modeset.ko
Installing /lib/modules/6.14.11-4-pve/updates/dkms/nvidia-current-drm.ko
Installing /lib/modules/6.14.11-4-pve/updates/dkms/nvidia-current-uvm.ko
Installing /lib/modules/6.14.11-4-pve/updates/dkms/nvidia-current-peermem.ko
Running depmod... done.
Building initial module nvidia-current/550.163.01 for 6.17.4-1-pve
Sign command: /lib/modules/6.17.4-1-pve/build/scripts/sign-file
Signing key: /var/lib/dkms/mok.key
Public certificate (MOK): /var/lib/dkms/mok.pub
Building module(s)... (bad exit status: 2)
Failed command: env NV_VERBOSE=1 make -j4 modules KERNEL_UNAME=6.17.4-1-pve
Error! Bad return status for module build on kernel: 6.17.4-1-pve (x86_64)
Consult /var/lib/dkms/nvidia-current/550.163.01/build/make.log for more information.
dpkg: error processing package nvidia-kernel-dkms (--configure):
 installed nvidia-kernel-dkms package post-installation script subprocess returned error exit status 10
dpkg: dependency problems prevent configuration of nvidia-driver:
 nvidia-driver depends on nvidia-kernel-dkms (= 550.163.01-2) | nvidia-kernel-550.163.01 | nvidia-open-kernel-550.163.01; however:
  Package nvidia-kernel-dkms is not configured yet.
  Package nvidia-kernel-550.163.01 is not installed.
  Package nvidia-kernel-dkms which provides nvidia-kernel-550.163.01 is not configured yet.
  Package nvidia-open-kernel-550.163.01 is not installed.
dpkg: error processing package nvidia-driver (--configure):
 dependency problems - leaving unconfigured
Errors were encountered while processing:
 nvidia-kernel-dkms
 nvidia-driver
** CONNECTION LOST **
-
Headers don't cause modules to be built. Kernels do.
well, there were no excess kernels in the OMV menu except the one pve kernel booted.
IF you have pve-headers or proxmox-headers installed
i am quite positive i had those installed. also the installation procedure from the Kernels menu must have installed them.
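On the headers question: dkms builds a module for every kernel version that has a build tree under /lib/modules/<version>/build, which is what the headers packages provide - the log above shows it "Building for 6.14.11-4-pve and 6.17.4-1-pve". A sketch of that check, simulated on a temp directory so nothing real is inspected or changed (the cleanup command in the comment is an assumption based on the versions in the log):

```shell
# Simulate /lib/modules: only kernels with a build/ tree (i.e. headers
# installed) are dkms build targets. On a real system you would look at
# /lib/modules directly, and could drop one kernel from dkms with e.g.
#   sudo dkms remove nvidia-current/550.163.01 -k 6.17.4-1-pve
tmp=$(mktemp -d)
mkdir -p "$tmp/6.14.11-4-pve/build" "$tmp/6.17.4-1-pve/build" "$tmp/6.8.12-16-pve"

targets=$(for d in "$tmp"/*/; do
  [ -d "${d}build" ] && basename "$d"
done)

echo "$targets"    # lists only the kernels dkms would build for
rm -rf "$tmp"
```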
I am guessing you are removing the proxmox header meta package that the zfs plugin has an 'or' dependency on that I linked to above.
I have removed the kernels from the web interface and then the headers by doing sudo apt remove proxmox-headers-6.5 proxmox-headers-6.8 proxmox-headers-6.11. But in the end it might have removed the meta packages as well, as now when i try to remove the 6.17 kernel (which apparently does not play nicely with the trixie nvidia drivers) it also tries to remove the meta packages
Code
$ uname -a
Linux omv6 6.14.11-4-pve #1 SMP PREEMPT_DYNAMIC PMX 6.14.11-4 (2025-10-10T08:04Z) x86_64 GNU/Linux
$ sudo apt remove proxmox-kernel-6.17 proxmox-headers-6.17
The following packages were automatically installed and are no longer required:
 proxmox-headers-6.17.4-1-pve proxmox-kernel-6.17.4-1-pve-signed
Use 'sudo apt autoremove' to remove them.
REMOVING:
 proxmox-default-headers proxmox-headers-6.17 proxmox-kernel-6.17 pve-headers
Summary: Upgrading: 0, Installing: 0, Removing: 4, Not Upgrading: 0
4 not fully installed or removed.
Freed space: 58.4 kB
Continue? [Y/n] n
Abort.
i have also noticed that dpkg still lists a lot of kernel residues:
Code
$ dpkg -l | grep kernel
rc cpufrequtils 008-2 amd64 utilities to deal with the cpufreq Linux kernel feature
ii ipmitool 1.8.19-9 amd64 utility for IPMI control with kernel driver or LAN interface (daemon)
ii kmod 34.2-2 amd64 tools for managing Linux kernel modules
ii libaio1t64:amd64 0.3.113-8+b1 amd64 Linux kernel AIO access library - shared library
ii libdrm-amdgpu1:amd64 2.4.124-2 amd64 Userspace interface to amdgpu-specific kernel DRM services -- runtime
ii libdrm-common 2.4.124-2 all Userspace interface to kernel DRM services -- common files
ii libdrm-dev:amd64 2.4.124-2 amd64 Userspace interface to kernel DRM services -- development files
ii libdrm-intel1:amd64 2.4.124-2 amd64 Userspace interface to intel-specific kernel DRM services -- runtime
ii libdrm-nouveau2:amd64 2.4.124-2 amd64 Userspace interface to nouveau-specific kernel DRM services -- runtime
ii libdrm-radeon1:amd64 2.4.124-2 amd64 Userspace interface to radeon-specific kernel DRM services -- runtime
ii libdrm2:amd64 2.4.124-2 amd64 Userspace interface to kernel DRM services -- runtime
ii libsctp1:amd64 1.0.21+dfsg-1 amd64 user-space access to Linux kernel SCTP - shared library
ii libtraceevent1:amd64 1:1.8.4-2 amd64 Linux kernel trace event library (shared library)
ii liburing2:amd64 2.9-1 amd64 Linux kernel io_uring access library - shared library
ii nfs-kernel-server 1:2.8.3-1 amd64 support for NFS kernel server
ii nvidia-kernel-common 20240109+1 amd64 NVIDIA binary kernel module support files
iF nvidia-kernel-dkms 550.163.01-2 amd64 NVIDIA binary kernel module DKMS source
ii nvidia-kernel-support 550.163.01-2 amd64 NVIDIA binary kernel module support files
ii nvidia-modprobe 570.133.07-1 amd64 utility to load NVIDIA kernel modules and create device nodes
ii openmediavault-kernel 8.0.4 all kernel package
rc proxmox-kernel-6.11.11-1-pve-signed 6.11.11-1 amd64 Proxmox Kernel Image (signed)
ii proxmox-kernel-6.14 6.14.11-4 all Latest Proxmox Kernel Image
ii proxmox-kernel-6.14.11-4-pve-signed 6.14.11-4 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.14.8-3-bpo12-pve-signed 6.14.8-3~bpo12+1 amd64 Proxmox Kernel Image (signed)
iU proxmox-kernel-6.17 6.17.4-1 all Latest Proxmox Kernel Image
iF proxmox-kernel-6.17.4-1-pve-signed 6.17.4-1 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.5.13-3-pve-signed 6.5.13-3 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.5.13-5-pve-signed 6.5.13-5 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-1-pve-signed 6.8.12-1 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-10-pve-signed 6.8.12-10 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-11-pve-signed 6.8.12-11 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-12-pve-signed 6.8.12-12 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-13-pve-signed 6.8.12-13 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-14-pve-signed 6.8.12-14 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-15-pve-signed 6.8.12-15 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-16-pve-signed 6.8.12-16 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-2-pve-signed 6.8.12-2 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-3-pve-signed 6.8.12-3 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-4-pve-signed 6.8.12-4 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-5-pve-signed 6.8.12-5 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-6-pve-signed 6.8.12-6 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-7-pve-signed 6.8.12-7 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-8-pve-signed 6.8.12-8 amd64 Proxmox Kernel Image (signed)
rc proxmox-kernel-6.8.12-9-pve-signed 6.8.12-9 amd64 Proxmox Kernel Image (signed)
ii pve-firmware 3.17-2 all Binary firmware code for the pve-kernel
ii rsyslog 8.2504.0-1 amd64 reliable system and kernel logging daemon
ic zfs-dkms 2.3.2-2 all OpenZFS filesystem kernel modules for Linux
the big question now is, how to move forward?
i have a mostly functioning OMV 8 install - meaning ZFS works, docker works, shares work. but i can't get into the web ui because of error 502, and i cannot build the nvidia drivers for kernel 6.17, while i also cannot remove it, because that wants to remove the proxmox meta packages too.
i kinda consider this a half-success and am reluctant to go back to OMV 7 and start from scratch, given i see little chance of getting somewhere under 2-3 hours (twice that if i have to do it again with all the kernels installed).
do i have any paths to "fix" the OMV 8 install?
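As a side note on the kernel residues: most of those rows are in the "rc" state (removed, config files remain), and those entries are harmless leftovers rather than installed kernels. A hedged sketch of extracting their names, run here on a small sample of the listing above so nothing is actually purged:

```shell
# Sample rows in the format of `dpkg -l` output (taken from the listing
# posted above). On a real system you would pipe `dpkg -l` itself, and
# could then purge the leftovers with something like:
#   sudo dpkg -P $(dpkg -l | awk '$1 == "rc" { print $2 }')
sample='ii  proxmox-kernel-6.14  6.14.11-4  all  Latest Proxmox Kernel Image
rc  proxmox-kernel-6.8.12-16-pve-signed  6.8.12-16  amd64  Proxmox Kernel Image (signed)
rc  proxmox-kernel-6.5.13-3-pve-signed  6.5.13-3  amd64  Proxmox Kernel Image (signed)'

# Keep only the package names whose state column is exactly "rc".
leftovers=$(printf '%s\n' "$sample" | awk '$1 == "rc" { print $2 }')
echo "$leftovers"
```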
-
i am not expecting the upgrade script to handle this, i am trying to upgrade somehow with manual interventions, but so far they are failing. i have the proxmox kernel installed and i've been running it for the past years. 6.11.11-2-pve is the only kernel i have installed. yet, i cannot remove zfs-dkms, because it wants to remove omv-zfs as well. i could try to remove it with dpkg, maybe it will keep omv-zfs intact, but i am a bit reluctant to do that.
i tried to remove the dkms module via sudo dkms remove zfs/2.3.2 and then update to kernel 6.14, but somehow all the old headers i had removed before crept back in, so it's again rebuilding the kernel modules for 40 minutes, for 6.8, 6.11, 6.14 and 6.17. the rebuild in this case fails because there is already a newer module in the kernel:
Code
Error! Module version 2.3.2-2 for zfs.ko is not newer than what is already found in kernel 6.14.11-4-pve (2.3.4-pve1). You may override by specifying --force.
Error! Module version 2.3.2-2 for spl.ko is not newer than what is already found in kernel 6.14.11-4-pve (2.3.4-pve1). You may override by specifying --force.
Anyhow, i tried to update, then reboot, then remove the zfs-dkms package, then reboot again. i had to manually import the zfs pools and unlock them, but i still cannot get into the OMV web ui and am getting that 502 error.
there is also a remaining dkms error for the nvidia drivers and kernel 6.17, but i don't think that's blocking OMV since i am booted into kernel 6.14
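The "not newer" refusal above is just a version comparison: dkms sees that 2.3.2 (the packaged module) sorts older than 2.3.4-pve1 (the module already shipped inside the pve kernel) and skips the build. A minimal sketch of the same comparison using `sort -V`:

```shell
# Versions taken from the dkms error above.
dkms_ver="2.3.2"
kernel_ver="2.3.4-pve1"

# sort -V does version-aware ordering; the last line is the newest.
newest=$(printf '%s\n%s\n' "$dkms_ver" "$kernel_ver" | sort -V | tail -n 1)

echo "$newest"   # if this is the in-kernel version, dkms refuses the build
```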
-
i have spent at least 6 hours trying to update to OMV 8, with no luck so far.
one thing that caused extra annoyance was the presence of kernel headers even after removing the kernels themselves via the OMV Web UI. this caused kernel modules to be rebuilt at basically every step, with each rebuild taking 20-30 minutes for the many headers installed (and the modules get rebuilt even when the kernels are being removed with apt remove).
the issue with the update is that PVE kernel 6.17, which comes with OMV 8, is incompatible with zfs-dkms 2.3.2
Code
DKMS (dkms-3.2.2) make.log for zfs/2.3.2 for kernel 6.17.4-1-pve (x86_64)
Sat Dec 27 23:56:57 CET 2025
Running the pre_build script
# command: cd /var/lib/dkms/zfs/2.3.2/build/ && /var/lib/dkms/zfs/2.3.2/build/configure --disable-dependency-tracking --prefix=/usr --with-config=kernel --with-linux=/lib/modules/6.17.4-1-pve/build --with-linux-obj=/lib/modules/6.17.4-1-pve/build --with-qat= --host=
....
checking kernel source directory... /lib/modules/6.17.4-1-pve/build
checking kernel build directory... /lib/modules/6.17.4-1-pve/build
checking kernel source version... 6.17.4-1-pve
configure: error:
 *** Cannot build against kernel version 6.17.4-1-pve.
 *** The maximum supported kernel version is 6.14.
i have tried manually installing kernel and headers 6.14, and also tried removing zfs-dkms post-upgrade (since it's still stuck at 2.3.2 after the update), but i still could not log in to the web ui and the whole zfs service was down.
if i try to remove zfs-dkms before the upgrade, it wants to remove the zfs plugin as well. also apt autoremove is not removing the packages marked as no longer required.
Code
$ sudo apt remove zfs-dkms
[sudo] password for molnart:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following packages were automatically installed and are no longer required:
 libnvpair3linux libuutil3linux libzfs6linux libzpool6linux zfsutils-linux
Use 'sudo apt autoremove' to remove them.
The following packages will be REMOVED:
 openmediavault-zfs zfs-dkms zfs-zed
0 upgraded, 0 newly installed, 3 to remove and 0 not upgraded.
After this operation, 19.4 MB disk space will be freed.
Do you want to continue? [Y/n] n
Abort.
molnart@omv6:~$ sudo apt autoremove
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
-
i have reverted because it was 3 AM and i needed a functional OMV setup, and restoring a snapshot took like 10 seconds. will give it another try today and try to start earlier

-
I released a new version of omv-extras 7.0.6 that fixed the upgrade on my test system. I was able to reproduce the failure before that.
i have run into the very same issue as the OP by running omv-release-upgrade. i too have zfs-dkms installed, and building the kernel module for kernel 6.17 failed, so i could boot only the 6.11 bookworm kernel. for now i have just reverted to version 7 and did not try any fixes, mostly because i realized that i have too many kernels installed that are just prolonging the process.
also i am not sure if i should entirely remove dkms, since i (probably) need it for the nvidia drivers and gpu acceleration. anyhow, my understanding from this thread was that omv-extras 7.0.6 should handle the update fine even with zfs-dkms
-
some progress to follow here https://github.com/openmediava…diavault/debian/changelog
-
my expectation is that OMV 8 will be out shortly before support for Debian 12 expires, which will be in June 2026. I would not expect a release before April. also it seems that since OMV 6 it has moved to a kind of "rolling release" model, with new features delivered in minor versions and the major ones essentially "just" bumping the Debian release. given that no significant features seem to be around the corner, i'd expect the same with OMV 8
-
i think moving to a more standard REST API could help the OMV ecosystem and potentially bring in more plugin developers.
(as much as i love OMV, i feel its maintainers are getting irritated at the slightest joke or feature suggestion)
-
the biggest question is when this issue will be moved to OMV 9.x
https://github.com/openmediavault/openmediavault/issues/301 -
sudo dkms install zfs/2.3.1 and a reboot was all i needed
-
solved with an hour-long chatgpt session. turns out there was a mismatch between my ZFS version and the ZFS kernel module.
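A mismatch like that can be spotted without a long debugging session: `zfs version` prints both the userland and the kernel-module (kmod) version. A sketch of the comparison, run here on sample output (the version strings are illustrative, shaped like the 2.3.x versions in this thread; the real check is simply running `zfs version`):

```shell
# Sample output in the format `zfs version` prints: one userland line and
# one kernel-module line.
sample='zfs-2.3.1-1
zfs-kmod-2.3.2-1'

# The userland line starts "zfs-<digit>", the module line "zfs-kmod-".
userland=$(printf '%s\n' "$sample" | sed -n 's/^zfs-\([0-9].*\)/\1/p')
kmod=$(printf '%s\n' "$sample" | sed -n 's/^zfs-kmod-\(.*\)/\1/p')

if [ "$userland" = "$kmod" ]; then
  echo "versions match"
else
  echo "mismatch: userland $userland vs kmod $kmod"
fi
```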
-
I have recently noticed i am not getting any notifications about ZFS scrubs. When checking, i realized the `zed` service is failing. Trying to run zed manually, i get the `zfs_unavail_pool` error, but my pools are available, as reported by `zpool status` - see below.
What could be causing this?
Code
molnart@omv6:/var/log$ sudo zed -Fv
Ignoring "zed.rc": not executable by user
Registered zedlet "statechange-notify.sh"
Registered zedlet "pool_import-led.sh"
Registered zedlet "resilver_finish-notify.sh"
Registered zedlet "history_event-zfs-list-cacher.sh"
Registered zedlet "all-syslog.sh"
Registered zedlet "scrub_finish-notify.sh"
Registered zedlet "statechange-slot_off.sh"
Registered zedlet "vdev_clear-led.sh"
Registered zedlet "vdev_attach-led.sh"
Registered zedlet "statechange-led.sh"
Registered zedlet "deadman-slot_off.sh"
Registered zedlet "data-notify.sh"
Registered zedlet "zed.rc.dpkg-dist"
Registered zedlet "zed-functions.sh"
Registered zedlet "resilver_finish-start-scrub.sh"
ZFS Event Daemon 2.3.1-1~bpo12+1 (PID 1088249)
Add Agent: init
Diagnosis Engine: register module
Retire Agent: register module
zed_disk_event_init
Processing events since eid=0
Waiting for new udev disk events...
Exiting
zed_disk_event_fini
zfs_agent_consumer_thread: exiting
Retire Agent: fmd.accepted: 0
Retire Agent: unregister module
Diagnosis Engine: fmd.accepted: 0
Diagnosis Engine: fmd.caseopen: 0
Diagnosis Engine: fmd.casesolved: 0
Diagnosis Engine: fmd.caseclosed: 0
Diagnosis Engine: old_drops: 0
Diagnosis Engine: dev_drops: 0
Diagnosis Engine: vdev_drops: 0
Diagnosis Engine: import_drops: 0
Diagnosis Engine: resource_drops: 0
Diagnosis Engine: unregister module
Add Agent: fini
zfs_unavail_pool: examining 'StoragePool' (state 7)
zfs_unavail_pool: examining 'z-ssd' (state 7)
molnart@omv6:/var/log$ zpool status
  pool: StoragePool
 state: ONLINE
  scan: scrub repaired 0B in 20:40:28 with 0 errors on Sun May 11 21:04:29 2025
config:
        NAME                                      STATE     READ WRITE CKSUM
        StoragePool                               ONLINE       0     0     0
          raidz1-0                                ONLINE       0     0     0
            a755e11b-566a-4e0d-9e1b-ad0fe75c569b  ONLINE       0     0     0
            7038290b-70d1-43c5-9116-052cc493b97f  ONLINE       0     0     0
            678a9f0c-0786-4616-90f5-6852ee56d286  ONLINE       0     0     0
          raidz1-1                                ONLINE       0     0     0
            93e98116-7a8c-489d-89d9-d5a2deb600d4  ONLINE       0     0     0
            c056dab7-7c01-43b6-a920-5356b76a64cc  ONLINE       0     0     0
            ce6b997b-2d4f-4e88-bf78-759895aae5a0  ONLINE       0     0     0
errors: No known data errors

  pool: z-ssd
 state: ONLINE
  scan: scrub repaired 0B in 00:04:00 with 0 errors on Sun May 11 00:28:05 2025
config:
        NAME                                      STATE     READ WRITE CKSUM
        z-ssd                                     ONLINE       0     0     0
          mirror-0                                ONLINE       0     0     0
            173b4876-db9d-d948-b75c-ce4d475428b8  ONLINE       0     0     0
            54cc058c-3097-d242-9975-483d147300c1  ONLINE       0     0     0
errors: No known data errors
-
the issue here rather is that OMV does not handle the smartctl output from SAS drives, so any SMART failures may go unnoticed, as there will be no email alerts, nothing...
-
depending on the type of content and file size, rsync can be painfully slow.
from my limited knowledge, zfs send/receive is the recommended way of transferring data between pools. i have recently discovered bzfs https://github.com/whoschek/bzfs which looks like a wrapper around zfs send/receive. i haven't tried it myself, but the time will come soon, as i will possibly also need to recreate my zfs pool
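For reference, a sketch of what a snapshot-based transfer between pools typically looks like. This is a generic workflow, not bzfs-specific; the dataset names (StoragePool/data, newpool/data) are hypothetical, with only the pool name borrowed from earlier in this thread. The zfs commands themselves are left as comments since they need a real pool; only the snapshot naming runs here.

```shell
# Build a dated snapshot name, e.g. migrate-20250511.
snap="migrate-$(date +%Y%m%d)"
echo "$snap"

# Full initial copy of a dataset to another pool (recursive, keeping
# snapshots and properties; -u on receive leaves the target unmounted):
#   zfs snapshot -r StoragePool/data@"$snap"
#   zfs send -R StoragePool/data@"$snap" | zfs receive -u newpool/data
#
# Later incremental catch-up, sending only the delta since a prior snapshot:
#   zfs send -R -i @previous-snap StoragePool/data@"$snap" | zfs receive -u newpool/data
```

For large pools this avoids rsync's per-file scanning: send/receive streams at the block level and the incremental form transfers only changed blocks.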