Upgrade from OMV6 to OMV7: can't get Proxmox kernel working

  • I upgraded my OMV6 system to OMV7. I had some issues with the upgrade but got everything working and booting again; however, ZFS was no longer working. I tried installing both Proxmox kernel 6.2 and 6.5, but my system will not boot when I select these kernels. I would really like to avoid reinstalling to get everything working again. The system is up with the following kernel: Debian GNU/Linux, with Linux 6.1.0-18-amd64. Any suggestions on where I can look to find the cause of the boot issues? The boot output does not give anything.

  • What were the symptoms of ZFS not working after the upgrade? Did/does the pool refuse to import? Am I correct in thinking you were not using a PVE kernel in your OMV6 install, just the Debian kernel? It could be that the upgrade process did not rebuild the ZFS modules for the new Debian kernel in OMV7.


    I've never had problems getting PVE kernels to run on OMV6 or OMV7, so what symptoms are you getting when trying to boot with these kernels?


    Places to look include the status of the ZFS systemd services, dkms status, and the journalctl logs, for example:
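
    A minimal sketch of those checks (the systemd unit names are the usual OpenZFS ones and are an assumption; they may differ on your install):

    Bash
    # Status of the ZFS-related systemd units
    systemctl status zfs-import-cache.service zfs-mount.service zfs.target
    # Are the ZFS kernel modules built for the running kernel?
    dkms status
    # Kernel messages from the current boot, errors and worse only
    journalctl -b -k -p err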

  • I get a 500 error saying the ZFS module is not loaded and that I should try modprobe. I tried, but it says that for the native OMV kernel it can't find the module for ZFS.


    The thing with the Proxmox kernel is that it loads the NVMe drive and then that's it, nothing goes any further. Hard to explain; journalctl says nothing.
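
    (If the journal is persistent across reboots, which is an assumption about this install since Debian defaults to a volatile journal, the failed boot can be inspected from the working kernel, roughly like this:)

    Bash
    # List recorded boots; the failed PVE boot would appear as -1
    journalctl --list-boots
    # Kernel messages from the previous boot
    journalctl -b -1 -k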

  • I've just done an OMV6 to OMV7 upgrade using omv-release-upgrade, starting with a simple ZFS pool and using the Debian kernel only on a fully updated OMV6 instance. All's fine after the upgrade, so I don't think there's a lurking bug in the upgrade script.


    Please post the output of the following commands, which will help identify any problems with your upgrade (an illustrative healthy result is sketched after the list):


    1. dkms status 


    2. zfs -V


    3. dpkg -l | egrep "zfs|openm"
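
    For reference, on a healthy OMV7 install dkms status should show a zfs module built against the running kernel, roughly like this (version strings here are illustrative assumptions, not output from the poster's system):

    Bash
    root@omv7:~# dkms status
    zfs/2.1.11, 6.1.0-18-amd64, x86_64: installed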

  • The problem is not exactly ZFS. I have found that the Proxmox kernel works better and more reliably without breaking my ZFS pool, and that's what I would like to get working again. I will try again later tonight, after purging the Nvidia drivers, to see if that helps get the Proxmox kernel installed. I removed the openmediavault-zfs plugin because installing the Proxmox kernel gave an error with the ZFS plugin installed.
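
    (A rough sketch of that purge step, assuming the drivers came from Debian packages rather than Nvidia's .run installer:)

    Bash
    # Remove all Debian-packaged Nvidia components and their configs
    apt purge '*nvidia*'
    apt autoremove --purge
    # Rebuild the initramfs so no stale nvidia modules are embedded
    update-initramfs -u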

  • The feedback I asked for would be helpful, as I believe I may have found a bug in installing PVE kernels in some cases.

  • Here is the output of the requested commands. The ZFS tab still says:

    and modprobe zfs says:

    Bash
    root@omvserver:~# modprobe zfs
    modprobe: FATAL: Module zfs not found in directory /lib/modules/6.1.0-18-amd64
    root@omvserver:~#
  • Re-installing openmediavault-zfs has confused things a little compared with what your system might have been when you first posted. What kernel is running at the moment? From the outputs above, if it's the Debian kernel and not a PVE kernel, then your system is looking for a non-existent module, as dkms is empty.


    What's output of dpkg -l | grep "proxm" and uname -a ?

  • As stated above, I am unable to boot using a PVE kernel. I'm still using the Debian GNU/Linux, with Linux 6.1.0-18-amd64 kernel. Both PVE 6.2 and 6.5 kernels install successfully, no problem, but when I boot into them they do the initial ramdisk load, then switch to saying /dev/nvmensp1 clean 5788474/7748489493 blocks, or something like that, on the attached monitor console output, and don't go any further. I am not sure how to better explain the issue, apologies.

  • OMV may eventually boot after it appears to stall; that typically means it's waiting on systemd mounts. You would get more info during the boot process if you temporarily remove the word "quiet" from the boot params via the GRUB menu before booting.


    As you have no dkms entries for zfs, your pool will not be accessible when booting with the Debian kernel; a sketch of rebuilding the modules follows below. You'll need to provide the output as requested if you want to sort this out, plus a screenshot of your current WebUI tab "System | Kernel".
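
    A minimal sketch of that rebuild for the running Debian kernel (assuming the standard Debian package names and that the missing kernel headers or a skipped dkms build are the cause):

    Bash
    # dkms needs the headers for the running kernel to build modules
    apt install linux-headers-$(uname -r)
    # Reinstalling zfs-dkms triggers a module build for the running kernel
    apt install --reinstall zfs-dkms
    # Alternatively, ask dkms directly to build everything for this kernel
    dkms autoinstall -k $(uname -r)
    # Then try loading the module again
    modprobe zfs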

  • Thank you for your reply and patience; my apologies, as I am knowledgeable with Linux but certainly not yet an expert at diagnosing these matters. I will send you the screenshot of the WebUI tab "System > Kernel" once I get home from work today. Is there any other information I can give you that might help? Could it be that both the 6.2 and 6.5 kernels do not like my GT710 Nvidia GPU? I have also done nano /etc/default/grub and removed the "quiet" part from it to get output when booting the 6.5 kernel, to better understand what's going on. How long should I wait before I force the system to power down again when I test the 6.5 kernel again? My monitor is connected to the GPU's HDMI output.

  • You've cross-posted the same question elsewhere. This thread is not relevant to you if you start with PVE kernel + ZFS + OMV.

    Also, I was running Proxmox kernel + ZFS + OMV6, upgraded to OMV7, could not get ZFS working, and couldn't boot into the PVE kernel after the upgrade.

  • I don't mean you should edit /etc/default/grub, which you'd have to follow with update-grub for it to be effective. I mean: at boot, go to the GRUB menu, press "e" for edit, then find the line that ends in "quiet", backspace over it, and press F10 to boot, roughly as sketched below.
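
    For illustration, the kernel line in the GRUB editor looks roughly like this (the path, kernel version, and UUID below are placeholders, not from this system):

    Code
    # before (shown after pressing "e" in the GRUB menu)
    linux /boot/vmlinuz-6.5.13-6-pve root=UUID=... ro quiet
    # after: backspace over "quiet", then press F10 to boot
    linux /boot/vmlinuz-6.5.13-6-pve root=UUID=... ro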


    Waiting or not depends on the console error messages. TBH, I don't think of servers as running with Nvidia graphics cards, but OMV comes with these Nvidia packages:


    Code
    root@omv6vm:~# dpkg -l | grep nvidia
    ii  glx-alternative-nvidia          1.2.1~deb11u1                         amd64        allows the selection of NVIDIA as GLX provider
    ii  nvidia-installer-cleanup        20151021+13                           amd64        cleanup after driver installation with the nvidia-installer
    ii  nvidia-kernel-common            20151021+13                           amd64        NVIDIA binary kernel module support files
    ii  nvidia-modprobe                 525.78.01-1~bpo11+1                   amd64        utility to load NVIDIA kernel modules and create device nodes
    ii  nvidia-tesla-470-alternative    470.223.02-4~deb12u1                  amd64        allows the selection of NVIDIA as GLX provider (Tesla 470 version)
    ii  nvidia-tesla-470-kernel-support 470.223.02-4~deb12u1                  amd64        NVIDIA binary kernel module support files (Tesla 470 version)
    root@omv6vm:~#

    It's years since I've run any system with a separate graphics card, and I've forgotten the basics of getting Nvidia cards running with any form of Linux.


    What's output of dpkg -l | grep "pve"?

  • Thank you Krisbee for your help. I did as you suggested and noticed that the system got stuck at nvidiafb: Device ID: 10de128b, so, as I figured, it was my old Nvidia GPU that was holding back the boot process. I did some googling, found others had a similar problem, and did as suggested: I replaced "quiet" with "nomodeset", and that got it to boot using the PVE kernel 6.5. After that I installed the correct driver for my GPU and now it is working as expected; I imported my zpool and now have access to my drives. Thank you for your assistance.
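
    (For anyone landing here later: a minimal sketch of making that fix survive reboots, assuming the stock Debian GRUB setup and that "quiet" is the only existing entry in GRUB_CMDLINE_LINUX_DEFAULT:)

    Bash
    # In /etc/default/grub, replace "quiet" with "nomodeset"
    sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=.*/GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"/' /etc/default/grub
    # Regenerate the GRUB config so the change persists
    update-grub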

  • SILENT001

    Added the "solved" label.
