Issues with kernel 5.10.0-0.bpo.3-amd64 and NFS

  • Hi,


    We have two single-node Proxmox VE systems.

    Both get their own storage via NFS from two separate OMV hosts.

    (Completely separate locations.)

    The NFS storage holds the "hard disks" for several VMs.


    The hardware of the Proxmox VE hosts and the OMV hosts is completely identical.


    After upgrading OMV to kernel 5.10.0-0.bpo.3-amd64, the VMs can no longer boot from their hard disks.


    I checked the VMs' disks using a recovery boot-CD image: the disks were reachable from within the running VM, and no data was lost.

    But whenever I tried to boot from the hard disks, the VM got stuck on the SeaBIOS screen.


    When booting with kernel 5.7.0-0.bpo.2-amd64, everything runs fine.
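    For anyone hitting the same thing, a quick way to confirm which kernel is active and to fall back to 5.7 (a sketch only; the package name is taken from the versions mentioned above and may differ on your system):

```shell
# Show the kernel the OMV host is currently running
uname -r

# List the kernel images installed via apt (Debian)
dpkg --list 'linux-image-*' | grep '^ii' || true

# To stay on 5.7 until the regression is understood, the 5.10
# backports image can be removed so GRUB boots the older kernel:
#   apt remove linux-image-5.10.0-0.bpo.3-amd64
#   update-grub
```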


    Comparing the load graphs of OMV running 5.10.0-0.bpo.3 versus 5.7.0-0.bpo.2, the 5.10 host shows a significantly higher load in this scenario.


    It would be nice if someone could check this and maybe explain what causes this misbehaviour.


    Thanks for reading

    • Official Post

    Shouldn't be long, as Debian kernel 5.10.15 has been released.

    bullseye and sid are on 5.10.19. Unless your systems are bleeding edge hardware, I would consider using the proxmox kernel. Much more stable in my opinion.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.6 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Thanks for reading and for the suggestions.


    I'll stay on 5.7 and set up a testing system for upcoming updates.

    The storage systems' hardware is not the newest, but still good for the job.

    So no trouble with the latest hardware features, and no need for the freshest kernels from that point of view.

  • I stumbled across this kernel bug last night while trying to work out why my Kodi boxes suddenly got corrupt video from the OMV server.

    My hardware is far from bleeding edge. Is there any OMV reason for me to not stick to the stock Debian kernels?

  • There seems to be some confusion.

    OMV is just an application built on top of the Debian distribution. Changing kernels usually has no impact on OMV, as long as the kernel meets OMV's minimum API requirements. The meaning of "API" is explained in https://en.wikipedia.org/wiki/API

    omv 6.9.6-2 (Shaitan) on RPi CM4/4GB with 64bit Kernel 6.1.21-v8+

    2x 6TB 3.5'' HDDs (CMR) formatted with ext4 via 2port PCIe SATA card with ASM1061R chipset providing hardware supported RAID1


    omv 6.9.3-1 (Shaitan) on RPi4/4GB with 32bit Kernel 5.10.63 and WittyPi 3 V2 RTC HAT

    2x 3TB 3.5'' HDDs (CMR) formatted with ext4 in Icy Box IB-RD3662-C31 / hardware supported RAID1

    For Read/Write performance of SMB shares hosted on this hardware see forum here

    • Official Post

    Is there any OMV reason for me to not stick to the stock Debian kernels?

    Most OMV users are using desktop-grade hardware that typically needs a newer kernel. Newer kernels also give you improved versions of btrfs. Prior to the 5.10 kernel, the backports kernels were pretty good. That said, I run the proxmox kernel. The 5.4 kernel is the Ubuntu 20.04 LTS kernel and is rock solid. omv-extras also gives you the option to turn backports off. You would have to install the regular kernel, but it is possible.
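    For those who want to go the "turn backports off" route from the shell instead of the omv-extras GUI, one approach is an apt pin that keeps backports kernels from being installed. This is only a sketch: the preferences file name is made up, the release label assumes buster-backports, and the file is written to a temp directory here instead of /etc/apt/preferences.d/:

```shell
# Sketch: pin backports kernel packages below everything else
# (see apt_preferences(5)). File name and release label are
# assumptions; the real file would live in /etc/apt/preferences.d/.
PREF_DIR="$(mktemp -d)"
cat > "$PREF_DIR/no-backports-kernel" <<'EOF'
Package: linux-image-* linux-headers-*
Pin: release a=buster-backports
Pin-Priority: -1
EOF
cat "$PREF_DIR/no-backports-kernel"

# Afterwards the stock Debian kernel can be installed:
#   apt update && apt install linux-image-amd64
```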


  • OK, thanks. I'd wondered whether, as the backports repo got added and enabled automatically (I think when I updated to OMV 5), something in OMV 5 depended on kernel 5 functionality. If it's just there to support recent hardware, I can safely stick with the stock kernel, which I'm now running, and NFS is working again.
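    Since NFS behaving differently between kernels is the theme here, it can also be worth checking which NFS protocol version the client actually negotiated under each kernel (a diagnostic sketch; nfsstat comes from the nfs-common package):

```shell
# Show NFS mounts with their negotiated options
# (look for vers=3 / vers=4.x in the options column)
grep ' nfs' /proc/mounts || echo "no NFS mounts found"

# The same information, formatted per mount point:
#   nfsstat -m
```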

    • Official Post

    Backports is enabled by default on OMV 4 and OMV 5. When upgrading to OMV 5, I agree backports probably got re-enabled.

