Page dump error on new install

  • I'm considering migrating to OMV from FreeNAS (long-time user) and decided to give OMV a test to see how it works and what it can do, so I installed it in a Hyper-V VM on Windows. I have installed a few plugins (docker, sftp, zfs, letsencrypt), and it seems to work well enough except for one critical error shown below.


    https://i.postimg.cc/dVgR7WFP/OMV-Page-Bug.png


    The foreground window is the output shown on the VM's TTY at the login screen; the SSH window in the background is logged in as root.


    I have no idea what is causing this, but it spits this information out to the console in the middle of whatever I'm doing. I have seen it happen multiple times.
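    For what it's worth, a trace like this can usually be captured as text rather than a screenshot, which makes it much easier to post and search. A minimal sketch, assuming a systemd-based install such as OMV 4.x (falls back to dmesg where journalctl is absent):

```shell
# Save this boot's kernel messages to a file so the trace can be pasted
# into a forum post. journalctl -k reads the kernel ring buffer via the
# systemd journal; plain dmesg is the fallback on systems without it.
if command -v journalctl >/dev/null 2>&1; then
    journalctl -k -b > /tmp/kernel-log.txt 2>/dev/null || dmesg > /tmp/kernel-log.txt
else
    dmesg > /tmp/kernel-log.txt
fi
wc -l /tmp/kernel-log.txt   # sanity check: the file should be non-empty
```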


    I would like to figure out what the heck is going on, because this doesn't look stable enough for me to use as a replacement for FreeNAS, which has been rock solid.


    Any ideas?


    Thanks,

    • Official Post

    Is the VM fully updated with the latest 4.19 kernel? Are you running FreeNAS in a Hyper-V VM on Windows?

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • I've installed all updates for OMV as far as I know (apt-get says there is nothing new). This is my kernel; not sure if it's the latest.


    Linux openmediavault 4.19.0-0.bpo.1-amd64 #1 SMP Debian 4.19.12-1~bpo9+1 (2018-12-30) x86_64 GNU/Linux


    No, FreeNAS is running on actual hardware. If OMV tests out then I will move OMV over to said hardware, but given that this issue occurred within an hour of me testing on a VM I'm a bit hesitant to move to metal without understanding the issue.
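    A small sketch (assuming a Debian 9 base with dpkg/apt available, as OMV 4.x uses) to compare the running kernel against what is installed and what the repos currently offer:

```shell
# Show the running kernel and the installed kernel images side by side;
# the newest installed linux-image package should match `uname -r`.
running="$(uname -r)"
echo "Running kernel: ${running}"
if command -v dpkg >/dev/null 2>&1; then
    # List installed kernel image packages with their versions
    dpkg --list 'linux-image-*' 2>/dev/null | awk '/^ii/ {print $2, $3}'
fi
if command -v apt-cache >/dev/null 2>&1; then
    # Installed vs candidate version of the kernel metapackage (read-only)
    apt-cache policy linux-image-amd64
fi
```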

    • Official Post

    This is my kernel, not sure if it's the latest.

    It is the latest.


    No, FreeNAS is running on actual hardware. If OMV tests out then I will move OMV over to said hardware, but given that this issue occurred within an hour of me testing on a VM I'm a bit hesitant to move to metal without understanding the issue.

    I don't consider this a fair comparison. I think OMV is more stable on hardware than in a Hyper-V VM, especially if that hardware is stable running FreeNAS. It could be that the 4.19 kernel isn't quite as stable on Hyper-V yet, since it is newly released.


    I would also mention that the 4.19 kernel is a backport from the not-yet-released Debian 10, not the stable Debian 9 kernel. While I have had no issues with it on hardware, kvm/proxmox, and vmware, it could still cause a bit of instability.


    Which brings me to why I run the proxmox kernel on some systems. First, proxmox uses the Debian 9 userland with the Ubuntu 18.04 LTS kernel, and this combination is well tested by them. Second, that is exactly what an OMV system becomes when I run the proxmox kernel on it. Third, I find this kernel ultra stable on Ubuntu 18 and Debian. Finally, it includes the zfs modules (no compiling), which makes kernel upgrades much faster if you use zfs.
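    The supported route for this is the Kernel tab of the omv-extras web UI. For reference only, the CLI equivalent would look roughly like the sketch below; the repository line, key filename, and package name are assumptions for the Debian 9 (stretch) / PVE 5.x era and should be checked against current Proxmox documentation before use:

```shell
# Sketch only: add the Proxmox repository and install its kernel on
# OMV 4.x / Debian 9. Repo and package names here are assumptions.
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
    | sudo tee /etc/apt/sources.list.d/pve.list
# Trust the Proxmox repository signing key (filename is an assumption)
wget -qO - http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg \
    | sudo apt-key add -
sudo apt-get update
# pve-kernel-4.15 was the PVE 5.x kernel series; adjust to what the repo offers
sudo apt-get install pve-kernel-4.15
sudo reboot
```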


    Since you seem very concerned about stability, I would be very confident that your physical box running FreeNAS would be just as stable running OMV 4.x with the proxmox kernel.


    • Official Post

    I would like to figure out what the heck is going on, because this doesn't look stable enough for me to use as a replacement for FreeNAS, which has been rock solid.

    You couldn't do anything that would realistically be considered "stability" testing in a Windows client VM. For a fair comparison, OMV would need to be installed on actual hardware or run from a purpose-built VM server like Proxmox or ESXi.

  • I don't consider this a fair comparison. I think OMV is more stable on hardware than in a Hyper-V VM, especially if that hardware is stable running FreeNAS. It could be that the 4.19 kernel isn't quite as stable on Hyper-V yet, since it is newly released.

    You couldn't do anything that would realistically be considered "stability" testing in a Windows client VM. For a fair comparison, OMV would need to be installed on actual hardware or run from a purpose-built VM server like Proxmox or ESXi.

    Oh, I completely agree. I wasn't attempting to do a stability test/comparison at all - only testing it from a feature perspective. I simply had never encountered an error like that before and was surprised. Thank you both for your feedback.



    Which brings me to why I run the proxmox kernel on some systems. First, proxmox uses the Debian 9 userland with the Ubuntu 18.04 LTS kernel, and this combination is well tested by them. Second, that is exactly what an OMV system becomes when I run the proxmox kernel on it. Third, I find this kernel ultra stable on Ubuntu 18 and Debian. Finally, it includes the zfs modules (no compiling), which makes kernel upgrades much faster if you use zfs.

    This may be something I ought to consider as well, specifically because I do plan on using ZFS. Since I have your attention, I'd like to ask two related questions:

    • How does the ZFS plugin get along with the proxmox kernel? Is the plugin aware of the proxmox kernel, so that it doesn't force unnecessary recompiles of the stock kernel when upgrading?
    • Out of curiosity, do you know of any (simple) way to get OMV installed onto a zfs formatted partition rather than ext4?
    • Official Post

    How does the ZFS plugin get along with the proxmox kernel?

    It doesn't know anything about the kernel.


    Is the plugin aware of the proxmox kernel, so that it doesn't force unnecessary recompiles of the stock kernel when upgrading?

    The plugin doesn't actually do the compiles. The zfs-dkms package does this, and it is smart enough to realize the zfs module is already there and skip the compile.
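    A quick way to see what dkms thinks, sketched here on the assumption that zfs-dkms registers its module under the name "zfs":

```shell
# List dkms modules and filter for zfs; for each kernel the module is
# already built for, dkms reports it as "installed", and zfs-dkms skips
# the compile on that kernel's upgrades.
if command -v dkms >/dev/null 2>&1; then
    dkms status | grep -i zfs || echo "no zfs module registered with dkms"
else
    echo "dkms not present (e.g. the proxmox kernel ships zfs prebuilt)"
fi
```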


    Out of curiosity, do you know of any (simple) way to get OMV installed onto a zfs formatted partition rather than ext4?

    Simple? No. Why? Do you really need the OS on zfs?


    • Official Post

    Simple? No. Why? Do you really need the OS on zfs?

    I wholeheartedly agree with ryecoaaron. The default, EXT4, is simple, reliable, recovers well from dirty shutdowns, and, if needed, it's possible to repair it.
