Why Proxmox VE for ZFS?

  • I hope I'm asking in the right place...


    Why does OMV use the Proxmox VE kernel for its ZFS modules when the same modules can easily be installed from the stretch-backports repository (by installing zfs-dkms and zfsutils-linux, as described in the official ZoL GitHub repository; see the sketch after this post)?
    Are there known issues that prevent this?


    What is it that makes Proxmox the better option?


    Thanks in advance,
    Julian
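
    For reference, a minimal sketch of the backports route described in the question, assuming a stock Debian 9 (stretch) / OMV 4.x system. The repo line and package names come from the posts in this thread; adjust mirrors and paths to your setup:

        # Enable stretch-backports (contrib is needed for the ZFS packages);
        # on OMV 4.x the repo may already be enabled, making these two lines unnecessary.
        echo "deb http://deb.debian.org/debian stretch-backports main contrib" \
            > /etc/apt/sources.list.d/stretch-backports.list
        apt-get update

        # Headers for the running kernel, so DKMS can build the module.
        apt-get install -y linux-headers-$(uname -r)

        # Pull ZFS from backports; this compiles the module via DKMS on this machine.
        apt-get install -y -t stretch-backports zfs-dkms zfsutils-linux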

    That explains it. Bintray's "Depends" field still lists the following, so I thought that was the case (the sketch after this post shows one way to check it locally):


    libnvpair1linux, libuutil1linux, libzfs2linux, libzpool2linux, linux-headers-amd64 | [b]pve-headers[/b], openmediavault (>= 4.1.0), openmediavault-omvextrasorg (>= 4.0.3), zfs-dkms | [b]pve-headers[/b], zfsutils-linux, zfs-zed


    @https://bintray.com/openmediav…s/openmediavault-zfs/view



    For anyone wondering how: simply installing zfs-dkms and zfsutils-linux from the stretch-backports repo will do the job.


    Thank you cabrio_leo.
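
    As a side note on that "Depends" line: the alternatives (zfs-dkms | pve-headers) can also be checked locally with apt, assuming the openmediavault-zfs package is available from your configured repos. A quick sketch:

        # Raw Depends field as apt sees it:
        apt-cache show openmediavault-zfs | grep -i '^Depends'

        # Or resolved, one dependency (with its alternatives) per line:
        apt-cache depends openmediavault-zfs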

    • Official post

    For anyone wondering how: simply installing zfs-dkms and zfsutils-linux from the stretch-backports repo will do the job.

    The stretch-backports repo is enabled by default on OMV 4.x installs, and yes, those packages will do the job, but you still have to compile the module via DKMS. With the Proxmox kernel you don't, because the ZFS module ships prebuilt with it (see the sketch after this post). I have also found the Proxmox kernel to be more stable than the backports kernel, especially while a new backports kernel version hasn't stabilized yet.

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!
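
    A quick way to see that difference after booting the Proxmox kernel, as a rough sketch (output will vary with the kernel and ZFS versions installed):

        # With the Proxmox kernel, the ZFS module is already shipped with the kernel:
        uname -r                                     # should end in -pve
        modinfo -n zfs                               # path of the zfs.ko that will be loaded
        modinfo zfs | grep -E '^(version|vermagic)'

        # With the backports route instead, DKMS has to build it for every installed kernel:
        dkms status                                  # lists zfs/<version> per kernel, if DKMS is used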

    The stretch-backports repo is enabled by default on OMV 4.x installs, and yes, those packages will do the job, but you still have to compile the module via DKMS. With the Proxmox kernel you don't, because the ZFS module ships prebuilt with it. I have also found the Proxmox kernel to be more stable than the backports kernel, especially while a new backports kernel version hasn't stabilized yet.


    I've run some tests with "AJA System Test" and "Blackmagic Disk Speed Test" to measure SMB throughput over a 10 Gbps NIC and got somewhat unstable performance with the Proxmox kernel, so I thought I'd try out the backports kernel (not confirmed behavior yet).
    I'll come back with firmer results.
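
    One way to narrow down whether such instability comes from the network path or from the pool itself is to test each in isolation. A rough sketch using iperf3 and fio as stand-ins for the GUI tools above; the address and dataset path are only placeholders:

        # Raw network throughput, independent of SMB and ZFS
        # (run "iperf3 -s" on the OMV box first):
        iperf3 -c <omv-ip> -P 4 -t 30

        # Local sequential write to the pool, independent of the network
        # (/tank/test is a placeholder dataset path):
        fio --name=seqwrite --directory=/tank/test --rw=write --bs=1M \
            --size=4G --numjobs=1 --end_fsync=1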

    • Official post

    I've run some tests with "AJA System Test" and "Blackmagic Disk Speed Test" to measure SMB throughput over a 10 Gbps NIC and got somewhat unstable performance with the Proxmox kernel, so I thought I'd try out the backports kernel (not confirmed behavior yet).
    I'll come back with firmer results.

    Surprising, since the Proxmox kernel is the Ubuntu kernel, which gets much more testing than the backports kernel does. If you are using the 5.3 Proxmox kernel (still in testing), that might explain it. I would have thought Proxmox would have seen stability issues already, since plenty of their users have large systems with 10GbE or faster. Then again, I am using Ubuntu 18.04 with the 5.3 HWE kernel and a 10GbE card for my NFS server (NVMe storage) and it is rock stable.
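
    To check which Proxmox kernel is actually installed and running, and which Proxmox repository is configured, a rough sketch:

        uname -r                                   # currently running kernel
        dpkg -l | grep -i pve-kernel               # installed Proxmox kernel packages
        grep -r proxmox /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null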


    Surprising, since the Proxmox kernel is the Ubuntu kernel, which gets much more testing than the backports kernel does. If you are using the 5.3 Proxmox kernel (still in testing), that might explain it. I would have thought Proxmox would have seen stability issues already, since plenty of their users have large systems with 10GbE or faster. Then again, I am using Ubuntu 18.04 with the 5.3 HWE kernel and a 10GbE card for my NFS server (NVMe storage) and it is rock stable.

    I've run some tests over the weekend, both with the Proxmox kernel (Debian GNU/Linux with Linux 4.15.18-23-pve) and with the default installed kernel (Debian GNU/Linux with Linux 4.9.0-11-amd64).
    I had no write failures on the default kernel, while getting constant write failures on the Proxmox kernel.


    This was tested on the exact same hardware (I was just switching kernels back and forth), so I can confirm, at least for my use case, that the default Debian kernel is more stable for write operations on ZFS volumes than the Proxmox kernel.
    Perhaps there are some system tunables that could override this behavior on the Proxmox side, but I haven't tested anything yet...


    If you can think of any tunable parameters that might be the root cause of this, please shout out a few :) (a starting point for inspecting them is sketched below).
    Many thanks
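
    For reference, ZFS module tunables live under /sys/module/zfs/parameters and can be pinned via modprobe options. A sketch for inspecting them; the zfs_arc_max line is only an illustration of the mechanism, not a known fix for the write failures described above:

        # Current values of all ZFS module tunables on the running kernel
        # (requires the zfs module to be loaded):
        grep -H . /sys/module/zfs/parameters/*

        # Example: cap the ARC at 8 GiB (value in bytes);
        # this creates/overwrites /etc/modprobe.d/zfs.conf.
        echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
        update-initramfs -u        # then reboot so the module option takes effect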

    • Official post

    Having write operation failures is very strange in my book. I have run the Proxmox kernel on a lot of hardware, but I don't use ZFS, so I would guess this is an issue with the ZFS code rather than the kernel. I would be curious to see whether you have issues with the Proxmox 6 kernel/ZFS module. And I don't think there is a tunable parameter to fix this. What hardware are you using, anyway?
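
    When those write failures show up again, some output worth collecting before switching kernels back (device names are placeholders):

        zpool status -v                          # pool health, per-device read/write/checksum errors
        zpool events -v | tail -n 50             # recent ZFS error events
        dmesg | grep -iE 'zfs|ata|scsi|nvme|error' | tail -n 100
        smartctl -a /dev/sdX                     # SMART data for each pool member (replace sdX)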

