Best way to run OMV on Proxmox VE

  • Hi all!


    Sorry for the NOOB question.


    I actually have TrueNAS-12.0-STABLE running bare metal, installed on mirrored SSDs, with 6 disks in a RAIDZ2 pool plus 1 SLOG disk for data.


    I want to migrate this to a Proxmox VE server, have this pool serve Proxmox VMs and containers, and also host some shares for my VMs' data (Plex data, NVR data, ...).


    I want to keep my ZFS RAIDZ2 with SLOG


    As far as I understand, Proxmox can handle ZFS natively, and so can OMV.


    What's the best way of doing that? Set up the ZFS pool on Proxmox and give the OMV VM a mount point, or install Proxmox on the mirrored SSDs, pass the 6 disks + SLOG through to OMV, and have OMV manage the ZFS pool?


    If I pass the disks through to OMV, will Proxmox be able to store VMs and containers on OMV's ZFS pool? Are there any performance issues with this setup?


    Are there any performance or other issues running OMV in a container on Proxmox instead of in a VM?


    What's the best approach to this?


    kind regards

  • What's the best way of doing that?

    That is the million dollar question.


    The situation isn't unique to running OMV as a VM. FreeNAS/TrueNAS/whatever the hell it is called now would still have the same issue.


    If you pass the disks through to OMV, then you would have to serve the space back to proxmox using nfs or tgt. And if the OMV VM isn't running, proxmox loses access to that storage, which is an awkward dependency.
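
    As a rough sketch of what that looks like (the VM ID, IP address and paths below are made up, not from this thread):

        # pass a physical disk to the OMV VM (run on the proxmox host; 101 is a made-up VM ID)
        # (passing a whole HBA/controller via PCIe passthrough is the other common option)
        qm set 101 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL

        # later, add the nfs export served by the OMV VM back to proxmox as storage
        # (192.168.1.50 and /export/vmstore are hypothetical)
        pvesm add nfs omv-vmstore --server 192.168.1.50 \
            --export /export/vmstore --content images,rootdir
        # (an iscsi target exported by tgt could be added with 'pvesm add iscsi' instead)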


    If you create large virtual hard drives for the OMV VM, then proxmox loses that space and it is difficult to reclaim. Proxmox cannot access the files on those virtual drives directly and would still need them shared back over nfs.


    If you enable nfs on proxmox (via command line, this is what I do), then you can mount the nfs shares on OMV. But if you have nfs enabled on proxmox, you may not need OMV.
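
    For what it's worth, a minimal sketch of what "enable nfs on proxmox via command line" can look like; the dataset name tank/media and the subnet are just examples:

        apt install nfs-kernel-server
        # export an existing zfs dataset (tank/media is a placeholder name)
        echo '/tank/media 192.168.1.0/24(rw,no_subtree_check,no_root_squash)' >> /etc/exports
        exportfs -ra
        showmount -e localhost    # verify the export is visible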

    Is there any performance or any other issues running OMV in a container on Proxmox instead of VM?

    OMV does not work in containers. If it did, the container wouldn't cause performance issues any more than anything else running in a container since containers have very little overhead.

    omv 5.6.13 usul | 64 bit | 5.11 proxmox kernel | omvextrasorg 5.6.2 | kvm plugin 5.1.6
    omv-extras.org plugins source code and issue tracker - github


    Please read this before posting a question.
    Please don't PM for support... Too many PMs!

  • Proxmox has many more features and versatility than OMV. Why limit yourself?

    While I agree, proxmox is not a fileserver. So, it makes sense that you wouldn't want to configure samba and/or nfs by hand on proxmox.
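
    To illustrate what "by hand" means here, a minimal samba share on the proxmox host might look something like this (the share name, path and user are made up):

        apt install samba

        # then append a share definition to /etc/samba/smb.conf, e.g.:
        #   [media]
        #       path = /tank/media
        #       browseable = yes
        #       read only = no
        #       valid users = mediauser

        smbpasswd -a mediauser      # give an existing system user a samba password
        systemctl restart smbd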


    gwrosenbaum: If I may add just my 2 cents: would it not be an option to do the opposite of what you initially stipulated? I have similar requirements to yours and I am very happy using OMV as the base system, with all disks and shares under its control, and then running several machines in KVM which either access the shares directly or use dynamic qcow2 files created within a share. Many other features of Proxmox I did not feel the need for anyway. By the way, I do not even use the Proxmox kernel anymore, but run the standard backport kernels, which do a great job (if I am wrong regarding the kernel, I am happy to be corrected by the community here).
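
    A sketch of the "dynamic qcow2 inside a share" part of that setup (the path and size are invented, and OMV's usual /srv/dev-disk-by-label-... mount convention is assumed):

        # create a thin-provisioned (grows on demand) qcow2 image inside an OMV share
        qemu-img create -f qcow2 /srv/dev-disk-by-label-data/vm-images/guest1.qcow2 200G

        # check how much space it actually uses vs. its virtual size
        qemu-img info /srv/dev-disk-by-label-data/vm-images/guest1.qcow2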

    Both can be used in a way that works for you. For my money, Proxmox is likely more stable than OMV. Not a knock on OMV by any means. Make a list of pros and cons and do what works best for you.


    Proxmox: Pro: a (large?) team of developers. Con: commercial company that could change things at their whim; see CentOS. Pro: lots of users in production around the world.


    OMV: Pro: free and open with community support. Con: one main developer with some help from the community.


    I have been using OMV since the early days and love it. Moving it to a VM on Proxmox was the best thing I did. Your mileage may vary.

    If you make it idiot proof, somebody will build a better idiot.

  • For my money Proxmox is likely more stable than OMV. Not a knock to OMV by any means.

    If you run the proxmox kernel on OMV, then you have the same userland and kernel as proxmox, since proxmox 6.x is Debian 10 too. So, there might be some weird issues with the OMV web interface, but the stability should be identical.
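
    For context, installing the proxmox kernel on an OMV 5 / Debian 10 box boils down to roughly the following (omv-extras normally does this for you from the web UI, so treat this as a sketch):

        # add the pve-no-subscription repo (import the proxmox repo key first; see the proxmox docs for the key URL)
        echo 'deb http://download.proxmox.com/debian/pve buster pve-no-subscription' \
            > /etc/apt/sources.list.d/pve-no-subscription.list
        apt update
        apt install pve-kernel-5.4 pve-headers-5.4
        reboot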


  • I am not sure what Proxmox tweaks but I would bet it is more than just the kernel.

    There are a few other things like zfs and grub (you can see the packages here - http://download.proxmox.com/de…ubscription/binary-amd64/) but most of the differences are just the proxmox services themselves. The packages that are different actually get installed on an OMV system when you install the proxmox kernel. So, the core stability should be nearly identical.


    Here are the packages installed on your OMV system from the proxmox repo when you install the proxmox kernel.


    dmeventd
    dmsetup
    grub-common
    grub-pc
    grub-pc-bin
    grub2-common
    ifupdown
    libdevmapper-event1.02.1:amd64
    libdevmapper1.02.1:amd64
    liblvm2cmd2.03:amd64
    libspice-server1:amd64
    lvm2
    pve-firmware
    pve-headers
    pve-headers-5.4
    pve-headers-5.4.78-2-pve
    pve-kernel-5.4
    pve-kernel-5.4.55-1-pve
    pve-kernel-5.4.65-1-pve
    pve-kernel-5.4.78-2-pve
    smartmontools
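
    If you want to check where an installed package came from, apt can show the repo origin, e.g.:

        apt-cache policy pve-kernel-5.4     # shows which repo the package was installed from
        dpkg -l | grep -E 'pve-|proxmox'    # quick list of the obviously proxmox-named packages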


  • Just saying.

    Sorry, but that was caused by Debian changing the zfs packages and dependencies. Your system would've been broken even if you didn't have the proxmox kernel installed.


    And Debian releasing the 5.10 kernel while still shipping the 0.8.6 zfs packages (which are not compatible with the 5.10 kernel) screwed up people's systems as well. So, that is twice that a proxmox setup didn't have the issue and showed better stability than Debian...
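
    A quick way to check whether a running kernel and the zfs packages actually match (the version numbers in the comments are only examples):

        uname -r        # running kernel, e.g. 5.4.78-2-pve
        zfs version     # zfs userland and kernel module versions (openzfs 0.8+)
        dkms status     # on debian's zfs-dkms, shows whether the module built for the running kernel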

