OMV, ZFS and Proxmox

  • Hi,


    I've switched my server to Proxmox and I'm running OMV as a KVM guest. I did some tests and got ZFS working inside the OMV VM: I assigned my 3x 4TB WD Red disks to the VM and imported the pool there (https://pve.proxmox.com/wiki/Physical_disk_to_kvm). Is this "good" or "bad" practice? From what I've seen (only a bit of testing) it works, but is there maybe something I'm missing that makes this a bad idea?
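
    For reference, the passthrough from that wiki article boils down to a single command like this (the VM ID and disk ID here are placeholders, not my actual values):

        # Pass the whole physical disk through to the VM; look up the
        # real disk name with: ls -l /dev/disk/by-id/
        qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-XXXXXXXX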


    Thank you!


    - Sebastian

    • Official post

    I don't claim to be an expert on this, but here is how I did it. My disks are ZFS on the Proxmox host; that way I can give as much space as I need to OMV or other VMs. I also think ZFS may be more stable on Proxmox; see the threads about ZFS problems on these forums. I'm also not sure about performance: if you are using OMV for VM storage you would be adding a layer (vm -> proxmox -> disk vs. vm -> proxmox -> omv -> disk). Not sure it would make that much difference, depending on your load. Either way it should work, though.
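
    In case it helps, carving VM storage out of a host-side pool could look roughly like this (storage name, pool name, VM ID, and disk size are made-up examples):

        # Register an existing ZFS pool as VM storage in Proxmox
        pvesm add zfspool tank-vm --pool tank

        # Allocate a new 32 GB zvol-backed disk from that storage to VM 100
        qm set 100 --scsi1 tank-vm:32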

  • Ok, that would be an option as well. I already have 4 TB of data on the pool. If I wanted to do it the way you mentioned, I would be forced to create a virtual disk on the ZFS pool and move all the data from the pool into that virtual disk, right?


    Thank you,
    Sebastian

  • Well, I don't know. I was a bit confused, as ZFS used only ~300 MB of RAM inside the OMV KVM guest. Anyway, I have to agree with you: ZFS is more mature on Proxmox. Did you create a raw disk and assign it to OpenMediaVault, or did you do it in a different way (zvol, ...)?


    Thank you!


    Sebastian

    • Official post

    Again, I don't claim to be an expert on this; I am still learning too. I have four 2 TB drives and created a pair of mirrors:


    zpool status
      pool: R10-4x2TB
     state: ONLINE
      scan: scrub repaired 0B in 9h16m with 0 errors on Sun May 13 09:40:41 2018
    config:

        NAME                              STATE     READ WRITE CKSUM
        R10-4x2TB                         ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            wwn-0x5000c50072c359e7        ONLINE       0     0     0
            wwn-0x5000c50072c358a5        ONLINE       0     0     0
          mirror-1                        ONLINE       0     0     0
            wwn-0x5000c500668b65e8        ONLINE       0     0     0
            wwn-0x5000c5003e6cd752        ONLINE       0     0     0
        logs
          wwn-0x50026b7251131ae8-part5    ONLINE       0     0     0
        cache
          wwn-0x50026b7251131ae8-part6    ONLINE       0     0     0


    Then I assign disks from that pool. The log and cache devices are on an SSD, along with a few small VMs.
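
    For anyone wanting to reproduce that layout, a pool like the one above could be created along these lines (substitute your own device paths from /dev/disk/by-id):

        zpool create R10-4x2TB \
          mirror /dev/disk/by-id/wwn-0x5000c50072c359e7 /dev/disk/by-id/wwn-0x5000c50072c358a5 \
          mirror /dev/disk/by-id/wwn-0x5000c500668b65e8 /dev/disk/by-id/wwn-0x5000c5003e6cd752 \
          log /dev/disk/by-id/wwn-0x50026b7251131ae8-part5 \
          cache /dev/disk/by-id/wwn-0x50026b7251131ae8-part6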

  • Okay :)


    I was playing around a bit, and found the solution that best fits my needs:


    I created some ZFS datasets on Proxmox and configured a network bridge (without bridge ports, so like a "virtual network", in my case 192.168.1.0/28) between Proxmox and OMV (with a VirtIO NIC). Then I created some NFS shares on Proxmox and connected to them via the Remote Mount plugin in OMV. Speed is like native (the VirtIO interface did an incredible 35 Gbit/s when I tested it with some iperf benchmarks), and now I don't need any passthrough. Works like a charm for me :)
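
    The host side of that setup could look something like this (the bridge name vmbr1, pool name tank, and host address are assumptions, not necessarily what I used):

        # /etc/network/interfaces on the Proxmox host: a bridge with no
        # physical ports, used only for host <-> VM traffic
        auto vmbr1
        iface vmbr1 inet static
            address 192.168.1.1/28
            bridge-ports none
            bridge-stp off
            bridge-fd 0

        # Create a dataset and export it over NFS to the virtual subnet
        zfs create tank/share
        echo '/tank/share 192.168.1.0/28(rw,no_subtree_check)' >> /etc/exports
        exportfs -ra

    The OMV VM then gets a VirtIO NIC attached to vmbr1 with an address in the same /28.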

  • Quote from Geebee

    Then I created some NFS shares on Proxmox and connected to them via the Remote Mount plugin in OMV. Speed is like native (the VirtIO interface did an incredible 35 Gbit/s when I tested it with some iperf benchmarks), and now I don't need any passthrough.

    Can you screenshot how you did this, or explain it? I am still new to the virtual world.
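
    For what it's worth, once an export like the one sketched above exists, it can be checked from a shell inside the OMV VM before touching the GUI (the server IP and share path are the example values from above):

        # On the OMV guest: list the server's exports, then test-mount one
        showmount -e 192.168.1.1
        mount -t nfs 192.168.1.1:/tank/share /mnt

    The Remote Mount plugin then just needs the same server IP and share path.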

  • Quote from Geebee (the ZFS datasets / virtual bridge / NFS post above)

    Hello GeeBee,

    Like Lonnon, I would also love a screenshot or some more explanation of how to do this bridging.

    Thanks a lot!

    Maybe Lonnon or others here have a clue how to do this?

    Any help would be greatly appreciated!

    Thanks!

  • Quote from Geebee (the ZFS datasets / virtual bridge / NFS post above)

    UP ! :)
