OMV, ZFS and Proxmox

    • OMV, ZFS and Proxmox

      Hi,

      I've switched my server to Proxmox and I'm now running OMV as a KVM guest. I did some tests and got ZFS working inside the OMV VM: I assigned my 3x 4TB WD Red disks to the guest and imported the pool there (pve.proxmox.com/wiki/Physical_disk_to_kvm; the commands are sketched below). Is this "good" or "bad" practice? From what I've seen (only a bit of testing) it works, but is there something I'm missing that makes this a bad idea, or that could make it one?

      Thank you!

      - Sebastian
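
      For reference, a minimal sketch of that passthrough per the linked wiki page. The VM ID (101) and the exact disk IDs are placeholders, not taken from the thread:

          # On the Proxmox host: find stable by-id names for the three WD Reds
          ls -l /dev/disk/by-id/ | grep -i wdc

          # Attach each disk to the OMV guest (scsi1..scsi3 must be free slots)
          qm set 101 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-..._1
          qm set 101 -scsi2 /dev/disk/by-id/ata-WDC_WD40EFRX-..._2
          qm set 101 -scsi3 /dev/disk/by-id/ata-WDC_WD40EFRX-..._3

          # Inside the OMV guest: list and import the existing pool
          zpool import
          zpool import <poolname>

      Using /dev/disk/by-id instead of /dev/sdX keeps the mapping stable across reboots.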
    • I don't claim to be an expert on this, but here is how I did it: my disks are ZFS on the Proxmox host. That way I can give as much space as I need to OMV or to other VMs (a sketch of this follows below). I also think ZFS may be more stable on Proxmox; see the threads about ZFS problems on these forums. I am not sure about performance either: if you use OMV for VM storage you add a layer, VM → Proxmox → disk versus VM → Proxmox → OMV → disk. Not sure it would make that much difference, depending on your load. Either way it should work, though.
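
      A minimal sketch of that host-side approach, assuming the pool R10-4x2TB from the zpool status further down; the zvol name, size and VM ID are made up for illustration:

          # On the Proxmox host: carve a zvol out of the pool
          zfs create -V 500G R10-4x2TB/vm-101-disk-1

          # Hand it to the OMV guest as a VirtIO block device
          qm set 101 -virtio1 /dev/zvol/R10-4x2TB/vm-101-disk-1

      Proxmox can also create and manage such zvols automatically if the pool is added as ZFS storage.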
    • Well, I don't know. I was a bit confused, as ZFS used only ~300 MB of RAM inside the OMV KVM guest. Anyway, I have to agree with you: ZFS is more mature on Proxmox. Did you create a raw disk and assign it to OpenMediaVault, or did you do it a different way (a zvol, ...)?

      Thank you!

      Sebastian
    • Again, I don't claim to be an expert on this; I am still learning too. I have four 2TB drives and created a pair of mirrors:

      zpool status
        pool: R10-4x2TB
       state: ONLINE
        scan: scrub repaired 0B in 9h16m with 0 errors on Sun May 13 09:40:41 2018
      config:

        NAME                             STATE     READ WRITE CKSUM
        R10-4x2TB                        ONLINE       0     0     0
          mirror-0                       ONLINE       0     0     0
            wwn-0x5000c50072c359e7       ONLINE       0     0     0
            wwn-0x5000c50072c358a5       ONLINE       0     0     0
          mirror-1                       ONLINE       0     0     0
            wwn-0x5000c500668b65e8       ONLINE       0     0     0
            wwn-0x5000c5003e6cd752       ONLINE       0     0     0
        logs
          wwn-0x50026b7251131ae8-part5   ONLINE       0     0     0
        cache
          wwn-0x50026b7251131ae8-part6   ONLINE       0     0     0

      Then I assign virtual disks to the VMs from that pool (a rough reconstruction of the setup is sketched below). The log and cache devices are on an SSD, along with a few small VMs.
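
      For the record, a pool with exactly that layout would be created roughly like this. The device names are taken from the status output above; the two SSD partitions for log and cache must already exist, and the Proxmox storage ID is arbitrary:

          # Two striped mirrors (RAID 10) plus SSD log and cache devices
          zpool create R10-4x2TB \
              mirror wwn-0x5000c50072c359e7 wwn-0x5000c50072c358a5 \
              mirror wwn-0x5000c500668b65e8 wwn-0x5000c5003e6cd752 \
              log    wwn-0x50026b7251131ae8-part5 \
              cache  wwn-0x50026b7251131ae8-part6

          # Register the pool as VM storage in Proxmox
          pvesm add zfspool r10 -pool R10-4x2TB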
    • Okay :)

      I was playing around a bit, and found the solution that best fits my needs:

      I created some ZFS datasets on Proxmox and configured a network bridge without bridge ports (so effectively a host-only "virtual network", in my case 192.168.1.0/28) between Proxmox and OMV, using a VirtIO NIC. Then I created some NFS shares on Proxmox and mounted them via the Remote Mount plugin in OMV. Speed is like native (the VirtIO interface did an incredible 35 Gbit/s in some iperf benchmarks), and now I don't need any passthrough at all. Works like a charm for me :) A sketch of the host-side configuration follows below.
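
      Roughly what that setup looks like on the Proxmox side. The bridge name (vmbr1), VM ID, dataset path and host address are assumptions for illustration; only the 192.168.1.0/28 network comes from the post:

          # /etc/network/interfaces on the Proxmox host: a bridge with no
          # physical ports, i.e. a host-only network between host and guests
          auto vmbr1
          iface vmbr1 inet static
              address 192.168.1.1
              netmask 255.255.255.240
              bridge_ports none
              bridge_stp off
              bridge_fd 0

          # Give the OMV guest a VirtIO NIC on that bridge
          qm set 101 -net1 virtio,bridge=vmbr1

          # /etc/exports on the Proxmox host: export a dataset to that network,
          # then reload the export table
          /mypool/share 192.168.1.0/28(rw,no_subtree_check)
          exportfs -ra

      In OMV the Remote Mount plugin then mounts 192.168.1.1:/mypool/share over that interface.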