OMV move from Physical to VM?

  • Frankly, I find Proxmox much friendlier overall,
    and it does support a lot of hardware, maybe even more than ESXi, as version 4.3 uses the latest kernel.
    What's more, Proxmox does not need any external management console for anything; it is fully manageable via the built-in WebUI, whereas ESXi needs a Windows-based management console for many of the advanced options.
    But to each their own.
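
    As a quick, generic way to check hardware support on any given box (standard Linux tools, nothing Proxmox-specific), you can list each PCI device together with the kernel driver that claimed it; a device with no driver line is the one to worry about:

      # List PCI devices and the kernel driver bound to each one.
      # A device missing a "Kernel driver in use:" line has no driver loaded.
      lspci -k

      # Confirm which kernel version the host is actually running.
      uname -r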

    omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
    SM-SC846 (24 bay) | H8DME-2 | 2x AMD Opteron hex-core 2431 @ 2.4 GHz | 49 GB RAM
    PSU: Silencer 760 Watt ATX Power Supply
    IPMI | 3x SAT2-MV8 PCI-X | 4 NIC: 2x Realtek + 1 Intel Pro dual-port PCI-e card
    OS on 2x 120 GB SSD in RAID-1
    DATA: 3x 3 TB | 4x 2 TB | 2x 1 TB

  • Proxmox is using the 4.4 kernel (LTS and basically the same as the Ubuntu 16 kernel) which has much better hardware support than ESXi 6.0. What hardware are you running that ESXi supports that Proxmox doesn't?

    I will use an HP MicroServer Gen8 with a Celeron G1610T and 4 GB of RAM. In my mind, I had decided to use ESXi because it has better fan management (fans at 6% with ESXi versus 20% with Proxmox)...
    But I'm still open, if we can solve my problem :)

    Home Server: HP ProLiant Gen8 | CPU: G1610T | RAM: 8 GB DDR3 | OS Disk: Kingston SSDNow E-Series 32 GB | Storage Disks: 4x Seagate NAS HDD 3 TB - RAID 5 | OS: OMV 2.2.13 Stoneburner + Docker

  • Frankly, I do not see how you can solve the problem if your hardware does not support passthrough.
    Without it you cannot mount your drives within the VM.
    The only other solution is to mount the drives in the hypervisor and create an NFS share on them, then do a remote mount in the OMV VM. But then you cannot really manage the drives in OMV; you will have to do it all from ESXi.
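
    A rough sketch of both steps (the paths, network range, and IP below are made-up examples, and the host-side part assumes a Linux hypervisor such as Proxmox):

      # On the host: check whether the IOMMU (Intel VT-d / AMD-Vi) is active.
      # No output here usually means PCI passthrough is not available.
      dmesg | grep -e DMAR -e IOMMU

      # Fallback: export a host-mounted data directory over NFS.
      echo '/srv/data 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
      exportfs -ra

      # Inside the OMV VM: remote-mount the export.
      mount -t nfs 192.168.1.10:/srv/data /mnt/data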

    omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
    SM-SC846 (24 bay) | H8DME-2 | 2x AMD Opteron hex-core 2431 @ 2.4 GHz | 49 GB RAM
    PSU: Silencer 760 Watt ATX Power Supply
    IPMI | 3x SAT2-MV8 PCI-X | 4 NIC: 2x Realtek + 1 Intel Pro dual-port PCI-e card
    OS on 2x 120 GB SSD in RAID-1
    DATA: 3x 3 TB | 4x 2 TB | 2x 1 TB

    • Official Post

    HP MicroServer Gen8

    This is completely supported by Proxmox.


    I had decided to use ESXi because it has better fan management (fans at 6% with ESXi versus 20% with Proxmox)

    ESXi obviously has fan management for your system; Proxmox doesn't have fan management in the web interface. The fan defaults are clearly different, but that doesn't mean the hardware isn't supported. Personally, I would not run the fans below 20%, because at that speed they are barely doing anything and the hard drives and CPU will run hotter than usual.
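
    If you do experiment with lower fan duty cycles, it is worth actually watching the temperatures; a minimal check with standard tools (the drive device name is an example) could be:

      # CPU/board temperatures (needs the lm-sensors package).
      sensors

      # Drive temperature via SMART (needs smartmontools); /dev/sda is an example.
      smartctl -A /dev/sda | grep -i temperature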

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel
    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4
    omv-extras.org plugins source code and issue tracker - github - changelogs

    Please try ctrl-shift-R and read this before posting a question.
    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Frankly, I do not see how you can solve the problem if your hardware does not support passthrough.
    Without it you cannot mount your drives within the VM.
    The only other solution is to mount the drives in the hypervisor and create an NFS share on them, then do a remote mount in the OMV VM. But then you cannot really manage the drives in OMV; you will have to do it all from ESXi.

    After reading this answer, I have reconsidered my plan. Thanks.

    This is completely supported by Proxmox.

    ESXi obviously has fan management for your system; Proxmox doesn't have fan management in the web interface. The fan defaults are clearly different, but that doesn't mean the hardware isn't supported. Personally, I would not run the fans below 20%, because at that speed they are barely doing anything and the hard drives and CPU will run hotter than usual.

    Yeah, I agree with you, but noise is an important factor for me.
    In the end, I chose to install OMV directly on my SSD and virtualise the applications I really need with Docker. So far, I'm really happy with it. <3
    But thanks anyway for answering my concerns/problems. :)

    Home Server: HP ProLiant Gen8 | CPU: G1610T | RAM: 8 GB DDR3 | OS Disk: Kingston SSDNow E-Series 32 GB | Storage Disks: 4x Seagate NAS HDD 3 TB - RAID 5 | OS: OMV 2.2.13 Stoneburner + Docker

  • If noise control is so important, I would suggest replacing the fans with a silent type like Noctua or similar.
    The only one you cannot replace or control is the one in the PSU, and even that can be managed by replacing the PSU with a higher-grade model. The only stopping factor is cost.


    I have built my server on an old SuperMicro chassis; it is maybe an 8-9 year old 4U 24-bay box.
    I bought it directly from a refurbisher. When it came in and I plugged the sucker in, I thought I was at the airport: that thing was loud. So I got a bunch of silent fans from the local MicroCenter and modded the box to hell and back. The silent PSUs for the box were $200+ each, not a good fit as I needed 2 (redundant PSUs) and the whole box cost me $400 including shipping. So I gutted the whole setup and put in a regular ATX 760 W PSU. Yes, not redundant, but silent. If I ever want redundant PSUs, I can get an adapter for a dual ATX PSU setup.
    I dumped all 8 80 mm fans and put in 2 silent 80 mm fans on the back wall and 4 140 mm fans in place of the fan wall in the middle. The airflow is the same but the whole rig is almost silent.

    omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
    SM-SC846 (24 bay) | H8DME-2 | 2x AMD Opteron hex-core 2431 @ 2.4 GHz | 49 GB RAM
    PSU: Silencer 760 Watt ATX Power Supply
    IPMI | 3x SAT2-MV8 PCI-X | 4 NIC: 2x Realtek + 1 Intel Pro dual-port PCI-e card
    OS on 2x 120 GB SSD in RAID-1
    DATA: 3x 3 TB | 4x 2 TB | 2x 1 TB

  • Yeah, that is also what I figured out during my research. A CPU upgrade will cost me at least 200 euros, and I would also need to buy the P410 RAID card + battery kit. For that money I can buy a second-hand server with more cores and threads for around 250 euros. So I'll have two machines: one just for storage (and the small services running with Docker now) and later a dedicated machine for ESXi/Proxmox.

    Home Server: HP ProLiant Gen8 | CPU: G1610T | RAM: 8 GB DDR3 | OS Disk: Kingston SSDNow E-Series 32 GB | Storage Disks: 4x Seagate NAS HDD 3 TB - RAID 5 | OS: OMV 2.2.13 Stoneburner + Docker

  • I was thinking of moving my OMV to a VM server as well, but I never found an option for my external drives that didn't require a workaround, and workarounds to me have always been hit or miss. As I have 16 TB of space in my external HDD enclosures, it's hard for me to just drop them. My current OMV system is working for the most part (reboots force me to remount the drives); I have set up a cron job to do this, but it doesn't always work. I do have time to figure out a solution.
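
    For what it's worth, the usual reason an @reboot cron job misses external enclosures is that they are not ready yet when cron fires; two common workarounds, sketched with placeholder values (the delay, UUID, and mount point are examples to adapt):

      # In crontab: give the USB enclosures time to spin up before mounting.
      @reboot sleep 60 && /bin/mount -a

      # Or in /etc/fstab: do not hang the boot on a slow or absent enclosure.
      # The UUID is a placeholder - use blkid to find the real one.
      UUID=xxxx-xxxx  /srv/external  ext4  defaults,nofail,x-systemd.device-timeout=30  0  2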


    One other thing: how do I tell, from inside a VM's OS, whether my PCI Express USB 3.1 card is working?
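
    On the USB question, a quick way to check from inside the VM (generic Linux commands, nothing hypervisor-specific):

      # Is the USB controller visible on the VM's PCI bus?
      lspci | grep -i usb

      # Did the xhci driver (which handles USB 3.x) bind to it?
      dmesg | grep -i xhci

      # Are devices plugged into the card actually enumerated?
      lsusb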

  • Yeah, that is also what I figured out during my research. A CPU upgrade will cost me at least 200 euros, and I would also need to buy the P410 RAID card + battery kit. For that money I can buy a second-hand server with more cores and threads for around 250 euros. So I'll have two machines: one just for storage (and the small services running with Docker now) and later a dedicated machine for ESXi/Proxmox.


    So what did you finally do?


    I have the same hardware setup and am thinking about virtualizing OMV in ESXi...

  • This sounds very interesting to me.
    Is there a tutorial for a (semi-)noob with limited proficiency?
