OMV move from Physical to VM?

    • Official post

    Proxmox 4.3 was just released, and its 4.4 kernel isn't a Red Hat kernel. It is the Ubuntu 16.04 4.4 kernel.

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hey @ryecoaaron, thanks for all the input. My current setup is OMV on an SSD and 4x4TB HDDs in a SnapRAID+AUFS setup. If I installed Proxmox on a separate SSD and ran OMV as a VM, what would be the best way to get my current OMV settings back, along with my existing SnapRAID+AUFS setup from my hard drives, without recreating it all and restoring from backup?

    • Official post

    If you pass through your current SSD and the four 4TB hard drives to a newly created VM (delete the initial hard drive created during VM creation), you would just have to fix the networking and it should run (Linux is good that way :) )
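    A rough sketch of what that looks like on the Proxmox host; the VM ID (100) and the by-id device names below are placeholders for your own.

```shell
# On the Proxmox host: list stable device names for the SSD and HDDs
ls -l /dev/disk/by-id/

# Attach each physical disk to the VM (VM ID 100 is hypothetical).
# Use free scsiN slots after deleting the disk made at VM creation.
qm set 100 -scsi0 /dev/disk/by-id/ata-YOUR_SSD_SERIAL
qm set 100 -scsi1 /dev/disk/by-id/ata-YOUR_4TB_SERIAL_1
qm set 100 -scsi2 /dev/disk/by-id/ata-YOUR_4TB_SERIAL_2
qm set 100 -scsi3 /dev/disk/by-id/ata-YOUR_4TB_SERIAL_3
qm set 100 -scsi4 /dev/disk/by-id/ata-YOUR_4TB_SERIAL_4
```

    Inside the guest the NIC name will usually change, so "fix the networking" mostly means editing /etc/network/interfaces to match the new interface name.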


  • Another approach (mine) using Proxmox & OMV:


    - Proxmox is installed on a 32GB SSD.
    - All the external storage (xxxTB) is installed and given to Proxmox, defined via LVM.
    - OMV (VM) has 2 disks: 8 GB for the system and xxTB for data. Storage is defined in the Proxmox GUI (LVM).
    - Every VM uses the same approach. OMV is only used as a NAS for the physical machines, video, ...


    Advantages:
    All storage is handled/controlled/checked by Proxmox. No need to pass through. Very easy to maintain.
    If any machine needs more storage, the size (of the second disk) can be increased (via LVM).
    If all the storage given to Proxmox is used up, add storage to Proxmox (under the same VG) and again you can resize the LV for a given VM.
    For VMs, access to storage is quicker through Proxmox (for the rest of the physical machines it is a bit slower).
    Much more flexible for creating/destroying VMs, e.g. testing new releases of OMV (currently testing ZFS 3.03).
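    A sketch of that grow path (the VM ID and sizes are made up; the in-guest step depends on whether the disk is partitioned and which filesystem it carries):

```shell
# On the Proxmox host: grow the VM's LVM-backed data disk by 100G
qm resize 100 scsi1 +100G

# Inside the guest: if the filesystem sits directly on the disk
# (no partition table), an online ext4 grow is one command:
resize2fs /dev/sdb

# With a partition table, grow the partition first (growpart is
# in the cloud-guest-utils package), then resize the filesystem:
growpart /dev/sdb 1
resize2fs /dev/sdb1
```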


    Regards

    OMV 4.x. OMV-Extras ZFS iSCSI Infiniband. Testing OMV 5.1. Testing OMV arm64


  • It would be lovely, but I'm no good when it comes to LVM, or ZFS.
    I would risk using LVM or ZFS for the OS drive, but how do I manage my data safely?
    I have this setup right now:


    BOX: Supermicro SC846
    MB: H8DME-2 with CPU: 2x AMD Opteron hex-core 2431 @ 2.4GHz for a total of 12 cores
    RAM: 49GB DDR2 PC-5300 @ 667MHz ECC
    NIC: 2x onboard Realtek +
    1 Intel Pro dual-port PCIe card for a total of 4 ports
    DRIVES:
    OS: 2x240 SSD
    DATA: 2x3T HDD --> SnapRAID parity
    4x2T HDD
    2x1T HDD



    Currently running OMV 3.0.40 with SnapRAID+MergerFS;
    all data drives are formatted with btrfs.


    This post really made me think about moving to a Proxmox+OMV setup.
    I even spun up a Proxmox VM to try it out.


    but having some reservations for now.


    My concerns are, in order of importance:


    #1. Managing and expanding the data storage.
    What is the easiest way of doing it?
    OMV provides a nice and somewhat easy UI for that, but how would I do it in a Proxmox setup?


    #2. Data pool safety and integrity.
    If a drive fails, how do I avoid losing data and keep running while recovering?
    How do I monitor drives and catch a failure before it happens? I know nothing is 100%, but one can try.
    Again, OMV has a nice plugin for running SMART and emailing if something comes up. Proxmox???


    #3. ??? not sure yet what to ask here.

    omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
    SM-SC846(24 bay)| H8DME-2 |2x AMD Opteron Hex Core 2431 @ 2.4Ghz |49GB RAM
    PSU: Silencer 760 Watt ATX Power Supply
    IPMI |3xSAT2-MV8 PCI-X |4 NIC : 2x Realteck + 1 Intel Pro Dual port PCI-e card
    OS on 2×120 SSD in RAID-1 |
    DATA: 3x3T| 4x2T | 2x1T

  • -LVM is very simple to use. Its main advantage is the ability to expand storage. I repeat: it is very easy to use!


    Proxmox is a virtualization GUI that lets you create/manage virtual machines. Of course it has (or can have) all the Linux utilities you might need. Proxmox handles LVM from its GUI (there are lots of tutorials).
    -ZFS is not so simple, but seems to have much more power (currently testing)


    Proxmox supports ZFS out of the box (I've not yet had the time to test it).


    If you like to 'play', definitely go with Proxmox:
    You can create a virtual machine, install OMV, add 4 disks to create a RAIDZ2 pool, destroy 1 or 2 disks and verify that the data is still there. Add two new drives and watch how they are resilvered. And finally add 4 more disks and join them to the pool to obtain double the storage.


    Once you have decided on the configuration, make your OMV serve your data, and when another OMV release is out, create a new VM and test it. If you like the new release, you can 'move' your disks to the new VM, and you are running the 'new' OMV release with your 'old' data.


    (Note: ZFS for OMV 3.0.x is currently in beta.)


    • Official post

    -LVM is very simple to use. Its main advantage is the ability to expand storage. I repeat: it is very easy to use!

    LVM can also be dangerous if the disks underneath it don't have redundancy and the LVs/VGs are spread across multiple disks.
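    A quick way to check whether a VG is in that situation, i.e. which physical disks it and its LVs actually sit on:

```shell
# List physical volumes and the volume group each belongs to;
# a VG backed by several PVs spans those disks, so losing any
# one of them can take out LVs that extend across it.
pvs -o pv_name,vg_name,pv_size

# Show exactly which devices each logical volume occupies
lvs -o lv_name,vg_name,devices
```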


  • LVM can also be dangerous if the disks underneath it don't have redundancy and the LVs/VGs are spread across multiple disks.

    And herein lies the problem.
    My setup does not have hardware RAID, and I don't really want it,
    but LVM was designed to make a RAIDed setup more flexible and extendable.


    Furthermore, if I want to use ZFS it would be on the host (Proxmox),
    not the VM, unless I am passing the raw disks into the VM; all management would therefore be on the host.
    If I am passing disks into the VM, then I can do the same thing I do now: btrfs with SnapRAID+MergerFS.


    One of the issues I have on my hands is that I have a mixture of disks of different sizes.
    ZFS and LVM will not let me use them in a single pool. While having several pools does make sense in specific situations (e.g. I can use my 1TB drives to store the VM data and configs as well as ISOs),
    I would like to pool the rest of the drives into one big storage pool.
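    For what it's worth, pooling mixed-size disks is exactly what mergerfs does; a typical /etc/fstab line (the /srv/disk* mount points are assumptions) looks like:

```shell
# /etc/fstab: pool all data-disk mounts into one mergerfs branch.
# category.create=mfs places new files on the branch with the most
# free space, which copes fine with unequal disk sizes.
/srv/disk*  /srv/pool  fuse.mergerfs  defaults,allow_other,use_ino,category.create=mfs  0  0
```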


  • First of all, I must say that I have no experience with SnapRAID + MergerFS, so I may be missing something important.


    Agree with you:
    For data disks, use ZFS under Proxmox (with whatever RAID level you choose).
    Pass a ZFS volume to the VM (no need for RAID inside the VM).
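    One sketch of what "pass a ZFS volume to the VM" can look like on the Proxmox host; the pool name (tank), VM ID (100) and size are placeholders:

```shell
# Create a 4T zvol, i.e. a block device backed by the ZFS pool
zfs create -V 4T tank/omv-data

# Hand the zvol to the OMV VM as a plain disk; ZFS provides the
# redundancy and checksumming underneath, so no RAID in the guest
qm set 100 -scsi1 /dev/zvol/tank/omv-data
```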


    You can also use your 2x SSDs to hold (besides the Proxmox system) the system disks for some of the VMs (I have Proxmox running on a 32 GB SSD), so your 'special' VMs can benefit from their speed.


  • I do not think I will move to ZFS right now; too many mismatched disks for my liking.
    I guess I will rebuild my setup with Proxmox on ZFS/RAID1 on my SSDs
    and pass the data disks, minus the 2x1TB, to the OMV VM.
    I will use the 1TB ZFS RAID1 for VM data and system backups on the host;
    the rest is OMV...


  • There is no btrfs plugin.
    It's just that v3.0 is on a newer kernel, so it supports btrfs. To have btrfs support on v2 you need to run the backports kernel.


    Sent from my SGH-T889 using Tapatalk


  • If you are running v2, there is no option in the UI. The system just supports btrfs, as in you can add drives preformatted with btrfs, and use the command line to install the btrfs tools and manage the drives. As is, without the backports kernel, OMV does not recognize drives with btrfs and thinks they are empty. So, SSH to the server and run apt-get install btrfs-tools,
    and you can create a filesystem on the new drive. Once you do so it will show up in the File Systems tab.
    Or if you already have drives with btrfs, they should be available for mounting as soon as you add them.
    On my server, all my drives were mounted right after a reboot.
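    The CLI steps described above, roughly (sdX is a placeholder, and mkfs destroys whatever is on that disk, so double-check the device name first):

```shell
# On the OMV 2.x box, over SSH:
apt-get install btrfs-tools

# Create a btrfs filesystem on the new, empty drive
mkfs.btrfs /dev/sdX

# List the btrfs filesystems the tools can see; the new one
# should also show up in OMV's File Systems tab
btrfs filesystem show
```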


    On version 3 you can format a drive with btrfs from the File Systems tab. No CLI needed.




  • Hello!


    I'm using this thread because I think my problem is quite similar. My HP N54L has unfortunately died (RIP), leaving my data inaccessible. As you can see in my description, I have a software RAID 5 of 4x3TB, managed by OMV 2.
    I already bought a new server (HP MicroServer Gen8, Celeron G1610T & 4GB), and I'm planning to set up a small home lab with ESXi, without forgetting its primary function: the NAS part.


    So I wonder: how can I remount my RAID 5 in an OMV VM under ESXi?


    I don't mind if I need to buy some hardware parts.


    Thanks in advance

    Home Server: HP ProLiant Gen8 | CPU: G1610T | RAM: 8 GB DDR3 | OS Disk: Kingston SSDNow E-Series 32 GB | Storage Disks: 4x Seagate NAS HDD 3 TB - RAID 5 | OS: OMV 2.2.13 Stone burner + Docker

  • I think the only way is to pass through all the drives into the VM and reassemble the RAID there.
    Several issues jump out here.
    #1. Does the hardware support drive passthrough in ESXi?
    #2. Do you know how to do that in ESXi?
    #3. Do you have enough drive bays for all the data drives and system drives in the new server?
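    Assuming the old array is Linux software RAID (OMV's RAID management is mdadm-based), reassembly inside the new VM is usually just a scan; the md device name below may differ on your system:

```shell
# Inside the OMV VM, with all four member disks passed through:
apt-get install mdadm

# Scan the disks for existing RAID superblocks and reassemble
mdadm --assemble --scan

# Verify the array came up (the md name, e.g. md127, may differ)
cat /proc/mdstat
mdadm --detail /dev/md127
```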



    If it were me, I would load the drives into a spare PC, back up the data onto external storage, and try all of it on empty drives. I would not play with the live data if I could help it.


    Also, is there any reason you went with ESXi? What advantages does it give you over other solutions?
    Just curious, so no obligation to answer that one :)


  • Hello vl1969


    #1. No, the Celeron has no VT-d, but I can change it. Otherwise, the hardware is fully supported by ESXi. HP has even made a special ESXi image for this server.
    #2. I have no clue, but I'm not afraid to learn and play around with it until I have a setup which fulfills my needs.
    #3. I will use the 4x3TB directly for my data, and recycle my old Kingston 32GB SSD as the datastore for ESXi. ESXi itself will be installed on a microSD card.


    I already have a backup of my important data. The rest can be found again.


    I went with ESXi because, compared to Proxmox, ESXi has better compatibility with my hardware. I could also run OMV as the host and then use the VirtualBox plugin or Docker, but I already did that; I want to learn hypervisors. That's why ESXi :)


    • Official post

    I went with ESXi because, compared to Proxmox, ESXi has better compatibility with my hardware.

    Proxmox is using the 4.4 kernel (LTS, and basically the same as the Ubuntu 16.04 kernel), which has much better hardware support than ESXi 6.0. What hardware are you running that ESXi supports but Proxmox doesn't?

