OMV on Proxmox

    • Official post

    With Proxmox you can install on ZFS RAID 1 right from the installer; you do not need to install Debian first.

    You can't use ZFS for the OS drive with the Proxmox installer. So, if you wanted a mirrored OS drive (mdadm or ZFS), you would have to install Debian first. Otherwise, Proxmox sets up LVM for the OS drive. I suppose you could make an LVM mirror.
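
    For what it's worth, LVM can mirror an existing logical volume after the fact; a rough sketch, assuming the default "pve" volume group and an empty second disk /dev/sdb (device names are placeholders):

    # add the second disk to the pve volume group
    pvcreate /dev/sdb
    vgextend pve /dev/sdb
    # convert the root LV to RAID1, with the second leg on /dev/sdb
    lvconvert --type raid1 --mirrors 1 pve/root /dev/sdb
    # watch the sync progress
    lvs -a -o name,copy_percent pve
    # note: /boot and the bootloader would still live only on the first disk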

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Yes you can. Version 4.3 gives you several options during install, including ZFS in RAID format.



    omv 3.0.56 erasmus | 64 bit | 4.7 backport kernel
    SM-SC846 (24 bay) | H8DME-2 | 2x AMD Opteron Hex Core 2431 @ 2.4GHz | 49GB RAM
    PSU: Silencer 760 Watt ATX Power Supply
    IPMI | 3x SAT2-MV8 PCI-X | 4 NICs: 2x Realtek + 1 Intel Pro dual-port PCIe card
    OS on 2×120GB SSD in RAID-1
    DATA: 3x3T | 4x2T | 2x1T

  • Okay, that sounds interesting. But for ZFS I should go with ECC RAM. Is it possible to install guests on this ZFS pool as well? Do I have to create a second partition, or does Proxmox install them there anyway? And what do I have to do to add a second disk for building the VMs' virtual disks?

  • I can not answer that.
    Also, I believe the new Proxmox gives you an option for regular RAID too; I just don't remember. You can try running the setup yourself, or I will try it later on and let you know. I am sure about the ZFS option, but not mdadm.


    As for a second partition, I think Proxmox installs guests on the main drive by default; you will have to reconfigure the setup to move the guests to another location.





    • Official post

    Yes you can. Version 4.3 gives you several options during install, including ZFS in RAID format.

    That must be new in 4.2/4.3 then. I haven't done a fresh install since 4.1. You definitely can't use mdadm RAID, though (just checked); Proxmox doesn't even install the mdadm package.


  • It is new, I think starting from 4.3.


    And yes, RAID is only available using ZFS.


    What you do is boot from the Proxmox installer and click the Options button when prompted to choose a hard drive. On the selection screen, if you choose ZFS as the file system, you will be able to select anything from RAID0 up to RAIDZ-3 as the install layout and pick the drives used to assemble the volume.
    It works, and works very well; I have several test VMs built out nicely. I still have not pulled the trigger on the actual server, though, and now that I have a failed drive in it I need to fix that first. Luckily the drive that failed has no data on it whatsoever, except some old ISOs and VM images I do not use, a leftover from playing around with VirtualBox in the last build.
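
    If you want to double-check what the installer built, a quick look from the shell (assuming the default root pool name "rpool"):

    zpool status rpool   # shows the RAID layout and per-drive health
    zfs list             # the installer creates rpool/ROOT and rpool/data datasets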




    PS: "Niemand", if you want to use RAID of any kind other than ZFS, you will need to install Debian first and then install Proxmox on top of it via the CLI. There are several HOW-TOs on this on HowtoForge, and there is one on the Proxmox wiki. If you don't mind ZFS, it is straightforward from the Proxmox installer.
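
    From memory, the wiki procedure boils down to adding the Proxmox repository on top of a plain Debian install; roughly (the suite name and key URL must match your release, so double-check the wiki):

    # add the Proxmox VE repository and its key (jessie shown as an example)
    echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-install-repo.list
    wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
    apt-get update && apt-get dist-upgrade
    # pull in the Proxmox kernel and management packages
    apt-get install proxmox-ve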


    SATA2 should be just fine; in fact, I believe my home server is on SATA2 now.
    Also, ECC RAM is good to have for ZFS, but unless you really build your server all out on ZFS it is not that important; if you only use ZFS for the system drive in RAID 1, it does not seem to make much difference, as there is not enough strain for it to matter, IMHO. However, if you do plan to build out a full server, storage and all, on ZFS, then you should consider ECC RAM.


    Edited once, last by vl1969 ()

    • Official post

    Does anyone know what Ceph is good for? If I understood it correctly, it is for clustering? Does it make sense to use it at home?

    It is good for lots of things but it is not for home.


    • Official post

    Why not? What about clustering and iSCSI?

    Why are you torturing yourself? Ceph is a distributed filesystem designed to run on lots of servers to serve files/storage/etc. to lots of other servers. iSCSI is useful if you want to give other systems storage, but you need lots of fast networking to make it work well. Most of this stuff is aimed at enterprise setups and is not easy to set up or maintain. None of it works with OMV except iSCSI, which has a plugin. A home system does not need any of this. What are you trying to accomplish?


  • Back.


    HDD passthrough is working fine, but over Samba I only get 50 MB/s write speed. It doesn't matter whether it is a passed-through HDD or an LVM container; with OMV3 bare metal and MergerFS I get 110 MB/s.
    Changing the cache mode doesn't have an effect either...


    And the biggest problem: HD-Idle or Spindown itself doesn't work!

    Did you ever resolve your write-speed issues, and if so, what did you change?


    Thanks,
    Aaron

    Supermicro A1SAM-C2550F
    2xKingston 8GB 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1600 (PC3 12800) Server Memory w/TS Hynix B Model
    16GB SSD for Proxmox
    64GB SSD for VM (OMV 2.2.13 & Win7 x64)

  • I solved many problems this way:
    I got two servers instead of one.
    Server 1: OMV bare metal, just file-server duties.
    Server 2: Proxmox with Ubuntu Server, Debian, Mint, Win7 and OMV3 --> OMV3 is only used for internet-facing services (FTP, Plex (music only), VPN, ...) where I don't need high transfer rates. But if I copy files now it's about 112 MB/s (a different server than before; I think it was because of the older chipset of the AM3+ board).

  • I wish I could solve mine like that :)


    But I only have one server box, so I still can't decide which way to go.


  • Yeah, as I have gone through this process I have wondered why the hell I don't just get two servers and spend money to save time. But then I finally got Proxmox up and running with 2 VMs, and last night I wanted to test another OS, so I just created another VM and had it up and running in no time. It was really nice not to have to plan downtime, swap boot devices, etc.


  • Hello there,


    As I have a question regarding OMV virtualization with Proxmox, I'll take my chance here instead of starting a new topic :)


    Here is my current environment @home:


    2x Proxmox physical servers (i5/16GB NUCs) hosting several HA VMs (DNS, DHCP, Firewall, Apache, Gateway, remote DSL/4G access, DL tools, etc, ...)
    2x Synology NAS (412+/415+)
    2x Proxmox virtual servers (VBox), one on each NAS


    All four Proxmox instances are in the same cluster and have a quorum vote. Only the physical hosts run VMs (HA group).
    Outside of VBox, the NAS boxes serve iSCSI/NFS/CIFS and xFTP/rsync (+ some Syno tools).


    I've planned to replace the oldest Syno with an HP µserv Gen8 running OMV3. So far it is installed (i3-3240/8GB), and most data has been migrated, including the VBox instance. I had to play around with kernels, using Aaron's VBox packages, and regarding iSCSI I had to configure it outside of OMV (ietadm), as a File IO LUN is not available through the openmediavault-iscsi plugin.
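
    For anyone hitting the same limitation: a file-backed LUN under IET looks roughly like this in /etc/iet/ietd.conf (target name and backing path are made up):

    Target iqn.2017-01.local.omv1:storage.lun0
        # file-backed LUN; the image file must already exist (e.g. created with dd or truncate)
        Lun 0 Path=/srv/iscsi/lun0.img,Type=fileio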


    On another topic I created about iSCSI, Aaron pointed out that I could virtualize OMV and the iSCSI target. I can see some advantages in this solution.


    1.- If I install Proxmox on the Gen8, I can get rid of the 2x VBox instances, as I'll have a 3rd physical node to add to the cluster.
    2.- I'm no longer dependent on VBox (and iSCSI) regarding the kernel I use in OMV, which makes upgrades safer.
    3.- The remaining Syno will get rid of its last "community" package and won't suffer upgrade outages either.


    Now the expected drawbacks:


    1.- I've created a single R5 using 4x2TB whole disks on the Gen8:


    root@omv1:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sda[0] sdd[3] sdc[2] sdb[1]
    5860150272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
    bitmap: 2/15 pages [8KB], 65536KB chunk


    I'd like, if possible, to keep the data already transferred... The Gen8 CPU has VT-x but not VT-d; could I assign the 4 disks to the OMV VM and recover the md? Or should I mount the md under Proxmox directly and ???


    2.- If the answer to #1 is yes, I still won't have storage left to create the boot disk for the OMV VM except on Proxmox's boot SSD, which means it won't be mirrored. The same goes for the iSCSI VM, which will need its own storage to create the LUNs ...


    Maybe the easiest solution (as all the data is still available on the NAS being replaced) is to restart from scratch. So how do I manage those 4 disks in order to get the following (one possible approach is sketched after the list)?


    1.- A dedicated 100GB R5 for the VMs' boot disks
    2.- A dedicated 500GB R5 for the iSCSI LUNs
    3.- A dedicated R5 with the remaining space for OMV usage
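
    One hypothetical way to carve that up, assuming the disks are /dev/sda through /dev/sdd: give each disk three partitions and build one RAID5 per partition set. Since a 4-disk RAID5 yields three times the per-member capacity, ~34GiB members give ~100GiB usable, ~167GiB give ~500GiB, and the rest goes to OMV:

    # three partitions per disk (repeat for sdb, sdc and sdd); sgdisk is in the gdisk package
    sgdisk -n 1:0:+34G -n 2:0:+167G -n 3:0:0 /dev/sda
    # one RAID5 per partition set
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1
    mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[abcd]2
    mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[abcd]3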


    Thanks a lot in advance for your time and advice!


    Olivier

    • Official post

    The Gen8 CPU has VT-x but not VT-d; could I assign the 4 disks to the OMV VM and recover the md? Or should I mount the md under Proxmox directly and ???

    I don't think you can pass a drive through without VT-d. If you did have VT-d, you could pass each drive through and the array would assemble in the guest. I ran my server under Proxmox like that for a long time. Mounting the array in Proxmox wouldn't allow you to keep your data, but it would solve question #2.
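
    For reference, mapping a whole drive into a VM is done against the VM config; a hypothetical example, assuming VM ID 100 and placeholder disk IDs:

    # attach two member disks to the guest by their stable by-id paths
    qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD20EFRX_SERIAL1
    qm set 100 -scsi2 /dev/disk/by-id/ata-WDC_WD20EFRX_SERIAL2
    # repeat for the remaining members; the guest then assembles the md itself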


  • I don't think you can pass a drive through without VT-d. If you did have VT-d, you could pass each drive through and the array would assemble in the guest. I ran my server under Proxmox like that for a long time. Mounting the array in Proxmox wouldn't allow you to keep your data, but it would solve question #2.

    It looks like I always miss something to make this configuration viable ...
    I've double-checked, and my processor does not have VT-d, so no passthrough for the OMV VM. Next.


    I could install mdadm directly on Proxmox, but the wiki says mdraid is not supported in any version of Proxmox VE ... OK, let's install it anyway. It will probably discover my R5 out of the box, and I'll be able to mount it under /var/lib/vz in order to have 5+ TB of local storage. The data is still present in the md, but I can't just present a host directory to the guest, right?
    So let's get rid of the data; I'll just have to restart the rsync tasks from the Syno, not a big deal.
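
    If mdadm does pick the array up, the steps are short; a sketch, assuming it appears as /dev/md127 as in the mdstat output above:

    apt-get install mdadm                              # scans for existing arrays on install
    mdadm --assemble --scan                            # assemble anything with a superblock
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf     # persist across reboots
    mount /dev/md127 /var/lib/vz                       # assumes the md carries a filesystem
    # (an fstab entry, and moving the old /var/lib/vz contents over first, is the cleaner route)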


    Should I keep this single md and use it to host qcow2 images? Should I create several vDisks for the OMV VM, like a 32GB one for the OS and a 5TB one for the data? And the same for the "iSCSI VM"?


    Advice welcome!


    Thanks,


    Olivier

    • Official post

    I could install mdadm directly on Proxmox, but the wiki says mdraid is not supported in any version of Proxmox VE ... OK, let's install it anyway. It will probably discover my R5 out of the box, and I'll be able to mount it under /var/lib/vz in order to have 5+ TB of local storage. The data is still present in the md, but I can't just present a host directory to the guest, right?

    I installed mdadm on my Proxmox host and used it for a long time with no problems. Proxmox sees the array as a mount point, just like a single filesystem; it just doesn't have any web-interface sections to manage it. The data will still be present. Your VMs just can't see those files.
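
    To let the web interface use such a mount point for images, it can be registered as directory storage; a sketch with made-up names, assuming the array is mounted at /mnt/md0:

    pvesm add dir md-store --path /mnt/md0 --content images,iso,backup
    pvesm status   # the new storage should now be listed alongside "local"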


    Should I keep this single md and use it to host qcow2 images?

    Yes


    Should I create several vDisks for the OMV VM?

    At least two, since you can't store data on the OS drive. I usually make the OS drive 16GB.
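
    A hypothetical qm invocation for that layout, assuming a storage named "md-store" on the array (the name, VM ID and sizes are placeholders):

    # 16GB OS disk plus a big data disk, both qcow2 on the md-backed storage
    qm create 100 --name omv --memory 2048 --net0 virtio,bridge=vmbr0 \
        --scsi0 md-store:16,format=qcow2 \
        --scsi1 md-store:5000,format=qcow2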

