OMV on Proxmox

  • Hello Aaron,


    I've reinstalled my server as a Proxmox node and made an OMV3 VM.
    So far everything's OK but I have 2 questions, more related to Proxmox than OMV, but as it's kind of a cross-topic thread maybe you'll help anyway ;)


    1.- My VM is based on 2x qcow2 disks, vda 16GB and vdb 5TB. How can I back up only vda under Proxmox? All my VMs have been based on a single vdisk so far ...
    Found the "no backup" option at the vdisk level ...


    2.- As the RAID 5 is managed by Proxmox now, what kind of monitoring should I use for both SMART and MD health?


    Thanks in advance,


    Olivier

  • @Belokan


    Exclude vdb with backup=0 in your vmid.conf. Have a look at this thread as an example:


    https://forum.proxmox.com/thre…ackup-on-proxmox-4.25807/
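
    A minimal sketch of what that looks like in /etc/pve/qemu-server/<vmid>.conf (the VM ID, storage name and sizes here are placeholders, adjust to your own setup); only the disk line carrying backup=0 is skipped by vzdump:

    virtio0: local:100/vm-100-disk-1.qcow2,size=16G
    virtio1: local:100/vm-100-disk-2.qcow2,backup=0,size=5120G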


    Greetings Hoppel

    ----------------------------------------------------------------------------------
    openmediavault 6 | proxmox kernel | zfs | docker | kvm
    supermicro x11ssh-ctf | xeon E3-1240L-v5 | 64gb ecc | 8x10tb wd red | digital devices max s8
    ---------------------------------------------------------------------------------------------------------------------------------------

    • Official post

    what kind of monitoring should I use for both SMART and MD health ?

    Command line? smartctl and cat /proc/mdstat work for me.
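
    For example (device and array names are just placeholders for your own RAID members):

    # SMART health summary and full attributes for one member disk
    smartctl -H /dev/sda
    smartctl -a /dev/sda
    # software RAID status (shows degraded/resync state)
    cat /proc/mdstat
    # more detail on a specific array, e.g. /dev/md0
    mdadm --detail /dev/md0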

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    Do you think 2x vCPU and 2GB RAM should be enough for the OMV VM?

    Yes.


  • Hi,


    I'll copy/paste a topic I've opened on the Proxmox forum regarding an I/O issue. Maybe you'll be able to help!


    -------------------


    Hello,


    I have at home a small 3-node cluster running PVE 4.4.12. It's based on 2x NUCs with Core i5 and 16GB RAM plus, formerly, a virtual node based on a VirtualBox instance on a NAS used only for quorum purposes.


    As I've planned to replace one of my NAS boxes (Syno) with an HP MicroServer Gen8, I've installed PVE on it too in order to get rid of the virtual node and virtualize the NAS instead.


    The HP boots from a dedicated SSD and I've configured a RAID 5 based on 4x 2TB disks, which is mounted under /var/lib/vz and acts as local storage.


    When I "bench" the local storage, using rsync --progress or scp, I'm able to write from local SDD to R5 at an average of ~250MB/s and from a remote client at an average of ~110MB/s (limited by the 1GB connection).


    I've created an OpenMediaVault 3 VM locally with a 16GB virtio/qcow2 disk for the OS and an extra 4.5TB virtio/qcow2 disk for the data:


    bootdisk: virtio0
    cores: 4
    ide2: none,media=cdrom
    memory: 4096
    name: vmomv1
    net0: virtio=7E:6F:E9:E8:3B:D0,bridge=vmbr0
    net1: virtio=B6:F1:4E:D1:A7:61,bridge=vmbr1
    numa: 0
    onboot: 1
    ostype: l26
    scsihw: virtio-scsi-pci
    smbios1: uuid=4f89b895-e7a0-46ee-a95f-6a441a116191
    sockets: 1
    virtio0: local:114/vm-114-disk-1.qcow2,size=16G
    virtio1: local:114/vm-114-disk-2.qcow2,backup=0,size=4608G


    When I write files to this VM (I've made tests with a basic Jessie VM too), whether using SMB, NFS or scp/rsync, throughput starts around 80MB/s for a few seconds and then drops to a few MB/s, sometimes stalls, then increases to 50MB/s and so on ... The average is about 15MB/s for my tests based on a single 2GB file.


    During that period, Proxmox shows an IO delay of about 1 to 10% in the GUI. If someone could help me explain/tweak/analyze this behavior I'd be really grateful!


    PS: I've tried adding RAM and vCPUs to the guest, and tried several kernels (3.16/4.8/4.9), disk emulation modes (ide/scsi/virtio) and caching options with no luck ...


    -----------------


    Thanks in advance for your help



    Olivier

    • Official post

    What vCPU type are you using? I always use host. I would definitely stay with virtio for SCSI and networking. And not that it would necessarily help, but do you have qemu-guest-agent installed (rough sketch below)? I'd be curious to see what kind of throughput you get with one drive. The Proxmox forum may not be very helpful since they don't support mdadm RAID.
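
    Roughly what that would look like (VM ID 114 is taken from your config above; the agent option only does something once the agent is also installed inside the guest):

    # on the Proxmox host: set the vCPU type to host and enable the guest agent
    qm set 114 -cpu host
    qm set 114 -agent 1
    # inside the OMV/Debian guest:
    apt-get install qemu-guest-agent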


  • Hello,


    I've installed qemu-guest-agent and enabled it in the VM options. I've changed the vCPU type from the default to host without much change ...
    So far I'm not able to "break" the R5 in order to test on a single disk, but my benchmarks on the host itself showed that performance was more than satisfactory.


    Olivier


    EDIT: Should I get different performance by replacing qcow2 with raw?
    EDIT2: I've just mounted the VM's NFS share on the host and ran a tar backup to the R5, and OMV's GUI showed about 100% CPU usage during the process (4 vCPUs). Is that normal?

  • Hi,


    I've created a dedicated test VM in order to avoid stopping/starting my OMV instance, and provisioned it with different disk formats (raw, qcow2 and vmdk). vmdk aside (very bad performance), there's no big difference between raw and qcow2.
    I've removed the write barrier (barrier=0) in the host's fstab for the local storage, and performance seems a bit more stable; the kind of mount line I mean is sketched below.
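
    A sketch of such an fstab entry on the host (the md device name and ext4 are assumptions about how the R5 is formatted, adjust to your own layout):

    # /etc/fstab on the host: mount the md array without write barriers
    /dev/md0  /var/lib/vz  ext4  defaults,barrier=0  0  2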


    I'm doing some tests tuning the vm.dirty* ratios (along the lines of the example below), but I'm not sure if I should modify the host, the guest or both ... We'll see.
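
    The kind of tweak I'm experimenting with (values are only a starting point, not a recommendation):

    # try lower dirty-page thresholds at runtime (on the host and/or the guest)
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=10
    # make permanent via /etc/sysctl.conf if they help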


    I can't see why a software RAID 5 on the host (which performs perfectly so far) has such a big impact on the guests ... Is that why mdadm is not supported by Proxmox?

    • Official post

    It isn't supported because there is no web interface section for it. Maybe most of their customers are using hardware RAID? I have no idea why you have such bad performance. I have installed Proxmox on a lot of systems and haven't seen this issue.


    Here is a test from the host's boot disk (SSD) to the host's R5 (4x 2TB SATA):


    root@pve3:/tmp# rsync --progress 2gb.file /var/lib/vz/test/
    2gb.file
    2,007,521,280 100% 198.95MB/s 0:00:09 (xfr#1, to-chk=0/1)


    This is just after a reboot and "2gb.file" is a random Linux ISO copied/renamed for the test, so there's no caching involved ... And performance looks pretty good to me for an entry-level server.
    So the problem is definitely not the host's I/O. But as I've tested several VMs (not only the OMV one) with the same issue, there's only Proxmox left in the middle, right?
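
    For comparison, the equivalent test inside the guest would be something like this (the destination path is hypothetical, just an OMV-style mount of the virtio data disk):

    root@vmomv1:/tmp# rsync --progress 2gb.file /srv/dev-disk-by-label-data/test/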


    Have a nice day.

    • Official post

    But as I've tested several VMs (not only the OMV one) with the same issue, there's only Proxmox left in the middle, right?

    Looks that way.



    I'll gain VT-d and then I'll be able to pass the R5 disks through "as they are" directly to the VM, right?

    Yes.


  • Hello,


    I'm back with a Xeon and with my 4 SATA disks directly attached to the VM (roughly as sketched below).
    As the R5 (on the PVE node) was hosting both the BOOT and DATA vdisks, I've moved the BOOT disk to Ceph and copied as much data as possible from the DATA qcow2 to the R5 before destroying the DATA disk (which was already using about 60% of the R5).
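
    A sketch of how the physical disks are handed to the VM (VM ID 114 is from the earlier config; the by-id names are placeholders for my real serials, and the virtio bus is an assumption, sata would work the same way):

    # on the PVE host: pass each physical disk to the VM by its stable by-id path
    qm set 114 -virtio1 /dev/disk/by-id/ata-WDC_WD20EFRX_SERIAL1
    qm set 114 -virtio2 /dev/disk/by-id/ata-WDC_WD20EFRX_SERIAL2
    qm set 114 -virtio3 /dev/disk/by-id/ata-WDC_WD20EFRX_SERIAL3
    qm set 114 -virtio4 /dev/disk/by-id/ata-WDC_WD20EFRX_SERIAL4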


    I'm OK with performance now as it's far above the simple 1Gbit link that serves the OMV instance.


    Question: I've moved to a Xeon because VT-d was not available on my i3, but I've seen some threads on the Proxmox forum where people were able to run VMs with SATA pass-through without VT-d ... Any idea what the status of that is? Does it just not work without VT-d, or is it more "virtualized", or ???


    Thanks !
