New openmediavault-kvm plugin

    • Official post

    5.1.6 is in the repo. If you stop the pool and then start it, it will set the autostart.
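
    (For anyone doing this from the CLI instead of the plugin UI, a rough virsh equivalent is sketched below; the pool name vm-hdd is a placeholder borrowed from the logs later in this thread.)

    Code
    virsh pool-destroy vm-hdd      # stop (deactivate) the pool; data is untouched
    virsh pool-start vm-hdd        # start it again
    virsh pool-autostart vm-hdd    # explicitly mark the pool for autostart at boot
    virsh pool-info vm-hdd         # verify the State and Autostart fields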

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

    • Official post

    I think it tries to activate before md0 is mounted

    The libvirtd service is already starting after local-fs, so I'm not sure what I could do from the plugin to help with this. Maybe the autostart didn't get enabled. What is the output of: sudo virsh pool-list
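
    For reference, the output should look something like this; the pool names and states below are purely illustrative (taken from the log messages further down), and --all also lists inactive pools:

    Code
    $ sudo virsh pool-list --all
     Name     State      Autostart
    --------------------------------
     iso      active     yes
     vm-hdd   active     yes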


  • The libvirtd service is already starting after local-fs, so I'm not sure what I could do from the plugin to help with this. Maybe the autostart didn't get enabled. What is the output of: sudo virsh pool-list

    As you can see, my /dev/md0 gets mounted last.


    see /proc/self/mounts



    Here is some of the log:


    23:18 monit: 'mountpoint_srv_dev-disk-by-id-md-name-omv5.hans.lan-vol1' status failed (1) -- /srv/dev-disk-by-id-md-name-omv5.hans.lan-vol1 is not a mountpoint

    23:17 monit: 'filesystem_srv_dev-disk-by-id-md-name-omv5.hans.lan-vol1' unable to read filesystem '/srv/dev-disk-by-id-md-name-omv5.hans.lan-vol1' state

    23:17 monit: Filesystem '/srv/dev-disk-by-id-md-name-omv5.hans.lan-vol1' not mounted

    23:17 monit: Lookup for '/srv/dev-disk-by-id-md-name-omv5.hans.lan-vol1' filesystem failed -- not found in /proc/self/mounts

    23:17 monit: 'proftpd' process is not running

    23:17 libvirtd: internal error: Failed to autostart storage pool 'vm-hdd': cannot open directory '/srv/dev-disk-by-id-md-name-omv5.hans.lan-vol1/Volume1/vm': No such file or directory

    23:17 libvirtd: cannot open directory '/srv/dev-disk-by-id-md-name-omv5.hans.lan-vol1/Volume1/vm': No such file or directory

    23:17 libvirtd: internal error: Failed to autostart storage pool 'iso': cannot open directory '/srv/dev-disk-by-id-md-name-omv5.hans.lan-vol1/Volume1/iso': No such file or directory

    23:17 libvirtd: cannot open directory '/srv/dev-disk-by-id-md-name-omv5.hans.lan-vol1/Volume1/iso': No such file or directory

    23:17 smbd: daemon_ready: STATUS=daemon 'smbd' finished starting up and ready to serve connections


    root@omv5:/proc/self# cat mounts

    sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0

    proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0

    udev /dev devtmpfs rw,nosuid,relatime,size=16353120k,nr_inodes=4088280,mode=755 0 0

    devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0

    tmpfs /run tmpfs rw,nosuid,noexec,relatime,size=3275600k,mode=755 0 0

    /dev/sdh1 / ext4 rw,noatime,nodiratime,errors=remount-ro 0 0

    securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0

    tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0

    tmpfs /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0

    tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0

    cgroup2 /sys/fs/cgroup/unified cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate 0 0

    cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,name=systemd 0 0

    pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0

    none /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0

    cgroup /sys/fs/cgroup/net_cls,net_prio cgroup rw,nosuid,nodev,noexec,relatime,net_cls,net_prio 0 0

    cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpu,cpuacct 0 0

    cgroup /sys/fs/cgroup/rdma cgroup rw,nosuid,nodev,noexec,relatime,rdma 0 0

    cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0

    cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0

    cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0

    cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0

    cgroup /sys/fs/cgroup/pids cgroup rw,nosuid,nodev,noexec,relatime,pids 0 0

    cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0

    cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0

    cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0

    hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0

    systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=40,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=185 0 0

    mqueue /dev/mqueue mqueue rw,relatime 0 0

    debugfs /sys/kernel/debug debugfs rw,relatime 0 0

    sunrpc /run/rpc_pipefs rpc_pipefs rw,relatime 0 0

    tmpfs /tmp tmpfs rw,relatime 0 0

    nfsd /proc/fs/nfsd nfsd rw,relatime 0 0

    /dev/sdg1 /srv/dev-disk-by-id-ata-Samsung_SSD_860_EVO_1TB_S3Z9NB0KC50997K-part1 btrfs rw,relatime,ssd,space_cache,subvolid=5,subvol=/ 0 0

    /dev/sdf1 /srv/dev-disk-by-id-ata-SanDisk_SD7TB3Q-256G-1006_151144401754-part1 btrfs rw,relatime,ssd,space_cache,subvolid=5,subvol=/ 0 0

    /dev/sdf1 /export/ssd btrfs rw,relatime,ssd,space_cache,subvolid=5,subvol=/ 0 0

    /dev/sde1 /srv/dev-disk-by-uuid-f035e8af-65c5-4f1e-b492-64ada042e657 ext4 rw,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group 0 0

    /dev/sdh1 /var/folder2ram/var/log ext4 rw,noatime,nodiratime,errors=remount-ro 0 0

    folder2ram /var/log tmpfs rw,relatime 0 0

    /dev/sdh1 /var/folder2ram/var/tmp ext4 rw,noatime,nodiratime,errors=remount-ro 0 0

    folder2ram /var/tmp tmpfs rw,relatime 0 0

    /dev/sdh1 /var/folder2ram/var/lib/openmediavault/rrd ext4 rw,noatime,nodiratime,errors=remount-ro 0 0

    folder2ram /var/lib/openmediavault/rrd tmpfs rw,relatime 0 0

    /dev/sdh1 /var/folder2ram/var/spool ext4 rw,noatime,nodiratime,errors=remount-ro 0 0

    folder2ram /var/spool tmpfs rw,relatime 0 0

    /dev/sdh1 /var/folder2ram/var/lib/rrdcached ext4 rw,noatime,nodiratime,errors=remount-ro 0 0

    folder2ram /var/lib/rrdcached tmpfs rw,relatime 0 0

    /dev/sdh1 /var/folder2ram/var/lib/monit ext4 rw,noatime,nodiratime,errors=remount-ro 0 0

    folder2ram /var/lib/monit tmpfs rw,relatime 0 0

    /dev/sdh1 /var/folder2ram/var/cache/samba ext4 rw,noatime,nodiratime,errors=remount-ro 0 0

    folder2ram /var/cache/samba tmpfs rw,relatime 0 0

    192.168.2.1:/volume1/omv /mnt/nfs nfs4 rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=192.168.3.1,local_lock=none,addr=192.168.2.1 0 0

    /dev/sdf1 /srv/dev-disk-by-id-ata-SanDisk_SD7TB3Q-256G-1006_151144401754-part1/ssd/docker/btrfs btrfs rw,relatime,ssd,space_cache,subvolid=5,subvol=/ 0 0

    /dev/sdf1 /export/ssd/docker/btrfs btrfs rw,relatime,ssd,space_cache,subvolid=5,subvol=/ 0 0

    nsfs /run/docker/netns/77ed2202af78 nsfs rw 0 0

    nsfs /run/docker/netns/5e5e697b16d2 nsfs rw 0 0

    nsfs /run/docker/netns/b5a0f9e25932 nsfs rw 0 0

    nsfs /run/docker/netns/a8e448f28249 nsfs rw 0 0

    tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=3275596k,mode=700 0 0

    /dev/md0 /srv/dev-disk-by-id-md-name-omv5.hans.lan-vol1 btrfs rw,relatime,space_cache,subvolid=5,subvol=/ 0 0

    /dev/md0 /export/Volume1 btrfs rw,relatime,space_cache,subvolid=5,subvol=/ 0 0

    • Official post

    Here is some of the log:

    I understand why it happens. It just isn't the fault of the plugin. Maybe the same ugly workaround I added for docker would work.


    As root (not sudo):

    Code
    mkdir -p /etc/systemd/system/libvirtd.service.d
    cat <<EOF > /etc/systemd/system/libvirtd.service.d/waitAllMounts.conf
    [Unit]
    After=local-fs.target $(systemctl list-units --type=mount | grep /srv | awk '{ print $1 }' | tr '\n' ' ')
    EOF
    systemctl daemon-reload

    -OR-

    Code
    mkdir -p /etc/systemd/system/libvirtd.service.d
    cat <<EOF > /etc/systemd/system/libvirtd.service.d/waitAllMounts.conf
    [Unit]
    $(xmlstarlet sel -t -m "//path" -v . -n /etc/libvirt/storage/autostart/*.xml | xargs -I@ echo "RequiresMountsFor="@)
    EOF
    systemctl daemon-reload
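
    A note on the difference between the two variants: the first only adds After= ordering against whatever /srv mount units exist when the snippet is run, while the second derives RequiresMountsFor= entries from the pool paths in the autostart XML files, which systemd expands into both a requirement and an ordering on the matching mount units. Either way, you can confirm the drop-in was picked up with something like:

    Code
    systemctl cat libvirtd                        # the waitAllMounts.conf drop-in should be shown
    systemctl show libvirtd -p After              # the /srv mount units should appear here
    systemctl show libvirtd -p RequiresMountsFor  # populated if you used the second variant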


    Edited once, most recently by ryecoaaron ()

  • I understand why it happens. It just isn't the fault of the plugin. Maybe the same ugly workaround I added for docker would work.


    As root (not sudo):

    Code
    mkdir -p /etc/systemd/system/libvirtd.service.d
    cat <<EOF > /etc/systemd/system/libvirtd.service.d/waitAllMounts.conf
    [Unit]
    After=local-fs.target $(systemctl list-units --type=mount | grep /srv | awk '{ print $1 }' | tr '\n' ' ')
    EOF
    systemctl daemon-reload

    -OR-

    Hi,

    Many thanks! I used this, rebooted the system, and it works.

    You are the best.


    A little side note: I still get the mails from monit saying

    The system monitoring needs your attention.



    Host: omv5


    Date: Sun, 27 Jun 2021 11:24:38


    Service: mountpoint_srv_dev-disk-by-id-md-name-omv5.hans.lan-vol1


    Event: Status failed


    Description: status failed (1) -- /srv/dev-disk-by-id-md-name-omv5.hans.lan-vol1 is not a mountpoint



    This triggered the monitoring system to: alert




    You have received this notification because you have enabled the system monitoring on this host.


    To change your notification preferences, please go to the 'System | Notification' or 'System | Monitoring' page in the web interface.

  • Is it possible to install the plugin on an RPi4B?


    I am getting the following error:



    omv 5.5.23-1 usul arm64

    omv 5.5.23-1 usul x64


    • Official post

    Is it possible to install the plugin on an RPi4B?

    I have done a little testing on an RPi4, but it is not a good candidate for VMs. Why do you want VMs on an RPi? It is extremely limited in CPU and RAM for VMs.


  • VMware published a version of ESXi for the RPi 4, as it seems to be a cheap test bed.


    omv 6.9.6-2 (Shaitan) on RPi CM4/4GB with 64bit Kernel 6.1.21-v8+

    2x 6TB 3.5'' HDDs (CMR) formatted with ext4 via 2port PCIe SATA card with ASM1061R chipset providing hardware supported RAID1


    omv 6.9.3-1 (Shaitan) on RPi4/4GB with 32bit Kernel 5.10.63 and WittyPi 3 V2 RTC HAT

    2x 3TB 3.5'' HDDs (CMR) formatted with ext4 in Icy Box IB-RD3662-C31 / hardware supported RAID1

    For Read/Write performance of SMB shares hosted on this hardware see forum here

    • Official post

    VMware published a version of ESXi for the RPi 4, as it seems to be a cheap test bed.

    Yes, and I have tried it. It is basically only usable on the 8GB model and intended for IoT use cases. I also didn't say the kvm plugin doesn't work (hence why I tested on the RPi); I just said it isn't a good candidate. I am a VMware admin at work and go to VMworld every year. ESXi on ARM was not written with only the RPi in mind. There are many non-RPi ARM boards with expandable RAM and many more CPU cores on which it would be reasonable to use ESXi. No reason to try and point out that I am wrong.


  • Is there a way of pruning my VM list?

    I have way too many VMs now, but I would like to keep them backed up elsewhere.

    I know I can copy/move/back up the storage (qcow2) and the XML config file for each VM.

    Q1. But how can I remove them from the KVM plugin VM list?

    Q2. And can they be restored to the KVM plugin VM list, if need be?

    • Official post

    But how can I remove them from the KVM plugin VM list?

    If you delete the VM (not delete+storage), then the virtual disks are still around. It does not keep the XML around, but that should be easy to recreate.


    And can they be restored to the KVM plugin VM list, if need be?

    Create a new VM and attach an existing disk. If you had more than one disk, add the additional existing disks after the VM is created.

    • Official post

    Can I cheat with virsh define /PathToBackup/OriginalVm.xml to restore a VM back into the KVM plugin?

    Yep.
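
    Putting this exchange together, a full backup/restore round trip might look like the sketch below; the VM name myvm and the paths are placeholders:

    Code
    # backup: copy the virtual disk and dump the domain XML
    cp /path/to/pool/myvm.qcow2 /PathToBackup/
    virsh dumpxml myvm > /PathToBackup/OriginalVm.xml

    # remove the (shut-off) VM from the list; the disk image stays on disk
    virsh undefine myvm

    # restore it later
    virsh define /PathToBackup/OriginalVm.xml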

  • Hi

    After the update (probably to 5.6.12.1), the host bridge stopped working: no DHCP and no static IP.

    The Docker network is working, and the normal network is also working.

    Please help.

    dmesg gives this:

    [ 624.378703] br0: port 2(vnet0) entered blocking state

    [ 624.378705] br0: port 2(vnet0) entered disabled state

    [ 624.378755] device vnet0 entered promiscuous mode

    [ 624.378893] br0: port 2(vnet0) entered blocking state

    [ 624.378894] br0: port 2(vnet0) entered forwarding state

    [ 624.574755] audit: type=1400 audit(1625400135.588:28): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="libvirt-158cd1bc-1f2d-43b8-9981-271892c21089" pid=6964 comm="apparmor_parser"

    [ 627.008944] kvm [6966]: vcpu0, guest rIP: 0xffffffff97e7f8f4 disabled perfctr wrmsr: 0xc2 data 0xffff

    [ 675.828241] br0: port 2(vnet0) entered disabled state

    [ 675.830947] device vnet0 left promiscuous mode

    [ 675.830953] br0: port 2(vnet0) entered disabled state

    [ 676.266663] audit: type=1400 audit(1625400187.280:29): apparmor="STATUS" operation="profile_remove" profile="unconfined" name="libvirt-158cd1bc-1f2d-43b8-9981-271892c21089" pid=7142 comm="apparmor_parser"


    My host-bridge config:

    virsh # net-info host-bridge

    Name: host-bridge

    UUID: 955efacb-d92d-4e96-8a98-b674fcf05bf8

    Active: yes

    Persistent: yes

    Autostart: yes

    Bridge: br0
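
    (For comparison: a bridge-mode libvirt network like this normally contains little more than a forward mode and the bridge name, so dumping the definition is a quick way to check the update didn't change it. The XML in the comments is the expected general shape, not your actual output.)

    Code
    virsh net-dumpxml host-bridge
    # expected shape, roughly:
    # <network>
    #   <name>host-bridge</name>
    #   <forward mode='bridge'/>
    #   <bridge name='br0'/>
    # </network>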



    The default network is WORKING:


    virsh # net-info default

    Name: default

    UUID: e440718a-96b4-4949-b8af-016c8c619ace

    Active: yes

    Persistent: yes

    Autostart: yes

    Bridge: virbr0


    Netplan config


    root@omv5:/etc/netplan# cat 10-openmediavault-default.yaml
    network:
      version: 2
      renderer: networkd
    root@omv5:/etc/netplan# cat 60-openmediavault-br0.yaml
    network:
      ethernets:
        eno1:
          addresses: []
          dhcp4: false
          dhcp6: false
          wakeonlan: true
      bridges:
        br0:
          addresses:
            - 192.168.3.1/22
          gateway4: 192.168.2.254
          dhcp4: false
          dhcp6: false
          link-local: []
          nameservers:
            addresses:
              - 192.168.2.254
            search: [hans.lan]
          interfaces:
            - eno1



    -rw-r--r-- 1 root root 43 May 2 12:35 10-openmediavault-default.yaml

    -rw-r--r-- 1 root root 381 May 2 12:35 60-openmediavault-br0.yaml
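
    (If the netplan side is suspect, a quick sanity check along these lines may help; the interface names are taken from the config above.)

    Code
    netplan try              # re-apply the config with automatic rollback on failure
    ip -br addr show br0     # the bridge should carry 192.168.3.1/22
    bridge link              # eno1 (and any vnetX ports) should be attached to br0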

  • Hi,

    I have a strange one: I've been trying to delete the VM below, but it just doesn't happen. I've managed to delete the VHD. It doesn't matter whether I choose "Delete" or "Delete + Storage". Any ideas on what I can do?

    Normally, if it's in red, it's bad!!!


    Machine 1 - Dell OptiPlex 790 - Core i5-2400 3.10GHz - 16GB RAM - OMV5

    Machine 2 - Raspberry PI4 - ARMv7 - 2GB - OMV5
