Yeah, that didn't work. I tried a different browser as well, same problem.
New openmediavault-kvm plugin
- ryecoaaron
- Closed
-
By chance, are you using an older version of the plugin, AND do you have a space in the name?
-
After the update (probably to 5.6.12.1) the host bridge stopped working: no DHCP or static IP.
apparmor isn't helping you. I would try removing it.
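For completeness, removing AppArmor outright (rather than only stopping the service) would look something like this on Debian; a sketch, since stopping alone leaves already-loaded profiles in the kernel:

```shell
# Sketch: fully removing AppArmor on Debian. Stopping the service does not
# unload profiles that are already in the kernel; a purge plus reboot does.
sudo systemctl stop apparmor
sudo systemctl disable apparmor
sudo apt-get purge -y apparmor     # remove the package and its profiles
sudo reboot                        # profiles are only fully gone after a reboot
```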
Any ideas on what I can do?
Don't create VMs with spaces in the name. The plugin hasn't allowed them in a long time. virsh doesn't understand them. You could try:
sudo virsh undefine 'Android TV 1'
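Since the shell splits arguments on whitespace, a space-containing name has to be quoted so virsh receives it as one argument; a quick way to confirm the exact name first (a sketch):

```shell
virsh list --all                     # shows the exact VM names, spaces included
sudo virsh undefine 'Android TV 1'   # quotes keep the name as a single argument
```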
-
apparmor isn't helping you. I would try removing it.
Hi, I disabled apparmor and rebooted, but it's still not working:
root@omv5:/etc/netplan# systemctl stop apparmor
root@omv5:/etc/netplan# systemctl disable apparmor
Synchronizing state of apparmor.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable apparmor
Removed /etc/systemd/system/sysinit.target.wants/apparmor.service.
-
I disabled apparmor and rebooted, but it's still not working
I would've removed it, but I guess stopping should work. Is non-KVM traffic working on the bridge br0? Does netplan apply give you any output? Did you try rebooting? Does everything look right in ip a? Since virsh says the bridge is ok, I'm not sure what else the plugin could do to fix this. Maybe docker did something weird with iptables? What kernel are you using?
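The checks above can be gathered with a few read-only commands (a sketch; br0 is the bridge name used in this thread):

```shell
ip a                                        # addresses and link state
bridge link                                 # per-port bridge state
sudo netplan apply                          # should be silent on success
sysctl net.ipv4.ip_forward                  # 1 = routing enabled
sysctl net.bridge.bridge-nf-call-iptables   # 1 = bridged frames hit iptables
                                            #     (only exists if br_netfilter is loaded)
```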
-
Don't create VMs with spaces in the name. The plugin hasn't allowed them in a long time. virsh doesn't understand them. You could try:
sudo virsh undefine 'Android TV 1'
Thank you so much, I've learnt my lesson now. No more VM names with spaces.
-
I would've removed it, but I guess stopping should work. Is non-KVM traffic working on the bridge br0? Does netplan apply give you any output? Did you try rebooting? Does everything look right in ip a? Since virsh says the bridge is ok, I'm not sure what else the plugin could do to fix this. Maybe docker did something weird with iptables? What kernel are you using?
Hi,
netplan apply doesn't give any output.
Traffic is working over br0; NAS functions are OK, and virbr0 is working OK.
iptables looks OK, all to everywhere.
Kernel I use:
Debian GNU/Linux, with Linux 5.10.0-0.bpo.7-amd64
It's like there is no traffic flowing between vnet0 <=> br0.
brctl show
bridge name bridge id STP enabled interfaces
br0 8000.6c3be50c9b0a no eno1
vnet0
docker0 8000.02425c559cdf no veth9a42e33
vethc418ea9
vethc537dbe
vethff32255
virbr0 8000.525400ec5f57 yes virbr0-nic
bridge link
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 4
6: vethc537dbe@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
8: vethc418ea9@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
10: veth9a42e33@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
12: vethff32255@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
14: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 master virbr0 state disabled priority 32 cost 100
15: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 100
brctl showstp br0
br0
bridge id 8000.6c3be50c9b0a
designated root 8000.6c3be50c9b0a
root port 0 path cost 0
max age 20.00 bridge max age 20.00
hello time 2.00 bridge hello time 2.00
forward delay 15.00 bridge forward delay 15.00
ageing time 300.00
hello timer 0.00 tcn timer 0.00
topology change timer 0.00 gc timer 18.89
flags
eno1 (1)
port id 8001 state forwarding
designated root 8000.6c3be50c9b0a path cost 4
designated bridge 8000.6c3be50c9b0a message age timer 0.00
designated port 8001 forward delay timer 0.00
designated cost 0 hold timer 0.00
flags
vnet0 (2)
port id 8002 state forwarding
designated root 8000.6c3be50c9b0a path cost 100
designated bridge 8000.6c3be50c9b0a message age timer 0.00
designated port 8002 forward delay timer 0.00
designated cost 0 hold timer 0.00
flags
net.ipv4.ip_forward = 1
This is ip a:
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
link/ether 6c:3b:e5:0c:9b:0a brd ff:ff:ff:ff:ff:ff
3: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 6c:3b:e5:0c:9b:0a brd ff:ff:ff:ff:ff:ff
inet 192.168.3.1/22 brd 192.168.3.255 scope global br0
valid_lft forever preferred_lft forever
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:5c:55:9c:df brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:5cff:fe55:9cdf/64 scope link
valid_lft forever preferred_lft forever
6: vethc537dbe@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether ca:7b:c2:a1:47:bc brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::c87b:c2ff:fea1:47bc/64 scope link
valid_lft forever preferred_lft forever
8: vethc418ea9@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether a6:07:72:73:3f:6a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::a407:72ff:fe73:3f6a/64 scope link
valid_lft forever preferred_lft forever
10: veth9a42e33@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether be:35:c8:7c:11:d6 brd ff:ff:ff:ff:ff:ff link-netnsid 2
inet6 fe80::bc35:c8ff:fe7c:11d6/64 scope link
valid_lft forever preferred_lft forever
12: vethff32255@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether c6:48:b7:2e:87:4a brd ff:ff:ff:ff:ff:ff link-netnsid 3
inet6 fe80::c448:b7ff:fe2e:874a/64 scope link
valid_lft forever preferred_lft forever
13: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:ec:5f:57 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
14: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:ec:5f:57 brd ff:ff:ff:ff:ff:ff
root@omv5:~# iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:domain
ACCEPT tcp -- anywhere anywhere tcp dpt:domain
ACCEPT udp -- anywhere anywhere udp dpt:bootps
ACCEPT tcp -- anywhere anywhere tcp dpt:67
Chain FORWARD (policy DROP)
target prot opt source destination
ACCEPT all -- anywhere 192.168.122.0/24 ctstate RELATED,ESTABLISHED
ACCEPT all -- 192.168.122.0/24 anywhere
ACCEPT all -- anywhere anywhere
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
REJECT all -- anywhere anywhere reject-with icmp-port-unreachable
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT udp -- anywhere anywhere udp dpt:bootpc
Chain DOCKER (1 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:9000
ACCEPT tcp -- anywhere 172.17.0.3 tcp dpt:8000
ACCEPT tcp -- anywhere 172.17.0.2 tcp dpt:8000
ACCEPT tcp -- anywhere 172.17.0.4 tcp dpt:9090
ACCEPT tcp -- anywhere 172.17.0.4 tcp dpt:http-alt
ACCEPT tcp -- anywhere 172.17.0.5 tcp dpt:8082
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
-
Kernel I use:
Debian GNU/Linux, with Linux 5.10.0-0.bpo.7-amd64
I haven't been impressed with the Debian 5.10 backports kernel. I use the Proxmox 5.11 kernel everywhere.
iptables looks OK, all to everywhere.
I agree. That looks ok.
Have you tried stopping and starting the network? Are you sure it isn't the guest? Sorry, not coming up with any ideas.
-
Hi,
I would've removed it, but I guess stopping should work. Is non-KVM traffic working on the bridge br0? Does netplan apply give you any output? Did you try rebooting? Does everything look right in ip a? Since virsh says the bridge is ok, I'm not sure what else the plugin could do to fix this. Maybe docker did something weird with iptables? What kernel are you using?
I think I found the problem.
I put in an extra network card, created a bridge br1, defined a bridge with virsh, and rebooted the machine.
Then the bridge br0 started working again.
But what I see is that docker0 goes to br1:
brctl show
bridge name bridge id STP enabled interfaces
br0 8000.6c3be50c9b0a no eno1
vnet0
br1 8000.90e2ba2958c4 no enp2s0f0
docker0 8000.02425e070b9f no veth1697619
vetha1b497f
vethc4138da
vetheaa4ab6
virbr0 8000.525400ec5f57 yes virbr0-nic
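For reference, defining an existing host bridge as a libvirt network (as done above for br1) usually looks like this; a sketch with an assumed network name br1-net:

```shell
# Write a minimal libvirt network definition that points at an existing
# host bridge. Forward mode "bridge" does no NAT, unlike the default virbr0.
cat > /tmp/br1-net.xml <<'EOF'
<network>
  <name>br1-net</name>
  <forward mode="bridge"/>
  <bridge name="br1"/>
</network>
EOF
virsh net-define /tmp/br1-net.xml    # register the network with libvirt
virsh net-start br1-net              # activate it now
virsh net-autostart br1-net          # activate it on boot
```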
-
Also reverted the kernel to omv5 5.10.0-0.bpo.5-amd64, to no avail.
When I remove the network card (br1) and undefine the network in virsh, it stops working.
-
But what I see is that docker0 goes to br1
It seems docker is doing something to steal br0 then. Are you setting up any networks with docker-compose?
-
option to clone VMs using virt-clone
Finally got to use the Clone VM feature.
It works great, thanks ryecoaaron.
Could a new feature be added to the KVM plugin to backup storage and xml files to another location?
And another new feature to "Define VM from XML file"?
-
Could a new feature be added to the KVM plugin to backup storage and xml files to another location?
I am working on backup. That is not one of the easier things to add and still have it work with everything virsh.
And another new feature to "Define VM from XML file"?
There is no file chooser in OMV. So, it would have to be a manually specified path.
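Until such a feature exists, the manual equivalent is straightforward (a sketch; the VM name myvm and the path /srv/backup/myvm.xml are assumptions):

```shell
virsh dumpxml myvm > /srv/backup/myvm.xml   # export an existing VM definition
virsh define /srv/backup/myvm.xml           # (re)create the VM from the XML file
```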
-
It seems docker is doing something to steal br0 then. Are you setting up any networks with docker-compose?
Hi, I did some more testing.
When I remove docker and reboot the machine, KVM works.
Then I install docker, and docker + KVM both work (so no reboot).
If I reboot the machine again, KVM stops working.
On the 27th of June you had me run this (maybe this has something to do with it??):
root (not sudo):
Code -
maybe this has something to do with it??
All that does is make libvirtd wait to start until those filesystems are mounted. When kvm networking isn't working what is the output of: systemctl status libvirtd
-
All that does is make libvirtd wait to start until those filesystems are mounted. When kvm networking isn't working what is the output of: systemctl status libvirtd
This is in the working state (so docker installed after reboot):
systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/libvirtd.service.d
└─waitAllMounts.conf
Active: active (running) since Tue 2021-07-06 16:12:32 CEST; 1h 31min ago
Docs: man:libvirtd(8)
Main PID: 1850 (libvirtd)
Tasks: 33 (limit: 32768)
Memory: 16.8G
CGroup: /system.slice/libvirtd.service
├─ 1850 /usr/sbin/libvirtd
├─ 2010 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
├─ 2011 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
└─16835 /usr/bin/qemu-system-x86_64 -name guest=win10-clone1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-win10-clone1/master-key.aes -machine pc-q
Jul 06 17:08:59 omv5 dnsmasq[2010]: using nameserver 192.168.2.254#53
Jul 06 17:08:59 omv5 dnsmasq[2010]: reading /etc/resolv.conf
Jul 06 17:08:59 omv5 dnsmasq[2010]: using nameserver 192.168.2.254#53
Jul 06 17:08:59 omv5 dnsmasq[2010]: reading /etc/resolv.conf
Jul 06 17:08:59 omv5 dnsmasq[2010]: using nameserver 192.168.2.254#53
Jul 06 17:08:59 omv5 libvirtd[1850]: host doesn't support hyperv 'spinlocks' feature
Jul 06 17:09:01 omv5 dnsmasq[2010]: reading /etc/resolv.conf
Jul 06 17:09:01 omv5 dnsmasq[2010]: using nameserver 192.168.2.254#53
Jul 06 17:09:01 omv5 dnsmasq[2010]: reading /etc/resolv.conf
Jul 06 17:09:01 omv5 dnsmasq[2010]: using nameserver 192.168.2.254#53
-
This is in the working state (so docker installed after reboot)
As expected, the waiting for mounts isn't causing a problem. docker is. I'm not sure what docker is doing to steal the bridge though.
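One common explanation, offered here as an assumption rather than a confirmed diagnosis: Docker loads the br_netfilter module, which makes layer-2 frames crossing br0 traverse the iptables FORWARD chain, whose default policy Docker sets to DROP. Two typical workarounds, sketched:

```shell
# Option 1: stop bridged frames from being filtered by iptables at all
# (persist by adding the key to /etc/sysctl.conf).
sudo sysctl -w net.bridge.bridge-nf-call-iptables=0

# Option 2: explicitly allow traffic on the KVM bridge in Docker's
# user chain (br0 is the bridge name used in this thread).
sudo iptables -I DOCKER-USER -i br0 -o br0 -j ACCEPT
```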
-
All that does is make libvirtd wait to start until those filesystems are mounted. When kvm networking isn't working what is the output of: systemctl status libvirtd
This is the output in the non-working state:
root@omv5:~# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/libvirtd.service.d
└─waitAllMounts.conf
Active: active (running) since Tue 2021-07-06 18:44:51 CEST; 1min 48s ago
Docs: man:libvirtd(8)
Main PID: 2694 (libvirtd)
Tasks: 20 (limit: 32768)
Memory: 53.0M
CGroup: /system.slice/libvirtd.service
├─2694 /usr/sbin/libvirtd
├─2856 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
└─2857 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/libvirt_leaseshelper
Jul 06 18:45:07 omv5 dnsmasq[2856]: using nameserver 192.168.2.254#53
Jul 06 18:45:09 omv5 dnsmasq[2856]: reading /etc/resolv.conf
Jul 06 18:45:09 omv5 dnsmasq[2856]: using nameserver 192.168.2.254#53
Jul 06 18:46:28 omv5 dnsmasq[2856]: reading /etc/resolv.conf
Jul 06 18:46:28 omv5 dnsmasq[2856]: using nameserver 192.168.2.254#53
Jul 06 18:46:28 omv5 dnsmasq[2856]: reading /etc/resolv.conf
Jul 06 18:46:28 omv5 dnsmasq[2856]: using nameserver 192.168.2.254#53
Jul 06 18:46:28 omv5 libvirtd[2694]: libvirt version: 5.0.0, package: 4+deb10u1 (Guido Günther <agx@sigxcpu.org> Thu, 05 Dec 2019 00:22:14 +0100)
Jul 06 18:46:28 omv5 libvirtd[2694]: hostname: omv5
Jul 06 18:46:28 omv5 libvirtd[2694]: internal error: End of file from qemu monitor
-
This is the output in the non-working state
Here are a couple of things to try:
https://serverfault.com/questi…ks-libvirt-bridge-network
http://blog.shahada.abubakar.n…r-breaks-kvm-bridge-fixed
But I like this one the best from https://bbs.archlinux.org/viewtopic.php?id=233727:
And restart docker using: