KVM Virtual Machines and bridged networking

  • I have been struggling to achieve what I thought was a very common way of connecting virtual machines to the LAN: bridged networking. It turned out to be extremely laborious, mostly because none of the many online tutorials on the matter provided anything close to a solution. So I'm posting my solution (?) here to share it, but also to validate whether it actually is a good solution.


    So I'm starting from a freshly installed OMV 5.5.3-1 (with the Proxmox kernel) and a KVM image that was previously running on another machine. I was able to import the image into Cockpit, and the VM has a network connection when I choose "Direct attachment".


    The problem is that "Direct attachment" means that the VM can access the LAN and the LAN can access the VM, but the VM cannot access the host. So bridge mode is what I want, which translates to "Bridge to LAN" in Cockpit. What Cockpit doesn't tell you is that if you choose "Bridge to LAN" as your interface type, your source must be an actual bridge, not an ethernet card or anything else. Even worse, Cockpit (at least the version that OMV installs) doesn't seem to provide any way of actually creating a bridge. Neither does OMV.


    So I'm back on the command line, but in order to find the right way of creating and managing a bridge, I need to know how OMV manages network devices. Is it via /etc/network/interfaces? It looks like it, because changes in that file (more precisely, in /etc/network/interfaces.d) are picked up on reboot. But that is also strange, because OMV/Debian 10 is supposed to have switched to netplan and systemd.


    So I decided to follow Major Hayden's well-known tutorial on how to create a bridge for virtual machines using systemd-networkd (well, almost: I used DHCP instead of static IPs). While it provided me with a br0 to select as a source for my VM in Cockpit, the VM never received an IP from the router (despite ip a showing that it actually was connected to the bridge, which itself did receive an IP). At some point I even managed to have the bridge get its own IP and the host get another one. But the VM never got any.
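    Side note for anyone following the same route: the systemd-networkd setup from that tutorial boils down to three small files in /etc/systemd/network/. This is only a sketch of the DHCP variant I used; the file names are arbitrary and enp0s31f6 is my ethernet card, so adjust to yours:

    ```ini
    # /etc/systemd/network/br0.netdev -- create the bridge device
    [NetDev]
    Name=br0
    Kind=bridge

    # /etc/systemd/network/br0.network -- the bridge itself gets its IP via DHCP
    [Match]
    Name=br0

    [Network]
    DHCP=ipv4

    # /etc/systemd/network/uplink.network -- enslave the ethernet card to the bridge
    [Match]
    Name=enp0s31f6

    [Network]
    Bridge=br0
    ```

    After creating these, a `systemctl restart systemd-networkd` makes the bridge appear.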


    So I figured that - for whatever reason - virtual machines are by default not allowed to properly join the club, so I dug deeper into how systemd-networkd works and figured that I need to somehow tell systemd that any VM showing up should get its IP via DHCP. I found instructions for telling systemd that any ethernet card that shows up should be referred to the DHCP server (use en* as a selector under [Match]), but I wasn't quite sure how to achieve the same for virtual machines. Since vnet0 kept popping up in different places while the VM was running, I tried vnet* and... tadaaa, it worked: my VM finally got an IP and was reachable from the LAN, while it could also reach the host. 8)


    So, more specifically, in addition to Major Hayden's instructions, I also created a mydhcp.network (in /etc/systemd/network/) that looks like this:

    Code: mydhcp.network
    [Match]
    Name=vnet*
    [Network]
    DHCP=ipv4

    This is not the end of the story yet, but let me pause here for a second and ask: am I the first one trying to run a VM in bridged mode? I don't think so. But why am I running into all these problems that no one else seems to run into? What am I doing wrong? Please let me know.


    And please let me know whether there is a better way of doing this, because even though networking now works flawlessly (as far as I can tell), networkctl still thinks that things are not working as they should (see line 6/IDX 4):

    Bash
    # networkctl
    IDX LINK TYPE OPERATIONAL SETUP
    1 lo loopback carrier unmanaged
    2 enp0s31f6 ether degraded configured
    3 br0 bridge routable configured
    4 vnet0 ether degraded configuring
    5 docker0 bridge routable unmanaged
    7 vethdd5f630 ether degraded unmanaged
    6 links listed.


    So if things are working fine, why don't I just ignore that vnet0 is degraded and stuck in "configuring"? Well, because "someone" does care about vnet0 being stuck in configuring (or rather: being reported as stuck in configuring), and that "someone" is systemd-networkd-wait-online, which in turn leads to Cockpit not starting properly:



    Apparently this is a bug in systemd (see https://github.com/systemd/systemd/issues/6441) but, again, if that is so (and if the bug has existed for three years), why are apparently so few people affected by it? What am I doing differently?


    And what is the best way of handling this bug? This answer suggests masking the service, but I'm not sure I want to do that, since the main reason I migrated to OMV is that I do not want to mess too much with the OS and would rather let OMV take care of these things...
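    For what it's worth, a less drastic alternative to masking that sometimes gets suggested is to tell systemd-networkd-wait-online to only wait for the interfaces you actually care about, via a service drop-in. A sketch (assuming br0 is the only interface worth waiting for):

    ```ini
    # /etc/systemd/system/systemd-networkd-wait-online.service.d/override.conf
    [Service]
    # The empty ExecStart= clears the original command before replacing it
    ExecStart=
    ExecStart=/lib/systemd/systemd-networkd-wait-online --interface=br0
    ```

    Followed by a `systemctl daemon-reload`. Whether that is cleaner than masking on an OMV-managed box is debatable, of course.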


    Edit: I solved this by doing my config in netplan instead of systemd. See this post below.

  • I have been trying to set up a bridge connection in Cockpit for 2 weeks without a solution.

    Is your method working??

    HP Microserver Gen8 , 2X3TB data , 1x4TB snapraid sync,SSD for OS. 16g Ecc ram.

    Plugin :

    mergerFs

    Snapraid

    fail2ban

    Docker:

    Pihole

    Transmission-vpn

    jackett

    Headphones

    Duckdns

    Wireguard

    Resilio-sync

  • Thanks for your feedback.

    So, it's really strange that nobody on this forum uses a VM with a bridged connection.

    Let's see, sooner or later a solution will be found ;)


  • OK, I finally got this to work properly by moving up one level from `systemd` to `netplan`.


    I found this:


    Code
    /etc/netplan$ ll
    total 16K
    drwxr-xr-x 2 root root 4.0K Jul 18 02:54 ./
    drwxrwxr-x 108 root root 4.0K Jul 21 17:00 ../
    -rw-r--r-- 1 root root 43 Jul 18 02:54 10-openmediavault-default.yaml
    -rw-r--r-- 1 root root 146 Jul 18 02:54 20-openmediavault-enp0s31f6.yaml

    So whatever you put in the OMV GUI under "Network" comes out here as a yaml file. Following that naming scheme, I created my own file for the bridge. Here it is:


    Code
    /etc/netplan/30-openmediavault-br0.yaml
    network:
      bridges:
        br0:
          dhcp4: yes
          dhcp6: no
          link-local: []
          interfaces:
            - enp0s31f6


    I also made sure that in `/etc/netplan/20-openmediavault-enp0s31f6.yaml` all dhcp entries are set to false, because the ethernet card is not supposed to get an IP; only the bridge is.
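    For completeness, that file then looks roughly like this (a sketch from memory; OMV generates the file, so the exact contents may differ):

    ```yaml
    # /etc/netplan/20-openmediavault-enp0s31f6.yaml (sketch)
    network:
      ethernets:
        enp0s31f6:
          dhcp4: false
          dhcp6: false
    ```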


    To see what kind of systemd files my new yaml file would produce, I ran `netplan generate` and found that, in addition to the configuration for the ethernet card, there are now two new files for my bridge:


    Code
    /run/systemd/network$ ls -la
    total 12
    drwxr-xr-x 2 root root 100 Jul 24 15:37 .
    drwxr-xr-x 20 root root 460 Jul 24 16:23 ..
    -rw-r--r-- 1 root root 30 Jul 24 15:37 10-netplan-br0.netdev
    -rw-r--r-- 1 root root 125 Jul 24 15:37 10-netplan-br0.network
    -rw-r--r-- 1 root root 105 Jul 24 15:37 10-netplan-enp0s31f6.network


    To compare my previous systemd setup with the new one created by netplan, I did `cd /run/systemd/network/ && cat *` (which gives me netplan's config) and `cd /etc/systemd/network/ && cat *` (which gives me my old config) and compared the two in Notepad++ (a Linux wizard would have achieved all that with one long pipe command, but for me Notepad++ is easier). On the left is my old config and on the right is what netplan created for me (just to avoid misunderstandings: the netplan config is not just the result of the above yaml file but also of the other two yaml files that OMV had already created):




    I marked the differences with numbers to be able to refer to them more easily.


    1. This is because of the `link-local: []` in the yaml file. I pasted it from some template. Not sure how important it is.

    2. I have a feeling that this might be what I was missing in my old config.

    3. Not sure if this is important.

    4. This was my way of making sure that any virtual machine connecting to the bridge would get an IP. Netplan doesn't seem to need that. My guess is that it's because of the strange type match below in no. 5.

    5. Netplan identifies the ethernet card by its MAC address rather than by its name, which is probably more reliable. I don't quite understand the `type` entry which netplan created, but somehow it seems to make sure that virtual machines bind to the bridge and get an IP. I understand it even less if `!vlan` means "NOT vlan"... but since it works, I haven't investigated further.

    6. Nevermind. See no. 1 above.
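    For reference, the [Match] section that netplan generates when it identifies a card by MAC typically looks something like the following; I'm quoting the shape from memory, so treat the details (and the placeholder MAC) as an assumption:

    ```ini
    # /run/systemd/network/10-netplan-enp0s31f6.network (excerpt, placeholder MAC)
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff
    Type=!vlan !bond !bridge
    ```

    The negated types apparently prevent this profile from accidentally matching virtual devices (VLANs, bonds, bridges) that share the card's MAC address.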


    So I figured that this new config is worthy of replacing the manual systemd-networkd setup described in the OP above. I deleted (moved away) the files in `/etc/systemd/network/` and ran `netplan try`; when everything worked, I accepted the new config with Enter. Done.


    I no longer have a failing systemd-networkd-wait-online.


    Question: I imitated the OMV file-naming scheme when creating my yaml file for netplan, hoping that OMV would pick it up and perhaps display it in the GUI, but this is not the case. But maybe it will at some point be overwritten by OMV? Should I rename it to something more distinctive?

  • Great !!!

    I will try asap this method.

    Why don't you write a "How to get a working bridge connection in KVM" guide???

    I think it would be useful for many people !!!!

    Best regards.


  • OK, I have tried to follow the correct order in the web GUI:

    1. created an ethernet interface: eth1

    2. created 3 VLANs on eth1: eth1.20, eth1.40, eth1.60

    3. tried to create a br1 bridge with 2 of the VLANs, but I get the following error:


    Code
    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; omv-salt deploy run systemd-networkd 2>&1' with exit code '1':
    hpnas.mamra.duckdns.org:
    ----------
    ID: configure_etc_network_interfaces (file.managed, /etc/network/interfaces)
    Result: True   Comment: File /etc/network/interfaces is in the correct state
    Started: 23:21:50.225188   Duration: 21.404 ms
    ----------
    ID: remove_systemd_networkd_config_files (module.run)
    Result: True   Comment: file.find: []
    Started: 23:21:50.247082   Duration: 1.072 ms
    ----------
    ID: remove_empty_systemd_networkd_config_files (module.run)
    Result: True   Comment: file.find: []
    Started: 23:21:50.248235   Duration: 1.03 ms
    ----------
    ID: remove_netplan_config_files (module.run)
    Result: True   Changes: file.find:
    - /etc/netplan/10-openmediavault-default.yaml
    - /etc/netplan/20-openmediavault-eth1.yaml
    - /etc/netplan/50-openmediavault-eth1.20.yaml
    - /etc/netplan/50-openmediavault-eth1.40.yaml
    - /etc/netplan/50-openmediavault-eth1.60.yaml
    - /etc/netplan/60-openmediavault-br0.yaml
    Started: 23:21:50.249347   Duration: 1.204 ms
    ----------
    ID: configure_netplan_default (file.managed, /etc/netplan/10-openmediavault-default.yaml)
    Result: True   Comment: File updated (new file, mode 0644)
    Started: 23:21:50.250633   Duration: 6.664 ms
    ----------
    ID: configure_netplan_ethernet_eth1 (file.managed, /etc/netplan/20-openmediavault-eth1.yaml)
    Result: True   Comment: File updated (new file, mode 0644)
    Started: 23:21:50.257383   Duration: 20.559 ms
    ----------
    ID: configure_netplan_vlan_eth1.40 (file.managed, /etc/netplan/50-openmediavault-eth1.40.yaml)
    Result: True   Comment: File updated (new file, mode 0644)
    Started: 23:21:50.278030   Duration: 19.113 ms
    ----------
    ID: configure_netplan_vlan_eth1.20 (file.managed, /etc/netplan/50-openmediavault-eth1.20.yaml)
    Result: True   Comment: File updated (new file, mode 0644)
    Started: 23:21:50.297232   Duration: 16.772 ms
    ----------
    ID: configure_netplan_vlan_eth1.60 (file.managed, /etc/netplan/50-openmediavault-eth1.60.yaml)
    Result: True   Comment: File updated (new file, mode 0644)
    Started: 23:21:50.314090   Duration: 16.764 ms
    ----------
    ID: configure_netplan_bridge_br0 (file.managed, /etc/netplan/60-openmediavault-br0.yaml)
    Result: True   Comment: File updated (new file, mode 0644)
    Started: 23:21:50.330944   Duration: 25.858 ms
    ----------
    ID: configure_netplan_bridge_br1 (file.managed, /etc/netplan/60-openmediavault-br1.yaml)
    Result: True   Comment: File updated (new file, mode 0644)
    Started: 23:21:50.356892   Duration: 21.586 ms
    ----------
    ID: apply_netplan_config (cmd.run, netplan apply)
    Result: False   Comment: Command "netplan apply" run
    Changes: pid: 34444, retcode: 78
    stderr: /etc/netplan/60-openmediavault-br1.yaml:3:5: Error in network definition:
            Updated definition 'eth1.20' changes device type
    Started: 23:21:50.379045   Duration: 80.007 ms

    Summary for xxxxxxxxxx
    -------------
    Succeeded: 11 (changed=11)
    Failed: 1
    -------------
    Total states run: 12
    Total run time: 232.033 ms




    Can anyone help?

  • Found a way around this:

    Created the bridge network br0 with only eth0.

    Modified /sys/class/net/br0/bridge/vlan_filtering from 0 to 1

    Created the VLANs directly on the KVM virtual machine (pfSense)

  • Nice, but I can confirm that you can create the bridge in the OMV5 web GUI.

    Yes, the bridge option has now appeared for me too. It must have been a recent update.



    But the problem is that I cannot select my ethernet card. It's not showing up in the list.



    So you had your card in the list of the "Add bridge" dialogue?


    I wonder what would happen if I removed the ethernet card from the OMV web UI and then re-added it. Maybe it would show up then? But I don't dare to try, as it may seriously mess up my system...

  • I have completely reinstalled my system. I have created a bridge. In Cockpit, in the machine settings, the network interface is "Bridge to LAN", connected to the bridge. But I cannot connect to my host, e.g. to my OMV share, or to the internet......


    Please, I need help.

  • The interface is configured as a bridge interface (br0) in OMV.


    I can connect to my OMV host (shares) and my network when I add 2 network interfaces to the virtual machine.


    One is configured as bridge and the second as direct.


    The second one gets an IP over DHCP; the first (the bridge) does not. For the bridge interface I have set an IP address manually, and then I can connect to OMV.


    But this cannot be the solution. What do I need to do so that this works with only one interface?

  • I think I found the solution, but I'm not sure if it works in all of your scenarios.

    So, after installing OMV5 with OMV-Extras, install Docker, Portainer and Cockpit.

    Next, I added a HassOS KVM VM in Cockpit.

    Next, in the OMV5 web UI, go to the network settings; I first deleted my interface: enp4s0.

    Then I added a new bridge, with the name "Bridge", and attached the enp4s0 interface to it.

    Next, I set the same static IP address that it had before on the normal interface. Up to that point I did not save the OMV5 config changes!!!

    Then save the full configuration changes in OMV5; you will lose the connection only for 2-5 pings.

    Finally, I set the following on the existing HassOS VM in its network settings: Direct connection; model type: e1000; br0.

    At the end of the story, I get a valid LAN DHCP IP address on my HassOS VM, and it is reachable by local clients.
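    For reference, the netplan file OMV presumably generates for such a bridge with a static IP should look roughly like this (a sketch; all addresses are placeholders):

    ```yaml
    # /etc/netplan/60-openmediavault-br0.yaml (sketch, placeholder addresses)
    network:
      bridges:
        br0:
          addresses:
            - 192.168.1.10/24
          gateway4: 192.168.1.1
          nameservers:
            addresses: [192.168.1.1]
          interfaces:
            - enp4s0
    ```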

  • I was also struggling with this exact problem and this solved it for me. Many thanks! :)
