KVM Virtual Machines and bridged networking

  • I have been struggling to achieve what I thought was a very common way of connecting virtual machines to your LAN: bridged networking. It turned out to be extremely laborious, mostly because none of the many online tutorials on the matter provided anything close to a solution. So I'm posting my solution (?) here, both to share it and to validate whether it actually is a good one.


    So I'm starting from a freshly installed OMV 5.5.3-1 (with the Proxmox kernel) and a KVM image that has previously been running on another machine. I was able to import the image into Cockpit, and the VM has a network connection when I choose "Direct attachment".


    The problem is that "Direct attachment" means the VM can access the LAN and the LAN can access the VM, but the VM cannot access the host. So bridge mode is what I want, which translates to "Bridge to LAN" in Cockpit. What Cockpit doesn't tell you is that if you choose "Bridge to LAN" as your interface type, your source must be an actual bridge, not an Ethernet card or anything else. And, even worse, Cockpit (at least the version that OMV installs) doesn't seem to provide any way of actually creating a bridge. Neither does OMV.
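
    For reference, a quick way to check whether the host already has a bridge that Cockpit could offer as a source (an illustrative command, not part of my original notes):

    Bash
    # list only bridge-type links; an empty result means there is nothing for Cockpit to select
    ip -br link show type bridge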


    So I'm back on the command line, but in order to find the right way of creating and managing a bridge, I need to know how OMV manages network devices. Is it via /etc/network/interfaces? It looks like it, because changes in that file (more precisely, in /etc/network/interfaces.d) are picked up upon reboot. But that is also strange, because OMV/Debian 10 is supposed to have switched to netplan and systemd.
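
    In hindsight, a quick way to see whether systemd-networkd is actually the one managing an interface is networkctl (illustrative commands, not something I ran at the time):

    Bash
    # list all links and whether systemd-networkd manages/has configured them
    networkctl list
    # show details for one interface, including which .network file (if any) it matched
    networkctl status enp0s31f6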


    So I decided to follow Major Hayden's famous tutorial on how to create a bridge for virtual machines using systemd-networkd (well, almost: I used DHCP instead of static IPs). While it provided me with a br0 to select as a source for my VM in Cockpit, the VM never received an IP from the router (despite ip a showing that it actually is connected to the bridge, which itself did receive an IP). At some point I even managed to have the bridge get its own IP and the host get another. But the VM never got any.
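
    For reference, that setup boils down to three small files under /etc/systemd/network/. This is my rough reconstruction with DHCP instead of his static addresses; the file names are my own choice and enp0s31f6 is my NIC:

    Code
    # /etc/systemd/network/br0.netdev - create the bridge device itself
    [NetDev]
    Name=br0
    Kind=bridge

    # /etc/systemd/network/br0.network - let the bridge get its address via DHCP
    [Match]
    Name=br0
    [Network]
    DHCP=ipv4

    # /etc/systemd/network/uplink.network - enslave the physical NIC to the bridge
    [Match]
    Name=enp0s31f6
    [Network]
    Bridge=br0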


    So I figured that, for whatever reason, virtual machines are not by default allowed to properly join the club, so I dug deeper into how systemd-networkd works and concluded that I need to somehow tell systemd that any VM showing up should get its IP via DHCP. I found instructions on telling systemd that any Ethernet card that shows up should be referred to the DHCP server (use en* as a selector under [Match]), but I wasn't quite sure how to achieve the same for virtual machines. Since vnet0 kept popping up in different places while the VM was running, I tried vnet* and... tadaa, it worked: my VM finally got an IP and was reachable from the LAN, while it could also reach the host. 8)


    So, more specifically, in addition to Major Hayden's instructions, I also created a mydhcp.network file (in /etc/systemd/network/) that looks like this:

    Code: mydhcp.network
    [Match]
    Name=vnet*
    [Network]
    DHCP=ipv4

    This is not the end of the story yet, but let me pause here for a second and ask: am I the first one trying to run a VM in bridged mode? I don't think so. But why, then, am I running into all these problems that no one else seems to run into? What am I doing wrong? Please let me know.


    And please let me know whether there is a better way of doing this, because even though networking now works flawlessly (as far as I can tell), networkctl still thinks that things are not working as they should (see the vnet0 entry, IDX 4):

    Bash
    # networkctl
    IDX LINK        TYPE     OPERATIONAL SETUP
      1 lo          loopback carrier     unmanaged
      2 enp0s31f6   ether    degraded    configured
      3 br0         bridge   routable    configured
      4 vnet0       ether    degraded    configuring
      5 docker0     bridge   routable    unmanaged
      7 vethdd5f630 ether    degraded    unmanaged
    6 links listed.


    So if things are working fine, why don't I just ignore that vnet0 is degraded and stuck in configuring? Well, because "someone" does care about vnet0 being stuck in configuring (or rather: being reported as being stuck in configuring), and that "someone" is systemd-networkd-wait-online, which in turn leads to Cockpit not starting properly.



    Apparently this is a bug in systemd (see https://github.com/systemd/systemd/issues/6441) but, again, if this is so (and if it has existed for three years), why are apparently so few people affected by it? What am I doing differently?


    And what is the best way of handling this bug? This answer suggests masking the service, but I'm not sure I want to do that, since the main reason I migrated to OMV is that I do not want to mess too much with the OS and would rather let OMV take care of these things...
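
    Just for completeness, masking (which I'd rather avoid) would be a one-liner:

    Bash
    # hide the unit from systemd entirely so nothing waits on it at boot
    systemctl mask systemd-networkd-wait-online.service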


    Edit: I solved this by doing my config in netplan instead of systemd-networkd. See this post below.

  • I have been trying to set up a bridge connection in Cockpit for two weeks without finding a solution.

    Is your method working?


  • Thanks for your feedback.

    So, it's really strange that nobody on this forum seems to use VMs with a bridged connection.

    Let's see, sooner or later a solution will be found ;)


  • OK, I finally got this to work properly by moving up one level from `systemd` to `netplan`.


    I found this:


    Code
    /etc/netplan$ ll
    total 16K
    drwxr-xr-x 2 root root 4.0K Jul 18 02:54 ./
    drwxrwxr-x 108 root root 4.0K Jul 21 17:00 ../
    -rw-r--r-- 1 root root 43 Jul 18 02:54 10-openmediavault-default.yaml
    -rw-r--r-- 1 root root 146 Jul 18 02:54 20-openmediavault-enp0s31f6.yaml

    So whatever you put in the OMV GUI under "Network" comes out here as a YAML file. Following that naming scheme, I created my own file for the bridge. Here it is:


    Code: /etc/netplan/30-openmediavault-br0.yaml
    network:
      bridges:
        br0:
          dhcp4: yes
          dhcp6: no
          link-local: []
          interfaces:
            - enp0s31f6


    I also made sure that in `/etc/netplan/20-openmediavault-enp0s31f6.yaml` all DHCP options are set to false, because the Ethernet card is not supposed to get an IP, only the bridge.
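
    For reference, the relevant part of that file then looks roughly like this (OMV generates the actual file, so yours may contain a few more keys):

    Code
    network:
      version: 2
      ethernets:
        enp0s31f6:
          dhcp4: false
          dhcp6: false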


    To see what kind of systemd files my new YAML file would produce, I ran `netplan generate` and found that, in addition to the configuration for the Ethernet card, there are now two new files for my bridge:


    Code
    /run/systemd/network$ ls -la
    total 12
    drwxr-xr-x 2 root root 100 Jul 24 15:37 .
    drwxr-xr-x 20 root root 460 Jul 24 16:23 ..
    -rw-r--r-- 1 root root 30 Jul 24 15:37 10-netplan-br0.netdev
    -rw-r--r-- 1 root root 125 Jul 24 15:37 10-netplan-br0.network
    -rw-r--r-- 1 root root 105 Jul 24 15:37 10-netplan-enp0s31f6.network


    To compare my previous systemd setup with this new one created by netplan, I did `cd /run/systemd/network/ && cat *` (which gives me netplan's config) and `cd /etc/systemd/network/ && cat *` (which gives me my old config) and compared the two in Notepad++ (a Linux wizard would have achieved all that with a long pipe command, but for me Notepad++ is easier). On the left is my old config and on the right is what netplan created for me (just to avoid misunderstandings: the netplan config is not just the result of the above YAML file but also of the other two YAML files that OMV already created).
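
    (For anyone who prefers the pipe route, something like this would do the same comparison in one go; just an illustration, not what I actually ran:)

    Bash
    # old hand-written config vs. what netplan generated
    diff <(cat /etc/systemd/network/*) <(cat /run/systemd/network/*)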




    I marked the differences with numbers to be able to refer to them more easily.


    1. This is because of the `link-local: []` in the YAML file. I pasted it from some template. Not sure how important it is.

    2. I have a feeling that this might be what I was missing in my old config.

    3. Not sure if this is important.

    4. This was my way of making sure that any virtual machine connecting to the bridge would get an IP. Netplan doesn't seem to need that. My guess is that it's because of the strange type line below in no. 5.

    5. Netplan identifies the Ethernet card by its MAC address rather than its name, which is probably more reliable. I don't quite understand the `type` entry which netplan created, but somehow it seems to make sure that virtual machines bind to the bridge and get an IP. I understand it even less if `!vlan` means "NOT vlan"... but since it works, I haven't investigated further.

    6. Never mind. See no. 1 above.


    So I figured that this new config is worthy of replacing my manual systemd-networkd setup described in the OP above. I deleted (moved) the files in `/etc/systemd/network/` and ran `netplan try`; when everything worked, I accepted the new config with Enter. Done.
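
    In commands, that step looked roughly like this (the backup directory is just my own choice of name):

    Bash
    # keep the old hand-written config around, just in case
    mkdir -p /root/systemd-network-backup
    mv /etc/systemd/network/*.netdev /etc/systemd/network/*.network /root/systemd-network-backup/
    # apply the netplan config, with automatic rollback unless confirmed
    netplan try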


    I no longer have a failing systemd-networkd-wait-online.


    Question: I imitated the OMV file-naming scheme when creating my YAML file for netplan, hoping that OMV would pick it up and perhaps display it in the GUI, but this is not the case. But maybe it will at some point be overwritten by OMV? Should I rename it to something more original?

  • Great!!!

    I will try this method asap.

    Why don't you write a "How-To: Get a working bridge connection in KVM"???

    I think it would be useful for many people!!!!

    Best regards.

