Posts by tophee

    I installed the new plugin and, since my VMs were showing up nicely in it, I went ahead and uninstalled Cockpit, as I was unable to access it anyway: I don't know what happened, but I kept getting a NET::ERR_CERT_INVALID error (Safari would exit/crash immediately, while Edge at least showed me the error), and neither a reboot nor an update helped.


    What I didn't expect, though, was that uninstalling Cockpit would also remove KVM, rendering the KVM plugin non-functional... So I reinstalled the new plugin and my machines reappeared in the list, but I'm not able to start them. Clicking on State -> Start briefly flashes a "Loading ..." modal and that's it. Nothing happens; the VM will not start.
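
    In case someone wants to dig deeper than the UI allows, here is a rough sketch of how one could check from the command line whether libvirt itself can start the machine (the VM name myvm is just a placeholder):

    Bash
    # list all defined VMs and their current state
    virsh list --all
    # try to start one directly; any libvirt error is printed here instead of being swallowed by the UI
    virsh start myvm
    # check the libvirt daemon log for clues
    journalctl -u libvirtd --since "10 minutes ago"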


    Update: Since the VMs were set to auto-start, I got them to start by restarting the server. Since then, I have also been able to start and stop them via the KVM plugin tab.


    One thing I noticed, though: the tab doesn't refresh/update automatically when a machine changes state. It would be great if it could do that, at least when the change of state was initiated from within that tab.

    Since I didn't know where to start on the software side, I tried adding some RAM (before: 8 GB, now: 24 GB), and it looks like that solved the problem of delays. So, after all, the problem turns out to have been rather simple and precisely as I guessed above: what was taking so much time on first access was moving stuff from wherever it was into RAM. This would have been evident to me if I had seen lots of swap usage, but I didn't see that. Then again, maybe I wasn't looking properly?
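
    For anyone trying to rule swap in or out: a rough sketch of what I could have checked, using nothing OMV-specific, just the standard tools:

    Bash
    # how much swap exists and how much of it is in use
    free -h
    swapon --show
    # watch the si/so columns (swap-in/swap-out per interval); persistently non-zero values mean active swapping
    vmstat 5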


    Anyway, what probably should have alerted me is the orange part of the CPU graph above: it represents the time the CPU spends waiting for some I/O operation to complete. Here is the same diagram before and after adding RAM (at about midnight):



    See how the Wait-IO almost disappears after I added RAM? So, for me this really was an interesting lesson in how RAM affects speed. I have no idea to what extent this could have been diagnosed from the information I provided in the OP, but since no one provided any diagnosis, I'm guessing that this is a complex issue, so if you are seeing the same symptoms, your solution may still be different. But I dare say: if you are seeing a lot of Wait-IO in your CPU usage, adding RAM is worth a try.
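
    If you want to see the same thing without the graph, the iowait share is also visible on the command line; a minimal sketch (iostat comes with the sysstat package on Debian):

    Bash
    # the "wa" value in top's CPU line is the Wait-IO percentage
    top -bn1 | grep '%Cpu'
    # %iowait per interval, plus per-device utilisation
    iostat -x 5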


    You probably don't need to triple your RAM, like I did, but it's interesting to see that if you do, Debian will use it for something:



    Not sure how exactly this works, but it's fascinating and a big plus for Linux. Some months ago, I added RAM to my Windows desktop PC and I did *not* see anything like that. I rarely see Windows using more than 50% of the available RAM. So rarely that I was wondering for some time whether there was something wrong with the RAM or whether some setting was preventing Windows from using what it has... Well, to be fair: in order to make this a proper comparison I should run a dozen docker containers and two virtual machines on my desktop. Chances are that memory usage would increase.
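
    As far as I understand it (so take this as my assumption, not gospel), most of that "used" RAM is the page cache: Linux keeps recently read files in otherwise idle memory and hands it back the moment a program needs it. The split is visible with the standard tools:

    Bash
    # "buff/cache" is reclaimable page cache; "available" is what programs can still get without swapping
    free -h
    grep -E 'MemAvailable|^Cached' /proc/meminfo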


    But anyway: I find it interesting how Debian only gradually increased RAM usage over the course of a couple of hours. I like to imagine it like an animal set free after a long time in captivity: it first has to explore the new space before realizing it is free. Is it really true? Can I go this far? And still further?

    I am running OMV 5 with 12 docker containers and two virtual machines (KVM). None of these containers or instances is particularly busy (most of them are not doing anything, actually), so my CPU usage is below 20% most of the time:



    But whenever I try to access the web page provided by one of the containers, a VM or even OMV's own UI, there is a delay of between 2 and perhaps 10 seconds until the page loads. During that time, the browser shows "Waiting for ...":



    This delay only happens the first time I access the URL; after that it is fast. I'm not sure how long I have to wait until it gets slow again, but it's less than 1-2 hours.


    Could someone give me a hint where/how I can start troubleshooting this? How can I narrow down what is causing this delay? It could have something to do with the network configuration. Since VMs, docker containers and the host itself are showing the same symptoms, I'm guessing it must be the host's config, but how do I check this?
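
    One way I can think of to narrow it down is to time the individual phases of a request with curl, against the host, a container and a VM. If the time is lost before the connection is established, it points at DNS or the network; if it is lost waiting for the first byte, the service itself is slow. The IP and port below are just an example, of course:

    Bash
    curl -o /dev/null -s -w 'dns: %{time_namelookup}s  connect: %{time_connect}s  firstbyte: %{time_starttransfer}s  total: %{time_total}s\n' http://192.168.1.10:8080/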


    Or perhaps I am running out of RAM?



    I read somewhere that almost full RAM is not a bad thing because it means that precious resource is well utilized. But how do I know whether it's getting too tight?


    Or is it related to the more severe issue I'm (sometimes) seeing when trying to log in to Cockpit?:



    This TLS handshake problem seems to be specific to Cockpit and not specific to me, as I've seen several reports about it...
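
    In the meantime, one can at least inspect the certificate Cockpit is serving to see why the browser rejects it; a sketch (Cockpit listens on port 9090 by default; the ws-certs.d path is where I believe it keeps its self-signed certificate, so treat that as an assumption):

    Bash
    # show the dates, subject and issuer of the certificate the server presents
    echo | openssl s_client -connect localhost:9090 2>/dev/null | openssl x509 -noout -dates -subject -issuer
    # the self-signed certificate Cockpit generated (path may differ on your install)
    ls -l /etc/cockpit/ws-certs.d/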


    So, where to start?

    My OMV is acting a bit strangely. Here is what I'm seeing in the syslog:



    Any hints what might be going on and/or how to fix it?

    Nice, but I can confirm that you can create the bridge in the OMV 5 web GUI.

    Yes, the bridge option has now appeared for me too. It must have been a recent update.



    But the problem is that I cannot select my ethernet card. It's not showing in the list.



    So you had your card in the list of the "Add bridge" dialogue?


    I wonder what would happen if I removed the ethernet card from the OMV web UI and then re-added it. Maybe it would show up then? But I don't dare to try it, as it may seriously mess up my system...

    OK, I finally got this to work properly by moving up one level from `systemd` to `netplan`.


    I found this:


    Code
    /etc/netplan$ ll
    total 16K
    drwxr-xr-x 2 root root 4.0K Jul 18 02:54 ./
    drwxrwxr-x 108 root root 4.0K Jul 21 17:00 ../
    -rw-r--r-- 1 root root 43 Jul 18 02:54 10-openmediavault-default.yaml
    -rw-r--r-- 1 root root 146 Jul 18 02:54 20-openmediavault-enp0s31f6.yaml

    So whatever you put in the OMV GUI under "network" comes out here as a yaml file? Here it is:


    Code: /etc/netplan/30-openmediavault-br0.yaml
    network:
      bridges:
        br0:
          dhcp4: yes
          dhcp6: no
          link-local: []
          interfaces:
            - enp0s31f6


    I also made sure that in `/etc/netplan/20-openmediavault-enp0s31f6.yaml` all dhcp options are set to false, because the ethernet card is not supposed to get an IP, only the bridge.
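
    For reference, that ethernet file ends up looking roughly like this after my edits (a sketch from memory, not a verbatim copy of what OMV generates):

    Code: /etc/netplan/20-openmediavault-enp0s31f6.yaml
    network:
      ethernets:
        enp0s31f6:
          dhcp4: false
          dhcp6: false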


    To see what kind of systemd files my new yaml file will produce, I ran `netplan generate` and found that, in addition to the configuration for the ethernet card, there are now two new files for my bridge:


    Code
    /run/systemd/network$ ls -la
    total 12
    drwxr-xr-x 2 root root 100 Jul 24 15:37 .
    drwxr-xr-x 20 root root 460 Jul 24 16:23 ..
    -rw-r--r-- 1 root root 30 Jul 24 15:37 10-netplan-br0.netdev
    -rw-r--r-- 1 root root 125 Jul 24 15:37 10-netplan-br0.network
    -rw-r--r-- 1 root root 105 Jul 24 15:37 10-netplan-enp0s31f6.network


    To compare my previous systemd setup with this new one created by netplan, I did `cd /run/systemd/network/ && cat *` (which gives me netplan's config) and `cd /etc/systemd/network/ && cat *` (which gives me my old config) and compared the two in Notepad++ (a Linux wizard would have achieved all that with a long pipe command, but for me Notepad++ is easier). On the left is my old config and on the right is what netplan created for me (just to avoid misunderstandings: the netplan config is not just the result of the above yaml file but also of the other two yaml files that OMV already created):
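
    Just for the record, the "long pipe command" is actually quite short; a diff of the two directories would have done the same job (assuming bash, for the process substitution):

    Bash
    diff <(cat /etc/systemd/network/*) <(cat /run/systemd/network/*)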




    I marked the differences with numbers to be able to refer to them more easily.


    1. This is because of the `link-local: []` in the yaml file. I pasted it from some template. Not sure how important it is.

    2. I have a feeling that this might be what I was missing in my old config

    3. Not sure if this is important.

    4. This was my way of making sure that any virtual machine connecting to the bridge would get an IP. Netplan doesn't seem to need that. My guess is that it's because of the strange type below in no. 5.

    5. Netplan identifies the ethernet card by its MAC address rather than its name, which is probably more reliable. I don't quite understand the `type` which netplan created, but somehow it seems to make sure that virtual machines bind to the bridge and get an IP. And I understand it even less if `!vlan` means "NOT vlan", ... but since it works, I haven't investigated further.

    6. Never mind. See no. 1 above.


    So I figured that this new config is worthy of replacing my manual systemd-networkd setup described in the OP above. I deleted (moved) the files in `/etc/systemd/network/` and ran `netplan try`; when everything worked, I accepted the new config with Enter. Done.
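
    The whole switchover boils down to a few commands; a sketch of what I did (the backup directory name is just something I made up on the spot):

    Bash
    mkdir -p /root/systemd-network-backup
    mv /etc/systemd/network/*.netdev /etc/systemd/network/*.network /root/systemd-network-backup/
    netplan generate   # writes the generated units to /run/systemd/network/
    netplan try        # applies the config and rolls back automatically unless confirmed with Enter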


    I no longer have any failing systemd-networkd-wait-online.


    Question: I imitated the OMV file-naming scheme when creating my yaml file for netplan, hoping that OMV would pick it up and perhaps display it in the GUI, but this is not the case. But maybe it will at some point be overwritten by OMV? Should I rename it to something more distinctive?

    Yes, I also started with that docker image, but it doesn't include the add-ons. That's not a problem for experienced HASS users, but for someone getting started, I think it's better to run it either as a native install (e.g. on a Raspberry Pi) or in a VM. Otherwise you will have difficulty following many of the available tutorials, which assume that version of HASS (including your own, if I'm not mistaken; I have watched many of them and take the opportunity to thank you here!).

    I would like to run Home Assistant (HASS) in a virtual machine on OpenMediaVault 5. Up until OMV 4, this was easy because OMV (and the underlying Debian) supported VirtualBox and VB apparently didn't have any problems booting UEFI images. Because OMV 5 (and the underlying Debian 10) no longer support VirtualBox, OMV now uses KVM (libvirt) for virtual machines (and it supports Cockpit to manage them). Unfortunately, this entails that it is no longer trivial to boot UEFI images on OMV/Debian 10, and - you guessed it - the official Home Assistant image for KVM (QCOW2) needs UEFI and trying to import and boot it in Cockpit will fail. I was unable to find any button or command in Cockpit that allows me to set the boot mode to UEFI.


    I somehow figured out how to do it and posted the answer here: https://superuser.com/a/1571327/148208
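
    In short (and hedged, since the full steps are in the linked answer): the idea is to install the OVMF firmware package and point the libvirt domain at it, for example by editing the <os> block of the VM. The firmware path below is the usual one on Debian 10, but it may differ on other setups:

    Code
    apt install ovmf
    virsh edit homeassistant    # VM name is just an example
    # then, inside <os>, reference the UEFI firmware:
    #   <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
    #   (an accompanying <nvram> element may also be needed; see the linked answer)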

    I have been struggling to achieve what I thought was a very common way of connecting your virtual machines to your LAN: bridged networking. It turned out to be extremely laborious, mostly because none of the many online tutorials on the matter provided anything close to the solution. So I'm posting my solution (?) here to share it, but also to validate whether it actually is a good solution.


    So I'm starting on a freshly installed OMV 5.5.3-1 (with proxmox kernel) and a KVM image that has previously been running on another machine. I was able to import the image into Cockpit, and the VM has a network connection when I choose "Direct attachment".


    The problem is that "Direct attachment" implies that the VM can access the LAN and the LAN can access the VM, but the VM cannot access the host. So bridge mode is what I want, which translates to "Bridge to LAN" in Cockpit. What Cockpit doesn't tell you is that if you choose "Bridge to LAN" as your interface type, your source must be an actual bridge, not an ethernet card or anything else. And, even worse, Cockpit (at least the version that OMV installs) doesn't seem to provide any way of actually creating a bridge. Neither does OMV.


    So I'm back to the command line, but in order to find the right way of creating and managing a bridge, I need to know how OMV manages network devices. Is it via /etc/network/interfaces? It looks like it, because changes in that file (more correctly, in /etc/network/interfaces.d) are picked up upon reboot. But that is also strange, because OMV/Debian 10 is supposed to have switched to netplan and systemd.


    So I decided to follow Major Hayden's famous tutorial on how to create a bridge for virtual machines using systemd-networkd (well, almost: I used DHCP instead of static IPs), and while it provided me with a br0 to select as a source for my VM in Cockpit, the VM never received an IP from the router (despite `ip a` showing that it actually is connected to the bridge, which itself did receive an IP). At some point I even managed to have the bridge get its own IP and the host get another. But the VM never got any.
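
    For context, that setup (with my DHCP variation) boils down to three small files in /etc/systemd/network/; roughly like this, with the file names being my own choice:

    Code
    # br0.netdev
    [NetDev]
    Name=br0
    Kind=bridge

    # br0.network
    [Match]
    Name=br0
    [Network]
    DHCP=ipv4

    # uplink.network
    [Match]
    Name=enp0s31f6
    [Network]
    Bridge=br0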


    So I figured that, for whatever reason, virtual machines are not by default allowed to properly join the club, so I dug deeper into how systemd-networkd works and figured that I need to somehow tell systemd that any VM showing up should get its IP via DHCP. I found instructions on telling systemd that any ethernet card that shows up should be referred to the DHCP server (use en* as a selector under [Match]), but I wasn't quite sure how to achieve the same for virtual machines. Since vnet0 kept popping up in different places while the VM was running, I tried vnet* and... tadaaa, it worked: my VM finally got an IP and was reachable from the LAN while it could also reach the host. 8)


    So, more specifically, in addition to Major Hayden's instructions, I also created a mydhcp.network (in /etc/systemd/network/) that looks like this:

    Code: mydhcp.network
    [Match]
    Name=vnet*
    [Network]
    DHCP=ipv4

    This is not the end of the story yet, but let me pause here for a second and ask: am I the first one trying to run a VM in bridged mode? I don't think so. But why am I then running into all these problems that no one else seems to run into? What am I doing wrong? Please let me know.


    And please let me know whether there is a better way of doing this, because even though networking now works flawlessly (as far as I can tell), networkctl still thinks that things are not working as they should (see line 6/IDX 4):

    Bash
    # networkctl
    IDX LINK        TYPE     OPERATIONAL SETUP
      1 lo          loopback carrier     unmanaged
      2 enp0s31f6   ether    degraded    configured
      3 br0         bridge   routable    configured
      4 vnet0       ether    degraded    configuring
      5 docker0     bridge   routable    unmanaged
      7 vethdd5f630 ether    degraded    unmanaged

    6 links listed.


    So if things are working fine, why don't I just ignore that vnet0 is degraded and stuck in configuring? Well, because "someone" does care about vnet0 being stuck in configuring (or rather: being reported as being stuck in configuring), and that "someone" is systemd-networkd-wait-online, which in turn leads to Cockpit not starting properly:



    Apparently this is a bug in systemd (see https://github.com/systemd/systemd/issues/6441) but, again, if this is so (and if it has existed for three years), why are apparently so few people affected by it? What am I doing differently?


    And what is the best way of handling this bug? This answer suggests masking the service, but I'm not sure I want to do that, since the main reason I migrated to OMV is that I do not want to mess too much with the OS and would rather let OMV take care of these things...
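
    A gentler option than masking might be to mark the vnet interfaces as not required for the "online" state, e.g. by adding a [Link] section to the mydhcp.network file from above. A sketch I have not actually tested, since I went another route in the end:

    Code: mydhcp.network
    [Match]
    Name=vnet*

    [Link]
    RequiredForOnline=no

    [Network]
    DHCP=ipv4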


    Edit: I solved this by doing my config in netplan instead of systemd. See this post below.

    OMV overrides /etc/network/interfaces with a warning, see https://github.com/openmediava…etworkd/10cleanup.sls#L28. If a user still wants to use this outdated feature, then custom configurations must be located in /etc/network/interfaces.d.

    I'm not sure I quite understand how this works. The part I understand is that changes should go to /etc/network/interfaces.d. But what I don't understand is why those changes would even be picked up by OMV when it is no longer using this outdated feature. Or is it possible to mix /etc/network/interfaces with netplan and systemd, i.e. to have some settings specified in the old system and some in the new?

    I just installed the proxmox kernel from OMV-Extras and rebooted.

    Then my ethernet interface was moved to NetworkManager, so I was able to use the Cockpit network interface to configure the bridge, and it works.

    Thanks.

    How did you create the bridge?


    I believe I'm using the proxmox kernel (Debian GNU/Linux, with Linux 5.4.44-2-pve) and I can do stuff in Cockpit, but I cannot create the bridge there, and I'm guessing that you didn't either. Did you create it on the command line or somewhere in OMV?

    I'm stuck with the exact same issue. Did you in the meantime manage to solve it?


    From what I have been able to figure out so far, the reason for this behaviour is that "Direct attachment" apparently defaults to macvtap in private mode (which means that the VM can connect to outside machines but not to the host). The alternative would be isolated mode, where the VM can connect to the host and other VMs but not to outside machines, but I wouldn't know how to achieve that, and it's not what we want anyway.
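
    For anyone digging into the XML: the difference shows up in the VM's <interface> definition (visible via virsh edit). A sketch of the two variants, where enp0s31f6 and br0 are just the device names on my box:

    Code
    <!-- "Direct attachment" (macvtap); the mode can be vepa, bridge, private or passthrough -->
    <interface type='direct'>
      <source dev='enp0s31f6' mode='private'/>
      <model type='virtio'/>
    </interface>

    <!-- "Bridge to LAN": requires an existing bridge such as br0 -->
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>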


    So what's needed is a bridged network, which is called "Bridge to LAN" in Cockpit, but I cannot get it to work because I have no bridge to select as a source (which you're supposed to do according to this excellent tutorial) and I see no way of creating a bridge in Cockpit. Has anyone managed to create a bridged connection?