KVM memory balloon functionality not running on OpenMediaVault when running in a VM

    • OMV 4.x


    • Hi,

      I'm running OpenMediaVault under KVM using Proxmox with a PCIe passthrough SATA card.
      This seems to work; I can't yet tell about the long-term stability, but it looks all right.
      However, Proxmox is not able to see or control the used memory of the guest OS, as the balloon functionality is not running (pve.proxmox.com/pve-docs/chapter-qm.html#qm_memory).
      The balloon driver has, however, been included in the Linux kernel for quite some time now.
      Any idea how to fix this? (A quick way to check the driver from inside the guest is sketched after this post.)

      Regards,

      Poke43
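      A minimal sketch for checking from inside the guest whether the balloon driver is present at all, assuming a standard Debian-based OMV guest; the module and sysfs names used below (virtio_balloon, /sys/bus/virtio/drivers) are the usual Linux ones, not something taken from this thread:

      ```python
      #!/usr/bin/env python3
      """Check whether the virtio balloon driver is present in a KVM guest."""

      from pathlib import Path


      def balloon_module_loaded() -> bool:
          # /proc/modules lists currently loaded kernel modules; a driver built
          # into the kernel (CONFIG_VIRTIO_BALLOON=y) will not appear here.
          return any(line.split()[0] == "virtio_balloon"
                     for line in Path("/proc/modules").read_text().splitlines())


      def balloon_driver_bound() -> bool:
          # When QEMU exposes a balloon device and the guest driver binds to it,
          # an entry shows up under the virtio bus in sysfs.
          drivers = Path("/sys/bus/virtio/drivers")
          return drivers.is_dir() and (drivers / "virtio_balloon").exists()


      if __name__ == "__main__":
          print("virtio_balloon module loaded:", balloon_module_loaded())
          print("virtio_balloon driver bound: ", balloon_driver_bound())
      ```

      If neither check passes, the guest kernel configuration would be the first thing to look at; on a stock Debian kernel the driver is normally available and loads automatically once QEMU exposes the device.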
    • I think qemu-guest-agent is in the kernel, so no install would be needed. I have always enabled it so the VM can shut down from the Proxmox GUI or system. I am sure it could be involved with other things too.

      To see if it is working once enabled with the latest version of Proxmox, look at the VM summary page; you should see the IP address. (An in-guest check is also sketched after this post.)
      If you make it idiot proof, somebody will build a better idiot.
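      A small sketch for checking the same thing from inside the guest instead of the summary page; it assumes a systemd-based guest and the default agent channel name (org.qemu.guest_agent.0), neither of which is confirmed in this thread:

      ```python
      #!/usr/bin/env python3
      """Check that qemu-guest-agent is running and its virtio-serial channel exists."""

      import subprocess
      from pathlib import Path

      # Channel device created when the "QEMU Guest Agent" option is enabled for
      # the VM; this is the usual default name, assumed here.
      AGENT_CHANNEL = Path("/dev/virtio-ports/org.qemu.guest_agent.0")


      def agent_service_active() -> bool:
          # `systemctl is-active` exits with 0 only when the unit is active.
          result = subprocess.run(
              ["systemctl", "is-active", "--quiet", "qemu-guest-agent"])
          return result.returncode == 0


      if __name__ == "__main__":
          print("guest agent channel present:", AGENT_CHANNEL.exists())
          print("qemu-guest-agent running:   ", agent_service_active())
      ```

      The channel device only exists when the guest agent option is enabled for the VM on the Proxmox side, so both checks failing can simply mean the option is still switched off on the host.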
    • donh wrote:

      I think qemu-guest-agent is in the kernel so no install would be needed.
      The drivers are in the kernel but the agent package installs the agent service which is necessary for quite a few things:
      • querying and setting guest system time
      • performing guest filesystem sync operations
      • initiating guest shutdown or suspend to RAM
      • accessing guest files
      • freezing/thawing guest filesystem operations
      • others
      That said, I'm not sure if it is needed for balloon features. I have a production Proxmox server, but I don't use ballooning. (A sketch for watching the balloon from inside the guest follows after this post.)
      omv 4.1.11 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.11
      omv-extras.org plugins source code and issue tracker - github

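      For anyone who does want to test ballooning itself, a rough sketch for watching it from inside the guest while changing the memory/balloon target in the Proxmox GUI; whether inflation shows up as a lower MemTotal or just as less available memory depends on how the driver negotiates features, so this simply prints both:

      ```python
      #!/usr/bin/env python3
      """Poll guest memory accounting so balloon inflation/deflation becomes visible."""

      import time


      def meminfo(fields=("MemTotal", "MemAvailable")):
          # /proc/meminfo entries look like "MemTotal:  4030456 kB".
          values = {}
          with open("/proc/meminfo") as fh:
              for line in fh:
                  key, rest = line.split(":", 1)
                  if key in fields:
                      values[key] = rest.strip()
          return values


      if __name__ == "__main__":
          # Adjust the balloon target on the host and watch the numbers move.
          while True:
              print(meminfo())
              time.sleep(5)
      ```

      As far as I know, the guest agent is not required for this part; ballooning is handled by the virtio_balloon driver itself.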