High load average caused by multiple instances of omv-engined

  • Hi everyone, I've been experiencing a recurring problem on my OMV server and I'm hoping for any help or insights on how to address it.


    My setup includes a couple of mergerfs filesystems shared via NFS, holding mostly video files. The physical drives are connected via an HBA card and passed through to OMV. The OMV version is 6.4.0-3 (Shaitan), running in a Debian container. Everything is up to date, and the problem persists after reboots and after restarting services individually.
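    In case the layout matters, this is roughly how I verify the pools and exports (just standard util-linux / nfs-kernel-server commands, nothing OMV-specific):

    Code
    # list the mergerfs (FUSE) mounts
    findmnt -t fuse.mergerfs

    # show what is currently exported over NFS
    exportfs -v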


    For the most part everything has been stable, but recently I've been having random periods of huge spikes in server load and IO wait. These spikes also make the NFS shares unresponsive, along with the VMs that have these shares mounted, and it normally ends with me having to force-kill the process and/or reboot the machine.

    During troubleshooting, I noticed a large number of blkid -o full and omv-engined commands appearing in ps aux. iotop sometimes shows a significant IO percentage for mergerfs, although weirdly not for the most recent occurrence.
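    For reference, this is roughly what I capture during a spike (standard procps tools; the bracketed first letter just keeps grep from matching itself):

    Code
    # how many omv-engined and blkid processes are running right now
    ps -ef | grep -c '[o]mv-engined'
    ps -ef | grep -c '[b]lkid'

    # load average and IO wait (the 'wa' column)
    uptime
    vmstat 1 5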

    I noticed multiple instances of the omv-engined daemon running simultaneously, which seemed unusual.

    For example, from ps -ef:

    Code
    root     1046841  614244  0 18:32 ?        00:00:00 omv-engined
    root     1046847 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046850 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046855 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046859 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046862 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046865 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046867 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046871 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046875 1046841  0 18:32 ?        00:00:00 omv-engined

    and my dashboard (screenshot omitted here).

  • Official post

    Quoting the original post:

    "During troubleshooting, I noticed a large number of blkid -o full and omv-engined commands appearing in ps aux. iotop sometimes shows a significant IO percentage for mergerfs, although weirdly not for the most recent occurrence.

    I noticed multiple instances of the omv-engined daemon running simultaneously, which seemed unusual."

    omv-engined is a forking daemon, so every request is handled by a child process. Additionally, several requests are split into parallel tasks. In your case I think an RPC is fetching file system information from ~8 disks in parallel.
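    You can see that parent/child relationship directly with the process tree, for example:

    Code
    # parent daemon plus its forked worker children
    ps --forest -o pid,ppid,etime,cmd -C omv-engined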

  • Thank you for that clarification. I should explain that the above is just an excerpt; there are many more instances of omv-engined running than posted above.


    For full context, I have 6 physical disks mounted, 2 more installed but unmounted, and 2 virtual mergerfs pools.
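    For completeness, this is how I enumerate them (lsblk from util-linux; device names are whatever the HBA passthrough assigns):

    Code
    # physical disks, partitions, filesystems and mount points
    lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT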

    I guess this may still be normal, expected behavior for the OMV engine, so I will explore other reasons why my server load / IO wait is ballooning.
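    In the meantime I'll leave something like this logging in the background so I have numbers from the next spike (a rough sketch; vmstat is from procps, pidstat needs the sysstat package, and the log path is arbitrary):

    Code
    # append load, IO wait and per-process disk IO once a minute
    while true; do
        date
        uptime
        vmstat 1 2 | tail -1
        pidstat -d 1 1 | tail -n +4
        sleep 60
    done >> /var/log/iowait-watch.log 2>&1 &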
