Posts by iago27

    Update: The solution was apparently related to the salt['omv_utils.get_root_filesystem'] function returning a path without the leading '/'. I manually added the full path to /srv/salt/omv/deploy/collectd/plugins/disk.sls and now omv-salt deploy run collectd completes successfully.


    Changed (~line 60)

    Code
    # Append the root file system.
    {% set root_fs = salt['omv_utils.get_root_filesystem']() %}

    to

    Code
    # Append the root file system.
    {% if grains['virtual_subtype'] != 'LXC' %}
      {% set root_fs = salt['omv_utils.get_root_filesystem']() %}
    {% else %}
      {% set root_fs = '<PATH/TO/ZPOOL>' %}
    {% endif %}

    Alternatively, just comment out the existing line and hard-code your absolute path instead.
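As a sanity check, this is roughly how the mismatch can be seen from inside the container (a sketch only; the exact output depends on your storage backend):

```shell
# Inside the LXC, check what the root filesystem source resolves to.
# On a ZFS-backed Proxmox container this is typically a dataset name such
# as "rpool/data/subvol-NNN-disk-0" -- note there is no leading '/',
# which is what tripped up the collectd state above.
findmnt -n -o SOURCE /

# After editing disk.sls, re-run the failing deployment (commented out
# here because it modifies system state):
# omv-salt deploy run collectd
```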

    Hi, I am consistently encountering the below error when I try to write/update my config after mounting a new disk. OMV is running in an LXC container on Proxmox VE 7. It appears to be trying to probe or remount the root filesystem (the rpool in which the LXC's data is stored), but this is not exposed inside the container itself. This had not been a persistent issue until now.


    An excerpt of the error message is posted below:


    For further context, this was a previously used hard drive that I had mounted on the system. I removed all the files, wiped it, reformatted it, and then mounted the new clean partition. When I try to write my updated config, I encounter the above error. I have rebooted the machine several times but the problem persists.


    I believe this change of partitions within the same disk may have confused OMV. Thanks for any help.
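One thing worth noting (my assumption, not something confirmed in this thread) is that OMV tracks filesystems by UUID in its config database, so wiping and reformatting a disk gives it a new UUID while a stale reference to the old one can linger. A quick, read-only way to compare:

```shell
# List current filesystem UUIDs on the box (harmless, read-only).
lsblk -o NAME,FSTYPE,UUID

# Then compare against what OMV has recorded, e.g. with its config tool
# (read-only query; commented out since it only exists on an OMV host):
# omv-confdbadm read conf.system.filesystem.mountpoint
```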


    Edit 2: The problem appears to be -- as expected -- related to the fact that I was running it in an LXC container on a Proxmox host, which does not have access to the ZFS pool's root path from within the container. I tried exposing this path explicitly in the container and adding the block device to the cgroup, but neither helped.
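For anyone trying the same workaround, these are the kinds of entries involved on the Proxmox host side; the dataset path, mount point, and device numbers below are placeholders for illustration, not my actual values:

```
# /etc/pve/lxc/<vmid>.conf (Proxmox host side) -- illustrative only
# Bind-mount a host path into the container:
mp0: /rpool/data/example,mp=/mnt/example
# Allow a block device through cgroup v2 (major:minor are placeholders):
lxc.cgroup2.devices.allow: b 230:* rwm
```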

    Thank you for that clarification -- I should explain further that the above is just an excerpt, and there are many more instances of omv-engined running than shown above.


    For full context, I have 6 physical disks mounted, 2 umounted but installed, and 2 virtual mergerfs pools.

    I guess this may still be normal, expected behavior for the OMV engine; I will explore other reasons why my server load / IO wait is ballooning.

    Hi everyone, I've been experiencing some repeated behavior on my OMV server and I'm hoping for any help or insights on how to address it.


    My setup includes a couple of mergerfs filesystems exported via NFS. The physical drives are connected via an HBA card and passed through to OMV. The OMV version is 6.4.0-3 (Shaitan), running in a Debian container. Everything is up to date, and the issue persists after reboots and after restarting services individually. The data is mostly video files.


    For the most part everything has been stable, but recently I've been having random periods of huge spikes in server load and IO wait. These also make the NFS shares unresponsive, along with the VMs that have these shares mounted, and it usually ends with me having to force-kill the process and/or reboot the machine.

    During troubleshooting, I noticed a large number of blkid -o full and omv-engined commands appearing in ps aux. iotop sometimes shows a significant IO percentage for mergerfs, although, weirdly, not during the most recent occurrence.

    I noticed multiple instances of the omv-engined daemon running simultaneously, which seemed unusual.

    For example, ps -ef:

    Code
    root     1046841  614244  0 18:32 ?        00:00:00 omv-engined
    root     1046847 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046850 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046855 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046859 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046862 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046865 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046867 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046871 1046841  0 18:32 ?        00:00:00 omv-engined
    root     1046875 1046841  0 18:32 ?        00:00:00 omv-engined

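If it helps anyone reproduce the check, this is roughly how I was counting them; the process names are real, but the one-liners themselves are just a sketch:

```shell
# Show all omv-engined processes with parent PID and elapsed time.
# Children sharing one parent is normal forking behaviour, but workers
# that pile up and never exit are not.
ps -o pid,ppid,etime,cmd -C omv-engined || echo "no omv-engined running"

# Quick counts of the two process names that kept accumulating:
pgrep -c omv-engined || echo "no omv-engined processes"
pgrep -fc 'blkid -o full' || echo "no blkid processes"
```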
    and my dashboard:

    Hi, commenting here to report that I'm having the same issue. Are you running OMV bare-metal or in a container? I'm asking because mine is in a Proxmox LXC (Debian 11).

    Have you tried a different browser? TBH I'm taking a shot (or several) in the dark here. Have you looked at your fstab (cat /etc/fstab)? My understanding is that mntent passes that information to fstab.

    I can confirm that the fstab looks normal. I believe part of the problem is that when mount -a is called everything mounts successfully, but then OMV immediately unmounts it for some reason, so from the caller's perspective it essentially fails silently.
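One way to catch that silent unmount in the act, sketched here with a placeholder mount point (/srv/mergerfs/pool1 is not my real path):

```shell
# Validate fstab syntax without mounting anything.
findmnt --verify || true   # non-zero exit flags fstab inconsistencies

# Check whether a given target is actually mounted right now; run this
# immediately after `mount -a` to see if something unmounted it again.
findmnt /srv/mergerfs/pool1 || echo "not currently mounted"
```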