Posts by Peppa123

    I had a similar problem and had deactivated the backports repository. I then installed the 6.1.0-33-amd64 kernel, not the Proxmox kernel.


    However, after some searching I found that my ‘sources.list’ was incomplete, so the required packages could not be found: the contrib component that ZFS needs was not configured.


    OLD

    Code
    deb http://deb.debian.org/debian/ bookworm main
    deb-src http://deb.debian.org/debian/ bookworm main
    
    
    # bookworm-updates, to get updates before a point release is made;
    # see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports
    
    
    deb http://deb.debian.org/debian/ bookworm-updates main contrib non-free
    deb-src http://deb.debian.org/debian/ bookworm-updates main contrib non-free


    NEW

    Code
    deb http://deb.debian.org/debian/ bookworm main contrib non-free
    deb-src http://deb.debian.org/debian/ bookworm main contrib non-free
    
    
    # bookworm-updates, to get updates before a point release is made;
    # see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports
    
    
    deb http://deb.debian.org/debian/ bookworm-updates main contrib non-free
    deb-src http://deb.debian.org/debian/ bookworm-updates main contrib non-free

    I was then able to install the missing package ‘zfs-dkms’ so that the ZFS modules could be built to match my kernel. Of course, the matching kernel headers must also be installed beforehand.
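The install sequence described above can be sketched as follows. This is a sketch assuming an amd64 Debian bookworm system; the commands are only echoed here so the sequence can be reviewed before running it as root, and `zfsutils-linux` is my assumption for the userland tools package:

```shell
# Sketch: commands are echoed rather than executed, so the sequence
# can be checked first and then run by hand as root.
run() { echo "+ $*"; }

run apt-get update
# DKMS builds the ZFS modules against the headers of the running kernel,
# so the matching linux-headers package has to be present first.
run apt-get install linux-headers-amd64
run apt-get install zfs-dkms zfsutils-linux
# verify that the module was built for the running kernel, then load it
run dkms status
run modprobe zfs
```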


    The ZFS service ‘zfs-zed’, which previously could not start due to the missing kernel modules, then also worked.


    However, this means that different licence types are mixed (CDDL for ZFS, GPL for the kernel), and a corresponding notice is displayed during the installation.


    I can now test again whether this also works with the backports kernel.

    No, the container data and the information about the container/version are already copied and backed up by my own script. The only missing part is the automatic registration/sync of the YAML file with the compose GUI data, or the import of the Docker Compose data.


    Syncing the compose folder is done via rsync.

    Hi there, I am using two PCs for OMV7, one production and one as a "cold standby" server for my Docker environment. I do not want to use Docker Swarm as an HA solution because the two PCs are several kilometres apart and only connected via VPN (DSL etc., not enough bandwidth).


    So for now I am synchronising backups of the Docker data and compose files and doing it the manual way in the OMV Compose plugin, using "Sync changes from file" or "Import" to bring the docker-compose YAML files into the GUI. Everything else is done via a self-written script that backs up the Docker data, container information and compose files and restores them once per day on the cold-standby site.


    To automate the (re-)registration of new or changed docker-compose files during the rsync run, it would be nice if a command-line syntax were available for these steps. Or maybe another way of automating this is possible.


    Is there some command-line syntax, usable from a bash script, for what the GUI plugin does with the "Sync changes from file" or "Import / Import one" commands for a docker-compose file?


    Or is there some way to do a complete backup that can be restored like a bare-metal disaster recovery onto another OMV installation of the same version?

    Hi there, would it be possible to set a persistent network interface in the WOL plugin? Currently, every time I want to send a WOL packet I have to select the interface to send it from, even though there is only one configured interface in the drop-down field.


    So in day-to-day use it would be one selection less: pick the configured device, click "send", and you are finished :)
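As a stopgap until the plugin can remember the interface, the magic packet can also be sent from the shell with the interface fixed once; a sketch using the common `etherwake` tool, with the interface name and MAC address as placeholders and the command echoed rather than executed:

```shell
# Sketch: command echoed so it can be reviewed; etherwake needs root.
run() { echo "+ $*"; }

# etherwake sends the magic packet out of a specific interface (-i),
# so the interface choice can be hard-coded once, e.g. in a small script
# or scheduled job, instead of being selected in the GUI each time.
run etherwake -i enp2s0 00:11:22:33:44:55
```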

    OK, actually no issue, just the question: as I said, it is a little confusing to have newer headers plus two sets of kernel headers installed for the running kernel, when normally only the headers matching the running kernel should be installed.


    On my "old" hardware the newest 6.8 kernel has a problem with some new default when "virtiommu" is active in the BIOS. Searching some forums, I could not find out whether disabling "virtiommu" in the server's BIOS would have a performance impact.

    Testing "USB removal" with the VM offline -> everything OK; all information is shown and removal is possible:


    Testing with the VM running -> hm, OK, it shows "Bus Number and Device ID" but not the full description. Removal throws an error:


    Failed to remove USB device.

    error: Failed to detach device from /tmp/virsh_usbXE4KCM

    error: device not found: device not present in domain configuration

    virsh detach-device --domain Windows-Server-2022 --file '/tmp/virsh_usbXE4KCM' --persistent --config --live


    OMV\Exception: Failed to remove USB device.

    error: Failed to detach device from /tmp/virsh_usbXE4KCM

    error: device not found: device not present in domain configuration

    virsh detach-device --domain Windows-Server-2022 --file '/tmp/virsh_usbXE4KCM' --persistent --config --live in /usr/share/openmediavault/engined/rpc/kvm.inc:3298

    Stack trace:

    #0 /usr/share/openmediavault/engined/rpc/kvm.inc(2229): OMVRpcServiceKvm->virshCommand()

    #1 [internal function]: OMVRpcServiceKvm->removeUsb()

    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()

    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod()

    #4 /usr/sbin/omv-engined(535): OMV\Rpc\Rpc::call()

    #5 {main}
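The "device not present in domain configuration" message suggests the hot-plugged USB device exists only in the live domain state, while `--persistent`/`--config` operate on the persistent XML. A sketch for checking both XML views and detaching from the live domain only; the domain name comes from the trace above, the device XML path `/tmp/usb.xml` is a placeholder, and the commands are echoed rather than executed:

```shell
# Sketch: commands echoed for review; real runs need libvirt access.
run() { echo "+ $*"; }
DOMAIN=Windows-Server-2022

# Compare the live XML with the persistent (inactive) XML: a hot-plugged
# hostdev may appear only in the former, which makes --config/--persistent
# fail with "device not present in domain configuration".
run "virsh dumpxml $DOMAIN | grep -A4 hostdev"            # live state
run "virsh dumpxml --inactive $DOMAIN | grep -A4 hostdev" # persistent config

# Detach from the running domain only, without touching the persistent XML:
run virsh detach-device --domain "$DOMAIN" --file /tmp/usb.xml --live
```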

    While using the latest release of OMV7 and the plugins, the removal box when trying to remove a USB device online or offline (VM running or not) is empty. Last year I had a problem with the KVM plugin where USB devices were shown but removal was not possible.


    Currently the removal option shows "no usb device". Adding USB devices is possible, and the remaining USB devices not connected to the VM are shown.


    Is there still a bug in the current version of the USB removal option?

    The upgrade procedure on my Odroid N2 was also successful. But during installation of new OMV 7 versions there are still some warnings:


    Setting up Salt environment ...
    /usr/lib/python3/dist-packages/salt/utils/http.py:8: DeprecationWarning: 'cgi' is deprecated and slated for removal in Python 3.13 import cgi
    /usr/lib/python3/dist-packages/salt/utils/jinja.py:9: DeprecationWarning: 'pipes' is deprecated and slated for removal in Python 3.13 import pipes
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    Processing system modifications ...

    Hopefully stopping cron/anacron/monit during the upgrade will fix that. I've added that yesterday (but not released), see https://github.com/openmediava…cdd424d64cce2879f44b8a210.

    I saw that the pre-exec script for stopping these services is now included in the current version, so I tested the upgrade again and it worked. I will also test it on my other OMV installations on different hardware. The installation on my HP MicroServer Gen8 is now on version 7.

    Hi there, during the upgrade of one of my OMV installations from 6 (latest version) to 7, the upgrade hangs while setting up anacron, waiting for a password prompt initiated by systemd. Monit also wants to start and send mails, but during the upgrade this is no longer possible because the PHP 7.4 packages have been removed.


    Here is what I found in the logs. I also reported a bug on GitHub, but the case is already closed because I did not have enough information. So I reinstalled my image and did the upgrade again. The only thing I can do is kill the post-installation processes, but then anacron has to be set up again. What can the installer do to avoid the password-agent call?


    There is nothing special about my OMV6 setup.


    Setting up libdatrie1:amd64 (0.2.13-2+b1) ...

    Setting up monit (1:5.33.0-1) ...


    Configuration file '/etc/monit/monitrc.distrib'

    ==> Deleted (by you or by a script) since installation.

    ==> Package distributor has shipped an updated version.

    ==> Keeping old config file as default.

    Setting up libmagic-mgc (1:5.44-3) ...

    Setting up ncal (12.1.8) ...

    Setting up anacron (2.3-36) ...


    root 29180 28352 0 11:19 pts/2 00:00:00 /bin/sh /var/lib/dpkg/info/anacron.postinst configure 2.3-30

    root 29232 29180 0 11:19 pts/2 00:00:00 /usr/bin/perl /usr/bin/deb-systemd-invoke restart anacron.service anacron.timer

    root 29239 29232 0 11:19 pts/2 00:00:00 systemctl --quiet --system restart anacron.service anacron.timer

    root 29240 29239 0 11:19 pts/2 00:00:00 /bin/systemd-tty-ask-password-agent --watch


    Jan 23 11:19:14 g8-db-omv6 anacron[1055]: Received SIGUSR1

    Jan 23 11:19:14 g8-db-omv6 systemd[1]: Stopping anacron.service - Run anacron jobs...


    root@g8-db-omv6:~# systemctl list-jobs

    JOB UNIT TYPE STATE

    2829 anacron.timer restart waiting

    2720 anacron.service restart running
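When the restart job is stuck like this, the queued systemd jobs can be cancelled by ID and the waiting password agent stopped afterwards; a sketch using the job IDs from the `list-jobs` output above, with the commands echoed rather than executed:

```shell
# Sketch: commands echoed so the stuck-job cleanup can be reviewed first.
run() { echo "+ $*"; }

run systemctl list-jobs
# cancel the stuck restart jobs by the IDs shown by list-jobs
run systemctl cancel 2829 2720
# the dpkg postinst's waiting password agent can then be stopped as well
run pkill -f systemd-tty-ask-password-agent
```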



    Jan 23 11:19:14 g8-db-omv6 systemd[1]: Stopping anacron.service - Run anacron jobs...

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: 'g8-db-omv6' Monit 5.33.0 started

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Cannot connect to [127.0.0.1]:25 -- Connection refused

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Cannot open a connection to the mailserver 127.0.0.1:25 -- Operation now in progress

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Mail: Delivery failed -- no mail server is available

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Alert handler failed, retry scheduled for next cycle

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: 'php-fpm' process is not running

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Cannot connect to [127.0.0.1]:25 -- Connection refused

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Cannot open a connection to the mailserver 127.0.0.1:25 -- Operation now in progress

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Mail: Delivery failed -- no mail server is available

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Adding event to the queue file /var/lib/monit/events/1706005182_555db6131d10 for later delivery

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: 'php-fpm' trying to restart

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: 'php-fpm' start: '/bin/systemctl start php7.4-fpm'

    Jan 23 11:20:12 g8-db-omv6 monit[29179]: 'php-fpm' failed to start (exit status 1) -- '/bin/systemctl start php7.4-fpm': Failed to start php7.4-fpm.service: Unit php7.4-fpm.service is masked.#012