Posts by Peppa123

    Testing "USB removal" with VM offline -> everything ok - all information is shown and removal is possible:


    Testing with the VM running -> hm, OK, it shows only "Bus Number and Device ID" and not the full description. Removal throws an error:


    Failed to remove USB device.

    error: Failed to detach device from /tmp/virsh_usbXE4KCM

    error: device not found: device not present in domain configuration

    virsh detach-device --domain Windows-Server-2022 --file '/tmp/virsh_usbXE4KCM' --persistent --config --live


    OMV\Exception: Failed to remove USB device.

    error: Failed to detach device from /tmp/virsh_usbXE4KCM

    error: device not found: device not present in domain configuration

    virsh detach-device --domain Windows-Server-2022 --file '/tmp/virsh_usbXE4KCM' --persistent --config --live in /usr/share/openmediavault/engined/rpc/kvm.inc:3298

    Stack trace:

    #0 /usr/share/openmediavault/engined/rpc/kvm.inc(2229): OMVRpcServiceKvm->virshCommand()

    #1 [internal function]: OMVRpcServiceKvm->removeUsb()

    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()

    #3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod()

    #4 /usr/sbin/omv-engined(535): OMV\Rpc\Rpc::call()

    #5 {main}
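    The "device not found: device not present in domain configuration" error suggests the hostdev entry exists in only one of the two configurations, while the plugin detaches with --persistent/--config and --live at once. A manual check I would try (domain name and XML file path are taken from the error above; the /tmp file is a temporary file written by the plugin):

    # Compare the live configuration ...
    virsh dumpxml --domain Windows-Server-2022 | grep -B 2 -A 6 "<hostdev"
    # ... with the persistent (inactive) configuration
    virsh dumpxml --domain Windows-Server-2022 --inactive | grep -B 2 -A 6 "<hostdev"
    # If the device only appears in the live XML, detach from the live state only
    virsh detach-device --domain Windows-Server-2022 --file '/tmp/virsh_usbXE4KCM' --live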

    While using the latest release of OMV 7 and the plugins, the removal dialog for USB devices is empty, whether online or offline (VM running or not). Last year I had a problem with the kvm plugin where USB devices were shown but removal was not possible.


    Currently, the removal option shows "no USB device". Adding USB devices is possible; the remaining USB devices not connected to the VM are shown.


    Is there still a bug in the current version of the USB removal option?
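    To rule out a pure display problem, what the host sees can be compared with what the domain actually has attached; a rough check, assuming the same domain as above:

    # USB devices visible to the host
    lsusb
    # USB hostdev entries the removal dialog should be listing
    virsh dumpxml --domain Windows-Server-2022 | grep -B 2 -A 6 "<hostdev"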

    The upgrade procedure on the Odroid N2 was also successful. But during installation of new OMV 7 versions there are still some warnings:


    Setting up Salt environment ...
    /usr/lib/python3/dist-packages/salt/utils/http.py:8: DeprecationWarning: 'cgi' is deprecated and slated for removal in Python 3.13 import cgi
    /usr/lib/python3/dist-packages/salt/utils/jinja.py:9: DeprecationWarning: 'pipes' is deprecated and slated for removal in Python 3.13 import pipes
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    /usr/lib/python3/dist-packages/salt/grains/core.py:2711: DeprecationWarning: Use setlocale(), getencoding() and getlocale() instead ) = locale.getdefaultlocale()
    Processing system modifications ...
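    The warnings come from Salt importing Python modules that are deprecated in newer Python releases; they are noise, not errors. For a manual run they can be silenced via Python's warning filter; the omv-salt call below is only an illustration:

    # Suppress DeprecationWarnings for a single manual deployment run
    PYTHONWARNINGS="ignore::DeprecationWarning" omv-salt deploy run hosts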

    Hopefully stopping cron/anacron/monit during the upgrade will fix that. I've added that yesterday (but not released), see https://github.com/openmediava…cdd424d64cce2879f44b8a210.

    I saw that the pre-exec script for stopping these services is now included in the current version. So I tested the upgrade again and it worked. I will also test it on my other OMV installations on different hardware. The installation on my HP MicroServer Gen8 is now on version 7.
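    For reference, a minimal sketch of what such a pre-exec step amounts to; the actual script shipped with the release may differ:

    # Stop the schedulers and the monitor so they cannot fire mid-upgrade
    systemctl stop monit.service cron.service anacron.service anacron.timer
    # ... perform the upgrade ...
    # Bring the services back up afterwards
    systemctl start cron.service anacron.service anacron.timer monit.service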

    Hi there, while upgrading one of my OMV installations from 6 (latest version) to 7, the upgrade hangs while setting up anacron and waits for a password prompt initiated by systemd. Also, monit wants to start and send mails, but during the upgrade this is no longer possible because the PHP 7.4 packages have been uninstalled.


    Here is what I found in the logs. I also reported a bug on GitHub, but the case is already closed and I did not have enough information. So I reinstalled my image and did the upgrade again. The only thing I can do is kill the post-installation processes, but then anacron has to be set up again. What is the installer doing to avoid the password agent call?


    There is nothing special about my OMV 6 setup.


    Setting up libdatrie1:amd64 (0.2.13-2+b1) ...

    Setting up monit (1:5.33.0-1) ...


    Configuration file '/etc/monit/monitrc.distrib'

    ==> Deleted (by you or by a script) since installation.

    ==> Package distributor has shipped an updated version.

    ==> Keeping old config file as default.

    Setting up libmagic-mgc (1:5.44-3) ...

    Setting up ncal (12.1.8) ...

    Setting up anacron (2.3-36) ...


    root 29180 28352 0 11:19 pts/2 00:00:00 /bin/sh /var/lib/dpkg/info/anacron.postinst configure 2.3-30

    root 29232 29180 0 11:19 pts/2 00:00:00 /usr/bin/perl /usr/bin/deb-systemd-invoke restart anacron.service anacron.timer

    root 29239 29232 0 11:19 pts/2 00:00:00 systemctl --quiet --system restart anacron.service anacron.timer

    root 29240 29239 0 11:19 pts/2 00:00:00 /bin/systemd-tty-ask-password-agent --watch


    Jan 23 11:19:14 g8-db-omv6 anacron[1055]: Received SIGUSR1

    Jan 23 11:19:14 g8-db-omv6 systemd[1]: Stopping anacron.service - Run anacron jobs...


    root@g8-db-omv6:~# systemctl list-jobs

    JOB UNIT TYPE STATE

    2829 anacron.timer restart waiting

    2720 anacron.service restart running
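    To get out of this state without a hard reset, the stuck restart jobs can be cancelled from a second shell; a workaround I would try, not an official procedure:

    # Cancel the queued jobs using the IDs shown by `systemctl list-jobs`
    systemctl cancel 2720 2829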



    Jan 23 11:19:14 g8-db-omv6 systemd[1]: Stopping anacron.service - Run anacron jobs...

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: 'g8-db-omv6' Monit 5.33.0 started

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Cannot connect to [127.0.0.1]:25 -- Connection refused

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Cannot open a connection to the mailserver 127.0.0.1:25 -- Operation now in progress

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Mail: Delivery failed -- no mail server is available

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Alert handler failed, retry scheduled for next cycle

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: 'php-fpm' process is not running

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Cannot connect to [127.0.0.1]:25 -- Connection refused

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Cannot open a connection to the mailserver 127.0.0.1:25 -- Operation now in progress

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Mail: Delivery failed -- no mail server is available

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: Adding event to the queue file /var/lib/monit/events/1706005182_555db6131d10 for later delivery

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: 'php-fpm' trying to restart

    Jan 23 11:19:42 g8-db-omv6 monit[29179]: 'php-fpm' start: '/bin/systemctl start php7.4-fpm'

    Jan 23 11:20:12 g8-db-omv6 monit[29179]: 'php-fpm' failed to start (exit status 1) -- '/bin/systemctl start php7.4-fpm': Failed to start php7.4-fpm.service: Unit php7.4-fpm.service is masked.#012
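    The alerts monit could not deliver are queued on disk and can be inspected (or removed, if unwanted) at the path shown in the log above:

    # Events monit queued for later delivery
    ls -l /var/lib/monit/events/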

    And here are the open tasks. I changed the root disk some weeks ago and duplicated the installation to the new disk. Before, it was a 300 GB Western Digital hard drive; now it is a Kingston NVMe disk.


    As I see it, there is a task that tries to delete the entry "/", but the corresponding UUID no longer exists and is now different for the "/" entry.


    I am not doing, and did not do, anything to delete the root filesystem. I also don't know why there is a job among the "pending tasks" trying to unmount the root filesystem. The only thing I did was test the "kvm" plugin, and that was weeks ago. The "kvm" plugin installs the "shareroot" plugin, so uninstalling that also uninstalls the "kvm" plugin.
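    For reference, the list below can be reproduced from the OMV configuration database; I am assuming the usual datamodel ID for mount entries here:

    # Dump all configured mount point entries as pretty-printed JSON
    omv-confdbadm read --prettify conf.system.filesystem.mountpoint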


    Here is the output:


    [
        {
            "comment": "Seagate Festplatte 4TB",
            "dir": "/srv/dev-disk-by-uuid-94537b05-9875-4501-9a8c-3aa6cbda18db",
            "freq": 0,
            "fsname": "/dev/disk/by-uuid/94537b05-9875-4501-9a8c-3aa6cbda18db",
            "hidden": false,
            "opts": "defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl",
            "passno": 2,
            "type": "ext4",
            "usagewarnthreshold": 95,
            "uuid": "86d9bed5-fbb2-49ad-a7ce-4e773428f874"
        },
        {
            "comment": "Samsung EVO SSD 2 TB",
            "dir": "/srv/dev-disk-by-uuid-35b74cc0-e206-49a2-ab0a-eb64c2e2d317",
            "freq": 0,
            "fsname": "/dev/disk/by-uuid/35b74cc0-e206-49a2-ab0a-eb64c2e2d317",
            "hidden": false,
            "opts": "defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl",
            "passno": 2,
            "type": "ext4",
            "usagewarnthreshold": 90,
            "uuid": "ec518a26-7a45-4577-9fb4-42d352b089eb"
        },
        {
            "comment": "WD SATA 5TB",
            "dir": "/srv/dev-disk-by-uuid-cd15dd88-5d5b-4390-8553-91abadbeb1cb",
            "freq": 0,
            "fsname": "/dev/disk/by-uuid/cd15dd88-5d5b-4390-8553-91abadbeb1cb",
            "hidden": false,
            "opts": "defaults,nofail,user_xattr,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,acl",
            "passno": 2,
            "type": "ext4",
            "usagewarnthreshold": 95,
            "uuid": "6286d700-1285-47c5-9694-cfc18069c416"
        },
        {
            "comment": "",
            "dir": "/export/Filme",
            "freq": 0,
            "fsname": "/srv/dev-disk-by-uuid-cd15dd88-5d5b-4390-8553-91abadbeb1cb/Filme/",
            "hidden": false,
            "opts": "bind,nofail",
            "passno": 0,
            "type": "none",
            "usagewarnthreshold": 0,
            "uuid": "af6c19b3-a056-4ae9-a52b-5f8587a17758"
        },
        {
            "comment": "",
            "dir": "/",
            "freq": 0,
            "fsname": "/dev/disk/by-uuid/4fee3c4a-d782-47f1-8972-ef8ba821ac52",
            "hidden": true,
            "opts": "errors=remount-ro",
            "passno": 1,
            "type": "ext4",
            "usagewarnthreshold": 0,
            "uuid": "79684322-3eac-11ea-a974-63a080abab18"
        }
    ]

    I have the same problem on my installation, and it is not nice that applying pending commits tries to unmount "/". Why? Only a power cycle is possible after this, because "/" is unmounted and nothing can be done anymore; everything is "gone".


    omv-rpc -u admin "Config" "applyChanges" "{\"modules\": $(cat /var/lib/openmediavault/dirtymodules.json), \"force\": true}"

    {"response":null,"error":{"code":0,"message":"Removing the directory '\/' has been aborted, the resource

    is busy.","trace":"OMV\\Exception: Removing the directory '\/' has been aborted, the resource is busy. in \/usr\/share\/openmediavault\/engined\/module\/fstab.inc:65\nStack trace:\n#0 [internal function]: Engined\\Module\\FSTab->deleteEntry(Array)\n#1 \/usr\/share\/php\/openmediavault\/engine\/module\/moduleabstract.inc(157): call_user_func_array(Array, Array)\n#2 \/usr\/share\/openmediavault\/engined\/module\/fstab.inc(31): OMV\\Engine\\Module\\ModuleAbstract->execTasks('delete')\n#3 \/usr\/share\/openmediavault\/engined\/rpc\/config.inc(167): Engined\\Module\\FSTab->preDeploy()\n#4 [internal function]: Engined\\Rpc\\Config->applyChanges(Array, Array)\n#5 \/usr\/share\/php\/openmediavault\/rpc\/serviceabstract.inc(123): call_user_func_array(Array, Array)\n#6 \/usr\/share\/php\/openmediavault\/rpc\/rpc.inc(86): OMV\\Rpc\\ServiceAbstract->callMethod('applyChanges', Array, Array)\n#7 \/usr\/sbin\/omv-engined(537): OMV\\Rpc\\Rpc::call('Config', 'applyChanges', Array, Array, 1)\n#8 {main}"}}

    That is the error message I already posted above.
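    Note that in the mount entry list above, the "uuid" field of each object is only the database ID of the entry, while "fsname" is what has to match a real filesystem. If the "/" entry is indeed a leftover from the cloned root disk, it might be removable directly from the database; a sketch under that assumption, after backing up /etc/openmediavault/config.xml (be careful: the "shareroot" plugin manages this entry, so removing it blindly may not be safe):

    # Show the suspicious entry first
    omv-confdbadm read --uuid 79684322-3eac-11ea-a974-63a080abab18 conf.system.filesystem.mountpoint
    # Then delete it (assumption: this is the entry the pending task stumbles over)
    omv-confdbadm delete --uuid 79684322-3eac-11ea-a974-63a080abab18 conf.system.filesystem.mountpoint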