Posts by votdev

    I've updated the plugin to get it running again, but the image version is now pinned to v2.32.0. With newer versions a random password is generated for the admin user, and because of how running the container via Podman/systemd is implemented, that password is never shown to the user, so they cannot log in.
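    For anyone stuck on a newer image anyway: the generated admin password is typically printed to the container's log output on first start, so it may be recoverable from the logs. A minimal sketch, assuming the container and unit names are filebrowser / container-filebrowser (both are guesses, adjust to your setup):

    Bash
    # podman logs filebrowser 2>&1 | grep -i password
    # journalctl -u container-filebrowser | grep -i password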

    I think this is the death blow for the standalone plugin.


    I think it is now better to switch to the Kubernetes plugin, where you can install the latest versions of Filebrowser and Filebrowser Quantum.

    Hmm, it looks like the problem still exists. I have no idea what is going wrong here. If the Dashboard is deployed it works, but after a reboot of the system it does not work anymore and the container is restarted over and over because of an "Address already in use" error. :/
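    If it happens again, it may help to see which process is already holding the port before the container tries to grab it. A sketch (the port 8080 is only a placeholder, use the one from the error message):

    Bash
    # ss -tlnp | grep ':8080'
    # podman ps --all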

    I may have. Looks like somewhere along the line my systemd-resolved.service got disabled. I restarted and all seems well. Not sure how this happened, but all seems OK now. thanks

    If you have executed

    Bash
    # omv-salt deploy run systemd-networkd 


    your problem should have been solved automatically.
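    To verify that name resolution is back on its feet, two standard checks (plain systemd/resolved commands, nothing OMV-specific):

    Bash
    # systemctl status systemd-resolved
    # resolvectl status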

    Hmm, but not everyone uses k8s. Relying only on k8s and forcing it into the system is not nice, that's how I see it. Unless such a plugin gets split into k8s and Docker variants, then it's OK.

    Kubernetes is much more powerful than Podman, and it is much easier to add new apps this way.

    I simply do not have the time for the time-consuming development that is required for plugins based on Podman.

    K3s, the runtime engine in the Kubernetes plugin, runs nearly everywhere and shouldn't be a showstopper.

    If the community contributes plugins based on Podman, I am still happy. From now on, new apps added by myself will only be added as K8s recipes.

    I've reformatted an external Btrfs-formatted disk as NTFS. Yet when I connect it, OMV sends out hourly messages like the one below. How do I stop this no-longer-valid cron job?

    Code
    /etc/cron.hourly/openmediavault-cleanup_sf_snapshots:
    ERROR: not a btrfs filesystem: /srv/dev-disk-by-uuid-89ddba4e-f8f4-4afb-af91-a4761cf08817
    ERROR: can't access '/srv/dev-disk-by-uuid-89ddba4e-f8f4-4afb-af91-a4761cf08817'
    Traceback (most recent call last):
      File "/sbin/omv-sfsnapadm", line 286, in <module>
        sys.exit(main())
                 ^^^^^^
      File "/sbin/omv-sfsnapadm", line 282, in main
        cli()
      File "/usr/lib/python3/dist-packages/click/core.py", line 1130, in __call__
        return self.main(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/click/core.py", line 1055, in main
        rv = self.invoke(ctx)
             ^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/click/core.py", line 1657, in invoke
        return _process_result(sub_ctx.command.invoke(sub_ctx))
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/click/core.py", line 1404, in invoke
        return ctx.invoke(self.callback, **ctx.params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/click/core.py", line 760, in invoke
        return __callback(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/click/decorators.py", line 84, in new_func
        return ctx.invoke(f, obj, *args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/click/core.py", line 760, in invoke
        return __callback(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/sbin/omv-sfsnapadm", line 218, in cleanup_cmd
        snapshots: Dict[SnapshotName, Dict[SnapshotKind, Dict[UnixTimestamp, Snapshot]]] = list_snapshots(
                                                                                           ^^^^^^^^^^^^^^^
      File "/sbin/omv-sfsnapadm", line 58, in list_snapshots
        output = openmediavault.procutils.check_output(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3/dist-packages/openmediavault/procutils.py", line 63, in check_output
        return subprocess.check_output(*popenargs, **kwargs)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3.11/subprocess.py", line 466, in check_output
        return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/usr/lib/python3.11/subprocess.py", line 571, in run
        raise CalledProcessError(retcode, process.args,
    subprocess.CalledProcessError: Command '['btrfs', 'subvolume', 'list', '-s', '-q', '-u', '/srv/dev-disk-by-uuid-89ddba4e-f8f4-4afb-af91-a4761cf08817']' returned non-zero exit status 1.

    There is still a mount point configuration for the now non-existent Btrfs filesystem in the database. Delete it and you'll get rid of these messages.

    Keep in mind that the database is the source of truth in OMV.
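    If you prefer the CLI over the UI for this, the stale entry can be inspected and removed with omv-confdbadm; a sketch, assuming the usual syntax of that tool (replace <uuid> with the UUID of the stale mount point entry from the read output, and verify the flags with omv-confdbadm --help first):

    Bash
    # omv-confdbadm read --prettify conf.system.filesystem.mountpoint
    # omv-confdbadm delete --uuid <uuid> conf.system.filesystem.mountpoint
    # omv-salt deploy run fstab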

    Is there any chance the package version in the OMV repo could be updated, please, so we can install it on our office NAS setup?

    You can download and install the Debian package from SUSE OBS yourself next time if you need a newer version immediately.
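    The manual route is just the usual local-package install; a sketch with placeholder names (take the real download URL and file name from the OBS project page for your OMV release):

    Bash
    # wget <obs-download-url>/<package>.deb
    # apt-get install ./<package>.deb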

    The package has been uploaded to the OMV repo as well.