Posts by Peppa123

    I tried to edit the parameters for one disk under "/storage/disks", for example spindown time and enabling the write cache. When I try to apply the "pending configuration change", OMV throws an error:


    - OMV8 has all latest updates installed



    ....

    Sorry, I was a bit confused about the status of the ZFS plugin, the running kernel, and the recurring error messages about not being able to build the ZFS modules on kernel 6.14, so I did the following things:


    - removed all kernel sources > 6.8x

    - removed all linux packages with status "rc"

    - removed zfs plugin


    - reinstalled kernel 6.17.4

    - reinstalled zfs plugin

    - after the first reinstall and commit in the OMV GUI, the ZFS plugin threw an "RPC error"

    - reinstalled the ZFS plugin again - it was shown as "not installed"

    - committed the configuration changes once again in the OMV GUI

    - now the plugin is working

    - rebooted

    - did a "zpool import" to get all the pools back

    - rebooted
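    The cleanup and reinstall sequence above can be sketched as shell commands; the package names (openmediavault-zfs, the header packages) are assumptions based on a typical OMV 8 setup, so check them against your system before running anything:

```shell
# Sketch of the steps above - package names are assumed, verify first.

# 1. List installed kernel header/source packages to pick out the ones > 6.8.x
dpkg -l 'linux-headers-*' 'pve-headers-*' 'proxmox-headers-*' 2>/dev/null | awk '/^ii/ {print $2}'

# 2. Purge packages left in "rc" state (removed, but config files remain)
dpkg -l | awk '/^rc/ {print $2}' | xargs -r sudo apt-get purge -y

# 3. Remove and reinstall the ZFS plugin
sudo apt-get purge -y openmediavault-zfs
sudo apt-get install -y openmediavault-zfs

# 4. After the reboot, import all pools not yet known to the system
sudo zpool import -a
```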

    Hmm, after running the script, new errors are thrown while zfs-dkms is building. Now uninstalling and reinstalling the "omv-zfs" plugin fails as well.


    Building initial module zfs/2.3.5 for 6.14.11-5-pve
    Sign command: /lib/modules/6.14.11-5-pve/build/scripts/sign-file
    Signing key: /var/lib/dkms/mok.key
    Public certificate (MOK): /var/lib/dkms/mok.pub

    Running the pre_build script.................... done.
    Building module(s)...(bad exit status: 2)
    Failed command:
    make -j6 KERNELRELEASE=6.14.11-5-pve

    Error! Bad return status for module build on kernel: 6.14.11-5-pve (x86_64)
    Consult /var/lib/dkms/zfs/2.3.5/build/make.log for more information.
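    When zfs-dkms aborts like this, the actual compiler error is near the end of the make.log the message points to. A small sketch for narrowing it down; the dkms subcommands are standard dkms CLI, and the version/kernel strings are taken from the log above:

```shell
# Show which modules DKMS knows about and their build state
dkms status

# The real compile error is usually in the last lines of the build log
tail -n 50 /var/lib/dkms/zfs/2.3.5/build/make.log

# A common cause: headers installed for a different kernel than the one running
uname -r
dpkg -l | grep -E 'proxmox-headers|pve-headers'

# Once the headers match the running kernel, retry the build explicitly
sudo dkms build zfs/2.3.5 -k "$(uname -r)"
sudo dkms install zfs/2.3.5 -k "$(uname -r)"
```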

    Just tried to install all the latest updates for OMV8 / OMV8 plugins and got dependency problems with "omv-zfs-provider-pve:amd64=8.0.0":


    1. omv-zfs-provider-pve:amd64=8.0.0 is selected for install

    2. omv-zfs-provider-pve:amd64 Depends proxmox-headers-6.14 | proxmox-headers-6.17

    but none of the choices are installable:

    [no choices]


    The system was upgraded from OMV7 to OMV8 a few days ago. Is there some way to fix the dependencies?
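    A hedged sketch of how such a missing-headers dependency is usually diagnosed; the header package names come from the apt message above, and whether they are installable depends on the Proxmox kernel repository still being configured after the OMV7 -> OMV8 upgrade:

```shell
# See whether apt can find the header packages at all, and from which repo
apt-cache policy proxmox-headers-6.14 proxmox-headers-6.17

# Check which repositories are configured - the Proxmox entry may have been
# lost or disabled during the major upgrade
grep -rh '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null

# After fixing the repo entry, refresh and install the headers matching the
# running kernel series, then retry the upgrade
sudo apt-get update
sudo apt-get install -y proxmox-headers-6.14
sudo apt-get dist-upgrade
```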


    Yes, that's correct. I "wiped" the two discs within the storage plugin and went back to the ZFS plugin, but they were not listed in the drop-down field of usable discs. So I created the pool on the command line; the auto refresh in the plugin then showed the new pool, and I created the filesystems with the ZFS plugin.

    Hi there, I would like to use the ZFS plugin to create more than one pool with datasets/volumes on different discs. Currently one pool with datasets/volumes already exists, and I added two new discs to the server.


    So I tried to create a new pool, but the "devices" drop-down field stays empty. The two new discs are empty; all partitions were deleted. So normally it should be possible to create a new pool on one or more of these discs.


    Is this a "bug"? For now I will create the new ZFS pools on the command line, without the plugin.
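    For reference, creating a pool on the command line as described might look like the sketch below; the device paths and the pool name "tank2" are placeholders, not taken from the actual system:

```shell
# Check the new, empty discs first - device names here are hypothetical
lsblk -o NAME,SIZE,TYPE,FSTYPE

# Create a mirrored pool from the two discs; /dev/disk/by-id paths are more
# stable across reboots than /dev/sdX names
sudo zpool create -o ashift=12 tank2 mirror \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Create a first dataset; the plugin's auto refresh should then pick it up
sudo zfs create tank2/data
zpool status tank2
```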

    I had a similar problem and had deactivated the backports. I then installed the 6.1.0-33-amd64 kernel - not the Proxmox kernel.


    However, after some searching I found that my "sources.list" was not complete, and therefore the appropriate packages could not be found, because the contrib component required for the ZFS packages was not configured.


    OLD

    Code
    deb http://deb.debian.org/debian/ bookworm main
    deb-src http://deb.debian.org/debian/ bookworm main
    
    
    # bookworm-updates, to get updates before a point release is made;
    # see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports
    
    
    deb http://deb.debian.org/debian/ bookworm-updates main contrib non-free
    deb-src http://deb.debian.org/debian/ bookworm-updates main contrib non-free


    NEW

    Code
    deb http://deb.debian.org/debian/ bookworm main contrib non-free
    deb-src http://deb.debian.org/debian/ bookworm main contrib non-free
    
    
    # bookworm-updates, to get updates before a point release is made;
    # see https://www.debian.org/doc/manuals/debian-reference/ch02.en.html#_updates_and_backports
    
    
    deb http://deb.debian.org/debian/ bookworm-updates main contrib non-free
    deb-src http://deb.debian.org/debian/ bookworm-updates main contrib non-free

    I was then able to install the missing package "zfs-dkms", so that the ZFS modules could be built to match my kernel. The matching kernel headers must of course also be installed beforehand.
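    With the corrected sources.list in place, the install order described above (headers first, then zfs-dkms from contrib) could look like this on Debian bookworm:

```shell
# Refresh the package lists after editing sources.list
sudo apt-get update

# Headers for the running kernel must be present before DKMS can build anything
sudo apt-get install -y "linux-headers-$(uname -r)"

# zfs-dkms and the userland tools live in contrib
sudo apt-get install -y zfs-dkms zfsutils-linux

# Load the freshly built module and check that zfs-zed can start now
sudo modprobe zfs
systemctl status zfs-zed
```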


    The ZFS service "zfs-zed", which previously could not start due to the missing kernel modules, then worked as well.


    However, this means that different licence types are used and this is displayed accordingly during the installation.


    I can now test again whether this also works with the backport kernel.

    No, the container data and the information about the containers/versions are already copied and backed up by my own script. The only missing part is the automatic registration/sync of the YML files with the Compose GUI data, or an import of the Docker Compose data.


    Syncing of the compose folder is done via rsync.

    Hi there, I am using two PCs with OMV7, one productive and one as a "cold standby" server for my Docker environment. I do not want to use Docker Swarm as an HA solution because the two PCs are several kilometres apart and only connected via VPN (DSL etc., not enough bandwidth).


    So for now I am synchronising backups of the Docker data and compose files, and doing it the manual way in the OMV Compose plugin via "Sync changes from file" or "Import" of Docker Compose YML files into the GUI. Everything else is done via a self-written script that backs up the Docker data / container information / compose files and restores them once per day on the cold-standby site.


    To automate the (re)registration of new or changed Docker Compose files after the rsync sync, it would be nice if some command line syntax were available for these steps. Or maybe another way of automating this is possible.


    Is it possible to use some command line syntax in a bash script that triggers the same actions the GUI plugin performs for "Sync changes from file" or "Import / Import one" of a Docker Compose file?


    Or is there some way to do a complete backup for restoring, like a bare-metal disaster recovery, onto another OMV installation of the same version?
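    Until the plugin offers a CLI for "Sync changes from file", one hedged workaround on the standby side is a cron job that re-applies whatever rsync delivered, bypassing the GUI entirely; the directory layout (one project folder per stack under /srv/compose) is an assumption:

```shell
#!/bin/sh
# Assumed layout: /srv/compose/<project>/docker-compose.yml, delivered by rsync.
COMPOSE_DIR=/srv/compose

for project in "$COMPOSE_DIR"/*/; do
    yml="${project}docker-compose.yml"
    [ -f "$yml" ] || continue
    # 'up -d' re-reads the file and only recreates services whose definition changed
    docker compose --project-directory "$project" up -d --remove-orphans
done
```

Note that this keeps the containers themselves in sync, but it does not register the stacks in the Compose plugin's GUI - which is exactly the missing piece the question is about.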

    Hi there, would it be possible to set a persistent network interface in the WOL plugin? Currently, every time I want to send a WOL packet I have to select the interface from which to send it, even though there is only one interface configured in the drop-down field.


    So in daily use it would just be one selection less - select the configured device, click "Send" and off you go - finished :)
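    As a stop-gap until the plugin remembers the interface, the packet can also be sent from a small script with the interface hard-coded; etherwake (Debian package of the same name) takes the interface via -i, and both the interface name and the MAC address below are placeholders:

```shell
# Send the magic packet out of a fixed interface - no drop-down to click
sudo etherwake -i enp3s0 00:11:22:33:44:55
```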

    OK, actually no issue, just the question: as I said, it is a little confusing that newer headers plus two sets of kernel headers for the running kernel get installed, when normally only the headers corresponding to the running kernel should be installed.


    On my "old" hardware, the newest kernel 6.8 has a problem with some new default when "virtiommu" is active in the BIOS. While searching through some forums I did not find out whether there would be a performance impact if I disable "virtiommu" in the server's BIOS.