Posts by belierzz

    As you can see, the pool is imported and ready to use from the shell, BUT

    it is not detected by the OMV web GUI, so it can't work as expected (I expected that the imported pool would show up or be mountable in Storage -> Filesystems so I can share some folders, see free space, etc.).

    For me it works exactly as you describe - ZFS pool / datasets show up in OMV6 filesystems and can be shared as usual. I did nothing different than you did.

    Ryecoaaron,


    Your work is much appreciated and your approach more than only reasonable. Please take your time and no need to rush.


    For sure, I made a fresh installation of OMV6 on a different SSD rather than an upgrade, and I'm back on OMV5 without ANY hassle. As long as OMV5 is supported, I can wait. BR

    When I say imported, I mean imported by the plugin which can't be done on OMV 6.x other than by an rpc call (no, I haven't looked at what that would take).

    I'm in the same position regarding OMV6 and ZFS: the imported pool does not show up after importing.


    Additionally, an already mounted file system (not ZFS, but EXT4) disappears from the WebUI as soon as the pool is imported and appears again as soon as the pool has been exported.

    Thanks for the feedback and well understood regarding the swap file systems.


    I'm no programmer and certainly not in a position to improve your code. The memory usage tab in the performance statistics shows correct values; it is just the overview tab.


    As stated - cosmetic only. Thanks and enjoy Christmas holidays.

    Hi, I have been using OMV 5 for a while now and it is working pretty stable for me.
    Only two minor issues that came to my attention recently:


    - the RAM usage indication on the dashboard does not reflect the real RAM usage of the system: I have ZFS installed and at least approx. 4 GB of RAM is in constant use, yet the RAM indicator shows less than 1% of 16 GB used


    - Since a recent openmediavault update the SWAP partition is shown in the filesystem information, which was never the case before. Unfortunately
    I can't say which update introduced it.


    Everything else working like a charm. Thanks and keep up the good work!

    One empty line in the expert settings before TEMPPROCNAMES="-" did the trick. Furthermore, I updated /lib/systemd/system/autoshutdown.service in line 12.
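    For reference, the unit-file update mentioned here can be sketched as a one-line sed (the PIDFile path is taken from the syslog warning quoted further down this thread; the exact line number may differ between plugin versions):

```shell
# Sketch, assuming the unit contains PIDFile=/var/run/autoshutdown.pid:
# move the PID file path from the legacy /var/run to /run.
sudo sed -i 's|PIDFile=/var/run/autoshutdown.pid|PIDFile=/run/autoshutdown.pid|' \
    /lib/systemd/system/autoshutdown.service
# reload systemd so it re-reads the edited unit file
sudo systemctl daemon-reload
```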


    The memory usage in the dashboard on OMV 5.0.10 constantly shows 0.6%, whereas the memory usage in the report (as well as in Cockpit)
    shows approx. 12G of 16G. The 12G should reflect reality on my ZFS system; this is also what OMV 4 showed.

    Hi, I made a pretty nice setup with OMV 5 and everything is working after 2 days of trial and error.
    The Proxmox kernel seems to be a must for ZFS (the backports kernel didn't recognize my SATA controller), and for Docker I had to remove the apparmor package
    (otherwise all containers kept restarting continuously).


    As stated, everything works as in OMV 4; only the autoshutdown plugin does not allow excluding process supervision, as was possible in version 4 with
    TEMPPROCNAMES="-".
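    As a sketch, the OMV 4 expert-settings entry being referred to looked roughly like this (exact config file location assumed, e.g. the autoshutdown configuration):

```
# autoshutdown expert setting: "-" disables process supervision entirely
TEMPPROCNAMES="-"
```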


    SYSLOG shows the following errors several times

    Code
    Oct  5 16:16:00 NAS systemd[1]: /lib/systemd/system/autoshutdown.service:11: PIDFile= references path below legacy directory /var/run/, updating /var/run/autoshutdown.pid → /run/autoshutdown.pid; please update the unit file accordingly.

    and


    Code
    Oct  5 16:17:45 NAS autoshutdown.sh[35429]: /usr/sbin/autoshutdown.sh: line 741: TRUE: command not found

    The autoshutdown plugin itself then runs in FAKE mode without this option being set in the menu beforehand.
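    The "TRUE: command not found" message is the classic shell pattern of executing a variable's value instead of comparing it as a string; a minimal illustration (not the actual plugin code, which I haven't inspected):

```shell
FLAG="TRUE"

# Buggy pattern: $FLAG is expanded and run as a command named "TRUE",
# which fails with "TRUE: command not found":
#   if $FLAG; then ...; fi

# Correct pattern: compare the value as a string instead.
if [ "$FLAG" = "TRUE" ]; then
    echo "flag is set"
fi
```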

    I can second savellm. Same for me: Docker is running and the containers are working fine, but in the OMV GUI Docker is completely gone.


    Installation of openmediavault-docker-gui 4.1.4 results in the above error messages.

    Since update to OMV 4.1.0-1 the contrast in the docker window is fine for me.



    Edit: Sorry, for me the contrast in the docker plugin is also still low. I had the zfs plugin in mind, and there the contrast is
    finally fine after 4.1.0-1.

    I also changed back to docker for plex and plexpy in arrakis, but before that I had plexpy working 100% together with openmediavault-plexmediaserver:


    Install the plex plugin as usual, enable plexpy with port 8181. Then:


    First:

    Code
    sudo adduser --system --no-create-home plexpy
    sudo chown plexpy:nogroup -R /opt/plexpy


    Second:



    Code
    systemctl enable plexpy
    systemctl start plexpy

    Do I understand it right? There is no backward compatibility, if you go the route to zfs 0.7.3, you can't get back to 0.6.x?


    Greetings Hoppel

    During the installation of zfs 0.7.3 you will be asked if you want to upgrade (enhance) the pool with new features. If you go this way, there is no way back
    to 0.6.x. But it's optional and you can easily decline.

    I recently made a fresh install of 3.0.91 and could install and use the plex plugin without issues. Only plexpy didn't work right from the start.
    That could be solved with



    Code
    sudo adduser --system --no-create-home plexpy
    sudo chown plexpy:nogroup -R /opt/plexpy
    sudo systemctl enable plexpy
    sudo systemctl start plexpy

    Since then plex has been running smoothly for approx. one week. With docker I had massive issues with DLNA, even with --host setup. Therefore it's no real option for the time being.