Posts by tincanfury

    OK, I've got this for my tunnel,


    and this for my client,

    I was able to load the settings into the WireGuard client on my laptop and connect; however, I was unable to SSH into my NAS or reach the OMV Dashboard from my browser. So I'm guessing something is not set up correctly, but I'm not sure what I need to change?
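
A common cause of this exact symptom (tunnel connects, but LAN services are unreachable) is the client's AllowedIPs not covering the LAN subnet the NAS sits on. A minimal client config sketch, assuming the NAS LAN is 192.168.1.0/24 and the tunnel uses 10.8.0.0/24 (both subnets, the endpoint hostname, and the port are placeholders; adjust to your network):

```ini
[Interface]
# Address assigned to this client inside the tunnel (example value)
Address = 10.8.0.2/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
Endpoint = your.ddns.example:51820
# Must include the LAN subnet the NAS lives on, not just the tunnel subnet,
# or SSH/Dashboard traffic will never be routed through the tunnel.
AllowedIPs = 10.8.0.0/24, 192.168.1.0/24
PersistentKeepalive = 25
```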


    thanks!

    This is now holding me up from applying WireGuard settings.

    Is there a reason one failure holds up all the other pending changes?


    I'm going to be traveling and am thinking of opening my OMV Dashboard to the internet (on a non-standard port) while I'm away from home, in case I need to manage my NAS. I was wondering if there are any tips for securing the Dashboard, other than a very strong password?


    Thanks!

    I'm running on an Asustor NAS with,

    Debian GNU/Linux, with Linux 6.2.16-20-bpo11-pve


    is there a package I need to install?

    Ah, ran into something.

    My Nextcloud container has,

    Code
        depends_on:
          - mariadb

    However, even with the mariadb container installed, I get an error from the Nextcloud container.

    Is this not supported?

    Should I have both containers in the same File, or is there something else I need to be doing?


    thanks!


    update: I created a single File with Nextcloud, mariadb, and swag, and deployed it that way. This puts them all in the same stack together; not sure if that is why it wasn't working before?
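
For reference, the single-File layout that worked can be sketched like this (image names are illustrative linuxserver.io examples, not necessarily the exact ones in use; volumes and environment settings are omitted). depends_on only resolves between services defined in the same compose file/stack, which would explain why the split files failed:

```yaml
services:
  mariadb:
    image: lscr.io/linuxserver/mariadb:latest
    restart: unless-stopped

  nextcloud:
    image: lscr.io/linuxserver/nextcloud:latest
    depends_on:
      - mariadb   # resolves because mariadb is in the same stack
    restart: unless-stopped

  swag:
    image: lscr.io/linuxserver/swag:latest
    restart: unless-stopped
```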

    The plugin would have to try to use portainer's compose file and environment files in order to manage the containers. And I'm not sure portainer would like that.


    I did write a script to *try* to import compose files from portainer. I don't use portainer, so it hasn't had a lot of testing. But the worst thing you would have to do is delete something that imported wrong.


    You can run docker compose commands from the command line against files in the plugin but you need to know the paths/names for the compose file and environment file(s). You cannot edit the files though. The OMV database is the source of truth. See this post for other info - RE: Old compose files after OMV restore
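
Running compose commands against a plugin-managed stack from the CLI might look like this; the paths below are hypothetical examples only (the plugin stores its files under the shared folder configured in its settings), and remember the OMV database is the source of truth, so don't edit the files themselves:

```shell
# Example paths - check the compose plugin's settings for the real locations.
COMPOSE_FILE=/srv/dev-disk-by-uuid-xxxx/compose/nextcloud/nextcloud.yml
ENV_FILE=/srv/dev-disk-by-uuid-xxxx/compose/nextcloud/nextcloud.env

# Read-only style operations are safe: status and logs.
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" ps
docker compose -f "$COMPOSE_FILE" --env-file "$ENV_FILE" logs -f nextcloud
```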

    Understood! Easy enough for me to manually recreate, I wanted to make sure I wasn't missing something. So far the OMV Dashboard has worked well and adding container files is easy, so I'm on board with doing the work through it.

    Thanks for all the work on this, it's a nice addition to the Dashboard.

    Why? Occasionally there are updates that require a restart. If you don't reboot, the system may respond strangely.

    it's just fun having a long uptime 😜


    Can you post the full error in a code box? Use the </> button to create a code box in the forum.

    I rebooted and the same configuration-changes message appeared.


    In recreating the portainer container with the new system: if I'm understanding the Files section of the new Dashboard docker Compose system, I could create individual yml files for the containers in my original docker-compose.yml file and manage my container updates there. I attempted to do this, but the system does not appear to "link" an existing container to one of these new yml files, in that the portainer entry I created won't show the container's status.


    I assume this means I would need to do the same for my existing containers as I did for portainer: remove the existing container and have the Dashboard system recreate it?
    Is there a way to import my original docker-compose.yml file to create the individual yml files? I can do it manually, but figured I'd check if this has been "automated".
    In doing this, I also assume that if I wanted to update a container I would no longer be able to do so from the command line?


    Thanks for the help/answers!

    Pending configuration changes

    You must apply these changes in order for them to take effect. The following modules will be updated:

    • compose
    • nfs
    • nginx
    • samba
    • sharedfolders
    • systemd


    When I select apply I'm provided the following error, but I'm not seeing anything obvious that needs to be fixed.


    500 - Internal Server Error

    Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; export LANGUAGE=; omv-salt deploy run --no-color nfs 2>&1' with exit code '1': debian:
    ----------
              ID: configure_default_nfs-common
        Function: file.managed
            Name: /etc/default/nfs-common
          Result: True
         Comment: File /etc/default/nfs-common is in the correct state
         Started: 11:15:49.552374
        Duration: 558.458 ms
         Changes:
    ----------
              ID: configure_default_nfs-kernel-server
        Function: file.managed
            Name: /etc/default/nfs-kernel-server
          Result: True
         Comment: File /etc/default/nfs-kernel-server is in the correct state
         Started: 11:15:50.111257
        Duration: 614.82 ms
         Changes:
    ----------
              ID: configure_nfsd_exports
        Function: file.managed
            Name: /etc/exports
          Result: True
         Comment: File /etc/exports is in the correct state
         Started: 11:15:50.726495
        Duration: 979.244 ms
         Changes:
    ----------
              ID: divert_nfsd_exports
        Function: omv_dpkg.divert_add
            Name: /etc/exports
          Result: True
         Comment: Leaving 'local diversion of /etc/exports to /etc/exports.distrib'
         Started: 11:15:51.707570
        Duration: 38.805 ms
         Changes:
    ----------
              ID: start_rpc_statd_service
        Function: service.running
            Name: rpc-statd
          Result: True
         Comment: The service rpc-statd is already running
         Started: 11:15:55.7...

    I think you should have only one driver. Either json-file or local.


    https://docs.docker.com/config/containers/logging/configure/
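
Picking a single driver in /etc/docker/daemon.json might look like this (driver choice and rotation values are example settings, not prescriptions); restart the docker service afterward, and note the defaults only apply to newly created containers:

```json
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```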


    Check what is filling /var/log and why. The logs should give you a hint about what is going wrong.

    Several sites made it sound like both json-file and local could be used together, but removing the local one got it started! This was also after "clearing out" my /var/log location.


    As for the size of the logs being large, I agree, and I'll have to start digging into that.

    However, I don't see why the size directive isn't used in logrotate to catch large log files and rotate them so the log drive doesn't fill up. I'm pretty sure my other/previous Linux systems did this and it seemed to work fine. Is there something about using logrotate this way that I'm not aware of that informed the decision for OMV?
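
For comparison, a size-triggered logrotate rule could look like this (the path and thresholds are illustrative; maxsize rotates early when the file outgrows the limit, while plain size rotates on size alone and ignores the schedule):

```
/var/log/syslog {
    daily
    maxsize 100M
    rotate 4
    compress
    missingok
    notifempty
}
```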


    Thanks for the quick response on docker, your help is much appreciated in getting it up and running again!