Posts by dojrude

    Do you see any containers in the Stats tab? What is the full path of the shared folder you set on the Settings tab? You can find it in the Storage -> Shared Folders tab, in the Absolute Path column (you may need to make it visible). Also, what is the output of:


    dpkg -l | grep -E "openme|docker"

    Yes, I see the running containers listed in the Stats tab and can view the logs and inspect them without issue.


    root@storage:~# dpkg -l | grep -E "openme|docker"

    ii docker-ce 5:24.0.2-1~debian.11~bullseye amd64 Docker: the open-source application container engine

    ii docker-ce-cli 5:24.0.2-1~debian.11~bullseye amd64 Docker CLI: the open-source application container engine

    ii docker-compose 1.25.0-1 all Punctual, lightweight development environments using Docker

    ii docker-compose-plugin 2.18.1-1~debian.11~bullseye amd64 Docker Compose (V2) plugin for the Docker CLI.

    rc omvextras-unionbackend 5.0.2 all union filesystems backend plugin for openmediavault

    ii openmediavault 6.4.0-3 all openmediavault - The open network attached storage solution

    ii openmediavault-compose 6.7.6 all OpenMediaVault compose plugin

    ii openmediavault-kernel 6.4.8 all kernel package

    ii openmediavault-keyring 1.0 all GnuPG archive keys of the OpenMediaVault archive

    ii openmediavault-mergerfs 6.3.7 all mergerfs plugin for openmediavault.

    ii openmediavault-omvextrasorg 6.3.1 all OMV-Extras.org Package Repositories for OpenMediaVault

    ii openmediavault-sharerootfs 6.0.2-1 all openmediavault share root filesystem plugin

    ii openmediavault-snapraid 6.1 all snapraid plugin for OpenMediaVault.

    ii python3-docker 4.1.0-1.2 all Python 3 wrapper to access docker.io's control socket

    ii python3-dockerpty 0.4.1-2 all Pseudo-tty handler for docker Python client (Python 3.x)


    Thanks.

    This is not possible; if the containers have been created with the plugin and are running, they should be displayed in the Containers tab.

    Press Ctrl+Shift+R

    I've tried clearing the cache and history (I run Safari) and still the containers are not listed in the plugin.


    What do you mean not possible?


    What is the Containers tab doing to determine what is displayed or not?

    Whether containers are configured via the plugin or Portainer, they're still running containers as far as Docker is concerned (docker ps), so how does the plugin differentiate between those created via the plugin and those that weren't?
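
    If it helps, the labels on each container can be checked from the CLI. It is only my guess that the plugin keys off the labels docker compose adds to containers it creates; the container name below is a placeholder:


    # guess only: the plugin may filter on the labels docker compose attaches, e.g. com.docker.compose.project
    docker ps --format '{{.Names}}: {{.Label "com.docker.compose.project"}}'
    # dump all labels for a single container (name is a placeholder)
    docker inspect --format '{{json .Config.Labels}}' <container-name>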


    Could something within my yaml config be causing them to be excluded?


    The yaml scripts were created from the autocompose section of the plugin for the existing running containers; the containers were then stopped and removed from Portainer. I then re-created the containers using the yaml scripts from within the compose plugin's Files tab. The containers are running fine, I just don't see any information displayed about them under the Containers tab.

    I am running openmediavault-compose 6.7.6


    I have re-deployed all my containers in the compose plugin using the Compose -> Files -> Create option supplying yaml info.


    All containers are running and show as Limited in the Portainer -> Stacks menu.


    docker ps shows all containers running.


    Compose -> Containers is blank and shows nothing about the running containers.


    How do I get the running container info to display in the Compose -> Containers tab?


    Thanks.

    Since reverting to 5.5.0.0, my system has been up for 3 days and 15 hours despite continual button mashing and sleep/wake cycles of the Apple TV while at home on lockdown.

    This is longer than any single uptime while using 5.6.0.0.


    Think I'm going to stay on 5.5.0.0 for the foreseeable future.

    Mine's been doing something similar recently, but I had put it down to my youngest mashing the buttons on the Apple TV and somehow an NFS issue causing the server to fall over. Looking at my uptimes, though, prior to July 13th I had uptimes of 34 days and 19 days, and now it's rebooting multiple times a day.



    I did manage to capture a photo of the dump screen just before it rebooted; not sure if this will help:

    [photo of the kernel dump screen]


    I've now reverted to kernel 5.5.0 for a week and will see how it goes.


    Thanks.

    I previously used Carbon Copy Cloner to perform sparse bundle backups to OMV4 via Netatalk (AFP).


    I've just upgraded to OMV5 and with the removal of Netatalk, have migrated all my shares over to SMB.


    While File/Folder based backups from Carbon Copy cloner to SMB work fine, a sparse bundle based backup to the same share fails.


    I've tried enabling Time Machine support as recommended elsewhere, but this doesn't seem to make a difference.


    I've confirmed from the command line that I can read/write to the share, so access is not a problem.


    Time Machine backups to another share are working fine, so this appears to be limited to Carbon Copy Cloner.


    Just wondered if anyone else has had the same issue and found a fix?


    Thanks.

    Looks like this is a problem with the Docker host (OMV) accessing the running containers; I hadn't tested OMV against the other running containers before, and it seems I can't access those from the OMV command line either.


    I guess there's a setting somewhere to enable access from the host...


    Maybe it's this, which I just found?


    Communication with the Docker host over macvlan

    • When using macvlan, you cannot ping or communicate with the default namespace IP address. For example, if you create a container and try to ping the Docker host’s eth0, it will not work. That traffic is explicitly filtered by the kernel modules themselves to offer additional provider isolation and security.
    • A macvlan subinterface can be added to the Docker host, to allow traffic between the Docker host and containers. The IP address needs to be set on this subinterface and removed from the parent address.
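
    For reference, one common variant of that subinterface workaround on the OMV host looks like this - the parent interface (eth0), the shim address (192.168.1.250) and the pihole address (192.168.1.200) are examples rather than my actual values, and the commands don't persist across reboots:


    # create a macvlan shim interface on the host, bound to the same parent NIC the containers use
    ip link add macvlan-shim link eth0 type macvlan mode bridge
    # give it a spare address from the LAN range and bring it up
    ip addr add 192.168.1.250/32 dev macvlan-shim
    ip link set macvlan-shim up
    # send traffic for the container address via the shim instead of the parent interface
    ip route add 192.168.1.200/32 dev macvlan-shim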

    Just spun up 2 new instances of pihole (with empty/new configs) on different IPs in my network range and get exactly the same behaviour.
    Everything in my network can get to them both except for OMV.


    OMV can even ping/route to an IP address one above and one below one of the new pihole addresses, but cannot get to pihole itself on the address in the middle.
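
    Checks along these lines show the gap (the addresses are examples from my range, with pihole in the middle):


    ping -c 2 192.168.1.199        # neighbouring address - replies
    ping -c 2 192.168.1.200        # pihole container - no reply, but only from OMV
    ping -c 2 192.168.1.201        # neighbouring address - replies
    ip neigh show 192.168.1.200    # check whether ARP for the pihole address even resolves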


    I really have no idea what's going on here.

    Set your router's DNS servers to point to your pihole. Then everything that goes through your router will route through pihole for DNS.



    Thanks, but it already is.


    Seems really strange that OMV can't get to pihole, almost like it's purposely blocked for some reason.

    All Docker containers (including pihole) are already configured using macvlan networking with dedicated IP addresses for each container.


    Is this what you mean?


    pihole is not using bridged networking and neither is OMV.
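
    For clarity, the containers sit on a user-defined macvlan network created along these lines (the parent interface, subnet and addresses are examples rather than my exact values, and pihole's usual env/volume options are omitted):


    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      -o parent=eth0 \
      macvlan_net

    # each container is then attached with its own dedicated address, e.g.
    docker run -d --name pihole --network macvlan_net --ip 192.168.1.200 pihole/pihole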

    I've set up pihole in Docker and it's working fine.
    All my local clients are able to query DNS and access the internet etc.
    Today I realised that none of my Docker instances were going through pihole, and then realised it's because Docker uses the host's DNS, which was still pointing directly at the router.


    I updated the DNS and rebooted, but then realised that OMV couldn't resolve anything and neither could the running containers, so I've started to do some testing and have discovered that, weirdly, OMV can't route to pihole and pihole can't route to the Docker host. I've double-checked all addresses, netmasks and default routes, and everything is correct.


    Everything else on my network can talk to pihole and vice versa, just not OMV.
    OMV can also communicate with everything else on the network and vice versa, just not pihole.


    This is a flat network, everything in the 192.168.1.0/24 range with a 192.168.1.1 gateway.
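
    The checks from the OMV side were along these lines (pihole's address shown as 192.168.1.200 as an example):


    ip addr show                   # confirm OMV's own address and netmask
    ip route                       # confirm the 192.168.1.0/24 route and the 192.168.1.1 default gateway
    ip route get 192.168.1.200     # confirm which interface OMV would use to reach pihole
    ping -c 2 192.168.1.200        # fails from OMV, works from everything else on the network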


    Any ideas on what's going on and why this isn't working?


    Thanks for looking.

    I spun up a test of 5.0.5 to see what's changed and noticed that Netatalk doesn't appear to be included any more - it doesn't show up in omv-extras either.
    Are there any plans to include it at a later date, or am I destined for the CLI?


    Thanks.

    Please ignore; I've done some reading and investigation, and it looks like config.xml references the volume label:


    <fsname>/dev/disk/by-label/storage</fsname>
    <dir>/srv/dev-disk-by-label-storage</dir>


    This is also seen in fstab.


    I assume that once I've cloned/copied the existing data to the new volume, I can change the volume label of the current storage volume to something else, rename the new volume to the old label and reboot.
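
    A rough sketch of that relabel step, assuming ext4 filesystems and example device names (/dev/sdb1 = current volume, /dev/sdc1 = new volume):


    # move the current volume's label out of the way
    e2label /dev/sdb1 storage-old
    # give the new volume the label that config.xml and fstab expect
    e2label /dev/sdc1 storage
    # reboot so the by-label device links and mounts pick up the change
    reboot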