I would uninstall the docker-compose package (not the docker-compose-plugin package)
sudo apt-get purge docker-compose
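Once that's removed, and assuming the plugin relies on Compose V2 (the docker-compose-plugin package), you can confirm the V2 plugin is still in place with:
docker compose version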
Thank you. It's now working since I removed the docker-compose package.
Do you see any containers in the Stats tab? What is the full path of the shared folder you set on the Settings tab? You can find it in the Storage -> Shared Folders tab, in the Absolute Path column (you may need to make that column visible). Also, what is the output of:
dpkg -l | grep -E "openme|docker"
Yes, I see the running containers listed in the stats tab and can view the logs and inspect without issue.
root@storage:~# dpkg -l | grep -E "openme|docker"
ii docker-ce 5:24.0.2-1~debian.11~bullseye amd64 Docker: the open-source application container engine
ii docker-ce-cli 5:24.0.2-1~debian.11~bullseye amd64 Docker CLI: the open-source application container engine
ii docker-compose 1.25.0-1 all Punctual, lightweight development environments using Docker
ii docker-compose-plugin 2.18.1-1~debian.11~bullseye amd64 Docker Compose (V2) plugin for the Docker CLI.
rc omvextras-unionbackend 5.0.2 all union filesystems backend plugin for openmediavault
ii openmediavault 6.4.0-3 all openmediavault - The open network attached storage solution
ii openmediavault-compose 6.7.6 all OpenMediaVault compose plugin
ii openmediavault-kernel 6.4.8 all kernel package
ii openmediavault-keyring 1.0 all GnuPG archive keys of the OpenMediaVault archive
ii openmediavault-mergerfs 6.3.7 all mergerfs plugin for openmediavault.
ii openmediavault-omvextrasorg 6.3.1 all OMV-Extras.org Package Repositories for OpenMediaVault
ii openmediavault-sharerootfs 6.0.2-1 all openmediavault share root filesystem plugin
ii openmediavault-snapraid 6.1 all snapraid plugin for OpenMediaVault.
ii python3-docker 4.1.0-1.2 all Python 3 wrapper to access docker.io's control socket
ii python3-dockerpty 0.4.1-2 all Pseudo-tty handler for docker Python client (Python 3.x)
Thanks.
Nope, still shows nothing
Can you post the output of the following? (You will likely need to attach a text file.)
sudo omv-aptclean repos
sudo omv-salt deploy run compose
Please find text file attached.
This should not be possible; if the containers have been created with the plugin and are running, they should be displayed in the Containers tab.
Press Ctrl+Shift+R
I've tried clearing the cache and history (I run Safari) and the containers are still not listed in the plugin.
What do you mean not possible?
What is the containers tab doing to determine what is displayed or not?
Whether containers are configured via the plugin or Portainer, they're still running containers as far as Docker is concerned (docker ps), so how does the plugin differentiate between those created via the plugin and those that weren't?
Could something within my yaml config be causing them to be excluded?
The YAML files were created from the autocompose section of the plugin for the existing running containers; those containers were then stopped and removed from Portainer. I then re-created the containers using the YAML from within the compose plugin's Files tab. The containers are running fine, I just don't see any information displayed about them under the Containers tab.
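In case it's relevant: as I understand it, Compose V2 labels the containers it creates (com.docker.compose.project and friends), so presumably the plugin could be keying off those rather than plain docker ps output. Happy to compare the labels on my containers if that helps, e.g. (the container name is just a placeholder):
docker inspect <container-name> --format '{{json .Config.Labels}}'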
I am running openmediavault-compose 6.7.6
I have re-deployed all my containers in the compose plugin using the Compose -> Files -> Create option supplying yaml info.
All containers are running and are shown as Limited in the Portainer -> Stacks menu.
docker ps shows all containers running.
Compose -> Containers is blank and shows nothing about the running containers.
How do I get the running container info to display in the Compose -> Containers tab?
Thanks.
Did you guys see this thread?
Since dropping from kernel 5.6.0.0 back down to 5.5.0.0 I haven't had a single reboot and NFS is back to working as normal.
Since reverting to 5.5.0.0 my system has been up for 3 days and 15 hours despite continual button mashing and sleep/wake cycles of the Apple TV while at home on lockdown.
This is longer than any single uptime while using 5.6.0.0.
Think I'm going to stay on 5.5.0.0 for the foreseeable future.
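For anyone comparing notes, the quickest way I know to confirm which kernel is actually booted and which are still installed (nothing OMV-specific here):
uname -r
dpkg -l 'linux-image*' | grep ^ii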
Mine's been doing something similar recently. I had put it down to my youngest mashing the buttons on the Apple TV and somehow triggering an NFS issue that caused the server to fall over, but looking at my uptimes, prior to July 13th I had uptimes of 34 days and 19 days, and now it's rebooting multiple times a day:
root@storage:/var/log# last -x reboot
reboot system boot 5.6.0-0.bpo.2-am Mon Jul 20 19:04 still running
reboot system boot 5.6.0-0.bpo.2-am Sun Jul 19 23:51 still running
reboot system boot 5.6.0-0.bpo.2-am Sun Jul 19 13:05 still running
reboot system boot 5.6.0-0.bpo.2-am Sat Jul 18 23:30 still running
reboot system boot 5.6.0-0.bpo.2-am Sat Jul 18 14:55 still running
reboot system boot 5.6.0-0.bpo.2-am Sat Jul 18 13:02 - 14:54 (01:52)
reboot system boot 5.6.0-0.bpo.2-am Fri Jul 17 18:32 - 14:54 (20:22)
reboot system boot 5.6.0-0.bpo.2-am Thu Jul 16 19:36 - 14:54 (1+19:18)
reboot system boot 5.6.0-0.bpo.2-am Thu Jul 16 08:58 - 14:54 (2+05:56)
reboot system boot 5.6.0-0.bpo.2-am Wed Jul 15 19:40 - 14:54 (2+19:14)
reboot system boot 5.6.0-0.bpo.2-am Wed Jul 15 11:17 - 14:54 (3+03:37)
reboot system boot 5.6.0-0.bpo.2-am Wed Jul 15 10:48 - 11:16 (00:27)
reboot system boot 5.6.0-0.bpo.2-am Wed Jul 15 09:44 - 10:48 (01:03)
reboot system boot 5.6.0-0.bpo.2-am Wed Jul 15 08:40 - 10:48 (02:07)
reboot system boot 5.6.0-0.bpo.2-am Wed Jul 15 08:30 - 10:48 (02:17)
reboot system boot 5.6.0-0.bpo.2-am Wed Jul 15 00:57 - 10:48 (09:51)
reboot system boot 5.6.0-0.bpo.2-am Tue Jul 14 19:25 - 10:48 (15:22)
reboot system boot 5.6.0-0.bpo.2-am Tue Jul 14 09:34 - 10:48 (1+01:13)
reboot system boot 5.6.0-0.bpo.2-am Mon Jul 13 20:23 - 10:48 (1+14:24)
reboot system boot 5.6.0-0.bpo.2-am Mon Jul 13 09:13 - 10:48 (2+01:35)
reboot system boot 5.6.0-0.bpo.2-am Mon Jul 13 09:11 - 09:12 (00:01)
reboot system boot 5.5.0-0.bpo.2-am Tue Jun 9 08:55 - 09:12 (34+00:17)
reboot system boot 5.5.0-0.bpo.2-am Tue Jun 9 08:50 - 08:54 (00:04)
reboot system boot 5.5.0-0.bpo.2-am Tue Jun 9 08:43 - 08:50 (00:06)
reboot system boot 5.5.0-0.bpo.2-am Wed May 20 19:08 - 08:50 (19+13:41)
reboot system boot 4.19.0-9-amd64 Wed May 20 19:00 - 19:07 (00:07)
reboot system boot 4.19.0-9-amd64 Wed May 20 18:56 - 19:00 (00:04)
I did manage to capture a photo of the dump screen just before it rebooted; not sure if this will help.
I've now reverted to kernel 5.5.0 and will see how it goes over the next week.
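Next time it falls over I'll also try pulling the kernel messages from the previous boot; assuming persistent journalling is enabled, something like the following should show whatever made it to disk, though a hard panic often doesn't:
journalctl -b -1 -k | tail -n 50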
Thanks.
I previously used Carbon Copy Cloner to perform sparse bundle backups to OMV4 via the Netatalk protocol.
I've just upgraded to OMV5 and with the removal of Netatalk, have migrated all my shares over to SMB.
While file/folder based backups from Carbon Copy Cloner to SMB work fine, a sparse bundle based backup to the same share fails.
I've tried enabling Time Machine support as recommended elsewhere, but this doesn't seem to make a difference.
I've confirmed from the command line that I can read/write to the share, so access is not a problem.
Time Machine backups to another share are working fine, so this appears to be limited to Carbon Copy Cloner.
Just wondered if anyone else has had the same issue and found a fix?
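The only lead I've found so far is making sure Samba's Apple compatibility module (vfs_fruit) is active on that share. If anyone can confirm, the extra options usually quoted are along these lines (these are the commonly suggested values, not something I've verified yet):
vfs objects = catia fruit streams_xattr
fruit:metadata = stream
fruit:encoding = native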
Thanks.
For anyone else with the same problem, it looks like the host access section on this page may solve the issue.
https://blog.oddbit.com/post/2…-docker-macvlan-networks/
I'll give it a go tomorrow and see what happens.
Looks like this is a problem with the Docker host (OMV) accessing the running containers; I hadn't tested OMV against the other running containers, and it seems I can't access those from the OMV command line either.
Guess there's a setting somewhere to enable access from the host...
Maybe it's this, which I just found?
Communication with the Docker host over macvlan
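From what I can tell, the workaround boils down to giving the host its own macvlan sub-interface, because the kernel deliberately blocks traffic between the parent NIC and its macvlan children. A rough sketch, assuming the parent NIC is eth0 and using 192.168.1.250 for the shim and 192.168.1.240 for pihole purely as example addresses:
ip link add macvlan-shim link eth0 type macvlan mode bridge
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up
ip route add 192.168.1.240/32 dev macvlan-shim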
Just spun up two new instances of pihole (with empty/new configs) on different IPs in my network range and I get exactly the same behaviour.
Everything in my network can get to them both except for OMV.
OMV can even ping/route to an IP address one above and one below one of the new pihole addresses, but cannot get to pihole itself on the address in the middle.
I really have no idea what's going on here.
Set your router's DNS servers to point to your pihole. Then everything that goes through your router will route through pihole for DNS.
Thanks, but it already is.
Seems really strange that OMV can't get to pihole, almost like it's purposely blocked for some reason.
Because I use IP policy routing through my VPN, which I don’t think will work with containers running in bridged mode.
If I get time tomorrow, I might spin up another instance of pihole on another IP and see if I have the same issue from OMV.
All docker containers (including pihole) are already configured using macvlan networking with dedicated IP addresses for each container.
Is this what you mean?
pihole is not using bridged networking and neither is OMV.
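Roughly, the macvlan network was created along these lines (the interface name and example address here are illustrative rather than my exact values; env vars and volumes omitted):
docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 lan_macvlan
docker run -d --name pihole --network lan_macvlan --ip 192.168.1.240 pihole/pihole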
I've setup pihole in docker and it's working fine.
All my local clients are able to query DNS and access the internet etc.
Today I realised that none of my Docker containers were going through pihole, which turned out to be because Docker uses the host's DNS, and that was still pointing directly at the router.
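For anyone following along, that host DNS can come either from /etc/resolv.conf or, if it is set, from the dns option in /etc/docker/daemon.json; a minimal daemon.json, with 192.168.1.240 standing in for the pihole address and a restart of the docker service afterwards, would be:
{ "dns": ["192.168.1.240"] }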
I updated the DNS and rebooted, but then realised that OMV couldn't resolve anything and neither could the running containers. I've started to do some testing and have discovered that, weirdly, OMV can't route to pihole and pihole can't route to the Docker host. I've double-checked all addresses, netmasks and default routes, and everything is correct.
Everything else on my network can talk to pihole and vice versa, just not OMV.
OMV can also communicate with everything else on the network and vice versa, just not pihole.
This is a flat network, everything in the 192.168.1.0/24 range with a 192.168.1.1 gateway.
Any ideas on what's going on and why this isn't working?
Thanks for looking.
Please ignore; I've done some reading and investigation, and it looks like config.xml references the volume label:
<fsname>/dev/disk/by-label/storage</fsname>
<dir>/srv/dev-disk-by-label-storage</dir>
This is also seen in fstab.
I assume that once I've cloned/copied the existing data to the new volume, I can change the volume label of the current storage volume to something else, rename the new volume to the old label and reboot.
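If it helps anyone doing the same, the label swap itself should just be one command per filesystem, assuming ext4 and with /dev/sdb1 and /dev/sdc1 standing in for the old and new volumes:
e2label /dev/sdb1 storage-old
e2label /dev/sdc1 storage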