Checked that as well and noticed it had bullseye in it. Come to think of it, I think I added that repository to get ctop. Checked the package list provided by the repo and it was pretty minimal.
Thanks for the help!
I can't recall if I did an upgrade from OMV5 to OMV6 in this instance or not, since it's virtual. I think I just stood up another OMV VM using OMV6 and moved the storage vmdks to the new VM.
This is the only repo referencing buster. Everything else (debian repo related at least) points to bullseye.
Just didn't want to randomly go and change it to bullseye if it was configured to buster for a reason, but if it should be pointing to bullseye I'll change it.
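If it does need to move to bullseye, something like this should do it (assuming azlux actually publishes a bullseye suite, which is worth verifying first), or the file can simply be removed if ctop isn't needed anymore:

sed -i 's/buster/bullseye/' /etc/apt/sources.list.d/azlux.list
apt-get update
# or, if the repo isn't needed at all:
# rm /etc/apt/sources.list.d/azlux.list && apt-get update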
When trying to update today I'm getting an error:
Err:14 http://packages.azlux.fr/debian buster Release
404 Not Found [IP: 2a01:728:401:1c::100 80]
Reading package lists... Done
E: The repository 'http://packages.azlux.fr/debian buster Release' no longer has a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
It looks like this repo is referenced at /etc/apt/sources.list.d/azlux.list
contents:
deb http://packages.azlux.fr/debian/ buster main
Any reason this would be pointing to buster when 6.x uses bullseye?
Also is this repo important?
Yup, seeing it as well. Also posted here: "Receiving hourly email from cron about snapshot cleanup"
I tried enabling and disabling the automatic cleanup, but it's still sending the notifications.
Kinda had a feeling apparmor-utils would install apparmor as well.
I did notice that my headless Debian system, which obviously has apparmor running, didn't have any issues with the upgrade.
Either way, thanks for all the testing/work you do for OMV! Definitely appreciate it!
Looks like docker FINALLY released the release notes for 23.
Docker Engine 23.0 release notes
"Known issues
Some Debian users have reported issues with containers failing to start after upgrading to the 23.0 version. The error message indicates that the issue is due to a missing apparmor_parser
dependency:"
"The workaround to this issue is to install the apparmor-utils
package manually:
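The command itself didn't paste over from the release notes, but it presumably boils down to installing the package, roughly like this (restarting docker afterwards is my assumption, not part of the quoted notes):

apt-get update
apt-get install apparmor-utils
systemctl restart docker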
Not sure why some people are having issues with Portainer. Works fine for me with the apparmor grub change. I agree that rolling back to the older version of docker isn't the best fix as you'll eventually have to update, and since it's a change that came directly from Docker, I wouldn't expect it to magically work later without making some form of change to apparmor.
Considering Portainer is a relatively simple reinstall, I would try that over rolling back to an older Docker. If it still fails, try reinstalling the Portainer container from the CLI to see what could be causing the issue.
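For reference, a rough sketch of a CLI reinstall of Portainer CE (the container name, ports, and volume name here are the upstream defaults and may not match an omv-extras deployment, so adjust to your setup):

docker stop portainer
docker rm portainer
docker pull portainer/portainer-ce:latest
docker run -d --name portainer --restart=always \
  -p 8000:8000 -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest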
Yup, that worked for me as well. I'm guessing docker is detecting apparmor in the kernel and trying to load profiles thinking apparmor is installed?
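A couple of quick checks that should show whether that's what's happening (just a guess at where to look, not a confirmed diagnosis):

cat /sys/module/apparmor/parameters/enabled   # Y = the kernel has apparmor built in and enabled
docker info | grep -i apparmor                # shows whether docker lists apparmor under Security Options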
docker installed via omv-extras doesn't install it. If your system is trying to install it, it is a dependency of something else.
aaron@omv6dev:~$ dpkg -l | grep -E "docker|apparmor"
ii docker-ce 5:20.10.23~3-0~debian-bullseye amd64 Docker: the open-source application container engine
ii docker-ce-cli 5:20.10.23~3-0~debian-bullseye amd64 Docker CLI: the open-source application container engine
ii docker-compose-plugin 2.15.1-1~debian.11~bullseye amd64 Docker Compose (V2) plugin for the Docker CLI.
ii libapparmor1:amd64 2.13.6-10 amd64 changehat AppArmor library
ii python3-docker 4.1.0-1.2 all Python 3 wrapper to access docker.io's control socket
I would like to see the output of omv-extras installing docker and if it does install apparmor, the output of trying to uninstall apparmor.
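In the meantime, these should show what (if anything) is pulling apparmor in, and what apt would do on removal, without actually changing anything (just a suggestion, not something I've needed on my box):

apt-cache rdepends --installed apparmor   # installed packages that depend on apparmor
apt-get -s remove apparmor                # simulate the removal, nothing is actually removed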
root@nas:~# dpkg -l | grep -E "docker|apparmor"
ii docker-ce 5:23.0.0-1~debian.11~bullseye amd64 Docker: the open-source application container engine
ii docker-ce-cli 5:23.0.0-1~debian.11~bullseye amd64 Docker CLI: the open-source application container engine
ii docker-compose 1.25.0-1 all Punctual, lightweight development environments using Docker
ii docker-ctop 0.7.7 amd64 Top-like interface for container metrics
ii libapparmor1:amd64 2.13.6-10 amd64 changehat AppArmor library
ii python3-docker 4.1.0-1.2 all Python 3 wrapper to access docker.io's control socket
ii python3-dockerpty 0.4.1-2 all Pseudo-tty handler for docker Python client (Python 3.x)
I'm really not sure why; it was working, and after the update I'm having apparmor issues.
I appear to be having this exact same issue suddenly after an update of docker this morning.
Apparmor isn't installed, but docker is detecting it. The only apparmor package installed is the libapparmor package.
I would like to avoid installing apparmor.
I initially installed docker via omv-extras>docker.
I'm running kernel 5.15 on OMV 6
So I just got the update that enables this, and now I'm getting spammed that it can't open /etc/ssl/certs/openmediavault-*.crt... because I don't have any created or imported certs. Is there any way to disable the check so I don't get spammed daily about it not being able to open anything matching that pattern?
The update info says "Use the ‘OMV_SSL_CERTIFICATE_CHECK_EXPIRY_DAYS’ environment variable to customize the number of days to check for." But it says nothing about how to disable the check.
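My guess, and it is only a guess, is that it follows the usual OMV environment variable mechanism, i.e. setting it in /etc/default/openmediavault, though whether any value actually disables the check rather than just changing the window, I don't know:

OMV_SSL_CERTIFICATE_CHECK_EXPIRY_DAYS="30"

Depending on how the check reads the variable, a reboot or an omv-salt stage run prepare may be needed before it takes effect.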
I had a similar issue a long time ago. For me it was because I placed the docker installation in another location besides the default /var/lib/docker. I think it was because I placed it on a share that I created instead of using the actual full filesystem path. This caused lots of rights issues, and what was happening was that the DNS resolver (resolv.conf) for the containers only gave the root user access to read the file. This caused DNS to fail entirely within the container unless the container was running as root. Not sure if this is your issue or not.
If your containers are not running as the root user, open a shell (or SSH) to your OMV server. Go to the location where you have docker configured to store its data (default is /var/lib/docker), go into the containers folder, then into the ID of the container you're having issues with, do an ls -l, and make sure the files all have read rights. If not, your problem is ACL/rights related.
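Something along these lines, as a rough sketch (the container ID is a placeholder, and the path changes if docker's data root was moved):

cd /var/lib/docker/containers
ls -l */resolv.conf */hosts
chmod 644 <container-id>/resolv.conf <container-id>/hosts   # docker may recreate these on the next container restart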
I went ahead and created an issue. Didn't see one listed.
For me I have two jobs scheduled in the GUI. One enabled, one disabled.
Both jobs exist in openmediavault-userdefined, and both are executing as if they're enabled, even though one is disabled.
The GUI doesn't get the enabled or disabled status from openmediavault-userdefined, that's just where it stores the job. It gets the info for the GUI from config.xml.
Looking at config.xml, the enable flag for the disabled job is <enable>0</enable>... which says it should be disabled.
In config.xml, the enabled job has <enable>1</enable>.
So somewhere in OMV's scripts when it's updating the cron jobs, it appears to be ignoring whether or not the job is supposed to be enabled.
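For anyone wanting to double-check their own setup, this is roughly how to compare the two (assuming the cron jobs show up as <job> elements in config.xml):

grep -A 10 '<job>' /etc/openmediavault/config.xml | grep -E '<enable>|<command>'   # what OMV thinks is enabled
cat /etc/cron.d/openmediavault-userdefined                                          # what actually got written for cron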
Came here to say "Me too". I only noticed this recently because I added a cron job this week that I would normally keep disabled and only run manually. I originally thought that maybe enabling it and then disabling it caused some odd issue, so I deleted it entirely, then recreated it and left it disabled when I saved it. It's still running.
Running OMV 5.5.19-1
the output I'm getting for ls -al /etc/cron.d/openmediavault-userdefined
-rw-r--r-- 1 root root 558 Dec 25 07:22 /etc/cron.d/openmediavault-userdefined
Well... I'm now at a point where I really don't know. I've pointed docker to a new location on my main storage disk (that is NOT a shared folder) and let it create the folder itself. Tried pulling and creating a new instance of FreshRSS (all this while SSH'd in as root) just to test it out using docker-compose... it failed. Loads of permission denied errors when it was trying to chmod during the setup. Checked to see if dockerd is running as root, and it is. Decided to change it back to /var/lib/docker and try that as well. Same problem. So now I have no clue what the issue is or why I'm having it.
That or it could be an ACL issue. I really have no clue. I'm almost tempted to nuke my entire docker configuration and start from scratch hoping to fix it.
Ok, so I THINK the issue might have to do with umask, but I'm not sure how/where. I checked the shared folder where my docker data is stored (we'll call it /dockershare/docker/...): at /dockershare/docker/containers/xxxxxxxxxxxxxx... every single hosts and resolv.conf file was 640 with owner root:root instead of 644 and root:root.
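One thing I want to check next (purely a guess on my part) is whether the daemon's umask explains the 640, and if so whether a systemd override changes it:

grep -i umask /proc/$(pidof dockerd)/status   # the umask the daemon is actually running with
systemctl edit docker.service                 # in the drop-in add: [Service] then UMask=0022
systemctl restart docker

I'm not at all sure dockerd's umask is really what decides the mode on those files, so treat the override as an experiment rather than a fix.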
I really would love to get this fixed as it'll allow me to finally move some of my dockers off of the VM and on to OMV....as well as get rid of future headaches when I roll out future containers.
I'm not sure if this is an OMV issue, a Docker issue, a docker config issue, an image issue, or a local filesystem rights issue. It seems like some of my Docker containers that run their processes as a user other than root have problems with DNS. For example, my Nextcloud container has been unable to check for updates. The process runs as www-data. When I run "docker exec -it -u www-data nextcloud /bin/bash" and then run "curl http://www.google.com", it fails to resolve the URL. If I connect to the container with "docker exec -it nextcloud /bin/bash" and run "curl http://www.google.com", it returns data as expected.
I checked the rights of /etc/resolv.conf and it was set to rw only for root and no other permissions (600). After setting the permissions to 644, suddenly everything worked.
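If it helps anyone reproduce it, the check boils down to this (the container name is mine, swap in your own):

docker exec -it -u www-data nextcloud ls -l /etc/resolv.conf          # should be 644; mine was 600
docker exec -it -u www-data nextcloud curl -I http://www.google.com   # fails to resolve when the file is unreadable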
So my question is...is this issue caused by the container, by docker, by the docker config, or by the underlying FS permissions?
I have docker using a shared folder instead of the default /var/lib/docker location. When I created the Nextcloud instance, I pointed it to the location I wanted and it created the folder for me so the rights on the "config" folder should be correct. Although the config folder is mounted to /var/www/html within the container anyways.
I have docker running on an Ubuntu VM (completely separate from OMV), and I've never once run into this issue on it. Granted it's also using the default /var/lib/docker. When I've run into this issue on my OMV system, I would even test it on my Ubuntu system and wouldn't have the problem there.