The question was: can I assume they can't be stopped? The answer was: no, I can't.
Meaning my assumption is incorrect.
Then how can they be stopped?
Can I assume that no replies means they cannot be stopped?
Hi all,
I'm getting these upgrade emails:
apticron report [Tue, 26 Mar 2024 10:24:06 +1100]
========================================================================
apticron has detected that some packages need upgrading on:
omv.localdomain
[ 172.17.0.1 172.19.0.1 172.20.0.1 172.21.0.1 172.30.0.1 192.168.10.2 ]
[ 192.168.32.1 192.168.96.1 ]
The following packages are currently pending an upgrade:
openmediavault-compose 7.1.2
========================================================================
How can I stop these emails?
TIA
BTW - I don't use OMV to manage my containers. It is all done via Portainer.
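For reference, a hedged sketch of two ways these mails usually go away; both assume apticron is only there for the notifications and nothing else on the system depends on it:
# Option 1 (assumption: apticron is not needed elsewhere): remove the
# package and the report mails stop with it.
sudo apt-get remove apticron
# Option 2: simply apply the pending upgrade it is reporting; the mail
# stops until the next package becomes pending.
sudo apt-get update && sudo apt-get install openmediavault-compose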
Seems the healthcheck is failing:
root@omv:~# docker exec 36a4c795c771 wget --no-verbose --tries=1 --spider http://localhost
Connecting to localhost ([::1]:80)
wget: can't connect to remote host: Connection refused
However this works:
root@omv:~# docker exec 36a4c795c771 wget --no-verbose --tries=1 --spider http://127.0.0.1
Connecting to 127.0.0.1 (127.0.0.1:80)
Connecting to 127.0.0.1 (127.0.0.1:80)
remote file exists
root@omv:~#
This has only been failing since upgrading to OMV 7. Why?
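For what it's worth, the refused connection to [::1]:80 suggests that inside the container localhost now resolves to the IPv6 loopback first, while the web server only listens on IPv4. A quick sketch of how to check this (assuming the usual busybox tools are present in the image):
# See how "localhost" is mapped inside the container; if ::1 is listed,
# busybox wget will try IPv6 first and be refused when nothing listens there.
docker exec 36a4c795c771 cat /etc/hosts
# Show listening sockets, if netstat exists in the image (it may not,
# in which case this line is only illustrative).
docker exec 36a4c795c771 netstat -tln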
Thanks for your reply.
Note: You posted the compose file without indents; I trust you have it written correctly on your system with the correct indents.
Yes, it's a Portainer Stack so it would not work without the correct syntax. Just copied and pasted from Portainer.
I will check the communication between the containers.
EDIT: Fixed the indentation.
At some point during the update did you change the port used by the OMV GUI to the default 80?
OMV GUI has always used port 80 and still does. Here is the container stack:
services:
  ### Wallabag ###
  wallabag:
    image: wallabag/wallabag:latest
    environment:
      - MYSQL_ROOT_PASSWORD=wallaroot
      - SYMFONY__ENV__DATABASE_DRIVER=pdo_mysql
      - SYMFONY__ENV__DATABASE_HOST=db
      - SYMFONY__ENV__DATABASE_PORT=3306
      - SYMFONY__ENV__DATABASE_NAME=wallabag
      - SYMFONY__ENV__DATABASE_USER=wallabag
      - SYMFONY__ENV__DATABASE_PASSWORD=wallapass
      - SYMFONY__ENV__DATABASE_CHARSET=utf8mb4
      - SYMFONY__ENV__DATABASE_TABLE_PREFIX="wallabag_"
      - SYMFONY__ENV__MAILER_HOST=127.0.0.1
      - SYMFONY__ENV__MAILER_USER=~
      - SYMFONY__ENV__MAILER_PASSWORD=~
      - SYMFONY__ENV__FROM_EMAIL=wallabag@thebriars.net.au
      - SYMFONY__ENV__DOMAIN_NAME=https://wallabag.somedomain.com.au
      - SYMFONY__ENV__SERVER_NAME="My wallabag instance"
      - PUID=1000
      - PGID=100
      - TZ=Australia/Sydney
    ports:
      - '6080:80'
    volumes:
      - /share/appdata/wallabag/images:/var/www/wallabag/web/assets/images
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost"]
      interval: 1m
      timeout: 3s
    hostname: wallabag.localdomain
    labels:
      - "diun.enable=true"
    networks:
      - wallabag
    restart: unless-stopped
    depends_on:
      - db
      - redis
  ### Mariadb ###
  db:
    image: mariadb:latest
    volumes:
      - /share/appdata/wallabag/data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=wallaroot
      - PUID=1000
      - PGID=100
      - TZ=Australia/Sydney
    healthcheck:
      test: ["CMD", "mariadb-admin", "ping", "-h", "localhost", "--password=<wallaroot>"]
      interval: 20s
      timeout: 3s
    networks:
      - wallabag
    restart: unless-stopped
  ### Redis ###
  redis:
    image: redis:alpine
    environment:
      - PUID=1000
      - PGID=100
      - TZ=Australia/Sydney
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 20s
      timeout: 3s
    networks:
      - wallabag
    restart: unless-stopped
networks:
  wallabag:
The container wallabag-wallabag is giving the unhealthy status.
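Given that 127.0.0.1 works where localhost does not, a minimal sketch of a workaround is to point the healthcheck at the IPv4 loopback explicitly:
healthcheck:
  # Use the IPv4 loopback directly so the check no longer depends on how
  # "localhost" resolves inside the container.
  test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://127.0.0.1"]
  interval: 1m
  timeout: 3s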
Hi all,
Just updating OMV 6 -> 7 and all seems to be well, except for one container staying in an unhealthy state. This seems to be the issue:
Connecting to localhost ([::1]:80)
wget: can't connect to remote host: Connection refused
Would something have changed in OMV 7 to cause this?
TIA
not ok
No it is not.
I manage all my docker containers via Portainer and not via OMV.
What is your advice?
TIA
Hi all,
Going to upgrade from 6.9.14-1 to stable version 7, so I ran the command sudo omv-salt stage run deploy as per the instructions. This is the output:
Is it all good to proceed with the upgrade, given that there are errors at the bottom?
TIA
The command seems to be working now:
root@omv:~# update-smart-drivedb -v
Download branches/RELEASE_7_2_DRIVEDB/drivedb.h with curl
curl -f --max-redirs 0 -H Accept-Encoding: identity -o /var/lib/smartmontools/drivedb/drivedb.h.new https://svn.code.sf.net/p/smartmontools/code/branches/RELEASE_7_2_DRIVEDB/smartmontools/drivedb.h
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 237k 100 237k 0 0 168k 0 0:00:01 0:00:01 --:--:-- 168k
Download branches/RELEASE_7_2_DRIVEDB/drivedb.h.raw.asc with curl
curl -f --max-redirs 0 -H Accept-Encoding: identity -o /var/lib/smartmontools/drivedb/drivedb.h.new.raw.asc https://svn.code.sf.net/p/smartmontools/code/branches/RELEASE_7_2_DRIVEDB/smartmontools/drivedb.h.raw.asc
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 833 100 833 0 0 1307 0 --:--:-- --:--:-- --:--:-- 1307
gpg: keybox '/var/lib/smartmontools/drivedb/.gnupg.1449209.tmp/pubring.kbx' created
gpg: key EA74AB25721042C5: 6 signatures not checked due to missing keys
gpg: /var/lib/smartmontools/drivedb/.gnupg.1449209.tmp/trustdb.gpg: trustdb created
gpg: key EA74AB25721042C5: public key "Smartmontools Signing Key (through 2025) <smartmontools-database@listi.jpberlin.de>" imported
gpg: Total number processed: 1
gpg: imported: 1
gpg: no ultimately trusted keys found
gpg: Signature made Sun 20 Aug 2023 01:21:53 AEST
gpg: using RSA key DEC7AE47968B7C875947F2A9EA74AB25721042C5
gpg: Good signature from "Smartmontools Signing Key (through 2025) <smartmontools-database@listi.jpberlin.de>" [unknown]
gpg: aka "Smartmontools Signing Key (through 2020) <smartmontools-database@listi.jpberlin.de>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: DEC7 AE47 968B 7C87 5947 F2A9 EA74 AB25 7210 42C5
/usr/sbin/smartctl: syntax OK
/var/lib/smartmontools/drivedb/drivedb.h is already up to date
root@omv:~#
root@omv:~#
root@omv:~# update-smart-drivedb
/var/lib/smartmontools/drivedb/drivedb.h is already up to date
root@omv:~#
Hi all,
Received an email this morning with this in it:
/etc/cron.weekly/openmediavault-update-smart-drivedb:
/usr/sbin/update-smart-drivedb: RELEASE_7_2_DRIVEDB/drivedb.h: download failed (curl: exit 28)
run-parts: /etc/cron.weekly/openmediavault-update-smart-drivedb exited with return code 1
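For reference, curl exit code 28 means the transfer timed out, so this looks like a transient network problem at the moment the weekly cron job ran. A sketch of how to re-run the same job by hand and check its status:
# Run the weekly job manually; a clean exit means the download now works
# and the earlier failure was a one-off timeout.
/etc/cron.weekly/openmediavault-update-smart-drivedb
echo "exit code: $?"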
Here's the result of running the command via CLI:
I'd prefer mail only in case of failure. Otherwise there are too many mails, with the risk of them not being read (carefully).
This is the way....
Hi all,
I don't have the Anacron Plug-In installed. OMV Version: 6.9.6-1 (Shaitan)
I received this email this morning, here is the first part:
/etc/cron.weekly/openmediavault-update-smart-drivedb:
Download branches/RELEASE_7_2_DRIVEDB/drivedb.h with curl
curl -f --max-redirs 0 -H Accept-Encoding: identity -o /var/lib/smartmontools/drivedb/drivedb.h.new
As I've never received this email before, I'm wondering why it has happened now.
TIA
No idea. I've never had a system that does this, so I can't investigate further.
Any ideas how to approach the problem and track down the cause?
I've just found that adding mount -a as a scheduled task to run at reboot has fixed everything. No more monit emails, and docker now starts after a reboot.
Maybe that's a hint to the cause of the issue?
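For anyone wanting to replicate this outside the OMV GUI, a minimal sketch of the equivalent root crontab entry (the path to mount may differ on your system):
# Run once at every boot; remounts anything from /etc/fstab that
# was not mounted in time.
@reboot /usr/bin/mount -a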
Just rebooted; here is the output of journalctl -u docker. I've only included today's entries.
Aug 07 05:54:02 omv dockerd[6558]: time="2023-08-07T05:54:02.030524656+10:00" level=info msg="ignoring event" container=44e5d155889f1b6eb7e2f0381350f828834c1467ffddd123970d23a9048408bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 05:54:02 omv dockerd[6558]: time="2023-08-07T05:54:02.051785164+10:00" level=warning msg="ShouldRestart failed, container will not be restarted" container=44e5d155889f1b6eb7e2f0381350f828834c1467ffddd123970d23a9048408bd daemonShuttingDown=true error="restart canceled" execDuration=16h14m5.860560187s exitStatus="{0 2023-08-06 19:54:01.999380789 +0000 UTC}" hasBeenManuallyStopped=false restartCount=0
Aug 07 05:54:03 omv dockerd[6558]: time="2023-08-07T05:54:03.517905920+10:00" level=info msg="ignoring event" container=4bae39387b0f89aaeca74a4879e5396ced826fb2b090ec14c3f693d2d6e74ed1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 05:54:03 omv dockerd[6558]: time="2023-08-07T05:54:03.527556967+10:00" level=warning msg="ShouldRestart failed, container will not be restarted" container=4bae39387b0f89aaeca74a4879e5396ced826fb2b090ec14c3f693d2d6e74ed1 daemonShuttingDown=true error="restart canceled" execDuration=16h14m7.29550975s exitStatus="{0 2023-08-06 19:54:03.506971321 +0000 UTC}" hasBeenManuallyStopped=false restartCount=0
Aug 07 05:54:05 omv dockerd[6558]: time="2023-08-07T05:54:05.346091875+10:00" level=info msg="ignoring event" container=75d66e5957ca91b074105d3ce979839372c10468770894739725f18479f76318 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 05:54:05 omv dockerd[6558]: time="2023-08-07T05:54:05.373731615+10:00" level=warning msg="ShouldRestart failed, container will not be restarted" container=75d66e5957ca91b074105d3ce979839372c10468770894739725f18479f76318 daemonShuttingDown=true error="restart canceled" execDuration=16h14m6.965839253s exitStatus="{0 2023-08-06 19:54:05.314916058 +0000 UTC}" hasBeenManuallyStopped=false restartCount=0
Aug 07 05:54:07 omv dockerd[6558]: time="2023-08-07T05:54:07.277990964+10:00" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cdddc596c520a6ed4dd88537e480a9d2356e6c9411e2c0eedaf16846183327de
Aug 07 05:54:07 omv dockerd[6558]: time="2023-08-07T05:54:07.310260002+10:00" level=info msg="ignoring event" container=cdddc596c520a6ed4dd88537e480a9d2356e6c9411e2c0eedaf16846183327de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 05:54:07 omv dockerd[6558]: time="2023-08-07T05:54:07.319685912+10:00" level=warning msg="ShouldRestart failed, container will not be restarted" container=cdddc596c520a6ed4dd88537e480a9d2356e6c9411e2c0eedaf16846183327de daemonShuttingDown=true error="restart canceled" execDuration=16h14m9.154897983s exitStatus="{137 2023-08-06 19:54:07.299591583 +0000 UTC}" hasBeenManuallyStopped=false restartCount=0
Aug 07 05:54:07 omv dockerd[6558]: time="2023-08-07T05:54:07.329111093+10:00" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=cd656699d5c8e71c775db14d37411db296303f33b9c0f0ba25981af81c9b1687
Aug 07 05:54:07 omv dockerd[6558]: time="2023-08-07T05:54:07.411485274+10:00" level=info msg="ignoring event" container=cd656699d5c8e71c775db14d37411db296303f33b9c0f0ba25981af81c9b1687 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 07 05:54:07 omv dockerd[6558]: time="2023-08-07T05:54:07.421805314+10:00" level=warning msg="ShouldRestart failed, container will not be restarted" container=cd656699d5c8e71c775db14d37411db296303f33b9c0f0ba25981af81c9b1687 daemonShuttingDown=true error="restart canceled" execDuration=16h14m9.588065727s exitStatus="{137 2023-08-06 19:54:07.400608616 +0000 UTC}" hasBeenManuallyStopped=false restartCount=0
Aug 07 05:54:07 omv dockerd[6558]: time="2023-08-07T05:54:07.820353802+10:00" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
Aug 07 05:54:07 omv dockerd[6558]: time="2023-08-07T05:54:07.821072319+10:00" level=info msg="Daemon shutdown complete"
Aug 07 05:54:07 omv dockerd[6558]: time="2023-08-07T05:54:07.821243822+10:00" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Aug 07 05:54:07 omv systemd[1]: docker.service: Succeeded.
Aug 07 05:54:07 omv systemd[1]: Stopped Docker Application Container Engine.
Aug 07 05:54:07 omv systemd[1]: docker.service: Consumed 52min 16.637s CPU time.
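As an aside, a couple of journalctl filters that help narrow this down, assuming the journal is persisted across reboots:
# Docker messages from the current boot only
journalctl -b -u docker
# Docker messages from the previous boot, to see how the daemon went down
journalctl -b -1 -u docker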
root@omv:~# omv-salt deploy run compose
debian:
----------
ID: /etc/systemd/system/docker.service.d/waitAllMounts.conf
Function: file.managed
Result: True
Comment: File /etc/systemd/system/docker.service.d/waitAllMounts.conf updated
Started: 06:08:20.072674
Duration: 41.912 ms
Changes:
----------
diff:
---
+++
@@ -1,5 +1,2 @@
-
[Unit]
-
-After=local-fs.target srv-dev\x2ddisk\x2dby\x2duuid\x2d1cf75e95\x2d6873\x2d4d8d\x2db5d3\x2d3d760953dc41.mount srv-dev\x2ddisk\x2dby\x2duuid\x2d2dc66f1b\x2dee1e\x2d4473\x2db188\x2d3ac4e6ac9ab6.mount
-
+After=local-fs.target srv-dev\x2ddisk\x2dby\x2duuid\x2d1cf75e95\x2d6873\x2d4d8d\x2db5d3\x2d3d760953dc41.mount srv-dev\x2ddisk\x2dby\x2duuid\x2d2dc66f1b\x2dee1e\x2d4473\x2db188\x2d3ac4e6ac9ab6.mount
----------
ID: systemd_daemon_reload_docker
Function: cmd.run
Name: systemctl daemon-reload
Result: True
Comment: Command "systemctl daemon-reload" run
Started: 06:08:20.115230
Duration: 275.617 ms
Changes:
----------
pid:
11242
retcode:
0
stderr:
stdout:
----------
ID: configure_etc_docker_dir
Function: file.directory
Name: /etc/docker
Result: True
Comment: The directory /etc/docker is in the correct state
Started: 06:08:20.391245
Duration: 1.565 ms
Changes:
----------
ID: /etc/docker/daemon.json
Function: file.serialize
Result: True
Comment: File /etc/docker/daemon.json is in the correct state
Started: 06:08:20.392942
Duration: 49.492 ms
Changes:
----------
ID: docker
Function: service.running
Result: True
Comment: The service docker is already running
Started: 06:08:21.653409
Duration: 43.492 ms
Changes:
----------
ID: configure_compose_scheduled_jobs
Function: file.managed
Name: /etc/cron.d/omv-compose-backup
Result: True
Comment: File /etc/cron.d/omv-compose-backup updated
Started: 06:08:21.697344
Duration: 122.145 ms
Changes:
----------
diff:
New file
mode:
0644
----------
ID: docker_install_packages
Function: pkg.installed
Result: True
Comment: The following packages were installed/updated: docker.io
Started: 06:08:21.822372
Duration: 28390.519 ms
Changes:
----------
containerd:
----------
new:
1.4.13~ds1-1~deb11u4
old:
containerd.io:
----------
new:
old:
1.6.21-1
docker-ce:
----------
new:
old:
5:24.0.2-1~debian.11~bullseye
docker-ce-cli:
----------
new:
old:
5:24.0.2-1~debian.11~bullseye
docker.io:
----------
new:
20.10.5+dfsg1-1+deb11u2
old:
runc:
----------
new:
1.0.0~rc93+ds1-5+deb11u2
old:
tini:
----------
new:
0.19.0-1
old:
----------
ID: docker_compose_install_packages
Function: pkg.installed
Result: True
Comment: The following packages were installed/updated: docker-compose
Started: 06:08:50.221884
Duration: 7593.004 ms
Changes:
----------
docker-compose:
----------
new:
1.25.0-1
old:
python3-attr:
----------
new:
20.3.0-1
old:
python3-dockerpty:
----------
new:
0.4.1-2
old:
python3-docopt:
----------
new:
0.6.2-3
old:
python3-importlib-metadata:
----------
new:
1.6.0-2
old:
python3-jsonschema:
----------
new:
3.2.0-3
old:
python3-more-itertools:
----------
new:
4.2.0-3
old:
python3-pyrsistent:
----------
new:
0.15.5-1+b3
old:
python3-setuptools:
----------
new:
52.0.0-4
old:
python3-texttable:
----------
new:
1.6.3-2
old:
python3-zipp:
----------
new:
1.0.0-3
old:
Summary for debian
------------
Succeeded: 8 (changed=5)
Failed: 0
------------
Total states run: 8
Total run time: 36.518 s
root@omv:~#
root@omv:~# cat /etc/systemd/system/docker.service.d/waitAllMounts.conf
[Unit]
After=local-fs.target srv-dev\x2ddisk\x2dby\x2duuid\x2d1cf75e95\x2d6873\x2d4d8d\x2db5d3\x2d3d760953dc41.mount srv-dev\x2ddisk\x2dby\x2duuid\x2d2dc66f1b\x2dee1e\x2d4473\x2db188\x2d3ac4e6ac9ab6.mount
root@omv:~# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─waitAllMounts.conf
Active: failed (Result: exit-code) since Mon 2023-08-07 06:08:48 AEST; 3min 10s ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Process: 12507 ExecStart=/usr/sbin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_OPTS (code=exited, status=1/FAILURE)
Main PID: 12507 (code=exited, status=1/FAILURE)
CPU: 79ms
root@omv:~#
For some reason the main storage drive, which also contains the Docker data, is taking a while to get mounted, and I think that's the issue: Docker is not waiting for the drive to be mounted before trying to start. My two cents.
Hope all the above helps.
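If ordering alone is not enough, an alternative sketch is to make docker.service hard-depend on the mounts via RequiresMountsFor (paths decoded from the waitAllMounts.conf above; adjust the UUIDs to your system, and note this drop-in is generated by omv-salt, so manual edits may be overwritten on the next deploy):
# /etc/systemd/system/docker.service.d/waitAllMounts.conf (sketch)
[Unit]
RequiresMountsFor=/srv/dev-disk-by-uuid-1cf75e95-6873-4d8d-b5d3-3d760953dc41
RequiresMountsFor=/srv/dev-disk-by-uuid-2dc66f1b-ee1e-4473-b188-3ac4e6ac9ab6
Then run systemctl daemon-reload and restart docker for it to take effect.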
The name of the environment variable has changed.
Thanks very much, I will give that a try tomorrow.