So my original interface is a bonded link, and oddly that page lets me clear the dns field and save, so I have a way to remove the dup for now. But it seems odd that the simple interface page (which has the mac address field above the dns field) won't let me save the dns list blank.
Posts by dsm1212
-
-
On OMV7 I had an unused nic on my nas for a long time. I wanted to configure it, and it was saved with the domain name server blank. The dns server was already in my resolv.conf, but to be consistent I put in my router ip for the dns server on the new interface and saved. Since my other nic had the same dns server, I ended up with two copies of that value in my resolv.conf, and it pushed another entry down to fourth place (which triggers the warning that only 3 nameservers will be used). So I went back into the ui to remove the dns server on this port, but the UI won't let me save it blank. It says dns is a required field. On most systems all nics will have the same dns config, so I think we should do one of: 1) don't write out dups to resolv.conf, 2) allow saving the interface with dns blank, 3) move the dns config out of the interface config, or 4) show the same dns config values on every interface page. Maybe this is complicated by the fact that I have ipv4 configured static and ipv6 dynamic.
-
Do you have pve kernel and zfs?
I do not have these in use.
-
I had a flawless upgrade. I had to install the md plugin, but the installer told me that. Zero issues so far. I was a little worried about having pihole running under docker, but the installer did everything it needed, I guess, without shutting that down. After reboot everything just worked.
Nice work to votdev and whoever else contributed!
-
You can always check the changelog to see if there are updates, but remember omv7 is in progress now so most likely things will slow down on omv6.
-
Well, those are normally fairly frequent if you look at the daemon log. You shouldn't have GB of them though. Maybe grep them out to another file so you can tell how much of the volume they are really responsible for.
If your syslog is the standard format then something like this will give you a count of lines by service:
$ awk '{print $5}' syslog | sed 's/[0-9]//g' | sort | uniq -c
294 CRON[]:
1 MR_MONITOR[]:
31 anacron[]:
3 apt-helper[]:
112 containerd[]:
22 cron-apt:
34 dockerd[]:
3 fstrim[]:
85 kernel:
21 omv-sfsnapadm:
5 openmediavault-check_btrfs_errors:
1 openmediavault-check_ssl_cert_expiry:
1 openmediavault-pending_config_changes:
2 rsync-a-feec-c--eae:
2 rsync-bbd-db--b-aecad:
2 rsync-bcca-ca-f-e-ccafaeb:
2 rsync-daca-edd-b-ad-cdca:
2 rsync-efbc-ed-eb-baa-bf:
2 rsync-fcecb-c-ed-b-bfdfada:
1 rsyslogd:
28 smartd[]:
3 systemd-networkd-wait-online[]:
28 systemd-networkd[]:
24 systemd-udevd[]:
272 systemd[]:
1 upsd[]:
-
On the next rotate .1 gets compressed to .2.gz. But this is the default behavior on all of our systems so I wouldn't dwell on it.
To fix your problem you need to look through the latest log and address whatever is spamming it. Maybe debug level logging is on for some service you have installed. Or there is some pernicious error getting logged.
Because of the other issue I mentioned, your journald log files are probably huge too. I would sort out the issue, and then you can truncate the journal back to a time after it is fixed.
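Trimming it afterwards is just a one-liner (pick whatever retention window makes sense for you):
journalctl --disk-usage          # see how big the journal is now
journalctl --vacuum-time=2d      # drop everything older than two days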
-
Logrotate doesn't compress the .1 file version until it creates the next one. There is a setting for this you can add to /etc/logrotate.conf; it is called delaycompress. However, if your disk is full you won't be able to compress it anyhow.
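A minimal stanza showing that option in context (illustrative only, not the exact Debian-shipped file):
/var/log/syslog {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
}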
You already have 2G of syslog, so I'd look through the current log and see what is spamming it. My syslog from today is only 54kB :-).
steve
-
Those files should be covered by logrotate. Can you check if that is running regularly? It normally is called via /etc/cron.daily/logrotate.
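A quick sanity check (paths are the usual Debian ones, from memory):
ls -l /etc/cron.daily/logrotate   # the daily hook should exist
cat /var/lib/logrotate/status     # shows when each log was last rotated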
Related to this, though: journald in my install was not limited and would grow to fill the disk. I requested an enhancement for this but it got auto-closed (https://github.com/openmediavault/openmediavault/issues/1457). Since the default install doesn't cap the journal size, add this line after [Journal] in /etc/systemd/journald.conf and restart journald.
SystemMaxUse=10G
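Spelled out, it's just this (10G is the cap I chose, size it for your disk; the last command only checks the result):
# /etc/systemd/journald.conf (excerpt)
[Journal]
SystemMaxUse=10G

systemctl restart systemd-journald
journalctl --disk-usage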
-
That slice is just sort of the big bucket that resources are coming out of. You could create a smaller slice for docker and limit it. Then you would just have docker crashing, but it would be better to figure out what is leaking. Just run "docker stats" and watch it. Hopefully it will be obvious.
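If you do want to go the slice route, a rough sketch (the unit name and the 8G cap are made up, and it assumes the systemd cgroup driver):
# /etc/systemd/system/docker-limited.slice
[Slice]
MemoryMax=8G

# /etc/docker/daemon.json - start containers under that slice
{
  "cgroup-parent": "docker-limited.slice"
}

systemctl daemon-reload
systemctl restart docker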
steve
-
How is your swap configured? Is it possible the system is starting to swap and swap is either not available or not working? When it comes to memory though, it's hard to tell who is a victim and who is the guilty party.
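A quick way to check (standard util-linux commands):
swapon --show   # lists active swap devices/files; empty output means no swap
free -h         # shows how much swap is configured and in use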
The oom killer has a scoring algorithm to pick a victim, and it's picking the process running QtWebEngine. Try shutting down whatever docker container is using the Qt library and see if the system gets stable.
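If you're curious who the kernel currently considers the likeliest target, something like this (just a plain /proc walk, nothing OMV-specific) ranks processes by their current oom_score:
for p in /proc/[0-9]*/oom_score; do
    printf '%s %s\n' "$(cat "$p" 2>/dev/null)" "$(cat "${p%/oom_score}/comm" 2>/dev/null)"
done | sort -rn | head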
Also maybe change the mount -a to mount the exact disk which was problematic. The mount -a is causing docker to remount some things every time you run the command, and that might be stressing a little-used code path, leading to a leak somewhere.
steve
-
The mounting is suspicious and should not be necessary. Did you set up the megaraid cli so you can monitor the card? There are instructions around for installing the drivers on debian, and then you can use storcli or connect from the MegaRAID UI running on another desktop. From there you can watch the temps and you can check all the settings. Maybe the drive is disconnecting for some reason you will see in the megaraid log. Even cabling can be finicky.
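From memory, the storcli commands I keep coming back to look roughly like this (the install path is the usual Broadcom default and /c0 is the first controller; double-check against storcli's help since the syntax is fussy):
/opt/MegaRAID/storcli/storcli64 /c0 show all        # controller settings, ROC temperature, etc.
/opt/MegaRAID/storcli/storcli64 /c0/eall/sall show  # per-drive status
/opt/MegaRAID/storcli/storcli64 /c0 show events     # event log, to catch drive drops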
If not cabling or a disk issue, the controller could be overheating. These hw raid cards are notorious for this. LSI/Avago says they can run super hot, but it really seems like a bad idea long term to me. I have a sas lsi raid card, a 9361-8i, and what I did was 1) reseated the heatsink on the raid controller chip with new thermal paste, 2) used screws to tighten it down instead of the spring-clip screws that were on there, and 3) picked up a small noctua fan. Find some small screws the right size and you can actually attach the fan to the metal fins on the heat sink. Just screw between the fins. With that my controller is running at ~50C. I didn't just make this up, it's what many people do with these cards. Even 50C sounds hot, but that damn controller chip reported 95C before the changes.
steve
-
Thanks for the quick turn-around!
-
I just took a docker-ce update and now the entries in the Files list all show down. docker-compose ls still shows output like this below. Everything is running, and in fact I can still stop/start from the Files list. They just all show down.
steve
NAME STATUS CONFIG FILES
apps-nonvpn running(7) /vol/dev-disk-by-id-md-name-warehouse21-2/docker/compose/apps-nonvpn/apps-nonvpn.yml
apps-vpn running(11) /vol/dev-disk-by-id-md-name-warehouse21-2/docker/compose/apps-vpn/apps-vpn.yml
dns running(2) /vol/dev-disk-by-id-md-name-warehouse21-2/docker/compose/dns/dns.yml
nextcloud running(2) /vol/dev-disk-by-id-md-name-warehouse21-2/docker/compose/nextcloud/nextcloud.yml
portainer running(1) /vol/dev-disk-by-id-md-name-warehouse21-2/docker/compose/portainer/portainer.yml
swag running(2) /vol/dev-disk-by-id-md-name-warehouse21-2/docker/compose/swag/swag.yml
tagging running(3) /vol/dev-disk-by-id-md-name-warehouse21-2/docker/compose/tagging/tagging.yml
tandoor running(3) /vol/dev-disk-by-id-md-name-warehouse21-2/docker/compose/tandoor/tandoor.yml
-
Your volume setup looks incorrect to me. The tandoor guy provides an odd default setup. I wanted the static files mapped so I could delete them from time to time, as there were versions building up in there or something. I let the nginx config go to a docker-managed volume. My config predates the new UI that supports env files, so I just replicated the env settings to all three containers. It looks like this below. I've xxx'd out any secrets. I think I added the dependency health condition because the suggested default config was racing on startup and sometimes failing. Maybe that's what you are hitting; it's been a long time since I set this one up.
Code
version: "3"
services:
  db_recipes:
    restart: always
    image: postgres:11-alpine
    container_name: db_recipes
    hostname: db_recipes
    volumes:
      - /apps/tandoor/postgresql:/var/lib/postgresql/data
    environment:
      - DEBUG=0
      - ENABLE_SIGNUP=1
      - SQL_DEBUG=0
      - ALLOWED_HOSTS=*
      - SECRET_KEY=xxx
      - TIMEZONE=America/New_York
      - DB_ENGINE=django.db.backends.postgresql
      - POSTGRES_HOST=db_recipes
      - POSTGRES_PORT=5432
      - POSTGRES_USER=djangouser
      - POSTGRES_PASSWORD=xxx
      - POSTGRES_DB=djangodb
      - FRACTION_PREF_DEFAULT=0
      - COMMENT_PREF_DEFAULT=1
      - SHOPPING_MIN_AUTOSYNC_INTERVAL=5
      - GUNICORN_MEDIA=0
      - REVERSE_PROXY_AUTH=0
      - EMAIL_HOST=smtp.gmail.com
      - EMAIL_PORT=465
      - EMAIL_HOST_USER=xxx
      - EMAIL_HOST_PASSWORD=xxx
      - EMAIL_USE_TLS=0
      - EMAIL_USE_SSL=1
      - DEFAULT_FROM_EMAIL=xxx
      - "ACCOUNT_EMAIL_SUBJECT_PREFIX=[Tandoor Recipes] "
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d djangodb -U djangouser"]
      interval: 30s
      start_period: 30s
      timeout: 10s
      retries: 5
  web_recipes:
    image: vabene1111/recipes
    restart: always
    container_name: web_recipes
    hostname: web_recipes
    environment:
      - DEBUG=0
      - ENABLE_SIGNUP=1
      - SQL_DEBUG=0
      - ALLOWED_HOSTS=*
      - SECRET_KEY=xxx
      - TIMEZONE=America/New_York
      - DB_ENGINE=django.db.backends.postgresql
      - POSTGRES_HOST=db_recipes
      - POSTGRES_PORT=5432
      - POSTGRES_USER=djangouser
      - POSTGRES_PASSWORD=xxx
      - POSTGRES_DB=djangodb
      - FRACTION_PREF_DEFAULT=0
      - COMMENT_PREF_DEFAULT=1
      - SHOPPING_MIN_AUTOSYNC_INTERVAL=5
      - GUNICORN_MEDIA=0
      - REVERSE_PROXY_AUTH=0
      - EMAIL_HOST=smtp.gmail.com
      - EMAIL_PORT=465
      - EMAIL_HOST_USER=xxx
      - EMAIL_HOST_PASSWORD=xxx
      - EMAIL_USE_TLS=0
      - EMAIL_USE_SSL=1
      - DEFAULT_FROM_EMAIL=xxx
      - "ACCOUNT_EMAIL_SUBJECT_PREFIX=[Tandoor Recipes] "
    volumes:
      - /apps/tandoor/staticfiles:/opt/recipes/staticfiles
      - nginx_config:/opt/recipes/nginx/conf.d
      - /apps/tandoor/mediafiles:/opt/recipes/mediafiles
    depends_on:
      db_recipes:
        condition: service_healthy
  nginx_recipes:
    image: nginx:mainline-alpine
    restart: always
    container_name: nginx_recipes
    ports:
      - 8089:80
    environment:
      - DEBUG=0
      - ENABLE_SIGNUP=1
      - SQL_DEBUG=0
      - ALLOWED_HOSTS=*
      - SECRET_KEY=xxx
      - TIMEZONE=America/New_York
      - DB_ENGINE=django.db.backends.postgresql
      - POSTGRES_HOST=db_recipes
      - POSTGRES_PORT=5432
      - POSTGRES_USER=djangouser
      - POSTGRES_PASSWORD=xxx
      - POSTGRES_DB=djangodb
      - FRACTION_PREF_DEFAULT=0
      - COMMENT_PREF_DEFAULT=1
      - SHOPPING_MIN_AUTOSYNC_INTERVAL=5
      - GUNICORN_MEDIA=0
      - REVERSE_PROXY_AUTH=0
      - EMAIL_HOST=smtp.gmail.com
      - EMAIL_PORT=465
      - EMAIL_HOST_USER=xxx
      - EMAIL_HOST_PASSWORD=xxx
      - EMAIL_USE_TLS=0
      - EMAIL_USE_SSL=1
      - DEFAULT_FROM_EMAIL=xxx
      - "ACCOUNT_EMAIL_SUBJECT_PREFIX=[Tandoor Recipes] "
    depends_on:
      - web_recipes
    volumes:
      - nginx_config:/etc/nginx/conf.d:ro
      - /apps/tandoor/staticfiles:/static
      - /apps/tandoor/mediafiles:/media
volumes:
  nginx_config:
  #staticfiles:
-
Yes, looks fine. Although the quiet thing was disappointing :-). I could swear it still printed the main text, but with quiet it printed nothing. Maybe I tried a different command. For the up command --quiet-pull doesn't do anything without --pull.
Edit: Maybe I'm wrong and up --quiet-pull does what I want.
-
So, a lot of people have asked for the plugin to show all the containers running on the system. I spent a bunch of time trying to make that work with the current Containers tab (not happy with the results). But the current Containers tab enumerates the containers from the compose files maintained in the plugin. What would people think if I renamed the current Containers tab to Services (enumerated from docker-compose) and created a new tab called Containers (enumerated from docker container ls --all)?
These menus are already under services->compose, so it might look a little odd using Services. But having said that, whatever this is called, I'll just humbly suggest having a menu entry underneath it for each of the files. Then when you click on a file entry you see just the containers in that compose file. I would find this very useful. It's a portainer model that I liked using, and it differentiates it better from the containers list.
-
The yml was changed only recently, since I had to upgrade to a hotfix release.
How is the .env referenced in your yaml file for the one that fails? I ask because using a full file path is not really allowed. The error message looks like it was trying to find the full file path inside the folder where you have the compose file. From the docs:
Relative path MUST be resolved from the Compose file's parent folder. As absolute paths prevent the Compose file from being portable, Compose implementations SHOULD warn users when such a path is used to set env_file.
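In other words, the portable form is a relative reference, something like this (the service and file names are made up):
services:
  myapp:
    image: nginx:alpine
    env_file:
      - ./myapp.env   # resolved relative to the folder holding the compose file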
-
You should be able to see that in the Images tab already. The tag will just be empty.
Ok, that's nice. I'll try that for a bit. There might be unused ones still tagged, but in those cases it's usually because I changed the compose file and I should know what changed. Mostly I just want to quickly test what was updated.
-
It lets me see how fast images are downloading.
Not excited about adding this but I will look at it.
On portainer I would go to the images screen and it showed me which were not used; maybe that is another option. I've been using this command from the cli to see unused images, since the pull screen is hard to follow. Basically after pulling I want to know what was updated so I can take a look and make sure it is working.
thanks
steve
(this only works for tags that get updated, like latest)
docker images --format '{{.Repository}}:{{.Tag}}' | grep "\<none\>"
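The built-in dangling filter should give you the same list, if that reads easier:
docker images --filter dangling=true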