Posts by m4tt0

    Never thought about it much. I just find the setup practical. All files are in shared folders, as they are not only accessed by docker containers but also via SMB, etc. And storing all Docker container files in Appdata ensures they are all in one place, I can access and edit them via SMB, too, they are regularly backed up as part of my file server backup solution, and it gives me "independence" from the OS. If, for instance, my server or server OS breaks down, I can easily set up a new system with docker, restore all files, including the docker configurations, and I'm up and running without much hassle. I could even change the OS, or (one day) upload everything into some virtualized cloud server. It would still work.
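
    To illustrate the idea (a sketch, not my literal command; the host path prefix depends on how your data drive is mounted, and I've omitted ports etc.):

    Code
    # container CONFIGURATION lives under Appdata on the data drive
    docker run -d --name=adguard \
        -v /srv/dev-disk-by-label-data/Appdata/adguard:/opt/adguardhome/conf \
        -v /srv/dev-disk-by-label-data/Appdata/adguard/work:/opt/adguardhome/work \
        adguard/adguardhome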


    But there are many roads to Rome, and I don't think there is any "right or wrong" here. Just personal preference.

    I've got no clue whether my setup is "best practice", but it grew over time and has been very usable for several years.

    Here's my shared folder structure:


    /Appdata         Contains all docker container CONFIGURATION files; I map other shared folders (e.g. media or download folders from below) into the containers, too
        /adguard
        /easyepg
        /...
    /Backup          I use this to back up all kinds of clients on my network
    /Documents       All my personal records, office files, invoices, you name it. LIMITED ACCESS.
    /Media           Root directory to host all my media files
        /Home-videos
        /Movies
        /Music
        /Music-Videos
        /Photos
        /Recordings
        /TV-Series
    /Public          I use this to store files I want to share across my network. FULLY ACCESSIBLE (that is the difference to /Documents).
        /Downloads
        /Shares
        /...
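
    The restore story from above is then basically just (a sketch; /mnt/backup is a placeholder for wherever the backup lives):

    Code
    # copy the backed-up shared folders (incl. Appdata) back onto the new data drive
    rsync -a /mnt/backup/Appdata/ /srv/dev-disk-by-label-data/Appdata/
    # then recreate the containers with the same run commands as before;
    # they find their old configuration in Appdata and come up as they were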


    For what it's worth. Hope this helps...

    Thanks, macom, but not really. I understand that Volker suspects "some issues" with nginx, also by pointing at the code snippet issuing the alerts, but I don't know how to debug the problem on the nginx side. Also, I don't think I've messed around with the nginx installation whatsoever. The only "messing around" I did was upgrading from OMV4 to OMV5 instead of installing from scratch...
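
    For lack of better ideas, the only generic things I know to try on the nginx side are along these lines (standard commands, nothing OMV-specific):

    Code
    sudo nginx -t                           # syntax-check the nginx configuration
    sudo journalctl -u nginx.service        # service log
    sudo tail -n 50 /var/log/nginx/error.log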

    I've found a similar report on reddit. They recommended running the following to fix it:

    Code
    omv-salt deploy run nginx phpfpm

    I ran this command two days ago and the problems have not reappeared since. As I've never been able to reproduce the problem, though, and as two days are clearly not enough to prove anything, I'll just continue watching and waiting for now...


    EDIT: I encountered the same chain of warnings again, about a week after running the command. So most likely not a solution either...

    Thanks for clarifying. Yes, I was completely surprised by this, too. While troubleshooting I stumbled over some scripts people created to fix the hostname resolution for static IPs, but they looked like hacks. I've come across similar issues with Unifi devices before, where they go their own way. I think in their minds you are simply not "supposed" to fix IPs on your clients, but rather fix them centrally on the router through their controller and ecosystem.


    And yes, I believe Adguard could do the DHCP serving, too. If I run into trouble with this again, I'll certainly give it a try. Thanks again!

    I ran into trouble with a bond interface on my OMV server yesterday and want to report it, as I'm not sure my system is cleanly configured now, and because this might be a bug in the bond configuration and/or a missing feature in omv-firstaid.


    My OMV server has two 1 Gbit NICs, which were configured as bond0 with a static IP address. When I tried to change the static IP address to DHCP and applied the new configuration, the OMV web UI errored out. I'm not sure whether this was a problem on the OMV or on the router side. In any case, my server disappeared from the network afterwards.


    I've then logged into the CLI of my server and ran omv-firstaid to recover a working network configuration. I just tried to reestablish one of the NICs as an IPv4 client using DHCP, but the configuration with omv-firstaid failed on both NICs, despite attempts to reboot the server. I've inspected the network configuration within /etc/openmediavault/config.xml and found that the bond parameter within the interface section still contained a 1. I've changed it to 0 and rebooted, but to no avail.
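
    (For reference: if I understand the OMV5 tooling correctly, the interface section can also be dumped without touching config.xml directly, along these lines:)

    Code
    omv-confdbadm read --prettify conf.system.network.interface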


    I've then found reports on similar issues with RPis (not what I have; my server is Intel-based), and ran the following commands to fix my network configuration:

    Code
    sudo netplan apply
    sudo omv-salt deploy run systemd-networkd

    After rebooting the device again, the last configuration I tried to establish via omv-firstaid fortunately started working and the server reappeared on my network.


    Logging back into the OMV GUI, I was immediately asked to apply a changed configuration, which I did. It took several minutes, but afterwards everything seemed fine. The configured NIC is correctly listed under Network -> Interfaces. I've left the second NIC unconfigured and skipped the bond for now, as I had no appetite to ask for even more trouble.


    The only observation I've made is that ip a shows two bridge interfaces (something like br-0345234343436) in addition to the two NICs I expected. I'm not sure whether these are remnants of the broken bond configuration or unrelated (e.g. I do run bridge networks within docker on that server, too).
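
    Those br-... names look like docker's user-defined bridge networks to me; the hex part should match a docker network ID, which can be checked like this:

    Code
    docker network ls       # the NETWORK ID column should match the br-<id> suffix
    ip a | grep br-         # bridge interfaces currently present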


    In any case, I'm interested in how you read the situation, and I wanted to share my experience in case somebody else runs into similar issues.

    I have issues with hostname resolution on my local network and am looking for help. Until recently I used "local" as the domain name, which created problems with a Linux-based client running mDNS. I've therefore changed it, but still have problems. Let me try to describe:


    My setup:

    • My router (Unifi Dream Machine Pro) sits on 192.168.1.1 and the domain name is configured as "lan" there.
    • My OMV server sits on 192.168.1.81 and the domain name is configured as "lan" there as well.
    • I run Adguard in a Docker container on my OMV server. As such, 192.168.1.81 (my OMV server) is configured as the first name server in the network configuration of my router.
    • Within Adguard, "[/lan/]192.168.1.1" is configured as the upstream DNS server, to ensure that all addresses within the "lan" domain are resolved by my router directly.

    What I observe:

    • All external hostnames are resolved fine.
    • Almost all hostnames of clients within my LAN resolve without problems, too, and I can ping them using "ping hostname" or "ping hostname.lan", as expected.
    • I've tested this from a Debian WSL on my Win10 laptop, as well as from a Linux-based client within my LAN.
    • The only exception is my OMV server: It neither resolves by "servername", nor by "servername.lan".
    • Now, if I execute "ping servername.local" from my Debian WSL it DOES resolve ("local" was the old domain name within my router as well as my OMV server). The same command fails (as expected) from my Linux-based client, though.

    Do you have any idea what goes wrong here? Or at least how I could debug this moving forward? Any advice appreciated.
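
    I guess a first debugging step would be to query both resolvers directly and see which hop fails (otherhost.lan stands for one of the clients that DO resolve):

    Code
    nslookup servername.lan 192.168.1.81    # ask Adguard directly
    nslookup servername.lan 192.168.1.1     # ask the router directly
    nslookup otherhost.lan 192.168.1.1      # a working name, for comparison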

    Hmh, there still is a failure message for this in your log file. Something's not right. I guess it's permissions.

    Do all your application data folders really sit under /mnt, i.e. /mnt/zoneminder here, for all your docker containers? No need to adapt those paths from dlandon's example configuration?

    You have an issue with the group id "955" and need to change it. Watch some TechnoDadLife YouTube videos on installing docker containers on OMV. PUID and PGID parameters need to be adapted for most docker containers. He explains it very well.
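
    In short (a sketch; the image name and IDs are placeholders, use the values for your own user):

    Code
    id youruser                                           # prints uid=... and gid=...
    # then pass those values into the container, e.g.
    docker run -d -e PUID=1000 -e PGID=100 ... linuxserver/someimage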


    Not sure you have other issues, too, but I'd fix that one first...