Posts by m4tt0

    Hmm, there is still a failure message for this in your log file. Something's not right; my guess is permissions.

    Do all your application data folders really sit under /mnt (i.e. /mnt/zoneminder here) for all your Docker containers? No need to adapt those paths from dlandon's example configuration?

    You have an issue with the group ID "955" and need to change it. Watch some of TechnoDadLife's YouTube videos on installing Docker containers on OMV; the PUID and PGID parameters need to be adapted for most containers, and he explains it very well.
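    To illustrate (the user name and the docker run line below are placeholders, not from your setup): the numeric IDs can be looked up with `id` and then passed to the container as PUID/PGID:

    ```shell
    # Print the numeric UID and GID of the account that should own the
    # container's data ("$(whoami)" stands in for your actual data user):
    id "$(whoami)"

    # The uid=/gid= numbers from that output then go into the container's
    # environment variables, e.g. (values are examples only):
    #   docker run -d -e PUID=1000 -e PGID=100 dlandon/zoneminder
    ```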


    Not sure you have other issues, too, but I'd fix that one first...

    Just ran the regular update procedure to upgrade openmediavault-backup and omvextras today and found the above error code in the console output.


    Here is the full log entry:

    Not sure this is relevant, but I wanted to report it in case it is and others run into it, too...

    The problem recurs every day at roughly the same time, around 12:26/12:27. I've increased the Samba log level, hoping to capture more detail on the root cause. You'll find the log here (too long to post inline): https://pastebin.com/hbQpys2M

    The connection between the containers broke at exactly 12:27:49 today. The log excerpt covers that time stamp, plus previous and subsequent events.


    I'd really, really appreciate any insights into what keeps causing this or how to debug it further...

    I use OMV as a file and media server. I've been running a Tvheadend container (host mode) and an Oscam container (bridge mode) without problems for several months. Oscam listens for TVH requests on port 9000. Some weeks ago, I started to run into issues: at apparently random times (usually once or twice per day), the Oscam container would lose its "connection" to the TVH container and start writing "unknown socket" errors to its log file. At that stage I also lost my ability to descramble programs via Oscam. Restarting either Tvheadend or Oscam (no matter which) solved the problem and reestablished the proper configuration.
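    When the containers lose each other like this, one quick check is whether anything is still listening on the port in question. A sketch, assuming `ss` is available on the host and 9000 is the Oscam port mentioned above:

    ```shell
    # List listening TCP sockets and check whether anything is bound to
    # port 9000 (column 4 of "ss -ltn" is the local address:port):
    ss -ltn | awk '$4 ~ /:9000$/ {found=1; print} END {exit !found}' \
      || echo "no listener on port 9000"
    ```

    If nothing shows up, the Oscam listener itself is gone rather than just unreachable from the other container.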


    To "debug" the problem, I compared the timestamps of when Oscam starts writing the "unknown socket" errors with the OMV syslog, and found that smbd error messages appeared at the same times. Here is the output of systemctl status smbd, which shows the errors (the last two lines also appear in the syslog):

    192.168.1.1 is the IP address of my gateway. Do you have any idea what "breaks" the connection between the two Docker containers? I'm wondering whether the smbd error is causal or just coincides with the breakage, but I also don't know how to "debug" this further...


    Any help appreciated...

    I believe I've figured out the rsyncd/smbd problem: permissions for the rsync target were not set correctly, so it's not related. The first part of the question remains, though. Any advice appreciated...

    gderf Yes, all updates have been installed.

    votdev I'm not exactly sure how to check that, but got the following:

    Code
    root@MyRS:/etc/systemd# systemctl is-active network
    inactive
    root@MyRS:/etc/systemd# systemctl is-active resolvd
    inactive
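    Note that "inactive" here may simply mean the queried unit names don't exist. On a systemd-based OMV 5 install the networking and DNS units are usually called systemd-networkd and systemd-resolved (an assumption about the default setup; verify with systemctl list-units), so the check would look like this:

    ```shell
    # Bail out gracefully on systems without systemd:
    command -v systemctl >/dev/null 2>&1 || { echo "systemctl not available"; exit 0; }

    # Query the units that actually manage networking and name resolution
    # on a typical systemd-based OMV 5 install:
    for unit in systemd-networkd systemd-resolved; do
        printf '%s: %s\n' "$unit" "$(systemctl is-active "$unit" || true)"
    done
    ```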

    I have been experiencing regular connection issues across Docker containers for some time now, though. They coincide with the following syslog entries.

    Code
    Sep 11 00:09:39 MyRS rsyncd[1963]: forward name lookup for unifi.local failed: Name or service not known
    Sep 11 00:09:39 MyRS rsyncd[1963]: connect from UNKNOWN (192.168.1.1)
    Sep 11 00:09:44 MyRS smbd[1966]: [2020/09/11 00:09:44.986037, 0] ../source3/smbd/process.c:335(read_packet_remainder)
    Sep 11 00:09:44 MyRS smbd[1966]: read_fd_with_timeout failed for client 192.168.1.1 read error = NT_STATUS_END_OF_FILE.
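    The failed forward lookup for unifi.local in the first line is a DNS problem in its own right; whether it relates to the smbd error I can't say. A quick way to reproduce the lookup from the host (the host name is taken from the log above):

    ```shell
    # Resolve the name rsyncd complained about through the system's normal
    # NSS lookup path (the same path rsyncd uses):
    getent hosts unifi.local || echo "lookup for unifi.local failed"

    # Sanity check: localhost should always resolve; if this fails too,
    # the resolver configuration itself is broken:
    getent hosts localhost
    ```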

    Might not be related though and I don't want to hijack the thread...

    Interesting. Just checked on my system. I had upgraded from OMV 4 to 5. /etc/resolv.conf was not symlinked. There is no resolve directory within /run/systemd. I ran the omv-salt command, but no change. /etc/resolv.conf just contains my (standard) nameserver entry directly.
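    For anyone comparing their own setup, this is how the symlink status can be checked (the paths are the standard systemd-resolved ones; a plain file containing a nameserver line, as in my case, also works):

    ```shell
    # A symlink target like /run/systemd/resolve/stub-resolv.conf indicates
    # systemd-resolved is managing the file; a regular file means it is static:
    ls -l /etc/resolv.conf

    # Show the effective resolver configuration either way:
    cat /etc/resolv.conf
    ```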

    Thanks again, votdev. Works brilliantly! My bond is up and running... :-)


    It still gave me headaches, but only because of my setup and because I did not really know what I was doing when I started. As this might be useful for others, here is a little summary of what I learned:


    Situation:

    - I have two NICs on my OMV NAS, both connected to a managed Unifi 24-port PoE switch, which in turn connects to a cable modem via a Unifi Security Gateway.

    - I configured everything with my laptop, which connects to the Unifi network via WLAN through Unifi access points.

    - The objective was to bond the two NAS NICs to increase bandwidth and fail-over resilience between server and switch.


    Complication:

    - 1: My Unifi Controller container runs on the OMV server, so "losing" it means I can no longer configure anything via my laptop.

    - 2: I also run a Pi-hole container on my OMV server, so "losing" it means I can neither resolve hostnames nor connect to the internet.


    Solution:

    - It helps to start by removing Pi-hole from the uplink chain, reverting to the USG as default gateway and DNS server in the Unifi controller. Less trouble.

    - Before bonding the NICs on my OMV server, I aggregated two empty(!) ports on my Unifi switch to "prepare" the receiving end for the bond. This allowed me to keep my connection to the network.

    - Only then did I delete the network configuration on the OMV server and create the bond, which works now thanks to votdev's recent changes.

    - After applying the new configuration, the bond started to come up.

    - I then replugged the two LAN cables coming from the OMV server into the pre-configured aggregated ports, and the bond became active.

    - So far so good, but the bond came with a new MAC address, and the Unifi gateway no longer resolved the hostname correctly.

    - To fix that, I logged into the Security Gateway and edited /etc/hosts, replacing the MAC address of the old single LAN connection with the new one from the bond.

    - The last step was to reconfigure Pi-hole: I had to delete and recreate the macvlan network, as the parent interface had changed to "bond0". Finally, I reinstated Pi-hole as default gateway and DNS server in the Unifi controller, and all was done. :-)
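    The macvlan recreation in that last step looks roughly like this with the docker CLI (the network name, subnet and gateway are placeholders for my LAN values; only parent=bond0 comes from the bonding steps above):

    ```shell
    # Bail out gracefully if docker is not installed:
    command -v docker >/dev/null 2>&1 || { echo "docker not installed"; exit 0; }

    # Remove the old macvlan network (its parent interface is gone) ...
    docker network rm macvlan_lan 2>/dev/null || true

    # ... and create it again with the bond as parent interface:
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      -o parent=bond0 \
      macvlan_lan || echo "network create failed; check that bond0 exists"
    ```

    Containers then join the new network with --network macvlan_lan as before.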

    I believe I have some hiccup in my network configuration after my upgrade from 4 to 5:

    If I change the configuration of a network interface, e.g. by simply adding an IP address for the default gateway, and then apply the configuration, I get the following error:

    Do you have any idea how to fix this problem?

    allwan Could you elaborate on how you achieved that, please? I have two LAN adapters, which are configured individually. I tried and failed several times to link them. The problem is that you apparently have to delete the existing configurations in order to create the bond from scratch, but after removing the configuration I lost access to the web UI. It's a catch-22. I've seen reports of working around that by installing additional WLAN adapters, but I could not get those to work either...


    As such, any advice appreciated!