Posts by 71CHi2OOeuF2

    The situation is the following:


    - It doesn't work. The lock file doesn't get deleted

    - OMV maintainers claim it works

    - The SMB maintainer doesn't answer


    => Broken.


    My advice: Just move on and use something else. OMV is dead to me.

    What are the best practices for running either portainer or cockpit over a TLS-encrypted network channel?

    Scenario reproduction:


    1) In OMV 5, open System -> Certificates -> SSL and import a certificate created by a CA

    2) Open System -> General Settings, choose the imported certificate and check the Enable SSL/TLS box

    3) Clear browser cache and reload the page with https://. The browser should not show any certificate errors

    4) Under System -> OMV-Extras -> Cockpit, click Install.

    5) Click Open web. It will show a certificate error, because the presented certificate is not the one selected in the General Settings (step 2 above), but rather a self-signed certificate generated by OMV. This is probably because nginx is not set to use the same certificate on the port cockpit is listening on.


    What are the best practices to get this working?
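    One workaround I would try (the source paths below are assumptions, not necessarily where OMV stores the imported certificate): cockpit-ws serves the alphabetically last *.cert file from /etc/cockpit/ws-certs.d, and that file must contain the certificate chain followed by the private key. So concatenating the imported certificate and key into such a file should make cockpit present the same certificate as the WebUI:

```shell
# Sketch only: adjust the source paths to wherever OMV stored the
# certificate and key imported in step 1. cockpit-ws picks the
# alphabetically last *.cert file in /etc/cockpit/ws-certs.d.
cat /etc/ssl/certs/my-imported.crt /etc/ssl/private/my-imported.key \
    > /etc/cockpit/ws-certs.d/99-omv.cert
chmod 600 /etc/cockpit/ws-certs.d/99-omv.cert
systemctl restart cockpit.socket
```

    Note this only covers cockpit; it does not touch the nginx configuration, so an OMV update or certificate renewal would require repeating the step.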

    Are there any plans to configure virtual hosts for the internal nginx service in OMV 5?

    Reason: I would like to specify a TLS certificate for my docker hosts, like portainer, which can be installed from the omv-extras tab. I have to use valid certificates because HSTS is enabled on the root domain.

    I know that I could install an nginx-proxy docker container and set up the docker hosts using environment variables, but, IMO, this is far from convenient to do on the command line.

    I would have already raised the issue on the bug tracker, but I cannot reproduce it.
    Is anyone else who has encountered this able to reproduce it reliably?


    Looking at the actual script, I cannot fathom any case - except maybe the server losing power while the script is still running - where the cleanup trap would not fire.
    Yet the lock file was present again on my 24/7 server this evening, with all the consequences listed above.
    Puzzling...


    I don't have time to investigate this issue now. I have created a cron job that deletes everything matching /var/run/samba-recycle* daily.


    Further things you might check:


    1) Change the shebang line from /bin/sh to /bin/bash
    2) Have a look at the following post; maybe it is related:
    https://stackoverflow.com/ques…tion-and-trap-exit-signal

    Hello


    I have installed docker-gui from omv-extras on my OMV 3.0 system.
    I can successfully download images and create containers from them.
    However, I would like to create two containers that communicate with each other, and I am failing to do so.


    Specifically, I would like to create the library/mariadb and zabbix/zabbix-server-mysql containers. Later I would like to add the WebGUI too.
    Now one container is the server and one is the SQL-DB server, so obviously the server needs to talk to the SQL-DB server.


    Since I don't want to expose the whole SQL-DB server to the other clients on my network, I tried to create both containers on docker's bridge network interface. On the mariadb container, I exposed TCP/3306 like this:




    Now, in the server container, I need to provide the hostname or IP of the SQL-DB server as an environment variable. How would I best do this? I don't know the IP address of the container until it is started, and even then it wouldn't be guaranteed to be static. Using the hostname doesn't seem to work, even when the "Host name" field in the above screenshot is filled in: the name cannot be resolved and is not added to /etc/hosts.


    Any ideas?
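    For what it's worth, here is what the CLI equivalent might look like, assuming a user-defined network (the network name and password below are placeholders; DB_SERVER_HOST is the variable the zabbix/zabbix-server-mysql image reads). Containers on the default bridge (docker0) cannot resolve each other by name, but on a user-defined network docker provides built-in DNS, so the container name works as the hostname:

```shell
# Containers on a user-defined network resolve each other by name;
# the default docker0 bridge offers no such name resolution.
docker network create zabbix-net

# "changeme" is a placeholder password.
docker run -d --name mariadb --network zabbix-net \
    -e MYSQL_ROOT_PASSWORD=changeme \
    mariadb

# DB_SERVER_HOST can then simply be the other container's name.
docker run -d --name zabbix-server --network zabbix-net \
    -e DB_SERVER_HOST=mariadb \
    -e MYSQL_ROOT_PASSWORD=changeme \
    zabbix/zabbix-server-mysql
```

    Since no -p is given for mariadb, port 3306 stays reachable only from containers on zabbix-net, which matches the wish of not exposing the SQL-DB server to the rest of the network.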



    System
    Linux nas 4.9.0-0.bpo.3-amd64 #1 SMP Debian 4.9.30-2+deb9u2~bpo8+1 (2017-06-27) x86_64 GNU/Linux


    docker version
    Client:
    Version: 17.05.0-ce
    API version: 1.29
    Go version: go1.7.5
    Git commit: 89658be
    Built: Thu May 4 22:04:27 2017
    OS/Arch: linux/amd64


    Server:
    Version: 17.05.0-ce
    API version: 1.29 (minimum version 1.12)
    Go version: go1.7.5
    Git commit: 89658be
    Built: Thu May 4 22:04:27 2017
    OS/Arch: linux/amd64
    Experimental: false



    default bridge
    docker0 Link encap:Ethernet HWaddr 02:42:3f:28:60:23
    inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
    UP BROADCAST MULTICAST MTU:1500 Metric:1
    RX packets:6571 errors:0 dropped:0 overruns:0 frame:0
    TX packets:8685 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:0
    RX bytes:583133 (569.4 KiB) TX bytes:4495922 (4.2 MiB)

    Oh oh oh, backports. Please note that it is not guaranteed that it will work. OMV does not test against backports.

    Regarding the backports: I don't remember adding backports manually to sources.list or sources.list.d.
    But I have


    Code
    deb http://httpredir.debian.org/debian jessie-backports main contrib non-free

    in /etc/apt/sources.list.d/openmediavault-kernel-backports.list


    The only non-standard thing I did was install omv-extras. For me, omv-extras only offers the Linux 4.9.0-0.bpo.3-amd64 backports kernel; I cannot choose a non-bpo kernel. How would I switch to a stable kernel that is supported by OMV?


    The guide I followed was: http://omv-extras.org/joomla/index.php/guides
    There I chose OMV 3.x (erasmus) (STILL BETA). Yes, it says it's beta, but it doesn't offer a non-bpo version.

    Oh oh oh, backports. Please note that it is not guaranteed that it will work. OMV does not test against backports.

    # apt-get install samba
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:


    The following packages have unmet dependencies:
    samba : Depends: samba-common (= 2:4.2.14+dfsg-0+deb8u7) but 2:4.2.14+dfsg-0+deb8u6 is to be installed
    Depends: samba-common-bin (= 2:4.2.14+dfsg-0+deb8u7) but 2:4.2.14+dfsg-0+deb8u6 is to be installed
    Depends: samba-libs (= 2:4.2.14+dfsg-0+deb8u7) but 2:4.2.14+dfsg-0+deb8u6 is to be installed
    E: Unable to correct problems, you have held broken packages.
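    A version skew like the one above (samba wants deb8u7 while deb8u6 is to be installed) often just means the package lists are stale or the dependent packages were not upgraded together. A first thing to try (plain apt, nothing OMV-specific):

```shell
# Refresh the package lists, then let apt pull all samba packages to
# matching versions in a single transaction.
apt-get update
apt-get install samba samba-common samba-common-bin samba-libs
```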

    I don't know if my problem is the same as the one you describe. I can easily reproduce it by calling the same script cron would from the bash command line; it returns to the prompt within milliseconds. It clearly didn't remove the files, and a glimpse into ".recycle" on the SMB share confirmed the suspicion. Removing the ampersand mentioned above does, however, resolve the issue for me, and the recycle bin gets cleared.


    Can we get the package maintainer on board? Maybe there is a reason that things are done the way they are now...

    Update:


    I asked the question in the Debian forums and got this answer. It looks like the "&" in "&>" causes the command to run asynchronously: the parent/outer shell puts it in the background, so the child command can be terminated before it finishes. Maybe we can remove it, or is there a reason for it to run async?
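    The parsing difference is easy to verify (the file names under /tmp are just for the demonstration):

```shell
# Under bash, "&>" redirects both stdout and stderr to the file.
bash -c 'echo hello &> /tmp/redir-demo-bash.txt'
cat /tmp/redir-demo-bash.txt     # contains "hello"

# Under dash (Debian's /bin/sh), the same text parses as "&" (run in
# background) plus ">" (truncate the file), so the output never lands
# in the file.
sh -c 'echo hello &> /tmp/redir-demo-sh.txt'
cat /tmp/redir-demo-sh.txt       # empty when sh is dash
```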


    The author of the answer also suggests using mkdir to make the lock operation atomic.
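    A minimal sketch of that suggestion (the path under /tmp is a placeholder, not the location the OMV script uses):

```shell
# mkdir-based lock: mkdir either creates the directory or fails, in one
# atomic step, so two concurrent runs can never both hold the lock.
LOCKDIR=/tmp/recycle-lock-demo

acquire() {
    if mkdir "$LOCKDIR" 2>/dev/null; then
        echo "lock acquired"
    else
        echo "already running"
    fi
}

acquire             # first run takes the lock
acquire             # a second, concurrent run backs off
rmdir "$LOCKDIR"    # in a real script: trap 'rmdir "$LOCKDIR"' EXIT
acquire             # once released, the lock can be taken again
rmdir "$LOCKDIR"
```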

    Did you ever figure this out?


    I have a similar issue. My samba recycle bin does not get cleaned.


    My settings:




    I realized that /etc/cron.daily/openmediavault-samba-recycle runs the scripts in the directory /var/lib/openmediavault/cron.d, so effectively it runs:


    /var/lib/openmediavault/cron.d/samba-recycle-xxxxxxx


    Looking at the script and putting a few debugging prints I figured out that the script thinks it is already running:


    # Exit if another job is running.
    [ -e /var/run/samba-recycle-xxxx ] && exit 0


    I have verified that the file really exists at the specified location (/var/run), even though the script is no longer running.
    Maybe some cleanup fails?


    Manually running the script /var/lib/openmediavault/cron.d/samba-recycle-xxxxxxx does clean the recycle bin correctly, and /var/run/samba-recycle-xxx gets cleaned up after the script terminates.
    However, every time I run "run-parts --report /etc/cron.daily", the /var/run/samba-recycle-xxxx file persists.



    I have not done anything fancy like special ACLs or shutting down the system at a scheduled time. The server runs for weeks, but the recycle bin doesn't get cleared...


    Any ideas?

    I haven't figured out what caused the misbehavior. What came to mind is that it could have been caused by DNS caching or something similar.


    It works very stably right now. What I did:


    1) Added a static route on my router: 10.8.0.0/24 via gateway 192.168.1.10 (my OMV/OpenVPN instance).
    2) Added push "route 192.168.1.0 255.255.255.0" to the OpenVPN extra options. This route is then created on the client with metric 35, which is lower than the existing on-link route for the local network.
    My client's routes when connected through the VPN are then:


    Code
    Network Destination   Netmask            Gateway       Interface      Metric
    0.0.0.0               0.0.0.0            192.168.1.1   192.168.1.60   35
    0.0.0.0               128.0.0.0          10.8.0.5      10.8.0.6       35
    10.8.0.1              255.255.255.255    10.8.0.5      10.8.0.6       35
    ...
    128.0.0.0             128.0.0.0          10.8.0.5      10.8.0.6       35
    192.168.1.0           255.255.255.0      On-link       192.168.1.60   291
    192.168.1.0           255.255.255.0      10.8.0.5      10.8.0.6       35

    DNS resolution is instant, and all network resources, be they on the local network or on the VPN network, work just fine.

    Thanks, I have removed two of the three SNAT rules and also removed the MASQUERADE rule.
    Additionally, I uninstalled iptables-persistent, which I had installed manually, and removed the iptables-restore line from /etc/rc.local.
    But after rebooting OMV I again had three SNAT rules in iptables =O Is the rule added for every interface that exists on the system? I have exactly three interfaces (including the tun0 device from OpenVPN). It doesn't hurt to have them there; I would just like confirmation that this looks normal to you guys too.


    What is weird now is that when connected over the VPN, all my browsers (Chrome, Edge, Firefox) are able to load all web resources (internet & local web pages). I can also access my SMB share through Windows Explorer. I swear it wasn't working before I added the MASQUERADE rule. I am puzzled why it works now ?(


    The only thing not working when connected through the VPN is that nslookup on the command line cannot resolve any name. nslookup tries to use the DNS server of the remote ISP (the one it would use when not connected through the VPN) instead of the local one (in the network of my OMV).


    I thought the DNS server of the interface the traffic flows through would be used. Is that not the case for MS nslookup?


    ipconfig shows that my local Ethernet adapter has the local ISP's DNS server IP set.
    The ISATAP tunnel adapter has my OMV network's DNS server set (which is the one I want to use). Could this be a routing problem on my client?

    If I issue the command twice, my POSTROUTING chain just grows:


    I understand that since MASQUERADE comes first, it takes precedence over SNAT. What I didn't mention in my previous post: I run a WRT firmware on my router (Asuswrt-Merlin), and I think I can add a static route there. So after I have observed the packets as mentioned by subzero79, I will add a static route and remove the MASQUERADE line - then only the SNAT rules will remain :)


    Could it be that the SNAT rule is added on every boot because I installed iptables-persistent? If that is the case, I'll have to remove the SNAT rules from my iptables-persistent config file.
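    If the duplicates need pruning by hand first (the rule number below is illustrative), iptables can address rules by position:

```shell
# Show the nat POSTROUTING rules together with their positions.
iptables -t nat -L POSTROUTING -n --line-numbers

# Delete a duplicate by its position (rule 2 here is illustrative);
# numbers shift after each delete, so re-list before the next one.
iptables -t nat -D POSTROUTING 2

# With iptables-persistent installed, save the cleaned table so the
# duplicates do not come back on the next boot.
iptables-save > /etc/iptables/rules.v4
```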



    EDIT: ifconfig eth0:


    eth0 Link encap:Ethernet HWaddr xx:xx:xx:xx:xx:xx
    inet addr:192.168.1.10 Bcast:192.168.1.255 Mask:255.255.255.0
    UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
    RX packets:121960352 errors:0 dropped:0 overruns:0 frame:0
    TX packets:35165756 errors:0 dropped:0 overruns:0 carrier:0
    collisions:0 txqueuelen:1000
    RX bytes:181307377105 (168.8 GiB) TX bytes:18499311834 (17.2 GiB)
    Memory:df300000-df37ffff

    Yes, I have seen the SNAT rules before I used iptables-save:


    192.168.1.10 is my OpenVPN/OMV server.
    I don't know why the SNAT rules appear in this list three times. Can I safely delete two of them?

    Thanks for your answer. I will try it out as soon as possible. I am no network professional so it is good to get some background information.


    luxflow: Yes I have enabled that all traffic should be redirected through the VPN server (Which is what I want).

    Hello,


    I have installed the OpenVPN plugin for OMV 3.0 and could successfully connect from a remote site to the site running OMV. However, I quickly realized that some of my internal/remote network resources, especially my DNS server, and also internet access were not available until I ran the following command:


    Bash
    iptables -t nat -I POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    (I have put this line in the iptables-persistent file and load it via iptables-restore in /etc/rc.local.)


    I have read on other forums about similar issues that it is quite normal to enable masquerading when using OpenVPN.
    Is there any downside to enabling masquerading, and is there a reason that the plugin doesn't do it automatically?


    I would like to understand whether what I did is an exceptional case or whether it is common, necessary in all cases, and a normal procedure.


    Some background: both the remote and the local site have the same subnet (192.168.1.0), which is not ideal (I know, but I can't change either network).
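    On the downside question: MASQUERADE re-reads the outgoing interface's address for every connection, which is only needed when that address is dynamic. Assuming the server's LAN address is static at 192.168.1.10 (as in the posts above), an equivalent SNAT rule does the same job without that lookup:

```shell
# Equivalent of the MASQUERADE rule when eth0's address never changes.
iptables -t nat -I POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to-source 192.168.1.10
```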

    I only bring up these arguments when they apply to plugins that are not ported. If someone chooses to port them, I don't care whether they should or should not be on a typical NAS, because the work is being done by someone who wants to do that work. But if someone expects me (pretty much no one else is helping) to port them, I am going to tell people why I don't want to port them.


    It isn't coming. It is already ready.


    Different plugin maintainers have different opinions on what a NAS should be used for. Obviously someone wants to work on jdownloader, nzb, etc. I don't want to work on dnsmasq.


    OK, I understand. Thanks for your answer.