Posts by VaryingLegwarmer

    Duckdns is out on the internet and can't resolve LAN IP addresses. It knows nothing about your LAN; it doesn't know where your LAN is and can't see it, because a LAN address is not an internet address and your LAN is segregated by your router. Many people around the world use the same LAN IP ranges you do, so even if duckdns could see into a LAN, how would it know to send the traffic to you instead of the guy down the road? And even if it did, as I mentioned, hairpinning is not allowed by a lot of ISPs' equipment.


    I can pretty well guarantee you that you have a DNS issue, but if you want to confirm it for yourself, the next time you can't access a container using the domain, try using the IP address of your server and the port number of the container. You will be able to access it.

    I can access the resources using the local IP while this happens; as I mentioned, I stay connected via SSH even while the URLs are not resolving in the browser. Of course duckdns doesn't know anything about my LAN, but when I hit the duckdns URL (which points to my local IP in duckdns) it doesn't need to know anything; it just needs to resolve the name to my local IP. If I'm inside my network, it naturally resolves to a local resource (my OMV machine running NPM). If this weren't the case my setup wouldn't work at all, but it does work most of the time. This is a pretty standard setup for home servers, used to get a domain name and HTTPS for local resources, so I thought others might know what was causing my issue, since I don't think most people are experiencing this. Are you familiar with this setup? Your explanation doesn't account for the fact that the resource is inaccessible only for a little while; after hitting the URL several times, it eventually resolves back to the resource again.
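
    The next time it fails, a quick way to check what the name is actually resolving to at that moment (placeholder domain; a healthy lookup should answer with the LAN IP set in duckdns):

    Code
    $ nslookup example.duckdns.org    # should answer with your LAN IP, e.g. 192.168.x.x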

    The idea still stands.


    When you enter a URL in a browser, your computer or device queries a DNS server to find out what the actual IP address is. With no DNS server servicing your LAN, the browser relies on the DNS cache in your computer or router, but a cache is not permanent. Once a URL-to-IP resolution ages out of the cache, there is nothing left to point to the IP address, so another lookup is performed. If the only DNS server found is out on the internet, then that is where the query is sent, and since internet DNS servers can only resolve as far as your public IP and know nothing about your LAN, there is nothing available to fill that request. You should always be able to access services on your LAN using the IP and port, but some will not work right, such as Nextcloud or Vaultwarden, since Nextcloud uses the domain for authorized access and share-link creation, and Vaultwarden requires SSL certificates.


    If you don't want to run Pihole or dnsmasq and point your devices at it as their DNS server, the only other option is a hosts-file edit on the devices that allow it.
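
    On a Linux or macOS client that edit looks like the line below (example IP and domain; use your server's LAN IP and your own duckdns name):

    Code
    # /etc/hosts
    192.168.1.50    example.duckdns.org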

    In duckdns I am configuring the URL to resolve to my local IP; I thought that was meant to take care of DNS routing to my local IP, no? The guides I followed never mention this misbehavior. Besides that, why, after hitting the URL for about 30 seconds, does it eventually bring back the service I meant to access originally? That's why I fear a container or service is "going to sleep": routing works eventually, but the service is inaccessible for a while.

    Does this happen from outside your LAN or inside?

    I guess I should've offered more details. All access is done from inside the network; no external access is enabled. The NPM URLs are basic duckdns URLs pointing to my containers on the same machine. I'm not running a local DNS; as I understood it, NPM is enough for my use case, and it is, since it does proxy to the right services using the URLs I have set for them in NPM. But it stops working from time to time, and that's the issue. Sometimes it can take days for the issue to happen, but it does happen (with no server downtime). Does your diagnosis still apply? Setting up dnsmasq sounds like overkill when I already have NPM, which is (?) enough. I'm not that experienced with reverse proxies, let alone DNS servers, so any suggestions are appreciated.

    I'm facing a weird intermittent issue. From time to time, NPM's proxied hosts appear to be offline when accessed via their corresponding NPM URLs. Interestingly, NPM itself remains accessible during this time, and typically I can regain access to the proxied hosts by visiting the admin-interface URL linked to the NPM container through NPM. It's like the proxy goes to sleep for the hosts but not for its own URL? Not sure what's going on or how to pin down the cause, so suggestions are appreciated. At first I thought it was drive spindown, but the Docker data is stored on an SSD with all power management disabled. The OMV installation drive doesn't have any power management enabled either. Does Docker put containers to sleep by default? Every host proxied by NPM is a Docker container as well. I can SSH into OMV during this time too, so there doesn't seem to be any downtime; it looks like an NPM-related issue somehow.
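
    One way to narrow it down next time it happens: bypass DNS entirely and hit NPM by IP with the Host header set (hypothetical domain and LAN IP; substitute your own). If this returns the app while the browser fails, the problem is name resolution; if it returns a 502/504, it's the proxy or the upstream container.

    Code
    $ curl -v -H "Host: app.example.duckdns.org" http://192.168.1.50/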

    I'm getting a similar error since I installed OMV. According to https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=966218#27 it was a kernel bug that should've been fixed way back in 5.9; we're at 6.1 in OMV6 and I'm still getting it. It's better now (output below); I used to get the "firmware: failed to load iwlwifi-ty-a0-fg-a0-67.ucode" lines that the OP was getting.


    Code
    $ sudo grep -i "iwlwi" /var/log/syslog
    Oct  1 19:36:59 server sensors[1689]: iwlwifi_1-virtual-0
    Oct  1 19:36:59 server kernel: [   19.324809] iwlwifi 0000:05:00.0: firmware: direct-loading firmware iwlwifi-cc-a0-72.ucode
    Oct  1 19:36:59 server kernel: [   19.324873] iwlwifi 0000:05:00.0: api flags index 2 larger than supported by driver
    Oct  1 19:36:59 server kernel: [   19.324907] iwlwifi 0000:05:00.0: TLV_FW_FSEQ_VERSION: FSEQ Version: 89.3.35.37
    Oct  1 19:36:59 server kernel: [   19.349632] iwlwifi 0000:05:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
    Oct  1 19:36:59 server kernel: [   19.349838] iwlwifi 0000:05:00.0: firmware: failed to load iwl-debug-yoyo.bin (-2)
    Oct  1 19:36:59 server kernel: [   19.349910] iwlwifi 0000:05:00.0: loaded firmware version 72.daa05125.0 cc-a0-72.ucode op_mode iwlmvm
    Oct  1 19:36:59 server kernel: [   20.189033] iwlwifi 0000:05:00.0: Detected Intel(R) Wi-Fi 6 AX200 160MHz, REV=0x340
    Oct  1 19:36:59 server kernel: [   20.316499] iwlwifi 0000:05:00.0: Detected RF HR B3, rfid=0x10a100


    Workaround:

    # nano /etc/modprobe.d/iwlwifi.conf

    and then enter the following content:

    options iwlwifi enable_ini=N

    After saving and exiting nano, you will need to run

    # update-initramfs -u
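
    After rebooting, you can check that the debug-firmware lookups are gone; with enable_ini=N the "failed to load iwl-debug-yoyo.bin" lines should no longer appear:

    Code
    $ sudo dmesg | grep -i yoyo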

    Got one working that is very consistent. Still open to alternatives but this accomplishes the task I set out to do.


    So I just got ChatGPT to do the groundwork for the script and tweaked it afterwards. It works pretty well. I had to add mount -a because otherwise OMV was erratic in automounting the drives after decrypting: it left some unmounted while it did mount others, but after I added mount -a it seems to be mounting them as expected. Anyway, is there a way to make sure it picks up and mounts the drives right after decrypting, in an OMV-"idiomatic" way? I also wanted to stop Docker before mounting, but had issues starting it afterwards, so any suggestion would be appreciated. Below is the script.
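
    (A minimal sketch along those lines; the device paths and mapper names are assumptions, so adapt them to your drives.)

    Code
    #!/bin/bash
    # Sketch: unlock several LUKS drives with a single passphrase prompt,
    # remount everything, then restart docker. Device paths and mapper
    # names below are assumptions -- adjust to your setup.

    read -rs -p "LUKS passphrase: " PASS
    echo

    for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1; do
        name="$(basename "$dev")-crypt"
        if printf '%s' "$PASS" | cryptsetup open "$dev" "$name" --key-file=-; then
            echo "unlocked $dev as /dev/mapper/$name"
        else
            echo "failed to unlock $dev" >&2
        fi
    done
    unset PASS

    # Remount everything declared in fstab (mergerfs branches and the pool);
    # without this OMV was erratic about automounting after decryption.
    mount -a

    # Restart docker so containers come up with the decrypted data present.
    systemctl restart docker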



    I have several LUKS-encrypted drives that I'm not interested in auto-decrypting during boot. I'm fine with having to SSH in and run a script that asks for my passphrase once in order to unlock them all (plus restart Docker and the mergerfs pool) whenever the machine needs to be restarted for whatever reason. Is there a popular script/way to do this? Even if it's not quite what I need, I could modify it. I searched the forum a little and didn't find anything quite like that. I found this https://github.com/TheFax/Auto…ster/automount_cryptodisk which isn't exactly what I need, but I could modify it to fit me if there's nothing better out there. I just don't want to bang my head against the wall debugging a script while reinventing the wheel.

    Backing up and restoring the header
    The header on a LUKS-encrypted device contains details of the encryption method and cipher, and also the master key needed for en-/decryption, itself encrypted by up to 8 passphrases, stored in key slots 0-7. It is advisable to make a backup of the header whenever you create an encrypted device or add, remove or change any of the passphrases. If the header or any of the key slots become corrupt (or you accidentally remove all the keys! - see above), you can restore the header from a backup, which will restore the passphrases as they were in the backup.
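
    From the CLI this is a one-liner each way with cryptsetup (the device and file paths below are examples):

    Code
    # cryptsetup luksHeaderBackup /dev/sdb1 --header-backup-file /root/sdb1-luks-header.img
    # cryptsetup luksHeaderRestore /dev/sdb1 --header-backup-file /root/sdb1-luks-header.img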

    Does the plugin allow for backing up headers, or is this a suggestion that you should do it in the CLI? I can't find how to do it from the plugin.

    I created a docker network for all of the containers whose access is going to be managed by npm so they're all on the same network. No external access is being granted to workbench!


    Quote

    In any case, if what you want is for npm to point to the GUI interface, you simply have to write the IP and port of the GUI in those host fields in npm. That should send you to the GUI.

    Yeah, I was being an idiot; I eventually managed to figure this out yesterday. Not sure why I didn't try it; for some reason I assumed it wasn't available to the proxy, even though the network is a bridge :facepalm:


    This issue has been solved. Thanks!

    I got nginx-proxy-manager working via Compose. It points to Deluge and to NPM itself using duckdns subdomains and Let's Encrypt; I'm able to reach both of these apps on their own subdomain with a valid Let's Encrypt certificate and HTTPS, so I know NPM is working. But when I try to point to OMV's Workbench, both on port 80 and after changing the port to something like 8888, I keep getting "502 Bad Gateway", and I'm not sure why. I am able to reach it via HTTP, though, using the domain set for it in NPM. I've tried both enabling and disabling HTTPS on Workbench; enabling it can make Workbench unreachable (omv-firstaid has been very helpful in getting access back! That didn't happen before I set up NPM). Anyway, how can I get NPM to point to Workbench properly via HTTPS? All access is done locally.
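
    For reference, the fix quoted above amounts to proxy-host fields along these lines (example LAN IP; Workbench left on plain HTTP so NPM terminates TLS):

    Code
    Scheme:               http           # Workbench itself stays on HTTP
    Forward Hostname/IP:  192.168.1.50   # the OMV host's LAN IP (example value)
    Forward Port:         80             # or 8888 if Workbench was moved
    SSL tab:              request a Let's Encrypt certificate, enable Force SSL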

