Posts by AbrahamLincoln

    Thanks! That was enough of a clue for me to sniff it out! Somehow the nginx and proxy-host folders had ended up owned by root. Fixed it, and it seems to be humming along.

    Thanks for having a look!


    Code
    ➜  data> cd logs
    ➜  logs> la
    total 16K
    -rw-r--r-- 1 OMVUser users 1.1K Jan  8 10:01 fallback_access.log
    -rw-r--r-- 1 OMVUser users  476 Jan  8 09:56 fallback_error.log
    drwxr-sr-x 2 root    root  4.0K Jan  8 10:20 nginx/
    drwxr-sr-x 2 root    root  4.0K Jan  8 10:20 proxy-host/
    ➜  logs> chown OMVUser:users nginx
    ➜  logs> chown OMVUser:users proxy-host
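    In case it helps anyone else landing here: if files *inside* those folders are also root-owned, a recursive chown (-R) covers the contents too. A sketch below, demoed on throwaway paths with the current user/group so it runs without root; on the NAS it would be OMVUser:users as in the session above.

    ```shell
    # Recursive variant of the fix above: chown -R also fixes files inside
    # the directories, not just the directories themselves.
    # Throwaway demo paths; the current user stands in for OMVUser:users
    # so the demo doesn't need root.
    mkdir -p /tmp/logdemo/nginx /tmp/logdemo/proxy-host
    touch /tmp/logdemo/nginx/fallback_error.log
    chown -R "$(id -un):$(id -gn)" /tmp/logdemo/nginx /tmp/logdemo/proxy-host
    stat -c '%U' /tmp/logdemo/nginx/fallback_error.log   # prints the new owner
    ```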

    Ah, I had added those while troubleshooting the earlier problems, based on something I found online...


    So I ran it without those lines and still can't keep the container running.

    Here's the log:


    Code
    fail2ban  | 2025-01-09 16:27:32,337 fail2ban.jailreader     [1]: NOTICE  No file(s) found for glob /var/log/proxy-host*.log
    fail2ban  | 2025-01-09 16:27:32,337 fail2ban                [1]: ERROR   Failed during configuration: Have not found any log file for npm-docker jail
    fail2ban  | 2025-01-09 16:27:32,337 fail2ban                [1]: ERROR   Async configuration of server failed
    fail2ban  | Traceback (most recent call last):
    fail2ban  |   File "/usr/lib/python3.12/site-packages/fail2ban/client/fail2banserver.py", line 193, in start
    fail2ban  |     cli.configureServer(phase=phase)
    fail2ban  |   File "/usr/lib/python3.12/site-packages/fail2ban/client/fail2banclient.py", line 243, in configureServer
    fail2ban  |     raise ServerExecutionException('Async configuration of server failed')
    fail2ban  | fail2ban.client.fail2bancmdline.ServerExecutionException: Async configuration of server failed
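    The NOTICE line looks like the actual cause: fail2ban expands the jail's logpath glob at startup, and a pattern matching zero files is a fatal error that takes the whole server down. A quick illustration of the glob behavior, using throwaway paths (not the container's real ones):

    ```shell
    # fail2ban resolves the jail's logpath glob when it starts; if nothing
    # matches, the jail fails to configure. Throwaway demo of the glob:
    mkdir -p /tmp/f2b-demo
    ls /tmp/f2b-demo/proxy-host*.log 2>/dev/null || echo "no match -> jail fails"
    # Once a matching file exists, the same glob succeeds:
    touch /tmp/f2b-demo/proxy-host-1_access.log
    ls /tmp/f2b-demo/proxy-host*.log
    ```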


    And here's the cleaned up compose file:


    This is helpful, thanks.

    I did a bit more poking around, but still no luck.


    If you don't mind, here's the NPM compose file. I'm using the same global variables I use for all my containers:


    And this is the fail2ban container:


    and global environment file, just in case:


    Code
    PUID=1000
    PGID=100
    TZ=America/New_York
    DATA=/srv/dev-disk-by-uuid-a0ce297e-9f49-4168-8f3c-454009c46288
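    For context, those are plain KEY=VALUE pairs that compose substitutes into ${...} references in the compose file. A minimal sketch of that expansion (the demo file path is throwaway; values copied from above, with DATA omitted for brevity):

    ```shell
    # Write a throwaway copy of the env file and source it, roughly the way
    # compose expands ${PUID}-style references from a .env file.
    cat > /tmp/demo.env <<'EOF'
    PUID=1000
    PGID=100
    TZ=America/New_York
    EOF
    set -a; . /tmp/demo.env; set +a
    echo "${PUID}:${PGID} ${TZ}"
    ```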

    I've followed the NGINX Proxy Manager with fail2ban guide (thanks BernH!) but fail2ban repeatedly goes down.


    The problem seems to happen in Step 4 with setting up the logpaths.


    I added code from step 4 to the npm-docker.local file in the jail.d directory.

    When I ran the fail2ban container, it would go down. Here's the log:


    I commented out the logpath and it runs:


    Code
    logpath = /var/log/default-host_*.log
              /var/log/proxy-host-*.log


    I'm at the boundary of my knowledge here. Maybe these log paths are different depending on your configuration, but I didn't see anything in the guide about that. I've tried some troubleshooting but didn't have any luck.

    Any ideas? (thanks in advance)

    That is telling you that it will depend on each case. Specifically, if your ISP is behind CGNAT you will have no choice but to send ALL your data through their servers.

    You can close your eyes if you want, but from the moment you depend on a third party you depend on their security and speed. If you use Tailscale you depend on Tailscale and Wireguard. If you use Wireguard you only depend on Wireguard. It is quite simple to understand.

    Thanks, this is exactly the kind of thing I was looking for when I said "Is that a fair summary? Am I miscalculating the risk somehow?" above.

    What I'm gathering is that for you, encrypted data on a machine you don't own, isn't worth it. For me, I trust encryption so I don't have to trust the machines that encrypted data travels through. Which means for me, it's worth it. Sounds like we're different in that way.

    That is not right. Traffic passes through Tailscale servers.

    I said "data" deliberately. Yes, your public keys and metadata go through tailscale servers, but your data does not. And then there are other advantages (magicDNS, setting up other machines, ease of use for friends who are less technical, etc) that again, for me, today, tip the scales.

    But if you don't agree, don't want to read their docs, or won't believe them if you do: 👍

    For anyone else reading this thread, the metadata tailscale has access to is:

    Quote

    limited metadata regarding your device used to access the Tailscale Solution, such as: the device name; relevant operating system type; host name; IP address; cryptographic public key; user agent (where applicable); language settings; date and time of access to the Tailscale Solution; logs describing connections and containing statistics about data sent to and from other devices (“Inter-Node Traffic Logs”); and version of Tailscale Solution installed. This information is needed to provide the Tailscale Solution to you. However, please note that Tailscale does not process, or have the ability to access, the content of User traffic data transmitted through the Tailscale Solution, which is fully end-to-end encrypted.



    But if there is no Tailscale there is no documentation to read.

    Well, there would be wireguard documentation to read.

    What does that mean? If you do not need to go through a third-party server, I would not do it in any case.

    For me, the simplicity of setting up Tailscale on multiple machines with different operating systems is worth relying on Tailscale's service instead of something 100% my own; that's a risk I'm willing to take.

    From reading their docs, the data itself actually doesn't go through their servers and everything is encrypted.

    via their site:


    If what you're saying is that running your stuff 100% through hardware you own is always better: I get the spirit of that as a principle, and if it works for you, great! I admire it. But I'm personally willing to take some calculated risks, and this seems like a pretty safe bet. Is that a fair summary? Am I miscalculating the risk somehow?

    I got paperless-ngx running in docker. If it's helpful, here's the docker file I used:



    And the environment file:

    I'm trying to get Quick Sync Video working by following this guide:

    How to activate Intel Quick Sync in docker (Jellyfin, Handbrake,...)

    When I run vainfo, I see these errors:



    I'm using an Intel® Pentium® Processor N3700, which supports QSV.


    I added these lines to my docker file for Plex:


    Code
    devices:
      - /dev/dri/renderD128:/dev/dri/renderD128
      - /dev/dri/card0:/dev/dri/card0


    When I force a transcode it seems to be working. I see the (hw) note in Plex and when I run intel-gpu-top I'm seeing activity.

    But I do see those errors. Should I be concerned?


    :S

    I was in a similar spot and wanted to access the OMV webGUI running under the Let's Encrypt certificate. However, I couldn't find a solution and ended up getting this working more quickly and easily another way. I consider it good enough.


    I generated my own self-signed certificate within OMV, under System > Certificates > SSL.


    Then I selected that certificate under System > General Settings > Web Administration, under Secure connection, enabled SSL/TLS, and picked a port other than 443, since 443 was already used by the Let's Encrypt certificate.


    You'll get warnings in your browser since it's self-signed, but it works and it's encrypted.
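    For the curious, what OMV generates under System > Certificates > SSL is an ordinary self-signed certificate; the command-line equivalent looks roughly like this (CN and file paths are demo values only, OMV handles all of this for you):

    ```shell
    # Generate a throwaway self-signed certificate, roughly what OMV does
    # behind the scenes (subject and paths are just demo values).
    openssl req -x509 -newkey rsa:2048 -nodes \
      -keyout /tmp/omv-demo.key -out /tmp/omv-demo.crt \
      -days 365 -subj "/CN=omv.local"
    # Confirm the subject on the resulting cert:
    openssl x509 -in /tmp/omv-demo.crt -noout -subject
    ```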


    If someone more clever than me has a way to set something up with the nginx proxy-conf stuff, that'd be great to see...

    Ah, look at that - I will put that Docker trick into use!


    And to clarify the rest, just to make sure I understand...


    Reading some of your posts, it sounds like you don't recommend any adjustments at all to the Physical Disk Properties in OMV under Disks > Edit, including spindown time and write cache, because they don't really work.


    I did a quick test: disabling Advanced Power Management and Automatic Acoustic Management in Physical Disk Properties was enough for the drive to remount automatically on reboot, so success there. But I like the idea of spindown and write cache, if they work.


    And you recommend updating SATA firmware using the method here:
    https://wiki.odroid.com/odroid-xu4/software/jms578_fw_update


    And then managing spindown via the commandline, as in Example 4 on that page, bypassing the web gui entirely.


    And you underclocked using this as a guide:
    https://wiki.odroid.com/odroid…requtils_cpufreq_govornor


    I'd read about the CPU gov but dismissed it as kinda unnecessary for me. But now I'm wondering.


    All that said, I wonder if spindown and underclocking would even be a factor for me. I've got an 8TB drive that I'll be using as a Plex server, with Transmission seeding much of what's downloaded, and Syncthing running on a few gigs of files shared with a handful of people as a Dropbox replacement. (By the way, I was using a Banana Pro for all this for a few years and have already been stunned at the improvement.) I also have a Pi-hole on an old RPi I may consolidate onto the HC2 as well, figuring it can probably handle it. I suppose the disk may spin down now and again, but I wonder if it's worth bothering. ...and now I'm also wondering if the passive cooling is enough, or should I get a small, quiet fan?


    Anyway, let me know what you think. I appreciate you sharing your experience, having already headed down this path.

    Thanks for taking the time. I will give that a shot and report back.


    In the meantime, could you tell me more about this?


    Quote

    Also, you may want to create a shared folder on the HDD for Docker images. Step 13. Base path for docker instead of /var/lib/docker on the SD card. Also a share for the docker configs.


    Following TechnoDadLife's videos, in sharedfolders on the HDD I've created an "appdata" folder and set all the /config directories in my docker containers to that appdata folder, creating a subdirectory for sonarr, transmission, etc. as I go.


    But it sounds like you're describing that and something else? Can you point me to some more info on how to set that /var/lib/docker path to the HDD?


    Thanks so much!

    Short version: after creating an ext4 filesystem on an 8TB drive, I reboot, am notified there are errors, and then the drive does not mount automatically.


    I've been working on setting up the HC2. I've done this setup a couple of times and kept notes to verify I could reproduce it and didn't miss something. Here's my whole process from the start, just in case it's relevant.


    • Download and Install on SD Card with Etcher

      • OMV_4_Odroid_XU4_HC1_HC2.img.xz
    • Attach drive and SD card, start it up.
    • Wait 30+ minutes
    • Login to web gui
    • General Settings > Change web admin password
    • Set Date and Time
    • Specify ethernet on network - probably not necessary, but helpful for monitoring
    • Monitoring - turn on
    • Update Management - ran update. Saw errors in dialog while upgrading. "Bad Gateway"

      • To check on this I SSH'd in as root. Ran apt-get -f install - saw 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. and decided to move on.
    • Reboot
    • Check for updates again. Nothing to be updated.
    • OMV-Extras - enabled Docker CE
    • Installed Docker-gui via commandline (because the past few times, doing it the "right" way caused the error "Failed to execute XPath query '/config/services/docker'", and I (right or wrong) learned from this thread to just install via the commandline with apt-get install openmediavault-docker-gui)
    • Reboot. (Maybe superstitiously)
    • Disks > Edit

      • Set spindown, noise vs. performance level, etc
    • File Systems > Create

      • set up as ext4 with the name "eighttb"
    • Mount the filesystem via the gui - everything looks good. The device is /dev/sda1, the label is eighttb
    • Restart - the gui shows a popup with "an error occurred" and only an "ok" button, which doesn't let me see the error.
    • Upon reboot, the filesystem is not mounted.


    Looking in OMV under FileSystems, two filesystems appear related to the drive:


    /dev/disk/by-label/eighttb
    and
    /dev/sda1


    See here:



    IIRC, the "/by-label/" thing is like a shortcut in the fstab? But for some reason I had the following problem in previous attempts: when I set up shared folders, upon reboot the ../by-label/... mount was referenced but missing, while the sda1 listing was unreferenced and unmounted.
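    As far as I understand it, entries under /dev/disk/by-label/ are udev-maintained symlinks pointing at the real device node, and fstab can reference either one. A throwaway demo of the mechanism (fake paths, since real device labels are system-specific):

    ```shell
    # /dev/disk/by-label entries are just symlinks that udev keeps pointed at
    # the real partition device. Simulate one with fake paths:
    touch /tmp/fake-sda1                           # stand-in for /dev/sda1
    ln -sf /tmp/fake-sda1 /tmp/by-label-eighttb    # stand-in for the by-label link
    readlink -f /tmp/by-label-eighttb              # resolves to the real node
    ```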


    I tried a few things on previous attempts to fix it, like:


    Code
    omv-mkconf fstab
    omv-mkconf systemd
    mount -a


    I also added mount -a to a scheduled job (see more on that below), but that didn't solve it. Perhaps I missed something.


    I dug through the syslog briefly and pulled a couple excerpts that may be helpful (total guesswork here, so apologies if it's useless):

    Code
    Mar 17 11:04:14 odroidxu4 systemd[1]: apt-daily.timer: Adding 29min 31.747306s random time.
    Mar 17 11:04:15 odroidxu4 kernel: [  229.851883] sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Mar 17 11:04:15 odroidxu4 kernel: [  229.851908] sd 0:0:0:0: [sda] tag#0 Sense Key : 0x2 [current] 
    Mar 17 11:04:15 odroidxu4 kernel: [  229.851928] sd 0:0:0:0: [sda] tag#0 ASC=0x4 ASCQ=0x1 
    Mar 17 11:04:15 odroidxu4 kernel: [  229.851939] sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x88 88 00 00 00 00 03 a3 81 2a 00 00 00 00 08 00 00
    Mar 17 11:04:15 odroidxu4 kernel: [  229.851948] print_req_error: I/O error, dev sda, sector 15628052992
    Mar 17 11:04:15 odroidxu4 systemd-udevd[375]: worker [474] terminated by signal 9 (Killed)
    Mar 17 11:04:15 odroidxu4 systemd-udevd[375]: worker [474] failed while handling '/devices/platform/soc/soc:usb3-0/12000000.dwc3/xhci-hcd.3.auto/usb4/4-1/4-1:1.0/host0/target0:0:0/0:0:0:0/block/sda/sda1'



    Sooooooo...


    I've been very close to having the whole NAS set up with 5 docker containers running and configured, then hit this snag. I've learned a lot tracking it down, but... ready to fix it and move on.


    I tried what was suggested in this thread, which was creating a scheduled job to run mount -a on reboot. This did mount the drive, but only after my docker containers had started, which meant they didn't work. So that solution didn't work for me.


    Insights and kind instructions are appreciated!