Posts by prplhaz4

    By default, when running important apps, you try to relocate them somewhere other than /, because if your root device fills up, the app keeps working, and so on.


    Edit: And since k8s is its own ecosystem, you try to shelter that ecosystem from the others.

    I agree here too, with a couple of additional points.

    1. Since images are immutable, constantly changing, and easily re-downloadable, they don't belong anywhere near my root drive or its backups

    2. Image size is volatile, and with omv using the whole disk by default and favoring "minimal" installs, it just feels like additional risk. K3s service availability should always be secondary to core OS services (otherwise, wouldn't you run k3os?)


    As for the filesystem dependency: many of us running docker on omv already have this dependency, and systemd seems to handle it quite well


    All that being said, great idea and a great plugin overall!

    For anyone following, if you can keep your finger off the update button, it looks like a fix on the docker side is in the works:

    Daemon does not start without apparmor · Issue #44900 · moby/moby
    https://github.com/moby/moby/issues/44900

    Quote

    FYI, this was discussed in today's maintainer call and a revert to the apparmor startup logic is being fast-tracked for a 23.0.1 release: #44902 (comment)
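
    If keeping your finger off the update button means the command line for you, one way to sit out the broken release until 23.0.1 lands is an apt hold (a minimal sketch, assuming the docker-ce packages from the Docker apt repository):

    Bash
    # Hold the Docker packages so a routine apt upgrade won't pull them forward
    sudo apt-mark hold docker-ce docker-ce-cli
    # Once the fixed release is out, release the hold
    sudo apt-mark unhold docker-ce docker-ce-cli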

    1. Yes, it is possible - and IMO accessing the OMV web UI over SSL is a requirement (at the very least to avoid sending OMV credentials in the clear over your network). The other protocols used by OMV aren't addressed by the proxy, but that is another can of worms entirely

    2. Yes, it is possible - I run Traefik in docker that is running on the OMV host - Traefik manages the wildcard SSL cert for my domain and proxies requests to all apps running in docker or elsewhere on my network (incl OMV and Cockpit web UIs)

    3. "Best" will always be a debate - Traefik in docker is GREAT when all apps are in docker, but becomes more of a pain when you want to use it for docker and non-docker apps (omv, cockpit for instance), but it is doable, and is what I've chosen to use.


    The main challenges with running a containerized proxy are:

    1. docker containers can't access applications running on the host (the workaround is proxying to the docker gateway IP or `host.docker.internal` - see the sketch after this list)

    2. an http/s proxy usually needs to bind to 443 (and probably 80, among potentially many others), so anything running on the host needs to be bound to ports that don't conflict with the proxy (omv web ui for instance) - workaround is changing anything running on the host to NOT use port 80, 443, or anything else that should be proxied
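
    For reference, here is how the two workarounds from point 1 look in practice (a sketch: the `bridge` network name assumes the default Docker setup, and the `host-gateway` mapping needs Docker 20.10+):

    Bash
    # Workaround A: find the gateway IP of the default bridge network (often 172.17.0.1)
    docker network inspect bridge --format '{{ (index .IPAM.Config 0).Gateway }}'

    # Workaround B: make host.docker.internal resolve to the host from inside a container
    docker run --rm --add-host=host.docker.internal:host-gateway alpine ping -c 1 host.docker.internal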

    I would move (and have moved) both the docker storage location and all your container volumes to the hard disk (the process is pretty well-documented on the forum somewhere; a sketch of the data-root change is below). Keep the SD card abuse to just what OMV is writing. Containers and logs tend to multiply...
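
    In case it saves someone a search, the usual mechanism is the data-root option in /etc/docker/daemon.json (a minimal sketch - the /srv/dev-disk-by-label-data/docker path is a placeholder for your own data disk, and merge the option into any existing daemon.json rather than overwriting it):

    Bash
    # Stop Docker, copy the existing data to the disk, then point the daemon at it
    sudo systemctl stop docker
    sudo rsync -a /var/lib/docker/ /srv/dev-disk-by-label-data/docker/
    echo '{ "data-root": "/srv/dev-disk-by-label-data/docker" }' | sudo tee /etc/docker/daemon.json
    sudo systemctl start docker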

    You can use the "recreate" button in Portainer - after you click it, there is an option to "pull latest image".


    I wouldn't recommend ouroboros, watchtower, or Portainer -> recreate (or any automatic updating) for the UniFi controller. Ubiquiti have had some pretty terrible "stable" releases, and they are a perfect example of why a critical service should be pinned to a specific version (not "latest") and updated in a controlled and deliberate manner (after appropriate backups have been made).
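
    To illustrate the pinning, something along these lines keeps the controller on a known-good release until you deliberately move it (a sketch assuming the linuxserver/unifi-controller image; the tag and paths are placeholders - substitute a version you have actually tested):

    Bash
    # Pin an explicit version instead of :latest so recreate/watchtower can't surprise you
    docker run -d --name=unifi-controller \
      -p 8443:8443 -p 8080:8080 \
      -v /srv/appdata/unifi:/config \
      lscr.io/linuxserver/unifi-controller:7.3.83  # placeholder tag - pick your tested version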

    I think you are running into the same problem as another user here: RE: Trying to access SMART menu in the WebUI throw an error message


    IMO this is a bug in smartctl: if every disk on the HBA has a dedicated device file, then there is no need to use the -d cciss,N command-line argument. That argument is only needed when there is only ONE device file for the whole HBA and you want to request the SMART info for a specific disk behind that 'wall'.
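
    For context, this is what the two situations look like with smartctl (a sketch; the device file and disk index depend on your controller):

    Bash
    # One device file for the whole HBA: address physical disk N behind it explicitly
    smartctl -a -d cciss,0 /dev/sda
    # Dedicated device file per disk: no -d argument should be necessary
    smartctl -a /dev/sda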


    Please execute the Python code mentioned in the link above - I'm interested in the result. But I bet that the result is None, because the HBA is using the generic sd driver.

    Thanks for this thread - I followed a very similar line of troubleshooting, as I've recently switched my H240 from RAID5 to AHCI/HBA, and am not able to see any SMART status (but the drives did appear in Disks and I've successfully created a pair of btrfs mirrors).


    When running the commands from the other thread, the driver reported is `hpsa` for all drives attached to the H240:


    Code
    >>> import openmediavault.device
    >>> sd = openmediavault.device.StorageDevice("/dev/sda")
    >>> sd.host_driver
    'hpsa'

    How did you create the bridge?


    I believe I'm using the proxmox kernel (Debian GNU/Linux, with Linux 5.4.44-2-pve), and I can do stuff in cockpit, but I cannot create the bridge there - and I'm guessing that you didn't either. Did you create it on the command line, or somewhere in OMV?

    I'm looking for a solution to this too.


    I was looking for an "easy" way to run a vm on omv, and cockpit would fit the bill if it worked, but I'm running into this same network bridge issue...

    votdev, it looks like debug mode is still on in the forum software, so PHP errors dump debug info to web clients. I tried to grab a copy/paste but ended up overwriting my clipboard - sorry!


    Other than that - things look great to me so far...

    haha...I was waiting to hear back too. Guess I should do my part to help humanity during this pandemic and just test it...

    Alright - good news - this issue looks to be fixed in bpo.4!


    For some reason, after installing and rebooting, the bpo.3 kernel was selected by default and everything was broken again.


    After installing this new kernel, BE SURE to change the kernel to bpo.4 in OMV Extras before rebooting.

    It mostly works as expected, but you will have to define static rules that point to the docker gateway (probably 172.17.0.1) if you want to proxy services running on the host (like the OMV web interface or cockpit).


    Tips:

    - Use a DNS provider supported out of the box by Traefik/lego

    - Progress gradually: make sure DNS works as expected (internal/external), get the Traefik dashboard working, then Let's Encrypt, then add services to Traefik (a rough check sequence is sketched after this list)

    - Change other apps (omv web ui) off of port 80 or 443 before trying to start Traefik

    - Traefik/Cockpit example: https://blog.jjhayes.net/wp/2019/11/24/cockpit-and-traefik/

    - Traefik host network "bug": https://github.com/containous/traefik/issues/5559
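
    To make the "progress gradually" tip concrete, a check sequence might look like this (a sketch; omv.domain.tld, traefik.domain.tld, and the public resolver are placeholders):

    Bash
    # 1. DNS: does the name resolve both internally and externally?
    dig +short omv.domain.tld                # your internal resolver
    dig +short omv.domain.tld @1.1.1.1       # a public resolver
    # 2. Is the Traefik dashboard reachable?
    curl -sk -o /dev/null -w '%{http_code}\n' https://traefik.domain.tld/dashboard/
    # 3. Did Let's Encrypt actually issue the certificate?
    openssl s_client -connect omv.domain.tld:443 -servername omv.domain.tld </dev/null 2>/dev/null \
      | openssl x509 -noout -issuer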

    There is an interesting video here that installs Pi-hole without macvlan.

    I'm guessing this is the side effect of not doing so:



    Quote from the link

    Pihole won’t show IP addresses of individual clients on the network. It will only show the IP Pihole created for the virtual network.

    I think it all comes down to the Linux kernel version eventually. If you refer to this link, people are reporting the same macvlan issue on the 5.4.x kernel.


    The reason it worked on 4.19.102-v7l+ with rpi-update is that they back-ported the patch to kernel 4.19.x.


    So if you can find a way to update your arm64 kernel to 5.4.14, that should fix the issue.

    I had this problem after updating OMV5-amd64 kernel to Debian GNU/Linux, with Linux 5.4.0-0.bpo.3-amd64.


    Rolling back to Debian GNU/Linux, with Linux 5.4.0-0.bpo.2-amd64 using the omv-extras plugin fixed it for me.

    Good morning all,


    I'll allow myself to revive this topic: I have docker installed on my OMV5 and all my applications are accessible through Traefik - it's really great!
    But what would be the process to integrate OMV? How should I configure it? With my docker containers it is wonderfully simple using labels, but OMV is a physical machine and I don't know how to go about it...

    For anyone stumbling upon this, here's the solution I'm using to proxy traffic to things running on my docker host.
    1. Move OMV web to a new port (in "General Settings" - this example uses 88)
    2. Create static rules for Traefik that point to the docker host/gateway IP as below (It'll likely be 172.17.0.1 or 172.18.0.1 - I've included cockpit here also)
    3. Replace domain.tld with your domain
    4. For more info on Cockpit/Traefik - see here: https://blog.jjhayes.net/wp/2019/11/24/cockpit-and-traefik/
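
    Since the exact rules depend on your setup, here is a minimal sketch of what step 2 can look like as a Traefik v2 file-provider config (the file path, hostnames, and the 172.17.0.1 gateway are assumptions - adjust to whatever your environment uses):

    Bash
    # Write a dynamic-config file for Traefik's file provider
    sudo tee /etc/traefik/dynamic/host-services.yml <<'EOF'
    http:
      routers:
        omv:
          rule: "Host(`omv.domain.tld`)"
          service: omv
        cockpit:
          rule: "Host(`cockpit.domain.tld`)"
          service: cockpit
      services:
        omv:
          loadBalancer:
            servers:
              - url: "http://172.17.0.1:88"      # OMV web UI moved to port 88 in step 1
        cockpit:
          loadBalancer:
            servers:
              - url: "https://172.17.0.1:9090"   # Cockpit serves TLS by default - see the linked post
    EOF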


    It appears that the results of a scheduled job are not logged (other than in an email notification). What is a good way to make sure the output is saved? It seems if I pipe it to a file, the email notification doesn't get triggered...


    Bash
    restic [args here] stats latest > /var/log/restic.log
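
    One possibility, if the notification email is built from the job's stdout: duplicate the stream with tee instead of redirecting it away (a sketch; the log path is arbitrary):

    Bash
    # tee writes the output to the log AND passes it through, so the email still has content
    restic [args here] stats latest 2>&1 | tee -a /var/log/restic.log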

    I ran into the same issue today. I added my log entries to the github issue. Unfortunately, I'm new to OMV, so I have no idea if/when this was last working. Thanks for the post - hopefully we can find a better resolution than "Disable NUT plugin"...