Posts by 1activegeek

    If it helps, here is a shot of my config for the container. This is set to run the cron job as shown in the schedule field; you can adjust this to meet your required timeframe. The other thing to be sure of is that you map the Docker socket so it can run the API commands locally. If you have these two things, you should be all set. You can of course make other changes or add other variables, but this is the default way to get up and running quickly, and it performs the basics.
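    In case the screenshot doesn't come through, a rough docker-compose sketch of that setup might look like this - note the image name, schedule, and option names here are what the current Watchtower docs describe, not a copy of my exact config:

```yaml
# Hypothetical compose sketch for Watchtower; adjust schedule/options to taste.
services:
  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      # Map the Docker socket so Watchtower can run API commands locally
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # 6-field cron (with seconds): Sundays at 2 AM
      - WATCHTOWER_SCHEDULE=0 0 2 * * 0
      # Remove the old image after updating a container
      - WATCHTOWER_CLEANUP=true
```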

    Can you source me a link to an external repo to test? This is new to me. Is it just docker pull?

    Yes, I believe that is all that should be needed. This is referred to as a different Docker registry. The default registry most people use is described in the Official Docker Registry Docs - this should cover the basics of the allowed formatting. As an example, this is the container I was trying to pull, with reference documentation - ElasticSearch Container. I'm working on building an ELK stack using the official images. They have options available on Docker Hub, but as of the next revision they will only support their private registry, so I'm starting with the private option now rather than migrating later. As seen at the top of the page, the command is:
    docker pull
    So the resulting format is just the same, but it requires support for fqdn:port/source/project
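    Concretely, the only difference from a normal pull is the registry prefix - these are sketches (the Elastic tag shown is just an example of the format):

```shell
# Pull from a third-party registry instead of Docker Hub:
docker pull docker.elastic.co/elasticsearch/elasticsearch:6.2.4

# Same shape with an explicit port on the registry host:
docker pull myregistry.example.com:5000/myteam/myapp
```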

    @subzero79 & @ryecoaaron - awesome, thanks for the quick work whipping this into shape - it was a minor issue I think, but nice to have it resolved. Now I don't have to keep checking the variable :D

    @flvinny521 - are you talking about when you need to update the image for a running container, i.e. a new update comes down and you want to run the updated version? As subzero hinted, there is a bit of juggling to clean it all up properly. I've opted for using the Watchtower container to auto-update my images. As an example, I'm currently running the following:

    As you can imagine, pulling, restarting, cleaning up, etc. becomes a difficult task with this many containers. The container that has been running the longest is the Watchtower container. It's configured so that every Sunday night, at around 2 AM, it runs a check for updated images; if one exists, it pulls it and automatically updates the running containers by restarting them with the new image. I then have a scheduled task that runs at, say, 3 AM, set to just run
    docker system prune -f
    This will do the cleanup for you in automatic fashion. So each weekend, Sunday night/Monday morning, I'll get an output of the cleanup (I set the scheduled job to email its output), and I'll often see the updated containers with created dates much more recent than when they were originally set up inside OMV.
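    For reference, an OMV scheduled job is just cron under the hood, so the raw crontab equivalent of that cleanup task would be something like this (the day/time is whatever you choose):

```shell
# m  h  dom mon dow  command
# Mondays at 3 AM, after Watchtower's Sunday-night updates, prune
# stopped containers, dangling images, and unused networks:
0 3 * * 1  docker system prune -f
```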

    Just wanted to share my two cents as this has proved to be the easiest to handle in terms of regular upkeep. I've been running my containers solidly for many months on OMV this way.

    Just wanted to jump in and point out a few things I noticed recently in testing/using the docker plugin:

    • Clear log - extremely useful!! Thank you for this.
    • I think it's related to the macvlan usage - I can't seem to get the hostname variable to take when using macvlan. I set up a container using a macvlan, and for some reason the hostname falls back to the container ID.

      • When I try to "modify" the container, it shows me the hostname field replaced with the container ID - if I edit and save, the same thing happens: it uses the new container ID instead of the assigned hostname
      • All other containers NOT using macvlan are working as expected with the hostname setting
    • Support for non-Docker-Hub sources - there doesn't seem to be (or I couldn't find) a way to import images from outside the usual Docker Hub repositories.

      • The thought here is some extra logic: if given just a <source>/<project> format, then query Docker Hub. If receiving extra input like: - then query the actual domain supplied.
      • Not sure if it's as simple as that, but it would be nice to support external sources as well
      • It seems more companies offering official images like to use this - not sure why
      • The solution for now was manually pulling the images via the CLI; they then showed up in the console. Updating them, though, hits the same issue, so that will need to be done manually as well
      • The Info button also defaults to opening the project on Docker Hub, vs the domain supplied
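    To sketch the detection logic I'm imagining (the function name is mine, but the rule mirrors how the Docker CLI itself decides): treat the first path component as a registry only if it looks like a hostname - i.e. it contains a dot or a colon, or is literally localhost - and otherwise fall back to Docker Hub:

```shell
#!/bin/sh
# Decide which registry an image reference points at.
registry_for() {
  first="${1%%/*}"   # text before the first "/" (or the whole ref if none)
  case "$first" in
    *.*|*:*|localhost) echo "$first" ;;  # looks like a host[:port] -> external registry
    *) echo "docker.io" ;;               # otherwise it's a Docker Hub image
  esac
}

registry_for "nginx"                                         # -> docker.io
registry_for "library/nginx"                                 # -> docker.io
registry_for "docker.elastic.co/elasticsearch/elasticsearch" # -> docker.elastic.co
registry_for "myhost:5000/team/app"                          # -> myhost:5000
```

    The real CLI rule has a couple more wrinkles (normalizing docker.io, the library/ namespace, etc.), but this captures the gist.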

    Are you able to post a clip? When you say gibberish, is it scrambled, useless-data, code-like gibberish? Or is there an output that seems readable, but you're not sure what to make of it?

    I'm not skilled at doing it, but if you check the process tree you should see the currently running process. I know it's possible to attach yourself to that running process in the CLI if you'd like to try to view the current output. Just a thought if it's already underway. 15.5 does sound like it'll take a bit! ;)

    @flvinny521 - if you are running it manually locally, no, I don't believe there is an output. But I do believe it logs to syslog somewhere, so you may be able to find logs in the log section and/or by searching locally on the filesystem. I'm not 100% sure. The best method is to schedule the job to run so you can see the output. Or, if you'd like, you can run it manually on the command line locally and send the output to a log file for your perusal.

    I would agree with @nicjo814 - I've done the same thing for some time now. I switched around running different VMs on an ESX host, I've fiddled with running services locally, I tried running plugins - in the end, Docker tends to have more universality to it, allowing you to run anywhere. I also dug in a bit deeper and even created alternative containers that they hadn't offered at the time, but used their core images (a little GitHub searching and you can find the relevant pieces) - cobbled together, these ran some apps I needed. I too love the flexibility of it NOT modifying my core system to set up the runtimes and components it needs. Its only flaw, which I believe has been overcome now, was that you couldn't run Windows-based apps before. I don't have any I care about, but I know there are lots of Windows-only things in the world. There is fresh support for that now, I just never cared to test the waters with it.

    I'd say if you already have the containers running, focus on that. If nothing else, it's a useful skill to understand how they work, more so than just using OMV plugins, which are more or less just apps installed onto OMV with some type of GUI management plugged into the OMV interface. At the root of it all, you're usually still running a core Linux app under the hood. Docker can be quite useful to know.

    variables are things introduced by the team

    It's actually funny to me that almost everyone on these forums and others running things like Plex, downloaders, and most home-enthusiast-type apps - are running containers! I mean, it's fantastic what they've done, no doubt, but it's amazing how widespread their containers and methodology really are. I've seen others replicate the functionality (maybe not exactly the same) by introducing variables for the user/group that the service runs as inside the container. Welcome to the fold, flvinny; Docker is a lot of fun once you understand it and get the hang of it. It's my go-to platform when I want to try out new apps, since I can spin them up/down quickly and easily without impacting my systems.

    There have been other such changes before with packages called "docker" and "".

    This is true, I do recall when this change came about. Unfortunately I was fresh into understanding and using Docker at the time, so I probably wouldn't have known much better. To my understanding, the docker daemon process wasn't changed/renamed, though; I thought they simply modified the repository and package naming. The core commands and such were still called the same. My only worry here is that the way the daemon is called will change. Thinking further, though, I'd imagine it shouldn't change the commands used in the core Docker components, so I would hope this won't truly change anything other than the package naming. Obviously the package conflicts you mention are at the OS and/or package level for installation, more so than actual commands and daemon process calls.

    I'll proceed with caution, but I think hopefully we should be good. I will try to see if those release notes do call out any gotchas in this regard. Thanks for the input! And thanks for the great plugin work!!

    @nicjo814 - is it likely this change will affect other things as well? I'm just thinking, for example, that I run the Watchtower container to automatically pull updated images and restart the containers with the new images, then run a cleanup afterward. I believe this operates by interacting with the docker daemon, and from what I understand, the docker daemon changes from one version to the next here. I just want to be sure to hold off on updating until I've thoroughly tested this. My containers have been so solid for quite some time, I'd really like not to get flak from the household about things not working again! ^^ Not to mention I also run security camera software in one container, which I'd like to keep running.

    Ok, guess I can mark it back as solved. I think it is related to the number of lines of items it has to report. In this case, I recently set up a new camera system to record to some of the disks. I forgot that when the "purge" comes along to clean up old files (after a set timeframe), it would start wiping out TONS of files per day. You'd be amazed how many short little clips an average household with only 3 cameras creates. For this reason I was getting massive amounts of deletions - I guess so much so that it was clogging the ever-growing list of files being reported without having run a successful sync.

    Put in a fresh ignore rule, and we're off to the races again. Just got my nightly job email tonight. And the telegram notification worked as well.

    Shouldn't be. The Telegram integration is just a job that copies the details whenever an email is sent. In this case, the email is sending and showing the same thing Telegram is. So I failed to note this difference between my original issue and now: this time I'm just continuing to receive the email notification as well, saying the same thing. It's the send-mail instruction that is failing in the error. The Telegram integration is a secondary script in the notification sink.d folder.

    I could be wrong. I'm going to let it run again tonight and validate it's still kicking out. I just ran a manual Snap today so I'll just make sure it doesn't have something to do with the legitimate length of the message somehow. If that functions the same, then I'll try temporarily removing the telegram notification script for tomorrow evening and see if it still happens to try and narrow it down.

    So I'm opening this one back up. It seems I'm getting these notifications again. Instead of receiving the content, just receiving the message that the file was too big.

    It seems this started around 7/19 for me (was away and just getting back on top of things now). Not sure if there were system updates (for OMV as a whole) or specific plugin updates (for SnapRAID) that caused this issue to appear again. Based on my brief look at the system though, it appears the last update to the SnapRAID plugin was around June timeframe. I believe I'd have updated the plugin since then as I have a bi-weekly task to install updates.

    Adding to what @anderbytes mentioned - this is true. This backend communication mechanism is used to ease communication between containers. It's a non-externally-accessible network. That is to say, an external system isn't able to reach the containers by their names, but inter-container communication is easily facilitated this way. This is intended to simplify the task of networking/linking containers that rely on each other (such as a DB with an app, etc.).
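    A quick sketch of what that looks like in practice (the network and container names here are made up):

```shell
# Containers on the same user-defined network resolve each other by
# container name via Docker's embedded DNS; nothing outside the host
# can resolve those names.
docker network create backend
docker run -d --name db  --network backend postgres
docker run -d --name app --network backend myapp
# Inside "app", the database is reachable simply at hostname "db".
```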

    @SubZero - you are correct, host-container communication is not allowed. This is what I was meaning to illustrate in my earlier post. I'm not aware of a testing version which allowed communication, but then again I don't think I tested it when it was in testing! ^^

    Glad to hear, though, that you got it working. By the looks of the code you included, I believe that may have been your missing piece: you did need to give it a parent interface to get outbound communication. It is a tricky configuration overall to get ironed out properly, but when you get it right, it does provide valuable functionality. For some time I actually used this to stop my smart home system (openHAB) and Plex from contending with each other over the ports they try to establish. I know generically you would think assigning ports via mappings would help, but not when they're not flexible and contend with UPnP requirements.

    Long story short - it has its uses at times. So thanks for getting this sketched out. It should prove valuable for anyone deciding to leverage it.

    Agree with @anderbytes - I did notice this odd behavior. I thought maybe something was just wrong with my setup, so I created DNS names to resolve to the individual containers (but this is on the physical network) to make this easier. Glad to hear though that it may simply be something related to the actual plugin config.

    @nicjo814 On the macvlan side, I will say that it was a bit tricky originally getting it all configured and understanding the gotchas. Once I figured out creating it manually, I did have to specify the IP for each container manually, as I was putting it on a real network. It did require setting the IP range for the network, and it did have to be a real operating network (aka one that exists with a gateway, etc.). The other gotcha that may come into play here is that once you set up the macvlan network, you can no longer reach the local host from the containers - this is by design, for security purposes. So if you're saying you can't get outbound, make sure to test against the actual gateway or something else, not just the local host route. And yes, I believe this will also affect the "localhost" interface as well.
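    For anyone setting this up, the manual creation I'm describing looked roughly like the following - the interface name, subnet, and IPs are examples, so adjust them to your network:

```shell
# Create a macvlan network tied to the real LAN (a working network
# with a gateway), using the physical NIC as the parent interface:
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan_macvlan

# Attach a container with a fixed IP on that real network:
docker run -d --name web --network lan_macvlan --ip 192.168.1.50 nginx

# Note: by design the Docker host itself cannot reach 192.168.1.50,
# so test from another machine on the LAN, not from the host.
```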

    Ohh, did something change? Last couple times you've made updates with it, I simply uploaded it through the plugins section, checked the updates which showed up, then installed the update. Worked like a charm. No biggie, just curious what would have changed.