Posts by that0n3guy


    I'm using the openvpn plugin (not AS) and connecting from a remote location. I can connect just fine from the remote, and I have access to my local network through openvpn, but when my remote computer is connected to the VPN, it cannot access the internet anymore (from the remote computer; the OMV server still has internet).

    As soon as I disconnect, the internet is back on the remote.

    I have no idea what I'm missing. I have everything at defaults in the settings, and I've messed w/ adding DNS entries like my router ip, google ( no go. If I uncheck the "default gateway" option, I can no longer get to the OMV server backend via the (only through the openvpn-given ip of and can't access other servers on the network (like

    I must be misconfiguring something. Maybe I need to re-export the certificate/config after changing settings?

    Can anyone help me out?

    I've sorta stopped using the docker-plugin for OMV and use Rancher instead ( It's what I use at work, and I have extra RAM to spare :).

    I thought I would share the docker compose yaml file I just tested:

    I just tested out sparkyballs mythtv server... got it working first try:

    I will note, VNC would work, but wouldn't work with myth-setup. Windows RDP worked just fine though. I set up my HDHomeRun card and storage group in minutes.

    After setting up the backend, I just had to restart the docker container. Then it would work w/ my frontends and mythweb.

    My mythtv install is inside virtualbox on OMV and I would love to move it to docker. I will be messing with this in the next couple of weeks (probably over Christmas vacation), so I'll keep you updated.

    I use docker at work all the time so I am coming to know it pretty well. If this docker image works on unraid, I don't know of any reason why it wouldn't work on OMV.

    Know what would instantly give you hundreds of pre-configured apps for this? If you supported unraid's docker repositories :evil:.

    Basically a "repository" is just a bunch of xml files. Each XML file has the config (volumes, ports, env vars, etc...) for a docker container.

    If OMV supported unraid repos, it would be very easy for newbies to get started :). See the "Adding Template repositories" section here: Many of the repos on are github (ie open source) so I wouldn't think this would violate any type of licensing if you use their open-source configs.
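    For a rough idea of the shape of one of those template files (the field names below are from memory and illustrative; check a real template repo for the exact schema), it's something like:

```xml
<?xml version="1.0"?>
<Container>
  <Name>sonarr</Name>
  <Repository>linuxserver/sonarr</Repository>
  <!-- host-path-to-container-path volume mapping -->
  <Data>
    <Volume>
      <HostDir>/mnt/user/appdata/sonarr</HostDir>
      <ContainerDir>/config</ContainerDir>
    </Volume>
  </Data>
  <!-- port mapping -->
  <Networking>
    <Port>
      <HostPort>8989</HostPort>
      <ContainerPort>8989</ContainerPort>
    </Port>
  </Networking>
  <!-- environment variables -->
  <Environment>
    <Variable>
      <Name>TZ</Name>
      <Value>America/Denver</Value>
    </Variable>
  </Environment>
</Container>
```

    A UI just reads those fields and turns them into a docker run command, which is why supporting them would be cheap.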

    I would think that you would use shared folders as a means of mapping to container volumes.

    I don't know if it would be such a good idea to use shared folders for actually storing containers.

    I sort of agree. I've set my OMV root partition large enough that I can do a lot in there, but a lot of people may not have done that. That would mean you need to move the docker images someplace... a data drive, for example. Moving it is pretty easy:…uff-to-a-different-drive/

    As for the path mapping, something like:

    -v /media/43j2klj234lkj432i243ij4o234/sharedFolder/Folderinside:/opt/webapp

    This means that anything added to /opt/webapp in the container will actually be stored at /media/43j2klj234lkj432i243ij4o234/sharedFolder/Folderinside on the host.
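    For context, a full command using that mapping might look like this (the image name and container name are made up for the example; the command is built as a string and echoed so the sketch just prints it rather than needing a docker daemon):

```shell
# The host side of the mapping is the shared folder's real location under
# /media/<filesystem-uuid>/; the container side is wherever the app expects its data.
HOST_DIR=/media/43j2klj234lkj432i243ij4o234/sharedFolder/Folderinside
CONTAINER_DIR=/opt/webapp

# Echoed so this only prints the command; run it (minus the echo) on the OMV host.
CMD="docker run -d --name mywebapp -v $HOST_DIR:$CONTAINER_DIR my/webapp"
echo "$CMD"
```

    Delete the container and the data under the host directory is still there, which is the whole point of the mapping.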

    Is a container just the configuration file?

    Nope... a container is like a VM image that takes up way less space and is cached so it can be rebuilt very quickly. Did you mess with this at all: ?

    The UI is essentially just building a docker command. See: ... Also:

    Simpler example:

    Note: the image doesn't show this very well... but we need the ability to add multiple "paths", "ports", and environment variables.
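    To illustrate what such a UI would generate behind the scenes, here's a sketch that assembles one docker run command from several of each field (all paths, ports, and values are invented; the command is echoed rather than executed):

```shell
# Each field in the UI form becomes one flag on the generated command.
VOLUMES="-v /media/uuid1/tv:/tv -v /media/uuid1/config:/config"   # "paths"
PORTS="-p 8081:8081 -p 9091:9091"                                 # "ports"
ENVVARS="-e TZ=America/Denver -e PUID=1000"                       # env variables

CMD="docker run -d $VOLUMES $PORTS $ENVVARS some/image"
echo "$CMD"
```

    That's all the UI really has to do: collect repeated path/port/env rows and concatenate the flags.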

    Also, another UI out there (more complicated)... with demo:

    Here is my nginx-phpfpm dokku dockerfile:

    It's meant to be used as a base for a dokku app, so it doesn't have an app in it. Basically, you could create a simple dockerfile that references the above docker image. Your new dockerfile essentially just adds your app to `/app` and the app will be exposed via port 80.

    The above uses scripts for setting up phpfpm and nginx. Basically, start by reading the dockerfile. It runs scripts to set up other scripts. This uses so read that (particularly the init and runit sections of the docs) before looking at things.


    • Scripts are run from the dockerfile, so they run when the image is built.
    • Scripts named init are added via the setup scripts so they run when the container starts (useful for setting permissions, tweaking config files, etc.). I rarely start a container without rebuilding it, so essentially an init script runs on "first boot", if you want to think about it that way.
    • Scripts named runit are added via the setup scripts so they run something and keep it running (like supervisord does). So nginx is run by runit; if nginx crashes, runit will restart it.
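    As a concrete example of the runit piece, a service script is just a tiny shell file (baseimage-docker picks these up from /etc/service/<name>/run; the exact nginx flags here are illustrative):

```shell
#!/bin/sh
# /etc/service/nginx/run -- runit starts this when the container boots and
# re-runs it if the process dies. "daemon off;" keeps nginx in the foreground
# so runit can supervise it.
exec /usr/sbin/nginx -g "daemon off;"
```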

    It's a "more complicated" example of what docker does... but is a good example of how to use baseimage-docker.

    That tells me docker would be good for a media encoding app (handbrake for instance) or file sharing. Probably not so good for web server or media server since you don't want to create the database every time. Am I on the right track?

    Nope... :). Docker works great for web servers. You just store your databases on the host (or in a data container) and store your database config for the webapp on the host (or in git... or in environment variables). Usually you use 1 container for mysql, then 1 for your webapp (nginx+php for example). Just google docker mysql... or docker lamp...
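    A sketch of that split (image names, passwords, and paths are placeholders; the commands are built as strings and echoed so the sketch only prints them, since actually running them needs a docker daemon):

```shell
# One container runs mysql, with its data kept on the host so it survives rebuilds.
DB_CMD="docker run -d --name db -e MYSQL_ROOT_PASSWORD=changeme -v /media/uuid1/mysql:/var/lib/mysql mysql"

# The webapp container reaches the database over a link instead of bundling it.
APP_CMD="docker run -d --name webapp --link db:mysql -p 80:80 my/nginx-php-app"

echo "$DB_CMD"
echo "$APP_CMD"
```

    Because the database files live on the host, you can rebuild either container without losing data.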

    If using OMV, I would probably use OMV's mysql plugin with all of my docker apps (or maybe in the future have a mysql docker image specifically for OMV). I don't really want a bunch of mysql servers running, I just want 1... then all my docker images can use it.

    A great docker-based PaaS app out there is It makes docker deployment like heroku... I run many a web app with it. It's written entirely in bash, so OMV could steal some code from it if needed :).

    One more tip. Learn to use: for base images.

    There are tons of reasons why, but mainly b/c it's easier :P. There are mainly 2 schools of thought when it comes to docker containers:

    • each container should only run 1 process
    • you can run as many processes as you want in a container.

    I would go with #2. It makes things easier and I'm not a giant corporation w/ lots of resources, so being a purist (#1) takes too long.

    phusion/baseimage allows you to do #2 easily. It has runit built in (so you don't have to mess w/ supervisord), ssh, cron, etc...

    Also, one of the main unraid docker guys is using it:

    Oh, another thing. All the "unraid specific" docker apps that are being made out there would be fully functional in OMV... so these apps don't really need to be "recreated". Just use an existing docker image :).

    Can you explain these two comments more?

    Something Tyberious Funk didn't mention that is a HUGE advantage to using docker is that a "dockerfile" is essentially server/application documentation. When you make a server change, you make the change in a dockerfile (or script that is run by the dockerfile).

    For example:…rr/blob/master/Dockerfile. You can see:

    • that debian:wheezy is used as a base
    • that the first thing done when building the image is installing stuff w/ apt
    • then it changes some permissions
    • then it sets some volumes to be used by the host or other containers
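    Boiled down, that shape is just a few Dockerfile instructions (the package name and paths below are invented for illustration, not copied from that repo):

```dockerfile
# base image
FROM debian:wheezy

# install stuff w/ apt
RUN apt-get update && apt-get install -y some-package

# change some permissions
RUN mkdir -p /opt/app && chown -R nobody:nogroup /opt/app

# volumes to be used by the host or other containers
VOLUME ["/config", "/data"]
```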

    So, you don't have the issue you had with VMs where you changed something in a config file or installed some package in the VM and now you don't remember what it was (you forgot to write it down). The dockerfile is essentially self-documenting.

    One thing to note though, is that containers are NOT meant to be used like virtual machines. By that I mean in a virtual machine, you install the machine, then ssh in and configure the machine. You should NOT use docker containers like this (though it is possible).

    Docker containers should be considered completely throw-away. Meaning, if you save a config file in a container, the next time you rebuild the container that config file is gone. So if you want "persistent" data (like your config file), you mount your config directories from the host (or in a data container... but that complicates things for now). The next time you build your container, the entire app inside it (including the operating system) can be completely different, but it still uses your old config file (the one you mounted/saved on the host), so it still works the same or is upgraded.
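    In practice the throw-away cycle looks like this (container and image names are invented; the commands are echoed so the sketch only prints them):

```shell
# Config lives on the host, so destroying the container loses nothing.
CONF=/media/uuid1/appdata/myapp

echo "docker stop myapp"
echo "docker rm myapp"                 # container and everything inside it is gone
echo "docker pull some/newer-image"    # could even use a different base OS

# Re-create with the same host mount and the old config is picked right back up.
CMD="docker run -d --name myapp -v $CONF:/config some/newer-image"
echo "$CMD"
```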

    I used to run xenserver for my home server. I could have mythtv, omv, windows server, etc... It was nice from the standpoint that it was easy to whip up a test server... but it's really a pain from a storage standpoint since xenserver has all sorts of limitations on drive passthrough.

    Now with docker, I can do almost everything I was doing in xenserver, but with smaller, faster containers. The whole methodology is different, but better since all my changes to the containers are documented and it forces me to automate my "server" (container) configuration. I can still use virtualbox (or even kvm if I get ambitious) for the weird one-off servers (like windows).

    Docker makes updating apps (sonarr/sickbeard, sab, plex, etc...) sooo easy and worry-free.

    With a docker interface for OMV, many of the omv plugins (that need updating all the time) will no longer be needed.