Posts by Tyberious Funk

    Is this plugin currently working in 3.0?


    I've set up a test system for OMV 3.0 and tried the unionfilesystems plugin. It installs OK, but when I go to create a pool, I get the following error;


    Code
    Failed to execute XPath query '/config/services/unionfilesystems'.



    Details;

    Code
    Error #0:
    exception 'OMV\Config\DatabaseException' with message 'Failed to execute XPath query '/config/services/unionfilesystems'.' in /usr/share/php/openmediavault/config/database.inc:244
    Stack trace:
    #0 /usr/share/openmediavault/engined/rpc/unionfilesystems.inc(221): OMV\Config\Database->set(Object(OMV\Config\ConfigObject))
    #1 [internal function]: OMV\Engined\Rpc\UnionFilesystems->set(Array, Array)
    #2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(124): call_user_func_array(Array, Array)
    #3 /usr/share/php/openmediavault/rpc/rpc.inc(84): OMV\Rpc\ServiceAbstract->callMethod('set', Array, Array)
    #4 /usr/sbin/omv-engined(516): OMV\Rpc\Rpc::call('UnionFilesystem...', 'set', Array, Array, 1)
    #5 {main}

    By default, Docker stores containers in /var/lib/docker, which on openmediavault will be on the same hard drive as the base operating system. As a general rule, all the executables inside the container should work because they are essentially on the same file system as any executables on the host system.


    The problem occurs when you create an image with volumes that you map to a mounted file share. For example, I created an Emby container with a volume called /config, and mapped it to /media/[UUID]/Emby on the host. Therefore, any executables stored in the directory /config are, on the host, stored on a file system mounted as noexec. I was a bit bored at work, so I tested this with a basic debian container. And sure enough, Docker won't run these executables.
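    Something along these lines reproduces it -- a minimal sketch only, with placeholder paths, assuming the data drive is mounted under /media/[UUID] with noexec:


    Code
    # Check the mount options on the host -- the data drive shows noexec
    mount | grep /media/[UUID]

    # Start a throwaway debian container with a folder from that drive mapped in
    docker run -it --rm -v /media/[UUID]/test:/test debian bash

    # Inside the container, try to run a script stored on the mapped folder;
    # in my test, at least, it fails with "Permission denied"
    /test/hello.sh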


    I'm guessing the reason why transcoding works in your case, is because the container you are using is storing ffmpeg somewhere in /opt. Looking through the init scripts, it looks like there might be some kind of runtime option to specify where ffmpeg sits.


    Sorry if that sounds a bit convoluted... it's taken a little bit of time for me to work out in my own head :)

    I also tested transcoding and it worked well. As I say, the ffmpeg is inside the container (at least in my docker, the official one)


    I understand that ffmpeg is inside the container, but is that a volume that sits on the host system in a folder mounted with noexec?


    If so, then that basically means that Docker ignores the underlying file system and will allow you to execute a binary that's on a filesystem mounted as 'noexec'. That would seem strange to me. But then it's not the first time Docker has surprised me.


    Well, it depends on the image I guess, because I pulled the official one (docker pull emby/embyserver) and the ffmpeg is inside the container in /opt/ffmpeg. I just tested, and just in case moved the config folder to a default ext4 one with noexec, and there was no issue playing content from the web interface.


    I'm not sure which version of the official docker build you've been using, but the one I use puts the configuration in a volume called /config, which, of course, you can map wherever you like on the host system. I was originally hosting it in /etc/emby, but it can get surprisingly large and my OS drive isn't very big. So I moved it to a shared folder. This works, and the web interface will still play content. But it won't transcode, because ffmpeg won't execute. I can't say for certain that this is the issue... but I moved the config directory back to /etc/emby and transcoding immediately started working.


    I considered a whole range of options, but in the end decided to just stick with keeping it in /etc/emby, and just increasing the size of my OS drive.


    I did spend some time playing with the emby plugin, and it seems to work quite nicely. But I generally prefer using docker, at least for certain applications... particularly one like emby. Docker allows me to put some controls around the amount of resources the application uses. My NAS has a fairly limited CPU, so transcoding can slow it down somewhat. Docker allows me to minimise the impact on the host system.
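    For example, something like this caps the container's memory and lowers its CPU priority (the flags are standard docker run options; the values and volume path are just illustrative):


    Code
    # Limit memory to 1 GB and halve the default CPU share weighting
    docker run -d --name=emby \
        --memory=1g --cpu-shares=512 \
        -v /media/[UUID]/Emby:/config \
        emby/embyserver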

    Is it possible to edit the default mount options for shared folders?


    I have a docker container running emby (media browser), but the emby config folder (which can get really large) is pointed to a shared folder. Unfortunately, emby includes its own version of ffmpeg, which sits in this config folder. I've been trying to figure out why transcoding won't work, and I suspect it's because shared folders in OMV, by default, are mounted noexec... hence ffmpeg refuses to work :(


    So I want to be able to mount the shared folder without the noexec option.
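    As a quick way to test the theory (not a persistent fix, since OMV manages /etc/fstab itself and will put things back), remounting the drive with exec enabled should at least confirm whether noexec is the culprit:


    Code
    # Temporarily remount the data drive with exec enabled --
    # reverts on reboot or whenever OMV rewrites the mount
    sudo mount -o remount,exec /media/[UUID]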



    So I played around a bit over the weekend, and here are a few combinations (the snippet after the list shows where the settings go);


    1. Set min protocol = SMB2 -- works on my macbook (Yosemite), fails on Windows 7, and fails on OpenELEC
    2. Set max protocol = SMB2 -- works on my macbook (Yosemite), fails on Windows 10 (preview)... didn't try it on OpenELEC
    3. Leave the setting blank -- works on everything, but macbook (Yosemite) is painfully slow
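    For reference, the settings go in the [global] section of smb.conf -- in OMV, I believe the easiest place is the extra options box on the SMB/CIFS page:


    Code
    [global]
        # one or the other, per the combinations above
        min protocol = SMB2
        #max protocol = SMB2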


    In the case of Windows, I rebooted to ensure the existing sessions weren't being cached.


    I'm not really sure what to do now. Samba performance on OSX isn't disastrously bad when transferring files. But directory listings are pretty slow and painful.

    For a while now, I've found the performance of Samba when being browsed by an OSX client to be really painful. For example, when opening a network folder on my macbook, it can take several seconds to get a directory listing. Whereas, on Windows it's pretty close to instantaneous. A number of online forums suggest the issue is with OSX Mavericks and SMB1, so it's recommended to push samba to use SMB2. So I added the following;


    min protocol = SMB2


    ... to my config. I suddenly found a HUGE leap in performance on my Macbook. Only problem is that my Windows 7 desktop can't access anything. I was under the impression that W7 supports SMB2?? If I change the setting to;


    max protocol = SMB2


    ... Windows starts working again. But I seem to lose the performance gain on my Macbook.


    Am I missing something?

    Not sure if this will help, but you can create samba configs specific to each user based on their username. If you create a file /etc/samba/smb.conf.<username> then any user logging in with that username will inherit the configs in that file (in addition to the configs in /etc/samba/smb.conf). You just need to reference the user-specific config with an "include" command.


    eg, in the [global] section, you'd have something like;


    include = /etc/samba/smb.conf.%U


    And then in each user-config file you could specify the details of their share access. For a lot of users, this would be a pain in the butt. But assuming everything is structured nicely, you could probably automate things with some shell scripts.
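    As a rough illustration (the username and share path here are made up), /etc/samba/smb.conf.alice might contain something like:


    Code
    # /etc/samba/smb.conf.alice -- only loaded when 'alice' connects
    [alice-private]
        path = /media/[UUID]/alice
        read only = no
        valid users = alice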

    I came across this issue last night...


    Set up a website on port 8000, and shortly after couldn't access the OMV GUI. I'm running OMV on port 7000, and HAProxy on port 80. So it didn't occur to me that there could be a port conflict.


    Thanks to this thread, I quickly checked that nginx wasn't running. Turns out I'd forgotten that I'm running HTPC Manager on port 8000, hence the conflict.... d'oh!!
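    For anyone else chasing a similar conflict, something like this will show which process is already bound to the port (8000 here is just the port from my case):


    Code
    # List the process listening on port 8000
    sudo netstat -tlnp | grep :8000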

    This is a "bug" in openmediavault. Docker creates a virtual network device called docker0, and for each container, a virtual device with the prefix vethXXXXX. OMV tries to validate the devices against known regex patterns, fails, and then throws an error. Best I can tell, it doesn't cause any issues.

    For anyone that is interested, I've written a short blog post about putting services behind HAProxy. I thought I'd post it here, since it's relevant to people using some of the third-party plugins for things like Sickbeard, Couchpotato, SABnzbd, etc, etc. Plus, if there is sufficient interest, I think HAProxy might be a worthwhile plugin itself.


    Why did I do this?


    Generally, when you set up a service like (say) Sickbeard, it will listen on a particular port (like, say 8080). And there's a pretty fair chance you'll come across situations where ports like 8080 are blocked -- such as if you are sitting behind a firewall at work, or at university -- but you'd like to access Sickbeard remotely. Of course, you can just configure Sickbeard to listen on port 80, which is the standard port for http traffic, and which is unlikely to be firewalled. But this will conflict with OMV's web interface. Plus, what do you then do with Couchpotato? Or SABnzbd? Or whatever else you are running? They can't all listen on the same port.


    Basically, you can set up HAProxy to listen on port 80, and then redirect traffic according to the URL path. This has a nice little added benefit of allowing you to use URLs that are a bit easier to remember. For example, instead of referencing http://<myserver>:8080 you can use http://<myserver>/sickbeard.
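    The core of it is just a frontend on port 80 doing path-based routing, roughly like this (backend names and ports are examples; my real config has a few extra settings):


    Code
    frontend http-in
        bind *:80
        mode http
        # route by URL path
        acl is_sickbeard path_beg /sickbeard
        use_backend sickbeard if is_sickbeard
        default_backend omv-webgui

    backend sickbeard
        mode http
        server sickbeard1 127.0.0.1:8080

    backend omv-webgui
        mode http
        server omv1 127.0.0.1:7000


    Note that for the path-based URLs to work, the applications themselves generally also need their base URL (web_root or equivalent) set to match.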

    This is just a workaround that I use, which may or may not work depending on your setup...


    If you manage your TV Shows via software like Sickbeard, Sickrage or Sonarr, you can set them up to write directly to aufs branches. That way, new episodes are posted to the same folder every time. You can then set up file sharing via samba, nfs or whatever, sharing the pooled directory. Although you don't get the full benefits of pooling, in my case at least, it's reading files where I need pooling the most.


    But then, I only have three drives in my pool, and only two of them store TV Shows. With 14 drives, you might find this way less convenient.

    I'm trying to remove an NFS share. Deleting from the GUI seems to work, but when I go to "apply" the settings, I get an error;


    Failed to unlink the directory '/export/Movies': rm: cannot remove `/export/Movies': Device or resource busy


    Normally I'd check to see if there is anything using that directory;


    lsof +D /export/Movies


    But I don't get any results. As it turns out, checking via the cmdline, it looks like the share does get unmounted. But it's not getting removed from the config.
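    For anyone wanting to double-check the same things from the command line, the standard tools would be something like:


    Code
    # Anything still holding the mount point open?
    sudo fuser -vm /export/Movies

    # Is it actually still mounted?
    mount | grep /export/Movies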


    Any thoughts?

    For anyone that is interested and still following this thread...


    Over the weekend I noticed that my installation of sabnzbd wasn't working quite correctly. Basically, anything that required repairs was failing. I have par2 installed, but for whatever reason, the sabnzbd code wasn't picking it up. When I've got some time, I'll try to debug the problem and figure out what's going on. But in the meantime, I really needed a working install of sabnzbd.


    I did a quick bit of research and found that while I'm running sabnzbd based on the latest source code in git, most debian and ubuntu users use the packages put together by jcfp. So I modified my Dockerfile accordingly;



    I built a new image using my edited Dockerfile, then shut down the existing container and fired up a new container based on the fixed image. Since my data and configs are all stored in volumes on the host, the new container opened up with my complete history and configs all intact. The whole experience took about 15-20 minutes. The actual downtime for the container, though, was about 15-20 seconds. That's pretty damn useful.
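    The swap-over itself is just the usual rebuild-and-replace cycle, roughly as follows (the names and paths here are placeholders matching my setup):


    Code
    # Rebuild the image from the edited Dockerfile
    sudo docker build -t sabnzbd /path/to/dockerfile/dir

    # Replace the running container with one based on the new image
    sudo docker stop sabnzbd
    sudo docker rm sabnzbd
    sudo docker run -v /path/to/config/on/host:/etc/downloaders/sabnzbd -v /path/to/datadir:/export -v /etc/localtime:/etc/localtime:ro -p 8080:8080 --name=sabnzbd -d --restart=always sabnzbd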

    Over the weekend, I upgraded my OMV system to Kralizec (finally), and took the opportunity to install Docker and put my services in containers. Up to this point, I've been mostly playing around in VirtualBox. For anyone interested, here is my Dockerfile for sabnzbd;



    A note on a couple of things... as I mentioned earlier in the thread, I run services as a non-root user, even inside containers. This can cause some issues with file permissions. So I have created a corresponding user and group on the OMV host that match the user and group in the container. Hence why I specify the UID and the GID... these need to match what is on the host.
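    On the host side, that just means creating the user and group with fixed IDs, something like this (the names and the 1001 values are only examples; use whatever UID/GID your Dockerfile declares):


    Code
    # Create a group and user on the OMV host with the same GID/UID
    # as the ones baked into the container image
    sudo groupadd -g 1001 downloaders
    sudo useradd -u 1001 -g downloaders -M -s /usr/sbin/nologin sabnzbd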


    After successful installation, run the following command;


    Code
    sudo docker run -v /path/to/config/on/host:/etc/downloaders/sabnzbd -v /path/to/datadir:/export -v /etc/localtime:/etc/localtime:ro -p 8080:8080 --name=sabnzbd -d --restart=always sabnzbd


    I found that mapping the localtime on the host to the container seemed to be the easiest way to keep the container time correct -- without it, the time kept getting out of sync. Also, the --restart=always option means that the container will automatically restart whenever required -- such as on reboot, or if for some reason the container crashes. HOWEVER, if you shut down the sabnzbd application within the container (say from the webgui), the container will continue to run happily. ie, the restart option only works if the container as a whole fails. It doesn't restart if an application inside the container fails.

    I would imagine that some kind of Docker interface would be relatively simple to build, in a technical sense. But there are a couple of little tricky idiosyncrasies that would need to be worked out in order to make a Docker plugin "work" nicely.


    For example, some changes you make inside the container won't really work unless you make a corresponding change outside the container. eg, if you change sickbeard's default port from 8080 to 8000, you can make the change in the sickbeard interface (which is running inside the container), but it won't really work unless you change the port mapping for the container.
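    In other words, the container itself has to be recreated with a matching mapping, along these lines (the container and image names are just examples):


    Code
    # The internal port change only takes effect once the container
    # is recreated with a matching port mapping
    sudo docker stop sickbeard && sudo docker rm sickbeard
    sudo docker run -d --name=sickbeard -p 8000:8000 sickbeard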


    I've also found user permissions to be a bit annoying to resolve. Most containers are built with the assumption that the application contained will run as root. If you run something like, say, SABnzbd, this will result in all your downloaded files being owned by the root user, which can cause some problems. If you change your container to run SABnzbd under a non-root user, then that user will only exist inside the container... unless you create a corresponding user on the host system.


    None of these are insurmountable problems. But I think if you want to create a clean user experience, you need to first decide on the "OMV way" of handling containers, and then proceed accordingly.

    Thanks for the info :)


    Where do the files come from for an image based on something other than the host? Can Docker access the host file system?


    Images are stored in /var/lib/docker. If you try and start a container based on an image you don't have, docker will automatically check the online registry and start downloading the image to store locally. The containers themselves are also stored in /var/lib/docker (can't remember the exact file structure). But what's really nice is that the underlying container file system is based on aufs, layered over the top of the image. So for example, if I have a debian image that takes up 200mb, I then start up a container based on this image and download a 50mb piece of software into the container... the total amount of space taken by the container is only 50mb (plus a small amount of overhead). Docker doesn't duplicate the entire file system. If I create a second container from that debian image, and download a 20mb piece of software, the total amount of space for my containers is 50mb + 20mb. And of course, the 200mb for the original image. So running multiple containers can be surprisingly efficient, as long as they are based on the same image.
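    You can see the layering for yourself with the standard commands (the debian image is just an example):


    Code
    # Show the layers that make up an image, and their sizes
    sudo docker history debian

    # Per-image disk usage
    sudo docker images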


    I suspect this is probably one of the reasons why containers are so quick to start up.


    You can make files and folders in the host file system available to a container when you start the container. You basically map a folder on the host system to a folder in the container, eg docker run -v /etc/myapp_config:/config would map the host directory to a corresponding directory in the container (they don't have to have the exact same path). This allows you to share data from host to container, and between containers. But also, containers can be transient... you tend to create and delete them as needs dictate. Data mapped back to the host remains in place after the container has been shut down or even deleted.


    Can you explain these two comments more?


    When you build an application in a Docker container, the container is more or less isolated from the host system. For example, you may run OMV as your host system, but deploy sickbeard in a Docker container based on Ubuntu 14.04. If/when you upgrade OMV, the Docker container remains running in Ubuntu 14.04. On the flip side, you could upgrade your container from Ubuntu 14.04 to 14.10, while leaving your OMV host completely untouched. And of course, you can mix and match containers based on different images. I'm testing Docker on OMV (based on Debian Wheezy, obviously); the majority of my containers are using Debian Jessie, with one container running Ubuntu 14.04. Basically, you use the distribution for your container that is most suitable to the needs of the application -- and you don't need to worry about interdependency issues across other containers (or with the host system), because they are isolated.


    For anyone used to running virtualisation, this doesn't really seem like a big deal. eg, I can set up various virtualbox instances with different applications running in each. But the big advantage to Docker is that you get the benefits of virtualisation with really minimal overhead. Once you have images built, you can spawn a new container based on the image within a fraction of a second. In fact, it's so fast, you can use a single command to spawn a container, run a command (or application or script) inside the container, get the results, and have the container close down... almost at the same speed as if you ran the command directly on the host.
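    For example (the debian image and the command are just placeholders):


    Code
    # Spawn a container, run one command inside it, print the result,
    # and have the container removed again -- all in one shot
    sudo docker run --rm debian echo "hello from inside a container"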


    The other big advantage is that it is really quick and simple to build images. Images are basically static templates that you spawn containers from. A single Dockerfile contains the instructions to build an image with whatever application you want. The reason it works so nicely, is because the Dockerfile specifies the distribution for it to be based on. For example, I can write a Dockerfile for installing sickbeard inside a Fedora 19 container. And I KNOW the build instructions will work, because i've tested them in Fedora 19. And it doesn't matter what your host machine is, if you run my Dockerfile, the image WILL BE based on Fedora 19. This portability makes Dockerfiles ideal for sharing. And creating Dockerfiles is really, really easy.
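    A minimal sketch of what such a Dockerfile might look like -- purely illustrative, and the package names and Sick Beard repository URL are assumptions rather than a tested build:


    Code
    # Base the image on Fedora 19, regardless of what the host runs
    FROM fedora:19

    # Install the dependencies Sick Beard needs (package names are assumptions)
    RUN yum -y install git python python-cheetah && yum clean all

    # Pull the application itself
    RUN git clone https://github.com/midgetspy/Sick-Beard.git /opt/sickbeard

    EXPOSE 8081
    CMD ["python", "/opt/sickbeard/SickBeard.py"]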


    IMHO, for unRaid it's a MASSIVE win. One of the big "flaws" in unRaid is the difficulty in installing new apps. Partly, this is because unRaid loads from a USB and runs in memory, so you have to get around that limitation. And partly because it is based on Slackware, which isn't the friendliest distribution when it comes to package management and resolving dependencies. Now, with Docker, people can write Dockerfiles based on Ubuntu or Debian, and run them on unRaid.