{solved}Help with docker macvlan network needed

  • This question follows on from my thread about using a pihole container which conflicts with the OMV WebUI already running on port 80. @subzero79 suggested a docker macvlan as a possible solution, rather than changing the port of OMV's WebUI.


    I'm new to docker, but was willing to try, so I read this ref: https://docs.docker.com/engine…king/get-started-macvlan/


    I found this rather confusing, as the term "bridge" seems to be used in more than one context. I tend to think of a "bridge" in Linux as
    functioning as a switch.


    Anyway, this is my OMV host network config after adding the docker plugin:


    (Docker adds rules to iptables and turns port forwarding on in sysctl.)
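    (If anyone wants to check the same on their own box, something like the below should show it - the DOCKER chain only exists once the docker daemon is running, and the output will vary per setup.)

    Code
    sysctl net.ipv4.ip_forward       # should report net.ipv4.ip_forward = 1
    iptables -t nat -L DOCKER -n     # lists the NAT rules docker has added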


    Created a docker macvlan with this command:


    Code
    root@omv-vm:/# docker network create -d macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.254 -o parent=eth0 lan
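
    (To double-check the network actually exists, you can do something like the below - "lan" is just the name I gave it above.)

    Code
    docker network ls
    docker network inspect lan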

    This shows in the docker networks:


    If I've understood this correctly, you should be able to run a docker container with the arguments of --net=lan and, for example, --ip=192.168.0.25
    and supposedly you can access it via 192.168.0.25, as this would be on the same subnet as the OMV box.
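
    Something along these lines is what I mean - nginx here is just a stand-in image for illustration, and the IP is only an example from my subnet:

    Code
    docker run -d --name=web-test --net=lan --ip=192.168.0.25 nginx
    # in theory then reachable at http://192.168.0.25 from the LAN, no port mapping needed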


    First question is, can you use the docker plugin webUI to create a container which uses a macvlan? If so, what details should be added to the "Network" section, and what goes in the "Extra Arguments" section? The various combos I've tried do not appear to work.


    Second question, in the docker plugin webUI, what does the network mode option of "Bridge" actually refer to? I think it means the container
    is given an IP in the same subnet as the docker host's "bridge" network in the range 172.17.0.0/24, e.g. 172.17.0.2. Docker handles forwarding any exposed ports you choose to the host IP, which would normally be the IP of your OMV box.
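
    For comparison, a plain "Bridge" mode run with a published port would look something like this (again, nginx is only an example image):

    Code
    docker run -d --name=web-bridge -p 8080:80 nginx
    # container gets a 172.17.0.x address; the service is reached via http://<OMV-host-IP>:8080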


    So what are the correct settings in the docker plugin webUI if you want to run your container on the macvlan?


    Running it at the command line gives this:




    The pihole container is running as shown in the logs ... but it cannot be accessed at 192.168.0.25/NET_ADMIN
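
    One thing worth checking (the container name below is just whatever you called it) is which IP docker actually assigned:

    Code
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' pihole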



    At this point, I've no idea why this doesn't work. But I noticed no "vethxxxxx" interface appeared in the OMV network config, as happens when your container runs in "Bridge" mode.

  • These switches are unnecessary


    -p 53:53/tcp -p 53:53/udp -p 80:80

    You didn't explain why they are unnecessary, but with or without the switches, I have no access to the container. At the moment, I'm testing OMV within VirtualBox; perhaps that is the problem, so I'll have to create a new OMV VM within qemu/KVM on my desktop and see if the problem persists. I got it to work in another scenario, so the method seems to be correct and you have not highlighted any errors.

  • I recall something about macvlan in VirtualBox, something about the adapter. This was when I was testing lxc using macvlan in a VM a long time ago.


    https://forums.virtualbox.org/viewtopic.php?f=7&t=59215


    You need to use another emulated Ethernet device apparently.

    Yes, that fixes the VirtualBox problem. I had read about the need to set the adapter to promiscuous mode, but had missed using "PCnet-FAST III". Pleased I don't have to ditch my VirtualBox OMV test config in favour of qemu/KVM.
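
    For anyone else hitting this, the change can be made from the host with VBoxManage while the VM is powered off - the VM name and NIC number below are just examples, adjust to your own setup:

    Code
    VBoxManage modifyvm "omv-vm" --nictype1 Am79C973 --nicpromisc1 allow-all
    # Am79C973 is the PCnet-FAST III adapter; allow-all enables promiscuous mode on NIC 1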

  • Interesting to know you got this working. I am looking at moving my Docker containers from a VM host to docker running directly on my OMV box - since most of them need the storage provided by OMV anyway. Since the box has the resources, it only made sense.


    That said, I wanted to add in a bit of color here, as I was playing around with the macvlan option as well on my separate VM host. One key thing I understood when I embarked on the journey, but which causes issues in some areas, is that when you use the macvlan option you will not be able to communicate with the container from the host it is running on (and vice versa). I'm not sure if I fully understood the virtual setup you had running, but if you were trying to access it from essentially the same host, that is blocked by default. It's done for a security purpose, which I won't dive into, but it could be important to understand for others looking to do something similar.
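
    If host-to-container traffic is genuinely needed, the usual workaround (just a sketch, not something I have running here - the interface name and addresses are made up) is to give the host its own macvlan interface on the same parent and route the container IP through it:

    Code
    ip link add macvlan-shim link eth0 type macvlan mode bridge
    ip addr add 192.168.0.200/32 dev macvlan-shim
    ip link set macvlan-shim up
    ip route add 192.168.0.25/32 dev macvlan-shim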


    One question - were you able to get the container up and running from within the OMV interface on the macvlan network once you had manually created it via the CLI? Or were you only able to create/start it from the CLI? And if you did it via the CLI, did it show up in the OMV interface at all afterwards?

    These switches are unnecessary


    -p 53:53/tcp -p 53:53/udp -p 80:80

    Your ports question - by the nature of how the macvlan network works, the ports will be accessible and don't need to be mapped, as they are natively available on the IP you set up for the container. Since there is no need to map ports to the host network (i.e. Bridge mode), you can just hit the ports the container is using on the assigned IP. There is no need to explicitly expose the ports as they already are. This is also important to keep in mind from a security standpoint.
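
    So a quick test from another machine on the LAN (not the docker host itself, for the isolation reason above) would be something like:

    Code
    dig @192.168.0.25 example.com     # DNS answered directly by the container, no -p 53 mapping
    curl -I http://192.168.0.25/      # web interface on port 80, again no mapping needed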

  • @1activegeek


    Not been around for a few days. To answer your question, once the macvlan was created, the CLI had to be used to create a container that used the macvlan. A macvlan network does not appear in the WebUI. Once the container was created and running, it appeared in the WebUI container list.



    Realised the answer re: ports after looking at a few refs. Any container I use is internal only.

  • Interesting, I just checked what you were setting up. So it was required to have a macvlan-exposed IP to be able to run PiHole successfully? Had you tried using just bridged mode, and were there issues?

    I did this as I wanted to keep the OMV webUI on port 80; otherwise I would have changed the OMV webUI port and just used bridged mode, with the exposed ports mapped to the same IP as the OMV host for the pihole container.
