[Solved] Help with docker macvlan network needed

    • OMV 3.x (stable)
    • Resolved


    • Help with docker macvlan network needed

      This question follows on from my thread about using a pihole container, which conflicts with the OMV webUI already running on port 80. @subzero79 suggested a docker macvlan network as a possible solution, rather than changing the port of OMV's webUI.

      I'm new to docker, but was willing to try, so I read this ref: docs.docker.com/engine/usergui…king/get-started-macvlan/

      I found this rather confusing, as the term "bridge" seems to be used in more than one context. I tend to think of a "bridge" in Linux as functioning like a switch.

      Anyway, this is my OMV host network config after adding the docker plugin:

      Source Code

      root@omv-vm:/# ifconfig
      docker0   Link encap:Ethernet  HWaddr 02:42:2a:8e:6d:67
                inet addr:172.17.0.1  Bcast:0.0.0.0  Mask:255.255.0.0
                UP BROADCAST MULTICAST  MTU:1500  Metric:1
                RX packets:61785 errors:0 dropped:0 overruns:0 frame:0
                TX packets:115126 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:3452582 (3.2 MiB)  TX bytes:169811983 (161.9 MiB)

      eth0      Link encap:Ethernet  HWaddr 08:00:27:42:81:1e
                inet addr:192.168.0.101  Bcast:192.168.0.255  Mask:255.255.255.0
                UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
                RX packets:137393 errors:0 dropped:76 overruns:0 frame:0
                TX packets:80146 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:176467337 (168.2 MiB)  TX bytes:17734792 (16.9 MiB)

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                UP LOOPBACK RUNNING  MTU:65536  Metric:1
                RX packets:5935 errors:0 dropped:0 overruns:0 frame:0
                TX packets:5935 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1
                RX bytes:2316033 (2.2 MiB)  TX bytes:2316033 (2.2 MiB)

      root@omv-vm:/# route
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags MSS Window  irtt Iface
      default         192.168.0.254   0.0.0.0         UG      0 0          0 eth0
      172.17.0.0      *               255.255.0.0     U       0 0          0 docker0
      192.168.0.0     *               255.255.255.0   U       0 0          0 eth0
      (Docker adds rules to iptables and enables IP forwarding via sysctl.)
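
      If anyone wants to confirm that on their own host, a quick check looks like this (assuming the standard sysctl and iptables tools; the exact chain contents will vary):

      Source Code

      root@omv-vm:/# sysctl net.ipv4.ip_forward    # docker sets this to 1
      root@omv-vm:/# iptables -t nat -L DOCKER -n  # docker's NAT/port-forwarding chain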

      I created a docker macvlan network with this command:

      Source Code

      root@omv-vm:/# docker network create -d macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.254 -o parent=eth0 lan
      It shows up in the docker network list:

      Source Code

      root@omv-vm:/# docker network ls
      NETWORK ID          NAME                DRIVER              SCOPE
      51fe71160146        bridge              bridge              local
      43f784196b34        host                host                local
      f261e60ac158        lan                 macvlan             local
      e6d1e1879f31        none                null                local
      root@omv-vm:/# docker network inspect lan
      [
          {
              "Name": "lan",
              "Id": "f261e60ac15846f28d59e99c34d6ad81615869ff089f10acda6b4ba0d632e344",
              "Created": "2017-04-15T12:20:12.705849636+01:00",
              "Scope": "local",
              "Driver": "macvlan",
              "EnableIPv6": false,
              "IPAM": {
                  "Driver": "default",
                  "Options": {},
                  "Config": [
                      {
                          "Subnet": "192.168.0.0/24",
                          "Gateway": "192.168.0.254"
                      }
                  ]
              },
              "Internal": false,
              "Attachable": false,
              "Containers": {},
              "Options": {
                  "parent": "eth0"
              },
              "Labels": {}
          }
      ]
      root@omv-vm:/#
      If I've understood this correctly, you should be able to run a docker container with the arguments --net=lan and, for example, --ip=192.168.0.25, and then access it at 192.168.0.25, since it would be on the same subnet as the OMV box.
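
      One refinement, if I've read the docker docs right (untested on my side, and the /27 range is an arbitrary illustrative choice): you can constrain the addresses docker hands out with --ip-range, so they can't collide with the LAN's DHCP pool:

      Source Code

      # variant of the create command used above, limiting assigned IPs to .192-.223
      docker network create -d macvlan --subnet=192.168.0.0/24 \
        --ip-range=192.168.0.192/27 --gateway=192.168.0.254 \
        -o parent=eth0 lan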

      First question: can you use the docker plugin webUI to create a container which uses a macvlan? If so, what details should be added to the "Network" section, and what goes in the "Extra Arguments" section? The various combinations I've tried do not appear to work.

      Second question: in the docker plugin webUI, what does the network mode option of "Bridge" actually refer to? I think it means the container is given an IP on the same subnet as the docker host's "bridge" network (172.17.0.0/16), e.g. 172.17.0.2, and docker handles forwarding any exposed ports you choose to the host IP, which would normally be the IP of your OMV box.
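
      For comparison, a bridge-mode run of the same image would look something like this sketch (8080 is an arbitrary host port chosen to avoid clashing with the OMV webUI on 80, and ServerIP is set to the host's address from the ifconfig above):

      Source Code

      # default bridge network; docker NATs host ports 53 and 8080 to the container
      docker run -d --name bridgepihole \
        -p 53:53/tcp -p 53:53/udp -p 8080:80 \
        --cap-add=NET_ADMIN -e ServerIP="192.168.0.101" \
        diginc/pi-hole:latest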

      So what are the correct settings in the docker plugin webUI if you want to run your container on the macvlan?

      Running it at the command line gives this:


      Source Code

      root@omv-vm:/# docker run -p 53:53/tcp -p 53:53/udp -p 80:80 \
      > --net=lan --ip=192.168.0.25 \
      > --cap-add=NET_ADMIN -e ServerIP="192.168.0.25" -e WEBPASSWORD="testpass" -e VIRTUAL_HOST="192.168.0.25" \
      > --name macpihole -d diginc/pi-hole:latest
      273c21f63c01f75f70dfba21fc417b272c42c16fabfb5475c1c9c4e22e0f5f75
      root@omv-vm:/# docker ps
      CONTAINER ID        IMAGE                   COMMAND                CREATED             STATUS              PORTS               NAMES
      273c21f63c01        diginc/pi-hole:latest   "/tini -- /start.sh"   6 seconds ago       Up 5 seconds                            macpihole
      root@omv-vm:/# docker network inspect lan
      [
          {
              "Name": "lan",
              "Id": "f261e60ac15846f28d59e99c34d6ad81615869ff089f10acda6b4ba0d632e344",
              "Created": "2017-04-15T12:20:12.705849636+01:00",
              "Scope": "local",
              "Driver": "macvlan",
              "EnableIPv6": false,
              "IPAM": {
                  "Driver": "default",
                  "Options": {},
                  "Config": [
                      {
                          "Subnet": "192.168.0.0/24",
                          "Gateway": "192.168.0.254"
                      }
                  ]
              },
              "Internal": false,
              "Attachable": false,
              "Containers": {
                  "273c21f63c01f75f70dfba21fc417b272c42c16fabfb5475c1c9c4e22e0f5f75": {
                      "Name": "macpihole",
                      "EndpointID": "c697d3c012142b182f10cb6b74bc86418904b6ae7b011fc5c18fa769ecbb79b5",
                      "MacAddress": "02:42:c0:a8:00:19",
                      "IPv4Address": "192.168.0.25/24",
                      "IPv6Address": ""
                  }
              },
              "Options": {
                  "parent": "eth0"
              },
              "Labels": {}
          }
      ]
      root@omv-vm:/# docker exec -it 273c21f63c01 bash
      bash-4.3# ifconfig
      eth0      Link encap:Ethernet  HWaddr 02:42:C0:A8:00:19
                inet addr:192.168.0.25  Bcast:0.0.0.0  Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                RX packets:14 errors:0 dropped:0 overruns:0 frame:0
                TX packets:21 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:1229 (1.2 KiB)  TX bytes:882 (882.0 B)

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1  Mask:255.0.0.0
                UP LOOPBACK RUNNING  MTU:65536  Metric:1
                RX packets:56 errors:0 dropped:0 overruns:0 frame:0
                TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1
                RX bytes:4376 (4.2 KiB)  TX bytes:4376 (4.2 KiB)

      The pihole container is running, as shown in the logs, but it cannot be accessed at 192.168.0.25/admin.


      At this point, I've no idea why this doesn't work. But I noticed that no "vethxxxxx" interface appeared in the OMV network config, as happens when a container runs in "Bridge" mode.
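
      In case it helps anyone else reading: as far as I understand it, macvlan doesn't use veth pairs at all; the macvlan sub-interface is created directly inside the container's network namespace, so nothing new appears on the host. This can be seen with iproute2 (assuming the container image ships the ip tool):

      Source Code

      root@omv-vm:/# ip link show type veth                      # lists veths for bridge-mode containers; empty here
      root@omv-vm:/# docker exec macpihole ip -d link show eth0  # detail should report "macvlan mode bridge"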
    • subzero79 wrote:

      These switches are unnecessary

      -p 53:53/tcp -p 53:53/udp -p 80:80
      You didn't explain why they are unnecessary, but with or without those switches, I have no access to the container. At the moment I'm testing OMV within VirtualBox. Perhaps this is the problem; I'll have to create a new OMV VM under qemu/KVM on my desktop and see if it persists. I got this to work in another scenario, so the method seems correct, and you haven't highlighted any errors.
      I recall something about macvlan in VirtualBox, something about the adapter. This was when I was testing LXC using macvlan in a VM a long time ago.

      forums.virtualbox.org/viewtopic.php?f=7&t=59215

      You need to use another emulated Ethernet device apparently.
    • subzero79 wrote:

      I recall something about macvlan in VirtualBox, something about the adapter. This was when I was testing LXC using macvlan in a VM a long time ago.

      forums.virtualbox.org/viewtopic.php?f=7&t=59215

      You need to use another emulated Ethernet device apparently.
      Yes, that fixes the VBox problem. I had read about the need for setting the adapter to promiscuous mode, but had missed using "PCnet-FAST III". Pleased I don't have to ditch my VBox OMV test config in favour of qemu/KVM.
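
      For anyone else hitting this, the adapter change can also be made with VBoxManage while the VM is powered off (the VM name "omv-vm" here is taken from my prompt above; adjust to suit):

      Source Code

      # Am79C973 is the PCnet-FAST III adapter; allow-all enables promiscuous mode on NIC 1
      VBoxManage modifyvm "omv-vm" --nictype1 Am79C973 --nicpromisc1 allow-all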
      Interesting to know you got this working. I am looking at moving my Docker containers from a VM host to docker running directly on my OMV box, since most of them need the storage provided by OMV anyway. Since it has the resources, it only made sense.

      That said, I wanted to add a bit of colour here, as I was playing around with the macvlan option as well on my separate VM host. One key thing I understood when I embarked on the journey, but which causes issues in some areas, is that when you use the macvlan option, you will not be able to communicate with the container from the host it is running on (and vice versa). I'm not sure I fully understood the virtual setup you had running, but if you were trying to access it from essentially the same host, this is blocked by default. It's done for security reasons, which I won't dive into, but it could be important for others looking to do something similar (a possible workaround is sketched below).
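
      One workaround I've seen for this host-to-container isolation, sketched here from memory rather than tested on OMV, is to give the host its own macvlan sub-interface and route the container's IP through it (the shim name macvlan0 and the spare address 192.168.0.200 are illustrative assumptions):

      Source Code

      # create a host-side macvlan shim on the same parent interface
      ip link add macvlan0 link eth0 type macvlan mode bridge
      ip addr add 192.168.0.200/32 dev macvlan0
      ip link set macvlan0 up
      # send traffic for the container's IP via the shim instead of eth0
      ip route add 192.168.0.25/32 dev macvlan0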

      One question: were you able to get the container up and running from within the OMV interface on the macvlan network once you had manually created the network via the CLI? Or were you only able to create/start it from the CLI? And if you did it via the CLI, did it show up in the OMV interface at all afterwards?

      subzero79 wrote:

      These switches are unnecessary

      -p 53:53/tcp -p 53:53/udp -p 80:80
      Your ports question: by the nature of how a macvlan network works, those ports don't need to be mapped; they are natively available on the IP assigned to the container. Since there is no need to map ports to the host network (as in Bridge mode), you can just hit the ports the container is listening on at its assigned IP. There's no need to expose the ports explicitly, because they already are exposed, which is also important to keep in mind from a security standpoint. A quick way to check is sketched below.
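
      As a sanity check, from another machine on the LAN (not the docker host itself, which macvlan isolates from the container), something like this should work, assuming dig and curl are installed:

      Source Code

      dig @192.168.0.25 openmediavault.org   # pihole's DNS on port 53
      curl -I http://192.168.0.25/admin/     # pihole's web console on port 80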
    • 1activegeek wrote:

      Interesting, I just checked what you were setting up. So it was required to have a macvlan-exposed IP to run PiHole successfully? Had you tried using just bridged mode, and were there issues?
      I did this because I wanted to keep the OMV webUI on port 80. Otherwise I would have changed the OMV webUI port and just used bridged mode, with the exposed ports mapped to the same IP as the OMV host for the pihole container.