This question follows on from my thread about using a pihole container, which conflicts with the OMV WebUI already running on port 80. @subzero79 suggested a docker macvlan as a possible solution, rather than changing the port of OMV's WebUI.
I'm new to docker, but was willing to try, so I read this ref: https://docs.docker.com/engine…king/get-started-macvlan/
I found this rather confusing, as the term "bridge" seems to be used in more than one context. I tend to think of a "bridge" in Linux as functioning like a switch.
Anyway, this is my OMV host network config after adding the docker plugin:
root@omv-vm:/# ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:2a:8e:6d:67
inet addr:172.17.0.1 Bcast:0.0.0.0 Mask:255.255.0.0
UP BROADCAST MULTICAST MTU:1500 Metric:1
RX packets:61785 errors:0 dropped:0 overruns:0 frame:0
TX packets:115126 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:3452582 (3.2 MiB) TX bytes:169811983 (161.9 MiB)
eth0 Link encap:Ethernet HWaddr 08:00:27:42:81:1e
inet addr:192.168.0.101 Bcast:192.168.0.255 Mask:255.255.255.0
UP BROADCAST RUNNING PROMISC MULTICAST MTU:1500 Metric:1
RX packets:137393 errors:0 dropped:76 overruns:0 frame:0
TX packets:80146 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:176467337 (168.2 MiB) TX bytes:17734792 (16.9 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:5935 errors:0 dropped:0 overruns:0 frame:0
TX packets:5935 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:2316033 (2.2 MiB) TX bytes:2316033 (2.2 MiB)
root@omv-vm:/#
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
default 192.168.0.254 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 * 255.255.0.0 U 0 0 0 docker0
192.168.0.0 * 255.255.255.0 U 0 0 0 eth0
(Docker adds rules to iptables and enables IP forwarding via sysctl.)
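For reference, those two host-side changes can be checked directly. This is only a sketch; it assumes the standard sysctl and iptables tools are present and degrades to a no-op where they are not:

```shell
# Docker enables IP forwarding so bridged containers can reach the LAN:
command -v sysctl >/dev/null 2>&1 && sysctl net.ipv4.ip_forward || true

# The NAT rules for published ports live in the DOCKER chain of the nat
# table (needs root; prints nothing if Docker has not created the chain):
command -v iptables >/dev/null 2>&1 && iptables -t nat -L DOCKER -n 2>/dev/null || true
```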
I created a docker macvlan network with this command:
root@omv-vm:/# docker network create -d macvlan --subnet=192.168.0.0/24 --gateway=192.168.0.254 -o parent=eth0 lan
This shows in the docker networks:
root@omv-vm:/# docker network ls
NETWORK ID NAME DRIVER SCOPE
51fe71160146 bridge bridge local
43f784196b34 host host local
f261e60ac158 lan macvlan local
e6d1e1879f31 none null local
root@omv-vm:/# docker network inspect lan
[
{
"Name": "lan",
"Id": "f261e60ac15846f28d59e99c34d6ad81615869ff089f10acda6b4ba0d632e344",
"Created": "2017-04-15T12:20:12.705849636+01:00",
"Scope": "local",
"Driver": "macvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/24",
"Gateway": "192.168.0.254"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {},
"Options": {
"parent": "eth0"
},
"Labels": {}
}
]
root@omv-vm:/#
If I've understood this correctly, you should be able to run a docker container with the arguments --net=lan and, for example, --ip=192.168.0.25,
and you should then be able to access it at 192.168.0.25, as this would be on the same subnet as the OMV box.
First question: can you use the docker plugin WebUI to create a container which uses a macvlan? If so, what details should be added to the "Network" section, and what goes in the "Extra Arguments" section? The various combinations I've tried do not appear to work.
Second question: in the docker plugin WebUI, what does the network mode option of "Bridge" actually refer to? I think it means the container
is given an IP in the same subnet as the docker host's "bridge" network, in the range 172.17.0.0/16, e.g. 172.17.0.2. Docker handles the forwarding of any exposed ports you choose to the host IP, which would normally be the IP of your OMV box.
So what are the correct settings in the docker plugin WebUI if you want to run your container on the macvlan?
Running it at the command line gives this:
root@omv-vm:/# docker run -p 53:53/tcp -p 53:53/udp -p 80:80 \
> --net=lan --ip=192.168.0.25 \
> --cap-add=NET_ADMIN -e ServerIP="192.168.0.25" -e WEBPASSWORD="testpass" -e VIRTUAL_HOST="192.168.0.25" \
> --name macpihole -d diginc/pi-hole:latest
273c21f63c01f75f70dfba21fc417b272c42c16fabfb5475c1c9c4e22e0f5f75
root@omv-vm:/# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
273c21f63c01 diginc/pi-hole:latest "/tini -- /start.sh" 6 seconds ago Up 5 seconds macpihole
root@omv-vm:/# docker network inspect lan
[
{
"Name": "lan",
"Id": "f261e60ac15846f28d59e99c34d6ad81615869ff089f10acda6b4ba0d632e344",
"Created": "2017-04-15T12:20:12.705849636+01:00",
"Scope": "local",
"Driver": "macvlan",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "192.168.0.0/24",
"Gateway": "192.168.0.254"
}
]
},
"Internal": false,
"Attachable": false,
"Containers": {
"273c21f63c01f75f70dfba21fc417b272c42c16fabfb5475c1c9c4e22e0f5f75": {
"Name": "macpihole",
"EndpointID": "c697d3c012142b182f10cb6b74bc86418904b6ae7b011fc5c18fa769ecbb79b5",
"MacAddress": "02:42:c0:a8:00:19",
"IPv4Address": "192.168.0.25/24",
"IPv6Address": ""
}
},
"Options": {
"parent": "eth0"
},
"Labels": {}
}
]
root@omv-vm:/# docker exec -it 273c21f63c01 bash
bash-4.3# ifconfig
eth0 Link encap:Ethernet HWaddr 02:42:C0:A8:00:19
inet addr:192.168.0.25 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:14 errors:0 dropped:0 overruns:0 frame:0
TX packets:21 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1229 (1.2 KiB) TX bytes:882 (882.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:56 errors:0 dropped:0 overruns:0 frame:0
TX packets:56 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:4376 (4.2 KiB) TX bytes:4376 (4.2 KiB)
The pihole container is running, as shown in the logs ... but it cannot be accessed at 192.168.0.25/admin.
At this point, I've no idea why this doesn't work. But I noticed that no "vethxxxxx" interface appeared in the OMV network config, as happens when a container runs in "Bridge" mode.
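For what it's worth, the missing "vethxxxxx" interface is expected with macvlan: the driver creates child interfaces directly on eth0 rather than veth pairs plugged into docker0. A known macvlan restriction may also be at play: the parent interface (i.e. the host itself) cannot talk to its own macvlan children, so the OMV box will not reach 192.168.0.25 even when other machines on the LAN can. A commonly suggested workaround is to give the host a macvlan child of its own; this is only a sketch, and the interface name macvlan-shim and the spare address 192.168.0.240 are assumptions you would adapt to your network:

```shell
# Give the host its own macvlan child interface so it can reach
# containers on the "lan" macvlan network (requires root; the shim
# name and the spare 192.168.0.240 address are illustrative).
if command -v ip >/dev/null 2>&1 && [ "$(id -u)" = "0" ] \
   && [ -d /sys/class/net/eth0 ]; then
    ip link add macvlan-shim link eth0 type macvlan mode bridge 2>/dev/null || true
    ip addr add 192.168.0.240/32 dev macvlan-shim 2>/dev/null || true
    ip link set macvlan-shim up 2>/dev/null || true
    # Route the container's address via the shim instead of eth0:
    ip route add 192.168.0.25/32 dev macvlan-shim 2>/dev/null || true
fi
```

Note this shim does not survive a reboot unless you add it to your interfaces configuration.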