Hi
I run OMV on a PC of mine together with docker (docker-compose plugin)
OMV itself is accessible via the br0 bridge interface (192.168.2.45)
I have two containers, jellyfin and pihole (192.168.2.20 and 192.168.2.30 respectively), which use a Docker macvlan network (named netbr0). This works with no issues... until the host itself attempts to access these two containers.
To resolve this, a host-side macvlan interface must be added so the host can reach the macvlan containers (a known macvlan limitation: the parent interface cannot talk directly to its macvlan children)
(https://www.networkshinobi.com…iners-running-on-macvlan/)
I ran the following commands:
ip link add netbr0 link br0 type macvlan mode bridge
ip addr add 192.168.2.43/32 dev netbr0
I then brought this interface up (ip link set netbr0 up).
If my understanding is correct, this creates a new macvlan interface (also named netbr0) on top of br0, in bridge mode so traffic can flow between it and the containers' macvlan endpoints. This interface has the IP 192.168.2.43.
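As a quick sanity check (not from the article, just how I'd verify it), `ip -d link` prints the interface type and mode, which should include "macvlan mode bridge":

```shell
# Print detailed link info for the shim; "-d" includes the macvlan mode.
# Falls back to a message if the interface doesn't exist (e.g. after a reboot).
ip -d link show netbr0 2>/dev/null || echo "netbr0 not present"
```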
Then, per that webpage, I added host routes to my containers via this interface (the second line originally had a copy-paste duplicate; the container IPs are .20 and .30):
ip route add 192.168.2.20/32 dev netbr0
ip route add 192.168.2.30/32 dev netbr0
This works. I'm not sure it's the best way to go about it, but OMV can now ping both containers, and I can run tailscale on OMV and successfully connect to them.
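Putting the steps together, here's a dry-run sketch of the whole sequence (interface names and addresses are the ones above; I've written the container entries as ip route add, which is what the article describes; the run() wrapper only prints each command, since the real ones need root):

```shell
# Dry-run sketch of the whole setup, using the names/addresses above.
# run() only prints each command; drop the wrapper (and run as root) to apply.
PARENT=br0                                # existing OMV bridge
SHIM=netbr0                               # host-side macvlan "shim" interface
SHIM_ADDR=192.168.2.43/32                 # address for the shim
CONTAINERS="192.168.2.20 192.168.2.30"    # jellyfin, pihole

run() { echo "$@"; }                      # dry-run: print instead of executing

run ip link add "$SHIM" link "$PARENT" type macvlan mode bridge
run ip addr add "$SHIM_ADDR" dev "$SHIM"
run ip link set "$SHIM" up
for c in $CONTAINERS; do
  run ip route add "$c/32" dev "$SHIM"
done
```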
In summary:
Problem: these changes are not persistent and are lost on every reboot. If my understanding is correct, OMV manages its network configuration with netplan.
I found these two config files under /etc/netplan:
10-openmediavault-default.yaml, 60-openmediavault-br0.yaml
Here is where I get confused: should I create a new YAML file or edit 60-openmediavault-br0.yaml?
Looking at the tutorials in the netplan documentation, I'm not sure which situation applies to my case (the "Introduction - Netplan documentation" page, perhaps?).
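In case netplan turns out not to support this at all (I couldn't find a macvlan device type in its documentation, only ethernets/bridges/bonds/vlans and similar), one fallback I'm considering is a systemd oneshot unit that replays the ip commands at boot. The unit name, path, and ip binary location below are my own guesses, not anything OMV-specific:

```ini
# /etc/systemd/system/macvlan-shim.service (hypothetical name and path)
[Unit]
Description=Host macvlan shim for reaching Docker macvlan containers
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/sbin/ip link add netbr0 link br0 type macvlan mode bridge
ExecStart=/usr/sbin/ip addr add 192.168.2.43/32 dev netbr0
ExecStart=/usr/sbin/ip link set netbr0 up
ExecStart=/usr/sbin/ip route add 192.168.2.20/32 dev netbr0
ExecStart=/usr/sbin/ip route add 192.168.2.30/32 dev netbr0
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

I'd still prefer the "proper" netplan way if one exists, so please correct me if this is the wrong direction.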
Attached below you will find a full system report and the contents of the netplan config file. I would appreciate any help.
system report
netplan 60-openmediavault-br0.yaml