Posts by pled29
-
Thanks, Gokapi looks interesting: a Docker version is available and it is exactly what I need (only the admin can upload files, plus a nice download link).
Will look at this tomorrow!
-
Hi,
I have OMV running, with nextcloud-mariadb-swag installed via Docker. All is fine.
I am not really happy with Nextcloud's "public share" feature, because such a link tries to open the file in the browser (typically a video), when I would like a simple "click to download" button.
I've seen that I can append "/download" to the Nextcloud public share link to force a download, but doing that starts the download immediately, which is not what I want.
So considering I already have swag (i.e. nginx) running, what is the best option to nicely share files with friends by giving them a simple HTTP link? I'm thinking about a simple dedicated Docker app for which I could create a subdomain, or something like that.
I don't want SFTP or a similar solution, because I want something straightforward and easy for them (no FileZilla install, etc.).
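For reference, since swag is just nginx, one minimal option is a location block that forces the Content-Disposition header so files are always downloaded instead of rendered. This is only a sketch: the /share/ URL and the on-disk path are made-up placeholders, not anything from my setup.

```nginx
# Hypothetical addition to swag's nginx config -- paths are placeholders
location /share/ {
    alias /data/shared/;                          # directory with files to share
    add_header Content-Disposition 'attachment';  # force "Save as..." instead of inline display
}
```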
Thank you for your advice.
-
Reviewing my requirement: considering that Homepage's access to Pihole is only used to display API stats (["queries", "blocked", "blocked_percent", "gravity"]), which is nice but not mandatory, and given the complexity of the problem, I will stay as I am: Pihole works fine as DNS & DHCP using the macvlan adapter, and that is by far the most important thing.
Cosmetic API information on my Homepage dashboard is something I can do without! Thank you for your help in understanding the problem.
-
Thank you for your answer. Yes, it is a Docker behaviour; I was expecting (dreaming?) that there might be some specific OMV parameter allowing a macvlan container to be pinged from another container...

So from the links you provided, it appears to be difficult with a macvlan network: no easy way! Looking at your second link: it is about setting up all containers to use the macvlan network (defining an ip-range when creating the interface), if my understanding is correct. That looks a bit challenging to me, and would require me to review all my containers. But it looks like the most efficient method...
I am still confused about which method I should use for my Docker Pihole server: as I want to use it as the DHCP server on my LAN, the other way is to use host networking mode, according to that doc (much easier). But they say "you may still have to deal with port conflicts", as Pihole will have the same IP as the host, i.e. as OMV.
I am not sure which kind of issue I would have to deal with. For example port 53: would OMV and Pihole be in conflict here?
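One way to answer that question before switching modes is to look at what already listens on the DNS and DHCP ports on the OMV host. A sketch, assuming a Debian-based host with ss and sudo available:

```shell
# Which process (if any) already binds DNS port 53 on the host?
# systemd-resolved or a local dnsmasq here would conflict with Pi-hole in host mode.
sudo ss -tulpn | grep ':53 '
# Same check for the DHCP server port:
sudo ss -ulpn | grep ':67 '
```

If both commands come back empty, host networking mode should not hit a port conflict for DNS/DHCP.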
-
Hi,
I just installed Pihole as a Docker container on my OMV box. I use it as the DHCP server on my LAN, so I created a macvlan network interface as below:
[
    {
        "Name": "macvlan-ntw",
        "Id": "81f2a0f1eebfa65771d4fd4b29d4aa81b99ed4b737988447a5adcecbb739d1ee",
        "Created": "2025-07-15T18:21:03.841818958+02:00",
        "Scope": "local",
        "Driver": "macvlan",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                { "Subnet": "192.168.1.0/24", "Gateway": "192.168.1.1" }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": { "Network": "" },
        "ConfigOnly": false,
        "Containers": {
            "43f3383965069737daeb4c200b57c109d5c2e805d74b4bc56272291a67998e8b": {
                "Name": "pihole",
                "EndpointID": "e79f2b53ab5af2bea42fbfba59fe021f28108b792e6ccb380700fc97ae1947d5",
                "MacAddress": "a2:fd:3c:cd:99:12",
                "IPv4Address": "192.168.1.19/24",
                "IPv6Address": ""
            }
        },
        "Options": { "parent": "enp1s0" },
        "Labels": {}
    }
]
And in the pihole compose.yaml:
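A service pinned to that macvlan network typically looks like the following. This is a reconstructed sketch based on the inspect output above (network name, parent NIC, and the 192.168.1.19 address), not necessarily the exact file used:

```yaml
# Sketch of a Pi-hole service attached to the pre-existing macvlan network
services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    cap_add:
      - NET_ADMIN            # required for the DHCP server role
    networks:
      macvlan-ntw:
        ipv4_address: 192.168.1.19
    restart: unless-stopped

networks:
  macvlan-ntw:
    external: true           # created beforehand with `docker network create`
```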
Pihole works fine, as DHCP and DNS on my LAN, so far so good.
-=-
But I am also using the Homepage dashboard (Docker), and that container cannot ping the pihole container. The Homepage container uses a standard bridge network.
[
    {
        "Name": "homepage_default",
        "Id": "0fe929fe972855ce488c014ac7799aac23c1450cc28b79991a7a28ecb3b3a783",
        "Created": "2025-06-24T15:57:06.656193195+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv4": true,
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                { "Subnet": "172.19.0.0/16", "Gateway": "172.19.0.1" }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": { "Network": "" },
        "ConfigOnly": false,
        "Containers": {
            "a3b56a1406836484ada691d0c67e04aafdfeeebd8bf5f25eaa558b0297d3673b": {
                "Name": "homepage_n150",
                "EndpointID": "9260afb9217420d7ca03411244672de6e8a8366390eddb0fb66c6ebde2065ad5",
                "MacAddress": "fe:23:96:25:83:72",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.config-hash": "60ce0d492a4086a6423246d60ff394236e5f809c245ae6f6eb5712ab783f2090",
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "homepage",
            "com.docker.compose.version": "2.34.0"
        }
    }
]
From what I have read, it is expected that the Homepage container cannot communicate with the pihole container due to the macvlan adapter: containers using macvlan cannot communicate with the host.
So what is the easiest way to allow the Homepage container to ping the pihole container?
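For what it's worth, one common workaround is a small macvlan "shim" interface on the host, which gives the host (and therefore bridged containers, which go through the host) a path to the macvlan network. A sketch, assuming the parent NIC enp1s0 from the inspect output above; 192.168.1.250 is a made-up spare LAN address, pick one outside your DHCP pool:

```shell
# Create a macvlan interface on the host, on the same parent NIC as the Docker network
ip link add macvlan-shim link enp1s0 type macvlan mode bridge
# Give it an unused LAN address (placeholder)
ip addr add 192.168.1.250/32 dev macvlan-shim
ip link set macvlan-shim up
# Route traffic for the Pi-hole container through the shim
ip route add 192.168.1.19/32 dev macvlan-shim
```

Note these commands do not survive a reboot; they would need a small systemd unit or interface hook script to be permanent.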
-
New container from https://github.com/Trigus42/alpine-qbittorrentvpn installed and running, no issues setting it up!

-
My version is 4.6.7, so I will definitely upgrade to the image you suggested above.
Thank you so much for all your advice, very useful!
-
Very clever tool! Although I have no UI on the NAS, I managed to open the magnet link using a Firefox extension ("Add link to qBittorrent WebUI") and that worked easily! And I can see the IP address is the VPN provider's and not my own address.
Very good test indeed, thank you !
-
Hmm, I have to answer no. How should I test this specifically?
-
Yes, through the WebUI I can select the network interface to work with (tun0). And the qBittorrent process will stop if the VPN connection goes down, if I remember correctly.
Thank you for the link. I know I should use a more actively maintained image; I will definitely consider this.
-
gderf ,
The compose file is almost the same as the old one, so I guess the issue is elsewhere... maybe related to the new machine's architecture?
- old machine is an odroid-hc2, an armv7l SBC
- new machine is the Beelink ME mini, N150 Intel x86-64. Btw, this machine has 2 network adapters, but I use only one, and only one is visible from OMV, so I guess this is not the issue.
The compose file is like below (using port 8080 now):
services:
  qbittorrent-openvpn:
    image: chrisjohnson00/qbittorrent-openvpn:latest
    container_name: qb-openvpn
    privileged: true
    sysctls:
      - net.ipv6.conf.all.disable_ipv6=0
    volumes:
      - /srv/dev-disk-by-uuid-f39c071a-91e2-4184-971f-a722a7697ca5/Download:/downloads
      - /srv/dev-disk-by-uuid-f39c071a-91e2-4184-971f-a722a7697ca5/AppData/Qbittorrent:/config
    environment:
      - VPN_ENABLED=yes
      - LAN_NETWORK=192.168.1.0/24
      - NAME_SERVERS=8.8.8.8,8.8.4.4
      - VPN_USERNAME=xxxxx
      - VPN_PASSWORD=yyyyy
      - PUID=996
      - PGID=100
      - WEBUI_PORT_ENV=8080
      - INCOMING_PORT_ENV=8999
    ports:
      - 8080:8080
      - 8999:8999
      - 8999:8999/udp
    restart: unless-stopped
Good news: it seems the new route is still present after restarting the container.
-
Still investigating my issue, and I found the root cause.
It seems to be a Docker image problem, related to the network route that gets defined. Fortunately, I still have my x32 (armv7) stack running, so I can compare. This is what I see:
Previous and working machine (odroidhc2 is the localhost where OMV is installed) :
# route -n
Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
0.0.0.0         10.8.0.1        128.0.0.0        UG    0      0      0 tun0
0.0.0.0         172.27.0.1      0.0.0.0          UG    0      0      0 eth0
10.8.0.0        0.0.0.0         255.255.0.0      U     0      0      0 tun0
128.0.0.0       10.8.0.1        128.0.0.0        UG    0      0      0 tun0
172.27.0.0      0.0.0.0         255.255.0.0      U     0      0      0 eth0
192.168.1.0     172.27.0.1      255.255.255.0    UG    0      0      0 eth0
193.32.126.81   172.27.0.1      255.255.255.255  UGH   0      0      0 eth0
New and not working machine (NAS-N150 is the localhost where OMV is installed):
# route -n
Kernel IP routing table
Destination     Gateway         Genmask          Flags Metric Ref  Use Iface
0.0.0.0         10.8.0.1        128.0.0.0        UG    0      0      0 tun0
0.0.0.0         172.20.0.1      0.0.0.0          UG    0      0      0 eth0
10.8.0.0        0.0.0.0         255.255.0.0      U     0      0      0 tun0
128.0.0.0       10.8.0.1        128.0.0.0        UG    0      0      0 tun0
172.20.0.0      0.0.0.0         255.255.0.0      U     0      0      0 eth0
193.32.126.82   172.20.0.1      255.255.255.255  UGH   0      0      0 eth0
You can see there is no 192.168.1.0/24 route on eth0, making my local network inaccessible from inside the Docker container.
So I added the missing route :
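The exact command isn't shown here, but comparing the two tables, the missing entry on the new machine is the 192.168.1.0/24 route via the bridge gateway. Reconstructed from the working table (an assumption, not the literal command used):

```shell
# Inside the qBittorrent container: send LAN traffic back out via eth0's gateway
ip route add 192.168.1.0/24 via 172.20.0.1 dev eth0
# Legacy equivalent (net-tools syntax, as used by `route -n` above):
# route add -net 192.168.1.0 netmask 255.255.255.0 gw 172.20.0.1
```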
And now QB WebUI is accessible !

I hope this change is permanent and that I will not have to enter that command after each startup... Not tested yet !
-
Hi,
[context]
I am trying to install a Docker qBittorrent+OpenVPN stack (headless qBittorrent client with WebUI and optional OpenVPN connection) on my new NAS running the latest OMV version.
I have the same stack running on my previous OMV NAS without issue; this Qb stack is just a more recent version (I am moving from my old x32 NAS to a new x64 NAS).
[issue]
The QB container looks like it is running ("Started qBittorrent daemon successfully"), but I cannot access the WebUI (no HTTP answer). The port defined is 9090.
NOTE: the container is launched via Dockge and is not managed inside OMV itself, but I can see the container running in the "Service-Compose-Containers" menu.
[troubleshooting]
When I scan the NAS ports from my desktop, I see this for port 9090:
$ nmap -sT 192.168.1.32
..
9090/tcp filtered zeus-admin
..
I tried port 8080 first, but it is the same: the port appears as "filtered". What does that mean?
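For context: in nmap, "filtered" means no reply at all came back for the probe (unlike "closed", where the target answers with a reset), which usually points to a firewall or routing problem rather than a crashed service. A quick way to narrow it down is to test on the NAS itself first (sketch):

```shell
# On the NAS: is anything actually listening on the WebUI port?
ss -tlnp | grep 9090
# Does it answer over loopback?
curl -sI http://127.0.0.1:9090
# If both work locally but the LAN scan still shows "filtered",
# the problem is routing/firewalling, not qBittorrent itself.
```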
Is that a firewall related issue? Thank you for your help.
-
I believe this is expected: it is no longer a supported version, so there is nothing to ask or fix (the Armbian guys are just a few months ahead of the official Debian Buster EOL).
Actually, my problem is how to perform an OMV7 fresh install, as I cannot find an image for the odroid-hc2 like I did for OMV5...
Probably the best option for me is to upgrade to OMV6, right?
Thank you.
-
Yes, I read that long post, with instructions on how to upgrade to OMV6 (or even OMV7) without needing the buster repo, if my understanding is correct. I will probably do it later, when I am ready for that.
What I mean is that the Buster repo is still removed from the official Armbian repo, and the error message is still there every morning on my OMV5.
I don't see in what way "armbian people fixed it".
-
Not sure this is solved... The error is still here this morning, and the armbian repository is still missing Buster...
-
Thanks for your help, it is now working:
- I first checked filesystem growth; the option is checked by default in gparted, so no problem there.
- Then I tried again using dd: I first zeroed the disk (if=/dev/zero), then burned the image specifying the block size (bs=512). Exactly the same result and the same issues!
- Finally, I used Clonezilla to create a new image, then restored it to the new SD card using the -k1 and -r options (expert mode), which extend the partition and the filesystem at the same time: perfect restart, no issue!
Happy Christmas !!

-
Hi,
I am having trouble trying to move my 8 GB microSD used by OMV to a new and larger one (16 GB), as OMV is now using 85% of the first one.
OMV is running on an Odroid-HC2; I installed Docker, then the Plex, Nextcloud, MariaDB and Swag containers. I have one 1 TB SATA HD for data.
Here is what I did :
- Shut down OMV, took the SD card to my desktop, and used the dd command to create an image (.img).
- Still on the desktop, used BalenaEtcher to burn the image, then gparted to extend the partition to 16 GB.
- Booted with the new SD card; ssh login is fine.
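The dd steps in the list above can be sketched like this. Device names are placeholders; double-check with lsblk before writing anything, as dd will happily overwrite the wrong disk:

```shell
# On the desktop, with the old 8 GB card inserted (assume it shows up as /dev/sdX):
sudo dd if=/dev/sdX of=omv-8gb.img bs=4M status=progress
sync
# Later, write the image to the new 16 GB card (assume /dev/sdY),
# then grow the partition with gparted:
sudo dd if=omv-8gb.img of=/dev/sdY bs=4M status=progress
sync
```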
Issues :
- the mariadb container is gone (it seems to have disappeared from Portainer), so Nextcloud shows "internal server error"!! (very strange to me!)
- an issue with one Plex library: "unexpected error loading library" (while the other libraries load fine)
So I shut down OMV and put the 8 GB SD card back: everything is back, without any error. This is a bit strange to me; I am a bit lost. I may have missed something...
Any idea what happened? What is the best method to move OMV to a new, larger disk?
Thank you for your help.
-
Replying to myself: the issue was related to "NAT loopback" (aka NAT hairpinning): access to the Nextcloud web page was working outside of my LAN, but not inside. This is solved by configuring your router (if possible) or by setting up your own DNS to resolve your domain names locally. I had Pihole running on my LAN, and I use it for that purpose: just add "nextcloud.odroidhc2.mydomain.fr" and "odroidhc2.mydomain.fr" to the Local DNS records, pointing to the internal 192.168.x.y IP address.
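On Pi-hole v5, those web-UI "Local DNS Records" end up in a plain hosts-style file, so the equivalent of the entries above is simply (a sketch; the 192.168.x.y placeholder is kept from the post and should be your NAS's real LAN address):

```
# /etc/pihole/custom.list -- Pi-hole v5 "Local DNS Records" file
192.168.x.y  odroidhc2.mydomain.fr
192.168.x.y  nextcloud.odroidhc2.mydomain.fr
```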