OMV5 => OMV6 => OMV7: not able to reach Docker container ports

  • Hi,


    I upgraded from OMV5 to 6 and to 7.

    Since the update to 6 I have been unable to reach my containers' web ports (Portainer, Domoticz, qBittorrent, etc.). For example, Domoticz itself is working (I can see its scheduled actions happening) but I cannot access the web front end on port 8081 (Docker has -p 8081:8081 for it).
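
    For reference, the mapping can be double-checked from the host with docker port (a minimal sketch, assuming the container is simply named domoticz):

    Code
    # show which host address/port Docker publishes for the container's internal 8081
    docker port domoticz 8081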


    Any clue ?


    Thanks

    amd64 NAS server | OMV5 => OMV6 => OMV7, issue on accessing container from IP:Port since OMV6

  • chente

    Approved the thread.
  • No, the IP is correct; the OMV web GUI, the SMB share and SSH all work with the same IP.


  • To sum up :

    OMV web gui on port 80 is working

    smb share is working

    ssh on port 22 is working

    but

    the domoticz container on port 8081 does not respond (Firefox times out)

    the qbittorrent container on port 8080, same issue

    the portainer container on port 9000, same issue (a quick curl check is sketched below)
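
    If it helps narrow things down, this is the kind of quick check I can run (a sketch; 192.168.0.200 is the NAS address, 8081 is the Domoticz port from above):

    Code
    # on the NAS itself: does the container answer on the published port?
    curl -I http://127.0.0.1:8081
    # from another machine on the LAN: does the same request get through?
    curl -I http://192.168.0.200:8081

    If the first one answers and the second times out, the problem is between the LAN and the host rather than inside Docker.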


  • Code
    arnaud@serveur:~$ docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    625d22a220ab lscr.io/linuxserver/qbittorrent:latest "/init" 2 days ago Up 3 hours 0.0.0.0:6881->6881/tcp, :::6881->6881/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:6881->6881/udp, :::8080->8080/tcp, :::6881->6881/udp qbittorrent
    501ce0b94b77 portainer/portainer-ce:latest "/portainer" 4 days ago Up 3 hours 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 9443/tcp portainer
    0378f3940a2e lscr.io/linuxserver/ubooquity:latest "/init" 10 days ago Up 3 hours 0.0.0.0:2202-2203->2202-2203/tcp, :::2202-2203->2202-2203/tcp ubooquity
    c4a47ed2af35 lscr.io/linuxserver/plex:latest "/init" 12 days ago Up 3 hours plex
    5b75085265f4 mysteriumnetwork/myst:latest "/usr/local/bin/dock…" 3 weeks ago Up 12 seconds 0.0.0.0:1194->1194/udp, :::1194->1194/udp, 0.0.0.0:59950-60000->59950-60000/udp, :::59950-60000->59950-60000/udp, 0.0.0.0:4449->4449/tcp, :::4449->4449/tcp Serveur0Mysterium
    c91aae0ac9bd iproyal/pawns-cli:latest "/pawns-cli -email=a…" 7 weeks ago Up 3 hours Serveur0IPRoyal
    271534b9a047 domoticz/domoticz:stable "docker-entrypoint.s…" 3 months ago Up 3 hours 443/tcp, 0.0.0.0:3433->3433/tcp, :::3433->3433/tcp, 8080/tcp, 0.0.0.0:8081->8081/tcp, :::8081->8081/tcp domoticz
    cda790a6779c repocket/repocket:latest "/bin/sh -c 'node di…" 3 months ago Restarting (0) 12 seconds ago Serveur0RePocket
    545442ed96e1 earnfm/earnfm-client:latest "/app/earnfm_example" 4 months ago Up 3 hours Serveur0EarnFM
    d3c744fb4c55 packetstream/psclient:latest "/usr/local/bin/psla…" 4 months ago Up 3 hours Serveur0PacketStream
    arnaud@serveur:~$



  • Code
    arnaud@serveur:~$ docker container logs portainer
    2024/10/13 03:53PM INF github.com/portainer/portainer/api/http/server.go:354 > starting HTTP server | bind_address=:9000
    arnaud@serveur:~$ 

    Nothing in particular; IPv6 is disabled on the NAS.


  • OK, I stopped all my containers and tried the compose plugin with a simple nginx hello-world container.


    Code
    services:
      web:
        image: dockerbogo/docker-nginx-hello-world
        ports:
          - 8082:80

    Not accessible via <serverIP>:8082 either... I'm stuck!
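
    One more way I can split the problem (a sketch; <container_name> and <container_ip> are placeholders for whatever docker ps and the inspect command return):

    Code
    # get the container's IP on the Docker bridge
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' <container_name>
    # hit nginx directly on the container IP, bypassing the published port
    curl -I http://<container_ip>:80

    If the container answers on its bridge IP but not on 127.0.0.1:8082, the issue would be in the port publishing (docker-proxy / iptables) rather than in nginx itself.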


  • arno05

    Added the Label OMV 7.x
  • arno05

    Added the Label Upgrade 6.x -> 7.x
  • Hi,


    Followed some advice regarding Docker and deactivated AppArmor:

    Code
    arnaud@serveur:~$ cat /proc/cmdline
    BOOT_IMAGE=/boot/vmlinuz-6.1.0-26-amd64 root=UUID=82bdfe32-d0b3-43e6-a535-fabebd56cf4c ro quiet apparmor=0

    Docker restarted, no longer using AppArmor as a security option.

    Ran the simple nginx hello-world container:

    Code
    arnaud@serveur:~$ docker -v -D run -p 8082:80 -d dockerbogo/docker-nginx-hello-world
    9a39ede90d50fc1095f263d5f8630a96a26a05c382161c29608071e60d78f2d9
    arnaud@serveur:~$


    Trying to access it with Firefox at IP:8082 I still get "website not available, check the URL"!

    What is wrong??
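
    For completeness, the next things I plan to look at are the NAT rules Docker installs for published ports (standard iptables commands; I have not added any firewall rules myself):

    Code
    # DNAT rules Docker creates for each -p published port
    sudo iptables -t nat -L DOCKER -n
    # FORWARD chain: Docker inserts its own chains here and the default policy matters
    sudo iptables -L FORWARD -n --line-numbers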


    • Official Post

    What is the output of:

    dpkg -l | grep docker

    docker network ls


    Have you made any firewall changes?

    omv 7.4.14-1 sandworm | 64 bit | 6.11 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.15 | compose 7.2.16 | k8s 7.3.1-1 | cputemp 7.0.2 | mergerfs 7.0.5 | scripts 7.0.9


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Code
    arnaud@serveur:~$ dpkg -l | grep docker
    ii  docker-buildx-plugin                             0.17.1-1~debian.12~bookworm               amd64        Docker Buildx cli plugin.
    ii  docker-ce                                        5:27.3.1-1~debian.12~bookworm             amd64        Docker: the open-source application container engine
    ii  docker-ce-cli                                    5:27.3.1-1~debian.12~bookworm             amd64        Docker CLI: the open-source application container engine
    ii  docker-compose-plugin                            2.29.7-1~debian.12~bookworm               amd64        Docker Compose (V2) plugin for the Docker CLI.
    ii  python3-docker                                   5.0.3-1                                   all          Python 3 wrapper to access docker.io's control socket
    arnaud@serveur:~$ 

    and

    Code
    arnaud@serveur:~$ docker network ls
    NETWORK ID     NAME                DRIVER    SCOPE
    4fe51f54b02c   bridge              bridge    local
    1243943bce3f   host                host      local
    141f0ebd5e34   none                null      local
    ed6a01c86025   pis.bridge.no_icc   bridge    local
    arnaud@serveur:~$ 
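
    Something else I can check on my side, since the networks themselves look normal (a sketch; these kernel settings are what Docker's port publishing relies on, and 1 is the value I would expect):

    Code
    # packet forwarding must be enabled for published ports to work
    sysctl net.ipv4.ip_forward
    # bridged traffic must be handed to iptables (needs the br_netfilter module)
    sysctl net.bridge.bridge-nf-call-iptables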


    • Official Post

    Any firewall changes?

    sudo iptables -L

    sudo ss -tulpn


  • I don't know much about ss -tulpn, but the output looks bizarre to me...

    Posting it in the next message because it is too long.


  • Code
    arnaud@serveur:~$ sudo ss -tulpn
    Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
    udp UNCONN 0 0 127.0.0.54:53 0.0.0.0:* users:(("systemd-resolve",pid=1924,fd=19))
    udp UNCONN 0 0 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=1924,fd=17))
    udp UNCONN 0 0 0.0.0.0:49239 0.0.0.0:* users:(("python3",pid=3549,fd=20))
    udp UNCONN 0 0 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=1923,fd=5),("systemd",pid=1,fd=79))
    udp UNCONN 0 0 0.0.0.0:123 0.0.0.0:* users:(("chronyd",pid=3490,fd=7))
    udp UNCONN 0 0 127.0.0.1:323 0.0.0.0:* users:(("chronyd",pid=3490,fd=5))
    udp UNCONN 0 0 172.20.0.1:3702 0.0.0.0:* users:(("python3",pid=3549,fd=21))
    udp UNCONN 0 0 239.255.255.250:3702 0.0.0.0:* users:(("python3",pid=3549,fd=19))
    udp UNCONN 0 0 172.17.0.1:3702 0.0.0.0:* users:(("python3",pid=3549,fd=17))
    udp UNCONN 0 0 239.255.255.250:3702 0.0.0.0:* users:(("python3",pid=3549,fd=15))
    udp UNCONN 0 0 192.168.0.200:3702 0.0.0.0:* users:(("python3",pid=3549,fd=9))
    udp UNCONN 0 0 239.255.255.250:3702 0.0.0.0:* users:(("python3",pid=3549,fd=7))
    udp UNCONN 0 0 0.0.0.0:53803 0.0.0.0:* users:(("python3",pid=3549,fd=8))
    udp UNCONN 0 0 0.0.0.0:54462 0.0.0.0:* users:(("python3",pid=3549,fd=16))
    udp UNCONN 0 0 0.0.0.0:5353 0.0.0.0:* users:(("avahi-daemon",pid=2596,fd=12))
    udp UNCONN 0 0 0.0.0.0:5355 0.0.0.0:* users:(("systemd-resolve",pid=1924,fd=11))
    udp UNCONN 0 0 0.0.0.0:44281 0.0.0.0:* users:(("avahi-daemon",pid=2596,fd=14))
    udp UNCONN 0 0 [::]:111 [::]:* users:(("rpcbind",pid=1923,fd=7),("systemd",pid=1,fd=84))
    udp UNCONN 0 0 [::1]:323 [::]:* users:(("chronyd",pid=3490,fd=6))
    udp UNCONN 0 0 [::]:34896 [::]:* users:(("avahi-daemon",pid=2596,fd=15))
    udp UNCONN 0 0 *:35459 *:* users:(("python3",pid=3549,fd=24))
    udp UNCONN 0 0 [fe80::b085:9aff:fe20:fff6]%veth6eebcc4:3702 [::]:* users:(("python3",pid=3549,fd=13))
    udp UNCONN 0 0 [ff02::c]%veth6eebcc4:3702 [::]:* users:(("python3",pid=3549,fd=11))
    udp UNCONN 0 0 [fe80::42:9cff:fe06:e40c]%br-ed6a01c86025:3702 [::]:* users:(("python3",pid=3549,fd=37))
    udp UNCONN 0 0 [ff02::c]%br-ed6a01c86025:3702 [::]:* users:(("python3",pid=3549,fd=35))
    udp UNCONN 0 0 [fe80::42:69ff:fe0a:9a68]%docker0:3702 [::]:* users:(("python3",pid=3549,fd=25))
    udp UNCONN 0 0 [ff02::c]%docker0:3702 [::]:* users:(("python3",pid=3549,fd=23))
    udp UNCONN 0 0 *:37012 *:* users:(("python3",pid=3549,fd=12))
    udp UNCONN 0 0 [::]:5353 [::]:* users:(("avahi-daemon",pid=2596,fd=13))
    udp UNCONN 0 0 [::]:5355 [::]:* users:(("systemd-resolve",pid=1924,fd=13))
    udp UNCONN 0 0 *:60733 *:* users:(("python3",pid=3549,fd=36))

  • ...and the rest:

    Code
    tcp LISTEN 0 5 192.168.0.200:5357 0.0.0.0:* users:(("python3",pid=3549,fd=10))
    tcp LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=3491,fd=3))
    tcp LISTEN 0 511 0.0.0.0:80 0.0.0.0:* users:(("nginx",pid=3628,fd=7),("nginx",pid=3627,fd=7),("nginx",pid=3625,fd=7),("nginx",pid=3624,fd=7),("nginx",pid=3623,fd=7))
    tcp LISTEN 0 4096 0.0.0.0:111 0.0.0.0:* users:(("rpcbind",pid=1923,fd=4),("systemd",pid=1,fd=78))
    tcp LISTEN 0 4096 127.0.0.54:53 0.0.0.0:* users:(("systemd-resolve",pid=1924,fd=20))
    tcp LISTEN 0 511 0.0.0.0:443 0.0.0.0:* users:(("nginx",pid=3628,fd=9),("nginx",pid=3627,fd=9),("nginx",pid=3625,fd=9),("nginx",pid=3624,fd=9),("nginx",pid=3623,fd=9))
    tcp LISTEN 0 4096 0.0.0.0:5355 0.0.0.0:* users:(("systemd-resolve",pid=1924,fd=12))
    tcp LISTEN 0 50 192.168.0.200:445 0.0.0.0:* users:(("smbd",pid=3621,fd=29))
    tcp LISTEN 0 50 192.168.0.200:139 0.0.0.0:* users:(("smbd",pid=3621,fd=30))
    tcp LISTEN 0 50 127.0.0.1:445 0.0.0.0:* users:(("smbd",pid=3621,fd=31))
    tcp LISTEN 0 4096 0.0.0.0:8082 0.0.0.0:* users:(("docker-proxy",pid=369909,fd=4))
    tcp LISTEN 0 100 127.0.0.1:25 0.0.0.0:* users:(("master",pid=3839,fd=13))
    tcp LISTEN 0 50 127.0.0.1:139 0.0.0.0:* users:(("smbd",pid=3621,fd=32))
    tcp LISTEN 0 4096 127.0.0.53%lo:53 0.0.0.0:* users:(("systemd-resolve",pid=1924,fd=18))
    tcp LISTEN 0 5 172.17.0.1:5357 0.0.0.0:* users:(("python3",pid=3549,fd=18))
    tcp LISTEN 0 5 172.20.0.1:5357 0.0.0.0:* users:(("python3",pid=3549,fd=22))
    tcp LISTEN 0 50 [::1]:445 [::]:* users:(("smbd",pid=3621,fd=33))
    tcp LISTEN 0 50 [::1]:139 [::]:* users:(("smbd",pid=3621,fd=34))
    tcp LISTEN 0 100 [::1]:25 [::]:* users:(("master",pid=3839,fd=14))
    tcp LISTEN 0 128 [::]:22 [::]:* users:(("sshd",pid=3491,fd=4))
    tcp LISTEN 0 511 [::]:80 [::]:* users:(("nginx",pid=3628,fd=8),("nginx",pid=3627,fd=8),("nginx",pid=3625,fd=8),("nginx",pid=3624,fd=8),("nginx",pid=3623,fd=8))
    tcp LISTEN 0 4096 [::]:111 [::]:* users:(("rpcbind",pid=1923,fd=6),("systemd",pid=1,fd=83))
    tcp LISTEN 0 511 [::]:443 [::]:* users:(("nginx",pid=3628,fd=10),("nginx",pid=3627,fd=10),("nginx",pid=3625,fd=10),("nginx",pid=3624,fd=10),("nginx",pid=3623,fd=10))
    tcp LISTEN 0 16 *:3493 *:* users:(("upsd",pid=3530,fd=4))
    tcp LISTEN 0 4096 [::]:5355 [::]:* users:(("systemd-resolve",pid=1924,fd=14))
    tcp LISTEN 0 4096 [::]:8082 [::]:* users:(("docker-proxy",pid=369920,fd=4))
    tcp LISTEN 0 5 [fe80::42:69ff:fe0a:9a68]%docker0:5357 [::]:* users:(("python3",pid=3549,fd=26))
    tcp LISTEN 0 5 [fe80::b085:9aff:fe20:fff6]%veth6eebcc4:5357 [::]:* users:(("python3",pid=3549,fd=14))
    tcp LISTEN 0 5 [fe80::42:9cff:fe06:e40c]%br-ed6a01c86025:5357 [::]:* users:(("python3",pid=3549,fd=38))
    arnaud@serveur:~$


    • Official Post

    Everything looks correct. docker and ss show that the port is open, and iptables says it is allowing traffic. I can't think of anything else to look at; you would have to try something like Wireshark to get any more info.
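
    A capture on the server while you retry from the browser would at least show whether the requests arrive at all; something like this with tcpdump, the command-line counterpart to Wireshark (replace the interface name with your LAN NIC):

    Code
    sudo tcpdump -ni <lan_interface> tcp port 8082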

