omvextrasorg error after upgrade to 5

  • and a docker network ls might help as well to further nail down the problem.

    Sure thing



    Code
    root@HAL:/srv/dev-disk-by-label-Docker/AppData# docker network ls
    NETWORK ID     NAME                       DRIVER    SCOPE
    e78320268df8   BEDELL                     macvlan   local
    9f22d5167a6b   bridge                     bridge    local
    edda285bdf7e   host                       host      local
    1328fe871c8d   my-net                     bridge    local
    cfe423b9b7a8   none                       null      local
    2c85917c67f6   unify-controller_default   bridge    local
  • At least your Pihole container expects its volumes here:

    Code
          "HostConfig": {
                "Binds": [
                    "/etc/localtime:/etc/localtime:ro",
                    "/sharedfolders/AppData/Pihole:/etc/pihole:rw",
                    "/sharedfolders/AppData/Pihole/DNSMasq:/etc/dnsmasq.d:rw"
                ],


    If they are not there, docker creates the directories, but obviously the content is missing. This may well be the reason for the containers not starting.
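
    A quick way to double-check which binds a container expects (read-only; "Pihole" as the container name is an assumption on my side):

    Code
    docker inspect --format '{{ json .HostConfig.Binds }}' Pihole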


    You have these options:

    1. try a symlink mapping /srv/dev-disk-by-label-Docker/AppData to /sharedfolders/AppData and see if your containers start
      ln -s /srv/dev-disk-by-label-Docker/AppData /sharedfolders/AppData
    2. try to extract a docker-compose.yml from each container, edit it, and re-create the containers
      docker run --rm -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/red5d/docker-autocompose <container_name>
    3. edit the file config.v2.json in each container's directory under /srv/dev-disk-by-label-Docker/Docker/containers/ and replace /sharedfolders/AppData with /srv/dev-disk-by-label-Docker/AppData in all places (a sed sketch follows below)

    If 1) works, it is the easiest; 2) is the safest and will get the permissions right if you start all over with a fresh docker directory; and 3) is the riskiest.
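
    For 3), a rough sketch with sed (my assumption: stop the docker daemon first, otherwise it rewrites the files on shutdown; keep backups):

    Code
    systemctl stop docker
    # one config.v2.json per container; back up, then swap the path everywhere
    for f in /srv/dev-disk-by-label-Docker/Docker/containers/*/config.v2.json; do
      cp "$f" "$f.bak"
      sed -i 's|/sharedfolders/AppData|/srv/dev-disk-by-label-Docker/AppData|g' "$f"
    done
    systemctl start docker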


    Here is how to extract the docker-compose.yml files for all containers at once:

    Code
    cd
    mkdir -p compose-files
    cd compose-files
    for i in $(docker ps -a --format "{{ .Names }}"); do docker run --rm -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/red5d/docker-autocompose "$i" > "$i.docker-compose.yml"; done
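
    Once the files are there, a container can be re-created from its file, roughly like this (the file name comes from the loop above; adjust any /sharedfolders paths inside the file first):

    Code
    docker-compose -f <container_name>.docker-compose.yml up -d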


  • Thanks again, I gave #1 a whirl.



    Code
    Failure
    OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/s6-init": permission denied: unknown


    I got a slightly different error this time.


  • So you are back in business. Time to clean up permissions:

    Code
    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.


    Don't know what you did to your AppData Directory:


    drwxrwsrwx+ 3 openmediavault-webgui root 4096 Jan 8 10:04 Pihole


    Remove the ACLs (recursively),

    remove the SGID bit (recursively as well, I bet), then try again.


    And the owner looks strange as well.
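
    To see which ACL entries are actually set before removing them (a read-only check, run inside the AppData directory):

    Code
    getfacl -R Pihole | less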



  • It may have been when I was playing with having the directory be root/root. To be honest, the permissions are a bit over my head.


    What's the best way to remove the ACLs? I googled it and found setfacl. Same with the SGID bit - this is all a bit new to me.

  • None of the commands we used would create ACLs. This was done using the UI.


    Remove the ACLs recursively: sudo setfacl -R -b Pihole

    Remove the SGID bit recursively: sudo chmod -R g-s Pihole


    Both executed inside the AppData directory.
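
    If it worked, the listing should show neither the trailing + (the ACLs) nor the s in the group bits (the SGID bit):

    Code
    ls -ld Pihole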


  • Code
    drwxrwxrwx   3 openmediavault-webgui root  4096 Jan  8 10:04 Pihole


    Ok, executed in /srv/..../AppData. Is that the result you were expecting? I still got the same error when trying to start it.



  • To me it looks like it is working (somehow) and the healthcheck is killing the container, because pi.hole cannot be found in the DNS.

    The healthcheck fails and the container gets killed. To me this looks like misconfigured DNS inside the container.
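
    One way to confirm the healthcheck theory (shows the last probe results docker recorded; <container_name> is whatever your Pihole container is called):

    Code
    docker inspect --format '{{ json .State.Health }}' <container_name>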


    One last try, before Option 2.


    Go into the directory of the container and make a backup copy of config.v2.json.

    Then edit the file config.v2.json.


    Replace "S6_LOGGING=0" with "S6_LOGGING=1" in ENV and

    "dig @127.0.0.1 pi.hole || exit 1" with "dig @127.0.0.1 pi.hole || exit 0" in Healthcheck Test


    Then start again.


    Or you could try to start a different container.



    Edit: the editor replaced characters with symbols; undone.



  • You know, in doing my research yesterday I did come across a thread where someone had a similar issue to mine, and it turned out to be a network issue. Before I did this on Saturday I had to decouple ddwrt from Pihole so that my network didn't fall down when I took OMV down. I don't have any reason to think there is a network issue on the server, but I guess it is something to keep in mind. I double-checked the network settings over the weekend - there is nothing referencing Pihole in any way.


    Anyway, I am a bit confused. Wouldn't the directory of the container be /srv/.../../containers? I only see containers in that folder, not config.v2.json. I am assuming you meant a different directory?

  • It is inside docker/containers/<very_long_number>.



    I am not saying that the network of the server is bogus, but something in the combination with Pihole is.
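
    The very long number is the full container ID; it can be looked up like this:

    Code
    docker ps -a --no-trunc --format "{{ .ID }}  {{ .Names }}"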



  • Ok, I noticed something that had not occurred to me before. By default, timestamps do not show in Portainer for the Docker logs. When I turned them on, I saw that the logs I had referenced earlier were from over the weekend. At some point the containers did run, just not recently.


    I made the changes that you asked for and kicked it off again. The log did not update and I am getting this error:

    Code
    docker start f2c44d081905629955633807f61c09f28a61cd67802e28c0daacae727ac832e0
    Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "/s6-init": permission denied: unknown
    Error: failed to start containers: f2c44d081905629955633807f61c09f28a61cd67802e28c0daacae727ac832e0
  • Somehow the permissions are seriously screwed up; time for option 2 from #62.

    It would take too much time to fix them.


  • Again, thanks so much for your time with this. I tried option 2 and was still getting errors.


    I did try something a little bit different. In all of my configs the files reference /sharedfolders for the app configuration (I can see this in the Portainer UI). So I opened up the config files for Glances, changed /sharedfolder to /, and saved the files. For some reason I did not see the changes reflected in the Portainer UI for Glances (I re-opened the files several times to verify that the changes had saved).


    So in the Portainer UI I used the option to make a duplicate. Before saving, I changed the config path to /srv and then deployed it. Sure enough, the container started up. However, it is not accessible via the web. Upon inspecting the /Glances folder that I was working in a few minutes ago, I see that it is completely empty. I don't know if this is a clue or not. Is there something else I can look at? Is there a way to completely blow away Docker and start all of this from scratch?

  • Have you been able to get the docker-compose.yml for your containers? Once you have the files, starting all over is easy.
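
    Starting all over could look roughly like this (a sketch; assumes the docker storage path from earlier in this thread and compose files with the /sharedfolders paths already fixed):

    Code
    systemctl stop docker
    # keep the old state around as a backup instead of deleting it
    mv /srv/dev-disk-by-label-Docker/Docker /srv/dev-disk-by-label-Docker/Docker.old
    systemctl start docker
    cd ~/compose-files
    docker-compose -f <container_name>.docker-compose.yml up -d   # repeat per container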


  • I had tried that earlier and it errored out. However, I think I solved part of the puzzle: in some places there were still references to /sharedfolders. While browsing the OMV UI I noticed that the permissions on SharedFolders were not right. Once I fixed this, all of the containers were able to start.


    But now I seem to have a different issue - the configs aren't being honored. For example, SABnzbd started me from the beginning as if it were a first-time run. In the Portainer UI I can clearly see that the /config path is /sharedfolder/AppData/Sabnzbd (and all of my other settings are there), but when I went through the wizard it did not update that file. I know that it was created somewhere, because the basic settings I entered in the wizard did in fact save. If I open the container folder that Portainer shows, I can see that the two .json files have new timestamps and contain all of my original configuration. However, the app is clearly looking somewhere else for the sabnzbd.ini file. I have had a similar experience with the other containers I tried. Any ideas on how to debug this?
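
    One way to see what the container actually mounts and what it finds at /config (container name assumed to be sabnzbd):

    Code
    docker inspect --format '{{ json .Mounts }}' sabnzbd
    docker exec sabnzbd ls -la /config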

  • muchgooder added the label resolved.
