Four Docker containers suddenly stopped working

  • linuxserver/calibre - Docker Image | Docker Hub

    User / Group Identifiers

    When using volumes (-v flags) permissions issues can arise between the host OS and the container; we avoid this issue by allowing you to specify the user PUID and group PGID.

    Ensure any volume directories on the host are owned by the same user you specify and any permissions issues will vanish like magic.

    In this instance PUID=1000 and PGID=1000. To find yours, use id user as below:

    Code
      $ id username
      uid=1000(dockeruser) gid=1000(dockergroup) groups=1000(dockergroup)
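
    In docker-compose, these typically appear as environment variables on the service. A minimal sketch for calibre follows - the image tag and host path here are illustrative (the path mirrors the one mentioned later in this thread), so adjust to your own layout:

    Code
      services:
        calibre:
          image: linuxserver/calibre
          environment:
            - PUID=1000
            - PGID=1000
          volumes:
            - /srv/dev-disk-by-label-HOMEMEDIA/AppData/Calibre:/config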

    Thank you again for taking the time out to reply. I can't tell you how much I appreciate it.


    I just spent the last hour googling pieces of your post. I think the people who are experts at this sometimes have a hard time connecting with weekend warriors like myself, because they don't realize how little we understand about this area. I also think that's why you see so many people opening up permissions wide: they don't fully understand what they are doing, the guides tend to skimp on information in this area, and they don't want to be admonished here for opening something up that they did not need to.


    Anyway, at a high level I completely get what you are saying. Following the guides, I set PUID and PGID to 1000 and things went swimmingly - until they didn't, which is now. I did as the guides said and everything was great until last week.



    So this is the state of my AppData folder, where I keep all of my docker configs. At this point in time none of my containers are operating normally. Some are working, but I can't edit them ("can't find container information" error in Portainer). Others have the same error AND they don't start. From my research I can see that for many of these, root is the owner and the group listed is users. Now what? I completely get that if I type id 1000 I should see what you showed me. But how do I get there? Again, I set them in docker-compose the way you described.


    Based on what I said in the previous paragraph, doesn't it seem like this is more than just a permissions issue in the individual folders? I can't find any pattern linking the errors I am getting to the permissions on the corresponding folders.


    Thanks to all who are helping!

  • Bump again. All I am looking for is someone to tell me what permissions a folder should have (or some other kind of pointer in the right direction). I am still skeptical that there isn't a bigger issue.

    Can you post the YML from 1 of the containers that stopped working?


    We can move from there, after.


    And PLEASE, post it in code boxes, NOT screenshots.


  • Thank you so, so much! There was one on the previous page - I'll post another one here. As noted earlier, none of them are functioning correctly in one way or another. Some start but can't be edited in Portainer. Others don't start. An idea occurred to me a minute ago - I am making a new Calibre container. It is currently running, but it only seems to run for a few minutes, which seems odd.


  • - PGID=1000

    Using PGID 1000 is bad. This will make the volumes only available to group ID 1000 (which will be the primary group of user ID 1000).

    Use PGID 100 (users), since this group includes every regular user that is created.


    /srv/dev-disk-by-label-HOMEMEDIA/AppData/Calibre/books

    What is the output of:

    ls -al /srv/dev-disk-by-label-HOMEMEDIA/AppData/Calibre/

    ls -al /srv/dev-disk-by-label-HOMEMEDIA/AppData/Calibre/books/
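
    If that output shows root as the owner, a common fix - a sketch assuming the PUID/PGID values above, so substitute your own - is to re-own the config tree and restart the container:

    Code
      sudo chown -R 1000:100 /srv/dev-disk-by-label-HOMEMEDIA/AppData/Calibre/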

  • Ah, thanks so much for that! I did not know that - I'll adjust my files accordingly.




    How do these look? Anything that I should change?

  • One other note - I really can't seem to do much of anything with Docker right now. If I try to stop a running container in the UI, I get errors about the container not being found.

    Then, start with the basics.

    Check the docker root directory.

    sudo docker info | grep -i "Docker Root Dir"
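
    On a stock install that prints something like this (the path will differ if the root was moved):

    Code
      Docker Root Dir: /var/lib/docker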


    Check containers names.

    sudo docker ps -a


    See what it's called under the NAMES column.
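
    The output looks roughly like this - the ID, image, and name below are made up for illustration, and some columns are trimmed:

    Code
      CONTAINER ID   IMAGE                 STATUS                    NAMES
      1a2b3c4d5e6f   linuxserver/calibre   Exited (1) 2 hours ago    calibre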

    sudo docker stop <NAME>


    Restart docker service

    sudo systemctl restart docker
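
    If the restart hangs or fails, the service log usually says why (this is plain systemd, nothing OMV-specific):

    Code
      sudo systemctl status docker
      sudo journalctl -u docker.service --no-pager -n 50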


    Then start one and check if it starts properly

    sudo docker start <NAME>


    Check if it shows as normal in Portainer.


    Then start another one, and so on....

  • Fantastic, thanks so much again!


    So I was going through your commands with no problems until I got to the part where I restart docker. It has been hanging there in the session for more than 10 minutes.


    However, while waiting for Docker to finish rebooting I did some poking around.








    Could this be the issue? The file system that hosts Docker is erroring out with disk warnings. I am trying to run extended tests now.
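
    For anyone following along: the command-line equivalent of those extended tests is smartmontools - the device name here is from my box, yours will differ:

    Code
      sudo smartctl -t long /dev/sdh
      sudo smartctl -H /dev/sdh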

  • If that's where your configs are (or the docker root) then, yes: corrupted files there will give you issues.


  • I was mistaken - the config files are in HOMEMEDIA on sdg. sdh is the bad file system.


    So next question - how do I find out where docker actually lives? In the UI it mentions /var/lib/docker. I tried to go to that directory but it said it did not exist.


    Similarly, if I even try to bring up the Docker tab in the OMV UI, it errors out. I tried to install Docker again - same error.

  • So next question - how do I find out where docker actually lives?

    docker info | grep -i "Docker Root Dir"


  • Argh, I am sorry - you had me do that earlier. So yes, that is the bad drive, and it is the only thing on that file system.


    So what is the easiest way to recover from this? Ideally, I would like to have Docker on another file system that has plenty of room. I don't necessarily need to replace that disk.


  • You can try to copy/rsync the docker root to a different folder/path and see if that helps.

    Only issue is, if files are corrupted, they will be copied over too, so the errors will probably continue.


    If this happens, you'll need to nuke the containers with errors and deploy them again.
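
    A minimal sketch of that copy, with docker stopped first - both paths are placeholders for your real old and new roots. The -H/-A/-X flags preserve hardlinks, ACLs, and extended attributes, which the overlay2 layers rely on:

    Code
      sudo systemctl stop docker
      sudo rsync -aHAX /srv/<OLD-DOCKER-ROOT>/ /srv/<NEW-DOCKER-ROOT>/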


  • Thanks so much! How do I go about telling OMV where Docker is? How do I tell it to install somewhere else?


  • On the OMV GUI:


    Then press "SAVE" on the bottom right corner.


    There are some threads on the forum about moving the docker root to a different place.

    See if you can find them.
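
    For reference, outside the GUI the generic mechanism for this is Docker's data-root option in /etc/docker/daemon.json - the path below is just an example - followed by a restart of the docker service:

    Code
      {
        "data-root": "/srv/dev-disk-by-label-NEWDISK/docker"
      }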

  • Thanks again! This actually reminded me of a different error that I showed in the original post.


    stat /srv/dev-disk-by-label-Docker/Docker/overlay2/1af8e33685ebdd5949f338938e70a05602a35e6995e81a2abfaef335621449fe: no such file or directory


    Is there any kind of magic that I can run on that folder? Or is that really a waste of time, seeing as SMART is telling me the disk is going bad?


    Also, one more question. In your post you reference the location of Docker but then you have @docker on the end. Is that indicating that it is the root folder?


  • Those are files/folders owned by the docker process. They're created depending on how you launch the containers. There's no magic to it.


    Thing is, you need the entire recursive folder tree under <PATH-TO-DOCKER-ROOT> carried over from the old path to the new one in order to have everything as it was once you move the docker root to another place.
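
    A quick sanity check after the copy, using the same placeholder style - it lists anything that differs or is missing between the two trees (it can take a while on a big overlay2 tree):

    Code
      sudo diff -rq /srv/<OLD-DOCKER-ROOT>/ /srv/<NEW-DOCKER-ROOT>/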


    Also, one more question. In your post you reference the location of Docker but then you have @docker on the end. Is that indicating that it is the root folder?

    No. I just created my folder with a @ (mkdir -p @docker) because I have it on a BTRFS filesystem and wanted to name it differently, so it stands out from the other filesystems.

    I could easily have named it without the @.

    No science to it.


    As for moving the docker root: post the actual docker root (the output of the command above) and the new path you want to move it to.

    That way I can give you the commands to use.


    In the meantime, I'll try to find the posts with instructions for moving the docker root.
