About Docker restart after a blackout

  • Hi,


I'd like to ask for a little advice.


On my Odroid HC2 I moved my Docker directory to my hard drive in order to avoid wearing out my SD card.


If I reboot the system everything works fine: the Docker containers start automatically with no issues.


But a couple of times after a blackout, when the NAS came back up the containers were not running, even though checking with "systemctl is-active docker" returned "active".


So I suppose that in this case the hard drive was mounted only after the Docker service had already started.


To fix it, all I have to do is restart the Docker service.


Any idea how I can avoid this and make sure the disk is mounted first?
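For what it's worth, the ordering can be checked after the next boot with something like this (a sketch; the HC2 label matches my disk, and the mount unit name comes from systemd-escape):

Code
# When did the Docker service start on this boot?
journalctl -b -u docker.service | head -n 5

# Derive the systemd mount unit name for the data disk
systemd-escape --path --suffix=mount /srv/dev-disk-by-label-HC2

# When was that disk mounted on this boot?
journalctl -b -u 'srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2.mount'

If the mount appears later in the journal than the Docker start, that would confirm the suspicion.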


    Thanks!


EDIT: I found this: RE: Is your docker also corrupt after a reboot? (OMV5, Pi4) This might fix it!. I have no /etc/systemd/system/docker.service.d directory on my system. From the source code I can see that configOverride() creates the directory and the file automatically, but I don't understand when that happens, because as I said I did not have the directory.


I manually created the "docker.service.d" directory (I hope that's fine) and ran the commands suggested by @ryecoaaron.


    The content of the file is:


    Code
    root@DK:/etc/systemd/system/docker.service.d# cat waitAllMounts.conf
    [Unit]
    After=local-fs.target srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-1b1d9b233ba8e448042780ed3c701696989181c3b21ac2d81f753e42022b766f-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-216648bcaf99a824d59e47de3c47a3b947fdb6090c0076b6237aa86845011003-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-29f5ee1a68efc93a92f3308ac7ba9a806017223099e1a56be69664ba581b52ba-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-3402a9221439b346e248cd0d7d63bf12347ee5375f2c2ea5f520f86e5af5dca2-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-34773d33441dfc6815e1b4f2b9bdb65ae5d3700d85257a388ad45b7b4112d7e5-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-3ca90bfbbeca36d1d096a2f1c55dca3b4adbb67c98273f6a271c384ba026ebc5-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-3e7b8ea8268cdf49ed6e76628b8fa843b409cf19b25e8a29f73bba96424d9ea4-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-4a612126ebd96d1fad9596bcd9075988f246355f1a536e7d82dd595db6bdc842-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-4d93f9cec46779a06d4c83cca1815057522af02ab3d9814f2ea16f6b692d7c91-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-59733ab507ae26f596055af5ec81fec152d289cc8447d7bb41074b97b9fc6715-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-6ca76864e107ba5daad0d9ff457dcb94a2d7a09c382fbb9a2382e6c0ef2820bf-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-79a738c6bc16e91faab387120e6c3f424cd683b886e1cd6e66df3812259effae-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-8440af768ac300e49a1df9f1b207da90009f99aa4e5b1232d7677657e271860b-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-869ee2d246d24ae6d529e9e0c47b42794476d2e1172b234dc0414828b91c6d4a-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-86a2a567e7557bf2709dc1fbddc94671745d18ba8db5ca58d1ddac153f3fc0ac-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-98b9104faa94e6c4118a469404f7e458107f5b5501c72aa9a4439905711db050-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-9d185dffe6f5a7eae9905b745e8deb8a7ca273c6beed7a3fc7d1a75bd5b71a65-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-a9487380934327a58f055c8444eb8b4e4800bffd3c52f4d677b659fd9e135290-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-ba377e5ec8a42d69bf20984ffaf7a1638f70553980c0f14546bb0bbe8dad4b32-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-c207e16bac03b74001821f071f5f2e694816e928408d30404e869b1c16908865-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-c2a0193dc8107ff67f58c2b993ccdd796d7fb692ace4fb01a273bbc2fa141e05-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-c2c7fc809117fdf3a9d3d222799201e9f7aaf4d78da95fe2acc2a15daad23891-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-f20e8b908b92ce746ba4c51e2d8d3bd022ab38d8c45c6ec2e00b77812886d9c7-merged.mount srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2-DockerBasePath-docker-overlay2-fd556ec21966d4aba017b714a1b3bceb52a73bfe75905ec00eee6af699becd65-merged.mount 
srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2.mount

Is it correct? I don't understand why every path here points to where Docker is stored on the hard drive, apart from the last entry.

• Official Post

The override file is created when Docker is installed by omv-extras. Running the commands after Docker was already installed, with containers running, added too many mounts. But either way, it doesn't really matter, because those entries only determine when Docker starts. Are you using restart: always for the containers?
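A quick way to see those per-container mounts (one overlay "merged" mount per running container) is something like this (a sketch):

Code
# one overlay "merged" filesystem per running container
findmnt -t overlay

# the matching transient systemd mount units
systemctl list-units -t mount | grep -- '-merged.mount'

Those transient units are presumably what ended up in the generated file.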

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

• The override file is created when Docker is installed by omv-extras. Running the commands after Docker was already installed, with containers running, added too many mounts. But either way, it doesn't really matter, because those entries only determine when Docker starts. Are you using restart: always for the containers?

    Hi, thank you for your reply.


    I use restart: unless-stopped.
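For reference, the relevant part of each compose file looks like this (a generic sketch; the service name and image are just examples):

Code
services:
  app:                      # example service name
    image: nginx:alpine     # example image
    restart: unless-stopped # restart unless manually stopped

As far as I understand, unless-stopped brings containers back whenever the daemon starts, as long as they weren't stopped manually, which matches the fact that a manual restart of the Docker service brings everything up.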


From your response it's not clear to me whether the generated .conf file is OK as it is or whether I should delete it.

• Official Post

If you really believe your drive is not mounting until after Docker starts (I've personally never had that issue with an HC2 internal drive), after the next "blackout", just execute the command


    Code
     systemctl restart docker


then after a minute or so, if you're right, your containers should be up like normal. If that's the case, then you have correctly diagnosed the problem.


    To fix it...


    If it were me, I'd delete that conf file, and give this thread a read...


    Jellyfin portainer stack completely reset after 10-hour server shutdown: why?

• If you really believe your drive is not mounting until after Docker starts (I've personally never had that issue with an HC2 internal drive), after the next "blackout", just execute the command


    Code
     systemctl restart docker


then after a minute or so, if you're right, your containers should be up like normal. If that's the case, then you have correctly diagnosed the problem.

Those are exactly the steps I followed, as I said in the first post.

    To fix it...


    If it were me, I'd delete that conf file, and give this thread a read...


    Jellyfin portainer stack completely reset after 10-hour server shutdown: why?

The solution provided in the link you posted is the same as the first one proposed in the link in my first post. That solution is not optimal, as ryecoaaron says; instead he suggests using the waitAllMounts.conf file, the same file this thread is about.
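For context, the delay-based workaround from those threads boils down, as far as I understand it, to a drop-in along these lines (a sketch; the file name and the 30-second value are assumptions):

Code
# /etc/systemd/system/docker.service.d/delay.conf (hypothetical file name)
[Service]
# blindly wait a fixed time before starting the daemon,
# hoping the data disk is mounted by then
ExecStartPre=/bin/sleep 30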

• Official Post

Those are exactly the steps I followed, as I said in the first post.

The solution provided in the link you posted is the same as the first one proposed in the link in my first post. That solution is not optimal, as ryecoaaron says; instead he suggests using the waitAllMounts.conf file, the same file this thread is about.

Then you did something wrong... Multiple people have done exactly what was in that thread with zero issues.

• Then you did something wrong... Multiple people have done exactly what was in that thread with zero issues.

It's probably like he said before:

"The override file is created when Docker is installed by omv-extras. Running the commands after Docker was already installed, with containers running, added too many mounts."

• Official Post

It's probably like he said before:

"The override file is created when Docker is installed by omv-extras. Running the commands after Docker was already installed, with containers running, added too many mounts."

OK... Good luck.

• Official Post

I never said using the delay won't work. I just don't like it because it could wait when it doesn't need to, or not wait long enough. Waiting for all of the mounts (which is what the code is supposed to do) should work, but I give up. Listen to someone who uses the setup that you are trying to get working (hint: not me).


• I never said using the delay won't work. I just don't like it because it could wait when it doesn't need to, or not wait long enough. Waiting for all of the mounts (which is what the code is supposed to do) should work, but I give up. Listen to someone who uses the setup that you are trying to get working (hint: not me).

I never said that; in fact I mentioned that you said it's not optimal.


The point I'm missing is: does the number of mounts in the waitAllMounts.conf file depend on the number of running containers? Or can I stick with the file I created?

• Official Post

The point I'm missing is: does the number of mounts in the waitAllMounts.conf file depend on the number of running containers? Or can I stick with the file I created?

The file really should be re-created, because some of those entries are just wrong. On your system, waitAllMounts.conf should just be:

Code
[Unit]
After=local-fs.target srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2.mount

then run systemctl daemon-reload.
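If it helps, applied as root the whole change is something like this (a sketch; the quoted heredoc keeps the backslashes literal):

Code
cat > /etc/systemd/system/docker.service.d/waitAllMounts.conf <<'EOF'
[Unit]
After=local-fs.target srv-dev\x2ddisk\x2dby\x2dlabel\x2dHC2.mount
EOF
systemctl daemon-reload
systemctl restart docker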


• Well, at this point I could edit the file as you suggested and reload the daemon. Could that work in theory?

• Official Post

Could that work in theory?

    That is what I was suggesting. The omv-extras docker install doesn't do anything magical.

