Posts by Copaxy

    No. A parity drive from snapraid is just a normal filesystem with a parity file on it.

    Okay, that is good, so that's one less possibility.

    I think then I can only keep an eye on that behaviour. If it occurs again after a boot, what logs or data should I check or save to analyse the reason behind it? I guess the OMV syslog; anything else or something special?

    you could have a leftover entry in the mntent section of the database or less likely, a leftover mount file in /etc/systemd/system/. If you are only uninstalling the plugin for fresh installs, neither of these should be a problem.

    Mhm... I looked at both locations but there was nothing I could find. No leftover entry, and I also only found my PoolB entry in the systemd location you mentioned.

    Is there a possibility that an old parity drive could confuse OMV?

    Because when I removed the snapraid configuration, I left the old parity drive unmounted but still connected to the system. I didn't have the time to clean it.

    you could have a leftover entry in the mntent section of the database or less likely, a leftover mount file in /etc/systemd/system/. If you are only uninstalling the plugin for fresh installs, neither of these should be a problem.

    Maybe, yeah. I will try that tomorrow and check if there is any leftover data, because there has to be a reason why PoolA still sometimes appears.

    sudo omv-showkey mergerfs But if there were "leftover" settings, then the pool would show up in the plugin. The plugin shows everything in the database, not just the things that are mounted.
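For reference, the database mentioned here is OMV's config XML (normally /etc/openmediavault/config.xml), and its mntent section can be grepped directly. A minimal sketch, using a temp stand-in file with a simplified structure rather than the real schema:

```shell
# Sketch: list the mount directories recorded in the mntent section.
# A simplified stand-in file is used so the commands run anywhere;
# on the NAS you would grep /etc/openmediavault/config.xml instead.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<config>
  <system>
    <fstab>
      <mntent>
        <dir>/srv/mergerfs/PoolB</dir>
      </mntent>
    </fstab>
  </system>
</config>
EOF
# Extract every <dir> value; a PoolA entry here would be the leftover.
DIRS=$(grep -o '<dir>[^<]*</dir>' "$CONF" | sed 's/<[^>]*>//g')
echo "$DIRS"
rm -f "$CONF"
```

If a stale PoolA path showed up in the real file, that would be the leftover entry the quote is talking about.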

    I would highly recommend you stop uninstalling the plugin to "fix" things. I never test this and don't want to.

    Ah okay. Interestingly, as you said, only PoolB is configured.

    I want to mention that I only ever reinstalled mergerfs after a fresh OMV installation. The only exception was one accidental time, while reconfiguring my NAS from the snapraid + mergerfs combination to just mergerfs. That's when I created PoolB, because I had to create a new pool.

    sudo omv-showkey mergerfs

    only showed me the new PoolB. Is there any other way to check why mergerfs still sometimes creates the old pool?

    Does someone know where the mergerfs plugin in OMV stores its configuration data?

    The reason is that I want to check a behaviour. Previously, when my server was messed up with the snapraid and mergerfs plugins, as I mentioned in this and previous topics, I created a pool named, for example, PoolA. Then, when I removed the snapraid configuration and just used mergerfs until I built a new NAS, I named the new pool PoolB. In between, I also uninstalled and reinstalled the mergerfs plugin. Sometimes I notice after restarting OMV that the pool is not mounted, and of course I then mount it manually. But sometimes it won't mount and throws errors. When I look into "/srv/mergerfs/" there should only be PoolB, but PoolA from the previous config is still there. Even though I deleted it and it is fine for a while, mergerfs sometimes still shows the old PoolA folder and I have to delete it again. Maybe this is also contributing to some of the missing-drive issues.

    Any ideas on how I can check if mergerfs still has some old settings stored somewhere, and how I can then remove them?
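The check described above (PoolA reappearing under /srv/mergerfs without being mounted) can be sketched as a small script; a temp directory stands in for /srv/mergerfs so the sketch runs anywhere, and the PoolA/PoolB names are taken from the posts:

```shell
# Sketch: list pool folders that exist on disk but are not mounted.
# A temp dir stands in for /srv/mergerfs for illustration.
MERGED_ROOT=$(mktemp -d)
mkdir "$MERGED_ROOT/PoolA" "$MERGED_ROOT/PoolB"   # PoolA plays the stale leftover

LEFTOVERS=""
for d in "$MERGED_ROOT"/*; do
    # A real mount shows up in /proc/mounts; a leftover folder does not.
    if ! grep -q " $d " /proc/mounts; then
        LEFTOVERS="$LEFTOVERS $(basename "$d")"
    fi
done
echo "Not mounted:$LEFTOVERS"
rm -rf "$MERGED_ROOT"
```

On the real system you would point MERGED_ROOT at /srv/mergerfs; any folder the loop reports while the pool is supposedly mounted is a leftover that can be removed.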

    Interesting. So if I understand correctly, if I use the OMV compose plugin, it saves the data in the docker root folder? Or am I able to save the config data somewhere else, like when I use compose in Portainer and set the config and container files to be in whatever directory I choose?

    For example, I create a docker compose file for Jellyfin and put the path for the config at my preferred place, but the rest of the container files are in the docker root folder.

    What is the difference when I use the compose plugin?
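As far as I understand the compose behaviour being asked about: a bind mount in the compose file puts the config wherever you choose, while image layers and named volumes stay under the Docker root. A sketch with an illustrative Jellyfin service (the host paths are made up, not from the posts):

```shell
# Sketch: write a compose file whose config path is a bind mount to a
# chosen location. Service name and host paths are illustrative only.
COMPOSE=$(mktemp)
cat > "$COMPOSE" <<'EOF'
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      - /srv/appdata/jellyfin/config:/config
      - /srv/media:/media:ro
EOF
# The part before the colon is the host path you picked:
CFGPATH=$(grep -o '/srv/appdata[^:]*' "$COMPOSE")
echo "config lives at: $CFGPATH"
rm -f "$COMPOSE"
```

A named volume (just a bare name before the colon) would instead land under /var/lib/docker/volumes, i.e. the docker root.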

    I can confirm it works now. After the successful installation, it took me to the web UI as if it was a totally fresh install. I tested uploading my backup and it worked.

    My docker containers work now, finally...thank god

    I previously did it exactly as you said. I just updated Portainer via the "install" button in extras. I always updated it that way, but it seems something went wrong with the last update.

    Luckily you were there to fix it 🙈

    I could already see myself rebuilding my server from scratch...

    Okay, lessons learned. If some update in the future kills Portainer, then I uninstall it via extras, remove the data via extras, and create the portainer_data folder manually with root permissions.

    Thanks again🙏🏻

    Is the docker folder living on a merged pool??? Or is it only on the root of the OS?

    It is 100% on the boot drive, aka the SD card, as it always was.

    mkdir /var/lib/docker/volumes/portainer_data # This will make the folder owned by root

    OK, I created the folder this way now.

    ls -al /var/lib/docker/volumes To list the folders again, to check if it needs changes to owner:users

    It is definitely root now.

    It failed again, but now I wanted to make sure extras can remove data, so I removed the data via extras again and confirmed that OMV can delete the folder.

    Now I reinstalled it again and it worked; I can get to the web UI now. But I am still checking for errors.

    ls -al /var/lib/docker/volumes

    Maybe this helps. My syslog shows something is down.

    Sorry, but this is really weird. :rolleyes:

    Since you run as root, no need for sudo.

    docker ps -a | grep -i portainer

    I know... that is why I am here. Right after updating Portainer via extras to the latest version, it started to become unstable after one day, and now it totally refuses to work.

    I have no clue anymore what the error might be. Before this, it was totally fine, no issues at all.

    root@meinnas:~# docker ps -a | grep -i portainer
    d681f9ca4876   portainer/portainer-ce                          "/portainer"             45 seconds ago   Created           0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 9443/tcp   portainer

    After the installation it fails, and the container is stuck in "Created".

    root@meinnas:~# sudo docker stop portainer
    root@meinnas:~# sudo docker rm portainer
    root@meinnas:~# sudo docker ps -a | grep -i portainer

    It seems gone now

    Reinstall still fails

    The output of

    sudo docker ps -a

    It completely fails to install

    I tried to reinstall Portainer via the GUI in extras.

    Yes, but from what I'm seeing from Copaxy, I'm thinking he had Portainer running NOT via OMV-Extras, but maybe via docker-compose.

    And all the instructions I'm giving are based on a Portainer installation via Extras.

    I only used the extras tool; I didn't manually create a Portainer container.

    Please, outputs of:

    sudo ls -al /var/lib/docker/volumes/

    sudo ls -al /var/lib/docker/volumes/portainer_data

    And what command did you use to make the copy of the folder?

    root@meinnas:~# sudo ls -al /var/lib/docker/volumes/portainer_data
    ls: cannot access '/var/lib/docker/volumes/portainer_data': No such file or directory

    The command I used to make a copy was:

    sudo cp -a /var/lib/docker/volumes/portainer_data /var/lib/config/portainer_data-backup

    Note: the config folder is just the folder where I keep other containers' data, so the name "config" is maybe a bit misleading.
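For reference, cp -a is a good choice for this kind of backup because it preserves ownership, permissions and timestamps, which the volume data needs when it is copied back. A small sketch in temp directories (the file name portainer.db is made up for illustration):

```shell
# Sketch: cp -a copies a volume folder with permissions intact.
# Temp dirs stand in for /var/lib/docker/volumes and the backup target.
SRC=$(mktemp -d)/portainer_data
mkdir -p "$SRC/_data"
echo "settings" > "$SRC/_data/portainer.db"    # made-up file for the demo
chmod 600 "$SRC/_data/portainer.db"

DEST=$(mktemp -d)/portainer_data-backup
cp -a "$SRC" "$DEST"

# The restrictive mode survives the copy:
MODE=$(stat -c %a "$DEST/_data/portainer.db")
echo "mode after copy: $MODE"
```

A plain `cp -r` would reset ownership to the copying user, which is exactly what you don't want for a root-owned Docker volume.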

    docker ps -a

    root@meinnas:~# docker ps -a
    CONTAINER ID   IMAGE                                           COMMAND                  CREATED          STATUS                      PORTS                                                                                            NAMES
    7176964d2ab1   portainer/portainer-ce                          "/portainer"             17 minutes ago   Created           0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 9443/tcp   portainer
    2df359cfd18e               "docker-entrypoint.s…"   6 weeks ago      Exited (0) 6 weeks ago                                                                                                       homepage
    ffa32266cfb9   e251b53f1bb7                                    "/bin/sh -c 'apt-get…"   6 weeks ago      Exited (100) 6 weeks ago                                                                                                     keen_bell
    33c23a74804d   gitea/gitea:latest                              "/usr/bin/entrypoint…"   6 weeks ago      Up 5 minutes      0.0.0.0:2234->22/tcp, :::2234->22/tcp, 0.0.0.0:3010->3000/tcp, :::3010->3000/tcp                 gitea
    e90dfc309588   yobasystems/alpine-mariadb:latest               "/scripts/"        6 weeks ago      Up 5 minutes                3306/tcp
    root@meinnas:~# docker start portainer
    Error response from daemon: error evaluating symlinks from mount source "/var/lib/docker/volumes/portainer_data/_data": lstat /var/lib/docker/volumes/portainer_data/_data: no such file or directory
    Error: failed to start containers: portainer
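That lstat error points at Docker's on-disk volume layout: each named volume is a folder under /var/lib/docker/volumes whose actual contents live in a _data subfolder, and if _data is gone the mount fails exactly like this. A sketch of the expected layout, using a temp stand-in for the docker root:

```shell
# Sketch: the on-disk shape of a Docker named volume. If the _data
# subfolder is missing, any container that mounts the volume fails to
# start with the lstat error shown above.
VOLROOT=$(mktemp -d)                       # stand-in for /var/lib/docker/volumes
mkdir -p "$VOLROOT/portainer_data/_data"   # the layout Docker expects

ENTRIES=$(ls "$VOLROOT/portainer_data")
echo "portainer_data contains: $ENTRIES"
rm -rf "$VOLROOT"
```

This matches the recovery that worked earlier in the thread: removing the broken container and letting a fresh install recreate the volume folder.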