What is it? An extra pool? The pool doesn't mount?
I mean the aforementioned behaviour: the old PoolA still sometimes shows up after boot, and then PoolB doesn't mount, of course.
No. A parity drive from snapraid is just a normal filesystem with a parity file on it.
Okay, that is good, so that's one possibility less.
I think all I can do then is keep an eye on that behaviour. If it occurs again after a boot, what logs or data should I check or save to analyse the cause? I guess the OMV syslog; anything else or special?
you could have a leftover entry in the mntent section of the database or less likely, a leftover mount file in /etc/systemd/system/. If you are only uninstalling the plugin for fresh installs, neither of these should be a problem.
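For anyone wanting to check both places from the command line, something like this should work (a sketch; it assumes the default OMV database location `/etc/openmediavault/config.xml` and greps for "pool" as used in the pool names in this thread):

```shell
#!/bin/sh
# Sketch: look for stale pool entries in the two places mentioned above.
# /etc/openmediavault/config.xml is the default OMV database location.
CONF=/etc/openmediavault/config.xml

if [ -f "$CONF" ]; then
    # mntent / mergerfs entries that still reference an old pool
    grep -i 'pool' "$CONF" || echo "no 'pool' entries in the database"
else
    echo "config not found at $CONF (not an OMV box?)"
fi

# leftover systemd mount units
ls /etc/systemd/system/*.mount 2>/dev/null || echo "no leftover mount files"
```

If either check turns something up, that entry or unit file is the likely reason an old pool keeps reappearing after boot.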
Mhm... I looked at both locations, but there was nothing I could find. No leftover entry, and I also only found my PoolB entry in the systemd location you mentioned.
Is there any possibility that an old parity drive could confuse OMV?
Because when I removed the snapraid configuration, I left the old parity drive unmounted but still connected to the system. I didn't have the time to clean it.
Maybe, yeah. I will try that tomorrow and check if there is any leftover data, because there has to be a reason why PoolA still sometimes appears.
sudo omv-showkey mergerfs

But if there were "leftover" settings, the pool would show up in the plugin. The plugin shows everything in the database, not just things that are mounted.
I would highly recommend to stop uninstalling the plugin to "fix" things. I don't ever test this and don't want to.
Ah okay. Interestingly, as you said, only PoolB is configured.
I want to mention that I only ever reinstalled mergerfs after a fresh new OMV installation. The only exception was one time, by accident, when reconfiguring my NAS from the snapraid + mergerfs combo to just mergerfs. That's when I created PoolB, because I had to create a new pool.
It only showed me the new PoolB. Is there any other way to check why mergerfs still sometimes creates the old pool?
Does someone know where the mergerfs plugin in OMV stores its configuration data?
The reason being that I want to check a behaviour. Previously, when my server was messed up with the snapraid and mergerfs plugins (as mentioned in this and previous topics), I created a pool named, for example, PoolA. Then, when I removed the snapraid configuration and just used mergerfs until I could build a new NAS, I named the new pool PoolB. In between, I also uninstalled and reinstalled the mergerfs plugin. Sometimes I notice after restarting OMV that the pool is not mounted, and of course I then mount it manually. But sometimes it won't mount and throws errors. When I look into "/srv/mergerfs/" there should only be PoolB, but PoolA from the previous config is still there. Even though I deleted it, and it is fine for a while, sometimes mergerfs still shows the old PoolA folder and I have to delete it again. Maybe this is also contributing to some drive-missing issues.
Any ideas on how I can check whether mergerfs still has some old settings stored somewhere, and how I can then remove them?
The docker-compose plugin... it basically lets you run docker-compose from the OMV web UI, but in addition to that, it saves your compose files in a specified folder.
Ah, so it is like docker-compose in Portainer, but with an additional save in another folder.
Interesting, I have to try that.
Interesting. So if I understand it correctly, if I use the OMV compose plugin, it saves the data in the docker root folder? Or am I able to save the config data somewhere else, like when I use compose in Portainer and set the config and container files to be in whatever directory I choose?
For example, I create a docker compose for Jellyfin and put the path for the config at my preferred place, but the rest of the container files stay in the docker root folder.
What is the difference when I use the compose plugin?
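To make the question concrete, a compose file along these lines (the paths and image are just an example, not taken from the thread) pins the Jellyfin config to a directory of your choosing via a bind mount, while image layers and any anonymous volumes stay under the docker root:

```yaml
# Hypothetical example only; adjust paths and image to your setup.
version: "3"
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"
    volumes:
      # bind mount: config lives exactly where you point it
      - /srv/appdata/jellyfin:/config
      # media folder, mounted read-only
      - /srv/media:/media:ro
    restart: unless-stopped
```

The key point is that the volumes section of the compose file controls where the container data lives, regardless of which tool feeds the file to docker; the compose plugin mainly changes where the compose files themselves are stored.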
Great.
Can you now access Portainer as if it were new? You'll need to create a user and password again.
If you want to test it, just click "Install" on the OMV GUI. It should come up clean.
When there's an update to Portainer, all it takes is to click "Install" and it will update over the previous version.
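For reference, what the button does appears to be roughly equivalent to the following docker commands (a sketch reconstructed from the install logs later in this thread, not the actual plugin script; ports as shown in those logs):

```shell
#!/bin/sh
# Rough sketch of what the omv-extras "Install" button does for Portainer,
# reconstructed from the install log in this thread (not the actual script).
if command -v docker >/dev/null 2>&1; then
    docker stop portainer 2>/dev/null      # stop and remove the old container
    docker rm portainer 2>/dev/null
    docker pull portainer/portainer-ce:latest
    docker run -d --name portainer \
      -p 8000:8000 -p 9000:9000 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v portainer_data:/data \
      --restart always \
      portainer/portainer-ce:latest        # settings survive in the named volume
else
    echo "docker not available on this machine, nothing to do"
fi
```

Because the settings live in the `portainer_data` named volume rather than in the container, replacing the container this way updates Portainer without losing its configuration.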
I can confirm it works now. After a successful installation, it took me to the web UI as if it were a totally fresh install. I tested uploading my backup and it worked.
My docker containers work now, finally... thank god.
I previously did it exactly as you said: I just updated Portainer via the "Install" button in omv-extras. I always updated it that way, but it seems something went wrong at the last update.
Luckily you were there to fix it 🙈
I could already see myself rebuilding my server from scratch...
Okay, lessons learned. If some day in the future an update kills Portainer, I'll uninstall it via omv-extras, remove the data via omv-extras, and create the portainer_data folder manually with root permissions.
Thanks again🙏🏻
Is the docker folder living on a merged pool? Or is it only on the root of the OS?
It is 100% on the boot drive, aka the SD card, as it always was.
mkdir /var/lib/docker/volumes/portainer_data  # This will make the folder owned by root
OK, I created the folder this way now.
ls -al /var/lib/docker/volumes

To list the folders again and check whether it needs the owner changed to owner:users.
root@meinnas:~# ls -al /var/lib/docker/volumes
total 80
drwx-----x 12 root root 4096 Dec 21 17:32 .
drwx--x--- 13 root root 4096 Dec 21 15:36 ..
drwx-----x 3 root root 4096 Apr 19 2022 07a52b2096960f4f3e88c4908c0a9f1bac0884ea72f713fb6ece6b111bf5d1e6
drwx-----x 3 root root 4096 Sep 22 00:57 2214dac2c6eaa5cdddaf1161bfab9f97e3ccb8bc95dc5df0a5311aacb9338f62
drwx-----x 3 root root 4096 Dec 21 09:49 480a8ca7aa2fc0922b1ec81bd0bc933498eddc3d2d694c79be6441caa39fb78d
brw------- 1 root root 179, 2 Dec 21 15:37 backingFsBlockDev
drwx-----x 3 root root 4096 Nov 2 12:17 compose_opdata
drwx-----x 3 root root 4096 Nov 2 12:17 compose_pgdata
drwx-----x 3 root root 4096 Sep 22 00:53 kasm_db_1.11.0
-rw------- 1 root root 65536 Dec 21 15:37 metadata.db
drwx-----x 3 root root 4096 Nov 3 15:30 openproject-ce_pg-data
drwx-----x 3 root root 4096 Nov 3 15:57 openproject_raspi_pg-data
drwxr-xr-x 2 root root 4096 Dec 21 17:32 portainer_data
drwx-----x 3 root root 4096 Apr 19 2022 synapse-data
It is definitely root now.
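The check and fix above can be sketched in one small script (path as used in this thread; the script only changes ownership if the folder exists and isn't already root's):

```shell
#!/bin/sh
# Check who owns the portainer_data folder; chown it to root if it exists
# and is owned by someone else. Path as used in this thread.
DIR=/var/lib/docker/volumes/portainer_data
if [ -d "$DIR" ]; then
    owner=$(stat -c '%U' "$DIR")          # current owner (user name)
    echo "owner of $DIR is $owner"
    [ "$owner" = "root" ] || chown root:root "$DIR"
else
    echo "$DIR does not exist"
fi
```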
It failed again, but this time I wanted to make sure omv-extras can remove the data, so I removed the data via omv-extras again and confirmed that OMV can delete the folder.
Now I reinstalled it again and it worked; I can get to the web UI now. But I am still checking for errors.
Docker storage :: /var/lib/docker
Agent port:: 8000
Web port:: 9000
Yacht port:: 8001
ee:: 0
image:: portainer/portainer-ce
Enable TLS:: 0
arch :: arm64
option :: portainer
state :: install
extras :: 6.1.1
DNS OK.
No portainer containers or images to remove.
Creating portainer volume ...
portainer_data
Pulling portainer/portainer-ce ...
Using default tag: latest
latest: Pulling from portainer/portainer-ce
772227786281: Pulling fs layer
96fd13befc87: Pulling fs layer
9feedfe952b7: Pulling fs layer
2b0b05213b47: Pulling fs layer
2b0b05213b47: Waiting
96fd13befc87: Verifying Checksum
96fd13befc87: Download complete
772227786281: Verifying Checksum
772227786281: Download complete
772227786281: Pull complete
2b0b05213b47: Verifying Checksum
2b0b05213b47: Download complete
96fd13befc87: Pull complete
9feedfe952b7: Verifying Checksum
9feedfe952b7: Download complete
9feedfe952b7: Pull complete
2b0b05213b47: Pull complete
Digest: sha256:f7607310051ee21f58f99d7b7f7878a6a49d4850422d88a31f8c61c248bbc3a4
Status: Downloaded newer image for portainer/portainer-ce:latest
docker.io/portainer/portainer-ce:latest
Starting portainer/portainer-ce ...
198eb9d30c2c5e0a7612366017fe22dccbdd228cd671ec3775a623b761058567
END OF LINE
docker inspect portainer
ls -al /var/lib/docker/volumes
root@meinnas:~# ls -al /var/lib/docker/volumes
total 76
drwx-----x 11 root root 4096 Dec 21 15:48 .
drwx--x--- 13 root root 4096 Dec 21 15:36 ..
drwx-----x 3 root root 4096 Apr 19 2022 07a52b2096960f4f3e88c4908c0a9f1bac0884ea72f713fb6ece6b111bf5d1e6
drwx-----x 3 root root 4096 Sep 22 00:57 2214dac2c6eaa5cdddaf1161bfab9f97e3ccb8bc95dc5df0a5311aacb9338f62
drwx-----x 3 root root 4096 Dec 21 09:49 480a8ca7aa2fc0922b1ec81bd0bc933498eddc3d2d694c79be6441caa39fb78d
brw------- 1 root root 179, 2 Dec 21 15:37 backingFsBlockDev
drwx-----x 3 root root 4096 Nov 2 12:17 compose_opdata
drwx-----x 3 root root 4096 Nov 2 12:17 compose_pgdata
drwx-----x 3 root root 4096 Sep 22 00:53 kasm_db_1.11.0
-rw------- 1 root root 65536 Dec 21 15:37 metadata.db
drwx-----x 3 root root 4096 Nov 3 15:30 openproject-ce_pg-data
drwx-----x 3 root root 4096 Nov 3 15:57 openproject_raspi_pg-data
drwx-----x 3 root root 4096 Apr 19 2022 synapse-data
Maybe this helps. My syslog shows something is down:
Dec 21 16:18:48 meinnas collectd[647]: rrdcached plugin: Successfully reconnected to RRDCacheD at unix:/run/rrdcached.sock
Dec 21 16:18:48 meinnas collectd[647]: rrdcached plugin: rrdc_update (/var/lib/rrdcached/db/localhost/df-srv-dev-disk-by-uuid-5f244b38-4472-4302-a16e-f2ad9a9e1516/df_complex-reserved.rrd, [1671635915.154832:16777216.000000], 1) failed: rrdcached@unix:/run/rrdcached.sock: illegal attempt to update using time 1671635915.154832 when last update time is 1671635925.154201 (minimum one second step) (status=-1)
Dec 21 16:18:48 meinnas collectd[647]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
Dec 21 16:18:48 meinnas collectd[647]: Filter subsystem: Built-in target `write': Some write plugin is back to normal operation. `write' succeeded.
Dec 21 16:19:19 meinnas systemd[1]: var-lib-docker-overlay2-d9cc13fff0ed4219a3dd5abc70f9fca278370af0330b5b60619c417b1aaaf1d2\x2dinit-merged.mount: Succeeded.
Dec 21 16:19:19 meinnas systemd[823]: var-lib-docker-overlay2-d9cc13fff0ed4219a3dd5abc70f9fca278370af0330b5b60619c417b1aaaf1d2\x2dinit-merged.mount: Succeeded.
Dec 21 16:19:19 meinnas systemd[4512]: var-lib-docker-overlay2-d9cc13fff0ed4219a3dd5abc70f9fca278370af0330b5b60619c417b1aaaf1d2\x2dinit-merged.mount: Succeeded.
Dec 21 16:19:26 meinnas systemd[4512]: var-lib-docker-overlay2-d9cc13fff0ed4219a3dd5abc70f9fca278370af0330b5b60619c417b1aaaf1d2-merged.mount: Succeeded.
Dec 21 16:19:26 meinnas systemd[823]: var-lib-docker-overlay2-d9cc13fff0ed4219a3dd5abc70f9fca278370af0330b5b60619c417b1aaaf1d2-merged.mount: Succeeded.
Dec 21 16:19:26 meinnas systemd[1]: var-lib-docker-overlay2-d9cc13fff0ed4219a3dd5abc70f9fca278370af0330b5b60619c417b1aaaf1d2-merged.mount: Succeeded.
Dec 21 16:19:26 meinnas kernel: [ 2651.266591] docker0: port 1(veth815fe57) entered blocking state
Dec 21 16:19:26 meinnas kernel: [ 2651.266615] docker0: port 1(veth815fe57) entered disabled state
Dec 21 16:19:26 meinnas kernel: [ 2651.266850] device veth815fe57 entered promiscuous mode
Dec 21 16:19:26 meinnas systemd-udevd[19347]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 21 16:19:26 meinnas systemd-networkd[199]: veth815fe57: Link UP
Dec 21 16:19:26 meinnas systemd-udevd[19347]: Using default interface naming scheme 'v247'.
Dec 21 16:19:26 meinnas systemd-udevd[19345]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
Dec 21 16:19:26 meinnas systemd-udevd[19345]: Using default interface naming scheme 'v247'.
Dec 21 16:19:27 meinnas systemd-networkd[199]: veth815fe57: Link DOWN
Dec 21 16:19:27 meinnas systemd-networkd[199]: rtnl: received neighbor for link '47' we don't know about, ignoring.
Dec 21 16:19:27 meinnas kernel: [ 2651.641592] docker0: port 1(veth815fe57) entered disabled state
Dec 21 16:19:27 meinnas kernel: [ 2651.646492] device veth815fe57 left promiscuous mode
Dec 21 16:19:27 meinnas kernel: [ 2651.646513] docker0: port 1(veth815fe57) entered disabled state
Dec 21 16:19:27 meinnas systemd[823]: var-lib-docker-overlay2-d9cc13fff0ed4219a3dd5abc70f9fca278370af0330b5b60619c417b1aaaf1d2-merged.mount: Succeeded.
Dec 21 16:19:27 meinnas systemd[4512]: var-lib-docker-overlay2-d9cc13fff0ed4219a3dd5abc70f9fca278370af0330b5b60619c417b1aaaf1d2-merged.mount: Succeeded.
Dec 21 16:19:27 meinnas systemd[1]: var-lib-docker-overlay2-d9cc13fff0ed4219a3dd5abc70f9fca278370af0330b5b60619c417b1aaaf1d2-merged.mount: Succeeded.
Dec 21 16:19:27 meinnas dockerd[707]: time="2022-12-21T16:19:27.746229290+01:00" level=error msg="0fb3875c08867626bcfe5a07e926499a07d09f6631a906f371454e2a7a0de337 cleanup: failed to delete container from containerd: no such container"
Dec 21 16:19:27 meinnas dockerd[707]: time="2022-12-21T16:19:27.746341863+01:00" level=error msg="Handler for POST /v1.41/containers/0fb3875c08867626bcfe5a07e926499a07d09f6631a906f371454e2a7a0de337/start returned error: error evaluating symlinks from mount source \"/var/lib/docker/volumes/portainer_data/_data\": lstat /var/lib/docker/volumes/portainer_data: no such file or directory"
Dec 21 16:19:56 meinnas systemd[1]: run-docker-runtime\x2drunc-moby-221efa221983b030282f00571611a6953a5e054887a7b389bdd2dd64c5c88639-runc.EUDCOd.mount: Succeeded.
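For what it's worth, the telling line in that log is the `lstat /var/lib/docker/volumes/portainer_data: no such file or directory` error: docker's volume metadata still references a folder that is gone. An alternative to creating the folder by hand is to let docker recreate the volume itself, which keeps the folder and docker's metadata.db in sync (a sketch of that alternative, not what omv-extras does):

```shell
#!/bin/sh
# Sketch: recreate the missing named volume through docker itself so the
# on-disk folder and docker's volume metadata stay consistent.
if command -v docker >/dev/null 2>&1; then
    docker volume rm portainer_data 2>/dev/null  # drop the stale record, if any
    docker volume create portainer_data          # recreate folder + metadata together
    docker volume inspect portainer_data         # confirm the _data path exists again
else
    echo "docker not available on this machine"
fi
```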
Sorry, but this is really weird.
Since you run as root, no need for sudo.

docker ps -a | grep -i portainer
I know... that is why I am here. Just randomly, after updating Portainer via omv-extras to the latest version, it started to become unstable after one day, and now it totally refuses to work.
I have no clue anymore what the error might be. Before, it was totally fine, no issues.
root@meinnas:~# docker ps -a | grep -i portainer
d681f9ca4876 portainer/portainer-ce "/portainer" 45 seconds ago Created 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 9443/tcp portainer
root@meinnas:~#
After the installation it fails, and the container is just left in the "Created" state.
root@meinnas:~# sudo docker stop portainer
portainer
root@meinnas:~# sudo docker rm portainer
portainer
root@meinnas:~# sudo docker ps -a | grep -i portainer
root@meinnas:~#
It seems gone now
The reinstall still fails.
Docker storage :: /var/lib/docker
Agent port:: 8000
Web port:: 9000
Yacht port:: 8001
ee:: 0
image:: portainer/portainer-ce
Enable TLS:: 0
arch :: arm64
option :: portainer
state :: install
extras :: 6.1.1
DNS OK.
Removing portainer/portainer-ce image ...
Untagged: portainer/portainer-ce:latest
Untagged: portainer/portainer-ce@sha256:f7607310051ee21f58f99d7b7f7878a6a49d4850422d88a31f8c61c248bbc3a4
Deleted: sha256:9281e1907542d9e135476db62e7dd129a95972dc5cd297f5d01acff58c4f751f
Deleted: sha256:067f72a72d633747ba5a6039a1b4ec3d36555fa22a07f6e5c3be2940d4d040cc
Deleted: sha256:f6fe101531bcf0e63b651a4e3ce2676c1a7f1880288bb288ede04fc1deb1a8a1
Deleted: sha256:e0a46f5d05e1b93a7993c45aaea39729d111d7a096e02ac1656c721e39cb5222
Deleted: sha256:8c004456aeb58b75f792fa091b194c20d6ed4f0d95dd25b0150d71c5c9ab601b
Pulling portainer/portainer-ce ...
Using default tag: latest
latest: Pulling from portainer/portainer-ce
772227786281: Pulling fs layer
96fd13befc87: Pulling fs layer
9feedfe952b7: Pulling fs layer
2b0b05213b47: Pulling fs layer
96fd13befc87: Download complete
772227786281: Verifying Checksum
772227786281: Download complete
772227786281: Pull complete
2b0b05213b47: Verifying Checksum
2b0b05213b47: Download complete
96fd13befc87: Pull complete
9feedfe952b7: Download complete
9feedfe952b7: Pull complete
2b0b05213b47: Pull complete
Digest: sha256:f7607310051ee21f58f99d7b7f7878a6a49d4850422d88a31f8c61c248bbc3a4
Status: Downloaded newer image for portainer/portainer-ce:latest
docker.io/portainer/portainer-ce:latest
Starting portainer/portainer-ce ...
0fb3875c08867626bcfe5a07e926499a07d09f6631a906f371454e2a7a0de337
Something went wrong trying to pull and start portainer ...
END OF LINE
The output of
sudo docker ps -a
root@meinnas:~# sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d0a5db262e1c portainer/portainer-ce "/portainer" About a minute ago Created 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 9443/tcp portainer
2df359cfd18e ghcr.io/benphelps/homepage:latest "docker-entrypoint.s…" 6 weeks ago Exited (0) 6 weeks ago homepage
ffa32266cfb9 e251b53f1bb7 "/bin/sh -c 'apt-get…" 6 weeks ago Exited (100) 6 weeks ago keen_bell
33c23a74804d gitea/gitea:latest "/usr/bin/entrypoint…" 6 weeks ago Up 25 minutes 0.0.0.0:2234->22/tcp, :::2234->22/tcp, 0.0.0.0:3010->3000/tcp, :::3010->3000/tcp gitea
e90dfc309588 yobasystems/alpine-mariadb:latest "/scripts/run.sh" 6 weeks ago Up 25 minutes 3306/tcp gitea_db_1
16abdac6a342 ownyourbits/nextcloudpi:latest "/run-parts.sh 192.1…" 7 weeks ago Exited (137) 2 days ago nextcloudpi
6d17a8ceef7b kasmweb/share:1.11.0 "/bin/sh -c '/usr/bi…" 2 months ago Exited (137) 2 months ago kasm_share
f1292d52eb43 redis:5-alpine "docker-entrypoint.s…" 2 months ago Exited (0) 2 months ago kasm_redis
8168643fc57c kasmweb/nginx:latest "/docker-entrypoint.…" 2 months ago Exited (1) 2 months ago kasm_proxy
581dffefe050 kasmweb/manager:1.11.0 "/bin/sh -c '/usr/bi…" 2 months ago Exited (137) 2 months ago kasm_manager
aaf6d6cf548e kasmweb/api:1.11.0 "/bin/sh -c '/usr/bi…" 2 months ago Exited (1) 2 months ago kasm_api
7845e0ff3626 kasmweb/agent:1.11.0 "/bin/sh -c '/usr/bi…" 2 months ago Exited (137) 2 months ago kasm_agent
8bd07048ea5a 440bec326847 "/dockerstartup/kasm…" 3 months ago Created interesting_ardinghelli
221efa221983 f806c3b223bd "docker-entrypoint.s…" 3 months ago Up 25 minutes (healthy) 5432/tcp kasm_db
749e5ad056a1 linuxserver/syncthing "/init" 3 months ago Exited (137) 2 months ago syncthing1
7e73b6067539 linuxserver/syncthing "/init" 3 months ago Exited (0) 3 months ago syncthing2
ca9b087066d2 xavierh/goaccess-for-nginxproxymanager:latest "sh /goan/start.sh" 7 months ago Exited (137) 3 days ago goaccess
c825c908a753 yobasystems/alpine-mariadb:latest "/scripts/run.sh" 8 months ago Exited (0) 3 months ago wordpress1_db_1
af7deb7a9acd wordpress:latest "docker-entrypoint.s…" 8 months ago Exited (0) 3 months ago wordpress1_wordpress_1
aa99785f1152 grafana/grafana:latest "/run.sh" 8 months ago Exited (0) 8 months ago grafana
8479d543c8f5 prom/prometheus:latest "/bin/prometheus --c…" 8 months ago Exited (0) 8 months ago prometheus
9fc2e8d8f62a quay.io/prometheus/node-exporter:latest "/bin/node_exporter …" 8 months ago Exited (143) 8 months ago priceless_hugle
74c6ffab57c6 jc21/nginx-proxy-manager:latest "/init" 8 months ago Up 25 minutes 0.0.0.0:80-81->80-81/tcp, :::80-81->80-81/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp dockernginx_app_1
417efdc9fce8 75ec876a5285 "/scripts/run.sh" 8 months ago Up 25 minutes 3306/tcp dockernginx_db_1
6ec6f7f58dea matrixdotorg/synapse:latest "/start.py" 8 months ago Exited (1) 8 months ago synapse
6ba0efc99319 oznu/guacamole:armhf "/init" 8 months ago Exited (0) 8 months ago confident_leakey
3e5daa61a1d5 vaultwarden/server:latest "/usr/bin/dumb-init …" 8 months ago Up 25 minutes (healthy) 3012/tcp, 0.0.0.0:8100->80/tcp, :::8100->80/tcp vaultwarden
7216268e4a35 ghcr.io/flaresolverr/flaresolverr:latest "/usr/bin/dumb-init …" 8 months ago Exited (0) 8 months ago flaresolverr
f2e3be04c36c failed2run/dashmachine "gunicorn --bind 0.0…" 8 months ago Up 25 minutes 0.0.0.0:5001->5000/tcp, :::5001->5000/tcp dashmachine
a27ce9bcffc1 lscr.io/linuxserver/jackett "/init" 8 months ago Exited (0) 8 months ago jackett
88d25059d96b netdata/netdata "/usr/sbin/run.sh" 8 months ago Exited (0) 8 months ago netdata_netdata_1
43f6feaf46c3 linuxserver/radarr "/init" 8 months ago Exited (137) 8 months ago radarr
aa4b8d59f29c linuxserver/sonarr:latest "/init" 8 months ago Exited (0) 8 months ago sonarr
c9896cfab4b0 linuxserver/transmission "/init" 8 months ago Exited (0) 8 months ago transmission
c2599cf104f8 qmcgaw/gluetun "/gluetun-entrypoint" 8 months ago Exited (1) 3 months ago gluetun1_gluetun_1
It completely fails to install
There's no Portainer on the docker volumes.
Output of

sudo docker ps -a
If there's no Portainer container running: on the GUI, go to System -> OMV Extras -> Portainer and click Install.
Post the output here.
I tried to reinstall Portainer via the GUI in omv-extras.
Docker storage :: /var/lib/docker
Agent port:: 8000
Web port:: 9000
Yacht port:: 8001
ee:: 0
image:: portainer/portainer-ce
Enable TLS:: 0
arch :: arm64
option :: portainer
state :: install
extras :: 6.1.1
DNS OK.
Removing portainer ...
portainer
Removing portainer/portainer-ce image ...
Untagged: portainer/portainer-ce:latest
Untagged: portainer/portainer-ce@sha256:f7607310051ee21f58f99d7b7f7878a6a49d4850422d88a31f8c61c248bbc3a4
Deleted: sha256:9281e1907542d9e135476db62e7dd129a95972dc5cd297f5d01acff58c4f751f
Deleted: sha256:067f72a72d633747ba5a6039a1b4ec3d36555fa22a07f6e5c3be2940d4d040cc
Deleted: sha256:f6fe101531bcf0e63b651a4e3ce2676c1a7f1880288bb288ede04fc1deb1a8a1
Deleted: sha256:e0a46f5d05e1b93a7993c45aaea39729d111d7a096e02ac1656c721e39cb5222
Deleted: sha256:8c004456aeb58b75f792fa091b194c20d6ed4f0d95dd25b0150d71c5c9ab601b
Pulling portainer/portainer-ce ...
Using default tag: latest
latest: Pulling from portainer/portainer-ce
772227786281: Pulling fs layer
96fd13befc87: Pulling fs layer
9feedfe952b7: Pulling fs layer
2b0b05213b47: Pulling fs layer
2b0b05213b47: Waiting
96fd13befc87: Verifying Checksum
96fd13befc87: Download complete
772227786281: Download complete
772227786281: Pull complete
96fd13befc87: Pull complete
2b0b05213b47: Download complete
9feedfe952b7: Verifying Checksum
9feedfe952b7: Download complete
9feedfe952b7: Pull complete
2b0b05213b47: Pull complete
Digest: sha256:f7607310051ee21f58f99d7b7f7878a6a49d4850422d88a31f8c61c248bbc3a4
Status: Downloaded newer image for portainer/portainer-ce:latest
docker.io/portainer/portainer-ce:latest
Starting portainer/portainer-ce ...
d0a5db262e1c0209db447836bd884e3b6dcba6c83d99111d7bc17951dcd07552
Something went wrong trying to pull and start portainer ...
END OF LINE
█
Yes, but from what I'm seeing from Copaxy, I'm thinking he had Portainer running NOT via OMV-Extras but maybe via docker-compose.
And all the instructions I'm giving are based on a Portainer installation via Extras.
I only used the omv-extras tool; I didn't manually create a Portainer container.
Please, outputs of:
sudo ls -al /var/lib/docker/volumes/
sudo ls -al /var/lib/docker/volumes/portainer_data
And what command did you use to make the copy of the folder?
root@meinnas:~# sudo ls -al /var/lib/docker/volumes/
total 76
drwx-----x 11 root root 4096 Dec 21 15:48 .
drwx--x--- 13 root root 4096 Dec 21 15:36 ..
drwx-----x 3 root root 4096 Apr 19 2022 07a52b2096960f4f3e88c4908c0a9f1bac0884ea72f713fb6ece6b111bf5d1e6
drwx-----x 3 root root 4096 Sep 22 00:57 2214dac2c6eaa5cdddaf1161bfab9f97e3ccb8bc95dc5df0a5311aacb9338f62
drwx-----x 3 root root 4096 Dec 21 09:49 480a8ca7aa2fc0922b1ec81bd0bc933498eddc3d2d694c79be6441caa39fb78d
brw------- 1 root root 179, 2 Dec 21 15:37 backingFsBlockDev
drwx-----x 3 root root 4096 Nov 2 12:17 compose_opdata
drwx-----x 3 root root 4096 Nov 2 12:17 compose_pgdata
drwx-----x 3 root root 4096 Sep 22 00:53 kasm_db_1.11.0
-rw------- 1 root root 65536 Dec 21 15:37 metadata.db
drwx-----x 3 root root 4096 Nov 3 15:30 openproject-ce_pg-data
drwx-----x 3 root root 4096 Nov 3 15:57 openproject_raspi_pg-data
drwx-----x 3 root root 4096 Apr 19 2022 synapse-data
root@meinnas:~# sudo ls -al /var/lib/docker/volumes/portainer_data
ls: cannot access '/var/lib/docker/volumes/portainer_data': No such file or directory
The command I used to make the copy was:
Note: the config folder is just the folder where I keep other containers' data; the name "config" is maybe a bit misleading.
docker ps -a
root@meinnas:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7176964d2ab1 portainer/portainer-ce "/portainer" 17 minutes ago Created 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp, 9443/tcp portainer
2df359cfd18e ghcr.io/benphelps/homepage:latest "docker-entrypoint.s…" 6 weeks ago Exited (0) 6 weeks ago homepage
ffa32266cfb9 e251b53f1bb7 "/bin/sh -c 'apt-get…" 6 weeks ago Exited (100) 6 weeks ago keen_bell
33c23a74804d gitea/gitea:latest "/usr/bin/entrypoint…" 6 weeks ago Up 5 minutes 0.0.0.0:2234->22/tcp, :::2234->22/tcp, 0.0.0.0:3010->3000/tcp, :::3010->3000/tcp gitea
e90dfc309588 yobasystems/alpine-mariadb:latest "/scripts/run.sh" 6 weeks ago Up 5 minutes 3306/tcp