Made a lot of progress. I have Portainer up and running. When I went through this initially, all of my previously installed containers showed up. Not the case this time. How do I import all of my previous containers?
Posts by ajaja
-
-
I forgot how to read. thanks. Too early in the morning. I'm still baffled that all it took was a reboot. #Noob
Still not getting to a running portainer web interface:
2024/03/05 06:48AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:369 > encryption key file not present | filename=portainer
2024/03/05 06:48AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:392 > proceeding without encryption key |
2024/03/05 06:48AM INF github.com/portainer/portainer/api/database/boltdb/db.go:125 > loading PortainerDB | filename=portainer.db
2024/03/05 06:48AM INF github.com/portainer/portainer/api/internal/ssl/ssl.go:80 > no cert files found, generating self signed SSL certificates |
2024/03/05 06:48AM INF github.com/portainer/portainer/api/chisel/service.go:193 > Generated a new Chisel private key file | private-key=/data/chisel/private-key.pem
2024/03/05 06:48:59 server: Reverse tunnelling enabled
2024/03/05 06:48:59 server: Fingerprint zgM0B06jjOKu5Sbq6g+kE3Ew60s4b5o7PVou5KNo6kY=
2024/03/05 06:48:59 server: Listening on http://0.0.0.0:8000
2024/03/05 06:48AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:649 > starting Portainer | build_number=35428 go_version=1.20.5 image_tag=linux-amd64-2.19.4 nodejs_version=18.19.0 version=2.19.4 webpack_version=5.88.1 yarn_version=1.22.21
2024/03/05 06:48AM INF github.com/portainer/portainer/api/http/server.go:357 > starting HTTPS server | bind_address=:9443
2024/03/05 06:48AM INF github.com/portainer/portainer/api/http/server.go:341 > starting HTTP server | bind_address=:9000
2024/03/05 06:56AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:369 > encryption key file not present | filename=portainer
2024/03/05 06:56AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:392 > proceeding without encryption key |
2024/03/05 06:56AM INF github.com/portainer/portainer/api/database/boltdb/db.go:125 > loading PortainerDB | filename=portainer.db
2024/03/05 06:56AM INF github.com/portainer/portainer/api/chisel/service.go:198 > Found Chisel private key file on disk | private-key=/data/chisel/private-key.pem
2024/03/05 06:56:08 server: Reverse tunnelling enabled
2024/03/05 06:56:08 server: Fingerprint zgM0B06jjOKu5Sbq6g+kE3Ew60s4b5o7PVou5KNo6kY=
2024/03/05 06:56:08 server: Listening on http://0.0.0.0:8000
2024/03/05 06:56AM INF github.com/portainer/portainer/api/cmd/portainer/main.go:649 > starting Portainer | build_number=35428 go_version=1.20.5 image_tag=linux-amd64-2.19.4 nodejs_version=18.19.0 version=2.19.4 webpack_version=5.88.1 yarn_version=1.22.21
2024/03/05 06:56AM INF github.com/portainer/portainer/api/http/server.go:357 > starting HTTPS server | bind_address=:9443
2024/03/05 06:56AM INF github.com/portainer/portainer/api/http/server.go:341 > starting HTTP server | bind_address=:9000
-
After a reboot I made progress, but am stopped when saving the portainer file.
Please set shared folder for file storage.
OMV\Exception: Please set shared folder for file storage. in /usr/share/openmediavault/engined/rpc/compose.inc:117
Stack trace:
#0 /usr/share/openmediavault/engined/rpc/compose.inc(783): OMVRpcServiceCompose->getComposePath()
#1 [internal function]: OMVRpcServiceCompose->setExample()
#2 /usr/share/php/openmediavault/rpc/serviceabstract.inc(122): call_user_func_array()
#3 /usr/share/php/openmediavault/rpc/rpc.inc(86): OMV\Rpc\ServiceAbstract->callMethod()
#4 /usr/sbin/omv-engined(535): OMV\Rpc\Rpc::call()
#5 {main}
-
-
I doubt it completed or you need to refresh browser cache (ctrl-shift-R).
What is the output of: dpkg -l | grep openme
It sure looks to be installed.
ii openmediavault-compose 7.0.9 all OpenMediaVault compose plugin
I created the new thread because, even though these issues are related, this is a different issue.
-
What is the output of: dpkg -l | grep openme
root@omv:~# dpkg -l | grep openme
ii openmediavault 7.0-32 all openmediavault - The open network attached storage solution
ii openmediavault-compose 7.0.9 all OpenMediaVault compose plugin
ii openmediavault-diskstats 7.0-2 all openmediavault disk monitoring plugin
ii openmediavault-flashmemory 7.0 all folder2ram plugin for openmediavault
ii openmediavault-ftp 7.0-4 all openmediavault FTP-Server plugin
ii openmediavault-kernel 7.0.3 all kernel package
ii openmediavault-keyring 1.0.2-2 all GnuPG archive keys of the openmediavault archive
ii openmediavault-md 7.0-6 all openmediavault Linux MD (Multiple Device) plugin
ii openmediavault-mergerfs 7.0.3 all mergerfs plugin for openmediavault.
ii openmediavault-omvextrasorg 7.0 all OMV-Extras.org Package Repositories for OpenMediaVault
ii openmediavault-photoprism 7.0-4 all openmediavault PhotoPrism plugin
ii openmediavault-resetperms 7.1 all Reset Permissions
ii openmediavault-sharerootfs 7.0-1 all openmediavault share root filesystem plugin
ii openmediavault-snapraid 7.0.5 all snapraid plugin for OpenMediaVault.
ii openmediavault-wetty 7.0-2 all openmediavault WeTTY (Web + TTY) plugin
ii openmediavault-zfs 7.0.5 amd64 OpenMediaVault plugin for ZFS
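For reference, the first column of dpkg -l is two status letters: desired state and current state, so "ii" means marked for install and actually installed. A small sketch parsing one such line (the line is sample text copied from the output above, not queried live):

```shell
# Sample dpkg -l line (copied from the listing above, not a live query)
line='ii  openmediavault-compose  7.0.9  all  OpenMediaVault compose plugin'
# Intentionally unquoted so the shell splits on whitespace into positional params
set -- $line
echo "status=$1 package=$2 version=$3"   # → status=ii package=openmediavault-compose version=7.0.9
```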
-
If you're backing up everything.. just kill the docker service..
This did not go so well. I now have the compose plugin installed, but it's not showing up as a service.
-
-
If you would watch the video, you would see that with a few simple steps, you could have portainer back. You would just have to update portainer using the compose plugin instead of omv-extras.
Where is the video?
-
I had to rebuild the RAID10 drive that houses many of my docker container configurations.
The mount point of my ZFS pool (Zenith) is /srv/Wharf.
rsync -azq /srv/Wharf /srv/mergerfs/Cargo/BackUpArchives/Zenith/
After re-creating Zenith, I put all the data back:
rsync -azq /srv/mergerfs/Cargo/BackUpArchives/Zenith/Wharf/* /srv/Wharf &
Trying to fix this, I am certain I've made it worse. I suspect it is actually a permission issue, but what do I know.
What can I do?
By the way
_________________________
When I try to install the Compose plug-in:
Failed to read from socket: Connection reset by peer
OMV\Rpc\Exception: Failed to read from socket: Connection reset by peer in /usr/share/php/openmediavault/rpc/rpc.inc:172
Stack trace:
#0 /usr/share/php/openmediavault/rpc/proxy/json.inc(95): OMV\Rpc\Rpc::call()
#1 /var/www/openmediavault/rpc.php(45): OMV\Rpc\Proxy\Json->handle()
#2 {main}
root@omv:~# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub. (amd64)
3. The Docker daemon created a new container from that image which runs the executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it to your terminal.
Miscellaneous information:
root@omv:~# sudo service docker restart
Job for docker.service failed because the control process exited with error code.
See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
root@omv:~# systemctl status docker.service
× docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; preset: enabled)
Drop-In: /etc/systemd/system/docker.service.d
└─waitAllMounts.conf
Active: failed (Result: exit-code) since Mon 2024-03-04 16:48:46 CST; 12min ago
Duration: 12h 19min 45.716s
TriggeredBy: × docker.socket
Docs: https://docs.docker.com
Process: 780970 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, s>
Main PID: 780970 (code=exited, status=1/FAILURE)
CPU: 246ms
root@omv:~# journalctl -xeu docker.service
~
~
-- No entries --
-
-
Google is your friend. Bob is your...
https://docs.nginx.com/nginx/a…/installing-nginx-docker/
https://docs.linuxserver.io/images/docker-baseimage-alpine-nginx/
-
If you're backing up everything.. just kill the docker service..
Thanks, I believe that should do it.
root@omv:~# systemctl stop docker
Warning: Stopping docker.service, but it can still be activated by:
docker.socket
-
I'm assuming you mean the down button here. The only compose file I have is portainer (misspelled as ortainer). Portainer is managing all of my Docker containers and is clearly running, even though there is a big red dot that says down.
I have very little experience with Compose, so this is not looking intuitive to me.
-
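For what it's worth, a compose file for Portainer can be quite small. The sketch below is a hypothetical example based on Portainer's usual defaults (image tag, ports 8000/9443, the docker.sock mount, and the /data volume are assumptions, not copied from this poster's setup); the compose plugin would manage a file shaped like this:

```yaml
# Hypothetical minimal Portainer compose file; adjust names and paths to your setup
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "8000:8000"
      - "9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
volumes:
  portainer_data:
```

The named volume keeps Portainer's database (users, endpoints) across container updates, which is what makes redeploying it through the plugin safe.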
zsh: command not found: docker
I need to back up all of my Docker configuration files and data (including databases). I would like to make sure none of that is being written to while I back it up. I will also be replacing/rebuilding the RAID10 drives that these files reside on, so I will not want Docker to be running while I am copying the data back to them.
Thank you
-
Thank you
If you run the sync from the web interface, it should work. The only thing not working is the diff script. If you insist on running the command from the command line, you will need to add the -c /etc/snapraid/omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf to the command. The uuid might have to be changed for your array.
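In other words, the plugin keeps the config under /etc/snapraid/ rather than at /etc/snapraid.conf, so a command-line run needs the -c flag. A sketch building the command (the uuid is the one quoted above and is specific to that poster's array; this just prints the command rather than running it):

```shell
# OMV's snapraid plugin writes its config under /etc/snapraid/, not /etc/snapraid.conf.
# UUID below comes from the post above; yours will differ (check ls /etc/snapraid/).
conf=/etc/snapraid/omv-snapraid-114a88d2-53ed-11ed-8eee-b3f2573b9c38.conf
cmd="snapraid -c $conf sync"
echo "$cmd"   # print instead of executing, since the path is machine-specific
```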
-
sudo omv-showkey snapraid
<snapraid>
  <blocksize>256</blocksize>
  <hashsize>16</hashsize>
  <autosave>0</autosave>
  <nohidden>0</nohidden>
  <syslog>1</syslog>
  <debug>0</debug>
  <sendmail>1</sendmail>
  <runscrub>1</runscrub>
  <scrubfreq>7</scrubfreq>
  <updthreshold>0</updthreshold>
  <delthreshold>0</delthreshold>
  <percentscrub>12</percentscrub>
  <scrubpercent>100</scrubpercent>
  <arrays>
    <array><uuid>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</uuid><name>StarRays</name></array>
  </arrays>
  <drives>
    <drive><uuid>f2701040-b3f3-4442-b7e9-7891e93e9a17</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>ba14ba71-636b-45f5-8a38-30ae5bfb80b0</mntentref><name>18ST2TV103ZR555BVHdc</name><label></label><path>/srv/dev-disk-by-uuid-e1796b8a-1e25-4445-a711-275cd072def5</path><content>1</content><data>1</data><parity>0</parity><paritynum>1</paritynum><paritysplit>0</paritysplit></drive>
    <drive><uuid>11e8e6d8-ede7-4bb0-b5b6-5b6b6eaca1e0</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>e8d38d28-1a2a-4a50-84d3-45bffb8516f5</mntentref><name>18ST3G6101ZVT0DZWWdc</name><label></label><path>/srv/dev-disk-by-uuid-77646585-d861-4ec1-8811-15906c2da3ec</path><content>1</content><data>1</data><parity>0</parity><paritynum>1</paritynum><paritysplit>0</paritysplit></drive>
    <drive><uuid>3565e33e-0f1b-4b06-8127-64077b881c6d</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>df135e30-ba70-4a0d-a63a-5bb68143d3da</mntentref><name>16WDC2CKH9HHNp</name><label></label><path>/srv/dev-disk-by-uuid-742151bf-0bdd-4455-a593-c81224e78d46</path><content>0</content><data>0</data><parity>1</parity><paritynum>1</paritynum><paritysplit>0</paritysplit></drive>
    <drive><uuid>cfd7e0fb-92fd-4139-8dc9-aae99eea4267</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>9eacd529-819a-49d6-8396-6f1d0782c648</mntentref><name>16TOSH91L0A2R2FWTGp</name><label></label><path>/srv/dev-disk-by-uuid-196879b9-0183-4a6c-a452-53a1b5cbb6c7</path><content>0</content><data>0</data><parity>1</parity><paritynum>2</paritynum><paritysplit>0</paritysplit></drive>
    <drive><uuid>676298dd-3bc3-4c64-b370-f3f066d3b756</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>f78e7241-b204-479f-90bc-bcfc66895c12</mntentref><name>08ST2CX188ZR10P2GEdc</name><label></label><path>/srv/dev-disk-by-uuid-304f067c-222e-4818-b8f0-2834b63c1f71</path><content>1</content><data>1</data><parity>0</parity><paritynum>1</paritynum><paritysplit>0</paritysplit></drive>
    <drive><uuid>eecc6e50-fd9d-4e66-bf65-d570b9161feb</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>6da03348-f3cd-4733-86e4-a5a168640cc3</mntentref><name>08TOSH97A5K5MSF96Fd</name><label></label><path>/srv/dev-disk-by-uuid-f540400f-2cc5-4653-8d3a-971dea3daa76</path><content>0</content><data>1</data><parity>0</parity><paritynum>1</paritynum><paritysplit>0</paritysplit></drive>
    <drive><uuid>67c8fc11-d96d-49a5-83ef-00d51a92fec8</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>fe3125b1-3068-461d-b429-559181797f09</mntentref><name>08TOSH97B6K5LJF96Fd</name><label></label><path>/srv/dev-disk-by-uuid-4452839b-ceb5-47dc-83d1-b54593f5f8c9</path><content>0</content><data>1</data><parity>0</parity><paritynum>1</paritynum><paritysplit>0</paritysplit></drive>
    <drive><uuid>ae0f3a68-1361-4e06-bd88-81026d0407e7</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>81ead3d9-421e-4257-aa1a-a6830ab489b9</mntentref><name>10WD00WJTA02YJD8UYDdc</name><label>WD10TBA02YJD8UYD</label><path>/srv/dev-disk-by-uuid-ffd38b91-901e-47e5-a388-e58632a7b2c9</path><content>1</content><data>1</data><parity>0</parity><paritynum>1</paritynum><paritysplit>0</paritysplit></drive>
    <drive><uuid>7f1d907f-a2f3-4319-93d6-9a61fb760246</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>ae4317e9-2055-466c-9d4d-a153e065e1e1</mntentref><name>10WD00WJTA02YHZ3Z6Dd</name><label>WDC10TB2YHZ3Z6D</label><path>/srv/dev-disk-by-uuid-38fad4a7-7d34-4e7d-b5a3-32e0588deead</path><content>0</content><data>1</data><parity>0</parity><paritynum>1</paritynum><paritysplit>0</paritysplit></drive>
    <drive><uuid>1edef701-2bca-4c17-8792-266ab7372811</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>ebe19fb5-97b7-4f49-8593-1148cddeb1ba</mntentref><name>12WD11A6JA09JH8R0WTdc</name><label></label><path>/srv/dev-disk-by-uuid-5579e13f-0dd4-41dc-8dc7-1d8e9c25a51d</path><content>0</content><data>1</data><parity>0</parity><paritynum>1</paritynum><paritysplit>0</paritysplit></drive>
    <drive><uuid>2186f9bd-7fcd-4075-ba13-3ec1c7032b7c</uuid><arrayref>d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd</arrayref><mntentref>76f615c5-b188-497e-958c-cd7d0f71bc77</mntentref><name>12WD11A6JA09KG52WDLd</name><label></label><path>/srv/dev-disk-by-uuid-30da0e3e-cb5c-43ef-88cd-d580e924930f</path><content>1</content><data>1</data><parity>0</parity><paritynum>1</paritynum><paritysplit>0</paritysplit></drive>
  </drives>
  <rules>
    <rule><uuid>a27998b6-7e0f-441b-9654-5285eaaba98f</uuid><rule1>/srv/mergerfs/Cargo/hold/Media/TV/</rule1><rtype>0</rtype></rule>
    <rule><uuid>9c70da41-315b-49aa-8128-d99f158242d4</uuid><rule1>/srv/mergerfs/Cargo/hold/Media/Movies/</rule1><rtype>0</rtype></rule>
    <rule><uuid>726570dc-d6c8-4578-8483-d004f4ee1ac9</uuid><rule1>/srv/mergerfs/Cargo/BackUpArchives/</rule1><rtype>0</rtype></rule>
    <rule><uuid>c46a2b1e-4170-44a9-a17d-ab776e4d8162</uuid><rule1>/srv/mergerfs/Cargo/hold/Downloads/</rule1><rtype>0</rtype></rule>
    <rule><uuid>a1cabb3d-0e8d-4315-9364-673b36000490</uuid><rule1>/srv/mergerfs/Cargo/TimeMachine/</rule1><rtype>0</rtype></rule>
    <rule><uuid>a47cb90f-a54d-4d29-a5f6-1ba0e9eca3e5</uuid><rule1>/srv/Wharf/</rule1><rtype>0</rtype></rule>
    <rule><uuid>d9ed7a02-9b38-451a-97f7-4185488f8838</uuid><rule1>/srv/mergerfs/Cargo/hold/Media/Books/</rule1><rtype>0</rtype></rule>
    <rule><uuid>d64c9976-03a4-453b-a2aa-2c22f245aa0f</uuid><rule1>/srv/mergerfs/Cargo/hold/Media/Audiobooks/</rule1><rtype>0</rtype></rule>
  </rules>
</snapraid>
-
sudo omv-salt deploy run snapraid
debian:
----------
          ID: configure_borg_envvar_dir
    Function: file.directory
        Name: /etc/snapraid
      Result: True
     Comment: The directory /etc/snapraid is in the correct state
     Started: 08:45:04.274524
    Duration: 4.242 ms
     Changes:
----------
          ID: remove_snapraid_conf_files
    Function: module.run
      Result: True
     Comment: file.find: ['/etc/snapraid/omv-snapraid-d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd.conf']
     Started: 08:45:04.279145
    Duration: 0.908 ms
     Changes:
              ----------
              file.find:
                  - /etc/snapraid/omv-snapraid-d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd.conf
----------
          ID: configure_snapraid_d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd
    Function: file.managed
        Name: /etc/snapraid/omv-snapraid-d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd.conf
      Result: True
     Comment: File /etc/snapraid/omv-snapraid-d8f7c892-0f26-4824-8cc2-e8c0d3a8b3cd.conf updated
     Started: 08:45:04.280131
    Duration: 127.23 ms
     Changes:
              ----------
              diff:
                  New file
              mode:
                  0644
----------
          ID: configure_snapraid-diff
    Function: file.managed
        Name: /etc/snapraid-diff.conf
      Result: True
     Comment: File /etc/snapraid-diff.conf is in the correct state
     Started: 08:45:04.407454
    Duration: 92.335 ms
     Changes:

Summary for debian
------------
Succeeded: 4 (changed=2)
Failed: 0
------------
Total states run: 4
Total run time: 224.715 ms
-
I've been traveling quite a bit this past year and have been rather hands off with my server.
Today I moved a bunch of data around, so I thought I would push snapraid a bit with:
snapraid touch and snapraid sync -h
Ugh, I got the dreaded: No configuration file found at '/etc/snapraid.conf'
How do I resolve this?
-