I have been successfully running OMV5 on a Raspberry Pi 3, but due to some extenuating circumstances it has not been running for over a year. When I tried to run it again, the web interface did not come up. I then tried to SSH in and got an access denied. I also ran it with a monitor hooked up to the Pi. It appears to load with all lines indicating OK, except for the attached storage, which is not connected at the moment. On the monitor it seems to accept my password but just cycles through and comes back to the login prompt. If I use an incorrect password, it indicates that the password is incorrect. Any thoughts on what I can do next?
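A common cause of this exact symptom (the correct password is accepted, then the console cycles straight back to the login prompt) is a full root filesystem or errors during session start-up. A hedged diagnostic sketch, assuming the SD card can be edited on another machine to get an emergency shell:

```shell
# On another machine, append init=/bin/bash to the single line in
# cmdline.txt on the SD card's boot partition, then boot the Pi into an
# emergency shell and remount root read-write:
#   mount -o remount,rw /
# Check whether the root filesystem is full (a 100% rootfs breaks logins):
df -h /
# Look for errors from the last boot (if journalctl is available):
journalctl -b -p err --no-pager 2>/dev/null | tail -n 20
# After freeing space, remove init=/bin/bash from cmdline.txt and reboot.
```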
Posts by vandoe
-
-
volumes:
  - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker-Config/wireguard:/config #See point 1.
I deleted everything and redeployed the container as suggested above and the peer1 and 2 files were created, so everything is working again.
-
Port forwarding is correct. There was no change from when it was working.
-
Yes, I'm using Docker-Config for each of my containers (Nextcloud, Bitwarden and Wireguard). Here is what is in Docker-Config:
Code
drwxr-sr-x  4 pi   users 4096 Mar 13  2021 Bitwarden_Self_Host
drwxr-sr-x  2 pi   users 4096 Feb 25 15:34 bitwarden
drwxrwsr-x  7 pi   users 4096 Feb 13  2021 cache
drwxr-sr-x  2 pi   users 4096 Feb 12 01:16 coredns
drwxr-sr-x  2 root root  4096 Mar  4 16:26 custom-cont-init.d (File created by wireguard instead of Peer1 and 2)
drwxr-sr-x  2 root root  4096 Mar  4 16:26 custom-services.d (File created by wireguard instead of Peer1 and 2)
drwxrwsr-x  9 pi   users 4096 Feb 13  2021 nextcloud
drwxrwsr-x  4 pi   users 4096 Feb 13  2021 nextclouddb
drwxr-sr-x  2 pi   users 4096 Feb 12 01:16 server
drwxrwsr-x 12 pi   users 4096 Feb 13  2021 swag
drwxr-sr-x  2 pi   users 4096 Feb 12 01:16 templates
-rw-------  1 pi   users  585 Feb 12 01:16 wg0.conf
Quote
To support the app dev(s) visit:
WireGuard: https://www.wireguard.com/donations/
To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------
User uid: 1000
User gid: 100
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 30-module: executing...
Uname info: Linux 2be563ae216e 5.10.63-v7+ #1496 SMP Wed Dec 1 15:58:11 GMT 2021 armv7l armv7l armv7l GNU/Linux
**** It seems the wireguard module is already active. Skipping kernel header install and module compilation. ****
[cont-init.d] 30-module: exited 0.
[cont-init.d] 40-confs: executing...
**** Server mode is selected ****
**** External server address is set to 24.128.146.227 ****
**** External server port is set to 51820. Make sure that port is properly forwarded to port 51820 inside this container ****
**** Internal subnet is set to 10.13.13.0 ****
**** AllowedIPs for peers 0.0.0.0/0 ****
**** PEERDNS var is either not set or is set to "auto", setting peer DNS to 10.13.13.1 to use wireguard docker host's DNS. ****
**** Server mode is selected ****
**** No changes to parameters. Existing configs are used. ****
[cont-init.d] 40-confs: exited 0.
[cont-init.d] 90-custom-folders: executing...
[cont-init.d] 90-custom-folders: exited 0.
[cont-init.d] 99-custom-scripts: executing...
[custom-init] no custom files found exiting...
[cont-init.d] 99-custom-scripts: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.13.13.1 dev wg0
.:53
CoreDNS-1.8.7
linux/arm, go1.17.6, a9adfd5
[#] ip link set mtu 1420 up dev wg0
[#] ip -4 route add 10.13.13.3/32 dev wg0
[#] ip -4 route add 10.13.13.2/32 dev wg0
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -A FORWARD -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
-
After my wireguard stopped working I decided to dump the whole thing and start over. I deleted the container and stack using Portainer, then reloaded everything. The second time around, instead of creating the peer1 and peer2 folders, it created the following two empty folders:
custom-cont-init.d and custom-services.d
Here is the stack I'm using. It worked just fine the first time around.
Code
Alles anzeigenversion: "2.1" services: wireguard: image: lscr.io/linuxserver/wireguard container_name: wireguard cap_add: - NET_ADMIN - SYS_MODULE environment: - PUID=1000 #See point 1. - PGID=100 #See point 1. - TZ=America/New_York #Should be adjusted according to your location - SERVERURL=My IP #See point 2. - SERVERPORT=51820 #To change see next post - PEERS=2 #See point 2. Number of clients you want to configure - PEERDNS=auto - INTERNAL_SUBNET=10.13.13.0 #Only change if it conflicts - ALLOWEDIPS=0.0.0.0/0 volumes: - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker-Config:/config #See point 1. - /lib/modules:/lib/modules ports: - 51823:51820/udp #To change see next post sysctls: - net.ipv4.conf.all.src_valid_mark=1 restart: unless-stopped
-
After getting the containers up and running everything works except I can't access anything through wireguard.
Quote from the Wireguard guide
"We can check it by opening a browser and accessing the IP of any service on our LAN. The home network appears on the screen, we press the button on the right and we give it permission to access."

I should be able to open openmediavault by entering 192.168.0.23:85 in a browser, according to the above. This did not work for me. I was able to get to my files originally by setting up a remote connection in my Android file browser with SFTP, using IP 192.168.0.23 and port 22. After the restart this does not work either.
-
sudo systemctl restart docker.service
This seems to have restarted all my containers. Thank you
-
Here are the other requested items:
mount | grep disk
Code
/dev/sdc1 on /srv/dev-disk-by-uuid-f1ebbf1b-0cc5-4056-bba0-0e4f7e08932d type ext4 (rw,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
overlay on /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/d4f7cef45dbdef46192655e7b192cb212c36a882937aae6f4055cd4253c3ff7e/merged type overlay (rw,relatime,lowerdir=/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/l/JKBQWOYQ2AAMKCRISZLQQWHG4T:/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/l/6YUF322H3FUPXQZBCMCHTVCQQM:/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/l/TWMRF3GZ4J3ZNTPUL2OL725DBD:/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/l/O7IBVJB3YXYNCCQVIRELLLKLR6,upperdir=/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/d4f7cef45dbdef46192655e7b192cb212c36a882937aae6f4055cd4253c3ff7e/diff,workdir=/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/d4f7cef45dbdef46192655e7b192cb212c36a882937aae6f4055cd4253c3ff7e/work)
/dev/sda1 on /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f type ext4 (rw,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
lsblk
Code
sda           8:0    0   1.8T  0 disk
└─sda1        8:1    0   1.8T  0 part /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f
sdb           8:16   0 111.8G  0 disk
└─sdb1        8:17   0 111.8G  0 part /srv/dev-disk-by-uuid-12E426FAE426E029
sdc           8:32   0   1.4T  0 disk
└─sdc1        8:33   0   1.4T  0 part /srv/dev-disk-by-uuid-f1ebbf1b-0cc5-4056-bba0-0e4f7e08932d
mmcblk0     179:0    0  29.7G  0 disk
├─mmcblk0p1 179:1    0   256M  0 part /boot
└─mmcblk0p2 179:2    0  29.5G  0 part /
blkid
Code
/dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="DC3E-E470" TYPE="vfat" PARTUUID="5feefdba-01"
/dev/mmcblk0p2: LABEL="rootfs" UUID="a7adb26a-8b87-4729-99c8-9f5ac069d51e" TYPE="ext4" PARTUUID="5feefdba-02"
cat /etc/docker/daemon.json
-
Here is the nextcloud/bitwarden stack:
Code
Alles anzeigenversion: "2" services: nextcloud: image: ghcr.io/linuxserver/nextcloud:latest container_name: nextcloud environment: - PUID=1000 - PGID=100 - TZ=America/New_York volumes: - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker-Config/nextcloud/:/config - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Primary/AppData/Nextcloud/:/data depends_on: - nextclouddb ports: - 450:443 restart: unless-stopped nextclouddb: image: ghcr.io/linuxserver/mariadb:latest container_name: nextclouddb environment: - PUID=1000 - PGID=100 - MYSQL_ROOT_PASSWORD=dbpassword volumes: - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker-Config/nextclouddb/:/config restart: unless-stopped swag: image: linuxserver/swag container_name: swag cap_add: - NET_ADMIN environment: - PUID=1000 - PGID=100 - TZ=America/New_York - DNSPLUGIN=duckdns - URL=duckdns.org - DUCKDNSTOKEN=mytoken - SUBDOMAINS=mmydomainnextcloud,mydomainbitwarden - ONLY_SUBDOMAINS=true - VALIDATION=http - EMAIL=myemail volumes: - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker-Config/swag/:/config ports: - 444:443 - 81:80 restart: unless-stopped bitwarden: image: bitwardenrs/server:latest # raspberry container_name: bitwarden volumes: - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker-Config/bitwarden/:/config ports: - 8005:80 restart: unless-stopped
-
I just discovered that my containers have stopped running. I had Nextcloud, Bitwarden and wireguard all running fine. The only change made to the system was adding an additional hard drive to my NAS. The NAS is working fine; I can access all the data on the disks. Running docker ps -a also shows no containers at all. I ran a test case where I reloaded my wireguard stack using Portainer and it errored out. Below is the error message:
Any thoughts on how to recover?
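Since the containers vanished right after a drive was added, one thing worth checking is whether Docker's data-root lives on one of the data disks and whether that path was actually mounted when the daemon started. A hedged sketch (the path is the one shown elsewhere in this thread):

```shell
# Where does the Docker daemon keep its data?
docker info 2>/dev/null | grep "Docker Root Dir"
# Any data-root override?
cat /etc/docker/daemon.json 2>/dev/null
# Is that location actually mounted right now?
findmnt /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f || echo "mount point missing"
# If the disk was mounted late or under a different path, the daemon may
# have started against an empty directory; once the mount is back in
# place, restarting the daemon can bring the containers back:
#   sudo systemctl restart docker.service
```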
-
Everything is now working. However, the guide implies you can get access to server drives and files but doesn't say how. I can get to the OMV 5 console by just entering serverip:85. I was also able to set up remote access on my phone with a file manager by choosing an SFTP connection on port 22, so I thought entering serverip:22 would also work, but it doesn't. Is there another way?
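The reason serverip:22 fails in a browser is that browsers speak HTTP, while port 22 is SSH/SFTP, so it needs an SFTP-capable client rather than a different port number. A minimal sketch (the IP and user here are the ones mentioned in this thread; adjust as needed):

```shell
# The OMV web UI is HTTP, so it works by port in a browser:
#   http://192.168.0.23:85
# File access over port 22 needs an SFTP client instead, e.g.:
#   sftp -P 22 pi@192.168.0.23
# or an sftp:// URL in a file manager that supports it:
echo "sftp://pi@192.168.0.23/"
```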
-
Changing the ports to this:
Allowed the stack to run without error, but I would still like to stick with the original and free up port 51820. I don't understand why removing wireguard completely would still leave something listening on that port. Is there some way to figure out how to clear that port?
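One way to find out what is holding the port is to list UDP listeners with their owning process. Note that a kernel-side WireGuard socket shows up with no process name, so a leftover wg interface on the host is also worth checking for. A minimal sketch:

```shell
# List UDP listeners (-u) in listening state (-l), numeric (-n), with
# owning process (-p), and filter for the WireGuard port:
ss -ulpn | grep 51820 || echo "nothing bound to 51820"
# A kernel WireGuard socket has no owning process, so also check for a
# stale interface left behind on the host:
ip -brief link show type wireguard 2>/dev/null || true
# If a leftover wg0 exists, deleting it frees the port:
#   sudo ip link delete wg0
```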
-
So I've deleted the wireguard install and recreated it with the following stack
Code
Alles anzeigenversion: "2.1" services: wireguard: image: lscr.io/linuxserver/wireguard container_name: wireguard cap_add: - NET_ADMIN - SYS_MODULE environment: - PUID=1000 #See point 1. - PGID=100 #See point 1. - TZ=America/New_York #Should be adjusted according to your location - SERVERURL=MyextIP #See point 2. - SERVERPORT=51820 #To change see next post - PEERS=2 #See point 2. Number of clients you want to configure - PEERDNS=auto - INTERNAL_SUBNET=10.13.13.0 #Only change if it conflicts - ALLOWEDIPS=0.0.0.0/0 volumes: - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker-Config:/config #See point 1. - /lib/modules:/lib/modules ports: - 51820:51820/udp #To change see next post sysctls: - net.ipv4.conf.all.src_valid_mark=1 restart: unless-stopped
Still get the same error. Nothing else is looking at port 51820. Do you see anything wrong with this file?
-
I tried it and it made no difference. I'm under the impression that what comes before the colon is whatever folder and drive I'm working in, so /Docker-Config:/lib/modules should work just the same as /lib/modules:/lib/modules. The first one adds /lib/modules to my existing Docker-Config folder and the second creates a new /lib/modules folder on my disk.
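For reference, in `-v HOST:CONTAINER` syntax the left side is a path on the host and the right side is where that path appears inside the container, so the two mappings expose different host content at the container's /lib/modules (the wireguard image expects the host's real kernel modules there). A quick way to see the difference, assuming docker and a small image such as alpine are available:

```shell
# The left side of -v is the HOST path, the right side is the path
# INSIDE the container. These two runs show different host directories
# at the same container path /lib/modules:
if command -v docker >/dev/null; then
  docker run --rm -v /lib/modules:/lib/modules alpine ls /lib/modules
  docker run --rm -v /tmp:/lib/modules alpine ls /lib/modules
else
  echo "docker not available here"
fi
```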
-
-
Just tried to get wireguard up and running on my OMV 5. I used the guide from the forum and Portainer 2.11.1. I get the following error, and Google isn't showing any useful info on it.
Code
Deployment error
failed to deploy a stack:
Container wireguard  Creating
Container wireguard  Created
Container wireguard  Starting
Error response from daemon: driver failed programming external connectivity on endpoint wireguard (9a47df36fabb47ec02da8d928a3d6469a5e2872b413594a138018e4e6f0d5503): Error starting userland proxy: listen udp4 0.0.0.0:51820: bind: address already in use
Below is my stack in portainer:
Code
Alles anzeigenversion: "2.1" services: wireguard: image: lscr.io/linuxserver/wireguard container_name: wireguard cap_add: - NET_ADMIN - SYS_MODULE environment: - PUID=1000 #See point 1. - PGID=100 #See point 1. - TZ=America/New_York #Should be adjusted according to your location - SERVERURL=MyextIP #See point 2. - SERVERPORT=51820 #To change see next post - PEERS=2 #See point 2. Number of clients you want to configure - PEERDNS=auto - INTERNAL_SUBNET=10.13.13.0 #Only change if it conflicts - ALLOWEDIPS=0.0.0.0/0 volumes: - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker-Config:/config #See point 1. - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker-Config:/lib/modules ports: - 51820:51820/udp #To change see next post sysctls: - net.ipv4.conf.all.src_valid_mark=1 restart: unless-stopped
I added the port forwarding as follows:
Code
#  Service Name  External Port  External IP Address  Internal Port  Internal IP Address
3  Wireguard     UDP: 51820     Any                  UDP: 51820     OMV server IP
Any guesses as to what is wrong?
-
Just shut down my nginx_app_1 container and restarted swag. Everything is now working again. Thanks for everyone's help. You guys are awesome. Is there a way I can determine which Bitwarden version has all the saved data?
-
To help me make sense of things I have laid out my file structure:
Code
pi@raspberrypi:/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f $ ls
Container_bakup  Docker  Docker-Config  Primary  aquota.group  aquota.user  lost+found

Primary
  AppData  admin-user  docker-compose.yml(nextcloud + swag)  pi
  AppData
    Bitwarden_Self_Host  Nextcloud
    Bitwarden_Self_Host
      README.md  create_ssl.sh  data  docker-compose.yml  get-docker.sh  setup.sh
    Nextcloud
      admin  appdata_ocbh96lgg1l1  files_external  index.html  nextcloud.log  tparks

Docker-Config
  Bitwarden_Self_Host  bitwarden  cache  nextcloud  nextclouddb  nginx  swag
  Bitwarden_Self_Host
    README.md  create_ssl.sh  data  docker-compose.yml  get-docker.sh  setup.sh
  nextcloud
    crontabs  keys  log  nginx  php  www
  nginx
    config.json  data  docker-compose.yml  letsencrypt
  swag
    crontabs  dns-conf  etc  fail2ban  geoip2db  keys  log  nginx  php  www
    nginx
      authelia-location.conf  authelia-server.conf  dhparams.pem  geoip2.conf  ldap.conf  nginx.conf  proxy-confs  proxy.conf  site-confs  ssl.conf
From this I can see how I ended up with two instances of Bitwarden. Setting that issue aside, I see a docker-compose file under Docker-Config/nginx that looks like it might be interfering with swag. Its contents are:
Code
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
  db:
    image: 'yobasystems/alpine-mariadb:latest'
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - ./data/mysql:/var/lib/mysql
I don't think this stack has anything to do with my Nextcloud or Bitwarden. Can someone confirm? Is this the nginx_app_1 container that Zoki mentions in #24? If so, shouldn't I be able to stop it and then start swag again?
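For what it's worth, that compose file is a separate Nginx Proxy Manager stack; its app service publishes host ports 80, 81 and 443, and host port 81 overlaps the 81:80 mapping in the swag stack. Stopping it before starting swag can be sketched as follows (the container names are the compose defaults assumed here):

```shell
# Stop the Nginx Proxy Manager containers, then start swag
# (default compose container names assumed):
#   docker stop nginx_app_1 nginx_db_1
#   docker start swag
# or, from the folder holding its docker-compose.yml:
#   docker-compose down
# Verify nothing is still listening on the contested host port afterwards:
ss -tlnp 2>/dev/null | grep -E ':(81|443) ' || echo "ports 81/443 free"
```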
-
I had significant help from KM0201 11 months ago to set this up. I had only Bitwarden and Nextcloud running successfully and never touched it since. I thought both were using swag. Any guesses as to what nginx_app_1 is for? Is it for running Bitwarden?
-
It does appear that swag is not running. I just tried to start it but got the following error: