Posts by vandoe

    I have been successfully running OMV5 on a Raspberry Pi 3, but due to some extenuating circumstances it has not been running for over a year. When I tried to run it again, the web interface did not come up. I then tried to SSH in and got an access denied. I also ran it with a monitor hooked up to the Pi. It appears to be loading, with all lines indicating OK except for the attached storage, which is not connected at the moment. On the monitor it seems to accept my password but just cycles through and comes back to the login prompt. If I use an incorrect password, it indicates that the password is incorrect. Any thoughts on what I can do next?
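A login that accepts the password and then immediately drops back to the prompt is often caused by a full root filesystem (the session cannot write its login files). As a hedged sketch, if you can reach a shell some other way (e.g. by mounting the SD card's root partition on another Linux machine), this checks the usage of whatever filesystem holds a given path:

```shell
# A 100%-full root filesystem is a common cause of "password accepted,
# then straight back to the login prompt". This reports the usage of the
# filesystem holding $fs -- substitute the mountpoint you used if you
# inspect the Pi's SD card from another Linux box.
fs=/
usage=$(df --output=pcent "$fs" | tail -1 | tr -dc '0-9')
if [ "$usage" -ge 95 ]; then
    echo "filesystem at $fs is nearly full: ${usage}%"
else
    echo "filesystem at $fs looks OK: ${usage}%"
fi
```

If it really is full, clearing old logs under var/log on that partition usually frees enough space to log in again.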

    Yes, I'm using Docker-Config for each of my containers (Nextcloud, Bitwarden and Wireguard). Here is what is in Docker-Config:

    After my wireguard stopped working I decided to dump the whole thing and start over. I deleted the container and stack using Portainer, then reloaded everything. The second time around, instead of creating the peer1 and peer2 folders, it created the following two empty folders:


    custom-cont-init.d and custom-services.d


    Here is the stack I'm using. It worked just fine the first time around.


    After getting the containers up and running everything works except I can't access anything through wireguard.


    Quote from Wireguard guide


    We can check it by opening a browser and accessing the IP of any service on our LAN. The home network appears on the screen; we press the button on the right and give it permission to access.

    According to the above, I should be able to open openmediavault by putting 192.168.0.23:85 into a browser. This did not work for me. I was originally able to get to my files by setting up a remote connection in my Android file browser with SFTP, using IP 192.168.0.23 and port 22. After the restart this does not work either.
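Whether LAN addresses like 192.168.0.23 are reachable through the tunnel depends on the client's AllowedIPs. A minimal sketch of the client-side [Peer] section, assuming the LAN is 192.168.0.0/24 and the tunnel subnet is the linuxserver default 10.13.13.0/24 (both assumptions):

```ini
[Peer]
PublicKey = <server public key>
Endpoint = <your-public-ip-or-ddns>:51820
# AllowedIPs must cover the LAN subnet, not just the tunnel subnet,
# or 192.168.0.x traffic will never be routed into the tunnel.
AllowedIPs = 10.13.13.0/24, 192.168.0.0/24
```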

    Here are the other requested items:


    mount | grep disk


    Code
    /dev/sdc1 on /srv/dev-disk-by-uuid-f1ebbf1b-0cc5-4056-bba0-0e4f7e08932d type ext4 (rw,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)
    overlay on /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/d4f7cef45dbdef46192655e7b192cb212c36a882937aae6f4055cd4253c3ff7e/merged type overlay (rw,relatime,lowerdir=/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/l/JKBQWOYQ2AAMKCRISZLQQWHG4T:/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/l/6YUF322H3FUPXQZBCMCHTVCQQM:/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/l/TWMRF3GZ4J3ZNTPUL2OL725DBD:/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/l/O7IBVJB3YXYNCCQVIRELLLKLR6,upperdir=/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/d4f7cef45dbdef46192655e7b192cb212c36a882937aae6f4055cd4253c3ff7e/diff,workdir=/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker/overlay2/d4f7cef45dbdef46192655e7b192cb212c36a882937aae6f4055cd4253c3ff7e/work)
    /dev/sda1 on /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f type ext4 (rw,relatime,jqfmt=vfsv0,usrjquota=aquota.user,grpjquota=aquota.group)

    lsblk


    Code
    sda           8:0    0   1.8T  0 disk
    └─sda1        8:1    0   1.8T  0 part /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f
    sdb           8:16   0 111.8G  0 disk
    └─sdb1        8:17   0 111.8G  0 part /srv/dev-disk-by-uuid-12E426FAE426E029
    sdc           8:32   0   1.4T  0 disk
    └─sdc1        8:33   0   1.4T  0 part /srv/dev-disk-by-uuid-f1ebbf1b-0cc5-4056-bba0-0e4f7e08932d
    mmcblk0     179:0    0  29.7G  0 disk
    ├─mmcblk0p1 179:1    0   256M  0 part /boot
    └─mmcblk0p2 179:2    0  29.5G  0 part /

    blkid

    Code
    /dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="DC3E-E470" TYPE="vfat" PARTUUID="5feefdba-01"
    /dev/mmcblk0p2: LABEL="rootfs" UUID="a7adb26a-8b87-4729-99c8-9f5ac069d51e" TYPE="ext4" PARTUUID="5feefdba-02"

    cat /etc/docker/daemon.json


    Code
    "data-root": "/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker"

    Here is the nextcloud/bitwarden stack:


    I just discovered that my containers have all stopped running. I had Nextcloud, Bitwarden and wireguard all running fine. The only thing done to the system was to add an additional hard drive to my NAS. The NAS is working fine; I can access all the data on the disks. Running docker ps -a also shows no containers at all. I ran a test case where I reloaded my wireguard stack using Portainer and it errored out. Below is the error message:


    Code
    Deployment error
    failed to deploy a stack: Error response from daemon: layer does not exist

    Any thoughts on how to recover?
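A "layer does not exist" error right after adding a drive can happen if the disk holding Docker's data-root did not mount (e.g. device letters shifted), so Docker started against an empty mountpoint directory. A hedged sketch of a check, using the data-root path from the daemon.json shown above:

```shell
# If the UUID mount backing Docker's data-root is missing, Docker falls
# back to an empty directory and every previously pulled layer "does not
# exist". Check that the backing disk is really mounted:
data_root="/srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker"
mount_dir=$(dirname "$data_root")
if grep -q " $mount_dir " /proc/mounts; then
    echo "backing disk is mounted at $mount_dir"
else
    echo "backing disk is NOT mounted at $mount_dir"
fi
```

If the disk turns out not to be mounted, remounting it and restarting the docker service should bring the old images and containers back without redeploying anything.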

    Everything is now working; however, the guide implies you can get access to server drives and files but doesn't say how. I can get to the OMV 5 console by just putting serverip:85. I was also able to set up remote access on my phone with a file manager by choosing an SFTP connection on port 22, so I thought putting serverip:22 into a browser would also work, but it doesn't. Is there another way?

    Changing the ports to this:

    Code
    ports:
    - 51823:51820/udp #To change see next post

    Allowed the stack to run without error, but I would still like to stick with the original and clear port 51820. I don't understand why removing wireguard completely would still leave something listening on that port. Is there some way to figure out how to clear it?
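One possible explanation, offered as a guess: a kernel-mode WireGuard interface left over from the old install can hold UDP 51820 without any userland process appearing in ss or netstat. A sketch that checks the port directly from /proc, which works even without those tools:

```shell
# UDP ports appear in /proc/net/udp* with the port number in hex;
# 51820 = CA6C. Note that a leftover kernel WireGuard interface
# ("ip link show type wireguard") can also hold the port without any
# process showing up in ss/netstat; "ip link delete wg0" would free it.
port_hex=$(printf '%04X' 51820)
if grep -qi ":${port_hex} " /proc/net/udp /proc/net/udp6 2>/dev/null; then
    echo "UDP 51820 is in use"
else
    echo "UDP 51820 appears free"
fi
```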

    So I've deleted the wireguard install and recreated it with the following stack.


    Still get the same error. Nothing else is looking at port 51820. Do you see anything wrong with this file?

    I tried it and it made no difference. I'm under the impression that what comes before the colon is whatever folder and drive I'm working in, so /Docker-Config:/lib/modules should work just the same as - /lib/modules:/lib/modules. The first one adds /lib/modules to my existing folder Docker-Config, and the second creates a new /lib/modules folder on my disk.

    Are you saying it should be as follows?


    Code
    - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/lib/modules:/lib/modules
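On the colon question: in a compose volumes entry, the left side is always a host path and the right side is the path inside the container. The linuxserver/wireguard image looks for kernel modules at /lib/modules inside the container, so the host's real /lib/modules belongs on the left; pointing the left side at a folder on the data disk would hide the modules from the container. A hedged sketch (the wireguard config path is an assumption):

```yaml
volumes:
  # host path : container path
  - /srv/dev-disk-by-uuid-1c0dc0b4-d37c-4a43-b9ed-597a8dd4f64f/Docker-Config/wireguard:/config
  - /lib/modules:/lib/modules   # the host's real module tree, not a folder on the data disk
```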

    Just tried to get wireguard up and running on my OMV 5. Used the guide from the forum and Portainer 2.11.1. I got the following error, and Google isn't showing any useful info on it.


    Code
    Deployment error
    failed to deploy a stack: 
    Container wireguard Creating Container wireguard Created Container wireguard Starting Error response from daemon: 
    driver failed programming external connectivity on endpoint wireguard (9a47df36fabb47ec02da8d928a3d6469a5e2872b413594a138018e4e6f0d5503): 
    Error starting userland proxy: listen udp4 0.0.0.0:51820: bind: address already in use

    Below is my stack in portainer:


    I added the port forwarding as follows:

    Code
    #   Service Name   External Port   External IP Address   Internal Port   Internal IP Address
    3   Wireguard      UDP: 51820      Any                   UDP: 51820      OMV server IP

    Any guesses as to what is wrong?

    To help me make sense of things I have laid out my file structure:

    From this I can see how I ended up with two instances of Bitwarden. Setting aside that issue, I see a docker-compose file under Docker-Config/nginx that looks like it might be interfering with swag. Its contents are:

    I don't think this stack has anything to do with my Nextcloud or Bitwarden. Can someone confirm? Is this the container nginix_app_1 that Zoki mentions in #24? If so, shouldn't I be able to stop it and then start swag again?

    It does appear that swag is not running. I just tried to start it but got the following error:


    Code
    Error response from daemon: driver failed programming external connectivity on endpoint swag (5562997e3871ec92cd4c1e23a12e49cdef3fce754df4ab74286a5a92a259698d): Bind for 0.0.0.0:81 failed: port is already allocated
    Error: failed to start containers: swag
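"Port is already allocated" means another container (possibly the nginx stack found above) already publishes host port 81. Stopping that container should let swag start; alternatively, swag can be moved to a different host port. A hedged sketch of the ports section, assuming swag's original mapping was 81:80 and 444:443 as in the common OMV guides:

```yaml
ports:
  - 444:443
  - 82:80   # host port moved off 81, which another container already publishes
```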