Posts by riff-raff

    I set up my Nextcloud, Vaultwarden, etc. with Docker and SWAG as reverse proxy.


    When adding a shared calendar (WebDAV) in a Thunderbird client, SWAG seems to have an issue and the forwarded containers are not reachable from the outside world for minutes. Within my local net the containers are still reachable under host IP and port. The situation recovers after a couple of minutes. Machine load during this situation is minor, so I suspect a limit on parallel requests or something like that. Sync with the desktop app also works like a charm.


    Can you give me a hint which switch to flip to boost the performance? The machine is pretty capable, and this worked like a charm with nginx-proxy on my old rig with a tenth of the power.


    Proxy-Conf new:

    server {
        listen 443 ssl;
        listen [::]:443 ssl;

        server_name cloud.*;

        include /config/nginx/ssl.conf;

        client_max_body_size 0;

        location / {
            include /config/nginx/proxy.conf;
            include /config/nginx/resolver.conf;
            set $upstream_app Nextcloud;
            set $upstream_port 443;
            set $upstream_proto https;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;

            # Hide proxy response headers from Nextcloud that conflict with ssl.conf
            # Uncomment the Optional additional headers in SWAG's ssl.conf to pass Nextcloud's security scan
            proxy_hide_header Referrer-Policy;
            proxy_hide_header X-Content-Type-Options;
            proxy_hide_header X-Frame-Options;
            proxy_hide_header X-XSS-Protection;

            # Disable proxy buffering
            proxy_buffering off;
        }
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;

        server_name vaultwarden.*;

        include /config/nginx/ssl.conf;

        client_max_body_size 0;

        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app Vaultwarden;
            set $upstream_port 80;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;

            proxy_max_temp_file_size 128m;
        }
    }


    Proxy-Conf old:


    server {
        listen 443 ssl;
        listen [::]:443 ssl;

        server_name cloud.*;

        include /config/nginx/ssl.conf;

        client_max_body_size 0;

        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app nextcloud;
            set $upstream_port 443;
            set $upstream_proto https;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;

            proxy_max_temp_file_size 2048m;
        }
    }

    server {
        listen 443 ssl;
        listen [::]:443 ssl;

        server_name bitwarden.*;

        include /config/nginx/ssl.conf;

        client_max_body_size 0;

        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app vaultwarden;
            set $upstream_port 80;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;

            proxy_max_temp_file_size 128m;
        }
    }
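    In case it matters: my only idea so far is raising the connection limits in SWAG's /config/nginx/nginx.conf. A sketch, assuming the default worker limits really are the bottleneck (the value is an example, not a recommendation):

        events {
            # stock configs often ship 1024 here; raise it if parallel requests are the limit
            worker_connections 2048;
        }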



    Kind regards,

    I set up Duplicati with Compose.


    What is the best way to grant the container access to the root file system so it can back up my shares to a remote WebDAV cloud storage?
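    My first idea is to bind-mount the host root read-only as the backup source; a sketch (image tag and paths are assumptions, adapt to your layout):

        services:
          duplicati:
            image: lscr.io/linuxserver/duplicati
            volumes:
              - /srv/appdata/duplicati:/config
              - /:/source:ro   # whole host filesystem, read-only, as backup source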


    I remember from setting it up on my old build that I had to use the option "http-readwrite-timeout" in the job configuration, but I can't remember a suitable value for it.


    Kind regards,


    Ralf

    Isn't this the same as I posted initially, only not so well formatted? (I'll have to check proper quoting.)


    Sometimes I think I'm getting old ... ChatGPT was obvious, but I missed asking "that guy". Then again, human help is something to be more grateful for.


    Edit:

    ChatGPT did the formatting for me now :D

    Thank you for your support, that post clears up pretty much all of my uncertainties.


    Within the containers I typically only modify the configs, which are in most cases mapped to an external volume anyway, so I can access them from the host.


    I'll try to set up the network as recommended and give some feedback afterwards. Can I define a static IP for the container using:


    networks:
      docker-net:
        ipv4_address: 192.168.100.2
        external: true
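    Or do the two parts belong at different levels? A sketch of what I'd expect from the Compose docs (service name assumed; this only works if the external network was created with a subnet):

        services:
          nextcloud:
            networks:
              docker-net:
                ipv4_address: 192.168.100.2

        networks:
          docker-net:
            external: true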


    Kind regards

    I recently moved from my old OMV 5.6.26-1 NAS, running for more than 10 years now, to a new, more powerful machine. Since I can run both systems in parallel, moving data and configurations is quite easy. I ran Docker+Portainer on the old machine, and I'm struggling a little with certain settings through the Compose plugin:


    1. I set up a custom bridged network within Compose so that the containers can communicate with each other (nextcloud + swag, for example). How do I set up the usage of this network within the compose files? My bridge is 'docker-net' with IP range 192.168.100.0


    I tried to use this

    networks:
      docker-net:
        ipv4_address: 192.168.100.2

    networks:
      docker-net:
        external: true

    within the compose files. Any suggestions how to do it right?
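    For context, I created the bridge up front on the host, roughly like this (the /24 mask is my assumption from the range above):

        docker network create --driver bridge --subnet 192.168.100.0/24 docker-net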


    2. With Portainer on the old machine, I used the CLI option to work within the containers, for example to edit their configurations. How do I do this with Compose? My idea was to edit the configuration files from the host within the volumes. Is there a more convenient way to do it?
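    On the old machine the Portainer console was essentially a shell inside the container; a sketch of the Compose-era equivalent (service name assumed):

        docker compose exec nextcloud /bin/bash   # or: docker exec -it <container> bash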


    I set up Docker with user docker:docker, all running on the SSD-pool.


    My machine:


    HPE ProLiant MicroServer Gen10 Plus - Xeon E-2224 - 64 GB ECC - QM2-2P10G1TB - 2x WD Red SN700 2TB (ZFS mirror) - 4x Toshiba MG10ACA 20 TB (ZFS RAID-Z2)


    Thank you in advance for your support and suggestions!

    For more security, I suggest storing the keyfile somewhere remote and loading it at boot. I use two NAS in two different locations; the keyfile for each one's encryption lies on the other NAS.


    With a TPM or a local keyfile, the device and/or the drive can be decrypted as long as the TPM is present. With a remote keyfile, the NAS contains only garbage unless the VPN is up and the keyfile is reachable.
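    A minimal sketch of the boot-time unlock, assuming LUKS and SSH access between the two NAS (host name, device, and paths are placeholders):

        #!/bin/sh
        # Fetch the keyfile from the remote NAS, unlock the drive, then wipe the local copy.
        scp other-nas:/keys/data.key /run/data.key \
          && cryptsetup open /dev/sdb1 data_crypt --key-file /run/data.key \
          && shred -u /run/data.key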

    Consider using an external database until this issue is fixed. Exporting and importing the database should be a piece of cake.


    The DB configuration is done in zm.default within the conf folder. Create a backup of this configuration in advance; the files will be created after the first unsuccessful run.
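    For the external DB, the relevant lines would look roughly like this (variable names as in stock zm.conf; values are placeholders, verify against the file in the container):

        ZM_DB_HOST=192.168.100.10
        ZM_DB_NAME=zm
        ZM_DB_USER=zmuser
        ZM_DB_PASS=changeme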


    Edit: A new version was published today, still the same issue.


    According to this post, which describes a similar issue, a fresh setup should do it. I'll try to use a custom user script to solve the DNS issue at the first start of this container.

    /mnt/Zoneminder is owned by docker:docker


    I checked with group 100 (users): no issue regarding the group any more, but still unreachable PPAs.


    Checking them manually shows they are available, so there might be a name resolution issue. I tried setting up a different bridged network as well and specified my router and Google as DNS, but still the same thing.
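    For reference, that DNS override looked roughly like this in the compose file (service name and addresses are examples):

        services:
          zoneminder:
            dns:
              - 192.168.1.1   # router
              - 8.8.8.8       # Google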

    Using the dlandon/zoneminder docker, I run into a DNS issue. Starting the container gives me this log:


    The container uses the bridged network without any modification. I stuck to the standard configuration recommended by the author.



    Any suggestions how to resolve the DNS errors?
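    This is what I'd check first from inside the container (container name assumed; nslookup may not be present in the image):

        docker exec -it zoneminder cat /etc/resolv.conf
        docker exec -it zoneminder nslookup ppa.launchpad.net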

    My bitwarden works like a charm. I enabled the admin page and disabled registration, but exposing it gives me some worries due to possible brute-force attempts. Having fail2ban would be a nice security cushion; might be a good thing to set up on a rainy Sunday.
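    A sketch of what that could look like, based on vaultwarden's documented failed-login log line (log path and jail values are assumptions):

        # /etc/fail2ban/filter.d/vaultwarden.local
        [Definition]
        failregex = ^.*Username or password is incorrect\. Try again\. IP: <HOST>\..*$
        ignoreregex =

        # /etc/fail2ban/jail.d/vaultwarden.local
        [vaultwarden]
        enabled = true
        filter = vaultwarden
        logpath = /path/to/vaultwarden.log
        maxretry = 5
        bantime = 3600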

    Ah, awesome! Thanks. Makes sense.


    Morlan: What does your reverse proxy configuration with letsencrypt look like? Did you stick to the sample provided with the letsencrypt container?



    EDIT: WORKS!


    Does the BitwardenRS server work with the paid features, like multiple users? As far as I could figure out, even with self-hosting, a Bitwarden account is still needed, and with more than one user a small fee is due.

    A little typo in your command:


    docker exec nextcloud sudo -u abc php /config/www/nextcloud/occ maintenance:mode --on

    docker exec nextcloud sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off


    works like a charm.


    Next to do: dump the database for backup purposes.


    Thank you Morlan


    Edit:

    docker exec nextclouddb /usr/bin/mysqldump -u nextcloud --password=xxx nextcloud > /srv/dev-disk-by-label-xxx/backup/nextcloud_backup.sql

    Seems to work, but the backup database seems a little too small somehow. (Previous backups of NC from the natively installed MariaDB were >120 MB; this one is only 40 MB, and there was not much activity on this cloud lately.)
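    Putting the pieces together, the whole backup run would look roughly like this (credentials and paths as above; I added --single-transaction for a consistent InnoDB dump):

        #!/bin/sh
        # Enable maintenance mode, dump the DB, then switch back to normal operation.
        docker exec nextcloud sudo -u abc php /config/www/nextcloud/occ maintenance:mode --on
        docker exec nextclouddb /usr/bin/mysqldump --single-transaction -u nextcloud --password=xxx nextcloud \
            > /srv/dev-disk-by-label-xxx/backup/nextcloud_backup.sql
        docker exec nextcloud sudo -u abc php /config/www/nextcloud/occ maintenance:mode --off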