Nextcloud with Letsencrypt using OMV and docker-compose - Q&A

    • Official post

    https://en.m.wikipedia.org/wiki/Hairpinning


    Seems to be the problem you are facing

    Interesting. Never heard of that.


    I have zero issues logging in to any of my services using my external IP while on my network. In fact, I have them all bookmarked with my external domain name rather than the local IP, and they all work fine.

    • Official post

    Seems to be the problem you are facing

    I know. I've been fighting with that for a long time. What I didn't know is that there are routers that fix it.

    I think my path is going to be to configure a DNS server on the router (my router supports it). I think it will be the easiest thing for me. I am not buying a new router. The one I have is only a few months old.

    • Official post

    Yes, it's resolved now. I'm posting this in case anyone has the same "NAT loopback" problem as me.


    Step 1. Make the external address used to access Nextcloud, your-domain.com, resolve to the IP address of the OMV server. Two options:


    Step 1, Option 1: Configure a DNS server. In my case I configured it on my router. There are other ways to do it if your router doesn't allow it; I suppose it could also be done with a Docker container on OMV, but I have not investigated that. Once configured, it will serve all clients within the LAN.
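    I haven't tested it, but if the router cannot do this, a small DNS forwarder such as dnsmasq (installed directly or in a container) could provide the same override. A minimal sketch of the relevant configuration, assuming 192.168.1.10 is the IP of the OMV server (all values are placeholders, adjust to your LAN):

    Code
    # dnsmasq.conf - hypothetical example, values are placeholders
    # answer queries for the domain with the LAN IP of the OMV server
    address=/your-domain.com/192.168.1.10
    # forward everything else to a public resolver
    server=1.1.1.1

    The LAN clients then need to use this DNS server, e.g. by handing it out via DHCP.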


    Step 1, Option 2: Edit the hosts file of the client PC you use to access Nextcloud. You will have to do this on each client.

    Example for Windows 10: type notepad into Cortana's search bar, right-click the application and select the option to run it as administrator. Open the hosts file at C:\Windows\System32\drivers\etc\hosts (change the file-type filter from .txt files to all files). Add a line like the following at the end:


    Code
    your-OMV-server-IP your-domain.com www.your-domain.com


    Save and exit. From now on, every time you type your domain's address into a browser, it will be directed to the IP of your server.
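    To confirm that the override (from either option) is actually being used, you can check how the name resolves from a client, for example:

    Code
    nslookup your-domain.com

    It should now return the LAN IP of the OMV server instead of your public WAN address.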


    Step 2. Follow the macom guide, with the following changes:


    1. DO NOT uncomment the port in the nextcloud section. All access to Nextcloud will go through the proxy.


    2. Add one more port to the swag container. Where it says:


    Code
    ports:
      - 444:443
      - 81:80
    restart: unless-stopped


    We will add a line with port 443:


    Code
    ports:
      - 443:443
      - 444:443
      - 81:80
    restart: unless-stopped


    This way we can type the address "your-domain.com" in the browser, and Nextcloud receives the request through the proxy from the same domain we use on the WAN.

  • As I'm on an armhf platform (see signature), I had to migrate from linuxserver/mariadb to webhippie/mariadb according to the guide by Morlan. With my weekly "docker-compose pull" I got a (brand-)new version of webhippie/mariadb for the first time. But after "docker-compose up -d", when trying to connect to my Nextcloud, I get

    Code
    Internal Server Error
    
    The server encountered an internal error and was unable to complete your request.
    Please contact the server administrator if this error reappears multiple times, please include the technical details below in your report.
    More details can be found in the server log.

    It looks as if the mariadb-container does not start.


    Any clue?


    Not without further information. What does the mariadb-container log say?
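    If it helps, the log can usually be pulled with docker-compose from the directory containing the compose file (assuming the service is called mariadb there), for example:

    Code
    docker-compose logs --tail=100 mariadb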

    ...

    That's the mariadb-log:

    OMV 7.0.5 Sandworm | omvextrasorg 7.0 | compose 7.1 | Linux 6.6.16-current-mvebu | Armbian 24.2.1 Bookworm | Hardware Helios4 | RAID5 with 4 * 8TB WD Red

  • Googling brings up this: https://til.codes/how-to-fix-e…ed-storage-engine-innodb/


    Be sure to take backups before changing anything

    Last night I found a similar link, but I had no success (I should have mentioned this detail).

    The log says now:

    The nextcloud login screen appears now but after the logon I see the "Internal Server Error" again.

    OMV 7.0.5 Sandworm | omvextrasorg 7.0 | compose 7.1 | Linux 6.6.16-current-mvebu | Armbian 24.2.1 Bookworm | Hardware Helios4 | RAID5 with 4 * 8TB WD Red

  • Last night I found a similar link, but I had no success (I should have mentioned this detail).

    The log says now:

    Code
    2021-09-28  8:43:44 0 [ERROR] InnoDB: Your database may be corrupt or you may have copied the InnoDB tablespace but not the InnoDB log files. Please refer to https://mariadb.com/kb/en/library/innodb-recovery-modes/ for information about forcing recovery.

    The nextcloud login screen appears now but after the logon I see the "Internal Server Error" again.


    Code
    2021-09-28 7:20:57 0 [Note] mysqld (server 10.6.4-MariaDB-debug) starting as process 7 ...
    ...
    2021-09-28 7:20:57 0 [ERROR] InnoDB: Upgrade after a crash is not supported. The redo log was created with MariaDB 10.4.8.
    2021-09-28 7:20:57 0 [ERROR] InnoDB: Plugin initialization aborted at srv0start.cc[1447] with error Generic error
    2021-09-28 7:20:57 0 [Note] InnoDB: Starting shutdown...

    It seems that the recent image updated the MariaDB version from 10.4 to 10.6 and that, somehow, your server had a crash prior to the recreation of the container.


    Looking at the log, it points to the InnoDB recovery page: InnoDB Recovery Modes - MariaDB Knowledge Base

    Maybe you can sort it via those solutions.
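    Just as a rough illustration of what the forced-recovery route usually involves (where exactly the option goes depends on the image, so treat this as a sketch): set a low recovery level in the server configuration the container reads, start the server, dump the data, then remove the option again.

    Code
    # extra .cnf fragment - example only, start with the lowest level that lets the server come up
    [mysqld]
    innodb_force_recovery = 1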


    OR

    Have you got any recent backup of the DB?

    Can you revert to the previous webhippie version that was running before?

    All that matters is to have it running as it was.


    While the DB is fully functional, you can then change to the linuxserver version in a few steps (reversing what we did following Morlan's tips on page 26 or 27).

    I did it a few months ago when linuxserver made the "alpine" version and haven't had any issues since.

  • I'm back in business after going back to "image: webhippie/mariadb:latest-arm32v6" and restoring the last-but-one backup.

    I had to go this way because I didn't succeed with the latest mariadb image from webhippie, which is release 10.6. I found several links describing an incompatibility between Nextcloud and MariaDB 10.6, e.g. https://github.com/nextcloud/docker/issues/1492


    I'll stay with mariadb:latest-arm32v6 for a while until the incompatibility is resolved, instead of trying the workaround mentioned in the link above.
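    For anyone wanting to do the same: pinning the tag in the compose file is all that's needed. The relevant fragment of the mariadb service would look roughly like this (service name and the rest of the definition as in the guide; only the image line matters here):

    Code
    mariadb:
      image: webhippie/mariadb:latest-arm32v6
      restart: unless-stopped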


    EDIT:

    linuxserver/mariadb is at 10.5 → No incompatibility

    OMV 7.0.5 Sandworm | omvextrasorg 7.0 | compose 7.1 | Linux 6.6.16-current-mvebu | Armbian 24.2.1 Bookworm | Hardware Helios4 | RAID5 with 4 * 8TB WD Red


  • Hello.

    I am trying to set up Nextcloud on OMV, which is running on an RPi4.

    I have another RPI4 with home assistant running, which already has duckdns and lets encrypt.

    Can I bypass all the swag/Letsencrypt setup from this guide ([How-To] Nextcloud with swag (Letsencrypt) using OMV and docker-compose)?


    I tried deploying the stack without the swag part, but with the ports uncommented in the nextcloud section (just to test locally), but it does not load. I'm not sure how this file should be modified in my situation, if that is possible at all:

    ....appdata/nextcloud/config/www/nextcloud/config/config.php


    EDIT: There was an issue with the port mapping config, as this whole Docker thing is new to me. So now it's up and running with this port config:

    - 11443:443 #Map port 443 in the container to port 11443 on the Docker host.

    This question is still open: if I forwarded port 11443 to my OMV RPi4 in my router, would it be some kind of security issue, since all the Let's Encrypt handling is done on the Home Assistant RPi4?


    EDIT2: I tried port forwarding and got "Your connection is not private" from the browser, and even after ignoring that I got "Access through untrusted domain". So it's probably more complicated than I expected. No more questions from my side.


    EDIT3: Last edit.

    Basically I kept Home Assistant's Let's Encrypt and added swag on OMV.

    My Home assistant address is unchanged at https://XXXXX.duckdns.org/ 

    Nextcloud, with the current config, is at https://XXXXX.duckdns.org:11443/nextcloud/

    Mostly because I wanted to preserve that XXXXX part :)


    All config:




    The /srv/dev-disk-by-label-disk1/appdata/nextcloud/config/www/nextcloud/config/config.php part was only changed by adding an additional overwritehost property line, so the whole addition looked like this:
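    The exact lines were attached as a picture and are not reproduced here; based on the description, the addition to config.php would be something along these lines (overwritehost is a standard Nextcloud config parameter, the value shown is only my assumption matching the address above):

    Code
    // hypothetical example - the original attachment is not reproduced here
    'overwritehost' => 'XXXXX.duckdns.org:11443',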



    • Official post

    I have another RPI4 with home assistant running, which already has duckdns and lets encrypt.

    Can I bypass all the swag/Letsencrypt setup from this guide ([How-To] Nextcloud with swag (Letsencrypt) using OMV and docker-compose)?

    No, all the containers need to be on the same machine and on the same bridge network.

  • Solved, see below


    Hi,

    I found this in Settings -> System after a user told me he was unable to upload a larger video file to a shared folder.

    I found this in the NC docs which seemed relevant for my problem: https://docs.nextcloud.com/ser…upload_configuration.html

    But I was unable to fix this. I updated

    /srv/dev-disk-by-label-Docker/appdata/nextcloud/config/www/nextcloud/.user.ini and

    /srv/dev-disk-by-label-Docker/appdata/nextcloud/config/www/nextcloud/.htaccess

    with

    Code
    php_value upload_max_filesize 16G
    php_value post_max_size 16G

    But the NC settings still show what's in the picture above. Any ideas where I should put the lines, or can it even be included in the docker-compose somehow?

    My config is from the macom How-To. Your help is appreciated :)


    edit:

    also changed client_max_body_size in config/nginx/site-confs/default from 512MB to 100G but the settings still show 1 GB. Restarted the containers of course.


    edit2: I solved it by putting an equal sign in the .user.ini file

    Code
    upload_max_filesize=16G
    post_max_size=16G
    memory_limit=1G
  • I updated

    /srv/dev-disk-by-label-Docker/appdata/nextcloud/config/www/nextcloud/.user.ini and

    /srv/dev-disk-by-label-Docker/appdata/nextcloud/config/www/nextcloud/.htaccess


    edit2: I solved it by putting an equal sign in the .user.ini file

    I'm almost certain that those changes won't survive a reboot/restart (I might be wrong, though).


    The file to edit is:

    Code
    # /nextcloud/config/php/php-local.ini
    
    ; Edit this file to override php.ini directives and restart the container
    
    date.timezone = Europe/Lisbon
    upload_max_filesize = 16G
    ; above this value, the occ command fails on armhf (for x64, you can go higher)
    memory_limit = 1536M
    • Official post

    The file to edit is:

    After consulting the PHP manual (see https://www.php.net/manual/en/…pload.common-pitfalls.php), maybe you should also configure the following values:


    max_input_time

    post_max_size


    It may not be necessary if Nextcloud already configures them somewhere else; I don't know.


    Also, I'm not sure what values to set for these variables. Maybe these?


    max_input_time = 3600

    post_max_size = 16G



    On the other hand, according to one of the comments on the first link in this post (it's old, 5 years), the value of max_file_size cannot exceed PHP_INT_MAX.


    That brings me to https://www.php.net/manual/es/reserved.constants.php, which states that the largest supported integer is 2147483647 on 32-bit systems. Maybe this affects 32-bit systems on the Raspberry. With 64 bits it is not a problem, since the maximum value is much larger. If I have understood correctly, does that mean files larger than 2 GB cannot be uploaded on 32-bit systems?


    I don't know if this is saved using the GB notation instead of bytes. If the value "16G" is kept as-is, the number is 16, not 17,179,869,184.



    Another related question. In the swag configuration file /nginx/site-confs/default, at least in my case, I have the following line at the end of the file:


    Code
    proxy_cache_path cache/ keys_zone=auth_cache:10m;


    This states (if I don't misunderstand) that data cached inactive for more than 10 minutes will be removed. Therefore a large file upload lasting more than 10 minutes would delete the "old" data and the upload would fail. Perhaps this value should also be changed to a higher value?

    • Official post

    also changed client_max_body_size in config/nginx/site-confs/default from 512MB to 100G but the settings still show 1 GB. Restarted the containers of course.

    setting the value to 0 disables this control

    see http://nginx.org/en/docs/http/…html#client_max_body_size
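    For illustration only: the directive goes into the http, server or location context of the nginx config, and disabling the size check would simply be:

    Code
    client_max_body_size 0;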

  • If I have understood correctly, does that mean files larger than 2 GB cannot be uploaded on 32-bit systems?

    The example I gave is for 32Bit (armhf on a Pi).

    In my case I set a limit of 16 GB (way overkill and more than enough for the type of files I upload to my Nextcloud).


    As for the subject being discussed: there's no need to edit files on SWAG; it just redirects access to Nextcloud (or other containers) and (if I see it correctly) doesn't impose any limits on RAM/TTL/size.


    The "default" and "nginx.conf" files from SWAG already have a client_max_body_size 0; to prevent any issues with file sizes on the redirects.

    Hence, you only edit file(s) on the containers.

    • Official post

    As for the subject being discussed: there's no need to edit files on SWAG; it just redirects access to Nextcloud (or other containers) and (if I see it correctly) doesn't impose any limits on RAM/TTL/size.


    The "default" and "nginx.conf" files from SWAG already have a client_max_body_size 0; to prevent any issues with file sizes on the redirects.

    Hence, you only edit file(s) on the containers.

    Thanks. I did not understand this. :thumbup:



    You have not answered my question about these values. Wouldn't it be necessary to add them to php-local.ini as well?


    max_input_time = 3600

    post_max_size = 16G

  • You have not answered my question about these values. Wouldn't it be necessary to add them to php-local.ini as well?


    max_input_time = 3600

    post_max_size = 16G

    The post_max_size=16G is needed/can be used (as you can see in my example, ;) )

    Edit the value that suits your need.


    [EDIT] Sorry, I misread the directive: "post_max_size=xx" (I was reading it as "upload_max_filesize") limits the entire request being made to the server, while "upload_max_filesize=xxx" only limits any single file/upload.


    Quote

    upload_max_filesize is the limit of any single file. post_max_size is the limit of the entire body of the request, which could include multiple files.

    Given post_max_size = 20M and upload_max_filesize = 6M you could upload up to 3 files of 6M each. If instead post_max_size = 6M and upload_max_filesize = 20M then you could only upload one 6M file before hitting post_max_size. It doesn't help to have upload_max_filesize > post_max_size.

    [/EDIT]



    max_input_time=3600 can be used, but it is more than needed, since this value only covers the time PHP spends parsing the input data, between PHP being invoked and execution beginning:

    max_input_time

    Quote

    max_input_time int

    This sets the maximum time in seconds a script is allowed to parse input data, like POST and GET. Timing begins at the moment PHP is invoked at the server and ends when execution begins. The default setting is -1, which means that max_execution_time is used instead. Set to 0 to allow unlimited time.

    The default being used is -1, which means PHP instead works with max_execution_time=3600 (as seen on Andifront's Nextcloud; mine is the same).


    If you need/want to, you can edit it as you see fit, but the defaults are enough for a normal use case.
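    Putting the values from this exchange together, a php-local.ini override could look like this (example values only; adjust them to your hardware and to the file sizes you actually expect):

    Code
    ; /nextcloud/config/php/php-local.ini - example values only
    upload_max_filesize = 16G
    post_max_size = 16G
    max_input_time = 3600
    memory_limit = 1536M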

    • Official post

    If you need/want to, you can edit it as you see fit, but the defaults are enough for a normal use case.

    That's what I wanted to know, thank you! :thumbup:

  • I'm almost certain that those changes won't survive a reboot/restart (I might be wrong, though).

    So far the settings survive (the containers are stopped every night during backup).

    Putting the settings only in /nextcloud/config/php/php-local.ini has no effect.

    In nextcloud/config/www/nextcloud/.user.ini they work (all except max_execution_time=7200; maybe 3600 is the maximum accepted value?).

  • Putting the settings only in /nextcloud/config/php/php-local.ini has no effect

    Then you might not be doing it the proper way.

    Linuxserver images are designed so that users don't have to change too many files; hence (on the Nextcloud container) there are these two files to be edited by users who need values different from the defaults (no need to mess with anything else in the image/container):

    Code
    # /nextcloud/config/php/php-local.ini
    ; Edit this file to override php.ini directives and restart the container

    All PHP defaults can be overridden here.


    Code
    # /nextcloud/config/php/www2.conf
    ; Edit this file to override www.conf and php-fpm.conf directives and restart the container

    This one is used to fine-tune settings such as "pm.max_children", for example.
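    As a hypothetical illustration (the directives are standard PHP-FPM pool settings, the values are made up, and how the file is merged depends on the image), such an override could look like:

    Code
    ; /nextcloud/config/php/www2.conf - example values only, tune pm.max_children to the RAM available to PHP-FPM
    pm = dynamic
    pm.max_children = 12
    pm.start_servers = 3
    pm.min_spare_servers = 2
    pm.max_spare_servers = 4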



    And, of course, there is "/nextcloud/config/www/nextcloud/config/config.php", which is edited as per the macom guide instructions, but that has nothing to do with the container itself.


    If, by any chance, your container is corrupted/damaged/lost, it will be very easy to redo all the previous overrides just by editing those files with what you need.

    Most defaults already cover most of the user's needs.



    Nonetheless, this is just one way of doing things (which I assume is the way linuxserver intended it).

    If what you did is working, then leave it at that, ;)
