Nextcloud with Letsencrypt using OMV and docker-compose - Q&A

  • Maybe start again with a fresh install?


    Just what I was thinking. Thanks.

    Rsync makes true backup and restoration stupid easy, and it's built right in to OMV. Use this command in a Scheduled Job: rsync -av --delete /srv/dev-disk-by-label-NAMEofSOURCEdisk/ /srv/dev-disk-by-label-NAMEofDESTINATIONdisk/
    OMV Version: Ver. 5 (current) - Hardware: NanoPi M4, running Nextcloud, Plex, & Heimdall - Acer Aspire T180, running backup - Odroid XU4, running Pi-Hole (DietPi) - Testing/Playing: Odroid HC2, Odroid XU4, Raspberry Pi 3B+, Odroid XU4, and HP dx2400.

So I freshly installed OMV 5.0.10-1 on my NanoPi M4, and just to see, I ran docker-compose --version and was met with this response: Command 'docker-compose' not found, but can be installed with: apt install docker-compose


So I installed it and checked the version again, getting this response: docker-compose version 1.21.0, build unknown


    Next I ran docker-compose up -d and received the following, same as last time:
Creating network "nextcloud_default" with the default driver
ERROR: Failed to program FILTER chain: iptables failed: iptables --wait -I FORWARD -o br-860a55f6922b -j DOCKER: iptables v1.8.2 (nf_tables): RULE_INSERT failed (Invalid argument): rule in chain FORWARD (exit status 4)


I believe I have gone through the [How To] without error and I think I have a clean install of omv5. Is there something I am missing, like special permissions (or missing permissions) for the user docker1, or should I be logged into the command line as something other than root? I know it is probably something stupid that I am overlooking.
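(For what it's worth, this particular error is commonly seen on Debian Buster, where iptables defaults to the nf_tables backend that some Docker versions cannot program. A possible fix, offered here as an untested sketch for this board, is switching to the legacy backend as root:)

```shell
# Debian Buster defaults iptables to the nf_tables backend; older Docker
# releases can fail to insert their FORWARD rules there. Switch to the
# legacy backend (run as root):
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
# restart Docker (or reboot) so it re-creates its chains
systemctl restart docker
```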


I tried to find something using Google, but didn't find anything helpful. It might be related to the Debian version on the NanoPi. To check, you could test whether your workflow works in a virtual machine.


Also, you can try to create the network from the CLI and then change the docker-compose.yml so that the containers join this network.
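(A sketch of that approach; the network name nextcloud_net is just an example:)

```yaml
# first create the network once on the host:
#   docker network create nextcloud_net
# then tell compose to use it instead of creating its own
# (fragment of docker-compose.yml, version "2" syntax):
networks:
  default:
    external:
      name: nextcloud_net
```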


Or you could ask in a Docker forum or the linuxserver.io forum.

  • Hi there,


    thanks a lot for this and all the other guides, I am still learning but feel like I am getting there ;)


Since I made some mistakes earlier with sharedfolders, configuring wrong access rights, and not using direct paths to '/srv/dev...-disk1/appdata/..', I just want to confirm some of the steps in detail (using OMV5 Beta):


    1. I create user docker1 in the GUI with no special groups, just 'users'?
    2. I do not enable 'User Home Directories' from User/Settings in the GUI?
    3a. I create the /home/docker1 folder in CLI with user root and continue with root? or
    3b. I ssh with docker1 and do the whole CLI part with 'him'?
    4. the directory /home/docker1 is only to store .yml-files?



    Again, thanks for your effort!
    kriber

  • 1. I create user docker1 in the GUI with no special groups, just 'users'?

Yes (you might add him to the ssh and sudo groups; depends on 3a/3b)

    2. I do not enable 'User Home Directories' from User/Settings in the GUI?

    Yes, not needed

    3a. I create the /home/docker1 folder in CLI with user root and continue with root? or
    3b. I ssh with docker1 and do the whole CLI part with 'him'?

I use root, others recommend using another user. You have to add the user docker1 to the ssh and the sudo group to be able to connect via ssh and run commands that need elevated privileges. But you can also use any other user that is in the sudo and ssh group.
    sudo docker-compose up -d
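(For reference, adding the user to those groups from the CLI might look like this; run as root, with docker1 as the example user from the guide:)

```shell
# allow docker1 to log in over SSH and to run commands with sudo
usermod -aG ssh docker1
usermod -aG sudo docker1
```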

    4. the directory /home/docker1 is only to store .yml-files?

    Yes.

  • thank you @macom

I use root, others recommend using another user. You have to add the user docker1 to the ssh and the sudo group to be able to connect via ssh and run commands that need elevated privileges. But you can also use any other user that is in the sudo and ssh group.
    sudo docker-compose up -d

    Is the following true?
    I can use root for the whole setup in your guide and after the setup, user docker1 is running all of the containers (since we enter his PUID)?


The nice part of that would be that docker1 might not need sudo rights, since (some of) these containers are reachable from the internet. I also do like to use root on my system (very convenient), but I recall that limiting user rights on 'exposed' machines is a good thing to do.
    Of course this only works if docker1 does not need privileges for running a container. I will go ahead and try that :)
    thanks.

  • I can use root for the whole setup in your guide and after the setup, user docker1 is running all of the containers (since we enter his PUID)?

    Yes


The nice part of that would be that docker1 might not need sudo rights, since (some of) these containers are reachable from the internet. I also do like to use root on my system (very convenient), but I recall that limiting user rights on 'exposed' machines is a good thing to do.

    You can use any other user. No need to use docker1 for that.

  • You can use any other user. No need to use docker1 for that.

    Sorry, for sticking to that question:
    How I understand you: docker1 is kind of a placeholder for 'some user' I would maybe use 'kriber'.


    The thing I would like to understand is (only with regard to PUID/PGID, not the setup):
is it more secure to use a user with no sudo/ssh privileges, or does it not matter at all, or is it just best practice?
    Or asked differently, I should probably not use PUID/PGID of root?

  • Or asked differently, I should probably not use PUID/PGID of root?

    Yes, you should not run docker as root.


    You should run docker as a user just for docker (e.g. docker1)


    You can use any (other) user to run the docker-compose file that is in the sudo group
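(As a sketch: the linuxserver.io images take the user through environment variables in the compose file, and the numeric IDs can be looked up with `id docker1`. The values 1000/100 below are just typical examples, not taken from the guide:)

```yaml
# fragment of a linuxserver.io-style container definition
environment:
  - PUID=1000   # output of: id -u docker1
  - PGID=100    # output of: id -g docker1
```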

    • I noticed that the volumes sections in docker compose use absolute paths, as opposed to the old docker install's /sharedfolders/AppData...etc. Is that correct?
    • I noticed that the yml file for nextcloud begins with version: "2" whereas the plex yml in a subsequent [HOW TO] begins with three hyphens with line two being version: "2". What do the hyphens do?
    • The error I posted above mentions iptables several times. Does this have to do with the MariaDB database? I know with the old TDL videos you have to go to command line and setup the database user and password and there is nothing like that in the docker-compose version.
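(On the second bullet: the three hyphens are YAML's optional document-start marker, so both layouts mean the same thing to docker-compose. A sketch:)

```yaml
---            # optional YAML document-start marker;
version: "2"   # docker-compose accepts the file with or without it
```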


    • The error I posted above mentions iptables several times. Does this have to do with the MariaDB database? I know with the old TDL videos you have to go to command line and setup the database user and password and there is nothing like that in the docker-compose version.

I don't know exactly what the error means, but I'm pretty sure it has something to do with configuring the docker network and not with MariaDB.

  • [..] setup the database user and password and there is nothing like that in the docker-compose version.

On the info page on Docker Hub it says, for 'docker-compose':

    Code
    - MYSQL_DATABASE=USER DB NAME #optional
    - MYSQL_USER=MYSQL USER #optional
    - MYSQL_PASSWORD=DATABASE PASSWORD #optional

    I am not sure, but I expect this to do the same - so you do not need to set up db-name, user and pw via sql in cli.
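(A minimal sketch of how those variables could appear in the mariadb service of the compose file; all names and passwords below are placeholders, not values from the guide:)

```yaml
# mariadb service fragment; values are placeholders
environment:
  - MYSQL_ROOT_PASSWORD=changeme
  - MYSQL_DATABASE=nextcloud   # optional: created on first start
  - MYSQL_USER=ncuser          # optional
  - MYSQL_PASSWORD=changeme2   # optional
```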


    (don't know about the error you mentioned)

  • I am not sure, but I expect this to do the same

    Thanks for the replies @kriber and @Morlan.
    I now see line one in the example given. 2 and 3 are not present. Easy to fix.


    FYI ALL: I just ran docker compose for the next [HOW TO] regarding Plex and it worked like a dream. Renewed my confidence. I'm not an idiot after all; just a half wit. :D I guess I will go back and try Nextcloud again.


  • I know with the old TDL videos you have to go to command line and setup the database user and password and there is nothing like that in the docker-compose version.

    That is because in this guide the database is created by Nextcloud itself when you first set it up (hence using the database root user).



    Code
    - MYSQL_DATABASE=USER DB NAME #optional
    - MYSQL_USER=MYSQL USER #optional
    - MYSQL_PASSWORD=DATABASE PASSWORD #optional

    I am not sure, but I expect this to do the same - so you do not need to set up db-name, user and pw via sql in cli.

You are correct, but as I mentioned above, even these arguments are not needed in this particular case.

    • With regard to creating a user "docker1" and then creating a directory for "docker1" in /home: when I create the user it is automatically given a home directory in my designated shared folder AppData. Can I use the already created "home" folder to house my yml files or does it specifically need to be /home/docker1/nextcloud and not /sharedfolders/AppData/docker1/nextcloud?
    • With regard to volume paths: when using docker gui and then Portainer the paths were always laid out with /sharedfolders/AppData/Nextcloud/config. Is that wrong now or just a matter of personal preference? Is Absolute path the only correct way?
    • In the [HOW TO] it is stated that it is not necessary for the folders to exist, as they will be created when the docker container runs. Does that not cause permission problems if you don't put them in place in advance?


  • With regard to creating a user "docker1" and then creating a directory for "docker1" in /home: when I create the user it is automatically given a home directory in my designated shared folder AppData. Can I use the already created "home" folder to house my yml files or does it specifically need to be /home/docker1/nextcloud and not /sharedfolders/AppData/docker1/nextcloud?

You can use whatever folder you want for the .yml files, so the automatically created folder is fine.


    With regard to volume paths: when using docker gui and then Portainer the paths were always laid out with /sharedfolders/AppData/Nextcloud/config. Is that wrong now or just a matter of personal preference? Is Absolute path the only correct way?

It's not wrong and should work. It's considered best practice to use the absolute path in case the relative paths of the shared folder don't work.


In the [HOW TO] it is stated that it is not necessary for the folders to exist, as they will be created when the docker container runs. Does that not cause permission problems if you don't put them in place in advance?

It actually prevents permission problems, e.g. in case you created the folders as root and the permissions are set so that your docker user can't access them.

It's considered best practice to use the absolute path in case the relative paths of the shared folder don't work.

    But if you use the relative path and your data drive dies and you re-point all of your shares to the rsync'd backup drive your server is back up with nothing more to do. If you use absolute path, how does that scenario work?



It actually prevents permission problems, e.g. in case you created the folders as root and the permissions are set so that your docker user can't access them.

    Should these folders be owned by docker1 with a drwxr-xr-x or is it just necessary that the permissions are set to drwxrwxrwx? Or something else?

    • I find that if I create the folders in omv>Access Rights Management>Shared Folders I get "root" ownership with drwxrwxrwx privileges. That makes sense. These are folders like "AppData", "Media", "Docker", and "Nextcloud".
    • The folders that the "home folder" setting creates gets that user as owner (makes sense) with drwx--S--- as permissions (that I don't understand) and that folder cannot be opened from the desktop share that it resides in.
    • If I create sub-folders from my desktop the owner is set up as the user that I'm logged in as with permissions of drwxrwsrwx. That all makes sense except for the "s" for users.

I hope I am not making this more difficult than it is, and if this is off topic, moderators feel free to move it. Permissions and paths seem to be the two things that keep me awake at night here lately.


  • But if you use the relative path and your data drive dies and you re-point all of your shares to the rsync'd backup drive your server is back up with nothing more to do. If you use absolute path, how does that scenario work?

    You would have to change the paths.
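(If the disks are addressed by label, the change could be a single substitution in the compose file. A minimal demo on a scratch copy; the labels OLDDISK and BACKUPDISK are placeholders:)

```shell
# demo on a scratch file; in practice you would edit your real docker-compose.yml
printf 'volumes:\n  - /srv/dev-disk-by-label-OLDDISK/appdata:/config\n' > /tmp/compose-demo.yml

# rewrite every volume path from the dead disk's label to the backup disk's
sed -i 's|dev-disk-by-label-OLDDISK|dev-disk-by-label-BACKUPDISK|g' /tmp/compose-demo.yml

cat /tmp/compose-demo.yml
```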


    Should these folders be owned by docker1 with a drwxr-xr-x

    I would say yes.
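(As a sketch of that layout, demonstrated on a scratch folder in /tmp; on the real system the path would be the appdata folder, and the chown to docker1 would need root:)

```shell
# 755 gives the owner full access and everyone else read/traverse: drwxr-xr-x
mkdir -p /tmp/appdata-demo
chmod 755 /tmp/appdata-demo
stat -c '%A' /tmp/appdata-demo    # -> drwxr-xr-x

# on the real box, additionally (as root):
#   chown -R docker1:users /srv/dev-disk-by-label-disk1/appdata
```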


    I find that if I create the folders in omv>Access Rights Management>Shared Folders I get "root" ownership with drwxrwxrwx privileges. That makes sense. These are folders like "AppData", "Media", "Docker", and "Nextcloud".

    When you create a shared folder there is a drop-down menu where you set the permissions.

  • Howdy all


Many thanks to @macom for this outstanding & very detailed guide - I look forward to reaching the end of it, but...


    First of all: I'm running on a Raspberry Pi3+ - which is ARM architecture.


    I have followed each step precisely, including making all the necessary changes to docker-compose.yml where instructed - but unfortunately, I receive this error each time I try to execute it (I've pasted the output in full for clarity):


    docker@phewtus:~$ docker-compose up -d
    Unable to find image 'docker/compose:1.24.1' locally
    1.24.1: Pulling from docker/compose
    c87736221ed0: Pull complete
    ba1ee912e9a7: Pull complete
    2df7dacacdeb: Pull complete
    6037f24be055: Pull complete
    Digest: sha256:8616a861a5c769b7fe633625a4d5a4f76ae5a54d1d04874dcef827644c136684
    Status: Downloaded newer image for docker/compose:1.24.1
    standard_init_linux.go:211: exec user process caused "exec format error"
    failed to resize tty, using default size


There is no sign of anything having executed, as not even an empty "nextcloud" folder has been created.


    If I rerun it, it just skips the image pull (of course) and spits out exactly the same error.


I've spent a fair amount of time trying to resolve this (gotta move beyond Beginner, right?), and the most plausible explanation I've found is that Docker can't start the build because the binary in the image is incompatible with the environment in the container, i.e. the repo images are for AMD64 hardware, not the RPi's ARM.
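(That diagnosis can be confirmed by checking the host architecture. If it is ARM, one common workaround, offered here as an assumption rather than something tested on this exact setup, is installing docker-compose through pip instead of the amd64-only docker/compose wrapper image:)

```shell
# print the machine architecture; a Pi 3 typically reports armv7l
# (or aarch64 on a 64-bit OS), while the docker/compose image is built for amd64
uname -m

# possible workaround on ARM: install docker-compose natively via pip
# (commented out; requires python3-pip)
#   sudo apt install -y python3-pip
#   sudo pip3 install docker-compose
```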



    In case anyone with greater knowledge wishes to take a look, I've attached my .yml file (minus details like my email address).


I'd be very grateful for expert advice, because this is driving me crazy and I think I've hit a brick wall. (And yes, switching to AMD64 hardware is an option, but only if all else fails.)


    Many thanks in advance


    neil
