Posts by lapulga

    To mount a local external storage you need to bind mount the folder via the -v flag in your container settings
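    For anyone finding this later, such a bind mount might look roughly like this on the command line (image name and paths are only examples; the same can be done via the volume/bind mount fields in the container settings):

        # illustrative only: expose a host folder inside the Nextcloud container via -v
        docker run -d --name=nextcloud \
          -v /srv/dev-disk-by-label-data1/media:/shared-media \
          linuxserver/nextcloud
        # inside Nextcloud, /shared-media can then be added as a "Local" external storage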

    Ahh, great. Thank you! This is so much easier; that way I don't have to mess around with privileges and permissions that don't do what I expect.


    Looks like everything is working now, thank you all!

    I thought you were trying to do that! It's probably the only "right" way of accessing existing data with Nextcloud.

    Of course, but I didn't know it was possible from within Nextcloud.


    In the meantime, another issue arose. How did you manage to mount your HDDs in Nextcloud? It looks like this is only possible for folders inside /media on the same partition where Nextcloud is installed. I'm constantly getting a 'stat(): stat failed' error, no matter which permissions I set.

    Ok, I found out what was wrong. Because the Nextcloud address I got from MyFritz is not of the form nextcloud.example.com but nextcloud.something.example.com:444, I thought I had to add the port in the config.php of the Nextcloud docker as well. I removed it now from overwrite.cli.url and overwritehost (but kept it in trusted_domains so far, it seems to make no difference), and now I'm able to access my Nextcloud via myNAS.something.myfritz.net directly, without being redirected to port 444. If I call myNAS.something.myfritz.net:444 now, I'm redirected to myNAS.something.myfritz.net, where there's a valid certificate :)
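    For reference, the relevant part of my config.php now looks roughly like this (host names as used above):

        'trusted_domains' =>
          array (
            0 => 'myNAS.something.myfritz.net:444',
          ),
        'overwrite.cli.url' => 'https://myNAS.something.myfritz.net',
        'overwritehost' => 'myNAS.something.myfritz.net',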


    I guess the downside may be that I'm now only able to access a single service (i.e. Nextcloud) on my NAS publicly, because MyFritz gives me essentially the same link for everything on my NAS, distinguished only by the trailing ports, for which the certificates don't work. For now this is not an issue for me though :thumbup:

    Further info: my nextcloud and letsencrypt dockers are currently connected to two networks, bridge and my-network, like in the video by TechnoDadLife. When I remove my-network from nextcloud, myNAS.something.myfritz.net:444 no longer gets redirected automatically to myNAS.something.myfritz.net like before. I'm also getting 502 Bad Gateway when accessing myNAS.something.myfritz.net, but the certificate from Let's Encrypt works there. So it looks like either this certificate doesn't include port 444, or it is somehow overridden by the linuxserver certificate when accessing port 444.
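    For completeness, the extra network was created and the containers attached roughly like this (my-network is just the name I used):

        # create the user-defined network and attach both containers to it
        docker network create my-network
        docker network connect my-network letsencrypt
        docker network connect my-network nextcloud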


    Furthermore, when I try to access https://something.myfritz.net I now get a warning that its Let's Encrypt certificate is only valid for myNAS.something.myfritz.net. On the other hand, when I access my FritzBox settings via MyFritz (something.myfritz.net:port) the certificate is valid; it looks like this is the certificate I configured directly in the FritzBox settings. I find that a little bit weird :/

    Hi,


    I followed the guides here in the forum and by TechnoDadLife to set up Let's Encrypt with Nextcloud; as DDNS provider I'm using MyFritz. In the settings of my FritzBox it's possible to automatically use certificates from Let's Encrypt for my MyFritz address (something.myfritz.net). I've activated it and it does work when I access my FritzBox from outside via https://something.myfritz.net:1234.


    I then created a port forwarding rule for Nextcloud and linked it to MyFritz, which worked as well: I can reach my Nextcloud now via myNAS.something.myfritz.net:444. However, the Let's Encrypt certificate I created before obviously doesn't include this subdomain, so I get a warning message when accessing it; Firefox says the certificate comes from linuxserver.io. I then set up the letsencrypt docker, and according to docker logs -f letsencrypt everything seems to work, no errors reported. I set the domain to something.myfritz.net and the subdomain to myNAS, and Let's Encrypt does seem to create a certificate for myNAS.something.myfritz.net, but when accessing my Nextcloud nothing has changed: the certificate is still not trusted and comes from linuxserver.io. Does anyone have a clue how to get this to work?
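    For illustration, a letsencrypt container with those settings would be started roughly like this (simplified sketch; validation method, ports and the remaining options are just example values, only URL and SUBDOMAINS correspond to the settings described above):

        # example values only
        docker run -d --name=letsencrypt \
          --cap-add=NET_ADMIN \
          -e URL=something.myfritz.net \
          -e SUBDOMAINS=myNAS \
          -e VALIDATION=http \
          -p 80:80 -p 443:443 \
          linuxserver/letsencrypt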

    Most of /data should be filled with data created by users who are not using your external storage. I don't care about /data, but I still placed it on the SSD to keep the folder layout tidy.

    Ahh, now I get it as well. I didn't know it was possible to mount an external storage in NC; I'll have a look at this :thumbup:

    In the meantime I was able to get it working: setting wide links = yes and unix extensions = no in the smb.conf fixed it. The symlink in the user's home directory to their Nextcloud folder now works as expected :)
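    For anyone running into the same thing, these are the two lines in question, e.g. in the global section of smb.conf:

        [global]
            # allow following symlinks that point outside the share
            unix extensions = no
            wide links = yes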



    I'm not sure if I understood your question, but:


    on my SSD, I have the following shared folders:

    • docker-install: where the docker binaries are stored. It's not shared with anyone; you don't want to mess with those files
    • docker-apps: where my containers' data are stored. I created a folder for each container to store its data.

    This allows my docker ecosystem to run independently of the HDDs (I want them to spin down).
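    For illustration, that boils down to one sub-folder per container under the shared folder on the SSD, e.g. something like this (OMV-style path and container names used purely as examples):

        # hypothetical path: one data folder per container under docker-apps
        mkdir -p /srv/dev-disk-by-label-ssd/docker-apps/{nextcloud,mariadb,letsencrypt}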

    The Nextcloud container creates two volumes (at least on my system; I believe I still have related stuff in /var/lib/docker though): /data and /config. In the /data volume there's one folder for every user, each with 'cache' and 'files' subfolders, plus folders called 'appdata_...', 'files_external' and 'ownbackup' and some loose files. I don't know how big this stuff may grow, except for 'files', or whether moving it to my SSD would affect speed in some way, hence my question whether you put your /data volume on your HDDs as a whole, or only the 'files' folders. I guess the /config volumes of your nextcloud and database containers are in docker-apps in your case?
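    To visualise it, the layout looks roughly like this (user names are just examples):

        /data
            appdata_.../
            files_external/
            ownbackup/
            alice/              (one folder per user)
                cache/
                files/          (the actual user files)
            ...some loose files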

    Absolutely yes!

    All my docker apps run from my SSD, and in general it's a good idea. Nextcloud is quite "big" and will definitely get a noticeable boost.

    Does your SSD have only one partition, used by OMV? If so, you'll have to repartition the drive (use GParted), create a new file system and use it for your docker apps.

    Further question: did you put only your "files" folders on your HDDs, or the whole /data volume including e.g. "appdata_..." or "ownbackup"?


    I'm currently trying to link the "files" folders to the user home directories using the symlink plugin, but I can't quite get it working. When I link a "files" folder from inside the /data volume into a home directory, I cannot open it as that user; some permissions are wrong here. When I do it the other way around, i.e. moving the "files" folder to the home directory and linking it into the /data volume, the symlink doesn't even show up. I've set the ACL of this folder to read/write/execute for all users. My goal is that every user can access their Nextcloud files, but only theirs, through SMB or NFS as well.


    Edit: Apparently Nextcloud creates the folders inside the /data volume read-only for the users group; only the owner I set with PUID has write access. So I gave my user read/write access to his Nextcloud folder and subfolders on the /data volume through ACLs. Now I can write when accessing the /data volume directly via SMB, but I'm still getting a network access error saying I don't have permission when trying to open this folder via the symlink in my home folder :/
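    For reference, what I'm trying to do is roughly this (user name and paths are placeholders):

        # hypothetical paths: symlink the user's Nextcloud "files" folder into their home share
        ln -s /srv/dev-disk-by-label-data1/nextcloud/data/alice/files \
              /srv/dev-disk-by-label-data1/homes/alice/nextcloud
        # and grant that user read/write access on the Nextcloud side via an ACL
        setfacl -R -m u:alice:rwX /srv/dev-disk-by-label-data1/nextcloud/data/alice/files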

    Absolutely yes!

    All my docker apps run from my SSD, and in general it's a good idea. Nextcloud is quite "big" and will definitely get a noticeable boost.

    Does your SSD have only one partition, used by OMV? If so, you'll have to repartition the drive (use GParted), create a new file system and use it for your docker apps.

    I haven't thought about repartitioning, thank you!
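    If I understand it correctly, the rough sequence would be something like this (purely a sketch; the new partition itself would first be created by shrinking the OS partition with GParted from a live USB, and the device name is only a placeholder):

        # hypothetical: /dev/sda3 is the new partition next to the OMV system partition
        mkfs.ext4 -L docker /dev/sda3
        mkdir -p /srv/docker
        mount /dev/sda3 /srv/docker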

    Hi guys,


    in my case, OMV is installed on a 120GB SSD and I have 2 HDDs for data storage. I installed the Nextcloud docker (linuxserver) broadly following the guides by TechnoDadLife and macom. In both cases, the three folders needed (config for the database and Nextcloud, plus data) are created on the data drives. I'm wondering whether it is possible to move (or set) the two config folders, i.e. the database and the appdata of Nextcloud, to my system SSD, especially with regard to permissions. Would there be noticeable advantages?
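    To make it concrete, what I have in mind is a mapping roughly like this (paths are placeholders for the SSD and one of the data drives):

        # hypothetical: config folder on the system SSD, user data on one of the HDDs
        docker run -d --name=nextcloud \
          -v /srv/ssd/nextcloud/config:/config \
          -v /srv/dev-disk-by-label-data1/nextcloud/data:/data \
          linuxserver/nextcloud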


    This question is basically not limited to Nextcloud, as I also want to run e.g. my digiKam database on my server.


    Many thanks for your help

    Hi guys,


    since I'm currently setting up my first NAS, I've spent the last few hours reading, especially about bit rot and data corruption. Although I consider myself brighter than before, I did not find a satisfying answer on how to proceed in my case. I hope someone can shed some light on this.


    There are currently two 4TB drives in my NAS. A significant amount of storage will be reserved for media, including many photos and videos of my family. Obviously they are very important to us, so I don't want them to get corrupted. Since availability is not that important for now and I'm going to do weekly backups anyway, I decided against using RAID. Basically what I want is for corrupted files and bit rot to be detected, so that I can restore the affected files from my backup. I don't need automatic protection; I'm fine with restoring the files manually. I already played around a bit with BTRFS and ZFS and managed to induce a corruption in ZFS and detect it via zpool status -Lv, which (I think) looks like what I want. However, ZFS at least seems to require quite some time to learn the ropes and has other downsides too. So what's the "simplest" way to achieve what I want?
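    To show what I mean by "detect", this is roughly the workflow I tried in ZFS (pool name is just an example):

        # re-read all data in the pool and verify checksums ("tank" is a placeholder pool name)
        zpool scrub tank
        # afterwards, show the status including any files with unrecoverable errors
        zpool status -v tank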


    Many thanks in advance