Posts by Skullchuck

    Quote
    • "works for a while" is the current state --> I have access for 1-3 minutes, then I get ERR_CONNECTION_REFUSED again (WTF!?)

    Ok this behaviour was apparently caused by the desktop & mobile clients trying to connect to NC with the "old" credentials (I chose the same user name and password, but there seems to be a difference anyway). NC or maybe Fail2Ban probably just blocked the connection from my IP after a while.


    Now the access works locally & from the internet! :love: Finally, after an almost two-week nightmare...

    It's very interesting though that I set 'overwrite.cli.url' => '192.168.0.101:451', BUT access via the domain works, also from the internet. NC with Swag remains a mystery to me.


    I just restored my calendar app using https://help.nextcloud.com/t/c…ks-as-ics-vcf-files/11978 and the procedure was a bit different for me:


    1. I stopped all "new" containers related to NC

    2. Commented out everything from the old stack apart from the mariadb section and added the following lines to it:

    Code
       ports:
          - 3306:3306

    3. Deployed the stack

    4. Followed https://codeberg.org/BernieO/c…ncloud-nextcloud-instance

    5. In the /usr/local/bin/nextcloud_dummy/config/config.php I had to set:

    Code
    'dbhost' => '192.168.0.101',  # replace with your local IP
    'dbport' => '3306',

    Neither the container name nor "localhost" nor "127.0.0.1" worked! It took me a while to figure this out.

    6. Then the script worked, and I could download the backup file to my PC via SMB and upload it to the new calendar via the web GUI (old container stopped, new containers started again)
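    For illustration, the stripped-down old stack from steps 2-3 ends up looking something like this (service and volume names are made up, not my actual stack):

```yaml
# Hypothetical minimal stack: everything except mariadb commented out,
# with the DB port published so it is reachable from the host/LAN (192.168.0.101:3306)
services:
  mariadb:
    image: mariadb
    ports:
      - 3306:3306
    volumes:
      - ./mariadb:/var/lib/mysql   # data path is an assumption
```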


    This might be useful for anyone having to restore calendars and contacts from the Nextcloud database.

    Current status:


    This table shows what I changed (columns "config", "before", "after") and the result (remaining columns):

    | config | before | after | domain | https://192.168.0.101:451/ | domain on phone | from the internet |
    | --- | --- | --- | --- | --- | --- | --- |
    | nextcloud.subdomain.conf | set $upstream_app nextcloud; | set $upstream_app nextcloud1; | does not work | works, insecure | - | - |
    | Tuesday morning initial state | | | does not work | does not work | works for a while!? | - |
    | /etc/appdata/nextcloud1/config/www/nextcloud/config/config.php | 'overwrite.cli.url' => 'https://nextcloud.<ANONYMIZED>.duckdns.org', | 'overwrite.cli.url' => '192.168.0.101:451', | does not work | does not work | works for a while!? | - |
    | hosts | 192.168.0.101 nextcloud.<ANONYMIZED>.duckdns.org | commented out | works for a while!? | - | works for a while!? | works for a while!? |


    Some remarks:

    • "-" means I didn't test it
    • I don't know why the "Tuesday morning initial state" differs from before. There's an automated swag restart during the night; maybe that changed something...
    • "works for a while" is the current state --> I have access for 1-3 minutes, then I get ERR_CONNECTION_REFUSED again (WTF!?)
    • "hosts" means the local Windows hosts file on my PC. I had an entry there to keep my local traffic from being routed to my ISP and back
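    For reference, the hosts override from the table is a single line in the Windows hosts file (standard path shown; the IP/domain are the ones from the table above):

```text
# C:\Windows\System32\drivers\etc\hosts
192.168.0.101    nextcloud.<ANONYMIZED>.duckdns.org
```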

    KM0201 pic... Windows... not sure if you're confusing me with someone else?


    These are the permissions of the data folder:


    Code
    drwxrwx--- 8 docker1 users 4096 25. Jan 13:48 data

    If these were wrong, it wouldn't have worked locally before.


    Interestingly, opening https://192.168.0.101:451/ now leads to ERR_CONNECTION_REFUSED again, with the URL being overwritten. It might be a belated reaction to the new config.php, but I'm not aware of having changed anything since then... very, very strange.

    Quote

    Note line 16. you have an extra / in there after "nextcloud"

    This wasn't real; something went wrong when I replaced the domain with <ANONYMIZED>. The line actually already looks like this:

    Code
     'overwrite.cli.url' => 'https://nextcloud.<ANONYMIZED>.duckdns.org',


    In nextcloud.subdomain.conf I changed "set $upstream_app nextcloud;" to "set $upstream_app nextcloud1;" and restarted both the nextcloud1 and the swag container. The result: NC is available again under https://192.168.0.101:451/ (but insecure and without replacing the URL), yet not under the domain.
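    For context, the proxy target in nextcloud.subdomain.conf now reads like this (the neighboring set lines are as shipped in the linuxserver sample, if I remember correctly):

```nginx
# nextcloud.subdomain.conf: the container swag proxies to
set $upstream_app nextcloud1;
set $upstream_port 443;
set $upstream_proto https;
```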


    Also, I noticed that the old NC config.php was much more comprehensive, so I added a few sections from the old to the new config (redis config, trusted_proxies and mail config). Now it looks like this:


    The result is still the same, unfortunately.
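    For anyone comparing configs: the kind of sections I copied over look roughly like this. These are standard config.php keys, but the host names here are placeholders, not my actual values:

```php
// Illustrative config.php fragments (placeholder values; mail section omitted)
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
    'host' => 'redis1',        // container name of the redis service (assumed)
    'port' => 6379,
],
'trusted_proxies' => ['swag'], // name/IP of the reverse proxy container (assumed)
```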

    1.

    Yes, we left it on 443, but I adapted it to the guide because I thought there must be a reason for those ports.


    2.

    This could be it; it's still on 443. But I don't have time to try right now...

    So there was no way to update this container properly. Nobody knows why :|

    I deployed a new stack in portainer with the containers nextcloud1, nextclouddb1 and redis1. Nextcloud was running locally on port 451 and with the help of KM0201 we managed to transfer the user data from the old data directory to the new one.


    Next step was to make it reachable from the internet again by using swag (generally, following this guide). As I already had a nextcloud stack and a Swag stack, I applied the following steps:

    • In my router I forwarded port 80 external to 82 internal and 443 external to 444 internal (it was 443 to 443 before)
    • In the swag stack, for the swag container I added DOCKER_MODS=linuxserver/mods:swag-dashboard under environment and 444:443, 82:80, 81:81 under ports, then redeployed the stack
    • checked that in /etc/appdata/swag/nginx/proxy-confs there is a nextcloud.subdomain.conf (without .sample). It was already like this, and I didn't change anything
    • adapted /etc/appdata/nextcloud1/config/www/nextcloud/config/config.php
    • Uncommented the ##network modes in the nextcloud1 stack and redeployed
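    Put together, the swag service in the stack now looks roughly like this (abridged sketch; image name and other settings as in the linuxserver defaults, only the lines I touched are shown):

```yaml
# Sketch of the swag service after the changes above (abridged, other settings unchanged)
services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    environment:
      - DOCKER_MODS=linuxserver/mods:swag-dashboard
    ports:
      - 444:443   # router forwards external 443 here
      - 82:80     # router forwards external 80 here
      - 81:81     # swag-dashboard
```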

    Situation now: the domain gives ERR_CONNECTION_REFUSED from inside the network and a 502 from outside. https://192.168.0.101:451/, which worked before, also gives ERR_CONNECTION_REFUSED, and the IP gets overwritten with my domain. https://<MY_DOMAIN>.duckdns.org/ shows the SWAG park page, but with an invalid certificate.

    Also, by accident, I started the old nextcloud stack once. Nextcloud didn't come up, but mariadb and redis did. I hope that didn't break anything; I removed them immediately after realizing it.

    Somehow I've got the feeling that I would need to map port 444 to 451 somewhere inside swag. I already tried to set

    ports:

    - 444:451

    (instead of 444:443) in the swag stack, but it didn't change anything.
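    My current understanding of the compose port syntax, for what it's worth: the left number is the host port and the right one the port inside the container, and swag's nginx listens on 443 inside the container:

```yaml
ports:
  - 444:443   # host port 444 -> container port 443, where swag's nginx listens
# - 444:451   # would instead forward to container port 451, where nothing listens in swag
```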


    You can find the config that I changed and the complete portainer stacks attached.


    I hope I just have a silly mistake somewhere and someone here can spot it and point me towards it. I will be offline from now on till at least Sunday night. Maybe someone has some spare time over the weekend? Thanks :)

    config/databases/b3506031d05d.err says:


    and repeats this every second.


    What I'm asking myself is how is this

    Code
    Can't create/write to file '/var/tmp/ibXXXXXX' (Errcode: 13 "Permission denied")

    even possible? That's a directory within the container, right? How can it not have permissions inside the container?


    I can go inside the container and execute chmod 777 /var/tmp, and the container will start successfully (at least until the next deployment). But NC keeps showing me a 404 :(

    Thanks again KM0201 for all the great support :thumbup:

    This is the current error message of the nextcloud container:


    And the mariadb container keeps writing:


    Code
    240123 12:06:28 mysqld_safe Logging to '/config/databases/b3506031d05d.err'.
    240123 12:06:28 mysqld_safe Starting mariadbd daemon with databases from /config/databases

    Any ideas?

    Thanks for the quick responses.


    KM0201

    Quote

    You never set a proper data directory for portainer is my guess.

    Unfortunately, that's my guess, as well :(


    Quote


    Do you still have your old version of portainer installed? (Even if it is not working)

    How would I find this out? I can't see it anymore in the OMV plugins, and the Portainer GUI is not reachable anymore (which is how I noticed it at all). docker container ls does not list portainer or any of the containers I had before installing openmediavault-compose. Btw I'm on 6.9.12-2 (Shaitan).


    chente

    Quote


    To solve the Portainer issue you can follow these steps. https://wiki.omv-extras.org/do…openmediavault-compose_67

    I can't, because as I wrote above, the portainer data directory either never existed or is gone now.


    Quote

    As for the quick setup guide you followed, I assume you read the beginning of the guide and the conditions for using it. If you are doing this installation on an already configured server, this guide will serve as a reference but you should follow this: https://wiki.omv-extras.org/doku.php?id=omv6:docker_in_omv

    I'm not sure how this would help? The NPM and fail2ban containers are up and running. NPM has a Letsencrypt issue though, as I described above. This is the one I cannot solve ?(


    So from my perspective I can either try to "go backward" (reviving Portainer somehow) or to "go forward" (solving the NPM/Letsencrypt issue). And both directions seem blocked...

    Ok, since I need to find a solution rather quickly, I went ahead and tried to install NC AIO by following this, deciding on the "with proxy" variant and ultimately using that.


    What a mess...! If I had realized that installing the docker-compose plugin would destroy Portainer, I would never have done it. So Portainer is gone, and it seems I can't bring it back (this does not work: there is no Portainer container running anymore and I don't have a directory with the Portainer data).


    Anyway, going forward does not work either, because NPM just gives me an "internal error" without any explanation when I try to create a proxy host:



    Container log:

    And the letsencrypt.log:


    I'm getting desperate. Could someone please help?

    Not sure if I'm right here, please just move this post if it doesn't belong.


    I have Nextcloud + Swag set up, and so far I was on NC 25.0.13. Since it's officially unsupported (and I was maybe too update-eager), I wanted to upgrade to NC 28.


    I followed this approach, i.e. I pulled the latest NC image and re-deployed the container with it. Obviously I forgot that NC cannot skip major versions ;(
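    In hindsight, pinning the image to one major version at a time instead of :latest would have avoided the skip; assuming the linuxserver tags follow the major versions, something like:

```yaml
# upgrade step by step: deploy with 26, let the updater finish, then 27, then 28
image: lscr.io/linuxserver/nextcloud:26
```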


    So the new container was not working, and my idea was now to go back to the old version and then step through the major releases iteratively. However, after deploying the copy of my backup image and overwriting the config folder with my backup config folder, I receive the following backend error:


    nginx: [emerg] duplicate upstream "php-handler" in /config/nginx/site-confs/default.conf:1


    On the front end this is a "502 Bad Gateway". When I comment out the respective lines in default.conf, the container starts and gives me some warnings ("nginx: [warn] conflicting server name "_" on 0.0.0.0:80, ignored"), but the front end shows a 404.


    The question now is: how to proceed? Does it make sense to fix the container at all, and if so, how? Or would it be simpler to switch to NC AIO?


    Looking forward to your ideas and opinions, thanks :)

    Nice, that sounds easy. For problem A I just copied your solution (scheduled IP renewal with duckdns via a cron job in the OMV GUI) :thumbup:


    For problem B I realized I don't even need systemd. Just scheduled "docker restart swag" in the OMV GUI scheduler.
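    Concretely, the two scheduled jobs amount to something like these crontab lines (the OMV GUI generates the equivalent; token and schedules are placeholders):

```text
# refresh the duckdns record every 5 minutes (duckdns' documented update URL)
*/5 * * * * curl -s "https://www.duckdns.org/update?domains=<ANONYMIZED>&token=<YOUR_TOKEN>&ip=" >/dev/null
# restart swag nightly
0 4 * * * /usr/bin/docker restart swag
```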


    Probably that solves the issues. I will watch if it keeps on working from now on.

    Hi all,


    I'm on 6.0.28-3 (Shaitan) and have had Nextcloud + Swag + DuckDNS (following this guide, kudos) running with external access for over a year. I noticed two small problems:


    Problem A:

    Often duckDNS does not refresh the IP address (for at least a day or so).


    My current workaround:

    I have to log in manually, delete the IPv4 address and let it detect it again. Pretty annoying.


    I don't know if duckDNS would notice the outdated IP eventually, but it definitely takes too long.

    I'm aware that there are paid services (e.g. Cloudflare) that might work better, but right now this is not an option for me.


    Problem B:

    Even if duckDNS has the correct IP address, sometimes the external access is not working (404).


    My current workaround:

    Restart the swag container.


    I will soon be going on a longer journey and won't have access to my local network to do this (right now there is no VPN, and actually I would like to keep it this way). So my current idea (not implemented yet) is to set up a systemd service that executes "docker restart swag" and schedule it with cron.


    Did anyone experience similar issues? Any other suggestions on how to cope with these issues?


    Thank you :)

    So the test is still running, but out of curiosity I looked at sudo smartctl -a /dev/sde and found some errors in the log:



    It doesn't seem like these error codes (800-804) are the official ones from WD, but maybe someone knows how to interpret them...

    The drive is brand-new. I can only imagine that the shutdown during the extension broke something :(

    I would like to run a long SMART test, but again no device is listed:



    Am I doing something wrong?


    EDIT: Solved it by

    sudo smartctl -t long /dev/sde


    It will be running for 10 hours...

    That is because the drive /dev/sde has a raid signature on it according to blkid


    I assume that /dev/sde is the drive you added to grow the array, if that's the case then mdadm --add /dev/md127 /dev/sde should add the drive back to the array

    Thank you for the quick response. OK, I did that. Unfortunately it's still "clean, degraded":


    cat /proc/mdstat

    Code
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid5 sde[3](F) sdc[1] sdb[0] sdd[2]
    11720661504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
    bitmap: 11/30 pages [44KB], 65536KB chunk

    Looks like the drive is faulty :(
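    As far as I can tell, that's exactly what the mdstat output above encodes: "(F)" marks the faulty member, and "[4/3]" / "[UUU_]" say 4 devices are expected but only 3 are up. A quick check, using the pasted output as sample text (on the live system you'd read /proc/mdstat directly):

```shell
# Sample text taken from the cat /proc/mdstat output above
mdstat='md127 : active raid5 sde[3](F) sdc[1] sdb[0] sdd[2]
      11720661504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]'

# "(F)" marks a faulty member; "[4/3]" means 4 expected devices, only 3 active;
# each "_" in "[UUU_]" is a down slot
faulty=$(echo "$mdstat" | grep -c '(F)')
counts=$(echo "$mdstat" | grep -oE '\[[0-9]+/[0-9]+\]')
echo "faulty members: $faulty, devices: $counts"
```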

    Do you know if this must be a hardware error or do I have any (software-wise) recovery options from here on?

    Hi all,


    I'm on 6.0.28-3 (Shaitan). I extended my RAID-5 with a new 4 TB HDD, having 3x 4 TB in the system already. During the extension, the system shut down (for some unknown reason, maybe overheating), but when I started it again, it continued the extension. When it finished (seemingly successfully), I was in a hurry and just quickly extended the file system, which worked.


    Now having a closer look at the RAID, it tells me it's in the state "clean, degraded":

    Code
    Version : 1.2
    Creation Time : Sun Sep 2 04:05:15 2018
    Raid Level : raid5
    Array Size : 11720661504 (11177.69 GiB 12001.96 GB)
    Used Dev Size : 3906887168 (3725.90 GiB 4000.65 GB)
    Raid Devices : 4
    Total Devices : 3
    Persistence : Superblock is persistent
    Intent Bitmap : Internal
    Update Time : Fri Jan 27 07:00:56 2023
    State : clean, degraded
    Active Devices : 3
    Working Devices : 3
    Failed Devices : 0
    Spare Devices : 0
    Layout : left-symmetric
    Chunk Size : 512K
    Consistency Policy : bitmap
    Name : openmediavault:RAIDAR (local to host openmediavault)
    UUID : e98b7abd:4f328c81:40a102c3:1824afcf
    Events : 79896

    Number  Major  Minor  RaidDevice  State
       0      8     16        0       active sync   /dev/sdb
       1      8     32        1       active sync   /dev/sdc
       2      8     48        2       active sync   /dev/sdd
       -      0      0        3       removed

    Googling pointed me to the mdadm --add command for adding the missing drive back to the array. However, I would have expected the "recover" option in the GUI to do the same, but I cannot select a device there:



    Does anyone have experience with this? Can I safely execute the mdadm --add command or do I need to do something else?


    Here some detailed information:

    cat /proc/mdstat

    Code
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid5 sdc[1] sdb[0] sdd[2]
    11720661504 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
    bitmap: 20/30 pages [80KB], 65536KB chunk

    blkid

    Code
    /dev/sda1: UUID="64ae1488-3bd9-4236-8742-9ea44db6f56c" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="76aa5ac0-01"
    /dev/sda5: UUID="c2b0cb47-aeec-4b5a-8285-857b1c56da54" TYPE="swap" PARTUUID="76aa5ac0-05"
    /dev/sdb: UUID="e98b7abd-4f32-8c81-40a1-02c31824afcf" UUID_SUB="a36eadb0-2348-fb83-ec76-65c9fa5df48b" LABEL="openmediavault:RAIDAR" TYPE="linux_raid_member"
    /dev/sdc: UUID="e98b7abd-4f32-8c81-40a1-02c31824afcf" UUID_SUB="94cf7512-43e5-3957-7060-0e6cc0cdd526" LABEL="openmediavault:RAIDAR" TYPE="linux_raid_member"
    /dev/sdd: UUID="e98b7abd-4f32-8c81-40a1-02c31824afcf" UUID_SUB="f1ae8b96-55da-2541-bc00-7be870687109" LABEL="openmediavault:RAIDAR" TYPE="linux_raid_member"
    /dev/md127: LABEL="Raidar" UUID="5d21dac9-d7ba-4831-9d29-e6d9d8de5b3b" BLOCK_SIZE="4096" TYPE="ext4"
    /dev/sde: UUID="e98b7abd-4f32-8c81-40a1-02c31824afcf" UUID_SUB="e80184c3-5dc3-17b4-1f73-a6f95f5fb718" LABEL="openmediavault:RAIDAR" TYPE="linux_raid_member"
    /dev/sdf1: UUID="b533ba9f-52ff-9d49-8092-a954a53881e4" BLOCK_SIZE="4096" TYPE="ext4" PTUUID="d433308c" PTTYPE="dos" PARTUUID="d433308c-01"

    fdisk -l | grep "Disk "

    cat /etc/mdadm/mdadm.conf

    mdadm --detail --scan --verbose

    Code
    ARRAY /dev/md/openmediavault:RAIDAR level=raid5 num-devices=4 metadata=1.2 name=openmediavault:RAIDAR UUID=e98b7abd:4f328c81:40a102c3:1824afcf
    devices=/dev/sdb,/dev/sdc,/dev/sdd


    Any help is appreciated, thank you.

    Aren't you using letsencrypt certs for the internet access?

    This was the right hint. Until now I was bypassing the proxy (=Swag/letsencrypt). Now I changed the Swag port to 443 and it works without cert error both internally and externally!


    Thank you Zoki for staying persistent :)


    Now I finally understand this:

    You would see the service because it would enter through Swag on port 443.

    I just didn't expect Swag to behave differently depending on the port it's running on.
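    In case it helps anyone else, my takeaway: only the server block where Swag's nginx listens with "ssl" terminates TLS using the Let's Encrypt certificate, so connecting to the backend port directly bypasses the cert entirely. Very roughly (illustrative shape only, not the actual swag config file):

```nginx
# Illustrative: the LE cert is only presented on the port nginx listens on with "ssl"
server {
    listen 443 ssl;                  # requests entering here get the Let's Encrypt cert
    include /config/nginx/ssl.conf;  # swag's cert/key settings live in an include like this
    server_name _;
}
```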