Posts by DonkeeeyKong

    I run Pihole in an lxc. It works fine, and since it is treated as another computer, the host can use it for dns. The trick is making sure the vm is getting a lan ip and not a kvm NAT ip.


    I achieve this by using a second nic in my system for OCM use that is enabled but not configured in omv. You can achieve the same result with a bridge if you have one nic.

May I ask how you attached the second NIC to the container? I was trying the same for AdGuard Home but I wasn't able to use the NIC from within the container (tried with an Ubuntu 24.04 container).


My second NIC is connected directly to the mainboard but shows up as a USB device. I was able to add it to the LXC via virt-manager and I could see it with lsusb in the container, but I wasn't able to use it for a network connection.


I'm now running AdGuard Home in an Ubuntu VM with that NIC attached to it. That was no problem at all; the NIC was recognized by the OS right away.

    LXC would be a lot nicer than a VM though.

    You should read the documentation. There's a good guide in the omv extras wiki.


You can use one parity drive with four data drives. So you could merge five of your 1 TB drives into a mergerfs data pool and use one 1 TB drive and the 6 TB drive as parity drives. You should read the FAQ and the documentation on the SnapRAID website.
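A rough sketch of what that could look like in snapraid.conf (paths are placeholders, adjust them to your own mount points):

    Code
    # 6 TB drive and one 1 TB drive as parity
    parity /srv/parity6tb/snapraid.parity
    2-parity /srv/parity1tb/snapraid.2-parity
    # content files (keeping a copy on several disks is recommended)
    content /var/snapraid.content
    content /srv/disk1/snapraid.content
    # the five remaining 1 TB drives as data disks
    data d1 /srv/disk1/
    data d2 /srv/disk2/
    data d3 /srv/disk3/
    data d4 /srv/disk4/
    data d5 /srv/disk5/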


Edit: I'm wondering why you want to use SnapRAID in this way. It looks like you tried to set up something similar to a traditional RAID10 using SnapRAID (which doesn't work). If you only want to use 3 TB from your six 1 TB drives, another option would be MDRAID, btrfs or ZFS, and you could use the 6 TB drive for something else.

    3. Set the system up for encryption by booting into a live environment. I used an Ubuntu 23.10 Live USB drive for this and connected display, mouse and keyboard to the server.


    3.1. First we need a separate /boot partition that remains unencrypted. (Newer versions of cryptsetup can work with encrypted /boot partitions but I haven't tried that yet.)

    a. We use Gparted to shrink our OMV system partition by a few GiB (I did 4, but 2 should be enough as well).

b. Create a new ext4 partition in the freed space. Use boot as label and as partition name (LABEL and PARTLABEL).

    c. Find out the device name of our system partition via sudo lsblk or sudo blkid. In my case /dev/nvme1n1p2

    d. Mount it: sudo mount /dev/nvme1n1p2 /mnt

    e. Now we backup our /boot folder and create an empty one:

    Code
    sudo mv /mnt/boot /mnt/boot.old
    sudo mkdir /mnt/boot

    f. We mount our new partition: sudo mount LABEL=boot /mnt/boot

    g. We copy the old /boot folder to the new partition: sudo cp -a /mnt/boot.old/* /mnt/boot/

    h. We add the following line to /mnt/etc/fstab:

    Code
    LABEL=boot /boot ext4 defaults 0 2

    i. We unmount both the system and the boot partition:

    Code
    sudo umount /mnt/boot
    sudo umount /mnt


3.2. We stay in the live environment. Now we need to prepare our system partition for encryption.

The LUKS header needs up to 32 MiB of space at the beginning of the partition, which we have to free up first.

    a. Check the partition for errors. This needs to be done before changing it. sudo e2fsck -f /dev/nvme1n1p2

b. Now we need to resize the filesystem inside the partition (not the partition itself!) to make room for our LUKS header. It is advisable to be generous with space here; the change is only temporary. My partition was 32 GiB with only about 10 GiB used, so I resized it to 20 GiB: sudo resize2fs /dev/nvme1n1p2 20G

c. Now we can create the LUKS header and initialize the encryption: sudo cryptsetup reencrypt --encrypt --init-only --reduce-device-size 32m /dev/nvme1n1p2 root_crypt. After setting a passphrase, the partition has a LUKS header and we will be asked for its passphrase during boot. It is still unencrypted though. The actual encryption can be started at any time now, also from within the running system.

    d. We check our new LUKS partition: sudo e2fsck -f /dev/mapper/root_crypt.

    e. And now we can resize the filesystem to use the whole partition again: sudo resize2fs /dev/mapper/root_crypt


3.3. Now we need to chroot into our OMV system to update it to the new configuration.

    a. Find our EFI partition via sudo lsblk, sudo blkid, Gnome Disks or Gparted (for me it's /dev/nvme1n1p1).

    b. Mount our partitions:

    Code
    sudo mount /dev/mapper/root_crypt /mnt
    sudo mount LABEL=boot /mnt/boot
    sudo mount /dev/nvme1n1p1 /mnt/boot/efi

c. chroot into our OMV system:

    Code
    for d in dev sys proc tmp; do sudo mount --bind /${d} /mnt/${d}; done
    sudo chroot /mnt

    d. Now we can find out the UUID of /dev/mapper/root_crypt via sudo blkid /dev/mapper/root_crypt and change the root partition's /etc/fstab entry to the new UUID:

    Code
    # / was on /dev/nvme0n1p2 during installation
    UUID=xxxx    /    ext4    errors=remount-ro    0    1

When editing /etc/fstab, be aware that the text between # >>> [openmediavault] and # <<< [openmediavault] should remain unchanged.

    e. Now we find out the UUID of the encrypted partition via sudo blkid /dev/nvme1n1p2 and add that to the /etc/crypttab:

    Code
    # <target name>    <source device>    <key file>       <options>
    cswap1            PARTUUID=xxx        /dev/urandom    swap,cipher=aes-xts-plain64,size=256,discard
    root_crypt        UUID=xxx            none            luks,discard

    f. Now we can also add our encrypted data disks to the /etc/crypttab with their respective UUIDs. If we want them to be automatically unlocked when unlocking the root partition, we need to use the same key for them as we used for the root partition.

I use the same passphrase; I guess it would also work with keyfiles. If one uses different passphrases, the passphrase we used for the root partition can simply be added to the existing disks in another keyslot (either via the command line or via the OMV plugin).
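Adding the root passphrase to an already encrypted data disk in a free keyslot on the command line would look roughly like this (the device name is just an example):

    Code
    sudo cryptsetup luksAddKey /dev/sdb1

It first asks for an existing passphrase of that disk and then twice for the new one.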

    If the same key is used we just need to add keyscript=decrypt_keyctl to the options and all drives get unlocked when unlocking the root partition during the boot process. The /etc/crypttab should look like this:

    Code
    # <target name>    <source device>    <key file>       <options>
    cswap1            PARTUUID=xyz        /dev/urandom    swap,cipher=aes-xts-plain64,size=256,discard
    root_crypt        UUID=xxx            none            luks,discard,keyscript=decrypt_keyctl
    data1_crypt       UUID=yyy            none            luks,discard,keyscript=decrypt_keyctl
    data2_crypt       UUID=zzz            none            luks,discard,keyscript=decrypt_keyctl

    g. Now we update our initramfs and reinstall the bootloader:

    Code
    sudo update-initramfs -c -k all
    sudo grub-install
    sudo update-grub

h. We can now leave the chroot environment with exit and unmount our devices:

    Code
    for d in dev sys proc tmp boot/efi boot ""; do sudo umount /mnt/${d}; done
    sudo cryptsetup close root_crypt

    i. Shut down the live environment. Display, mouse and keyboard shouldn't be needed anymore.


    4. Booting the system and finishing the encryption

    a. Power up the server.

    b. We can ping the IP we set up for the server to see when it is available:

    Code
    user@Desktop:~$ ping 192.xxx.xxx.xx

    c. When we see a response we can connect to dropbear using ssh and the port we configured in the beginning:

    Code
    user@Desktop:~$ ssh -p 33333 root@192.xxx.xxx.xx

    (Connecting as root is necessary. Dropbear doesn't know our OMV users.)

    d. We can now enter the password for our root partition. The output should be something like this:

    Code
    Please unlock disk root_crypt:
    cryptsetup: root_crypt set up successfully
    Connection to 192.xxx.xxx.xx closed.

    e. Now the system will start, unlock all drives from crypttab and we can use our regular ssh connection to connect to the server. Dropbear is only reachable during boot and closes the connection after unlocking the system partition.

f. Now, whenever we want, we can finish the encryption with the command sudo cryptsetup reencrypt --active-name root_crypt. This can be interrupted with Ctrl+C when necessary and resumed at any time. In my case it took only a few minutes though.

    g. If everything works, we could delete the /boot.old folder now.



    That's it. Full disk encryption with automatic unlock for data drives.



In addition to what I already mentioned, here are some guides that helped me a lot when trying to figure this out (two in German, one behind a paywall):

    LUKS-Festplatten-Vollverschlüsselung per SSH entsperren - codingblatt.de

Linux-Installationen nachträglich verschlüsseln
With this guide you can encrypt your Linux installation retroactively and, with the right preparation, even while the system is running.
www.heise.de

    https://www.cyberciti.biz/security/how-to-unlock-luks-using-dropbear-ssh-keys-remotely-in-linux/

First of all: This is an example, not a guide. I cannot guarantee that what worked for me in my specific setup will work for anybody else. I might not be able to help if problems occur for anyone trying to copy what I did.


However, this might be useful to people who are looking for ways to achieve the same thing I did. Still, if you don't understand every step of what I did: don't do it.


I like to have all my data stored on encrypted drives. I do that for my computer and for the drives I have been using to back up my computer. Naturally, I also encrypted my storage drives in OMV via the openmediavault-luksencryption plugin.

    Two things bothered me about that setup:

    1. After a reboot, services that are on or refer to an encrypted drive (for me: fail2ban, docker, mergerfs, ...) were not running and I had to manually put in the passphrase for each drive and then restart every service.

Also, after each reboot I received about 30 e-mails from OMV telling me that my filesystems were missing, that mountpoints had failed, etc. After unlocking the encrypted drives I received several more e-mails telling me that mounting the filesystems had succeeded, and so on.

So, after every boot, I had to make a few adjustments before things worked and then delete a lot of useless e-mails.

2. My root drive was not encrypted. While many might argue that this is not necessary, I like the thought that, when I power down my computers, no one can access anything. Also, this way I don't have to care about what information might be stored by which program in which place. When my drive fails and I throw it away, I don't have to worry that there might be readable information on it that I don't want in somebody else's hands, etc. Everything is encrypted. Zero trust. Nice.


I came across this guide, but it seems outdated. My method is similar, but one doesn't have to copy or rsync the root drive, and the final encryption can be done while the server is up and running.

    I ended up with a fully encrypted system, where I enter one passphrase once during the boot process and get a fully working system without getting 30 useless e-mails.

It might be worth noting that my installation was on a 256 GB SSD with lots of free space.


    Here is what I did:


0. Back up everything!!


1. We need an encrypted swap. Encrypting everything but leaving swap unencrypted doesn't make sense in my opinion, since encryption keys or other sensitive data might be written to swap temporarily. The cryptsetup readme has a good guide for this:

    a. Find out the current swap partition by using lsblk, or by looking in the fstab (cat /etc/fstab). If a swapfile is used there is no need to change anything since it won't be accessible without unlocking the root drive anyway.
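To quickly see what is currently used as swap, something like this works:

    Code
    swapon --show
    lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT | grep -i swap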

    b. Deactivate our current swap: sudo swapoff -a

c. Add an entry to /etc/crypttab for our swap device:

    Code
    # <target name> <source device> <key file>      <options>    
    cswap1          <swapdevice, eg. PARTUUID=xxx> /dev/urandom    plain,cipher=aes-xts-plain64,size=256,swap

<swapdevice> needs to be replaced by the correct device. Using something like /dev/sda5 is not safe because device names like that can change between reboots. It is possible to use labels or UUIDs after following this guide; without that preparation they, too, change after every boot.

    I just use the PARTUUID I looked up via sudo blkid. (If the partition should be repurposed later on without repartitioning first, one has to remember to remove that line from crypttab, otherwise the partition will continue to be overwritten on each boot.)

    Other options include using /dev/disk/by-id/xxx-part-x. All persistent device names can be found via find -L /dev/disk -samefile /dev/<swapdevice>
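For example (the device name is just an illustration):

    Code
    sudo blkid /dev/sda5
    find -L /dev/disk -samefile /dev/sda5

The first command shows the PARTUUID (and UUID/LABEL, if any) of the swap partition, the second lists all persistent names pointing to it.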

If the swap partition is on an SSD, it's possible to add discard to the options in order to make TRIM work (although this weakens the encryption somewhat - probably more relevant for normal partitions though):

    Code
    # <target name> <source device> <key file>      <options>    
    cswap1          <swapdevice, eg. PARTUUID=xxx> /dev/urandom    plain,cipher=aes-xts-plain64,size=256,swap,discard

    d. Change the swap entry in /etc/fstab to point to the encrypted device:

    Code
    # <file system> <mount point>   <type>  <options>     <dump>  <pass>
    /dev/mapper/cswap1  none        swap    sw            0       0

    e. Create and start the encrypted swap device:

    Code
    sudo cryptdisks_start cswap1    
    sudo swapon -a

    f. Make sure resume/suspend to disk is disabled:

    Code
echo "RESUME=none" | sudo tee /etc/initramfs-tools/conf.d/resume
    sudo update-initramfs -u

    g. Done. Swap is encrypted.


2. Set up Dropbear to be able to connect via SSH during the boot process.

    a. Install dropbear-initramfs: sudo apt install dropbear-initramfs

    b. Edit /etc/dropbear/initramfs/dropbear.conf to contain the following:

    Code
    DROPBEAR_OPTIONS="-I 180 -j -k -p 33333 -s -c cryptroot-unlock"

    (this closes the connection after 180 seconds of no activity, disables port forwarding, sets dropbear's ssh port to 33333, disables password login and restricts the ssh connection to executing cryptroot-unlock.)

c. Copy our SSH public key to /etc/dropbear/initramfs/authorized_keys. We can generate a new pair and insert the public key there, or copy a key we are already using from the ~/.ssh/authorized_keys file in our user's home folder on OMV.

    I am using Ubuntu on my desktop computer, and the public key for openmediavault is already stored in my home folder so I just copied that one:

    Code
    user@Desktop:~$ scp ~/.ssh/id_ed25519.pub 192.xxx.xxx.xx:~/key.pub
    user@Desktop:~$ ssh 192.xxx.xxx.xx
    user@omv:~$ sudo -s
    root@omv:/home/user# cat key.pub >> /etc/dropbear/initramfs/authorized_keys
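If you prefer a dedicated key pair just for unlocking, generating and copying one would look something like this (the file name is arbitrary):

    Code
    user@Desktop:~$ ssh-keygen -t ed25519 -f ~/.ssh/dropbear_unlock
    user@Desktop:~$ scp ~/.ssh/dropbear_unlock.pub 192.xxx.xxx.xx:~/key.pub

Then append key.pub to /etc/dropbear/initramfs/authorized_keys as above and later connect with ssh -p 33333 -i ~/.ssh/dropbear_unlock root@192.xxx.xxx.xx.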

d. My router is configured to always assign a specific IP address to my server via DHCP. If that is not the case, we need to specify the IP in /etc/initramfs-tools/initramfs.conf by adding:

    Code
    IP=<desired-omv-IP>::<gateway-ip>:<netmask>:<hostname>

    Even if this is not necessary because DHCP is set up, I recommend (additionally) setting the static IP in the omv GUI network interfaces settings page to avoid any trouble with DHCP after dropbear closes the connection.

    e. I have more than one network interface and dropbear always used the one that has no DHCP set up, so I specified which interface to use by defining it in /etc/initramfs-tools/initramfs.conf (by adding DEVICE=eno1).
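The relevant lines in /etc/initramfs-tools/initramfs.conf would then look roughly like this (interface name and addresses are just examples):

    Code
    DEVICE=eno1
    IP=192.168.178.20::192.168.178.1:255.255.255.0:omv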

    f. Update the initramfs image: sudo update-initramfs -u

    g. Done.


    3. See next post

    yes



    if drives do not spin down with smartctl, try hd-idle

Thanks for confirming! I was going to give an update here: after I posted I found this issue and this commit, which showed that hdparm had been replaced by smartctl.

I installed hd-idle last night and I am impressed. What had not been working before now works with hd-idle, apparently:

One old WD drive that almost never gets used and that I couldn't get to spin down before dropped its temperature from an average of 40 °C to 25 °C. All other drives are up to 10 degrees colder now. Even my two NVMe SSDs are about 3-5 degrees colder, probably because of the lower overall temperature. Everything, above all the fans, is quieter now. I am very content. :)
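In case it helps anyone: on Debian the idle times are configured in /etc/default/hd-idle. A minimal sketch (device names and times are examples, not my exact values):

    Code
    # spin down sda after 10 minutes and sdb after 20 minutes of inactivity;
    # -i 0 disables the default timeout for all other drives
    HD_IDLE_OPTS="-i 0 -a sda -i 600 -a sdb -i 1200"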

    however, there are other reasons why a drive does not spin down like access by some service like S.M.A.R.T if not configured properly

    Thank you. I selected "Standby" in the S.M.A.R.T power settings. This should prevent spun down drives from being woken up, if I understand that correctly?

    The OMV changelog reads:

    Code
openmediavault (7.0.4-2) stable; urgency=low

  * Replace hdparm with smartctl in UDEV helper script.

    Does this have anything to do with how drive spin downs are handled? If I am using the latest version of OMV and set up spin down times in the web UI, is this still done with hdparm? Or with smartctl? And if it has changed: Is hd-idle still the best option?

    Setting spin down times in the web ui doesn't seem to work with all of my drives...
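One way to check whether a drive is actually spun down at a given moment, independent of which tool triggered the spin down (assumes hdparm is installed):

    Code
    sudo hdparm -C /dev/sda

This reports "active/idle" or "standby" without waking the drive.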

So I had a look. If I open the contacts app, I do not see those errors in the logs. Instead I see a 200 entry as below (my web domain is redacted with xxxxxxx):


My Nextcloud version is 28.0.3, but as noted many times it is not a Docker container and not a prebuilt VM; it is an LXC that I custom built, and it uses a Postgres database instead of the MySQL database that is the Nextcloud default (both are officially supported, but I switched to Postgres because MySQL has a Unicode quirk that Nextcloud works around, while Postgres doesn't have that bug and performs faster than MySQL). This all means that the Nextcloud server has a LAN IP that is different from my main OMV server IP, uses a different database and, since the custom build used the bare-metal install instructions as a starting point, may have some additional differences - all of which can contribute to the difference in the way the prebuilt Docker container behaves compared to my custom VM build.


    Code
    [23/Mar/2024:14:36:03 -0230] - 200 200 - GET https xxxxxxxxxx "/remote.php/dav/addressbooks/users/Bern/z-server-generated--system/Database:Bern.vcf?photo" [Client 192.168.2.212] [Length 7217] [Gzip -] [Sent-to 192.168.2.251] "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36 OPR/108.0.0.0" "-"

Interesting. But you said you didn't use the contacts feature, didn't you? Are there any entries in the contacts app?


    Thanks for looking! :)

Yeah. I have 20 or 30 lines similar to that after opening contacts in Nextcloud.

Thank you. That means either we both have broken configs or there is indeed a bug in Nextcloud causing 404 errors. :)


    Anyway:

    I found a working solution now! :love: :thumbup: :thumbup:

    If anyone has the same problem and finds this thread via Google:

    I changed the npm-docker.conf from BernH's guide to this:

    Code
[INCLUDES]
    
    [Definition]
    failregex = ^.* (405|404|403|401|\-) (405|404|403|401) - .* \[Client <HOST>\] \[Length .*\] .* \[Sent-to <F-CONTAINER>.*</F-CONTAINER>\] <F-USERAGENT>".*"</F-USERAGENT> .*$
    
    ignoreregex = ^.* (404|\-) (404) - .*".*(\.vcf\?photo|\.png|\.txt|\.jpg|\.ico|\.js|\.css|\.ttf|\.woff|\.woff2)(/)*?" \[Client <HOST>\] \[Length .*\] ".*" .*$

    I got the inspiration from this guide. The trick is adding \.vcf\?photo| to the ignore list of regular expressions. This way the 404 errors generated by the contacts app because of the missing photo files get ignored. 404 errors because of other missing media files get ignored as well. I think that's a nice bonus.


It would probably also work if one just added the ignoreregex part to the end of the original filter from BernH via echo 'ignoreregex = ^.* (404|\-) (404) - .*".*(\.vcf\?photo|\.png|\.txt|\.jpg|\.ico|\.js|\.css|\.ttf|\.woff|\.woff2)(/)*?" \[Client <HOST>\] \[Length .*\] ".*" .*$' >> /absolute/path/to/persistent/appdata/fail2ban/filter.d/npm-docker.conf. I haven't tested if these filters work together though. Maybe tomorrow. For now it's time for bed. :)


    By the way: The filter for npm doesn't notice or block failed nextcloud logins. For this I set up a separate jail and filter as described here and here.

At first the Nextcloud logs always showed the internal Docker IP. After adding the whole internal IP range 172.16.0.0/12 as a trusted proxy in the Nextcloud config.php, the logs showed the actual client IPs and fail2ban blocked them.
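For reference, the trusted_proxies part of config.php then looks roughly like this (the 127.0.0.1 entry was already there in my case):

    Code
      'trusted_proxies' =>
      array (
        0 => '127.0.0.1',
        1 => '172.16.0.0/12',
      ),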


    Thanks a lot for your help, BernH and chente ! I learned a lot on the way and now it's working exactly the way I wanted! :)




    Edit: I had to disable the 401 filter as well, because I kept getting 401 errors from webdav - although syncing was working fine. I will further investigate that matter but for now I'm not getting banned anymore. Sadly 401 won't trigger any bans either...

    In my case the file that corresponds to the Nextcloud host is appdata/npm/data/logs/proxy-host-2_access.log. It is a very long file and the search tool does not find any string that corresponds to "error". I don't know if that helps you at all.

Thanks for looking. It doesn't say "error", it's just a 404. Something like this (after opening the contacts page in the Nextcloud web GUI):

    Code
    [22/Mar/2024:17:26:13 +0100] - 404 404 - GET https mynextcloudaddress.net "/remote.php/dav/addressbooks/users/admin/kontakte-1/DEA49D70-C871-44D7-9D44-D7C15D54831E.vcf?photo" [Client 31.xx.xx.11] [Length 0] [Gzip -] [Sent-to nextcloud-aio-apache] "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:123.0) Gecko/20100101 Firefox/123.0" "-"

    I'm not sure what you're trying to do, I haven't read the entire thread in detail, there are very long conversations here. But if all you want is to sync contacts and calendars you don't need to do anything in the proxy. I have NPM with the Custom locations tab blank and sync contacts and calendars using Nextcloud AIO without problems.

    Hi, thanks for your help.

Syncing works flawlessly here as well. I just wanted to add a layer of security with fail2ban, because I don't feel too comfortable just exposing my server to the web. But since I get 404 errors in the logs from the contacts app, that didn't work so well. Nextcloud itself works fine.


Since you seem to use a similar setup (of course - I used your guide): Can you maybe do me a favor and check your appdata/npm/data/logs/proxy-host-1_access.log (or wherever npm stores its log files on your system) after opening the contacts page in the Nextcloud web GUI and see if there are 404 errors there?

I can tell you that if nextcloud and npm are on the same docker network, they should be able to address each other by container name instead of ip address (this is a feature of docker designed to get around dynamic docker internal ip addresses).

    I tried that. They were able to address each other. See my post above:

    - then I changed npm's compose file to use the nextcloud-aio network, changed the hostname in npm to nextcloud-aio-apache and added jc21-npm as a trusted proxy. I could access and use Nextcloud via my domain but I still couldn't add any custom locations without the proxy host going offline and I still got 404 errors in the log when opening the contacts section in the Nextcloud web GUI.


I don't use the contacts in nextcloud so have never encountered that error, but it suggests that the addressbook is also using a dav connection that is being blocked, requiring another properly formatted rewrite directive for it to pass the proxy. I am by no means an expert on nextcloud and reverse proxying; I have only shared with you the configuration that works for the way I configure my system.

But that's a known bug. The contacts app asks for a photo file for every contact. If there is a photo set in the contacts app, it's fine. If not, it generates a 404 error. I don't think it has anything to do with my npm configuration.

And since Nextcloud works fine, I don't get any errors in the admin overview, and I can sync everything I want to, I have come to the conclusion that my npm configuration might not be the root of my problem: the Nextcloud contacts app has a bug that generates 404 errors.

For now I will try to set up fail2ban the way it's recommended in the official Nextcloud documentation. This should ignore the 404 errors. In the future I might change to crowdsec, since they added this bug to their whitelist, so it shouldn't cause any problems. So far I wasn't able to get crowdsec working with npm and omv, but I will further investigate that.


Thanks for trying to help! :)

If you actually look at the Nextcloud install guides, a manual/bare-metal install and using a VM are also recommended. The VM, however, is one made by HanssonIT (yes, I used to run that one too). Originally there was only a manual install, then they came out with the VM and finally the Docker image, but that whole process took several years. I have been using Nextcloud since it started (before it started really, as I used to run ownCloud), so I have seen all of those deployment options and have tried them all, but opted to make my own VM (using the manual install guide in a VM, as I didn't like the way the HanssonIT VM was built), and ultimately settled on using an LXC as the VM when OMV's KVM plugin supported it - but not before I tried the original Docker release they came out with. I had problems using Nextcloud Talk in the original Docker image, so I went back to the LXC route (using a Docker Postgres database and Docker Redis).

    You're right. Still, the Docker image is recommended if you want to use a reverse proxy.

    As for your NPM custom locations, all you are missing is the ip address or host name and port of the docker (use the same that you use on the main page of the virtual host)

    I tried setting up the custom locations that way. No matter how I do it: As soon as I add a custom location, the proxy host status in npm changes to 'offline'.

    Here is what I tried:

    - adding the custom locations and putting the IP of my server in the forward IP field.

- adding the custom locations and putting the IP of my server plus /remote/dav in the forward IP field (for caldav and carddav - as recommended here)

- adding the custom locations and putting the IP of my server plus /remote.php/dav in the forward IP field (for caldav and carddav)

    - trying port 80 instead of 11000 with all combinations

    - any combination of scheme http/https in the custom locations and in the details page

    - changing the advanced configuration from $server_name to $server and trying other configurations from here:

    Code
    location = /.well-known/carddav {  return 301 $scheme://$host/remote.php/dav; }
    location = /.well-known/caldav {  return 301 $scheme://$host/remote.php/dav; }
    location ^~ /.well-known { return 301 $scheme://$host/index.php$uri; }

    and

    Code
        location = /.well-known/carddav {
          return 301 $scheme://$host:$server_port/remote.php/dav;
        }
        location = /.well-known/caldav {
          return 301 $scheme://$host:$server_port/remote.php/dav;
        }

    and others from other sources I don't recall anymore.

The trusted proxies section of the Nextcloud config.php has to have an entry in it to allow access from the IP address that NPM is running on. In my setup, since I have KVM installed on my OMV system and I'm running Nextcloud as an LXC, it behaves as a completely independent computer and even has its own physical network port, so I have a LAN address for Nextcloud that is different from my OMV address. I can also use 192.168.122.1 as the proxy address because KVM sets up the 192.168.122.0/24 "default"/NAT network for its own use, which operates completely inside the system. I have a second virtual NIC in the VM that is attached to this network with a static IP.

As I am not running NC in Docker and it has been many years since my initial look at the Docker image, I am not exactly sure how to treat the proxies section for Docker, but my gut reaction says that if the NC Docker container is set up on a macvlan, it should get a LAN IP. I'm not sure it will work like that by itself, though, because you would be hairpinning network traffic on the same physical connection that NPM uses. Setting up a bridge for your OMV LAN should get around this, as it makes the network port behave kind of like a network switch. The alternative would be to set up NPM and NC on the same Docker network so they can address each other by container name instead of IP address. This is a feature of Docker networks, so if using IP addresses does not work, this may be a way to get around the problem. If you have KVM installed, then the 192.168.122.0/24 network also becomes available for "inside the box" connections with VMs (as I am doing with my LXC).

    Regarding the trusted proxies section I tried the following:

    First it looked like this:

    Code
      array (
        0 => '127.0.0.1',
        1 => '::1',
      ),

    - I added the IP of my server (2 => '192.168.xxx.xx',)

    - then I tried to find out the internal Docker IP of NPM and added that

    - then I changed npm's compose file to use the nextcloud-aio network, changed the hostname in npm to nextcloud-aio-apache and added jc21-npm as a trusted proxy. I could access and use Nextcloud via my domain but I still couldn't add any custom locations without the proxy host going offline and I still got 404 errors in the log when opening the contacts section in the Nextcloud web GUI.



When I undo all these changes and type https://mynextcloudaddress.net/.well-known/carddav in my browser, I get a page that says "This is the WebDAV interface. It can only be accessed by WebDAV clients such as the Nextcloud desktop sync client."

    I came to think: Doesn't that mean that the redirection is already working? Syncing my iPhone calendar with my nextcloud calendar works as well. So does syncing my contacts and syncing with GNOME Calendar and GNOME Contacts. So I'm wondering if this is actually my problem or if the proxy is already set up fine 'out of the box'.
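A quick way to check the redirection without a browser (the domain is a placeholder) is to look at the response headers:

    Code
    user@Desktop:~$ curl -I https://mynextcloudaddress.net/.well-known/carddav

If the response contains a Location header pointing to /remote.php/dav, the redirection is in place.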


The only problem I have is that the contacts app is generating 404 errors for all contacts that have no photo set up. As this seems to be an open bug, I'm wondering if changing anything in the npm configuration would stop these errors from showing up in the logs.


    After opening the contacts page in nextcloud my log file looks like this:

    Code
    [22/Mar/2024:17:26:13 +0100] - 404 404 - GET https mynextcloudaddress.net "/remote.php/dav/addressbooks/users/admin/kontakte-1/9CB5A33A-7966-4286-BBE2-44DDCB574A3D.vcf?photo" [Client 31.xx.xx.11] [Length 0] [Gzip -] [Sent-to nextcloud-aio-apache] "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:123.0) Gecko/20100101 Firefox/123.0" "-"
    [22/Mar/2024:17:26:13 +0100] - 404 404 - GET https mynextcloudaddress.net "/remote.php/dav/addressbooks/users/admin/kontakte-1/DEA49D70-C871-44D7-9D44-D7C15D54831E.vcf?photo" [Client 31.xx.xx.11] [Length 0] [Gzip -] [Sent-to nextcloud-aio-apache] "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:123.0) Gecko/20100101 Firefox/123.0" "-"
    [22/Mar/2024:17:26:13 +0100] - 404 404 - GET https mynextcloudaddress.net "/remote.php/dav/addressbooks/users/admin/kontakte-1/BD0C613F-46B9-4449-AFC9-6E5988FA9AA7.vcf?photo" [Client 31.xx.xx.11] [Length 0] [Gzip -] [Sent-to nextcloud-aio-apache] "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:123.0) Gecko/20100101 Firefox/123.0" "-"
    [22/Mar/2024:17:26:13 +0100] - 404 404 - GET https mynextcloudaddress.net "/remote.php/dav/addressbooks/users/admin/kontakte-1/44B00D57-6FCA-4E72-9EA7-9DBA4062E35E.vcf?photo" [Client 31.xx.xx.11] [Length 0] [Gzip -] [Sent-to nextcloud-aio-apache] "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:123.0) Gecko/20100101 Firefox/123.0" "-"
    [22/Mar/2024:17:26:13 +0100] - 404 404 - GET https mynextcloudaddress.net "/remote.php/dav/addressbooks/users/admin/kontakte-1/948D302A-F419-4125-9E68-3D6165FB8C01.vcf?photo" [Client 31.xx.xx.11] [Length 0] [Gzip -] [Sent-to nextcloud-aio-apache] "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:123.0) Gecko/20100101 Firefox/123.0" "-"
    [22/Mar/2024:17:26:51 +0100] - 304 304 - GET https mynextcloudaddress.net "/apps/richdocuments/settings/fonts.json" [Client 31.xx.xx.11] [Length 0] [Gzip -] [Sent-to nextcloud-aio-apache] "COOLWSD HTTP Agent 23.05.9.3" "-"


    Would this change if I set up npm differently? Syncing contacts and calendar with my iPhone and with desktop applications works very well.

It’s not that the fail2ban guide doesn’t work with Nextcloud; it has to do with making sure Nextcloud is not triggering errors in the NPM logs. The settings I have outlined in the NPM-fail2ban guide are simply using fail2ban to watch the NPM logs and ban failed attempts.

    Sorry. What I meant was: The two guides don't work together out of the box. Getting the nextcloud errors out of the NPM logs would be great!

    From what I understand, Nextcloud by design has some very strict security related configurations that are fine when deployed for direct internet access, but need to be compensated for when using a reverse proxy.

    My Nextcloud is running as an lxc with the settings I have outlined above, works fine and is not triggering 300 or 400 errors, but once again, I am not running it as a docker container. Running as an lxc it behaves like a separate computer/server but as a docker it is different and I don’t know what the container is doing differently. The extra NPM configs and Nextcloud config.php settings above are designed to not trigger those errors by allowing proxy access from the ip of NPM and allowing NPM to pass the “problematic” portions of Nextcloud through without triggering the errors.

I see. Running the nextcloud-aio container via Docker is the officially recommended way of using Nextcloud though. It seems strange that this causes errors... I thought the 400 errors came from this bug and not from bad configuration.


    I will try to copy what you did using my docker setup.

    I'm not sure how to set up the custom locations:

    What do I put in the Forward Hostname and Port fields? The same settings as on the details tab? (In my case <IP of my server>:11000) Or something else?

nextcloud config.php must include something like this. I have 2 trusted proxy addresses listed, the first being the actual ip of the server and the second being the internal KVM "gateway" ip for use completely "within the box", since I am using the KVM plugin to run the LXC, allowing me to have NPM point directly to a secondary static-ip nic that I have set up in the LXC sitting on that internal KVM network

    This was almost right already. Trusted proxies included only 127.0.0.1 though. I added my server's IP. I hope that's enough. I'm not sure which IP docker uses internally.
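To see which internal IP a container actually uses, something like this should work (the container and network names are guesses and depend on the compose files):

    Code
    docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' nginx-proxy-manager
    docker network inspect nextcloud-aio

The second command lists all containers on that network with their internal IPs.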


    Thanks a lot for your help and patience! I guess it seems possible for me to get this working after all. :)

    Phew! Thanks for your help. That looks a lot more complicated than what I have done so far. I don't think I understand what's happening there or that I am able to change that to work with my setup.


    I think I might try switching to crowdsec instead of fail2ban...

The recommendations for fail2ban in the official Nextcloud documentation don't seem to cover the use case of using more than just Nextcloud with one npm instance. My understanding of all of this is limited though.


chente : Maybe it is advisable to add a comment to your guide that using BernH 's guide with your nextcloud-aio guide (that's what I did) will lead to constantly being banned by fail2ban. Since you linked it in your npm post, one could assume they work well together if there's no comment on that. It's just that BernH's fail2ban configuration from that guide doesn't work with nextcloud...

Redirect codes are not catching incorrect login attempts, but are included because it is possible, although not likely, that hacker-like activity can use redirections to your server. This just protects against that kind of thing by only allowing direct intentional connections. Feel free to remove the 300 filters if you so desire.

    Hello, thanks for the answer and the explanation!

I just stumbled across this question when I was trying to find out why fail2ban keeps banning me every few minutes. The 300 filters are not the problem; I was just wondering why they were there.


For now I have to solve the bigger problem of making fail2ban and npm work with nextcloud... Right now, in order to use the Nextcloud web interface, I have to stop the fail2ban container before accessing it...

    Hello,


I have Nextcloud AIO set up with Nginx Proxy Manager and fail2ban. Big thanks to chente and BernH for the awesome guides (Nextcloud AIO and NPM with fail2ban). I did everything according to the guides. The setup was quite easy and it's running very nicely. :thumbup:


If I understand BernH 's instructions and the regex file correctly, his guide sets up a fail2ban filter that scans for all HTTP status codes from the 3xx and 4xx ranges in the npm log files.


    Before I get to my actual problem, I have one question: Why is it necessary to include the 3xx codes?

    One example: Apparently the (preinstalled) Nextcloud richdocuments-app checks for custom fonts every few minutes or so - and most of the time receives a 304 code ( [12/Mar/2024:23:48:52 +0100] - 304 304 - GET https mynextcloud.xxx "/apps/richdocuments/settings/fonts.json" [Client 123.456.789.0] [Length 0] [Gzip -] [Sent-to 192.168.178.15] "COOLWSD HTTP Agent 23.05.9.2" "-") -> in the last 24 hours alone there are more than 300 events like that in the npm log - all registering my IP with fail2ban.

    Aren't 3xx HTTP codes just redirections and not errors? (I'm a complete noob, so this is a genuine question. :) )


Anyway: Since everything went so smoothly, I started to use Nextcloud and imported all my contacts into the contacts app.

    That's when the problems started: I got banned by fail2ban every few minutes. It took me some time and research, but I think I found the problem:

Apparently there's an open bug: the contacts app requests a profile photo for every contact each time the app is used, and it gets a 404 code for every contact that has no photo set. This gets me banned instantly when opening contacts in Nextcloud. I also get banned without using the web interface, because apparently contact syncing (as I use it with the GNOME desktop and iOS) also generates 404 errors, just not as many as the web interface.


    For crowdsec there's a whitelist for this problem and for a similar bug of the file browser.


    I would like to stay with nextcloud-aio, since I very much like the approach. Is there any option to make my configuration work with fail2ban?

There are regex recommendations in the official documentation, and there is a nextcloud-aio fail2ban community container. This would apparently solve the problem for now. (I guess.) But I still want to use fail2ban with npm, because I plan on setting up other containers that might need to be published to the internet...


    Would crowdsec be an easier option? Does it work with my configuration?


    Any helpful advice is much appreciated!

    This is bad. It should be root root instead of johannes johannes. Have you created a shared folder using the root path of the pool?

    I don't know what you mean. :| sorry. Anyway:

This should fix it:

    Code
    chown root:root /srv/mergerfs/Archiv
    chmod 0755 /srv/mergerfs/Archiv

This fixed it! Thank you very much!! I still don't understand why it was working without mergerfs... I saw this as well (drwx------ 6 johannes johannes 4096 11. Mär 18:35 ..) but I thought it didn't mean much, because the permissions were the same on the (working) direct path to the drive...


    Doesn't matter now: Thanks a lot for your patience and your help! :) It works! :love:

    Everything looks good, it should work. Do any of the disks in the pool have an NTFS file system?

    No. It's all ext4.


    What is the output now of

    ls -al /srv/mergerfs/Archiv/Medien


    Code
    johannes@openmediavault:~$ ls -al /srv/mergerfs/Archiv/Medien
    insgesamt 56
    drwxrwsr-x   6 root     users     4096 22. Feb 13:28  .
    drwx------   6 johannes johannes  4096 11. Mär 18:35  ..
    drwxrwsr-x   7 root     users     4096 22. Feb 13:20  Bilder
    drwxrwsr-x   5 root     users     4096 22. Feb 12:40  Filme
    drwxrwxr-x   2 root     users     4096 19. Sep 2014  'Maik Tracks'
    drwxrwsr-x 731 root     users    36864 27. Feb 19:59  Musik