Posts by DerSpatz

    I have set up borgbackup to automatically back up certain folders on my NAS to a drive that is connected via USB to my Fritzbox.


    Right now, OMV connects to the Fritzbox via SMB, and the repository folder is mounted as a shared folder, so borgbackup can access the repository as a local folder.


    In the near future, I want to change the setup by plugging a USB drive into a friend's Fritzbox and connecting to that drive remotely, for an automated remote backup.


    I was wondering whether it would be better to save the borgbackup repository locally and sync it to the remote drive with rsync, or to keep the current solution.


    My reasoning behind this question is the assumption that borgbackup needs to access the repository during the deduplication phase of the backup process, and that having the files saved only remotely would therefore be significantly slower, as the internet connection speed would be a possible bottleneck. But I don't know if borgbackup works this way...
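
    For what it's worth, borg keeps a local chunk index cache (under ~/.cache/borg by default), so the deduplication phase should not have to re-read the whole remote repository on every run. If the local-repo-plus-rsync route still turns out better, a minimal sketch could look like this (repository and source paths are made up for illustration):

    Code
    # back up into a local repository first (all paths here are assumptions)
    borg create --stats /srv/backups/borg-repo::'{now}' /srv/data/important
    # then mirror the finished repository to the SMB mount of the Fritzbox drive
    rsync -a --delete /srv/backups/borg-repo/ /srv/fritzbox-remote/borg-repo/

    The obvious downside is that you need enough local disk space for a full copy of the repository.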

    I assume you chose the mainboard for its 4 SATA ports. Take a look at the Asrock J4125B-ITX as well.

    It only has 2 SATA ports and no M.2 slot for a Wi-Fi module, but in return it has a PCIe x16 slot. Into that you can put an LSI SAS9210 or 9211 (flashed to IT firmware; available on eBay for 20-30 €, not my auction, plus 5-10 € for cables and 3 € for an additional 40mm fan). That card then gives you 8 SATA connectors for the hard drives. This way you can connect the SSD directly to the mainboard and still have enough SATA ports to easily expand your NAS server if needed; after all, the case has room for 6 or 7 3.5-inch drives.

    Hello,


    I'd like to add more graphs to the system information tab. I'd also be interested in graphs for CPU clock, CPU temperature and HDD temperatures. Is there a way to add this information to the set of graphs that are already shown?


    I also changed the size of the existing graphs to 1200x200 pixels by adding the corresponding lines to /etc/default/openmediavault, but it seems that the resolution (as in the number of data points) of the graphs is still the same. Is there a way to increase the resolution?
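
    From what I could find so far, the number of data points seems to be fixed by the RRA definitions that are baked into the .rrd files when they are created, so resizing the image alone would not add values; if that is right, the databases would have to be recreated. I inspected the definitions like this (the path is an assumption based on my system and may differ):

    Code
    rrdtool info /var/lib/rrdcached/db/localhost/cpu-0/cpu-user.rrd | grep -E 'step|rows'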


    Best regards,


    Spatz

    Here are some ideas (the prices are roughly the same as in your build):


    - enclosure: If you don't need hot-swapping, get a used Bitfenix Prodigy (ca. 50 €) and this cage for up to 10 HDDs: https://www.coldzero.eu/bitfenix-prodigy-hdd-cage-10hdd-s (this is what I am using)


    - mainboard: get an Asrock J4125B-ITX (ca. 90 €). The CPU feature set is the same as with the J5040 (AES, VT-x etc.), and the slightly lower CPU frequency should not matter for your use case. It only has two SATA ports, but it has a PCIe x16 slot, so you can use more capable SATA cards


    - SATA card: get an LSI SAS 9210-8i or 9211-8i based card (ca. 30 €) and flash it to IT firmware. With the right cables, this will give you 8 additional SATA ports (this is what I am using)


    - fans: use at least three 120mm fans (2 front, 1 back) for good airflow, and attach an additional 40mm fan to the SATA card with cable ties (the card gets pretty hot, as it is designed for the forced airflow in a server case). One additional 120mm fan fits in the top of the case (the other slot is taken by the HDD cage). Use two Y-cables to connect all fans (top and back on one connector as exhaust, the two front fans on the other as intake), so you can adjust the air pressure inside the enclosure


    - SSD: better get a cheap SATA SSD as the system drive (and connect it to the mainboard); it should be more reliable than a USB drive. If you plan to run VMs etc., those can also run on the SSD, so do not buy one that is too small, and make sure it is *not* DRAM-less. 120 GB or more should be plenty.

    For me it worked with single quotes ( '*' ), and filenames with a space in the backup folders are backed up, too.

    But it only works with the cron job; when I try to trigger the backup job manually, it gives me the error above.

    %uuid% is the UUID of my unionfs pool, and that's where the data directory of my Nextcloud Docker container is mounted.


    Code
    /srv/e9e2026f-d6bf-4129-bd12-d8c68a56e075/nextcloud/data/*/files/backup: [Errno 2] No such file or directory: '/srv/e9e2026f-d6bf-4129-bd12-d8c68a56e075/nextcloud/data/*/files/backup'

    That's the error I get. When I replace the * with my Nextcloud username, it works.
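
    If I read that correctly, the manual trigger hands the path to borg literally, while the cron job goes through a shell that expands the * first. A small wrapper script that does the expansion itself should work around that (untested sketch; the repository path is an assumption, not my real setup):

    Code
    #!/bin/bash
    # let the shell expand the per-user backup folders before calling borg
    shopt -s nullglob
    DIRS=(/srv/e9e2026f-d6bf-4129-bd12-d8c68a56e075/nextcloud/data/*/files/backup)
    borg create /srv/backups/borg-repo::'{now}' "${DIRS[@]}"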

    Hello,


    I use OMV and have Nextcloud running in a Docker container. Nextcloud is used by close friends and family.

    I want to do regular backups with borgbackup to a USB HDD that is connected to my Fritzbox and mounted remotely, but the external HDD is not as large as the HDDs in my NAS, so I can only back up important files.

    For this reason, I want to offer my Nextcloud users the option to create a folder called "backup" in their home directory that is backed up regularly to the external HDD.

    As I do not want to create an entry for every user (and every new user), I'd like to automate this.

    I already tried to use a wildcard (/srv/%uuid%/nextcloud/data/*/files/backup), but this did not work.

    Is there a simple way to achieve what I want?


    Regards, Spatz

    I also had problems with a slow Nextcloud in the beginning. In my case, it was related to a bad installation of Collabora Online. If your installation does not recognize the built-in CODE server, it will become very slow.

    Try deactivating the two Collabora apps and see if it gets better. If it does, first reinstall the CODE server, then the Collabora Online app, and then check whether the CODE server is recognized.
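
    If your Nextcloud runs in Docker, the two apps can also be toggled from the command line with occ instead of the web GUI; something like this (the container name is from my setup, and the app IDs richdocuments / richdocumentscode are what the two apps are called on my installation):

    Code
    # disable Collabora Online and the built-in CODE server
    docker exec -u www-data nextcloud-app php occ app:disable richdocuments
    docker exec -u www-data nextcloud-app php occ app:disable richdocumentscode
    # after reinstalling the CODE server, enable both again in this order
    docker exec -u www-data nextcloud-app php occ app:enable richdocumentscode
    docker exec -u www-data nextcloud-app php occ app:enable richdocuments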

    When I try to add a cron job for nextcloud, I get this error mail:


    Could not open input file: /var/lib/docker/volumes/root_nextcloud/_data/cron.php


    The italicized part appears in the email exactly like that.


    What should I do?


    Never mind, I added a cron job for root with this:


    Code
    */5 * * * * docker exec -u www-data nextcloud php cron.php

    I had to change "nextcloud" to "nextcloud-app", but now it works.

    After trying a lot of different things, the solution was very simple:

    It seems like one of the preinstalled Nextcloud apps (I assume it was Talk) caused problems. After deactivating all unnecessary apps in Nextcloud, the web GUI was as responsive as it should be, and I could reactivate the apps afterwards without performance issues.


    To be more exact, I think the problem was this: Collabora Online slows down the system when it tries to find a server. Initially, Collabora can't find the built-in CODE server. Reinstalling the built-in CODE server solves this problem.

    I installed Nextcloud via Portainer (with MariaDB, nginx, Let's Encrypt and DuckDNS) and it works, but the web GUI is extremely slow:

    After clicking on a new page, there is a wait time of 30-60 seconds, and then the site suddenly loads very quickly.

    Syncing with the Windows app works without a problem after I set the chunk size to zero, so I assume the problem is related to the web GUI.

    Redis is already installed as suggested here in the forum, and PHP-FPM is already tuned.
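
    For completeness, this is how the Redis wiring can be checked and set through occ (container name as in my cron job post; the host name "redis" is an assumption for a typical compose stack):

    Code
    docker exec -u www-data nextcloud-app php occ config:system:get memcache.locking
    docker exec -u www-data nextcloud-app php occ config:system:set memcache.locking --value '\OC\Memcache\Redis'
    docker exec -u www-data nextcloud-app php occ config:system:set redis host --value redis
    docker exec -u www-data nextcloud-app php occ config:system:set redis port --value 6379 --type integer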


    The CPU is a Xeon E3-1230v2 with 8 GB RAM, the system is running on a SATA SSD, and the internet connection is 250 Mbit/s down / 50 Mbit/s up, so I assume there is no hardware bottleneck.


    I used this guide:

    https://dbtechreviews.com/2020…th-remote-access-and-ssl/

    But in the end, I left out the part where the {server} lines are added to nginx.conf (the {html} lines were added), as adding those broke the OMV web GUI; uploading large files worked even without them.


    When I look at the page traffic in Chrome (F12), the wait time does not show up in the graphs in the Network tab; all timings are in the millisecond range according to Chrome.


    What can I try to speed up the Web GUI of Nextcloud?

    I want to use Nextcloud and DuckDNS in Portainer, and I used this tutorial: https://dbtechreviews.com/2020…th-remote-access-and-ssl/


    Everything worked so far, but I wanted to change some configs, and now I can't log in to Nextcloud via the web GUI, so I wanted to remove everything and start from scratch.

    After removing Docker and Portainer in the OMV web GUI and reinstalling them, everything is as it was before.


    How can I fully remove everything Docker- and Portainer-related to start from scratch?
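
    In case it helps the next person, a full cleanup would look roughly like this on a Debian-based OMV install (destructive sketch, package names assumed from a standard docker-ce install; it deletes all containers, images and volumes, including the Portainer data):

    Code
    systemctl stop docker
    apt-get purge -y docker-ce docker-ce-cli containerd.io
    rm -rf /var/lib/docker   # all containers, images and volumes live here
    rm -rf /etc/docker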

    Well, my system has a Xeon E3-1230v2 with AES support, so I assumed I could use LUKS without losing (much) performance.

    Of course, as long as the system is unlocked, it is similarly easy (or hard) to compromise as a system without LUKS. But I thought of it as a small extra layer of security for certain situations, with the only cost being the time for unlocking the drives after a reboot, which should not happen too often once the system runs stably...


    But because I share your opinion, I decided to ditch LUKS in favor of UnionFS in my case.

    It seems to be a bug when using LUKS and mergerfs together: you can't use both at the same time, as some of you already mentioned.


    Wouldn't it be better to include a warning in OMV-extras that you can't use both at the same time? Then other users would be less likely to run into this problem...

    Okay, I got the driver to compile after installing the Proxmox kernel via OMV-extras.


    Now, where should I put the r8152.ko, and what do I have to do so that it gets loaded automatically on startup?


    EDIT: After installing linux-headers, the driver also compiles on kernel 5.9. make install returns this:

    Code
    rmmod r8152
    make -C /lib/modules/5.9.0-0.bpo.5-amd64/build M=/mnt/sdd2/r8152-2.14.0 INSTALL_MOD_DIR=kernel/drivers/net/usb modules_install
    make[1]: Entering directory '/usr/src/linux-headers-5.9.0-0.bpo.5-amd64'
    INSTALL /mnt/sdd2/r8152-2.14.0/r8152.ko
    DEPMOD 5.9.0-0.bpo.5-amd64
    Warning: modules_install: missing 'System.map' file. Skipping depmod.
    make[1]: Leaving directory '/usr/src/linux-headers-5.9.0-0.bpo.5-amd64'
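
    If I read the warning correctly, make install already copied the module into the right place but skipped rebuilding the module index, so the remaining steps would be something like this (untested sketch):

    Code
    # rebuild the module index that make install skipped
    depmod -a
    # load the driver now, and register it for loading at boot
    modprobe r8152
    echo r8152 >> /etc/modules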