borgbackup plugin - minor issues with remote backup

  • Two hopefully quick questions here -


    1. When checking the remote repo using "borg info 'ssh://user@remoteserver:port/backup/repo'" I get the following message back: "No key file for repository...found in.../backup/repo". Is this an issue? I use the OMV webgui admin to perform the backup with the borgbackup plugin, and I have searched my system but cannot find a key file for the repo.


    2. I use zstd compression and noticed that the actual repo size on the remote server is smaller than the output from borg. For instance: backup directory size = 2GB, borg output says the compressed archive is 1.8GB, yet when I check the archive size on the server it says something like 1.4GB. Is this normal or is my backup not completing all the way? When I mount the repo it seems all the files are there....


    For some further background: I followed the awesome guide by auanasgheps for creating a remote borg backup repo & archive using the borgbackup plugin, and everything seems to be working for the most part. I am backing up from OMV via a WireGuard tunnel to a remote server (Raspberry Pi). I use the OMV webgui admin user to perform the borgbackup via the plugin and OMV gui, and the ssh key of one of my users, "user1", to ssh to the remote server through the tunnel.


    All help and advice is appreciated. Thank you!

  • 1. When checking the remote repo using "borg info 'ssh://user@remoteserver:port/backup/repo'" I get the following message back: "No key file for repository...found in.../backup/repo". Is this an issue? I use the OMV webgui admin to perform the backup with the borgbackup plugin, and I have searched my system but cannot find a key file for the repo.

    Have you created the folder on the receiving server and initialized the repo from the plugin?
    When properly initialized, the message will go away and the repo will be ready to accept backups.
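
    For reference, the CLI equivalent of what the plugin does is roughly this (the repo URL and encryption mode here are only examples; the plugin may pick different defaults):

    Code
    # create /backup/repo on the receiving server first, then initialize it from the client
    borg init --encryption=repokey 'ssh://user@remoteserver:port/backup/repo'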


    2. I use zstd compression and noticed that the actual repo size on the remote server is smaller than the output from borg.

    Correct. zstd is very good, and if you have data that can be compressed, it will be. Borg also deduplicates, so the space actually used in the repo can end up smaller than the compressed size it reports for an archive.
    You should periodically check your backups using the check feature; it will ensure that your data is backed up and safe.
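
    If you want to run the check by hand, it looks roughly like this (repo URL is an example; --verify-data is optional and much slower because it re-reads every chunk):

    Code
    # verify repository and archive consistency
    borg check 'ssh://user@remoteserver:port/backup/repo'
    # optionally also re-read and verify the actual data chunks
    borg check --verify-data 'ssh://user@remoteserver:port/backup/repo'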


  • @auanasgheps thanks for the reply! I believe I have everything sorted although I will comment for future readers as I might have miscommunicated in my initial posting. Please feel free to correct me if something looks incorrect!


    1. When you initiate the backup from the OMV webgui as admin, borg uses root to initialize the repo. This stores the repo key file in /root/.config/borg/keys/, so if you ssh into your OMV server (borg client) as a regular user and try to run borg commands, borg will not be able to find the key file for the repo on the remote borg server and will give the error message from my original post. If you want a regular user to be able to access the repo on the remote server, you can use "borg key export ssh://server@address:port/backup/repo /user/home/.config/borg/keys"; this will allow a regular user to run borg commands and/or access the remote repo via ssh (a rough sketch is below). TBH I didn't figure out how to use the plugin as a regular user in the OMV gui, so maybe someone will chime in here? One last thing - I think it is important to know how to use "borg key export" and where the key file is located so you can make a backup of it, just in case.
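
    Roughly what I mean, as a sketch (the repo URL and the /tmp/repo.key path are just placeholders I picked):

    Code
    # as root (the user that initialized the repo): export the repo key to a file
    sudo borg key export 'ssh://server@address:port/backup/repo' /tmp/repo.key
    # as the regular user: import that key into the user's own borg key store
    borg key import 'ssh://server@address:port/backup/repo' /tmp/repo.key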


    2. The zstd compression is very good, and I used the check feature to ensure the backups are being made. Borg is awesome!


    3. This wasn't in my original post but maybe it should be included somewhere - when performing the initial backup, my ssh tunnel to the remote server would drop before the backup completed. This is not a major issue as borg picks up where it left off, but it is semi-annoying when trying to back up large files on the initial backup. What I believe was happening: when borg got to a large file to compress (especially on outdated hardware like my own), it would not send anything through the ssh tunnel during compression, and eventually the server would close the connection due to inactivity. In order to get the large initial backup to complete in one shot, I had to change the ssh_config on the client and add:

    Code
    # client-side ssh_config entry ("address" = the remote backup server's hostname or IP)
    Host address
        ServerAliveInterval 60
        ServerAliveCountMax 60

    which has the client send a keepalive probe every 60 seconds and tolerate up to 60 missed replies, i.e. roughly 3600 seconds (an hour) before the connection is dropped. I also changed the sshd_config on the remote server to keep the tunnel alive by uncommenting and setting:

    Code
    # server-side sshd_config: probe the client every 60 s, allow up to 60 missed replies
    ClientAliveInterval 60
    ClientAliveCountMax 60

    which likewise lets the server tolerate about 3600 seconds of client inactivity. Obviously, reload ssh on the server and the client after changing the configs. I'm unsure whether it is necessary to add a keepalive on both ends, but I do not believe it hurts. Once the initial backup is complete, the recurring backups are much less time intensive and you can change the ssh keepalive back to the defaults.
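
    In case it helps, on a systemd-based system the reload on the receiving server is usually something like this (the service may be called ssh or sshd depending on the distro):

    Code
    # reload the SSH daemon after editing sshd_config
    sudo systemctl reload ssh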

  • if you ssh into your omv server (borg client) using a regular user

    I never experienced this issue since I only use root to ssh into my OMV server :)


    Regarding SSH, I believe you would only need to change the config on the receiving backup server. Good find, I will add it to my guide.

