Configuring borgbackup schedule

  • Hello,


    I have set up borgbackup repo and archive, but: how do I set backup schedule now?

Am I supposed to create jobs in Scheduled jobs with borgbackup command-line invocations? Or is there a UI for this?

    • Official Post

    how do I set backup schedule now?

When you create the archive, it creates a job that runs every hour. The archive's retention parameters tell it how many hourly, daily, etc. copies to keep. What are you trying to schedule? If you only want to run it at certain times, then you will have to do that manually. It is designed to run hourly.


Am I supposed to create jobs in Scheduled jobs with borgbackup command-line invocations?

    No.

    omv 8.0.10-2 synchrony | 6.17 proxmox kernel

    plugins :: omvextrasorg 8.0.2 | kvm 8.0.6 | compose 8.1.3 | cterm 8.0 | borgbackup 8.1.5 | cputemp 8.0 | mergerfs 8.0 | scripts 8.0.1 | writecache 8.1


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hello,


Oh, I get it now; it wasn't clear to me that just creating the archive sets it to run hourly.


I want to maintain a backup of a 150+ GB body of images/videos that are rarely edited or deleted; every few days or so, new ones are added.

The backup is supposed to enable recovery in case some deletions or edits need to be reverted.

    Currently it is set to 0 hourly, 3 daily, 2 weekly, 3 monthly and 0 yearly backups.
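For reference, those settings roughly correspond to `borg prune` retention flags. The snippet below only prints the equivalent command rather than running it, since the plugin does the pruning itself; the repository path is a made-up placeholder:

```shell
# Retention flags matching the settings above (0 hourly, 3 daily, 2 weekly,
# 3 monthly, 0 yearly). /srv/backup/borg-repo is hypothetical.
KEEP_ARGS="--keep-hourly=0 --keep-daily=3 --keep-weekly=2 --keep-monthly=3 --keep-yearly=0"

# echo instead of executing: the plugin manages pruning on its own schedule.
echo borg prune $KEEP_ARGS /srv/backup/borg-repo
```

Dropping the `echo` would actually prune the repository, so only do that against a repo you manage by hand.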

    • Official Post

I want to maintain a backup of a 150+ GB body of images/videos that are rarely edited or deleted; every few days or so, new ones are added.

I use it for the exact same case as well. Borg is very good in this situation because of checksums and dedup, so running it every hour shouldn't be a problem.


  • Oh that's great!


Just one more thing that comes to mind now: does borgbackup wake the disks if there has been no update to the source directory since the last check? I have them spin down after 30 minutes of inactivity, and it doesn't make much sense if they are woken every hour.

    • Official Post

does borgbackup wake the disks if there has been no update to the source directory since the last check?

Probably. I don't spin down my disks, but since it has to read the files, that is hard to do without waking them.
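If the hourly wake-ups bother you, one workaround (outside the plugin; this is not a built-in option) is a wrapper script that keeps a timestamp file and skips the run when nothing is newer than it. A minimal sketch, using a temp directory as a stand-in for the real share:

```shell
#!/bin/sh
# Stand-ins for the real media share and state file -- adjust for your setup.
SRC=$(mktemp -d)
STAMP="$SRC/.last-run"

touch "$SRC/photo.jpg"    # pretend new content arrived

# Back up only if some file is newer than the stamp (or no stamp exists yet).
if [ -e "$STAMP" ] && [ -z "$(find "$SRC" -type f ! -name .last-run -newer "$STAMP" -print -quit)" ]; then
    echo "unchanged: skipping backup"
else
    echo "changed: running backup"    # a real script would call borg create here
    touch "$STAMP"
fi
```

Caveat: `find` still has to stat the source tree, so this only avoids a spin-up if that metadata is cached or the source disk stays awake anyway; it mainly spares the backup disk.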


I am trying to achieve some "efficiency" or "ease" on my backup HDD by not having multiple backups happening at once. Same issue with rsnapshot.


Is there a way to offset the start time of each archive job, or to explicitly set the time for the daily/weekly/monthly runs, so that there are a few hours between different jobs writing to the same disk?

    NAS Spec 👇

    • Official Post

Is there a way to offset the start time of each archive job, or to explicitly set the time for the daily/weekly/monthly runs, so that there are a few hours between different jobs writing to the same disk?

    Nope.
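(For completeness: if you managed jobs yourself via cron, i.e. OMV's Scheduled jobs, rather than the plugin, staggering would just be a matter of the cron time fields. A hypothetical `/etc/cron.d` fragment with two jobs to the same backup disk, two hours apart; all paths, repo names, and the user are placeholders:)

```shell
# Hypothetical /etc/cron.d entries -- not generated by the borgbackup plugin.
# 02:00 -- photos job
0 2 * * * root borg create /srv/backup/repo-photos::photos-{now} /srv/photos
# 04:00 -- videos job
0 4 * * * root borg create /srv/backup/repo-videos::videos-{now} /srv/videos
```

`{now}` is a real borg archive-name placeholder; avoid `%`-based strftime forms inside crontab lines, where `%` is special.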


So with that in mind, are your backup jobs to a specific disk all just happening at once, writing files in random order? I'm trying to understand how the experts conceptualize this.


    • Official Post

are your backup jobs to a specific disk all just happening at once, writing files in random order? I'm trying to understand how the experts conceptualize this.

I'm not sure what you are trying to figure out. I have a few borg jobs and they run at the same time to the same disk. Since I ran the archive jobs manually the first time, serially, the subsequent backup jobs were very short, and it made very little difference that they ran at the same time. And I don't know what you mean by "writing files in a random order". Borg writes to an archive, not really files, and I don't really know what difference the order makes.


...I don't know what you mean by "writing files in a random order".

I meant that if backups A, B, C, D, and E are running at the same time, they are fragmenting data across the disk platter. Logically they are writing to their own archives, but physically, the write head is fragmenting (or is it?). I get your point about doing the first backup manually and subsequent jobs having relatively low impact. Perhaps I am overthinking optimal HDD health; I thought it would be better if jobs ran one after the other.


    • Official Post

I meant that if backups A, B, C, D, and E are running at the same time, they are fragmenting data across the disk platter.

This is a Windows way of thinking, but even if it were true, what is the problem? Borg writes and deduplicates blocks of data, not really files.

    Logically they are writing to their own archive, but physically, the write head is fragmenting (or no?)

Even if only one backup job were running, the drive isn't writing one sequential stream of data. Are you worried about performance? I'm not sure what the concern is.

    Perhaps I am overthinking optimal HDD health and thought it would be better if jobs ran one-after-the-other.

You definitely are overthinking it. Do you think only one process writes to the OS drive at a time? On Windows, almost everything is on one drive. The drive will only write as fast as it can, and the head is bouncing back and forth between locations anyway. You aren't going to prematurely kill a drive by running parallel jobs. I have three 12-year-old WD Red Pro drives in my primary server, and they have been running parallel jobs their entire life.


Good to know. I haven't used an HDD outside of backups since 2008. Back then I remember hearing the stress of fragmented drives constantly growling when reading/copying large amounts of data. I aimed to be as tidy as possible by copying things sequentially whenever I could, and I still do this with SSDs by habit; it's why I primarily copy with queueable tools like Total Commander.


Anyway, it's clearer now, and I'll be less compulsive about it since I'm doing the initial backup manually. BTW, it's an awesome plugin in OMV... I don't think I would have tried using Borg from the CLI... whew.


    • Official Post

    Back then I remember hearing the stress of fragmented drives constantly growling when reading/copying large amounts of data.

    I have heard plenty of drives do that with a single stream. Drives are also much faster and last longer than they did in 2008.

I still do this with SSDs by habit; it's why I primarily copy with queueable tools like Total Commander.

I do a lot of perf testing at work to figure out how many parallel copies we can run to get the most performance out of something. It is never one stream, and I have never worried about the device failing. I would worry more about using drives older than 5 years (yes, I know I have old drives, but I have plenty of backups).


