rsync scheduled job failing

  • Hi,

    I have two drives in my helios4:

    1. WD Red 4TB (Data drive)
    2. Seagate Ironwolf 4TB (Backup drive)

    As per the OMV getting started guide, I have a scheduled job with the following command to back up from my data drive (WD Red) to my backup drive (SG Ironwolf):

    rsync -av --delete /srv/dev-disk-by-label-WDRed4TB/ /srv/dev-disk-by-label-SGIronWolf4TB/


    Normally I have this disabled and enable it just before I want it to run on the cron trigger of 0 14 * * 5. Occasionally I run it manually instead, as I attempted to do last night. Both options (letting the cron trigger run the task, and running it manually) have worked without issue previously.
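    For reference, the fields of that cron trigger break down as follows, so the job fires at 14:00 every Friday:

    ```shell
    # ┌ minute        (0)
    # │  ┌ hour       (14)
    # │  │  ┌ day of month (any)
    # │  │  │ ┌ month      (any)
    # │  │  │ │ ┌ day of week (5 = Friday)
    # 0 14  * * 5    ->  runs at 14:00 every Friday
    ```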


    Last night attempting to run the scheduled job manually gave me the following error:

    I suppose I should also mention that the scheduled job is configured to run as root, so there shouldn't be any permission issues.

    Does anyone know what's gone wrong and how I can fix it?

    Thanks :)

    • Official Post

    Hmmm, how is it possible to rsync the whole mount point of the filesystem? OMV only allows rsyncing directories (shared folders) below the filesystem mount point. IMO the .aquota* files are in use by the system, so they can't be overwritten.

    You may check whether the target filesystem is in read-only mode (the Linux kernel remounts a filesystem read-only when it detects filesystem problems).
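    One quick way to check this from the command line; this is only a sketch, and you would substitute the backup drive's actual mount point (e.g. /srv/dev-disk-by-label-SGIronWolf4TB from the commands above) for / below:

    ```shell
    # Print the mount options of the filesystem containing a path.
    # "ro" among the options means the kernel has it mounted read-only.
    findmnt -no OPTIONS /    # replace / with the backup drive's mount point

    # The kernel log usually records why a filesystem was remounted read-only
    # (dmesg may need root; "|| true" keeps the pipeline from failing on no match).
    dmesg 2>/dev/null | grep -i -E "remount|read-only" || true
    ```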

  • Thanks for responding.

    Hmmm, I'm not sure how to answer your first question.... I guess I'm just following what was outlined in the Getting Started - OMV5 Guide found here.

    Specifically this section

    Have I done something wrong?

    How do I check if the target filesystem is in read-only mode?

  • Am I right that you are using a cron job to run the rsync job?

    Correct :). I am using a CRON job to run it.

    I'm doing it that way for two reasons:

    1. It was recommended in the Getting Started Guide
    2. The guide said it was an efficient approach to back up the entire drive, as opposed to backing up each shared folder separately.

    Has running it as a CRON caused the issue I'm experiencing?

  • Thanks very much for that suggestion.

    Would you recommend just updating the scheduled task I have with an appropriate exclude flag?

    Editing my original command to be as below doesn't seem to work...

    rsync -av --delete --exclude '.aquota*' /srv/dev-disk-by-label-WDRed4TB/ /srv/dev-disk-by-label-SGIronWolf4TB/


    Or would you recommend using the Rsync utility and backing up per shared folder as you mentioned above?

    If I go with this option, will I need to remove the contents of my backup drive and start again?

  • Fair enough, I'm not sure that wildcards are supported either. I've either not set the command up properly, or it's not supported.


    I'm happy to switch to the built-in rsync jobs and I understand that the target directory structure will be different. I have two shared folders, so I'll need two backup jobs which is fine. Due to the different directory structures should I clean out the backup drive before setting this up?
    Just wanting to approach this in the best possible way.


    Also, it's worth asking whether the official setup guide should be updated to recommend the built-in rsync jobs as opposed to a scheduled task.

    • Official Post

    I'm happy to switch to the built-in rsync jobs and I understand that the target directory structure will be different. I have two shared folders, so I'll need two backup jobs which is fine. Due to the different directory structures should I clean out the backup drive before setting this up?

    If not already done, you need to create shared folders for the source folder and the target folder. Shared folders for the source folders are probably already there. When creating the shared folders for the target folder, you can point them to the existing folder.

    Then you should be able to use the existing structure.
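    In terms of what those built-in jobs end up doing, the per-shared-folder equivalents of the original single command would look roughly like the sketch below. This is only an illustration: Media and Documents are the folder names mentioned later in this thread, and the actual commands are generated by the plugin from the shared folder definitions.

    ```shell
    # One rsync per shared folder, with source and target paths mirroring each other.
    rsync -av --delete /srv/dev-disk-by-label-WDRed4TB/Media/ \
          /srv/dev-disk-by-label-SGIronWolf4TB/Media/
    rsync -av --delete /srv/dev-disk-by-label-WDRed4TB/Documents/ \
          /srv/dev-disk-by-label-SGIronWolf4TB/Documents/
    ```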

    I'm not sure that wildcards are supported either. I've either not set the command up properly, or it's not supported.

    Seems they are supported. Have a look at the examples here: https://linux.die.net/man/1/rsync
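    For what it's worth, the earlier exclude attempt may have failed for a simpler reason: on Linux the disk-quota files are named aquota.user and aquota.group, with no leading dot, so a pattern of '.aquota*' never matches them. A hedged sketch of a corrected command (lost+found is excluded too, since it is filesystem bookkeeping rather than data):

    ```shell
    # Mirror the data drive but skip quota bookkeeping files and lost+found.
    # Note the pattern is 'aquota.*' (no leading dot), matching aquota.user/aquota.group.
    rsync -av --delete \
          --exclude 'aquota.*' \
          --exclude 'lost+found' \
          /srv/dev-disk-by-label-WDRed4TB/ /srv/dev-disk-by-label-SGIronWolf4TB/
    ```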

  • Thanks! I'll set up the appropriate shared folders now on my target and point them to the existing folders on my backup drive.


    Something I'm not sure about is whether to leave the following files/folders on my backup drive. I didn't create them on the source drive, and would guess that OMV created them by default...

    The screenshot below shows the contents of the top level of my source drive. The Media and Documents folders are the folders I created for sharing. The AppData and lost+found folders are empty and I did not create them. Should I remove them from the backup drive as well as the aquota.user and aquota.group files? I would guess that if I removed them and this drive became my source drive down the track that OMV would re-create them as needed?

    • Official Post

    The /AppData folder was probably created when you played around with Docker. Just a guess ;)

    About the lost+found folder you can read here: https://unix.stackexchange.com…-folder-in-linux-and-unix

  • Right you are :D! I was playing with Docker. Very good guess.

    Ahhh OK. Thanks for the link.


    And I very much appreciate your patient response :).


    Big thank you to both macom and votdev for your help through this. I have successfully set up and run backups using the built-in rsync tool. Much appreciated!

    • Official Post

    As per the OMV getting started guide, I have a scheduled job with the following command to back up from my data drive (WD Red) to my backup drive (SG Ironwolf):

    rsync -av --delete /srv/dev-disk-by-label-WDRed4TB/ /srv/dev-disk-by-label-SGIronWolf4TB/

    To get back to the original post, this is straight out of crashtest's Getting Started with OMV Guide under "Full Disk Mirroring".

    Hmmm, how is it possible to rsync the whole mount point of the filesystem? OMV only allows to rsync directories

    I am certainly no expert with Rsync, or anything Linux for that matter, and maybe it is not possible from the Rsync plugin. I guess that is why it is done from the Scheduled Jobs tab. It works. I have been using this backup plan for a year or more now. The beauty of this backup is that if something goes wrong with your data drive, you only have to repoint your existing shares to the backup drive (plus a few other details like repointing your Docker folder in the OMV Extras tab to the new disk) and you are up and running in a matter of minutes. This too just works. It has saved my data several times.


    Now, how the .aquota* files got into the picture, I cannot say, but under "normal" circumstances this backup plan is a good one, or crashtest is a poached egg.


    • Official Post

    If you use a clever shared folder structure with sub-folders, then you need to create at least one backup job, and all shared folders below it will be included, too.

    This is true.

    But to achieve a full data disk backup that's easy to restore, with the relevant information visible in the GUI, the single command line executed as a scheduled task (in the GUI) is easy to explain as a step-by-step walkthrough. The focus of the "Getting Started" guide is a simple full data drive backup, and an easy method to cut over to the backup drive and restore shares after a data drive failure.


    Also, as the process is laid out in the guide, there's no need to open a terminal to get on the command line.
