Why does my OMV NOT update changes made on my Hard Drive SORTED THANKS

  • Hi all,


    Does anybody know why my OMV does not realise when I have deleted stuff off the drive which I am backing up onto my OMV server?


    In this case, I just wanted to see whether, after my first big install (which I got working), OMV would find any new flac albums added to the drive I want to back up and then replicate them on OMV. It did, so far so good :)


    Then I deleted one of my flac albums on my hard drive and ran rsync, and to my surprise it didn't delete it in OMV; it's still there.


    I'm sure there's a simple answer or a toggle I need to change, but I have looked around the forum and can't see this issue anywhere.


    Hope there is a simple fix that someone might be able to help with...


    Thanks in advance...

  • Then I deleted one of my flac albums on my hard drive and ran rsync, and to my surprise it didn't delete it in OMV

    Did you use the command line for your rsync job or the USBbackup plugin? There is a command-line switch to delete target files which don't exist anymore in the source file list.
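
    For reference, that switch is --delete on the command line. A minimal sketch, assuming placeholder paths that are not taken from this thread, could look like this:

        # paths below are placeholders, not the OP's actual shares
        # mirror the source into the backup and remove files in the backup
        # that no longer exist in the source
        rsync -av --delete /srv/source/Music/ /srv/backup/Music/

        # run with -n (--dry-run) first to preview what would be copied or deleted
        rsync -avn --delete /srv/source/Music/ /srv/backup/Music/

    In the OMV web interface the rsync job has a corresponding delete option, which seems to be the toggle the next reply ends up enabling.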

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Hi cabrio_leo


    Thanks for the reply.
    I think I understood what you said. In layman's terms, here is what I just did:
    I went to my rsync job and opened it by hitting edit.
    I looked down the list of options (the window for that is very small) and saw a toggle that says more or less what you describe. Now the file I nominated to be deleted is gone. DUHHHH... Sorry, I should have looked, but my excuse as a beginner is that finding my way through all this stuff is a long learning curve. I'm SLOWLY getting there thanks to guys like you, so thanks, it's appreciated. Job sorted!!

  • to back up what you have?

    To back up what you have is one thing. To delete from the backup what you no longer have is another. A lot of people prefer to keep everything they have ever had and to retain older files in the backup. This may be the reason for the OMV default setting.


    In my personal opinion this leads to a lot of rubbish in the backup, which makes a restore difficult.

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • In my personal opinion this leads to a lot of rubbish in the backup, which makes a restore difficult

    If a 'backup' deletes at the destination what has been deleted at the source then it's not a backup but just a primitive clone. It's almost as useless as RAID-1 since once you accidentally delete stuff it's gone where it should have been preserved too.


    Of course, just syncing stuff into one large pool is not the smartest approach. Even with rsync, better variants have existed for ages (incremental rotating backups that use as little disk space as possible by making use of hardlinks; see http://www.admin-magazine.com/…s/Using-rsync-for-Backups for example).
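
    A minimal sketch of that hardlink-based rotation, with made-up directory names and paths:

        # all paths here are illustrative placeholders
        # every run writes into its own dated directory; files unchanged since the
        # previous run are hardlinked instead of copied, so they take almost no extra space
        TODAY=$(date +%Y-%m-%d)
        rsync -av --delete --link-dest=/srv/backup/latest \
            /srv/source/Music/ "/srv/backup/$TODAY/"
        # repoint "latest" at the newest run so the next backup links against it
        ln -sfn "/srv/backup/$TODAY" /srv/backup/latest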


    But it's 2018 now. Why on earth deal with such crude approaches? Simply let the filesystem do the job. ZFS and btrfs allow for snapshots, and those can easily be sent to another location with the same commands (send/receive).
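
    As a rough sketch with ZFS (pool and dataset names are made up for illustration; btrfs has its own send/receive equivalents):

        # take a read-only snapshot of the dataset
        zfs snapshot tank/music@2018-05-01
        # replicate the whole snapshot to another pool
        zfs send tank/music@2018-05-01 | zfs receive backup/music
        # later runs only transfer the difference between two snapshots
        zfs send -i tank/music@2018-05-01 tank/music@2018-06-01 | zfs receive backup/music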


    Yeah, I know that's not a solution for the OP's problem. It just makes me sad to see people still struggling with anachronistic approaches from last century instead of enjoying technology invented in this century.

  • If a 'backup' deletes at the destination what has been deleted at the source then it's not a backup but just a primitive clone. It's almost as useless as RAID-1 since once you accidentally delete stuff it's gone where it should have been preserved too.

    I don't share this opinion. There is a big difference between a RAID-1 approach and an rsync job with the --delete option.
    In a RAID configuration the change is applied immediately on the other disks. With an rsync job or USBbackup I can decide when I connect the disks to the NAS and when the rsync task starts.


    Deleted files are saved to the recycle bin for some days in my setup, and I make ZFS snapshots too. So why not make a "primitive clone" with rsync and the --delete option to have a further exact copy of my pool? For me this is no contradiction.


    In the past I have successfully restored my pool from scratch with this rsync backup, after expanding the pool with more disks. That is why I want a clone of my pool. I have no second system with a backup ZFS pool to which I could make a "ZFS send".
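
    A restore of that kind is basically the same rsync call with source and destination swapped; as a sketch with made-up paths:

        # copy everything from the backup disk back into the freshly rebuilt pool
        rsync -av /srv/backup/Music/ /srv/tank/Music/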

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod
