Rsnapshot needs too much space on target

  • Hello


    I use rsnapshot to back up some shares to an SMB share mounted on the server, with 15-25 MBit/s between the two locations.
    After one week of constant backups I compared the local data with the last three backup folders.


    Share1 006ff57d: running for 55 hours, 109 GB, 75211 files
    Share2 91f3b779: 25 hours, 95 GB, 47074 files
    Share3 b39c5939: 17 hours, 24 GB, 42075 files
    Share4 a4035633: 13 hours, 82 GB, 33977 files
    Share5 1676f88e: 12 hours, 37 GB, 23963 files
    Share6 87612ea1: 20 minutes, 4.5 GB, 1043 files | 42 GB, 8344 files
    Share7 5f5d29ee: 8 minutes, 927 MB, 367 files | 8.3 GB, 2933 files
    Share8 2c4a4247: 50 seconds, 400 MB, 39 files | 3.3 GB, 312 files



    I only used the web frontend to configure OMV (screenshot attached).
    I also attached the log and a screenshot of the network adapter: 6.5 TB transferred from the NAS, 99% of it backup traffic.


    Please help.

    • Official post

    It is difficult to figure out the actual size of rsync snapshots. The reason is that several snapshots may share copies of the same file, if the file didn't change between snapshots. That is why you use rsync snapshots.


    But the snapshots look as if they each have a separate full copy of the whole filesystem. The best way to estimate the combined size may be to look at the free space left on the filesystem and compare that to its total space.
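
    One rough way to see how much the snapshots really share (assuming the usual rsnapshot naming of daily.0, daily.1 and so on, and /srv/backup as an example backup root) is to let du count hard-linked files only once across the whole set and compare that with the per-snapshot sizes:

    Code
    # size of each snapshot counted on its own
    du -sh /srv/backup/daily.0
    du -sh /srv/backup/daily.1
    # size of all snapshots together; files hard-linked between them are counted once
    du -shc /srv/backup/daily.*

    If the combined total is close to the sum of the individual sizes, the snapshots are not sharing files via hard links.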


    Some filesystems don't support the hard links that rsync uses to "reuse" unchanged files from the latest snapshot in the next snapshots. If you stick to Linux/POSIX filesystems you should be OK. I think NTFS works, but it may need some special flags to correctly identify matching files.
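
    A quick way to test whether the backup target supports hard links at all (the mount point is just an example):

    Code
    cd /srv/backup                              # the mounted backup target
    echo test > linktest.a
    ln linktest.a linktest.b                    # create a hard link
    stat -c '%h %i %n' linktest.a linktest.b    # both should show link count 2 and the same inode
    rm linktest.a linktest.b

    On an SMB/CIFS mount the ln call will usually fail, which already explains why every snapshot ends up as a full copy.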



    If you make many changes to the main filesystem then all changed files will have to be synced the next time. That can take a while.


    If you want to improve rsync speeds, don't use SMB/CIFS; use NFS. Don't use NTFS; use ext4 or some other Linux-native filesystem. Don't sync over the network; sync to a local HDD over SATA.
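
    For example, an NFS mount of the backup target could look roughly like this in /etc/fstab (hostname and paths are placeholders, not your setup):

    Code
    # /etc/fstab
    backupserver:/export/backup  /srv/backup  nfs  defaults,hard,vers=4  0  0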


    I sync over NFS to/from ext4 filesystems. The transfers are fast and saturate my GbE network.


    I use rsync from scripts and as cron jobs, not from the OMV GUI or plugins. I feel I get better control that way. But that is most likely because that is how I'm used to doing it. YMMV.
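
    Just as an illustration (not my exact setup, all paths and the rotation depth are placeholders), a cron-driven rsync snapshot with --link-dest can be as small as this:

    Code
    #!/bin/sh
    # rotate-and-sync.sh - minimal rsync snapshot sketch
    SRC=/srv/data/
    DST=/srv/backup
    rm -rf "$DST/daily.6"
    for i in 5 4 3 2 1 0; do
        [ -d "$DST/daily.$i" ] && mv "$DST/daily.$i" "$DST/daily.$((i+1))"
    done
    # unchanged files are hard-linked against the previous snapshot
    rsync -a --delete --link-dest="$DST/daily.1" "$SRC" "$DST/daily.0/"

    # run it e.g. daily at 02:00 from cron:
    # 0 2 * * * /usr/local/bin/rotate-and-sync.sh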


    Hopefully someone else can check your config and spot problems.

    Be smart - be lazy. Clone your rootfs.
    OMV 5: 9 x Odroid HC2 + 1 x Odroid HC1 + 1 x Raspberry Pi 4

  • I will check the hard link issue.


    The speed is not the problem, it's the amount of data. And more speed isn't possible at the moment between the two locations.
    Also, Samba is the best we can do for the moment.


    ===


    Edit:


    We are sure it was the combination of hard links and SMB.
    We will try rdiff-backup.

  • Hey,
    I'm experiencing exactly the same thing. The same configuration was working, and after migrating to OMV 4 (current version) the backup doesn't create hard links anymore.
    I compared some files from the current backup and the one before, and although a file has the same size, modification date and permissions (so it should qualify for a hard link), rsync creates a file with a different inode.
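
    For reference, this is the kind of check that shows whether two snapshots share a file or hold separate copies (snapshot names and the path are just examples):

    Code
    # identical inode numbers and a link count > 1 mean the file is shared
    stat -c '%i %h %n' daily.0/share/somefile daily.1/share/somefile
    # or, at a glance:
    ls -li daily.0/share/somefile daily.1/share/somefile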


    The reason is (tested) that the automatically generated rsnapshot.conf (in /var/lib/openmediavault/rsnapshot.d/......conf) doesn't contain the line


    Code
    link_dest    1

    Since I don't have an option in the GUI to set that parameter, I would assume it should be set automatically (and it has been in the past).
    Otherwise it's not possible to create an incremental backup configuration, and I don't even have the space for two full backups on the disk.


    For now I added the line to the automatically generated rsnapshot conf file, but it will be gone with the next save of the configuration, so it's just a workaround.
    Another one would be to copy the automatically generated rsnapshot configuration and trigger it with my own cron entries.
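
    That second workaround could look roughly like this (config path, interval name and schedule are just examples): copy the generated config, keep link_dest in the copy, check it, and call rsnapshot from your own cron entry:

    Code
    # one-time setup; the generated conf name is a placeholder
    cp /var/lib/openmediavault/rsnapshot.d/<uuid>.conf /etc/rsnapshot-custom.conf
    # make sure the copy contains (fields in rsnapshot.conf must be TAB-separated):
    #   link_dest	1
    rsnapshot -c /etc/rsnapshot-custom.conf configtest

    # /etc/cron.d/rsnapshot-custom  (the interval must match a retain line in the conf)
    0 2 * * *  root  rsnapshot -c /etc/rsnapshot-custom.conf daily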
