Non-RAID backup strategy

  • I am looking for some thoughts and advice on a backup strategy.


    I am considering purchasing a RockPro64 with the NAS case. I am planning on putting an 8-10 TB drive in it for storage. Instead of setting up a RAID, I am thinking of putting in a second drive and doing a "backup" of the main drive. How does that sound to the collective?


    I could also use an external drive connected via the USB 3 port, but I figured the SATA channel would be faster.


    What backup plugin would everyone recommend for this purpose?


    Thank you in advance.




    • Official Post

    Rsync... This will be a long post, but it's simple as can be.


    Create a shared folder on each individual drive. I usually name the one on the first drive "Disk_1" and the one on the second drive "Disk_2".


    Create folders under "Disk_1" for services and data (Movies, TV, Music, Torrents, whatever you need) and add data where you want. The only thing I would not put here is your Containers folder, if you plan to use Docker (I'll explain in a second). This will basically be the "main" drive: it stores all the data for your services, and it's the one you'll actively add and remove data from.


    Once you've added all your data to "Disk_1", go to the rsync plugin.


    Create a simple job to sync "Disk_1" to "Disk_2", save it, and run it. When it's done, an exact copy of everything under "Disk_1" will be under "Disk_2". You can see the scheduling options in the job configuration; I've got mine set to run every 5 hours. So every 5 hours, rsync runs, checks whether there are changes on "Disk_1" that are not on "Disk_2", and syncs them if there are.


    The other good thing about this: if you don't enable the delete option on the rsync job and you accidentally delete something, all you have to do is SSH into your server and copy it from "Disk_2" back to "Disk_1". This is one reason I think rsync is far superior to RAID 1. Usually once a week, maybe once every 2 weeks, I log in, enable the delete option, manually run the job to bring the two drives completely in sync, then disable the delete option again. I've run rsync like this since the OMV 0.2 betas and it has never been a significant issue.
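    Under the hood the plugin's job boils down to two rsync invocations like the ones below. This is a sketch using throwaway directories rather than the real Disk_1/Disk_2 mount points (which on OMV would live under /srv), just to show the difference the delete option makes:

    ```shell
    #!/bin/sh
    # Demonstration of the two rsync modes described above, using throwaway
    # directories in place of the real Disk_1/Disk_2 mount points.
    set -e
    tmp=$(mktemp -d)
    SRC="$tmp/Disk_1/"
    DST="$tmp/Disk_2/"
    mkdir -p "$SRC" "$DST"
    echo "movie data" > "${SRC}film.mkv"

    # Scheduled job: copy new/changed files, never delete on the target,
    # so accidental deletions on Disk_1 survive on Disk_2.
    rsync -a "$SRC" "$DST"

    rm "${SRC}film.mkv"     # simulate an accidental delete on Disk_1
    rsync -a "$SRC" "$DST"  # without --delete, the copy on Disk_2 survives

    # Occasional manual run: --delete brings the drives fully in sync.
    rsync -a --delete "$SRC" "$DST"
    ```

    The middle run is the safety net: as long as --delete stays off, "Disk_2" keeps everything that ever landed on "Disk_1".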


    The only exception to this is my Containers folder for Docker. For some reason, I've found it just causes rsync and the web UI to stop responding. I think it has something to do with the fact that the containers are running while the rsync job runs, and it makes a mess as it tries to sync all the data with 5-6 containers active. So I created two separate folders, "Containers" and "ContainerBackup", outside of "Disk_1" and "Disk_2". "Containers" is where all my Docker containers are stored, and I created an rsync job just for them, set to not run automatically (just turn the green button off in the job settings). Usually once every couple of weeks, I stop all my containers, go to the web UI, and manually run the Container backup rsync job. It usually takes less than a minute and never freezes; then I restart all my containers.


    The easiest way to start/stop all your containers is from the console or SSH.
    Stop them all: docker stop $(docker ps -a -q)
    After the sync job completes, restart them all: docker start $(docker ps -a -q)


    I've only really been using Docker for the last 2 months or so, but this has proven to be an easy and effective way to back the containers up without interfering with my regular rsync job (and they don't really change enough to need backing up more than every couple of weeks).


    Hope that helps.

    • Official Post

    My backup method is similar to that of @KM0201.


    I have a bunch of single-HDD SBCs. Some are used for data and some are used for backups. On the backup SBCs I use cron to run scripts once a day that take rsync snapshots of important folders on the data SBCs: TV, Movies, Music, Photos, Documents and so on.


    An rsync backup snapshot looks exactly like the original folder, but to create it, unchanged files don't have to be copied over; instead they are reused from the previous snapshot. Linux filesystems can create hard links to files, as long as the link and the file are on the same partition. This means a file can appear to be in more than one place at the same time. When a file is hard linked it is not copied; an extra reference to the file is created and its link count is incremented. In fact there is no difference at all between a hard link and the original file: both look exactly the same and share the same link count (two, after one extra link is made). If you delete a file the link count is decreased, and only when it reaches zero is the actual data deleted.


    This can be used to create timestamped "fake" full backup copies that consist of hard links plus a few actual copies of new or updated files. It makes it possible to keep dozens of snapshots of the same folder while taking up very little room. Perfect for folders with files that rarely change but are sometimes added to, which is just what many use their NAS for: storing media files and documents.
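    The hard-link snapshot scheme above can be sketched with rsync's --link-dest option. The paths here are made up for the demonstration (the real scripts pull from remote data SBCs), and a "latest" symlink stands in for whatever bookkeeping tracks the previous snapshot:

    ```shell
    #!/bin/sh
    # Minimal versioned-snapshot sketch using rsync --link-dest.
    # Paths are illustrative; real scripts would rsync from a remote SBC.
    set -e
    SRC="demo_data/"            # folder to back up
    BACKUPS="demo_backups"      # snapshot root (same partition for hard links)
    mkdir -p "$SRC" "$BACKUPS"
    echo "track 1" > "${SRC}song.mp3"

    stamp=$(date +%Y-%m-%d_%H%M%S)
    latest="$BACKUPS/latest"    # symlink to the most recent snapshot

    # Unchanged files are hard-linked against the previous snapshot instead
    # of being copied, so each snapshot looks full but costs almost no space.
    if [ -d "$latest" ]; then
        rsync -a --link-dest="$(cd "$latest" && pwd)" "$SRC" "$BACKUPS/$stamp/"
    else
        rsync -a "$SRC" "$BACKUPS/$stamp/"
    fi
    ln -sfn "$stamp" "$latest"
    ```

    After a second run, an unchanged file shows a link count of 2 (one reference per snapshot) even though its data exists on disk only once.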


    I have been using rsync for versioned snapshots for more than a decade. I have written various scripts to automate the creation of timestamped folders and even to purge old snapshots, and it works great, though I am still improving and testing my scripts. OMV has similar functionality in RSnapshot, but I have never used it; I find it too complicated to set up. I suspect I am wrong about that, since many people use RSnapshot. I prefer my system with one script per folder to back up and entries in cron, but I assume RSnapshot is much safer and better debugged than my scripts.
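    The purge side of such a script can be sketched as below. The snapshot root, the fake timestamped folders, and the KEEP count are all assumptions for the demonstration; a real script would point at its actual snapshot directory:

    ```shell
    #!/bin/sh
    # Sketch of a snapshot-purge step: keep only the newest KEEP timestamped
    # snapshot folders. Layout and names are illustrative.
    set -e
    BACKUPS="demo_backups"
    KEEP=2

    # Create a few fake timestamped snapshots for the demonstration.
    mkdir -p "$BACKUPS/2024-01-01" "$BACKUPS/2024-01-02" "$BACKUPS/2024-01-03"

    # Timestamped names sort chronologically, so the oldest come first;
    # delete everything except the last KEEP entries.
    ls -1d "$BACKUPS"/20* | sort | head -n -"$KEEP" | xargs -r rm -rf
    ```

    A script like this would then be driven by one cron entry per backed-up folder, e.g. a daily line such as `0 3 * * * /root/snapshot-movies.sh` (script name hypothetical).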


    On github there are many scripts similar to mine and to RSnapshot.


    Here is an OLD (2004?) link to the site that inspired me to start using versioned rsync snapshots: http://www.mikerubel.org/computers/rsync_snapshots/

  • Instead of setting up a RAID, I am thinking of putting in a second drive and doing a "backup" of the main drive


    Backup, unlike RAID or 'cloning', always involves 'versioning' (keeping old versions at the backup location if contents at the source change or are deleted). I would use a modern filesystem like btrfs (on ARM) or ZFS (on x86), since they provide data integrity and help a lot with consistent backups due to

    • checksums (allowing you to check data integrity, fighting silent bit rot)
    • snapshots ('freezing' the filesystem in a consistent state to allow the last snapshot to be backed up or to revert to an earlier snapshot)
    • efficient transfer of these snapshots to a 2nd disk or location

    Unfortunately these modern approaches (btrfs/ZFS) require some more knowledge, so in case you're not that adventurous I would have a look at rsnapshot. I would also look at a setup where the backup disk is physically separated from the production data (backup destination in another room at least). IMO that's the main advantage of using inexpensive SBCs for NAS: you can simply add a backup NAS anywhere you want.
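    On btrfs, the snapshot-and-send workflow could look roughly like this. All paths are assumptions, and the commands need root plus a btrfs filesystem on both disks, so treat it as a sketch of the idea rather than something to paste in:

    ```shell
    # Sketch of btrfs snapshot + send/receive backup (assumed paths; run as root).

    # Take a read-only snapshot, freezing the data subvolume in a consistent state.
    btrfs subvolume snapshot -r /srv/data /srv/data/.snapshots/2024-01-01

    # Full transfer of that snapshot to the backup disk.
    btrfs send /srv/data/.snapshots/2024-01-01 | btrfs receive /srv/backup

    # Later: take a new snapshot and send only the changes since the last one.
    btrfs subvolume snapshot -r /srv/data /srv/data/.snapshots/2024-01-02
    btrfs send -p /srv/data/.snapshots/2024-01-01 /srv/data/.snapshots/2024-01-02 \
        | btrfs receive /srv/backup
    ```

    The incremental `-p` form is what makes the transfer to a second disk or remote box efficient: only the blocks that changed between the two snapshots cross the wire.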


    The importance of physical separation usually only becomes obvious after incidents involving fire, water, theft or the like...
