This will be a long post, as I'll try to be thorough... The way my backups work, I don't feel I need versioned backups remotely, so I've never tried to do that with Syncthing (although the documentation says it works; if it doesn't, there are other options like rsnapshot, etc)...
My setup might seem a little confusing, and it's probably not as good as rsnapshot or another versioned backup tool. Some might even say it isn't "automated" enough, and I'd agree... but that is partly why I like it. I've used this setup for several years and it works well for me. It has saved me from data loss a couple of times... which is the whole point of a backup.
In my scenario:
Drives A & B are my "working" drives. Basically where all my data lives: container data is here, new stuff is added and modified, data is served to clients, etc.
Drives C & D are an "internal backup". These drives are in the same physical server as A & B, and they sync once a day via rsync. This rsync job has the delete trigger turned off, so if I'm making lots of changes to A & B, it is not uncommon to have multiple copies of something on drives C & D. Usually this is simple stuff. For instance, I rewrote all my docker-compose files with symlinks about a month ago... right now I have 2-3 copies of those on C & D, since it was a process that took me a couple of days between work, testing my new compose files, etc. Occasionally, if I delete a movie or an entire TV series from A & B, it will remain on C & D for a while and the file system sizes will differ a bit more noticeably from A & B... but I would say for the most part, this is pretty uncommon in my workflow. Aside from serving as this internal backup, C & D are not used by any service, etc.
My remote backup, we'll call it drives E & F, syncs with drives C & D via Syncthing. Since drives C & D are my internal backup, it's again not uncommon for multiple copies of changed data to end up on my Syncthing backup.
Once a month or so, I'll go through and check the differences between A & B and C & D. As I said, for my data flow, the differences are usually pretty subtle and I know what they are (right now, the docker-compose files). The check is mostly to confirm nothing crazy is going on; e.g., multiple synced copies of a file I know I didn't change would point to an issue with drives A or B. Once I've verified all is in order, I enable the delete trigger on the rsync job and run it manually. This brings A & B and C & D completely in sync. In turn, once it finishes, C & D sync to E & F, bringing everything 100% in sync. When it's all done, I disable the delete trigger on my rsync job again.
In the event of an accidentally deleted file from A & B, I can just SSH into my system and copy the file back from C & D. If I've lost something on A & B and C & D failed for some reason, the data should still be on E & F. In the event of a major issue (fire, flood, etc.) where I lose A, B, C, and D, my data should be reasonably up to date remotely on E and F.
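Restoring a single file is then just a copy; the hostname and paths below are made up for illustration:

```shell
# From a client machine, pull the file back over SSH:
scp myserver:/mnt/driveC/docs/notes.txt /restore/notes.txt

# Or, already logged into the server, copy between the drives directly:
cp /mnt/driveC/docs/notes.txt /mnt/driveA/docs/notes.txt
```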
Beyond that, the only thing I really do is check my rsync logs every night and make sure nothing super crazy happened between A & B and C & D.
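The nightly check can be as simple as skimming the tail of the rsync log; the log path is hypothetical, and the per-day count relies on rsync's `--log-file` timestamping lines as YYYY/MM/DD:

```shell
# Skim last night's run (hypothetical log path):
tail -n 100 /var/log/backup-internal.log

# Rough per-day line count; a sudden spike means far more files
# changed than usual and is worth a closer look.
grep -c "$(date +%Y/%m/%d)" /var/log/backup-internal.log
```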
Hope that makes sense.