So I have plenty of storage and good versioned backups. Now I can worry about bitrot. Maybe?
Very rare random errors creeping into the data over time. Rare as they are, increasing the amount of storage increases the probability that some of them occur.
I keep versioned backups at the folder level, using rsync snapshots over the local network.
Ideally I would like a system with checksums at the file level that can be used to find files with bitrot: if the checksum has changed but the modification time has not, then we have bitrot. If bitrot is detected, the file is restored from backup, perhaps with an extra checksum verification of the backup copy, and optionally falling back to an older, error-free copy of the file if one is available.
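Roughly, the scrub pass I have in mind would look something like this sketch (the data directory, manifest location and hash choice are just placeholders, not an existing tool):

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("checksums.json")  # placeholder: where the checksum database lives
DATA_DIR = Path("/data")           # placeholder: the tree to scrub

def sha256(path, bufsize=1 << 20):
    """Compute the SHA-256 of a file in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

def scrub():
    old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    new = {}
    for path in DATA_DIR.rglob("*"):
        if not path.is_file():
            continue
        key = str(path)
        mtime = path.stat().st_mtime
        digest = sha256(path)
        new[key] = {"mtime": mtime, "sha256": digest}
        prev = old.get(key)
        # Checksum changed while the mtime did not: candidate for bitrot,
        # so this file should be restored from a backup snapshot.
        if prev and prev["mtime"] == mtime and prev["sha256"] != digest:
            print(f"possible bitrot: {key}")
    MANIFEST.write_text(json.dumps(new, indent=2))

if __name__ == "__main__":
    scrub()
```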
So I would have to recompute checksums quickly and update the snapshots often; otherwise bitrot errors may migrate into the backups.
This seems like something that might already exist? It could be integrated into an rsync backup system: rsync already uses checksums and copies files back and forth, and it knows where the copies of a file are. It would be pretty lightweight, except for all the checksum calculations, and would need very little extra storage: just a database of filenames, checksums and file modification times.
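To give an idea of how small that extra storage is, I'm picturing the database as a single SQLite table along these lines (the schema is just my guess at the minimum needed):

```python
import sqlite3

# One row per file: path, checksum, and the mtime recorded when
# the checksum was computed.
con = sqlite3.connect("checksums.db")
con.execute("""
    CREATE TABLE IF NOT EXISTS checksums (
        path   TEXT PRIMARY KEY,
        sha256 TEXT NOT NULL,
        mtime  REAL NOT NULL
    )
""")
con.commit()
con.close()
```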
Does anyone know if something like this already exists? Or something better and even simpler?