I am experimenting, mostly for fun, with writing (in C++) a snapshot-style backup utility that uses checksums to detect bitrot and fix it. If the original file has bitrot, it restores it from the backup copy; if the backup copy has bitrot, it copies over the original file again. The utility works fine between locally mounted filesystems. I am testing on ext4, mergerfs, and NFS.
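To illustrate the repair logic described above, here is a minimal sketch. The function and file names are hypothetical, and the checksum is a placeholder FNV-1a (a real tool would likely use something stronger or faster, e.g. SHA-256 or xxHash); the point is only the mutual-repair idea: whichever side no longer matches the stored checksum gets overwritten from the side that still does.

```cpp
#include <cstdint>
#include <filesystem>
#include <fstream>

// Placeholder whole-file checksum (FNV-1a); stands in for whatever
// hash the utility actually stores per file.
std::uint64_t checksum_file(const std::filesystem::path& p) {
    std::ifstream in(p, std::ios::binary);
    std::uint64_t h = 1469598103934665603ull;  // FNV offset basis
    char buf[1 << 16];
    while (in.read(buf, sizeof buf) || in.gcount() > 0) {
        for (std::streamsize i = 0; i < in.gcount(); ++i) {
            h ^= static_cast<unsigned char>(buf[i]);
            h *= 1099511628211ull;             // FNV prime
        }
    }
    return h;
}

// Mutual repair: the copy that fails its checksum is overwritten
// from the copy that still passes. Returns false only if both rotted.
bool repair_bitrot(const std::filesystem::path& original,
                   const std::filesystem::path& backup,
                   std::uint64_t stored) {
    bool orig_ok   = checksum_file(original) == stored;
    bool backup_ok = checksum_file(backup)   == stored;
    if (orig_ok && backup_ok) return true;     // both healthy, nothing to do
    if (!orig_ok && !backup_ok) return false;  // both rotted: unrecoverable
    if (!orig_ok)
        std::filesystem::copy_file(backup, original,
            std::filesystem::copy_options::overwrite_existing);
    else
        std::filesystem::copy_file(original, backup,
            std::filesystem::copy_options::overwrite_existing);
    return true;
}
```

Note this only works while at least one copy is intact, which is why the snapshot redundancy matters.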
My thinking is that during a backup the utility has access to the previous snapshot copies of all files, which provides the redundancy needed to fix bitrot without any need for parity, mirroring, or RAID. All that is needed is backups. It is not real time, of course, but it should still work OK for large media libraries (video/music/photo archives) that are mostly just growing slowly.
But it is still buggy, too slow, and uses too much memory. A backup utility should not be buggy... I am currently rewriting it (almost) from scratch for the third time. Sequential file read performance sets a hard limit on the speed of bitrot testing, and I want to come close to that speed.
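Since verification throughput is bounded by sequential read speed, the read loop itself matters. Below is one way it might look on Linux (an assumption, not the actual implementation): a large read buffer plus a `POSIX_FADV_SEQUENTIAL` hint so the kernel reads ahead aggressively, letting the hashing pass stay close to raw disk speed. The hash is again a placeholder FNV-1a.

```cpp
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>
#include <vector>

struct ScanResult {
    std::uint64_t hash;
    std::uint64_t bytes;
};

// Single-pass verification read. A 1 MiB buffer keeps syscall overhead
// low; the fadvise hint tells the kernel the access pattern is sequential.
ScanResult scan_file(const char* path) {
    ScanResult r{1469598103934665603ull, 0};  // FNV offset basis
    int fd = ::open(path, O_RDONLY);
    if (fd < 0) return r;
#ifdef POSIX_FADV_SEQUENTIAL
    ::posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);  // enable aggressive readahead
#endif
    std::vector<unsigned char> buf(1 << 20);  // 1 MiB read size
    ssize_t n;
    while ((n = ::read(fd, buf.data(), buf.size())) > 0) {
        for (ssize_t i = 0; i < n; ++i) {
            r.hash ^= buf[i];
            r.hash *= 1099511628211ull;       // FNV prime
        }
        r.bytes += static_cast<std::uint64_t>(n);
    }
    ::close(fd);
    return r;
}
```

In practice the byte-at-a-time hash loop above would become the bottleneck before the disk does; a block-oriented hash (xxHash, BLAKE3) or hashing in a separate thread from the reads would be needed to actually saturate sequential read speed.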
No idea when it will be done. I have been working on this in my spare time, off and on, for a few years now...