My day job for the past 15 years has been working in data centres full of enterprise hardware, most of it containing customer data. My statements are based on that experience and that of my many colleagues.
I have seen RAID arrays fail so many times I lost count, and I have seen RAID rebuilds fail many times too. The drives in an array are usually deployed at the same time, so when one drive fails, its partners are also long in the tooth, and it is not uncommon for more drives to fail during the rebuild. RAID 6 helps, of course, but with arrays of 14 drives or more the risks increase exponentially. We assume the sustained stress of rebuilding the array is the primary cause of these subsequent drive failures. This type of experience with RAID is not uncommon; a quick Google search will turn up similar experiences and opinions, for example blog.open-e.com/why-a-hot-spare-hard-disk-is-a-bad-idea/
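To give a rough sense of why bigger arrays raise the stakes, here's a back-of-envelope sketch. The numbers are illustrative assumptions on my part (a 5% annual failure rate under rebuild stress, a 72-hour rebuild window), not measurements, and it treats drive failures as independent, which real correlated-batch failures are not:

```python
# Back-of-envelope estimate: probability that at least one of the
# surviving drives fails while a degraded array is rebuilding,
# assuming independent failures at a constant elevated rate.

def p_additional_failure(n_drives, rebuild_hours, annual_failure_rate):
    """P(at least one surviving drive fails during the rebuild window)."""
    # Convert annual failure rate to a per-hour failure probability.
    p_hour = annual_failure_rate / (365 * 24)
    # Chance one drive survives the whole rebuild window.
    p_survive_one = (1 - p_hour) ** rebuild_hours
    survivors = n_drives - 1  # one drive has already failed
    # The survival probability shrinks geometrically with array size,
    # which is why wide arrays feel so much riskier in practice.
    return 1 - p_survive_one ** survivors

# Hypothetical 14-drive array, 72-hour rebuild, 5% AFR under stress:
print(p_additional_failure(14, 72, 0.05))
```

Even under these optimistic independence assumptions the risk compounds with every extra drive, and correlated aging of same-batch drives only makes the real-world picture worse.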
A rebuild thrashes disks far more than a backup does; I've seen rebuilds thrash drives continuously for several days. When this happens we often recommend that the customer authorize us to simply build the array from scratch and restore from backups/images, which typically takes much less time than a rebuild.