I think there is some other reason for such a huge fail rate in one location.
You're right.
I deleted the RAID array and replaced the broken disks with 3 new 12 TB drives.
I reset everything using mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh.
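Roughly the sequence I followed looked like this (assuming the old array was /dev/md0; the md name and drive letters may differ on your system):

```sh
# Stop the old array before wiping its metadata (array name is an assumption)
mdadm --stop /dev/md0

# Wipe the md superblocks so the disks look like fresh members
mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh
```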
In the "SMART -> Devices" section, all the disks were reported as "in good condition".
I created a new RAID 6 array, added the 8 disks, and it started syncing.
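The create command was roughly the following (the md name and device list are just examples for an 8-disk layout; adjust to your setup):

```sh
# Create an 8-disk RAID 6 array; device names are examples
mdadm --create /dev/md0 --level=6 --raid-devices=8 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi
```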
After a few minutes, I checked and the array was marked as "clean, degraded," and two of the disks were shown as Unknown (ZJV3Z851 and ZJV2TTQH).
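This is the state I was checking, from the shell side (again assuming the array is /dev/md0):

```sh
# Resync progress and overall array state
cat /proc/mdstat

# Per-member state (active, spare, faulty, removed)
mdadm --detail /dev/md0
```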
I rebooted the NAS and the drives reappeared with "Good" status.
At that point, to try and understand whether it was an issue with the controller, SATA port, or cable, I powered off the NAS, changed the order of the drives, and turned it back on.
Once again, all the drives showed a "Good" status.
I created the new RAID 6 array again, but the sync failed once more; this time it's drives ZJV3YELM and ZJV65XX1 that seem to be having issues.
I removed one drive (unplugged it from the caddy), recreated the array, and now it seems to be syncing without problems.
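To match the serial numbers the NAS reports to actual device nodes, and to see whether drives are dropping off the SATA bus (which would point at a cable, port, or power problem), something like this should work:

```sh
# Map device nodes to model and serial number
lsblk -o NAME,SIZE,MODEL,SERIAL

# Follow the kernel log and watch for SATA link resets / drives going offline
dmesg -w | grep -iE 'ata[0-9]+|link|reset|offline'
```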
Maybe a power supply problem? I'm using the Corsair CX750M.
The UPS?
Like Queen, I'm going slightly mad.