Over the last couple of weeks I rebuilt my OMV box with a fresh OS install and also transitioned from mergerfs + snapraid to ZFS using the ZFS plugin. Everything went very smoothly and it was working fine until a day or two ago, when I noticed one of the zpools had a degraded disk. All of the pools are set up with mirrored vdevs, two vdevs per pool.
A single vdev started showing a degraded drive, yet there are no reported read, write, or checksum errors. I am a little stumped, as both drives in the degraded vdev are new and neither shows any SMART errors.
I have attached pictures of the pool (wingclipper in the pictures), the pool status, and the SMART info for the two drives.
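In case it helps, these are roughly the commands I have been using from the shell to pull that info (sdX and sdY are placeholders for the two drives in the degraded vdev, not the actual device names):

    zpool status -v wingclipper        # pool status with per-device error detail
    smartctl -a /dev/sdX               # full SMART report for the first drive
    smartctl -a /dev/sdY               # full SMART report for the second drive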
I am also currently running a long SMART test on the degraded drive to see whether any further errors show up. Any ideas or help on what might be causing this would be wonderful. Could it be a cable, the backplane (this is a Norco 4224 server case), or something else? The drives are connected from the backplane to an LSI HBA. Once the SMART test finishes I might power off the box and reseat the drives to see if that fixes it, but I am unsure whether it will help.
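For reference, this is roughly how I kicked off the long test and how I plan to check it once it finishes (sdX is again a placeholder; I believe drives behind an LSI HBA in IT mode show up as plain /dev/sdX devices, but that is an assumption on my part):

    smartctl -t long /dev/sdX          # start the extended (long) self-test
    smartctl -l selftest /dev/sdX      # check self-test progress and results later

If the drive checks out after reseating, my understanding is that zpool clear wingclipper should reset the degraded state, but I would appreciate confirmation on whether that is the right move here.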