This is on an OMV 1.19 box. I've had this server up and running with an active, clean RAID 6 array for quite some time now.
Just a few days ago I logged in to the webUI to apply some openssl/libc updates and noticed that my RAID 6 volume was in a degraded state.
Some basic disk info, mdadm.conf, etc. is here:
http://pastebin.com/s37uqydz
Had to use pastebin because...
Every 1TB disk you see there should be part of the array. The two other disks are a 320GB drive where OMV lives and a 16GB flash drive that I use for flashing RAID firmware, boot recovery, etc.
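To double-check which of the 1TB drives the array still counts as members, my plan was to run something along these lines (just a guess on my part; /dev/md0 is a placeholder, the real device name is in the paste):

    cat /proc/mdstat          # active members plus the [UU_UU...] status line
    mdadm --detail /dev/md0   # per-slot state: active sync, faulty, or removed

Is /proc/mdstat the right place to look for this, or am I missing something?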
Obviously, I'd like to get the array back to a clean state. Why isn't the webUI showing me which disk failed? How can I go about investigating and replacing the failed/missing disk?
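My rough plan for digging into it is below. The device names are only placeholders since I don't know yet which member dropped out, and the members may be whole disks or partitions depending on how the array was built, so please sanity-check this before I run any of it:

    mdadm --examine /dev/sd[b-g]   # compare superblocks/event counts; a missing or stale one marks the dropped member
    smartctl -a /dev/sdX           # SMART health on whichever disk looks suspect
    # after physically swapping the bad disk:
    mdadm --manage /dev/md0 --add /dev/sdX

Does that look like a sane way to get back to a clean state, or should I be doing this through the webUI instead?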