Degraded Array (RAID 10)

  • Hi All:


    Newbie to OMV. Set up a OMV server for a friend about 4 months ago.


    - 4 × 2 TB drives (brand-new IronWolfs)

    - RAID 10


    OMV started reporting "A DegradedArray event had been detected on md device /dev/md0" last month, with md0 being the array, of course.


    All the disks are visible in Storage/Disks. All of the disks are reporting green in Storage/Smart/Devices. I ran a SMART test and did not see any noticeable errors.


    Storage/RAID Management shows the array as "clean, degraded" and only shows three of the four disks, with /dev/sde missing.


    When I select the array and hit "recover", /dev/sde is not visible. /dev/sde is available if I attempt to add a file system.


    What can I do to get this drive back into the array? I've read through the forums, and checking the SATA cable was suggested as part of troubleshooting.

    I also read about some of the commands to run, but where can I read up on which ones to use?

    Any help is appreciated!

  • Has the server been shut down by a power loss and then just switched back on? Have you tried rebooting the server? Have you looked at the output from each of these, along with the output from mdadm --detail /dev/md? where the ? is the raid reference, i.e. 0, 127, etc.?
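
    For reference, the first-pass diagnostics on a degraded mdadm array usually look something like this (a sketch; the device names /dev/md0 and /dev/sde are taken from this thread, so substitute your own):

    ```shell
    # Kernel's view of all md arrays and the state of their members
    cat /proc/mdstat

    # Detailed array state: which slots are active, which are removed or faulty
    mdadm --detail /dev/md0

    # Check whether the missing disk still carries an md superblock
    mdadm --examine /dev/sde

    # Filesystem and raid signatures on every block device
    blkid

    # Kernel messages about the drive dropping out (SATA link resets, etc.)
    dmesg | grep -iE 'sde|ata|md0'
    ```

    Posting the output of these is normally enough for someone to tell why the drive was displaced.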

  • Thanks geaves - Answers:


    1) There's a possibility the server was shut down from power loss but I have it on a small UPS so it should have shut down "gracefully" - NUT is installed and I have tested it.

    2) I have rebooted it since - no joy

    3) I have NOT looked at the items in the linked post - I will have to get over to the server and run those including the mdadm output.

    Will submit details soon.


    When I get hands-on with the server, does it make sense to swap out the SATA cable or move the drive to another SATA port? Seems like a long shot to me, since the drive can be seen and assigned to a file system.


    Much appreciated.

  • I'm somewhat confused by your post above; re-reading your first post, I assume the server is elsewhere, but do you have a remote connection to it? If you do, you can run those commands from where you are.


    To give you some guidance:

    When I select the array and hit "recover", /dev/sde/ is not visible. /sde/ is available if I attempt to add a file system

    That's because the drive already has a raid signature on it; to run recover you would first need to wipe the drive.

    I have rebooted it since - no joy

    Well, it was worth a try, but there has to be a reason why the drive was displaced, and power loss is usually the cause.

    does it make sense to swap out the SATA cable or move it to another SATA port? Seems like a longshot to me since the drive can be seen and assigned to a filesytsem.

    At present that shouldn't be necessary; the output with the relevant information would help with a way forward.


    As a footnote, running a raid in a degraded state is not a good idea: it places further stress on the remaining drives, and if the drive that /dev/sde is mirrored to fails, the whole Raid 10 is toast.

    If it's a true Raid 10 you can afford to lose one drive in each mirror, but lose one whole mirror (2 drives) and you lose the Raid 10. And you're going to tell me you don't have a backup :)
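
    To see which drive is the mirror partner of the missing one: in mdadm's default near=2 layout for Raid 10, adjacent raid device slots (0/1 and 2/3) form the mirror pairs, and the detail output shows both the layout and which slot is empty:

    ```shell
    # "Layout : near=2" confirms the pairing; the device table at the
    # bottom shows which slot is "removed" - its neighbour (slot 0 with 1,
    # slot 2 with 3) is the drive the array now depends on
    mdadm --detail /dev/md0
    ```

    That neighbouring drive is the one whose failure would take the array down, so it's worth keeping an eye on its SMART status while the array is degraded.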
