RAID5 - Reassemble an inactive array after a disk failure

  • Hello,


    I had a disk failure on my RAID5 recently while I was on holiday, with no spare time to dedicate to the issue. The array had been running in degraded mode with 3 disks from that moment on, and it was still working fine until a second event occurred yesterday:




    I don't really understand why I received this message, as the device is OK: the HP Gen 8 RAID controller does not detect any error on it and the SMART tests are good. At that point I turned off the server, physically unplugged the first drive (the faulty one) and restarted, but now my RAID5 is marked as inactive and is no longer visible in OMV, although the three remaining physical disks are present in the GUI.

    Code
    cat /proc/mdstat
    
    
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md2 : inactive sdc5[1](S) sdd[3](S) sdb[4](S)
          17576636912 blocks super 1.2
    
    
    unused devices: <none>
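
    For reference, this is roughly how the event counters mentioned further down can be compared on the remaining members (a generic sketch, assuming the members are /dev/sdb, /dev/sdc5 and /dev/sdd as listed above):

    Code
    # print each member's device name and its Events counter
    mdadm --examine /dev/sdb /dev/sdc5 /dev/sdd | grep -E '/dev/|Events'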




    My configuration:


    My RAID5 contains 4 x 6 TB drives; 1 of them is faulty and has been physically removed (formerly /dev/sde).


    What I've tried so far:


    I thought force mode would do the trick, especially given the small difference in the event counters (26486 vs 26567), but no luck with that.
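
    For completeness, the forced assembly attempt looked roughly like the sketch below (the member list is the one quoted later in the thread; stopping the inactive array first is the usual prerequisite):

    Code
    # stop the inactive array, then try to force-assemble it from the remaining members
    mdadm --stop /dev/md2
    mdadm --assemble /dev/md2 /dev/sd[bd] /dev/sdc5 --verbose --force --run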



    What's the next step, then? I've read some things about the --assume-clean switch that could work, but I'm not sure about it. Could it be a good idea to do a mdadm --zero-superblock /dev/sdb?
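
    From what I've read, the --assume-clean route would mean re-creating the array on top of the existing members, something like the sketch below. The device names, device order and chunk size here are only placeholders and would have to match the original array exactly, otherwise the data is gone for good:

    Code
    # DANGEROUS last resort - placeholders only, parameters must match the original creation
    # "missing" stands in for the removed drive; metadata 1.2 as reported by /proc/mdstat
    mdadm --create /dev/md2 --level=5 --raid-devices=4 --metadata=1.2 \
          --assume-clean missing /dev/sdc5 /dev/sdX /dev/sdY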


    Any help greatly appreciated :)


    Mazz

    • Official post

    Any help greatly appreciated

    It's dead!

    Code
    mdadm: failed to RUN_ARRAY /dev/md2: Input/output error
    mdadm: Not enough devices to start the array.

    Could it be a good idea to do a mdadm --zero-superblock /dev/sdb?

    That would be an option if it weren't for the above.
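
    For context, that option would normally look something like this (only if the array were already assembled and running degraded, which is not the case here):

    Code
    # wipe the stale superblock, then add the disk back so it resyncs
    mdadm --zero-superblock /dev/sdb
    mdadm --manage /dev/md2 --add /dev/sdb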

  • That's not good, indeed.


    There is definitely something weird with my /dev/sdb disk, because before the second crash it was mapped to the array with partition /dev/sdb5, and now OMV tries to use it as /dev/sdb.



    The partition still exists, but mdadm no longer knows about it:


    Code
    mdadm --examine /dev/sdb5
    mdadm: cannot open /dev/sdb5: No such file or directory


    Is there something I can do to force mdadm to use /dev/sdb5 instead of /dev/sdb?
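
    I guess the first step would be to check whether the kernel still sees that partition at all, along these lines (just a generic check):

    Code
    # does the kernel know about sdb5?
    grep sdb /proc/partitions
    # re-read the partition table and list what is actually on the disk
    partprobe /dev/sdb
    fdisk -l /dev/sdb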

    • Official post

    mdadm: added /dev/sdb to /dev/md2 as 2 (possibly out of date)

    That is the problem. As you have suggested, --zero-superblock and then adding it back to the array would fix it; however, I don't believe these are repairable unless you delete them.

    • Partition 2 does not start on physical sector boundary.
    • Partition 5 does not start on physical sector boundary.

    My understanding is that OMV uses the whole disk for RAID configuration; judging by mdadm --assemble /dev/md2 /dev/sd[bd] /dev/sdc5 --verbose --force --run, I can only assume the original array was created via the CLI.
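
    A quick way to see whether the members are whole disks or partitions is something like this (a generic check, not taken from the original post):

    Code
    # show disks, partitions and any md members/filesystems on them
    lsblk -o NAME,SIZE,TYPE,FSTYPE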


    The other problem is that blkid does not list /dev/md2.


    Looking at the above, you had a failed drive; physically removing it and then rebooting left the array inactive, which is the way mdadm behaves. A drive needs to be failed and then removed from the array using mdadm. Obviously there has been some corruption on /dev/sdb, which is why mdadm does not see /dev/sdb5.
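
    For future reference, the usual way to take a failed drive out cleanly (assuming /dev/sde was still the failed member at that point) is roughly:

    Code
    # mark the drive as failed, then remove it from the array before unplugging it
    mdadm --manage /dev/md2 --fail /dev/sde
    mdadm --manage /dev/md2 --remove /dev/sde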


    You've lost the array; RAID 5 will only tolerate a single drive failure.
