Lost RAID6

  • Hi everyone, and excuse my poor English, I will try my best.


    I lost my RAID6 in OMV. I don't know how or when, sorry.

    It seems that the sde disk is missing from the array; the disk itself is still recognized by OMV, but the RAID is not available...


    As requested by the moderator:


    Can someone help me?

    Maybe the problem has been dealt with in another topic, but I can't tell whether it's exactly the same problem.

    Thanks a lot in advance, I'm completely lost!!! ;(

  • I can provide you with the following additional information:

    • Official post

    The array is inactive. Whilst /dev/sde is displayed in fdisk, it is not displayed in blkid. Post the output of the following in separate code boxes, please:


    mdadm --detail /dev/md127


    mdadm --examine /dev/sde


    Also, what actually happened for the array to 'disappear'?
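
    For convenience, these checks can be grouped together. This is just a sketch of the shell commands involved (run as root), matching what was asked for above:

    Code
    cat /proc/mdstat            # current state of all md arrays
    blkid                       # sde should appear here if it still carries a usable superblock/filesystem
    mdadm --detail /dev/md127   # detailed state of the array
    mdadm --examine /dev/sde    # per-disk md superblock info for the missing member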

  • Thank you very much for taking a look at my problem!!!

    The sde disk is missing:

    Code
    root@NAS:~# mdadm --examine /dev/sde
    mdadm: No md superblock detected on /dev/sde.


    The SMART info for the sde disk is bad. I think there is a problem with that drive.

    I would like to restore my RAID and then replace this disk with a better one.

    But right now, I don't know how to avoid losing everything...
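
    For reference, SMART health can be double-checked from the shell. This is only a sketch, assuming the smartctl tool from the smartmontools package is available (OMV normally installs it):

    Code
    smartctl -H /dev/sde    # quick PASSED/FAILED health verdict for the suspect drive
    smartctl -a /dev/sde    # full SMART attributes and error log, to see which attributes are failing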

  • I know RAID is not a backup.

    I had mostly my video library on it.

    If I lose it, it's game over...


  • Code
    root@NAS:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active (auto-read-only) raid6 sdc[0] sdb[5] sdg[4] sdf[3] sdd[1]
          7813529600 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UU_UUU]
          bitmap: 0/15 pages [0KB], 65536KB chunk
    
    unused devices: <none>

    sde is in OMV disk management


    • Official post

    The array is in auto-read-only; do the following:


    mdadm --readwrite /dev/md127

    sde is in OMV disk management

    It will be; that section lists all devices OMV locates that are connected to the system, so a drive can have SMART errors and still be displayed there.

  • The array is in auto-read-only; do the following:


    mdadm --readwrite /dev/md127

    It will be; that section lists all devices OMV locates that are connected to the system, so a drive can have SMART errors and still be displayed there.

    Done with the readwrite command.
    OK for the other information. So I will replace the bad HDD, tell me when... I have one in stock.

  • Code
    root@NAS:~# mdadm --readwrite /dev/md127
    root@NAS:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid6 sdc[0] sdb[5] sdg[4] sdf[3] sdd[1]
          7813529600 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UU_UUU]
          bitmap: 0/15 pages [0KB], 65536KB chunk
    
    unused devices: <none>
    • Official post

    Next step:

    Raid Management -> Recover (I think Recover is the icon that looks like a bag with a cross in it), then on the next screen select the new drive and click Save (a command-line equivalent is sketched below).


    TBH, I haven't tested much of this config in OMV6; I use my OMV6 VM test rig to familiarise myself with the changes to the RAID setup :)
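
    If the web UI route fails for any reason, the same recovery step can in principle be done from the shell. This is only a sketch, assuming the replacement drive shows up as /dev/sde again after the swap (confirm the actual device name with lsblk first):

    Code
    lsblk                                       # confirm the device name of the freshly installed drive
    mdadm --manage /dev/md127 --add /dev/sde    # add the new drive; md starts rebuilding onto it
    cat /proc/mdstat                            # watch the rebuild progress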
