RAID 6 gone, physical drives visible

    • Official post

    What is the output of: mdadm -A /dev/md127
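
    For reference, -A is mdadm's assemble mode; given only the array device it looks the array up in mdadm.conf and reports why it will not start, which is the output being asked for here. A minimal sketch, using only names already mentioned in this thread:

    Code
    # Try a plain assemble first; without --force mdadm refuses members whose event counts disagree
    mdadm -A /dev/md127
    # Then show what the kernel currently thinks of the array
    cat /proc/mdstat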




  • thx alex

  • @ahab666 I have no idea how to help you any further, as my knowledge of software RAID configuration in OMV is very limited.


    Perhaps I may suggest, if you can recover your data, that you use your LSI hardware RAID and create virtual disks if you want more than one virtual disk.
    I have many servers running with LSI cards, and in my opinion there is nothing better than hardware RAID.
    I know it is an old-fashioned idea, especially with ZFS around, but in your particular case I would definitely do everything in the LSI.


    I am happy to assist with any questions about how to do this with your LSI RAID controller.


    Kind regards,

    DISCLAIMER: I'm not a native English speaker, I'm really sorry if I don't explain as good as you would like... :)

  • @ahab666


    I would have a look at that disk, as it seems to be the troublemaker: hang it in a different computer and check its SMART data.
    If it were mine and I noticed something not quite right, I would do a DBAN zero wipe with verify after each sector write.
    Then check SMART again to see whether it reallocated sectors; then at least you know more about the disk.
    But please understand that I have no clue about software RAID; in an LSI hardware RAID, a DBAN wipe and putting the disk back would trigger a rebuild.
    Perhaps someone could confirm it works the same way in a software RAID with OMV?
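
    A rough Linux-side equivalent of that check, as a sketch only: it assumes the suspect disk is /dev/sdj (as it appears later in this thread), that smartmontools is installed, and that the disk is NOT part of an assembled array, because the write test is destructive.

    Code
    # Read the SMART health and attribute table before wiping
    smartctl -a /dev/sdj
    # Destructive write-and-verify pass over the whole disk, similar in spirit to a DBAN zero wipe
    badblocks -wsv /dev/sdj
    # Read SMART again and compare Reallocated_Sector_Ct and Current_Pending_Sector
    smartctl -a /dev/sdj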


    mdadm: added /dev/sdj to /dev/md127 as 7 (possibly out of date)


    I noticed your RAID was coming back and tried to rebuild; perhaps there is something really wrong with that disk and OMV failed to rebuild it.
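
    The "(possibly out of date)" message usually means that disk's event counter lags behind the other members. A quick way to compare them, sketched with the member names used in this thread:

    Code
    # Print each member's event counter; the stale disk shows a lower number
    mdadm --examine /dev/sd[bcdefghijklm] | grep -E '^/dev/sd|Events'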



  • and thx - alex

  • Then leave out the level parameter:
    mdadm --assemble /dev/md127 /dev/sd[bcdefghijklm] --verbose --force


    That was the command you used and the RAID came back, but you must check that disk first!
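
    If the forced assemble brings the array up degraded, i.e. without /dev/sdj, a hedged follow-up might look like this; the last step only makes sense once the disk has checked out OK:

    Code
    # Confirm the array state and which slot is missing
    cat /proc/mdstat
    mdadm --detail /dev/md127
    # Put the suspect disk back so the array can resync (only after it passed the checks above)
    mdadm --manage /dev/md127 --re-add /dev/sdj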


  • @ahab666


    Alex, I scrolled through your last post, and after looking again (it was confusing as you pasted it all in one go) I noticed your RAID is in the mdadm config file, /etc/mdadm/mdadm.conf:
    ARRAY /dev/md/OMV metadata=1.2 UUID=6230a09b:2bd2f0af:b6f72e19:46e3b8b3 name=OMV:OMV


    Code
    update-initramfs -u


    and then a reboot should do the trick.
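
    If that ARRAY line had been missing or stale, a common way to refresh it is sketched below; note it only works once the array is assembled and running, since mdadm --detail --scan fails on an inactive array, as seen further down in this thread.

    Code
    # With the old ARRAY line removed from /etc/mdadm/mdadm.conf first,
    # append the definition of the running array and rebuild the initramfs
    mdadm --detail --scan | tee -a /etc/mdadm/mdadm.conf
    update-initramfs -u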


  • Code
    root@OMV:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : inactive sdb[0] sdl[11] sdk[10] sdj[9] sdi[12] sdh[6] sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
          32231490632 blocks super 1.2
    
    
    unused devices: <none>


    and


    Code
    root@OMV:~# mdadm --detail --scan
    mdadm: cannot open /dev/md/OMV: No such file or directory
  • @ahab666


    Did you check the hard disk?


    Have you run


    Code
    mdadm --assemble /dev/md127 /dev/sd[bcdefghijklm] --verbose --force


    before the

    Code
    update-initramfs -u
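
    One detail worth adding, as a sketch: /proc/mdstat above shows md127 sitting there inactive, so the half-assembled array may need to be stopped before the forced assemble can take hold.

    Code
    # Release the inactive array, then retry the forced assemble
    mdadm --stop /dev/md127
    mdadm --assemble /dev/md127 /dev/sd[bcdefghijklm] --verbose --force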


  • The array is not named correctly. See line 20/21:


    Code
    # definitions of existing MD arrays
    ARRAY /dev/md/OMV metadata=1.2 UUID=6230a09b:2bd2f0af:b6f72e19:46e3b8b3 name=OMV:OMV


    /dev/md/OMV is not a valid RAID device.

  • @WastlJ


    I think that's left over from the point where his RAID was recovered but then crashed!


    The mdadm/kernel setup could create the default mdadm.conf file with --name=NAS.
    When a --name parameter is set, a random md device (it seems to always be 127) is created and linked to /dev/md/NAS.
    Those are two differences, I think, but it depends on his configuration; I believe he simply named his array OMV.
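
    To see how the named array relates to the md127 kernel device on a given system, an illustrative check (paths are typical, not verified against this particular box):

    Code
    # /dev/md/OMV is normally just a symlink to the kernel device, e.g. ../md127
    ls -l /dev/md/
    # The name recorded in the metadata (here OMV:OMV) also shows up in --detail
    mdadm --detail /dev/md127 | grep -i name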


    After your commands he should also run update-initramfs -u.


    But I warned him to check the disk first, as that seems to be the culprit to me.
    As he had been waiting for days I thought I would jump in, but if you prefer I will step back from helping.


