Degraded -> rebuild -> missing RAID/filesystem

  • Hello experts,


After the read and write speeds of my NAS degraded from around 90/60 to 60/30, I checked my RAID in the web manager ...


One HDD was missing from the RAID configuration and the status was degraded ... well, I tried to reintegrate (rebuild) the "missing" HDD - it did not work, an error message occurred ...


So I fast-wiped the HDD and tried the rebuild again, and voila - it worked ... for some time ...


After about 2 hours I got an authentication error message - left click to re-login - and the RAID info and filesystem info are empty ...


I am using OMV 0.5.48 amd64, kernel 2.6.32.5. The RAID level is 6 and there are 12 x 3 TB WD Red HDDs.


I am willing to send some logs - there is a series of error messages in the SMART log ... if you need more info or logs, please tell me which ones.


How can I fix that?


    cheers - alex


Okay, here they are ;)


    cat /proc/mdstat
    Personalities :
    md127 : inactive sdb[0](S) sdj[9](S) sdk[10](S) sdl[13](S) sdm[12](S) sdi[7](S) sdh[6](S) sdg[8](S) sdf[4](S) sde[3](S) sdd[2](S) sdc[1](S)
    35163186720 blocks super 1.2
    unused devices: <none>


    $ fdisk -l
    -dash: fdisk: not found
    $ fdisk
    -dash: fdisk: not found
$ blkid
-dash: blkid: not found
    $


No idea about these "-dash: ...: not found" messages ...


Maybe you can guide me - which SSH client to use and any specific settings in OMV - thanks
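
For context, the (S) flag after every member in the mdstat output above means all twelve disks are sitting as spares in an inactive array. One way to see why (recorded roles and event counters) is to examine a member's superblock - a minimal sketch, assuming the disks are still /dev/sdb through /dev/sdm and the commands are run as root:

Code
# print the md metadata stored on one member: array UUID, device role, event count
mdadm --examine /dev/sdb
# or summarize the interesting fields across all twelve members
mdadm --examine /dev/sd[b-m] | grep -E '^/dev/|Events|Device Role|Array State'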

• Official Post

You must not be logging in as root? Use PuTTY for the client.


    To fix array, try:


    Code
    mdadm --stop /dev/md127
    mdadm --assemble /dev/md127 /dev/sd[bcdefghijklm] --verbose --force
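
If it helps, a quick way to verify the result afterwards - a sketch using standard mdadm/proc checks, assuming the array comes back as /dev/md127:

Code
# run as root, or use full paths such as /sbin/mdadm - a non-root login
# usually lacks /sbin in its PATH, which is why fdisk/blkid were "not found"
cat /proc/mdstat                  # array state and resync progress
mdadm --detail /dev/md127         # member list, degraded/clean state
blkid /dev/md127                  # filesystem signature on the array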

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!



Hello,


As an addendum, I am attaching the two log files of fdisk -l and blkid:




Sorry for the many lines, and thanks - Alex

Quote from "ryecoaaron"

You must not be logging in as root? Use PuTTY for the client.


    To fix array, try:


    Code
    mdadm --stop /dev/md127
    mdadm --assemble /dev/md127 /dev/sd[bcdefghijklm] --verbose --force


Hello,


Will that code recover the files and the filesystem?
And shall I fully wipe the HDD that caused the trouble in the first place, for bad block relocation?
Or shall I just reassemble the array with mdadm --assemble /dev/md127 /dev/sd[bcdefghijkl] --verbose --force and then try to recover the original RAID 6?


    thanks in advance - alex

• Official Post

Wiping the drive doesn't remove the mdadm superblock. You may need to add the following command if you know which drive is bad (replace X with the proper drive letter):


    Code
    mdadm --stop /dev/md127
    mdadm --zero-superblock /dev/sdX
    mdadm --assemble /dev/md127 /dev/sd[bcdefghijklm] --verbose --force


This should bring everything back to RAID 6 with the filesystem intact; after it finishes syncing, reboot.
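
For reference, one way to work out which /dev/sdX is the bad drive is to match serial numbers and check SMART health - a minimal sketch, assuming smartmontools is installed and the members are /dev/sdb through /dev/sdm:

Code
# list serial numbers so the suspect can be matched to a physical bay
for d in /dev/sd[b-m]; do
    echo "$d: $(smartctl -i "$d" | grep -i 'serial number')"
done
# overall health verdict and logged errors for the suspect drive (replace X)
smartctl -H /dev/sdX
smartctl -l error /dev/sdX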


Quote from "ryecoaaron"

Wiping the drive doesn't remove the mdadm superblock. You may need to add the following command if you know which drive is bad (replace X with the proper drive letter):


    Code
    mdadm --stop /dev/md127
    mdadm --zero-superblock /dev/sdX
    mdadm --assemble /dev/md127 /dev/sd[bcdefghijklm] --verbose --force


This should bring everything back to RAID 6 with the filesystem intact; after it finishes syncing, reboot.


Hello again,


Using --zero-superblock caused an error in the next command, but at least I have got my degraded RAID and the filesystem back - thank you so much!


I am now wiping the HDD with the OMV built-in random writes - then I will check the HDD with SpinRite 6 and then decide whether it is an RMA case ...
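
Once the drive checks out (or a replacement arrives), a minimal sketch of putting it back into the degraded array, assuming it still appears as /dev/sdX - mdadm will then resync it into the RAID 6:

Code
# clear any leftover md metadata on the tested/replacement drive (replace X)
mdadm --zero-superblock /dev/sdX
# add it back to the array and watch the rebuild progress
mdadm --add /dev/md127 /dev/sdX
cat /proc/mdstat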


    cheers and thank you soooooo much again - alex
