No answer on "cat /proc/mdstat"

  • Hello,


    I'm still working on my problem "RAID5 lost after reshape and reboot", but something is going on ... CPU usage is above 50%.


    What does it mean when, after running
    cat /proc/mdstat
    over SSH, there is no answer, just a blank line? Other commands like ls -l are working.


    Thank you


    Trickreich

    • Official post

    No output means no mdadm arrays were found. Try mdadm --assemble --scan

    omv 7.0.5-1 sandworm | 64 bit | 6.8 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.13 | compose 7.1.4 | k8s 7.1.0-3 | cputemp 7.0.1 | mergerfs 7.0.4


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Hello,


    I mean really no output. I have to press Ctrl-C to get the cursor back.


    Could it be that the system is too busy to answer? The load average is above 2.05.
    Any use of mdadm ... also gets no answer.


    I hope a reshape or resize process is running.


    I had a problem with a RAID5 after adding a fifth disk.
    Reshape > bad luck > reboot > RAID5 gone :(
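    A hang like this, where even Ctrl-C is needed to get the prompt back, usually means the reading process is blocked in uninterruptible I/O (kernel state "D") while md is busy. As a rough sketch, filtering a sample ps snapshot (the process names here are illustrative, not taken from this system):

```shell
# Sketch: processes blocked in uninterruptible I/O show state "D" in ps.
# On the live box you would run:  ps -eo state,comm
# Here we filter an illustrative sample snapshot instead:
ps_sample='D    md127_reshape
D    cat
S    sshd
R    ps'
printf '%s\n' "$ps_sample" | awk '$1 == "D" {print $2}'
```

    If cat itself shows up in state "D", the hang is a symptom of the array being busy rather than a broken shell.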

    • Official post

    Reboot it. It will start over.


  • Sorry to ask you again.


    After the reboot the RAID5 is not present.


    Running cat /proc/mdstat gives the following:


    Personalities : [raid6] [raid5] [raid4]
    md126 : inactive sdf[4](S)
    3907017560 blocks super 1.2


    md127 : inactive sda[0] sdc[3] sdd[2] sdb[1]
    15628070240 blocks super 1.2


    I stopped both with
    mdadm --stop /dev/md126
    mdadm --stop /dev/md127


    Then I ran


    mdadm --assemble --verbose --force /dev/md127 /dev/sd[abcdf]


    I got this answer:



    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
    mdadm: /dev/sdb is identified as a member of /dev/md127, slot 1.
    mdadm: /dev/sdc is identified as a member of /dev/md127, slot 3.
    mdadm: /dev/sdd is identified as a member of /dev/md127, slot 2.
    mdadm: /dev/sdf is identified as a member of /dev/md127, slot 4.
    mdadm: Marking array /dev/md127 as 'clean'
    mdadm: /dev/md127 has an active reshape - checking if critical section needs to be restored
    mdadm: too-old timestamp on backup-metadata on device-4
    mdadm: added /dev/sdb to /dev/md127 as 1
    mdadm: added /dev/sdd to /dev/md127 as 2
    mdadm: added /dev/sdc to /dev/md127 as 3
    mdadm: added /dev/sdf to /dev/md127 as 4 (possibly out of date)
    mdadm: added /dev/sda to /dev/md127 as 0
    mdadm: /dev/md127 has been started with 4 drives (out of 5).


    The RAID is now visible in the OMV web interface.


    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sda[0] sdc[3] sdd[2] sdb[1]
    11721051648 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]
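    For reference, the bracketed fields on the blocks line encode the array health: [5/4] means five configured devices with only four active, and the underscore in [UUUU_] marks the missing fifth member. A small sketch pulling those fields out of the snapshot above:

```shell
# Sketch: extract the device-count and member-status fields from a saved
# /proc/mdstat line. "[5/4]" = 5 configured / 4 active; the "_" in
# "[UUUU_]" marks the missing fifth member.
mdstat_line='11721051648 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/4] [UUUU_]'
printf '%s\n' "$mdstat_line" | grep -o '\[[0-9]*/[0-9]*\]'
printf '%s\n' "$mdstat_line" | grep -o '\[[U_]*\]'
```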


    But the details are not showing what percentage of the reshape is done.


    Version : 1.2
    Creation Time : Sat Mar 15 18:24:50 2014
    Raid Level : raid5
    Array Size : 11721051648 (11178.07 GiB 12002.36 GB)
    Used Dev Size : 3907017216 (3726.02 GiB 4000.79 GB)
    Raid Devices : 5
    Total Devices : 4
    Persistence : Superblock is persistent



    Update Time : Sun Aug 7 10:44:25 2016
    State : clean, degraded
    Active Devices : 4
    Working Devices : 4
    Failed Devices : 0
    Spare Devices : 0



    Layout : left-symmetric
    Chunk Size : 512K



    Delta Devices : 1, (4->5)



    Name : Speicherkasten2:SK
    UUID : 66a1b019:6157517c:1accb2aa:85eb0bdf
    Events : 1829



    Number Major Minor RaidDevice State
    0 8 0 0 active sync /dev/sda
    1 8 16 1 active sync /dev/sdb
    2 8 48 2 active sync /dev/sdd
    3 8 32 3 active sync /dev/sdc
    4 0 0 4 removed
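    As a sanity check on those numbers: RAID5 keeps one device's worth of parity, so usable capacity is (devices - 1) times the per-device size. Since the 4-to-5 reshape has not completed, the Array Size above still reflects the old 4-device geometry:

```shell
# RAID5 usable capacity = (devices - 1) * per-device size (in 1K blocks).
# Old geometry: 4 devices, Used Dev Size 3907017216 blocks.
echo $(( 3907017216 * (4 - 1) ))   # prints 11721051648, matching "Array Size"
```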



    Can you help me??? :)

    • Official post

    Try
    mdadm --readwrite /dev/md127
    If it finishes syncing, then zero /dev/sdf (dd if=/dev/zero of=/dev/sdf bs=512 count=100000)
    Stop the array again and reassemble it, this time including sdf.


  • Oh, it's getting better. :)


    After running
    mdadm --readwrite /dev/md127


    the RAID now shows as clean, degraded, reshaping.


    Should I wait to run
    dd if=/dev/zero of=/dev/sdf bs=512 count=100000



    until the reshaping is finished?

    • Official post

    Yes, wait for the reshaping to finish. I recommend backing up your files at that point, before zeroing the drive.
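    While waiting, the percentage can be watched in /proc/mdstat, which gains a progress line during an active reshape. A sketch against an illustrative sample line (the figures are made up, not from this array):

```shell
# Sketch: extract the completion percentage from an mdstat reshape line.
# On the live system:  grep reshape /proc/mdstat  (or: watch cat /proc/mdstat)
progress='[=>...................]  reshape =  7.5% (293076224/3907017216) finish=540.2min speed=111616K/sec'
printf '%s\n' "$progress" | grep -o '[0-9.]*%'
```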


  • And now it's almost perfect. :)


    Thank you for the help.


    Now I have some hours of backup ahead of me. But at least there is something to back up. :)


    If something goes wrong with the zeroing or the next assembly, ...


    I'll be back. :)
