Cancel an mdadm --zero-superblock

  • Hello.


    I upgraded my RAID5 with a new HDD.


    In the OMV GUI I didn't activate anything, and yet my RAID was still not visible.
    In PuTTY I could see my RAID was degraded. Did it add the HDD by itself??


    After a lot of searching, and I don't know what I was thinking, I ran:
    mdadm --zero-superblock /dev/sda
    And my RAID became FAILED... and sdg (the new HDD) was not integrated into the RAID5.
    I'm a Linux noob and I don't want to push my luck and lose everything...
    Is there a way to recover all my data to a backup?


    NB: Excuse me for my poor English... ;)


    For more details:
    mdadm --detail /dev/md127 gives:


    mdadm --examine /dev/sda 


    mdadm --examine /dev/sdb 


    mdadm --examine /dev/sde 


    mdadm --examine /dev/sdf 


    mdadm --examine /dev/sdg 


    mdadm --examine /dev/sdh 


    mdadm --examine /dev/sdi 


    cat /proc/mdstat


    Please help.

  • blkid


    fdisk -l | grep "Disk "


    cat /etc/mdadm/mdadm.conf


    mdadm --detail --scan --verbose


    The RAID contains 7 HDDs: 7x WD Red 3 TB (WD30EFRX).
    There is also a Kingston SSD for the OS (sdc)
    and a 2 TB WD Green for torrent activities (sdd).


    I hope someone can help me :/ ;(

    • Official Post

    If one drive is new and you zeroed the superblock on another, it may not be good, but try:


    mdadm --stop /dev/md127
    mdadm --assemble --force --verbose /dev/md127 /dev/sd[abefghi]
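

    Before forcing the assembly, it can help to compare what each member disk's superblock still reports; a minimal check, assuming the same device letters as above:


    # list the array UUID, event counter and device role recorded on each member
    mdadm --examine /dev/sd[abefghi] | grep -E '/dev/sd|Array UUID|Events|Device Role'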


  • Thank you very much for taking the time to help me.


    The result:
    mdadm: looking for devices for /dev/md127
    mdadm: no recogniseable superblock on /dev/sda
    mdadm: /dev/sda has no superblock - assembly aborted


    sdg is the new HDD and doesn't have the same Events number as the others.
    I didn't add it manually; OMV added it by itself when I clicked Resize under File Systems.

    • Official Post

    sdg is the new HDD and doesn't have the same Events number as the others.

    I've never even looked at the Events number.


    OMV added it by itself

    OMV didn't add it. mdadm did, but I don't know if it added it to the array or added it as a spare.
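

    One way to tell which role the new disk was given, assuming its superblock is still intact, is to check what sdg reports:


    # "Device Role : spare" means it was only added as a spare;
    # "Device Role : Active device N" means it became a full member
    mdadm --examine /dev/sdg | grep -i 'Device Role'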


    This is why you need a backup. RAID is not a backup.


    Try:


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[befhi]


    And always post cat /proc/mdstat after trying command(s).
    cat /proc/mdstat


  • For mdadm --assemble --force --verbose /dev/md127 /dev/sd[befhi], PuTTY gave me:


    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdb is identified as a member of /dev/md127, slot 3.
    mdadm: /dev/sde is identified as a member of /dev/md127, slot 2.
    mdadm: /dev/sdf is identified as a member of /dev/md127, slot 5.
    mdadm: /dev/sdh is identified as a member of /dev/md127, slot 1.
    mdadm: /dev/sdi is identified as a member of /dev/md127, slot 0.
    mdadm: /dev/md127 has an active reshape - checking if critical section needs to be restored
    mdadm: added /dev/sdh to /dev/md127 as 1
    mdadm: added /dev/sde to /dev/md127 as 2
    mdadm: added /dev/sdb to /dev/md127 as 3
    mdadm: no uptodate device for slot 4 of /dev/md127
    mdadm: added /dev/sdf to /dev/md127 as 5
    mdadm: no uptodate device for slot 6 of /dev/md127
    mdadm: added /dev/sdi to /dev/md127 as 0
    mdadm: /dev/md127 assembled from 5 drives - not enough to start the array.


    Can't I remove sdg from the array?


    and cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : inactive sdi[0](S) sdf[5](S) sdb[3](S) sde[2](S) sdh[1](S)
    14650677560 blocks super 1.2


    You are right; until now I thought RAID 5 was fine as a backup...
    What a mistake.
    I hope I can restore my files, and I will change my mind about backups...

    • Official Post

    Not good. The array thinks it is a 7-drive RAID 5 array. If two drives are failed/missing, it won't start. The only thing left I can tell you to try results in wiping the array about half the time. Risky, but:


    mdadm --create /dev/md127 --level=5 --assume-clean --verbose --raid-devices=6 /dev/sd[abefhi]
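

    Before recreating, double-check the original slot order from the surviving superblocks, because a wrong device order in --create is what usually destroys the data. A quick check, assuming the five disks that still assembled above:


    # each surviving member records its original slot as "Device Role : Active device N"
    mdadm --examine /dev/sd[befhi] | grep -E '/dev/sd|Device Role'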


  • I don't have any other choice, so I tried...


    The result:
    mdadm: layout defaults to left-symmetric
    mdadm: layout defaults to left-symmetric
    mdadm: chunk size defaults to 512K
    mdadm: /dev/sdb appears to be part of a raid array:
    level=raid5 devices=7 ctime=Mon May 16 20:34:29 2016
    mdadm: /dev/sde appears to be part of a raid array:
    level=raid5 devices=7 ctime=Mon May 16 20:34:29 2016
    mdadm: /dev/sdf appears to be part of a raid array:
    level=raid5 devices=7 ctime=Mon May 16 20:34:29 2016
    mdadm: /dev/sdh appears to be part of a raid array:
    level=raid5 devices=7 ctime=Mon May 16 20:34:29 2016
    mdadm: /dev/sdi appears to be part of a raid array:
    level=raid5 devices=7 ctime=Mon May 16 20:34:29 2016
    mdadm: size set to 2930135040K
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md127 started.


    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active (auto-read-only) raid5 sdi[5] sdh[4] sdf[3] sde[2] sdb[1] sda[0]
    14650675200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]


    unused devices: <none>


     mdadm --detail /dev/md127
    /dev/md127:
    Version : 1.2
    Creation Time : Tue Jan 24 14:44:28 2017
    Raid Level : raid5
    Array Size : 14650675200 (13971.97 GiB 15002.29 GB)
    Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
    Raid Devices : 6
    Total Devices : 6
    Persistence : Superblock is persistent


    Update Time : Tue Jan 24 14:44:28 2017
    State : clean
    Active Devices : 6
    Working Devices : 6
    Failed Devices : 0
    Spare Devices : 0


    Layout : left-symmetric
    Chunk Size : 512K


    Name : NasMaison:127 (local to host NasMaison)
    UUID : 3870e4d0:e0fd2494:b32af0f9:3d47e9d7
    Events : 0


    Number Major Minor RaidDevice State
    0 8 0 0 active sync /dev/sda
    1 8 16 1 active sync /dev/sdb
    2 8 64 2 active sync /dev/sde
    3 8 80 3 active sync /dev/sdf
    4 8 112 4 active sync /dev/sdh
    5 8 128 5 active sync /dev/sdi


    The OMV GUI recognises the RAID as clean, but the file system shows n/a.

    • Official Post

    mdadm --readwrite /dev/md127
    cat /proc/mdstat
    blkid
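

    A non-destructive way to see whether the old ext4 filesystem survived the re-create, assuming it really was ext4, is a read-only check:


    # -n opens the filesystem read-only and answers "no" to everything, so nothing is changed
    fsck.ext4 -n /dev/md127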


  • mdadm --readwrite /dev/md127
    Nothing was displayed.


    cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4]
    md127 : active raid5 sdi[5] sdh[4] sdf[3] sde[2] sdb[1] sda[0]
    14650675200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]


    unused devices: <none>


    blkid
    /dev/sdb: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="659ed876-b4e8-167e-3c46-5cd94b431a84" LABEL="NasMaison:127" TYPE="linux_raid_member"
    /dev/sdg: UUID="343e9353-fa4e-c1d3-d6b9-1659b517e910" UUID_SUB="acdb4fd4-3a19-b021-dfd3-6849997e4ec7" LABEL="NAS-Maison:Stock" TYPE="linux_raid_member"
    /dev/sdh: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="50c2a64a-bb40-e6a2-6613-0a53d8a7fb67" LABEL="NasMaison:127" TYPE="linux_raid_member"
    /dev/sdi: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="51a7c2ae-c1ad-4693-e75e-7bb93b699a0a" LABEL="NasMaison:127" TYPE="linux_raid_member"
    /dev/sdc1: UUID="2642ec6c-7f02-40f5-8d30-9278d409aee7" TYPE="ext4"
    /dev/sdc5: UUID="7b189bc1-bc83-4ed5-bf4a-d5933a418d3f" TYPE="swap"
    /dev/sdd1: UUID="b07fb66d-1528-49ef-8760-6b762f9eacf7" TYPE="ext4"
    /dev/sde: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="c39dc121-8694-8d16-6312-3aa644484849" LABEL="NasMaison:127" TYPE="linux_raid_member"
    /dev/sdf: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="f43cae35-4d82-4acb-2e89-41574e351322" LABEL="NasMaison:127" TYPE="linux_raid_member"
    /dev/sda: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="f746cc3b-3a52-dae6-7422-21a625764976" LABEL="NasMaison:127" TYPE="linux_raid_member"

    • Official Post

    Well, unfortunately, this is one of those times where the array was wiped. Since it is running, you could try extundelete (if it was ext4) or photorec to recover files if you have other drives you can recover to.
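

    For the photorec route, a rough sketch (photorec ships in the testdisk package and asks interactively where to write recovered files, so point it at a directory on another disk):


    apt-get install testdisk
    # scans the block device for known file signatures and copies whatever it finds
    photorec /dev/md127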


  • ;(;(
    With extundelete, do you think I can recover my files (my RAID 5 is ext4) and copy them to some other disk?


    Sorry, but do I have to do:
    apt-get install extundelete
    extundelete /dev/md127 --restore-directory /Audio
    for example?
    On the RAID we just assembled, or disk by disk?

    • Official Post

    you think I can recover my files (my RAID 5 is ext4) and copy them to some other disk?

    Possibly. I have before.


    extundelete /dev/md127 --restore-directory /Audio

    I think that should work. You need to change to the directory that you intend to recover files TO.


    On the RAID we just assembled, or disk by disk?

    It works by filesystem. So, you want the whole array.
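

    Putting that together, a minimal sketch, assuming the 2 TB WD Green (/dev/sdd1) is mounted somewhere like /media/recovery with enough free space (the directory names are only examples):


    apt-get install extundelete
    # recover onto a different disk: create a target directory there and work from it
    mkdir -p /media/recovery/restored
    cd /media/recovery/restored
    # extundelete writes recovered files into ./RECOVERED_FILES under the current directory
    extundelete /dev/md127 --restore-directory /Audio
    # or, to attempt everything on the filesystem:
    extundelete /dev/md127 --restore-all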

