Cancel an mdadm --zero-superblock (OMV 2.x)


      Hello.

      I upgraded my RAID5 with a new HDD.

      I hadn't activated anything in the OMV GUI, and yet my RAID was no longer visible.
      In PuTTY I could see that my RAID was degraded. Did it add the HDD by itself??

      After a lot of searching, and I don't know what I was thinking, I ran:
      mdadm --zero-superblock /dev/sda
      And my RAID became FAILED... and sdg (the new HDD) was not integrated into the RAID5.
      I'm a Linux noob and I don't want to make things worse and lose everything...
      Is there a way to recover all my data to a backup?

      NB: Excuse my poor English... ;)

      For more details:
      mdadm --detail /dev/md127 gives:
      /dev/md127:
      Version : 1.2
      Creation Time : Mon May 16 20:34:29 2016
      Raid Level : raid5
      Used Dev Size : -1
      Raid Devices : 7
      Total Devices : 5
      Persistence : Superblock is persistent

      Update Time : Sat Jan 21 16:03:12 2017
      State : active, FAILED, Not Started
      Active Devices : 5
      Working Devices : 5
      Failed Devices : 0
      Spare Devices : 0

      Layout : left-symmetric
      Chunk Size : 512K

      Delta Devices : 1, (6->7)

      Name : NAS-Maison:Stock
      UUID : 343e9353:fa4ec1d3:d6b91659:b517e910
      Events : 7499

      Number Major Minor RaidDevice State
      0 8 128 0 active sync /dev/sdi
      1 8 112 1 active sync /dev/sdh
      2 8 64 2 active sync /dev/sde
      3 8 16 3 active sync /dev/sdb
      4 0 0 4 removed
      5 8 80 5 active sync /dev/sdf
      6 0 0 6 removed



      mdadm --examine /dev/sda
      mdadm: No md superblock detected on /dev/sda.


      mdadm --examine /dev/sdb
      /dev/sdb:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x4
      Array UUID : 343e9353:fa4ec1d3:d6b91659:b517e910
      Name : NAS-Maison:Stock
      Creation Time : Mon May 16 20:34:29 2016
      Raid Level : raid5
      Raid Devices : 7

      Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
      Array Size : 17580810240 (16766.37 GiB 18002.75 GB)
      Used Dev Size : 5860270080 (2794.39 GiB 3000.46 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      State : clean
      Device UUID : c5f513f4:e8e3b192:7874e636:d959bc3e

      Reshape pos'n : 18432 (18.00 MiB 18.87 MB)
      Delta Devices : 1 (6->7)

      Update Time : Sat Jan 21 16:03:12 2017
      Checksum : 6a5a89 - correct
      Events : 7499

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : Active device 3
      Array State : AAAAAA. ('A' == active, '.' == missing)


      mdadm --examine /dev/sde
      /dev/sde:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x4
      Array UUID : 343e9353:fa4ec1d3:d6b91659:b517e910
      Name : NAS-Maison:Stock
      Creation Time : Mon May 16 20:34:29 2016
      Raid Level : raid5
      Raid Devices : 7

      Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
      Array Size : 17580810240 (16766.37 GiB 18002.75 GB)
      Used Dev Size : 5860270080 (2794.39 GiB 3000.46 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      State : clean
      Device UUID : 9b356513:b4b16eb6:d739e100:137940ee

      Reshape pos'n : 18432 (18.00 MiB 18.87 MB)
      Delta Devices : 1 (6->7)

      Update Time : Sat Jan 21 16:03:12 2017
      Checksum : bcf74cc2 - correct
      Events : 7499

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : Active device 2
      Array State : AAAAAA. ('A' == active, '.' == missing)


      mdadm --examine /dev/sdf
      /dev/sdf:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x4
      Array UUID : 343e9353:fa4ec1d3:d6b91659:b517e910
      Name : NAS-Maison:Stock
      Creation Time : Mon May 16 20:34:29 2016
      Raid Level : raid5
      Raid Devices : 7

      Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
      Array Size : 17580810240 (16766.37 GiB 18002.75 GB)
      Used Dev Size : 5860270080 (2794.39 GiB 3000.46 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      State : clean
      Device UUID : 358d754e:a6232882:0ee0fae0:efb054a8

      Reshape pos'n : 18432 (18.00 MiB 18.87 MB)
      Delta Devices : 1 (6->7)

      Update Time : Sat Jan 21 16:03:12 2017
      Checksum : 5deef465 - correct
      Events : 7499

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : Active device 5
      Array State : AAAAAA. ('A' == active, '.' == missing)


      mdadm --examine /dev/sdg
      /dev/sdg:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x4
      Array UUID : 343e9353:fa4ec1d3:d6b91659:b517e910
      Name : NAS-Maison:Stock
      Creation Time : Mon May 16 20:34:29 2016
      Raid Level : raid5
      Raid Devices : 7

      Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
      Array Size : 17580810240 (16766.37 GiB 18002.75 GB)
      Used Dev Size : 5860270080 (2794.39 GiB 3000.46 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      State : active
      Device UUID : acdb4fd4:3a19b021:dfd36849:997e4ec7

      Reshape pos'n : 18432 (18.00 MiB 18.87 MB)
      Delta Devices : 1 (6->7)

      Update Time : Sat Jan 21 15:02:19 2017
      Checksum : ab7ebaa - correct
      Events : 7494

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : Active device 6
      Array State : AAAAAAA ('A' == active, '.' == missing)


      mdadm --examine /dev/sdh
      /dev/sdh:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x4
      Array UUID : 343e9353:fa4ec1d3:d6b91659:b517e910
      Name : NAS-Maison:Stock
      Creation Time : Mon May 16 20:34:29 2016
      Raid Level : raid5
      Raid Devices : 7

      Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
      Array Size : 17580810240 (16766.37 GiB 18002.75 GB)
      Used Dev Size : 5860270080 (2794.39 GiB 3000.46 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      State : clean
      Device UUID : 38be4781:f6ba975f:06e16284:b1aeeb68

      Reshape pos'n : 18432 (18.00 MiB 18.87 MB)
      Delta Devices : 1 (6->7)

      Update Time : Sat Jan 21 16:03:12 2017
      Checksum : d22fbb6d - correct
      Events : 7499

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : Active device 1
      Array State : AAAAAA. ('A' == active, '.' == missing)


      mdadm --examine /dev/sdi
      /dev/sdi:
      Magic : a92b4efc
      Version : 1.2
      Feature Map : 0x4
      Array UUID : 343e9353:fa4ec1d3:d6b91659:b517e910
      Name : NAS-Maison:Stock
      Creation Time : Mon May 16 20:34:29 2016
      Raid Level : raid5
      Raid Devices : 7

      Avail Dev Size : 5860271024 (2794.40 GiB 3000.46 GB)
      Array Size : 17580810240 (16766.37 GiB 18002.75 GB)
      Used Dev Size : 5860270080 (2794.39 GiB 3000.46 GB)
      Data Offset : 262144 sectors
      Super Offset : 8 sectors
      State : clean
      Device UUID : 2c4420fd:7d08af42:85993da1:c81a0804

      Reshape pos'n : 18432 (18.00 MiB 18.87 MB)
      Delta Devices : 1 (6->7)

      Update Time : Sat Jan 21 16:03:12 2017
      Checksum : e916b37d - correct
      Events : 7499

      Layout : left-symmetric
      Chunk Size : 512K

      Device Role : Active device 0
      Array State : AAAAAA. ('A' == active, '.' == missing)


      cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      md127 : inactive sdi[0] sdf[5] sdb[3] sde[2] sdh[1]
      14650677560 blocks super 1.2

      unused devices: <none>


      Please Help
    • blkid
      /dev/sdb: UUID="343e9353-fa4e-c1d3-d6b9-1659b517e910" UUID_SUB="c5f513f4-e8e3-b192-7874-e636d959bc3e" LABEL="NAS-Maison:Stock" TYPE="linux_raid_member"
      /dev/sdg: UUID="343e9353-fa4e-c1d3-d6b9-1659b517e910" UUID_SUB="acdb4fd4-3a19-b021-dfd3-6849997e4ec7" LABEL="NAS-Maison:Stock" TYPE="linux_raid_member"
      /dev/sdh: UUID="343e9353-fa4e-c1d3-d6b9-1659b517e910" UUID_SUB="38be4781-f6ba-975f-06e1-6284b1aeeb68" LABEL="NAS-Maison:Stock" TYPE="linux_raid_member"
      /dev/sdi: UUID="343e9353-fa4e-c1d3-d6b9-1659b517e910" UUID_SUB="2c4420fd-7d08-af42-8599-3da1c81a0804" LABEL="NAS-Maison:Stock" TYPE="linux_raid_member"
      /dev/sdc1: UUID="2642ec6c-7f02-40f5-8d30-9278d409aee7" TYPE="ext4"
      /dev/sdc5: UUID="7b189bc1-bc83-4ed5-bf4a-d5933a418d3f" TYPE="swap"
      /dev/sdd1: UUID="b07fb66d-1528-49ef-8760-6b762f9eacf7" TYPE="ext4"
      /dev/sde: UUID="343e9353-fa4e-c1d3-d6b9-1659b517e910" UUID_SUB="9b356513-b4b1-6eb6-d739-e100137940ee" LABEL="NAS-Maison:Stock" TYPE="linux_raid_member"
      /dev/sdf: UUID="343e9353-fa4e-c1d3-d6b9-1659b517e910" UUID_SUB="358d754e-a623-2882-0ee0-fae0efb054a8" LABEL="NAS-Maison:Stock" TYPE="linux_raid_member"


      fdisk -l | grep "Disk "
      Disk /dev/sda doesn't contain a valid partition table
      Disk /dev/sdb doesn't contain a valid partition table

      WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.

      Disk /dev/sde doesn't contain a valid partition table
      Disk /dev/sdf doesn't contain a valid partition table
      Disk /dev/sdg doesn't contain a valid partition table
      Disk /dev/sdh doesn't contain a valid partition table
      Disk /dev/sdi doesn't contain a valid partition table
      Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdc: 120.0 GB, 120034123776 bytes
      Disk identifier: 0x0005618c
      Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sde: 3000.6 GB, 3000592982016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdf: 3000.6 GB, 3000592982016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdg: 3000.6 GB, 3000592982016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdh: 3000.6 GB, 3000592982016 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdi: 3000.6 GB, 3000592982016 bytes
      Disk identifier: 0x00000000


      cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays


      mdadm --detail --scan --verbose
      mdadm: cannot open /dev/md/Stock: No such file or directory


      The RAID contains 7 HDDs: 7× WD Red 3 TB (WD30EFRX).
      A Kingston SSD for the OS (sdc).
      A 2 TB WD Green for torrent activity (sdd).

      I hope someone can help me :/ ;(
    • If one drive is new and you zeroed the superblock on another, this may not end well, but try:

      mdadm --stop /dev/md127
      mdadm --assemble --force --verbose /dev/md127 /dev/sd[abefghi]
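
      A cautious extra step before running these (just a sketch; the device list is taken from the output above and the backup path is arbitrary) is to snapshot every member's metadata, so the old slot assignments and event counts stay on record:

      # save each member's md superblock details before touching the array
      mkdir -p /root/md-backup
      for d in /dev/sd[abefghi]; do
          mdadm --examine "$d" > "/root/md-backup/$(basename "$d").examine" 2>&1
      done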
    • Thank you very much for taking the time to help me.

      The result:
      mdadm: looking for devices for /dev/md127
      mdadm: no recogniseable superblock on /dev/sda
      mdadm: /dev/sda has no superblock - assembly aborted

      sdg is the new HDD and doesn't have the same Events number as the others.
      I didn't add it manually; OMV added it by itself when I clicked Resize in File Systems.
    • bugado wrote:

      sdg is the new HDD and doesn't have the same Events number as the others.
      I've never even looked at the Events number.

      bugado wrote:

      OMV added it by itself
      OMV didn't add it. mdadm did, but I don't know if it added it to the array or added it as a spare.

      This is why you need a backup. RAID is not a backup.

      Try:

      mdadm --assemble --force --verbose /dev/md127 /dev/sd[befhi]

      And always post cat /proc/mdstat after trying command(s).
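
      Since the Events counters came up, here is a quick way to compare them across the members (a sketch; the device list is the one from this thread):

      # print each member's Events counter, role and last update time
      for d in /dev/sd[befghi]; do
          echo "== $d =="
          mdadm --examine "$d" | grep -E 'Events|Device Role|Update Time'
      done

      A member whose Events value lags behind the rest (sdg above: 7494 versus 7499) was not up to date when the array stopped.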
    • For mdadm --assemble --force --verbose /dev/md127 /dev/sd[befhi], PuTTY gave me:

      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sdb is identified as a member of /dev/md127, slot 3.
      mdadm: /dev/sde is identified as a member of /dev/md127, slot 2.
      mdadm: /dev/sdf is identified as a member of /dev/md127, slot 5.
      mdadm: /dev/sdh is identified as a member of /dev/md127, slot 1.
      mdadm: /dev/sdi is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/md127 has an active reshape - checking if critical section needs to be restored
      mdadm: added /dev/sdh to /dev/md127 as 1
      mdadm: added /dev/sde to /dev/md127 as 2
      mdadm: added /dev/sdb to /dev/md127 as 3
      mdadm: no uptodate device for slot 4 of /dev/md127
      mdadm: added /dev/sdf to /dev/md127 as 5
      mdadm: no uptodate device for slot 6 of /dev/md127
      mdadm: added /dev/sdi to /dev/md127 as 0
      mdadm: /dev/md127 assembled from 5 drives - not enough to start the array.

      Can't sdg be removed from the array?

      And cat /proc/mdstat:
      Personalities : [raid6] [raid5] [raid4]
      md127 : inactive sdi[0](S) sdf[5](S) sdb[3](S) sde[2](S) sdh[1](S)
      14650677560 blocks super 1.2

      You are right; until now I thought RAID 5 was good enough as a backup...
      What a mistake.
      I hope I can restore my files, and I will change my approach to backups...
    • Not good. The array thinks it is a 7-drive RAID 5 array. If two drives are failed or missing, it won't start. The only thing left I can tell you to try results in wiping the array about half the time. Risky, but:

      mdadm --create /dev/md127 --level=5 --assume-clean --verbose --raid-devices=6 /dev/sd[abefhi]
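
      If you do try it, two precautions are worth taking first (a sketch, not part of the command above; the paths are examples). mdadm --create with --assume-clean only leaves the data readable if the drive order, chunk size, layout, data offset and device count match the original array exactly, so compare the order passed on the command line against the old Device Role numbers from --examine. The current superblocks can also be saved raw, in case the old layout has to be looked at again:

      # keep a copy of the first few MiB of every member (this covers the
      # v1.2 superblock region) before --create overwrites it
      mkdir -p /root/md-headers
      for d in /dev/sd[abefhi]; do
          dd if="$d" of="/root/md-headers/$(basename "$d").head" bs=1M count=4
      done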
    • I don't have any other choice, so I'll try...

      The result:
      mdadm: layout defaults to left-symmetric
      mdadm: layout defaults to left-symmetric
      mdadm: chunk size defaults to 512K
      mdadm: /dev/sdb appears to be part of a raid array:
      level=raid5 devices=7 ctime=Mon May 16 20:34:29 2016
      mdadm: /dev/sde appears to be part of a raid array:
      level=raid5 devices=7 ctime=Mon May 16 20:34:29 2016
      mdadm: /dev/sdf appears to be part of a raid array:
      level=raid5 devices=7 ctime=Mon May 16 20:34:29 2016
      mdadm: /dev/sdh appears to be part of a raid array:
      level=raid5 devices=7 ctime=Mon May 16 20:34:29 2016
      mdadm: /dev/sdi appears to be part of a raid array:
      level=raid5 devices=7 ctime=Mon May 16 20:34:29 2016
      mdadm: size set to 2930135040K
      Continue creating array? y
      mdadm: Defaulting to version 1.2 metadata
      mdadm: array /dev/md127 started.

      cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      md127 : active (auto-read-only) raid5 sdi[5] sdh[4] sdf[3] sde[2] sdb[1] sda[0]
      14650675200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]

      unused devices: <none>

      mdadm --detail /dev/md127
      /dev/md127:
      Version : 1.2
      Creation Time : Tue Jan 24 14:44:28 2017
      Raid Level : raid5
      Array Size : 14650675200 (13971.97 GiB 15002.29 GB)
      Used Dev Size : 2930135040 (2794.39 GiB 3000.46 GB)
      Raid Devices : 6
      Total Devices : 6
      Persistence : Superblock is persistent

      Update Time : Tue Jan 24 14:44:28 2017
      State : clean
      Active Devices : 6
      Working Devices : 6
      Failed Devices : 0
      Spare Devices : 0

      Layout : left-symmetric
      Chunk Size : 512K

      Name : NasMaison:127 (local to host NasMaison)
      UUID : 3870e4d0:e0fd2494:b32af0f9:3d47e9d7
      Events : 0

      Number Major Minor RaidDevice State
      0 8 0 0 active sync /dev/sda
      1 8 16 1 active sync /dev/sdb
      2 8 64 2 active sync /dev/sde
      3 8 80 3 active sync /dev/sdf
      4 8 112 4 active sync /dev/sdh
      5 8 128 5 active sync /dev/sdi

      The OMV GUI recognises the RAID as clean, but under File Systems it shows n/a.
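
      Before switching the array to read-write, a non-destructive way to check whether the old ext4 filesystem survived the re-create (a sketch; it assumes the data filesystem was ext4):

      # both commands are read-only and will not modify the array
      dumpe2fs -h /dev/md127     # prints the ext4 superblock header if one is found
      fsck.ext4 -n /dev/md127    # -n answers "no" to every repair prompt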
    • mdadm --readwrite /dev/md127
      Nothing was displayed.

      cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4]
      md127 : active raid5 sdi[5] sdh[4] sdf[3] sde[2] sdb[1] sda[0]
      14650675200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]

      unused devices: <none>

      blkid
      /dev/sdb: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="659ed876-b4e8-167e-3c46-5cd94b431a84" LABEL="NasMaison:127" TYPE="linux_raid_member"
      /dev/sdg: UUID="343e9353-fa4e-c1d3-d6b9-1659b517e910" UUID_SUB="acdb4fd4-3a19-b021-dfd3-6849997e4ec7" LABEL="NAS-Maison:Stock" TYPE="linux_raid_member"
      /dev/sdh: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="50c2a64a-bb40-e6a2-6613-0a53d8a7fb67" LABEL="NasMaison:127" TYPE="linux_raid_member"
      /dev/sdi: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="51a7c2ae-c1ad-4693-e75e-7bb93b699a0a" LABEL="NasMaison:127" TYPE="linux_raid_member"
      /dev/sdc1: UUID="2642ec6c-7f02-40f5-8d30-9278d409aee7" TYPE="ext4"
      /dev/sdc5: UUID="7b189bc1-bc83-4ed5-bf4a-d5933a418d3f" TYPE="swap"
      /dev/sdd1: UUID="b07fb66d-1528-49ef-8760-6b762f9eacf7" TYPE="ext4"
      /dev/sde: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="c39dc121-8694-8d16-6312-3aa644484849" LABEL="NasMaison:127" TYPE="linux_raid_member"
      /dev/sdf: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="f43cae35-4d82-4acb-2e89-41574e351322" LABEL="NasMaison:127" TYPE="linux_raid_member"
      /dev/sda: UUID="3870e4d0-e0fd-2494-b32a-f0f93d47e9d7" UUID_SUB="f746cc3b-3a52-dae6-7422-21a625764976" LABEL="NasMaison:127" TYPE="linux_raid_member"
    • Well, unfortunately, this is one of those times where the array was wiped. Since it is running, you could try extundelete (if it was ext4) or photorec to recover files if you have other drives you can recover to.
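
      A minimal way to run either tool, assuming /mnt/rescue is a mount point on a separate disk with enough free space (the mount point is an assumption; extundelete and photorec come from the Debian extundelete and testdisk packages):

      apt-get install extundelete testdisk
      mkdir -p /mnt/rescue/recovered
      cd /mnt/rescue/recovered
      extundelete /dev/md127 --restore-all    # writes into ./RECOVERED_FILES
      # or carve files regardless of filesystem damage (photorec is menu-driven,
      # /d presets the destination directory):
      photorec /d /mnt/rescue/recovered/photorec /dev/md127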
    • bugado wrote:

      You think I can recover my files (my RAID 5 is ext4) and copy them to some other disk?
      Possibly. I have done it before.

      bugado wrote:

      extundelete /dev/md127 --restore-directory /Audio
      I think that should work. You need to change to the directory that you intend to recover files TO.



      bugado wrote:

      On the RAID we just mounted? Or disk by disk?
      It works by filesystem. So, you want the whole array.
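
      Putting that together (a sketch; /mnt/rescue stands for any directory on another disk, and /Audio is the directory from the question above):

      cd /mnt/rescue                          # recover TO a different disk, as noted above
      extundelete /dev/md127 --restore-directory /Audio
      ls RECOVERED_FILES/                     # extundelete drops the results here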