RAID6 gone, please help

    • RAID6 gone, please help

      Hello,
      I'm new to OMV and installed the latest version.

      Then I created a 16x3 TB RAID 6 and made a btrfs volume on it.

      After I added some more disks, the system had some trouble, so I removed all drives and repaired the main system.

      NOW I have a (paused) 9x8 TB RAID 5 (md0), which is fine so far, but my md127 is still missing...

      Here are some outputs, which will hopefully help:


      cat /proc/mdstat


      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md0 : active (auto-read-only) raid5 sdab[8] sdaa[7] sdw[3] sdy[5] sdu[1] sdv[2] sdx[4] sdz[6] sdt[0]
      62511161344 blocks super 1.2 level 5, 512k chunk, algorithm 2 [9/9] [UUUUUUUUU]
      resync=PENDING
      bitmap: 59/59 pages [236KB], 65536KB chunk


      md127 : inactive sdg[5] sdo[13] sdl[10] sdc[0] sdp[14] sde[3] sdd[2] sdk[9] sdn[12] sdr[1] sdq[15] sdf[4] sdh[6] sdj[8] sdm[11]
      43952032680 blocks super 1.2


      unused devices: <none>
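
      For reference: the "active (auto-read-only)" / "resync=PENDING" state on md0 above usually just means the kernel is waiting for the array to be switched to read-write before it starts the resync. A minimal sketch of how that is typically cleared, assuming md0 is otherwise healthy:

      mdadm --readwrite /dev/md0       # switch md0 to read-write so the pending resync can start
      cat /proc/mdstat                 # the resync should now show progress instead of PENDING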


      fdisk -l | grep "Disk "



      Disk /dev/sda: 119,2 GiB, 128035676160 bytes, 250069680 sectors
      Disk identifier: 0xfcd6e7a1
      Disk /dev/sdb: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sde: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdf: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdh: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdi: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdj: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdq: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdm: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdn: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdr: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdd: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdl: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdo: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdc: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdk: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdp: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdg: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdx: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk /dev/sdz: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk /dev/sdy: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk /dev/sdu: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk /dev/sdt: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk /dev/sdw: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk /dev/sdv: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk /dev/sds: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk /dev/sdaa: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk /dev/sdab: 7,3 TiB, 8001563222016 bytes, 15628053168 sectors
      Disk /dev/md0: 58,2 TiB, 64011429216256 bytes, 125022322688 sectors


      mdadm --detail --scan --verbose


      ARRAY /dev/md127 level=raid6 num-devices=16 metadata=1.2 name=OMV-Zombie:16x3TB UUID=c81fdea7:cb471f15:beed6110:3d433a03
      devices=/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdm,/dev/sdn,/dev/sdo,/dev/sdp,/dev/sdq,/dev/sdr
      ARRAY /dev/md0 level=raid5 num-devices=9 metadata=1.2 name=OMV-Zombie:9x8TB UUID=43d48412:d3e85162:e6c840e5:8a578826
      devices=/dev/sdaa,/dev/sdab,/dev/sdt,/dev/sdu,/dev/sdv,/dev/sdw,/dev/sdx,/dev/sdy,/dev/sdz


      blkid



      /dev/sda1: UUID="01a9e619-b4e0-4f7f-aaef-29b6e0c85ab4" TYPE="ext4" PARTUUID="fcd6e7a1-01"
      /dev/sde: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="48222177-18b7-eb57-c486-f1d2c205919b" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdf: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="614b123b-e98a-1ce1-4650-b8596f7c5439" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdh: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="e58c4e58-4b9d-cf4d-666c-920710205d4a" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdi: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="1dc6e110-dc9d-415e-c967-704aeb7ef2f1" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdj: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="26b6ee63-512d-6068-388b-656e0136fa85" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdq: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="51fd6466-763c-64dd-420f-013ea2319891" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdm: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="622c7df1-0bcf-8230-603d-4da485e094cd" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdn: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="53af1ceb-9e43-593f-94ea-3a876e9232ec" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdr: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="dfc7c737-dd71-1137-60d3-9a19fd630390" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdd: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="07873947-65ad-9128-1ccf-96df9c619687" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdl: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="9d2fea53-4dbe-b6a3-08be-4f7e03301cf2" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdo: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="851fcdb6-a16d-e6f3-f072-3c5fff3aabdd" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdc: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="269b020b-e164-4613-9288-e39f7a1c7910" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdk: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="981b6039-5c3c-351d-3013-4fc025c24bfb" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdp: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="e9fa7366-f72c-f442-fedf-4eb64fd4c417" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdg: UUID="c81fdea7-cb47-1f15-beed-61103d433a03" UUID_SUB="b10adb17-dbf4-7828-fa62-ee3f4e60eb65" LABEL="OMV-Zombie:16x3TB" TYPE="linux_raid_member"
      /dev/sdx: UUID="43d48412-d3e8-5162-e6c8-40e58a578826" UUID_SUB="98b3f022-44a2-560f-223f-03ba67b14f6d" LABEL="OMV-Zombie:9x8TB" TYPE="linux_raid_member"
      /dev/sdz: UUID="43d48412-d3e8-5162-e6c8-40e58a578826" UUID_SUB="c7336b69-b4d1-166a-8466-5635c496f697" LABEL="OMV-Zombie:9x8TB" TYPE="linux_raid_member"
      /dev/sdy: UUID="43d48412-d3e8-5162-e6c8-40e58a578826" UUID_SUB="cf0e8f6c-2311-dafb-317d-fa39c9bef5c8" LABEL="OMV-Zombie:9x8TB" TYPE="linux_raid_member"
      /dev/sdu: UUID="43d48412-d3e8-5162-e6c8-40e58a578826" UUID_SUB="bcbd0c08-97be-9c1b-da53-f259809883ac" LABEL="OMV-Zombie:9x8TB" TYPE="linux_raid_member"
      /dev/sdt: UUID="43d48412-d3e8-5162-e6c8-40e58a578826" UUID_SUB="86811f04-9d01-7790-7e1c-c75f2586af83" LABEL="OMV-Zombie:9x8TB" TYPE="linux_raid_member"
      /dev/sdw: UUID="43d48412-d3e8-5162-e6c8-40e58a578826" UUID_SUB="b6526c06-a7b0-dde8-68f3-4cd81d8246bb" LABEL="OMV-Zombie:9x8TB" TYPE="linux_raid_member"
      /dev/sdv: UUID="43d48412-d3e8-5162-e6c8-40e58a578826" UUID_SUB="891b8f0d-61ae-6ffc-0526-39c7dced0f13" LABEL="OMV-Zombie:9x8TB" TYPE="linux_raid_member"
      /dev/sdaa: UUID="43d48412-d3e8-5162-e6c8-40e58a578826" UUID_SUB="48a7fb47-61c5-1f63-dfa0-6826c85cec8e" LABEL="OMV-Zombie:9x8TB" TYPE="linux_raid_member"
      /dev/sdab: UUID="43d48412-d3e8-5162-e6c8-40e58a578826" UUID_SUB="fefff4a2-cd06-7ff8-896a-87e6b87e777a" LABEL="OMV-Zombie:9x8TB" TYPE="linux_raid_member"



      cat /etc/mdadm/mdadm.conf




      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #


      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions


      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes


      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>


      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=OMV-Zombie:9x8TB UUID=43d48412:d3e85162:e6c840e5:8a578826
      ARRAY /dev/md/16x3TB metadata=1.2 UUID=c81fdea7:cb471f15:beed6110:3d433a03 name=OMV-Zombie:16x3TB
      ARRAY /dev/md/9x8TB metadata=1.2 UUID=43d48412:d3e85162:e6c840e5:8a578826 name=OMV-Zombie:9x8TB



      I already tried to assemble, but this is all I get...

      mdadm --assemble --force --verbose /dev/md127 /dev/sd[cdefghjklmnopqe]


      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sdc is busy - skipping
      mdadm: /dev/sdd is busy - skipping
      mdadm: /dev/sde is busy - skipping
      mdadm: /dev/sdf is busy - skipping
      mdadm: /dev/sdg is busy - skipping
      mdadm: /dev/sdh is busy - skipping
      mdadm: /dev/sdj is busy - skipping
      mdadm: /dev/sdk is busy - skipping
      mdadm: /dev/sdl is busy - skipping
      mdadm: /dev/sdm is busy - skipping
      mdadm: /dev/sdn is busy - skipping
      mdadm: /dev/sdo is busy - skipping
      mdadm: /dev/sdp is busy - skipping
      mdadm: /dev/sdq is busy - skipping
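
      I guess the "busy - skipping" messages mean the inactive md127 shown in /proc/mdstat is still holding the member disks, so they would have to be released before a re-assemble can grab them. A minimal sketch of that sequence, assuming the superblocks are still intact (the device list mirrors the 15 members reported by --detail --scan above):

      mdadm --stop /dev/md127                                      # release the disks held by the inactive array
      mdadm --assemble --force --verbose /dev/md127 /dev/sd[c-h] /dev/sd[j-r]
      cat /proc/mdstat                                             # md127 should come back, possibly degraded (15 of 16)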





      Please help me, so that I can bring this volume back online...

      BR

    • I think I did it...

      After some more tries and reading, I used

      mdadm --create --assume-clean --level=6 --chunk 512 --raid-devices=16 /dev/md127 /dev/sdc /dev/sdg /dev/sdp /dev/sdo /dev/sdk /dev/sdd /dev/sdl /dev/sdi /dev/sdm /dev/sdn /dev/sdq /dev/sde /dev/sdr /dev/sdf /dev/sdj /dev/sdh

      and the RAID 6 went back online.
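
      Just as a caution for anyone finding this later: --create --assume-clean only reproduces the old array if the device order, chunk size and data offset exactly match the original, otherwise the data is silently scrambled. A minimal sketch of how one could cross-check what the members recorded, assuming 1.2 metadata as shown above:

      mdadm --examine /dev/sdc | grep -E 'Raid Level|Chunk Size|Data Offset|Device Role'
      mdadm --detail /dev/md127        # compare level, chunk size and device order against the old layout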

      However, my btrfs volume on it is not visible... so currently I'm using testdisk to look for it...
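
      Before (or alongside) the testdisk scan, it might be worth letting btrfs re-scan the devices and trying a read-only mount with the backup roots. A minimal sketch, assuming the filesystem sits directly on /dev/md127 and /srv/raid6 is just a placeholder mount point:

      btrfs device scan                                # re-register btrfs member devices with the kernel
      btrfs filesystem show                            # does the old filesystem label/UUID show up on /dev/md127?
      mount -o ro,usebackuproot /dev/md127 /srv/raid6  # read-only mount attempt via the backup tree roots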

      In the worst-case scenario, I'll have to destroy the RAID and rebuild it from scratch... ^^ Luckily there was only ~37 TB of data on it, so I can see my LTO6 backup strategy working. It'll be a pain in the ass... but hey, it may take a week to get all the data back. Luckily I'm on vacation right now. ^^
    • Yes, I know this; that's why I have a full backup on LTO6 tapes as well as an "old" DS-1812+ backup (for the most important data) in cold standby... And it's just my private system at home, so having the data 100% available isn't the goal; it's max capacity. Currently I've "filled" ~26 LTO6 tapes with data and still have 34 unused tapes available for further backups. (I got the tapes for a cheap price of ~7 €/tape, OV-LTO901620 (HP C7976A).)
    • Holy grail...

      As I already wrote yesterday: after I re-created my RAID, the btrfs volume was GONE and I couldn't see it, so yesterday I just destroyed the RAID again and re-created it with the original values.
      I found it strange that I then saw my old btrfs volume (of course not mountable, since the RAID was rebuilding)... Today the rebuild finished and I tried to mount the volume => error message...
      So I simply rebooted the system, and after the reboot I saw my old btrfs volume mounted and could access it with Midnight Commander... ^^ ^^ ^^ So it's back online ^^ ^^ ^^

      Saves me quite a lot of time, since the LTO backup is quite slow...
    • I'm currently copying all data to another volume (RAID) and will test the data after it's finished... But it seems that 1 drive may have some errors, so I already ordered a replacement. After I've saved as much data as I can (because it's sooo much faster than LTO), I'll replace the drive and let the array rebuild... Maybe afterwards I can recover the "error" data I can't get now... If this works, I'll still scrub the drives/volume...
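
      A minimal sketch of how that replacement and the final check could look, assuming the suspect drive is /dev/sdX and the new one is /dev/sdY (both placeholders), and /srv/raid6 is a placeholder mount point for the btrfs volume:

      smartctl -a /dev/sdX                           # check reallocated/pending sector counts on the suspect drive
      mdadm --manage /dev/md127 --fail /dev/sdX --remove /dev/sdX
      mdadm --manage /dev/md127 --add /dev/sdY       # add the replacement; the rebuild starts automatically
      cat /proc/mdstat                               # watch the rebuild progress
      btrfs scrub start -B /srv/raid6                # afterwards, verify the checksums of everything on the volume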