How to recover a RAID array and filesystem without formatting

    • How to recover a RAID array and filesystem without formatting

      Hi,
      I replaced one HDD in my RAID6 and now I can't assemble the array.
      I deleted the filesystem so that I could stop mdadm, and then ran: sudo mdadm -C /dev/md127 --assume-clean --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
      Now I have nothing... please help me :(


      Source Code

      /dev/sdb1: UUID="babe33cf-3f47-408a-adbc-f734b4822555" TYPE="ext4" PARTUUID="916131fd-01"
      /dev/sdb5: UUID="a785d4b6-e96b-4298-929f-b8016185502b" TYPE="swap" PARTUUID="916131fd-05"
      /dev/sdc: UUID="13c08ee8-50a9-8d2b-bf97-5654b9ba8ff1" UUID_SUB="1b3077dc-36aa-1464-3e84-98c36a1a4a01" LABEL="nas:Raid6" TYPE="linux_raid_member"
      /dev/sda: UUID="172fada9-4237-450b-1bd6-a70db7d74b63" UUID_SUB="8ee94d27-7512-09b7-d6e8-fa176826690c" LABEL="nas:127" TYPE="linux_raid_member"
      /dev/sdd: UUID="172fada9-4237-450b-1bd6-a70db7d74b63" UUID_SUB="3b80834f-2ca6-67b4-c8ee-c3e98472879f" LABEL="nas:127" TYPE="linux_raid_member"
      /dev/sde: UUID="172fada9-4237-450b-1bd6-a70db7d74b63" UUID_SUB="aaa1e468-1ac0-e21e-4c11-0e3ca5544c23" LABEL="nas:127" TYPE="linux_raid_member"

      Source Code

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sde[3](S)
      488255512 blocks super 1.2
      unused devices: <none>

      Source Code

      1. # /etc/fstab: static file system information.
      2. #
      3. # Use 'blkid' to print the universally unique identifier for a
      4. # device; this may be used with UUID= as a more robust way to name devices
      5. # that works even if disks are added and removed. See fstab(5).
      6. #
      7. # <file system> <mount point> <type> <options> <dump> <pass>
      8. # / was on /dev/sda1 during installation
      9. UUID=babe33cf-3f47-408a-adbc-f734b4822555 / ext4 noatime,nodiratime,errors=remount-ro 0 1
      10. # swap was on /dev/sda5 during installation
      11. #UUID=a785d4b6-e96b-4298-929f-b8016185502b none swap sw 0 0
      12. #tmpfs /media/ramdisk tmpfs nodev,nosuid,noexec,nodiratime,size=2048M 0 0
      13. #ramdisk /media/ramdisk tmpfs defaults,size=2g,mode=1777 0 0
      14. # >>> [openmediavault]
      15. # <<< [openmediavault]
      16. tmpfs /tmp tmpfs defaults 0 0
      mdadm --assemble /dev/md127 /dev/sd[bcde] --verbose --force
      mdadm: Unknown keyword INACTIVE-ARRAY
      mdadm: Unknown keyword INACTIVE-ARRAY
      mdadm: looking for devices for /dev/md127
      mdadm: Cannot assemble mbr metadata on /dev/sdb
      mdadm: /dev/sdb has no superblock - assembly aborted
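
      Before trying to re-create anything, it is worth dumping whatever md metadata is still on each disk. A minimal read-only sketch (device names taken from the blkid output above):

      ```shell
      # Read-only: print any md superblock each disk still carries.
      # --examine does not modify the devices.
      for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
          echo "=== $d ==="
          mdadm --examine "$d" || echo "no md superblock on $d"
      done
      ```

      The per-device output (Array UUID, Name, Events, Device Role) shows which array, if any, each disk still believes it belongs to.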
    • What you wanted to do, which was to replace a drive, can all be done from the GUI. Please follow this and post each output, but initially it doesn't look good:

      /dev/sdc Label=nas:Raid6
      /dev/sd[ade] Label=nas:127

      None of the "Linux Raid Member" drives are showing a filesystem: /dev/sd[acde]

      Your fstab shows nothing between lines 14 and 15, which is where the raid mount would be located.

      You've tried to assemble using [bcde]; I am guessing that you are trying to replace [a].

      You may have to boot with a live CD to check the filesystem (if there is one) on each drive.
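
      From the live CD, one read-only way to check each member disk for signatures is `wipefs` with no options, which only lists what it finds (this assumes the drives come up under the same names on the live system):

      ```shell
      # List (do NOT erase) filesystem/raid signatures on each raid member.
      # wipefs is read-only unless -a or -o is given.
      for d in /dev/sda /dev/sdc /dev/sdd /dev/sde; do
          echo "=== $d ==="
          wipefs "$d"
      done
      ```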
      Raid is not a backup! Would you go skydiving without a parachute?
    • I want to replace disk /dev/sdc Label=nas:Raid6

      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sde[3](S)
      488255512 blocks super 1.2

      unused devices: <none>

      blkid
      /dev/sdb1: UUID="babe33cf-3f47-408a-adbc-f734b4822555" TYPE="ext4" PARTUUID="916131fd-01"
      /dev/sdb5: UUID="a785d4b6-e96b-4298-929f-b8016185502b" TYPE="swap" PARTUUID="916131fd-05"
      /dev/sdc: UUID="13c08ee8-50a9-8d2b-bf97-5654b9ba8ff1" UUID_SUB="1b3077dc-36aa-1464-3e84-98c36a1a4a01" LABEL="nas:Raid6" TYPE="linux_raid_member"
      /dev/sda: UUID="172fada9-4237-450b-1bd6-a70db7d74b63" UUID_SUB="8ee94d27-7512-09b7-d6e8-fa176826690c" LABEL="nas:127" TYPE="linux_raid_member"
      /dev/sdd: UUID="172fada9-4237-450b-1bd6-a70db7d74b63" UUID_SUB="3b80834f-2ca6-67b4-c8ee-c3e98472879f" LABEL="nas:127" TYPE="linux_raid_member"
      /dev/sde: UUID="172fada9-4237-450b-1bd6-a70db7d74b63" UUID_SUB="aaa1e468-1ac0-e21e-4c11-0e3ca5544c23" LABEL="nas:127" TYPE="linux_raid_member"

      fdisk -l | grep "Disk "

      Disk /dev/sdb: 298.1 GiB, 320072933376 bytes, 625142448 sectors
      Disk identifier: 0x916131fd
      Disk /dev/sdc: 465.8 GiB, 500107862016 bytes, 976773168 sectors
      Disk /dev/sda: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
      Disk /dev/sdd: 465.8 GiB, 500107862016 bytes, 976773168 sectors
      Disk /dev/sde: 465.8 GiB, 500107862016 bytes, 976773168 sectors

      cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      INACTIVE-ARRAY /dev/md127 metadata=1.2 name=nas:127 UUID=172fada9:4237450b:1bd6a70d:b7d74b63
      INACTIVE-ARRAY /dev/md126 metadata=1.2 name=nas:Raid6 UUID=13c08ee8:50a98d2b:bf975654:b9ba8ff1

      mdadm --detail --scan --verbose
      mdadm: Unknown keyword INACTIVE-ARRAY
      mdadm: Unknown keyword INACTIVE-ARRAY
      INACTIVE-ARRAY /dev/md127 num-devices=1 metadata=1.2 name=nas:127 UUID=172fada9:4237450b:1bd6a70d:b7d74b63
      devices=/dev/sde
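
      The repeated `mdadm: Unknown keyword INACTIVE-ARRAY` errors come from the two `INACTIVE-ARRAY` lines in mdadm.conf: `mdadm --detail --scan` prints that word for arrays that are not running, but the config-file parser only understands `ARRAY`. A sketch of the cleanup (keep a backup first; the UUIDs stay exactly as they are in your conf):

      ```shell
      # Back up, then turn the invalid INACTIVE-ARRAY keyword into ARRAY
      cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf.bak
      sed -i 's/^INACTIVE-ARRAY/ARRAY/' /etc/mdadm/mdadm.conf

      # Rebuild the initramfs so boot-time assembly uses the fixed file
      update-initramfs -u
      ```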
    • Ok, looking at the above I can't help; whatever you have done, you have successfully destroyed your data.

      From the mdadm conf there are 2 arrays specified: md127 name=nas:127 and md126 name=nas:Raid6.

      mdadm --detail shows an inactive array md127 with 1 device, /dev/sde; even trying the following would fail anyway:

      mdadm --stop /dev/md127
      mdadm --assemble --verbose --force /dev/md127 /dev/sd[ade]

      The output from fdisk shows 4 drives, 3x 500 GB and 1x 1 TB. Whilst using mismatched drive sizes is not a problem, it's somewhat of a waste, and why raid such small drives?
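
      One way to judge whether a forced assemble has any chance is to compare the event counters on the surviving members; if they are close together, `--force` is usually safe. A read-only sketch:

      ```shell
      # Compare update times and event counters across the claimed members.
      # Members that drifted far apart in Events are stale.
      for d in /dev/sda /dev/sdd /dev/sde; do
          echo "=== $d ==="
          mdadm --examine "$d" | grep -E 'Update Time|Events|Array State'
      done
      ```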
    • omg ;(
      Any ideas? Connect the drives to another machine and use some HDD recovery tools? How can I connect a drive from the RAID to OS X?

      mdadm --assemble --verbose --force /dev/md127 /dev/sd[ade]
      mdadm: Unknown keyword INACTIVE-ARRAY
      mdadm: Unknown keyword INACTIVE-ARRAY
      mdadm: looking for devices for /dev/md127
      mdadm: /dev/sda is identified as a member of /dev/md127, slot 0.
      mdadm: /dev/sdd is identified as a member of /dev/md127, slot 2.
      mdadm: /dev/sde is identified as a member of /dev/md127, slot 3.
      mdadm: no uptodate device for slot 1 of /dev/md127
      mdadm: added /dev/sdd to /dev/md127 as 2
      mdadm: added /dev/sde to /dev/md127 as 3
      mdadm: added /dev/sda to /dev/md127 as 0
      mdadm: /dev/md127 has been started with 3 drives (out of 4).


      mdadm: Unknown keyword INACTIVE-ARRAY
      /dev/md127:
      Version : 1.2
      Creation Time : Fri Jul 19 12:19:01 2019
      Raid Level : raid6
      Array Size : 976510976 (931.27 GiB 999.95 GB)
      Used Dev Size : 488255488 (465.64 GiB 499.97 GB)
      Raid Devices : 4
      Total Devices : 3
      Persistence : Superblock is persistent

      Intent Bitmap : Internal

      Update Time : Fri Jul 19 12:28:22 2019
      State : clean, degraded
      Active Devices : 3
      Working Devices : 3
      Failed Devices : 0
      Spare Devices : 0

      Layout : left-symmetric
      Chunk Size : 512K

      Name : nas:127 (local to host nas)
      UUID : 172fada9:4237450b:1bd6a70d:b7d74b63
      Events : 2

      Number Major Minor RaidDevice State
      0 8 0 0 active sync /dev/sda
      - 0 0 1 removed
      2 8 48 2 active sync /dev/sdd
      3 8 64 3 active sync /dev/sde


      And I have a filesystem now, but:
      Failed to execute command 'export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin; export LANG=C.UTF-8; mount -v --source '/dev/disk/by-label/raid6' 2>&1' with exit code '32': mount: mount /dev/md127 on /srv/dev-disk-by-label-raid6 failed: Structure needs cleaning
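
      "Structure needs cleaning" is the kernel reporting corrupt ext4 metadata, which is expected after an `mdadm -C` over an existing array. Assuming the filesystem on md127 is the original ext4, a cautious sketch is to dry-run fsck first and only repair once the reported damage looks plausible:

      ```shell
      # Dry run: -n answers "no" to every question, so nothing is changed;
      # -f forces a check even if the filesystem looks clean.
      fsck.ext4 -n -f /dev/md127

      # Only after reviewing the dry run, attempt an actual repair:
      # fsck.ext4 -y /dev/md127
      ```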


    • Ok you have md127 running with 3 drives? - yes
      But I did a reboot :(
      cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      INACTIVE-ARRAY /dev/md127 metadata=1.2 name=nas:127 UUID=172fada9:4237450b:1bd6a70d:b7d74b63
      INACTIVE-ARRAY /dev/md126 metadata=1.2 name=nas:Raid6 UUID=13c08ee8:50a98d2b:bf975654:b9ba8ff1
      root@nas:~# cat /proc/mdstat
      Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
      md127 : active raid6 sde[3] sdd[2] sdb[0]
      976510976 blocks super 1.2 level 6, 512k chunk, algorithm 2 [4/3] [U_UU]
      bitmap: 1/4 pages [4KB], 65536KB chunk

      md126 : inactive sdc[5](S)
      488255512 blocks super 1.2

      unused devices: <none>
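
      mdstat now shows md127 running degraded ([U_UU]) while sdc sits parked as a spare in the stale md126. Assuming sdc carries no data you still need, a sketch of returning it to md127 is to stop the leftover array, wipe sdc's old superblock (destructive for sdc only), and add it back so the RAID6 rebuilds:

      ```shell
      # Stop the stale leftover array that grabbed sdc
      mdadm --stop /dev/md126

      # Destroys sdc's old md metadata only; the other three disks are untouched
      mdadm --zero-superblock /dev/sdc

      # Add sdc back to the degraded array; a rebuild should start
      mdadm /dev/md127 --add /dev/sdc

      # Watch the resync progress
      cat /proc/mdstat
      ```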
    • omv-mkconf mdadm
      mdadm: Unknown keyword INACTIVE-ARRAY
      /usr/share/openmediavault/mkconf/mdadm: 99: [: INACTIVE-ARRAY: unexpected operator
      update-initramfs: Generating /boot/initrd.img-4.19.0-0.bpo.5-amd64
      dropbear: WARNING: Invalid authorized_keys file, remote unlocking of cryptroot via SSH won't work!
      mdadm: Unknown keyword INACTIVE-ARRAY
      update-initramfs: Generating /boot/initrd.img-4.19.0-0.bpo.4-amd64
      dropbear: WARNING: Invalid authorized_keys file, remote unlocking of cryptroot via SSH won't work!
      mdadm: Unknown keyword INACTIVE-ARRAY
      update-initramfs: Generating /boot/initrd.img-4.19.0-0.bpo.2-amd64
      dropbear: WARNING: Invalid authorized_keys file, remote unlocking of cryptroot via SSH won't work!
      mdadm: Unknown keyword INACTIVE-ARRAY
      update-initramfs: Generating /boot/initrd.img-4.19.0-0.bpo.1-amd64
      dropbear: WARNING: Invalid authorized_keys file, remote unlocking of cryptroot via SSH won't work!
      mdadm: Unknown keyword INACTIVE-ARRAY
      update-initramfs: Generating /boot/initrd.img-4.18.0-0.bpo.3-amd64
      dropbear: WARNING: Invalid authorized_keys file, remote unlocking of cryptroot via SSH won't work!
      mdadm: Unknown keyword INACTIVE-ARRAY
      update-initramfs: Generating /boot/initrd.img-4.18.0-0.bpo.1-amd64
      dropbear: WARNING: Invalid authorized_keys file, remote unlocking of cryptroot via SSH won't work!
      mdadm: Unknown keyword INACTIVE-ARRAY

      A small success, but I still can't mount the filesystem.
      (Screenshot: https://ibb.co/6gfp5xk)


    • keramart wrote:

      Before...md127 has sd[bcde]
      I added sdc
      Ok, without appearing to be rude, you are now on your own. I have been trying to assist you in getting this working by going through a process, with you giving information back.

      The output from the commands clearly shows /dev/sd[ade] as being part of the md127 array; whatever you have done, or are doing, you have done yourself.

      Good luck.