Recover Destroyed RAID5 using mdadm

    • OMV 4.x
    • Resolved
    • Thanks for this: missed it, all my apologies.

      1:

      Source Code

      Command (m for help): ^C
      root@nas:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sdf[4](S) sdd[7](S) sde[6](S) sdc[5](S)
            11720548192 blocks super 1.2
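
      The mdstat output shows md127 inactive, with all four members flagged (S). Before forcing anything, it can be worth checking each member's superblock so the event counts and device roles can be compared; a minimal diagnostic sketch (device names taken from the blkid output below):

      Source Code

      # Print the RAID superblock of each member; compare the "Events"
      # counter and "Device Role" across the four disks.
      mdadm --examine /dev/sd[cdef]

      # Compact summary of every array mdadm can see on these devices
      mdadm --examine --scan --verbose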




      2:

      Source Code

      root@nas:~# blkid
      /dev/sda1: UUID="96bc4db4-c752-40ca-8b74-70d3a81e3148" TYPE="ext4" PARTUUID="7ec649cb-01"
      /dev/sda5: UUID="bd8c27fa-ebfc-4ae7-b101-90a099755c47" TYPE="swap" PARTUUID="7ec649cb-05"
      /dev/sdb2: LABEL="Seagate Backup Plus Drive" UUID="A67487B1748782B3" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="1d595348-aa6d-4331-b86b-20abfd08e5da"
      /dev/sdc: UUID="14b30c45-d415-83ce-9f24-34002fc3499b" UUID_SUB="fd82e71d-3d5f-72ca-2ba0-7e33164a7a13" LABEL="OMV:Raid1" TYPE="linux_raid_member"
      /dev/sdd: UUID="14b30c45-d415-83ce-9f24-34002fc3499b" UUID_SUB="b25e373b-6319-e8cb-71d3-4140c29b1d5c" LABEL="OMV:Raid1" TYPE="linux_raid_member"
      /dev/sde: UUID="14b30c45-d415-83ce-9f24-34002fc3499b" UUID_SUB="d9abefd5-ca45-0aa6-eefb-31ec8f66bbcd" LABEL="OMV:Raid1" TYPE="linux_raid_member"
      /dev/sdf: UUID="14b30c45-d415-83ce-9f24-34002fc3499b" UUID_SUB="c5f5e360-33f4-dc2f-b8b7-721672dc9656" LABEL="OMV:Raid1" TYPE="linux_raid_member"
      /dev/sdb1: PARTLABEL="Microsoft reserved partition" PARTUUID="6ab46b18-2378-4e34-8b3e-0cbb315be203"
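
      All four data disks (sdc through sdf) report the same array UUID and TYPE="linux_raid_member", so the member signatures are still in place. If the full blkid listing is too noisy, it can be narrowed to just the RAID members; a small sketch:

      Source Code

      # Show only devices carrying an md superblock
      blkid -t TYPE="linux_raid_member"
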
      3:

      Source Code

      root@nas:~# fdisk -l | grep "Disk "
      Partition 2 does not start on physical sector boundary.
      Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors
      Disk identifier: 0x7ec649cb
      Partition 1 does not start on physical sector boundary.
      Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
      Disk identifier: BF56CB67-CA8E-4BA0-A9D4-97D6111EBB04
      Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
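
      The fdisk listing confirms four identical 3 TB members (sdc through sdf) alongside the system disk (sda) and the external backup drive (sdb). For a tree view that also shows whether any md device is currently stacked on top of the disks, a sketch using lsblk:

      Source Code

      # Disks, sizes, and any md devices / filesystems layered on them
      lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
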
      4:

      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      INACTIVE-ARRAY /dev/md127 metadata=1.2 name=OMV:Raid1 UUID=14b30c45:d41583ce:9f243400:2fc3499b

      Source Code

      mdadm: Unknown keyword INACTIVE-ARRAY
      INACTIVE-ARRAY /dev/md127 num-devices=4 metadata=1.2 name=OMV:Raid1 UUID=14b30c45:d41583ce:9f243400:2fc3499b
         devices=/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf
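
      The "Unknown keyword" message is because INACTIVE-ARRAY is not a valid mdadm.conf keyword; it is what mdadm --detail --scan emits while an array is not running, so that line should not stay in the config as-is. Once the array is active again, the ARRAY definition can be regenerated; a sketch (on OMV the omv-mkconf mdadm step mentioned below rewrites this file for you, so prefer that here):

      Source Code

      # With the array active, print a proper ARRAY line to review
      mdadm --detail --scan
      # Expected form (UUID as reported above), roughly:
      #   ARRAY /dev/md127 metadata=1.2 name=OMV:Raid1 UUID=14b30c45:d41583ce:9f243400:2fc3499b
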
    • mdadm --stop /dev/md127
      mdadm --assemble --verbose --force /dev/md127 /dev/sd[fdec]

      Post the output of: cat /proc/mdstat
      If that looks OK, then run: omv-mkconf mdadm (see the verification sketch below).
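
      As a follow-up check once the forced assembly has run, a sketch of what to look at (exact block counts and resync state will differ):

      Source Code

      # The array should now be listed as active raid5; look for [UUUU]
      # and whether a resync/recovery is in progress.
      cat /proc/mdstat

      # Full status: array state, event count, and which disks are active
      mdadm --detail /dev/md127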