[RAID1] not assembling after move to other hardware

    • [RAID1] not assembling after move to other hardware

      Hello gents,

      To reduce power consumption, I decided to move a 2-disk RAID1 array to another, more power-efficient machine.
      There was a good review about this setup: Technikaffe: ASROCK Q1900 & openmediavault

      Some history

      The array consists of 2 Seagate 4TB disks. It was initially created on OMV 2.
      Later I had to re-install OMV 3 on the machine and the array was recognized when using:

      mdadm --assemble --scan

      What happened now

      Based on the good experience with the last "move", I simply shut down the old machine.
      After that I removed the disks and put them into the new machine.

      But the new machine does not find the array:

      root@files:~# mdadm --assemble --scan
      mdadm: No arrays found in config file or automatically

      So I tried specifying the devices explicitly:

      root@files:~# mdadm --assemble /dev/md0 /dev/sdc /dev/sdd
      mdadm: Cannot assemble mbr metadata on /dev/sdc
      mdadm: /dev/sdc has no superblock - assembly aborted
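For reference, when assembly aborts like this, a few non-destructive commands can show why each candidate device is rejected (a sketch; these are standard mdadm/util-linux tools, not something from the original post):

```shell
# Print why each device is skipped during assembly
mdadm --assemble --scan --verbose
# Scan all block devices for md superblocks and print ARRAY lines
mdadm --examine --scan
# List every signature wipefs can detect, without changing anything (-n = no-act)
wipefs -n /dev/sdc /dev/sdd
```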

      What I found so far

      There seem to be issues with ASRock's firmware; at least that is how I understood this answer: [RAID5] Missing superblocks after restart

      The output of blkid is this:

      root@files:~# blkid
      /dev/sda1: UUID="65c2b8e4-e0bc-41d4-92e4-33076b4196f6" TYPE="ext4" PARTUUID="dd31c21e-01"
      /dev/sda5: UUID="03dde5e4-cac1-43e1-a1c5-10b3a9d2d30a" TYPE="swap" PARTUUID="dd31c21e-05"
      /dev/sdb1: UUID="54e0c731-7fb2-43d4-a41a-da6bdc1752bf" TYPE="ext4" PARTUUID="22560a85-2147-45cf-85e3-c14f3deaed9c"
      /dev/sdc: PTUUID="2d2393b9-19b2-47d4-bc70-94434b5c7b5b" PTTYPE="gpt"
      /dev/sdd: PTUUID="c9e13f03-1aac-483f-97cb-f4f3139b0830" PTTYPE="gpt"

      ... with /dev/sdc and /dev/sdd being the disks in question. So no additional information here.

      The output of fdisk -l does not help that much:

      root@files:~# fdisk -l /dev/sdc /dev/sdd

      Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 2D2393B9-19B2-47D4-BC70-94434B5C7B5B

      Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: C9E13F03-1AAC-483F-97CB-F4F3139B0830

      The output of gdisk -l adds no new information either (the output for /dev/sdd is identical):

      root@files:~# gdisk -l /dev/sdc
      GPT fdisk (gdisk) version 0.8.10

      Partition table scan:
      MBR: protective
      BSD: not present
      APM: not present
      GPT: present

      Found valid GPT with protective MBR; using GPT.
      Disk /dev/sdc: 7814037168 sectors, 3.6 TiB
      Logical sector size: 512 bytes
      Disk identifier (GUID): 2D2393B9-19B2-47D4-BC70-94434B5C7B5B
      Partition table holds up to 128 entries
      First usable sector is 34, last usable sector is 7814037134
      Partitions will be aligned on 2048-sector boundaries
      Total free space is 7814037101 sectors (3.6 TiB)

      Number Start (sector) End (sector) Size Code Name

      The output of mdadm --examine is strange as well (output is the same for /dev/sdd):

      root@files:~# mdadm --examine /dev/sdc
      MBR Magic : aa55
      Partition[0] : 4294967295 sectors at 1 (type ee)
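The missing superblock can also be checked at the byte level. If I recall the v1.2 format correctly, the md superblock sits 4096 bytes into the device and begins with the magic number 0xa92b4efc, stored little-endian (so the on-disk bytes would be fc 4e 2b a9). A sketch:

```shell
# Dump the first 4 bytes at offset 4096, where a v1.2 md superblock would start.
# An intact superblock should begin with: fc 4e 2b a9 (0xa92b4efc, little-endian).
dd if=/dev/sdc bs=4096 skip=1 count=1 2>/dev/null | od -A d -t x1 | head -n 1
```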

      What I have at hand

      I have some older mdadm --examine output for both drives.
      I have the mdadm.conf from the old machine.
      I have the old machine, but I did not find additional useful metadata - I probably did not look for the right stuff.

      How should I proceed to access the data on these disks?

      Thanks in advance and best regards
      ASROCK Q1900DC-ITX - 8 GB - 2x Seagate 4 TB - Samsung 500 GB system drive - OMV 3.0.86
    • I just want to add the output requested in the pinned thread Degraded or missing raid array questions


      root@files:~# cat /proc/mdstat
      Personalities : [raid1]
      unused devices: <none>
      root@files:~# blkid
      /dev/sda1: UUID="65c2b8e4-e0bc-41d4-92e4-33076b4196f6" TYPE="ext4" PARTUUID="dd31c21e-01"
      /dev/sda5: UUID="03dde5e4-cac1-43e1-a1c5-10b3a9d2d30a" TYPE="swap" PARTUUID="dd31c21e-05"
      /dev/sdb1: UUID="54e0c731-7fb2-43d4-a41a-da6bdc1752bf" TYPE="ext4" PARTUUID="22560a85-2147-45cf-85e3-c14f3deaed9c"
      /dev/sdc: PTUUID="2d2393b9-19b2-47d4-bc70-94434b5c7b5b" PTTYPE="gpt"
      /dev/sdd: PTUUID="c9e13f03-1aac-483f-97cb-f4f3139b0830" PTTYPE="gpt"
      root@files:~# fdisk -l | grep "Disk "
      Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
      Disk identifier: 0xdd31c21e
      Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk identifier: AA844ED8-CC51-49F4-9CC5-733445AFF1B9
      Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk identifier: 2D2393B9-19B2-47D4-BC70-94434B5C7B5B
      Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
      Disk identifier: C9E13F03-1AAC-483F-97CB-F4F3139B0830
      root@files:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      #DEVICE /dev/sdc /dev/sdd
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      #ARRAY /dev/md/RAIDSTORAGE1 metadata=1.2 name=storage:RAIDSTORAGE1 UUID=02d8ddd3:6a5974c8:ffb63f3e:5748775d
      #ARRAY /dev/md/RAIDSTORAGE1 metadata=1.2 name=files:RAIDSTORAGE1 UUID=02d8ddd3:6a5974c8:ffb63f3e:5748775d
      #ARRAY /dev/md0 devices=/dev/sdc,/dev/sdd
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR me@local
      MAILFROM root
      root@files:~# mdadm --detail --scan --verbose
      root@files:~#
      I hope this gives additional information.

      Best regards
    • I have read a lot about mdadm's behavior when recovering/restoring a RAID5 array via --create.

      I was curious if this is working as well for raid 1 arrays. So I created a similar setup in a VM and did some tests ...
      It seems to me that this is working for raid 1 as well.
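The VM experiment could roughly have looked like this (a sketch, assuming root and throwaway loop devices; the file names and /dev/md9 are placeholders, not the real disks):

```shell
# Create two small backing files and attach them as loop devices
truncate -s 512M /tmp/d0.img /tmp/d1.img
loop0=$(losetup --find --show /tmp/d0.img)
loop1=$(losetup --find --show /tmp/d1.img)

# Build a RAID1, put a file system on it, then destroy the metadata
mdadm --create /dev/md9 --run --level=1 --raid-devices=2 "$loop0" "$loop1"
mkfs.ext4 -q /dev/md9
mdadm --stop /dev/md9
mdadm --zero-superblock "$loop0" "$loop1"   # simulate the lost superblocks

# Re-create with identical parameters; the payload should be untouched
mdadm --create /dev/md9 --run --level=1 --raid-devices=2 "$loop0" "$loop1"
fsck -n /dev/md9                            # read-only check, changes nothing
```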

      So I gave it a try on one of my disks.
      I did the steps shown below.

      dd should not be necessary ... wrote:

      dd if=/dev/zero of=/dev/sdX bs=512 count=1
      sgdisk --zap-all /dev/sdX

      shutdown -r 0

      I assume this was not necessary ... wrote:

      added the line from the old machine's mdadm.conf to the current machine's mdadm.conf
      changed the parts that needed to be changed

      #original string:
      ARRAY /dev/md/RAIDSTORAGE1 metadata=1.2 name=storage:RAIDSTORAGE1 UUID=02d8ddd3:6a5974c8:ffb63f3e:5748775d

      #adapted string:
      ARRAY /dev/md0 level=raid1 num-devices=1 devices=/dev/sdc

      shutdown -r 0

      crossing fingers on this one ... wrote:

      mdadm --create /dev/md0 --raid-devices=2 --level=1 /dev/sdc missing
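As I understand it, re-creating over an existing member only leaves the payload intact if the new superblock uses the same metadata version and data offset as the old one, so checking read-only before trusting the result seemed prudent. A sketch of such a check (the mount point is a placeholder):

```shell
# Compare metadata version and data offset against the old --examine notes
mdadm --examine /dev/sdc | grep -Ei 'version|data offset'
# Read-only file-system check; -n answers "no" to all repairs
fsck -n /dev/md0
# Mount read-only first, before writing anything
mount -o ro /dev/md0 /mnt
```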

      make the result accessible ... wrote:

      mkdir /srv/dev-disk-by-id-ata-ST4000DM000-1F2168_Z303RQLY
      mount /dev/md0 /srv/dev-disk-by-id-ata-ST4000DM000-1F2168_Z303RQLY/

      I could verify the file system is there.
      After downloading a single file to my local machine (laptop), all seems fine.

      But how should I proceed now?

      I would like to avoid this RAID1 in the future. When I set it up some years ago, my assumptions were wrong ...
      Now it has caused trouble that I would not have had without it.

      Thanks for any comments and recommendations.

      Best regards
    • If you want a RAID1 array, have you considered a ZFS mirror (the ZFS equivalent of RAID1)? You'd get bitrot protection, self-healing files, and a file system that takes control of your disks and integrates logical volume management.

      Outside of a ZFS mirror, I'd pass on RAID1. You'd be far better off splitting your disks: one for data and the other as a full backup.
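For reference, creating such a mirror is a one-liner (a sketch only; the pool name "tank" and the by-id paths are placeholders, and ashift=12 matches the 4096-byte physical sectors these disks report):

```shell
# Create a two-way ZFS mirror; always use stable /dev/disk/by-id paths
zpool create -o ashift=12 tank mirror \
  /dev/disk/by-id/ata-EXAMPLE-DISK-1 \
  /dev/disk/by-id/ata-EXAMPLE-DISK-2
# Show pool health; both disks should be listed ONLINE under the mirror
zpool status tank
```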

      Video Guides :!: New User Guide :!: Docker Guides :!: Pi-hole in Docker
      Good backup takes the "drama" out of computing.
      Primary: OMV 3.0.99, ThinkServer TS140, 12GB ECC, 32GB USB boot, 4TB+4TB zmirror, 3TB client backup.
      Backup: OMV 4.1.13, Intel Server SC5650HCBRP, 6GB ECC, 16GB USB boot, UnionFS+SNAPRAID
      2nd Backup: OMV 4.1.9, Acer RC-111, 4GB, 32GB USB boot, 3TB+3TB zmirror, 4TB Rsync'ed disk
    • Hello flmaxey,

      thanks for your reply.

      I have read about ZFS and mirroring; protection against bitrot is nice.
      I have not decided yet how to proceed.

      To document the whole story, the next steps are described below.

      Next steps in my story

      I cleaned the second disk:

      ... using this command: wrote:

      dd if=/dev/zero | dd of=/dev/sdd

      After creating a new file system, I copied all content from the degraded array to sdd.
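For what it's worth, zeroing the entire 4 TB disk through an unbuffered dd pipe is very slow; wiping only the metadata areas would likely have been enough. A sketch of faster alternatives (destructive; /dev/sdd as in the step above):

```shell
# Erase all file-system/RAID/partition-table signatures wipefs knows about
wipefs -a /dev/sdd
# Or: destroy the GPT and the protective MBR
sgdisk --zap-all /dev/sdd
# Or: remove only the md superblock, if one is present
mdadm --zero-superblock /dev/sdd
```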

      Best regards