Hello gents,
to cut down on power consumption, I decided to move a two-disk RAID1 array to another machine that is pretty power efficient.
There was a good review about this setup: Technikaffe: ASROCK Q1900 & openmediavault
Some history
The array consists of two Seagate 4 TB disks. It was initially created on OMV 2.
Later I had to re-install the machine with OMV 3, and the array was recognized after running:
mdadm --assemble --scan
What happened now
Based on the good experience with that last "move", I simply shut down the old machine.
After that I removed the disks and put them into the new machine.
But the new machine does not find the array:
root@files:~# mdadm --assemble --scan
mdadm: No arrays found in config file or automatically
So I tried forcing the devices explicitly:
root@files:~# mdadm --assemble /dev/md0 /dev/sdc /dev/sdd
mdadm: Cannot assemble mbr metadata on /dev/sdc
mdadm: /dev/sdc has no superblock - assembly aborted
What I found so far
There seem to be issues with ASRock's firmware. At least that is how I understood this answer: [RAID5] Missing superblocks after restart
The output of blkid is this:
root@files:~# blkid
/dev/sda1: UUID="65c2b8e4-e0bc-41d4-92e4-33076b4196f6" TYPE="ext4" PARTUUID="dd31c21e-01"
/dev/sda5: UUID="03dde5e4-cac1-43e1-a1c5-10b3a9d2d30a" TYPE="swap" PARTUUID="dd31c21e-05"
/dev/sdb1: UUID="54e0c731-7fb2-43d4-a41a-da6bdc1752bf" TYPE="ext4" PARTUUID="22560a85-2147-45cf-85e3-c14f3deaed9c"
/dev/sdc: PTUUID="2d2393b9-19b2-47d4-bc70-94434b5c7b5b" PTTYPE="gpt"
/dev/sdd: PTUUID="c9e13f03-1aac-483f-97cb-f4f3139b0830" PTTYPE="gpt"
... with /dev/sdc and /dev/sdd being the disks in question. So no additional information here.
The output of fdisk -l does not help much either:
root@files:~# fdisk -l /dev/sdc /dev/sdd
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 2D2393B9-19B2-47D4-BC70-94434B5C7B5B
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C9E13F03-1AAC-483F-97CB-F4F3139B0830
The output of gdisk -l does not reveal anything new either (the output for /dev/sdd looks the same):
root@files:~# gdisk -l /dev/sdc
GPT fdisk (gdisk) version 0.8.10
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdc: 7814037168 sectors, 3.6 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): 2D2393B9-19B2-47D4-BC70-94434B5C7B5B
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 7814037134
Partitions will be aligned on 2048-sector boundaries
Total free space is 7814037101 sectors (3.6 TiB)
Number Start (sector) End (sector) Size Code Name
(no partition entries follow - the table is empty)
The output of mdadm --examine is strange as well (output is the same for /dev/sdd):
root@files:~# mdadm --examine /dev/sdc
/dev/sdc:
MBR Magic : aa55
Partition[0] : 4294967295 sectors at 1 (type ee)
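As far as I understand, that "MBR Magic" just means mdadm found the protective MBR of the GPT (type ee) instead of an md superblock. To check whether a superblock is still physically on the disk, one could look for the md magic bytes directly: for metadata version 1.2 the superblock starts at byte offset 4096 and begins with the magic 0xa92b4efc, which is stored little-endian on disk. A probe might look roughly like this (assuming my array used 1.2 metadata, which I have not confirmed):

```shell
# Probe for a v1.2 md superblock: it sits 4096 bytes into the device and
# begins with the magic 0xa92b4efc, stored little-endian (fc 4e 2b a9).
DEV=/dev/sdc
dd if="$DEV" bs=4096 skip=1 count=1 2>/dev/null | od -A d -t x1 | head -n 1
# An intact superblock would make this line start with: fc 4e 2b a9
```

For 0.90 metadata the superblock would instead sit near the end of the device, so not finding anything at offset 4096 would not be conclusive on its own.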
What I have at hand
I have some older mdadm --examine output for both drives.
I have the mdadm.conf from the old machine.
I still have the old machine, but I did not find any additional useful metadata on it - I probably did not look for the right things.
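From what I have read so far, if the superblocks really are gone, the saved --examine output could be used as a last resort to recreate the array with identical parameters: for RAID1, --assume-clean skips the initial resync, so the data area itself would not be rewritten. I have not run this and would only do so after confirmation; it is a sketch with the metadata version and device order assumed to match my old --examine output:

```shell
# LAST RESORT (sketch, not yet run): recreate the superblocks with the exact
# parameters from the saved --examine output. --assume-clean prevents a
# resync, so on RAID1 the data area stays untouched if the parameters match.
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --metadata=1.2 --assume-clean /dev/sdc /dev/sdd

# Then verify read-only before anything else:
mount -o ro /dev/md0 /mnt
```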
How should I proceed to get access to the data on these disks?
Thanks in advance and best regards