Dear community,
I have recently started building a new homeserver. After setting up the OS drive, I started moving my data drives from the old system to the new one. Since they were picked up instantly by the new NAS, I also moved both drives that formed a RAID 1 (md0).
In the OMV documentation, I read that arrays created in any other Linux distro should be recognized immediately by the server.
To my surprise, the RAID wasn't visible. More interestingly, the drives wouldn't get picked up by the BIOS, either. They don't even seem to spin up.
I put both drives back in the old NAS. The old OMV build was looking for them, but they wouldn't start up there either.
Here's some terminal output from the old server.
In that system, I still have my old OS drive (sdb) and an empty 3 TB drive (sda):
cat /proc/mdstat:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
blkid:
/dev/sda1: UUID="c1d9ddb0-90f8-4b80-ace9-8a5ceab3ea08" TYPE="ext4" PARTUUID="fd2f0024-4549-4e2b-8a59-a0cf5ebff479"
/dev/sdb1: UUID="7bbb3506-3861-4c1e-98c6-5ac867281e0a" TYPE="ext4" PARTUUID="8a4f28e5-01"
/dev/sdb5: UUID="dc53b66a-35c9-4418-9e75-45f89f4eef5f" TYPE="swap" PARTUUID="8a4f28e5-05"
fdisk -l | grep "Disk ":
Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk identifier: 7C0C81D3-5A4E-4D8C-B6F8-05BD7A29A851
Disk /dev/sdb: 223.6 GiB, 240065183744 bytes, 468877312 sectors
Disk identifier: 0x8a4f28e5
cat /etc/mdadm/mdadm.conf:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
# Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
# To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
# used if no RAID devices are configured.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 name=openmediavault:10er UUID=daaf87b7:6ca48789:2d36b4c1:12ba69d3
Since I already removed some of the old drives, the sda/sdb device ordering is no longer the same as before...
But would that be enough for the drives not to be recognized anymore?
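From what I understand, mdadm shouldn't care about device letters at all: it matches array members by the UUID stored in each member's superblock, which is the same UUID listed in the ARRAY line of my mdadm.conf above. As a sanity check, here is a small sketch (just string handling, the config line is copied from my mdadm.conf) of the UUID I would expect `mdadm --examine` to report on the member partitions once the drives are visible again:

```shell
# mdadm assembles arrays by the UUID in each member's superblock,
# not by /dev/sdX ordering, so reshuffled device letters alone
# should be harmless.
# Extract the expected array UUID from the ARRAY line (copied from
# /etc/mdadm/mdadm.conf) to compare against `mdadm --examine` output.
array_line='ARRAY /dev/md0 metadata=1.2 name=openmediavault:10er UUID=daaf87b7:6ca48789:2d36b4c1:12ba69d3'
uuid=$(printf '%s\n' "$array_line" | sed -n 's/.*UUID=\([0-9a-f:]*\).*/\1/p')
echo "expected array UUID: $uuid"
```

So if the drives were detected at all, I'd expect `mdadm --examine /dev/sdX1` on each member to show that same UUID, and assembly to work regardless of ordering.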
I did not make a backup of the 10 TB RAID before moving to the new system, as I thought it would get picked up just like the other drives.
Most notably, the drives don't seem to power on at all, so I really think there must be a different problem.
Both drives are 10 TB WD DC HC510s. I really hope I only made some kind of stupid mistake here.
Please let me know if you have some ideas.
Best,
-h