RAID doesn't start up anymore


    • RAID doesn't start up anymore

      Dear community,

      I recently started building a new home server. After setting up the OS drive, I began moving my data drives from the old system to the new one. When I saw that they were picked up instantly by the new NAS, I also moved the two drives that formed a RAID 1 (md0).

      In the OMV documentation, I read that arrays created in any other Linux distro should be recognized immediately by the server.
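
      As far as I understand, that recognition just relies on mdadm finding the MD superblocks on the member disks, so in theory it can also be triggered by hand. This is only a rough sketch of what I would have expected to run on the new server (it obviously can't work while the kernel doesn't see the disks at all):

      Source Code

      cat /proc/mdstat              # which arrays the kernel has assembled
      mdadm --assemble --scan -v    # scan all partitions for MD superblocks and assemble
      mdadm --detail /dev/md0       # inspect the array once it exists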


      To my surprise, the RAID wasn't visible. More interestingly, the drives wouldn't get picked up by the BIOS, either. They don't even seem to spin up.
      I put both drives back in the old NAS. The old OMV build was looking for them, but they weren't starting up either.

      Here's some terminal output from the old server.
      At this point it only has my old OS drive (sdb) and an empty 3TB drive (sda) attached.


      cat /proc/mdstat:

      Source Code

      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      unused devices: <none>
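
      The "unused devices: <none>" line just confirms that no MD arrays are assembled right now. If the member disks were visible to the kernel, the next step would normally be to read their superblocks directly, something along these lines (untested sketch, /dev/sdX is a placeholder):

      Source Code

      lsblk -o NAME,SIZE,TYPE,FSTYPE    # every block device the kernel currently sees
      mdadm --examine /dev/sdX          # read the MD superblock of a suspected RAID member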




      blkid:

      Source Code

      /dev/sda1: UUID="c1d9ddb0-90f8-4b80-ace9-8a5ceab3ea08" TYPE="ext4" PARTUUID="fd2f0024-4549-4e2b-8a59-a0cf5ebff479"
      /dev/sdb1: UUID="7bbb3506-3861-4c1e-98c6-5ac867281e0a" TYPE="ext4" PARTUUID="8a4f28e5-01"
      /dev/sdb5: UUID="dc53b66a-35c9-4418-9e75-45f89f4eef5f" TYPE="swap" PARTUUID="8a4f28e5-05"

      fdisk -l | grep "Disk "


      Source Code

      Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Disk identifier: 7C0C81D3-5A4E-4D8C-B6F8-05BD7A29A851
      Disk /dev/sdb: 223.6 GiB, 240065183744 bytes, 468877312 sectors
      Disk identifier: 0x8a4f28e5


      cat /etc/mdadm/mdadm.conf


      Source Code

      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=openmediavault:10er UUID=daaf87b7:6ca48789:2d36b4c1:12ba69d3
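
      The ARRAY line identifies md0 purely by its metadata UUID (daaf87b7:...), not by device paths. So if the member disks ever show up again, something like this should tell whether their superblocks still carry that UUID (rough sketch, not something I could test right now):

      Source Code

      # list every MD superblock mdadm can find and compare the UUID with mdadm.conf
      mdadm --examine --scan

      # once the members are visible, assembly by UUID should work
      mdadm --assemble /dev/md0 --uuid=daaf87b7:6ca48789:2d36b4c1:12ba69d3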




      Since I already removed some of the old drives, the sda, sdb, etc. ordering is not the same anymore...
      But would that be enough for the drives not to be recognized anymore?
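
      To rule that out, I assume the shifting sdX letters can be mapped back to the physical disks by comparing serial numbers, e.g. (just a sketch; /dev/sdX is a placeholder and smartctl comes from the smartmontools package):

      Source Code

      ls -l /dev/disk/by-id/    # persistent identifiers next to the kernel names
      smartctl -i /dev/sdX      # model and serial of a specific drive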


      I did not make a backup of the 10 TB RAID before moving to the new system, as I thought it would get picked up just like the other drives.

      Most notable is that the drives don't seem to power on at all, so I really think there must be a different problem.
      Both drives are 10TB WD DC HC510. I really hope that I only made some kind of stupid mistake here.

      Please let me know if you have some ideas.

      Best,
      -h
    • hauschka wrote:

      Most notably is that the drives don't seem to turn on at all, so I really think that there must be a different problem.

      For the drives to be recognised by OMV or any other OS, they first have to be present in the BIOS. Assuming the drives haven't simply died, start with the basics: cables, power connectors and SATA ports. Plug just one drive in to begin with and check whether the BIOS sees it.
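
      Once a drive does show up in the BIOS, the OS side can be checked with something like this (generic sketch, nothing OMV-specific):

      Source Code

      dmesg | grep -iE "ata|sd[a-z]"       # kernel log entries for drive detection
      lsblk -o NAME,SIZE,MODEL,SERIAL      # block devices the OS currently sees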
      Raid is not a backup! Would you go skydiving without a parachute?