RAID doesn't start up anymore

  • Dear community,


    I have recently started building a new home server. After setting up the OS drive, I started moving my data drives from the old system to the new one. When I saw that they got picked up instantly by the new NAS, I also moved both drives that formed a RAID 1 (md0).


    In the OMV documentation, I read that arrays created in any other Linux distro should be recognized immediately by the server.
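
    For reference, my understanding is that this auto-recognition works by mdadm scanning the member disks for RAID superblocks. If only the auto-assembly had failed, I would have expected something like the following to bring the array back up manually (assuming the member disks show up as block devices at all):

    Code
    # scan all block devices for md superblocks and assemble any arrays found
    mdadm --assemble --scan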



    To my surprise, the RAID wasn't visible. More worryingly, the drives aren't picked up by the BIOS either; they don't even seem to spin up.
    I put both drives back in the old NAS. The old OMV installation was still looking for them, but they wouldn't start up there either.


    Here's some terminal output from the old server.
    That system still has my old OS drive (sdb) and an empty 3 TB drive (sda).



    cat /proc/mdstat:

    Code
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]                                                                                                                              
    unused devices: <none>
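
    If the disks ever show up as block devices again, I assume the first thing to check would be whether their md superblocks are still intact, along these lines (sdX/sdY standing in for whatever names the RAID members get):

    Code
    # print the md superblock of each suspected RAID member
    mdadm --examine /dev/sdX
    mdadm --examine /dev/sdY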




    blkid:

    Code
    /dev/sda1: UUID="c1d9ddb0-90f8-4b80-ace9-8a5ceab3ea08" TYPE="ext4" PARTUUID="fd2f0024-4549-4e2b-8a59-a0cf5ebff479"                                                                                                 
    /dev/sdb1: UUID="7bbb3506-3861-4c1e-98c6-5ac867281e0a" TYPE="ext4" PARTUUID="8a4f28e5-01"                                                                                                                          
    /dev/sdb5: UUID="dc53b66a-35c9-4418-9e75-45f89f4eef5f" TYPE="swap" PARTUUID="8a4f28e5-05"
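
    As far as I know, partitions that belong to an md array are normally reported by blkid with TYPE="linux_raid_member", so with the disks missing there is of course nothing like that here. A quick check once they are visible again (just a sketch):

    Code
    # RAID member partitions are normally tagged as linux_raid_member
    blkid | grep linux_raid_member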


    fdisk -l | grep "Disk "



    Code
    Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors                                                                                                                                                    
    Disk identifier: 7C0C81D3-5A4E-4D8C-B6F8-05BD7A29A851                                                                                                                                                              
    Disk /dev/sdb: 223.6 GiB, 240065183744 bytes, 468877312 sectors                                                                                                                                                    
    Disk identifier: 0x8a4f28e5

    cat /etc/mdadm/mdadm.conf






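    For what it's worth, my understanding is that mdadm.conf normally carries an ARRAY line per array, and that it can be regenerated from a running array with something like this (Debian/OMV-style paths assumed on my part):

    Code
    # append the definitions of all currently assembled arrays to mdadm.conf
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    # rebuild the initramfs so the array is known at boot
    update-initramfs -u
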
    Since I have already removed some of the old drives, the sda, sdb, etc. ordering is not the same anymore...
    But would that alone be enough for the drives not to be recognized anymore?
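
    From what I have read, mdadm identifies its members by the UUIDs stored in their superblocks rather than by the sda/sdb names, so the reordering alone shouldn't matter. To rule out mixing the drives up physically, I was going to identify them by model and serial number instead, something like:

    Code
    # list whole drives by model/serial, independent of the sda/sdb ordering
    ls -l /dev/disk/by-id/ | grep -v part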



    I did not make a backup of the 10 TB RAID before moving to the new system, as I thought it would get picked up just like the other drives.


    Most notably, the drives don't seem to turn on at all, so I really think there must be a different problem.
    Both drives are 10 TB WD DC HC510s. I really hope that I only made some kind of stupid mistake here.


    Please let me know if you have some ideas.


    Best,
    -h

    • Official Post

    Most notably, the drives don't seem to turn on at all, so I really think there must be a different problem.

    For the drives to be recognised by OMV or any other OS, they have to be present in the BIOS. If the drives haven't simply died, start with the cables, power connections and SATA ports: plug just one drive in to begin with and check the BIOS.
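
    Once a drive does show up in the BIOS you can double-check from Linux that it is actually being detected, along these lines (the device name is only an example, and smartmontools needs to be installed):

    Code
    # kernel messages for detected SATA/ATA devices
    dmesg | grep -i -E 'ata[0-9]|sd[a-z]'
    # drive identity and overall health (smartmontools)
    smartctl -i /dev/sda
    smartctl -H /dev/sda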

    Raid is not a backup! Would you go skydiving without a parachute?


    OMV 6x amd64 running on an HP N54L Microserver
