Lost RAID 1 when moved to a new system

  • Hi.


    When I decided to move an existing RAID 1 array consisting of two disks to a new system, I checked online and read that this would not be a problem: it would be auto-detected. Well, the old array is not showing up in the web GUI under RAID Management. It was created under OMV 3.x on different hardware, as a software RAID built with OMV. It worked like a charm until the motherboard was killed by a power surge, so I had to move the disks to another OMV installation.


    The two moved disks show up under Physical Disks as /dev/sdc and /dev/sdd (which is a bit strange, because the already existing RAID array in the new system, also two disks in RAID 1, used to be /dev/sdc and /dev/sdd. I had to install a PCI Express SATA card in this new system for the expansion; I do not know if that is relevant, I just wanted to mention it). The existing array (md0) is working and everything is in order.


    Anyway, I have been googling some forums, and my guess is that under this new hardware the UUID is wrong.


    Could someone guide me through creating an md1 from these two disks? I want to keep the data that is on the old array.



    cat /proc/mdstat:

    Code
    root@takaguwa:/# cat /proc/mdstat 
    Personalities : [raid1] 
    md0 : active raid1 sdf[0] sdg[1]
          5860391488 blocks super 1.2 [2/2] [UU]
          bitmap: 0/44 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>


    mdadm --detail --scan --verbose: (this new system is called takaguwa)

    Code
    root@takaguwa:/# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=takaguwa:0 UUID=ee467228:89fa28ca:d461e0cb:ff61ef8c
       devices=/dev/sdf,/dev/sdg


    mdadm --assemble --scan --verbose:


    mdadm --examine /dev/sdc: (the old system was called hattori)


    mdadm --examine /dev/sdd: (the old system was called hattori)

    blkid:


    fdisk -l: (it comes with a warning saying "Partition table entries are not in order")


    So, anyone?


    Is the problem due to the new PCI Express SATA card, and the fact that the devices have swapped places (/dev/sdf and /dev/sdg are now the existing md0 array, which previously was /dev/sdc and /dev/sdd)? Maybe I should just swap the cables in the new installation and it will somehow work?


    On the old installation I think the RAID 1 array was created from /dev/sdc and /dev/sdd.
    I do not know the order of the disks in the old array.
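
    (For reference: the md superblock on each member disk stores the array name, the array UUID and each device's role, which is what auto-assembly keys on. A quick way to check both disks, assuming they really are /dev/sdc and /dev/sdd, might be:)

    ```shell
    # Print the identifying fields from each member disk's md superblock.
    # "Array UUID" must match on both disks, and "Device Role" records the
    # slot each disk held in the original mirror.
    mdadm --examine /dev/sdc | grep -E 'Name|Array UUID|Device Role'
    mdadm --examine /dev/sdd | grep -E 'Name|Array UUID|Device Role'
    ```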

  • Hi ryusaku,


    I might not be much help here since I have not even built my system yet, but I am sniffing around here and there to gain some knowledge before I decide to build one, so your case is a perfect learning exercise for me ;). Anyway, what if you tried adding the line:


    ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 name=takaguwa:1 UUID=ad30ac55:b2c3c863:4d6cd752:5c0b7a2f


    into /etc/mdadm/mdadm.conf and try to re-assemble again?
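
    On Debian-based systems such as OMV the initramfs keeps its own copy of mdadm.conf, so after editing the file it is probably worth rebuilding it as well. Roughly (the ARRAY line shown is the suggested one above; the real values should come from mdadm --examine on the member disks):

    ```shell
    # Append the ARRAY line for the moved array, then refresh the
    # initramfs copy of mdadm.conf so assembly also works at boot.
    echo 'ARRAY /dev/md1 level=raid1 num-devices=2 metadata=1.2 name=takaguwa:1 UUID=ad30ac55:b2c3c863:4d6cd752:5c0b7a2f' >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    mdadm --assemble --scan --verbose
    ```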


    Regards,
    Adam

  • Hi ryusaku,


    I've done some testing, and a manual assemble of the array should work:


    mdadm --assemble /dev/md1 /dev/sdc /dev/sdd


    That should bring the array md1 online, but it still needs to be added to /etc/mdadm/mdadm.conf.


    list the arrays:


    mdadm --detail --scan


    and add the line relating to md1
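
    Put together, the whole sequence might look like this (assuming the member disks really are /dev/sdc and /dev/sdd):

    ```shell
    # Assemble the moved mirror under a new device name.
    mdadm --assemble /dev/md1 /dev/sdc /dev/sdd
    # Append its ARRAY line to mdadm.conf and refresh the initramfs
    # copy so the array also comes up after a reboot.
    mdadm --detail --scan | grep '/dev/md1' >> /etc/mdadm/mdadm.conf
    update-initramfs -u
    ```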


    Please share if that works for you.


    Regards,
    Adam

  • gad:


    Step 1
    Output from mdadm --assemble /dev/md1 /dev/sdc /dev/sdd was

    Code
    root@takaguwa:/# mdadm --assemble /dev/md1 /dev/sdc /dev/sdd
    mdadm: /dev/md1 has been started with 2 drives.


    When I checked under RAID Management it was listed as hattori:Raid /dev/md1 clean Mirror 3.64 TiB, with disks /dev/sdc and /dev/sdd.


    However, moving on to File Systems it was listed as
    /dev/md1 Data ext4 n/a n/a n/a No No Online


    Step 2
    Therefore I am thinking that adding the output from mdadm --detail --scan to mdadm.conf would be the next approach:

    Code
    root@takaguwa:/# mdadm --detail --scan
    ARRAY /dev/md0 metadata=1.2 name=takaguwa:0 UUID=ee467228:89fa28ca:d461e0cb:ff61ef8c
    ARRAY /dev/md1 metadata=1.2 name=hattori:Raid UUID=ad30ac55:b2c3c863:4d6cd752:5c0b7a2f

    So I added "ARRAY /dev/md1 metadata=1.2 name=hattori:Raid UUID=ad30ac55:b2c3c863:4d6cd752:5c0b7a2f" (minus the quotes) to /etc/mdadm/mdadm.conf, went to the GUI and rebooted.


    Step 3


    Under File Systems it is still listed as n/a, so apparently not accessible, although the data should be intact.


    I very much appreciate your suggestions; I might be a step closer. But I have read somewhere that the order of the disks matters? It made no difference whether I ran --assemble with /dev/sdc /dev/sdd or vice versa. I tried both under step 1 and checked File Systems before adding the line to mdadm.conf (which used the output from the /dev/sdc /dev/sdd order); either way it showed up as n/a.


    So, what is the next step? :)
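
    (For what it's worth, the order of devices given to --assemble should not matter: each disk's role is stored in its superblock. Since md1 is assembled and clean, the n/a in File Systems most likely just means the ext4 filesystem is not mounted yet. A manual sanity check, using an illustrative mount point, might look like:)

    ```shell
    # Confirm the array carries a recognisable filesystem.
    blkid /dev/md1
    # Mount it read-only at a temporary location and peek at the data.
    mkdir -p /mnt/md1-check
    mount -o ro /dev/md1 /mnt/md1-check
    ls /mnt/md1-check
    umount /mnt/md1-check
    ```

    If the data shows up there, mounting the filesystem from OMV's File Systems tab should be the remaining step.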
