RAID 1 disappears when swapping disks

  • Hello,


    I want to grow my RAID 1 from 2x2TB to 2x4TB. The new disks have arrived and a backup has been made. But as soon as I swap one of the disks, the entire RAID is gone in OMV. If I put the old disk back in, everything works normally again.



    Before swapping a disk:


    Code
    root@zangs-nas:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid1 sda[0] sdb[1]
          1953383512 blocks super 1.2 [2/2] [UU]
          bitmap: 0/15 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>



    One disk has been removed (the RAID has disappeared):



    Code
    root@zangs-nas:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sda[1](S)
          1953383512 blocks super 1.2
    
    
    unused devices: <none>


    How should I best proceed to prepare the RAID so that this works as it should: swap a disk, rebuild, and so on?


    Thanks in advance.


    Regards
    Alex
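
    For reference, the usual mdadm replacement cycle on a mirror looks roughly as follows; a minimal sketch, assuming the array is /dev/md127 and the outgoing disk is /dev/sda (device names will differ):

    Code
    # mark the outgoing member as failed, then drop it from the array
    mdadm --fail /dev/md127 /dev/sda
    mdadm --remove /dev/md127 /dev/sda
    # after physically swapping the disk, add the new one and watch the rebuild
    mdadm --add /dev/md127 /dev/sda
    cat /proc/mdstat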

  • And for completeness' sake, this is what it looks like when a new disk is installed:



    Code
    root@zangs-nas:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdc[1](S)
          1953383512 blocks super 1.2
    
    
    unused devices: <none>
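
    When an array shows up as inactive with an (S) member like this, the superblocks usually explain why it would not assemble; a diagnostic sketch, using the device names from above:

    Code
    # show what RAID metadata was found on each member (the brand-new
    # disk should report no superblock at all)
    mdadm --examine /dev/sdb /dev/sdc
    # show the array's own view of its state and members
    mdadm --detail /dev/md127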


  • Hello,



    If you have a backup, I would set the RAID up from scratch; the disks will be happier if they don't have to do a full sync twice.


    Unfortunately, that is not an option. The RAID also has to work as intended in a real emergency, and rebuilding the RAID is my way of preparing for that emergency.



    @geaves has described the procedure here; if you want to swap both disks, do the whole thing twice.

    I did that. Unfortunately, the disk does not show up in the recovery dialog.


    Current status:


    Code
    root@zangs-nas:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid1 sdb[1]
          1953383512 blocks super 1.2 [2/1] [_U]
          bitmap: 2/15 pages [8KB], 65536KB chunk
    
    
    unused devices: <none>





    Regards
    Alex

  • Hello,


    Think it over again: a sync puts an extreme load on the disks and does their lifespan no favors. You can also test a recovery in a virtual machine.

    Better stress for the disks now than grey hairs for me later. Sorry, but I want to run through the emergency scenario once, calmly, on the actual machine. The disks will only see very moderate use later anyway.
    Can you help me with the current state?


    Regards
    Alex
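
    (An aside to the VM suggestion above: the same rehearsal also works risk-free with loop devices; a sketch in which all file and device names are made up:)

    Code
    # create two small backing files and attach them to free loop devices
    truncate -s 1G /tmp/d0.img /tmp/d1.img
    losetup /dev/loop0 /tmp/d0.img
    losetup /dev/loop1 /tmp/d1.img
    # build a throwaway mirror and practice the fail/remove/add cycle on it
    mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1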

    • Official Post

    Did you wipe and format the new drive (format with the same filesystem)?

    And did you do both from the OMV GUI (just to make sure we understand exactly what you did)?
    Please also mention whether the new drives have been used before in another server or anywhere else.


    BTW: did you check whether your md127 is working with one disk (as part of your disaster simulation)?

  • And did you do both from the OMV GUI (just to make sure we understand exactly what you did)? Please also mention whether the new drives have been used before in another server or anywhere else.


    BTW: did you check whether your md127 is working with one disk (as part of your disaster simulation)?

    I did both from the OMV GUI. The new drive is brand new from the store. RAID md127 is working.

  • It makes no sense that the new drive does not show in the pop-up box when you select recover; it should show all drives.


    The only other option is to try from the CLI: mdadm --add /dev/md127 /dev/sda


    I had restarted the computer in the meantime, so the drive is now sdb. I still had to unmount it first. Now the RAID is recovering!


    Code
    root@zangs-nas:~# mdadm --add /dev/md127 /dev/sdb
    mdadm: added /dev/sdb
    Code
    root@zangs-nas:~# cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid1 sdb[2] sdc[1]
          1953383512 blocks super 1.2 [2/1] [_U]
          [>....................]  recovery =  0.5% (9851008/1953383512) finish=200.2min speed=161720K/sec
          bitmap: 7/15 pages [28KB], 65536KB chunk
    
    
    unused devices: <none>
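
    To follow the rebuild without retyping the command, a small convenience (assuming watch is available):

    Code
    # refresh the rebuild status every five seconds
    watch -n 5 cat /proc/mdstat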

    Thank you all so far! I'll get in touch when I'm done, or if I get stuck!


    Regards
    Alex

  • This morning the RAID was in the clean state. Following the instructions, I removed the second old disk and installed a new one. The RAID was completely gone. Fine, so I reinstalled the old one. The RAID is now inactive!


    Status:

    Code
    root@zangs-nas:~# cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdb[1](S)
          1953383512 blocks super 1.2
    
    
    unused devices: <none>


  • OK, I have now continued. I'm doing everything on the console now.


    Stop RAID:
    mdadm --stop /dev/md127


    Start the RAID with only the old disk:
    mdadm --assemble --run /dev/md127 /dev/sdb


    RAID is now active. Status:

    Code
    cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid1 sdb[1]
          1953383512 blocks super 1.2 [2/1] [_U]
          bitmap: 7/15 pages [28KB], 65536KB chunk
    
    
    unused devices: <none>


    Add the second disk back in:
    mdadm --add /dev/md127 /dev/sda


    Strangely, a full sync is now running again...!?

    Code
    cat /proc/mdstat
    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid1 sda[2] sdb[1]
          1953383512 blocks super 1.2 [2/1] [_U]
          [>....................]  recovery =  0.8% (16502208/1953383512) finish=188.8min speed=170937K/sec
          bitmap: 7/15 pages [28KB], 65536KB chunk
    
    
    unused devices: <none>
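
    The full sync is actually expected here: --add treats the disk as a brand-new member and rebuilds it from scratch. A quick catch-up of only the changed blocks, using the write-intent bitmap, is what --re-add does, and only for a member whose superblock still matches the array; a sketch:

    Code
    # re-attach a recently removed member; with a matching superblock and
    # write-intent bitmap, only blocks changed in the meantime are synced
    mdadm --re-add /dev/md127 /dev/sda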
  • No matter what I tried, the RAID does not work with one large and one small disk. This is probably because the RAID uses the whole disks directly rather than partitions.


    I have reset everything. The RAID is now back to its original state with the two small disks.


    Can someone help me adjust the RAID so that I can replace the disks with larger ones?



    What do you think:
    mdadm --build /dev/md127 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
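
    (A note on that command: mdadm --build creates a legacy array without superblocks, which is not what you want here; for a fresh partition-based mirror, --create is the usual choice. A sketch only, and it destroys the existing array metadata, so nothing without a verified backup:)

    Code
    # build a new RAID 1 on partitions instead of whole disks;
    # this wipes the old array metadata, so only with a backup in hand
    mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1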




    Current status:

    Code
    cat /proc/mdstat
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md127 : active raid1 sda[2] sdb[1]
          1953383512 blocks super 1.2 [2/2] [UU]
          bitmap: 0/15 pages [0KB], 65536KB chunk
    
    
    unused devices: <none>
    • Official Post

    This is probably because the RAID uses the whole disks directly rather than partitions.

    I believe the problem is caused by formatting the larger drive before attempting to add it to the array; the user is working in the GUI and trying to replace a drive that has already been formatted.


    My own experience tells me that that is wrong: a RAID disk fails, you remove the failed drive using the built-in BIOS or software, and you install the new drive. Having used ZFS, it's a similar procedure, but in both cases the new drive is not formatted prior to adding it to the RAID.


    Once both drives are swapped, the array will still report the wrong (old) size, so again in the GUI: File Systems -> Resize. Now you have a new, larger working array.
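
    For reference, the CLI equivalent of that resize would be roughly the following; a sketch assuming the filesystem sits directly on /dev/md127 and is ext4 (both assumptions):

    Code
    # let the mirror grow to use all space on the new, larger members
    mdadm --grow /dev/md127 --size=max
    # then grow the filesystem to fill the enlarged array (ext4 assumed)
    resize2fs /dev/md127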


    So to complete your upgrade you are going to use a combination of the GUI and the CLI. DO NOT REBOOT!!


    Remove a drive from the array using the GUI, let's say /dev/sda.
    Remove the drive from the computer.
    Install the new drive.
    Check the GUI to confirm the drive has been seen; it may display a new drive reference, e.g. /dev/sdc.
    Wipe the drive in the GUI.


    CLI:


    mdadm --stop /dev/md127
    mdadm --add /dev/md127 /dev/sdc (here I am assuming the new drive is /dev/sdc)
    cat /proc/mdstat


    Hopefully this will show the array being rebuilt with the new drive.


    If the above works, come back. DO NOT REBOOT, do not pass GO :) As you are doing this one drive at a time, you still have a working array if this first step does not work.

    • Official Post

    Does it matter that the motherboard has only two SATA ports?

    No, because your current two drives are connected to those, I take it?


    The formatting in the previous post has gone weird. I was going to try and sort it out, but I've given up; the formatting has gone wrong, but there should be enough there to get started.

  • Now I fail at the first command:


    umount /dev/md127


    mdadm --stop /dev/md127


    mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?
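
    The usual suspects behind that error are a filesystem that is still mounted or a service holding the device open; a quick checklist, as a sketch:

    Code
    # check whether anything still has the array mounted
    findmnt /dev/md127
    # list processes keeping the device busy
    fuser -vm /dev/md127
    # on OMV, sharing services (e.g. smbd) may need stopping first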


    RAID status:
