RAID 1 disappears when swapping disks

    • OMV 4.x
    • Resolved


    • RAID 1 disappears when swapping disks

      Hello,

      I would like to grow my RAID 1 from 2x2 TB to 2x4 TB. The new disks have arrived and a backup has been made. But as soon as I swap one of the disks, the entire RAID disappears in OMV. If I put the old disk back in, everything works normally again.


      Before swapping a disk:

      Source Code

      root@zangs-nas:~# cat /proc/mdstat
      Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid1 sda[0] sdb[1]
      1953383512 blocks super 1.2 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk
      unused devices: <none>

      Source Code

      root@zangs-nas:~# fdisk -l
      Disk /dev/sda: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0xba980230
      Device Boot Start End Sectors Size Id Type
      /dev/sda1 1 4294967295 4294967295 2T ee GPT
      Partition 1 does not start on physical sector boundary.
      Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0x022cba24
      Device Boot Start End Sectors Size Id Type
      /dev/sdb1 1 4294967295 4294967295 2T ee GPT
      Partition 1 does not start on physical sector boundary.
      Disk /dev/md127: 1,8 TiB, 2000264716288 bytes, 3906767024 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk /dev/sdc: 111,8 GiB, 120040980480 bytes, 234455040 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 33553920 bytes
      Disklabel type: gpt
      Disk identifier: 3792DC2C-864A-41F8-8642-731B956CF465
      Device Start End Sectors Size Type
      /dev/sdc1 65535 1048559 983025 480M EFI System
      /dev/sdc2 1048560 217969409 216920850 103,4G Linux filesystem
      /dev/sdc3 217969410 234418694 16449285 7,9G Linux swap

      Source Code

      root@zangs-nas:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md/Zangs metadata=1.2 name=Zangs-NAS:Zangs UUID=3ca82093:31bdaba6:e3fc2da7:8924c663
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR XXXX

      One disk has been removed (the RAID has disappeared):


      Source Code

      root@zangs-nas:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sda[1](S)
      1953383512 blocks super 1.2
      unused devices: <none>

      Source Code

      root@zangs-nas:~# fdisk -l
      Disk /dev/sda: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0x022cba24
      Device Boot Start End Sectors Size Id Type
      /dev/sda1 1 4294967295 4294967295 2T ee GPT
      Partition 1 does not start on physical sector boundary.
      Disk /dev/sdb: 111,8 GiB, 120040980480 bytes, 234455040 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 33553920 bytes
      Disklabel type: gpt
      Disk identifier: 3792DC2C-864A-41F8-8642-731B956CF465
      Device Start End Sectors Size Type
      /dev/sdb1 65535 1048559 983025 480M EFI System
      /dev/sdb2 1048560 217969409 216920850 103,4G Linux filesystem
      /dev/sdb3 217969410 234418694 16449285 7,9G Linux swap

      What is the best way to prepare the RAID so that this works as it should: swap a disk, recover, and so on?

      Many thanks in advance.

      Regards,
      Alex
    • And for the sake of completeness - this is what it looks like when a new disk is installed:


      Source Code

      root@zangs-nas:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sdc[1](S)
      1953383512 blocks super 1.2
      unused devices: <none>
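
      A hedged side note on the "inactive ... (S)" state above: when a mirror member is missing, mdadm may assemble the array but leave it inactive, with the remaining disk flagged as a spare (S). In that situation the array can usually be started degraded from the CLI; the device name below is taken from the output above and should be double-checked first:

      Source Code

      mdadm --stop /dev/md127
      mdadm --assemble --run /dev/md127 /dev/sdc   # /dev/sdc is the surviving member shown above
      # or, if the array is already partially assembled, simply force-start it:
      mdadm --run /dev/md127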

      Source Code

      root@zangs-nas:~# fdisk -l
      Disk /dev/sdb: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0x16f2a91f
      Disk /dev/sdc: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0x022cba24
      Device Boot Start End Sectors Size Id Type
      /dev/sdc1 1 4294967295 4294967295 2T ee GPT
      Partition 1 does not start on physical sector boundary.
      Disk /dev/sda: 111,8 GiB, 120040980480 bytes, 234455040 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 33553920 bytes
      Disklabel type: gpt
      Disk identifier: 3792DC2C-864A-41F8-8642-731B956CF465
      Device Start End Sectors Size Type
      /dev/sda1 65535 1048559 983025 480M EFI System
      /dev/sda2 1048560 217969409 216920850 103,4G Linux filesystem
      /dev/sda3 217969410 234418694 16449285 7,9G Linux swap
    • If you have a backup I would set up the RAID from scratch; the disks will be happier if they don't have to do a complete sync twice.
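
      A minimal CLI sketch of that approach, assuming the two new 4 TB disks are installed and show up as /dev/sda and /dev/sdb (the device names are an assumption, and this wipes everything on those disks - the data comes back from the backup afterwards):

      Source Code

      mdadm --stop /dev/md127                                   # stop the old array first
      mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sda /dev/sdb
      mdadm --detail --scan                                     # use this output to replace the old ARRAY line in /etc/mdadm/mdadm.conf
      update-initramfs -u

      In OMV the same thing can probably be done more comfortably from the web interface (wipe the disks, then create a new RAID 1 there).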
    • Hello,


      votdev wrote:

      If you have a backup I would set up the RAID from scratch; the disks will be happier if they don't have to do a complete sync twice.

      unfortunately that is not an option. The RAID also has to work as intended in a real emergency - recovering the RAID is, for me, preparation for exactly that emergency.


      macom wrote:

      @geaves described the procedure here; if you want to swap both disks, do the whole thing twice
      I did that. Unfortunately, the disk does not show up in the recovery dialog.
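
      (If the new disk had already been given a filesystem, that may be why the recovery dialog does not offer it - geaves comes back to this further down. A hedged sketch of wiping it first, with /dev/sdX standing in for whatever name the new disk actually has; this destroys everything on that disk:)

      Source Code

      wipefs -a /dev/sdX   # remove any existing filesystem/partition-table signatures from the NEW disk
      # the OMV web interface has an equivalent wipe function for physical disks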

      Current status:

      Source Code

      root@zangs-nas:~# cat /proc/mdstat
      Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid1 sdb[1]
      1953383512 blocks super 1.2 [2/1] [_U]
      bitmap: 2/15 pages [8KB], 65536KB chunk
      unused devices: <none>



      Source Code

      root@zangs-nas:~# fdisk -l
      Disk /dev/sda: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 026A0759-A0B8-4EE3-A458-11B59B68A113
      Device Start End Sectors Size Type
      /dev/sda1 2048 7814037134 7814035087 3,7T Linux filesystem
      Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0x022cba24
      Device Boot Start End Sectors Size Id Type
      /dev/sdb1 1 4294967295 4294967295 2T ee GPT
      Partition 1 does not start on physical sector boundary.
      Disk /dev/md127: 1,8 TiB, 2000264716288 bytes, 3906767024 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk /dev/sdc: 111,8 GiB, 120040980480 bytes, 234455040 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 33553920 bytes
      Disklabel type: gpt
      Disk identifier: 3792DC2C-864A-41F8-8642-731B956CF465
      Device Start End Sectors Size Type
      /dev/sdc1 65535 1048559 983025 480M EFI System
      /dev/sdc2 1048560 217969409 216920850 103,4G Linux filesystem
      /dev/sdc3 217969410 234418694 16449285 7,9G Linux swap

      Source Code

      root@zangs-nas:~# cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md/Zangs metadata=1.2 name=Zangs-NAS:Zangs UUID=3ca82093:31bdaba6:e3fc2da7:8924c663
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR xxx



      Regards,
      Alex


    • Think it over again - a sync puts extreme stress on the disks, and it does their lifespan no favours. You can also test a recovery in a virtual machine.
    • Hello,

      votdev wrote:

      Think it over again - a sync puts extreme stress on the disks, and it does their lifespan no favours. You can also test a recovery in a virtual machine.
      I'd rather stress the disks now than get grey hairs later. Sorry, but I want to go through the emergency scenario once, calmly, on the real machine. The disks will only see very moderate use later anyway.
      Can you help me with the current state?

      Regards,
      Alex


    • geaves wrote:

      Did you wipe and format the new drive (format with the same filesystem)
      And have you done both from the GUI of OMV (just to make sure we understand exactly what you did)?
      Please also mention if the new drives have been used before, in another server or whatever.

      BTW: did you check if your md127 with one disk is working (as part of your disaster simulation)?
    • macom wrote:

      And have you done both from the GUI of OMV (just to make sure we understand exactly what you did)? Please also mention if the new drives have been used before, in another server or whatever.

      BTW: did you check if your md127 with one disk is working (as part of your disaster simulation)?
      I have done both from the GUI of OMV. The new drive is brand new from the store. RAID md127 is working.

      Source Code

      Version : 1.2
      Creation Time : Sat Nov 11 17:45:46 2017
      Raid Level : raid1
      Array Size : 1953383512 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 1953383512 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
      Total Devices : 1
      Persistence : Superblock is persistent
      Intent Bitmap : Internal
      Update Time : Sat Jul 27 18:44:29 2019
      State : clean, degraded
      Active Devices : 1
      Working Devices : 1
      Failed Devices : 0
      Spare Devices : 0
      Name : Zangs-NAS:Zangs
      UUID : 3ca82093:31bdaba6:e3fc2da7:8924c663
      Events : 5071
      Number Major Minor RaidDevice State
      - 0 0 0 removed
      1 8 16 1 active sync /dev/sdb


    • geaves wrote:

      It makes no sense that the new drive does not show in the pop-up box when you select recover; it should show all drives.

      The only other option is to try from the cli, mdadm --add /dev/md127 /dev/sda

      In the meantime I had restarted the computer, so the new drive is now sdb. I still had to unmount first. Now the RAID is recovering!

      Source Code

      root@zangs-nas:~# mdadm --add /dev/md127 /dev/sdb
      mdadm: added /dev/sdb

      Source Code

      root@zangs-nas:~# cat /proc/mdstat
      Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid1 sdb[2] sdc[1]
      1953383512 blocks super 1.2 [2/1] [_U]
      [>....................] recovery = 0.5% (9851008/1953383512) finish=200.2min speed=161720K/sec
      bitmap: 7/15 pages [28KB], 65536KB chunk
      unused devices: <none>
      Thank you all so far! I'll get in touch when I'm done, or get stuck!

      Regards, Alex


    • This morning the RAID was in a clean state. Following the instructions, I removed the second old disk and installed a new one. The RAID was completely gone. Fine - I put the old disk back in. The RAID is now inactive!

      Status:

      Source Code

      root@zangs-nas:~# cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : inactive sdb[1](S)
      1953383512 blocks super 1.2
      unused devices: <none>


      Source Code

      root@zangs-nas:~# fdisk -l
      Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0x022cba24
      Device Boot Start End Sectors Size Id Type
      /dev/sdb1 1 4294967295 4294967295 2T ee GPT
      Partition 1 does not start on physical sector boundary.
      Disk /dev/sda: 3,7 TiB, 4000787030016 bytes, 7814037168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 026A0759-A0B8-4EE3-A458-11B59B68A113
      Device Start End Sectors Size Type
      /dev/sda1 2048 7814037134 7814035087 3,7T Linux filesystem
      Disk /dev/sdc: 111,8 GiB, 120040980480 bytes, 234455040 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 33553920 bytes
      Disklabel type: gpt
      Disk identifier: 3792DC2C-864A-41F8-8642-731B956CF465
      Device Start End Sectors Size Type
      /dev/sdc1 65535 1048559 983025 480M EFI System
      /dev/sdc2 1048560 217969409 216920850 103,4G Linux filesystem
      /dev/sdc3 217969410 234418694 16449285 7,9G Linux swap
    • OK, I have continued - I'm now doing everything on the console.

      Stop the RAID:
      mdadm --stop /dev/md127

      Start the RAID with only the old disk:
      mdadm --assemble --run /dev/md127 /dev/sdb

      The RAID is now active. Status:

      Source Code

      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid1 sdb[1]
      1953383512 blocks super 1.2 [2/1] [_U]
      bitmap: 7/15 pages [28KB], 65536KB chunk
      unused devices: <none>

      Add the second disk again:
      mdadm --add /dev/md127 /dev/sda

      Strangely, sync is now running again...!?

      Source Code

      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid1 sda[2] sdb[1]
      1953383512 blocks super 1.2 [2/1] [_U]
      [>....................] recovery = 0.8% (16502208/1953383512) finish=188.8min speed=170937K/sec
      bitmap: 7/15 pages [28KB], 65536KB chunk
      unused devices: <none>
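
      A hedged way to see why a full recovery starts again: compare the superblock on the disk that was just added (/dev/sda, per the output above) with the running array. A quick bitmap-based catch-up (mdadm --re-add) is only possible while the member still carries a valid superblock matching the array; once it has been wiped or no longer matches, a plain --add does a complete rebuild.

      Source Code

      mdadm --detail /dev/md127    # state and Events counter of the running array
      mdadm --examine /dev/sda     # superblock (if any) on the member that was just added back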
    • No matter what I tried, the RAID does not work with one large and one small disk. This is probably because the disks are used directly in the RAID rather than partitions.

      I have reset everything. The RAID is now back in its original state with the two small disks.

      Can someone help me adjust the RAID so that I can replace the disks with larger ones?


      What do you think:
      mdadm --build /dev/md127 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
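
      A hedged note on that command: --build assembles an array without writing md superblocks (it is meant for old, superblock-less setups), so it is probably not what you want here; a partition-based mirror would normally be created with --create. A rough sketch, assuming the new 4 TB disks show up as /dev/sda and /dev/sdb and may be wiped completely:

      Source Code

      # give each new disk a GPT label plus one full-size partition flagged for RAID
      parted -s /dev/sda mklabel gpt mkpart primary 1MiB 100% set 1 raid on
      parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100% set 1 raid on
      # build the mirror from the partitions instead of the raw disks
      mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1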



      Current status:

      Source Code

      cat /proc/mdstat
      Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
      md127 : active raid1 sda[2] sdb[1]
      1953383512 blocks super 1.2 [2/2] [UU]
      bitmap: 0/15 pages [0KB], 65536KB chunk
      unused devices: <none>

      Source Code

      fdisk -l
      Disk /dev/sda: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0xba980230
      Device Boot Start End Sectors Size Id Type
      /dev/sda1 1 4294967295 4294967295 2T ee GPT
      Partition 1 does not start on physical sector boundary.
      Disk /dev/sdb: 1,8 TiB, 2000398934016 bytes, 3907029168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: dos
      Disk identifier: 0x022cba24
      Device Boot Start End Sectors Size Id Type
      /dev/sdb1 1 4294967295 4294967295 2T ee GPT
      Partition 1 does not start on physical sector boundary.
      Disk /dev/md127: 1,8 TiB, 2000264716288 bytes, 3906767024 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk /dev/sdc: 111,8 GiB, 120040980480 bytes, 234455040 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 33553920 bytes
      Disklabel type: gpt
      Disk identifier: 3792DC2C-864A-41F8-8642-731B956CF465
      Device Start End Sectors Size Type
      /dev/sdc1 65535 1048559 983025 480M EFI System
      /dev/sdc2 1048560 217969409 216920850 103,4G Linux filesystem
      /dev/sdc3 217969410 234418694 16449285 7,9G Linux swap

      Source Code

      cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #
      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions
      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes
      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>
      # definitions of existing MD arrays
      ARRAY /dev/md/Zangs metadata=1.2 name=Zangs-NAS:Zangs UUID=3ca82093:31bdaba6:e3fc2da7:8924c663
      # instruct the monitoring daemon where to send mail alerts
      MAILADDR xx@xx


    • Jahresprogramm wrote:

      This is probably caused by the direct use of the disks and not the partitions in the RAID.
      I believe the problem is caused by formatting the larger drive before attempting to add it to the array. The user is using the GUI and is replacing a drive that has already been formatted.

      My own experience tells me that this is wrong: a RAID disk fails, you remove the failed drive using the built-in BIOS or software, and install the new drive. Having used ZFS, it's a similar procedure there, but in both cases the new drive is not formatted prior to adding it to the RAID.

      Let's just say you've still got the wrong size, so again in the GUI: file system -> resize. Now you have a new, larger working array.

      So to complete your upgrade you are going to use a combination of the GUI and the CLI. DO NOT REBOOT!!

      Remove a drive from the array using the GUI, let's say /dev/sda.
      Remove the drive from the computer.
      Install the new drive.
      Check the GUI to confirm the drive has been seen; it may show up under a new drive reference, e.g. /dev/sdc.
      Wipe the drive in the GUI.

      CLI:

      mdadm --stop /dev/md127
      mdadm --add /dev/md127 /dev/sdc   (here I am assuming the new drive is /dev/sdc)
      cat /proc/mdstat

      Hopefully this will show the array being rebuilt with the new drive.

      If the above works, come back - DO NOT REBOOT, do not pass GO :) As you are doing this one drive at a time, you still have a working array if this first step does not work.
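
      Once both drives have been replaced and the resync has finished, the array itself still has to be told to use the extra space before the filesystem resize mentioned above makes sense. A hedged sketch of that last step (assuming the data filesystem on md127 is ext4):

      Source Code

      mdadm --grow /dev/md127 --size=max   # let the mirror use the full capacity of both 4 TB members
      # if mdadm complains about the bitmap, remove it first (--grow --bitmap=none) and add it back afterwards (--grow --bitmap=internal)
      cat /proc/mdstat                     # wait for any resync of the newly added space to finish
      resize2fs /dev/md127                 # grow the ext4 filesystem, or use File Systems -> Resize in the GUI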


    • Jahresprogramm wrote:

      Does it matter that the motherboard has only two SATA ports?
      No, because your current two drives are connected to those, I take it?

      The formatting on the previous post has gone weird; I was going to try and sort it out, but gave up. The formatting has gone wrong, but there should be enough there to get started.


    • Now I fail at the first command:

      umount /dev/md127

      mdadm --stop /dev/md127

      mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?

      RAID status:

      Source Code

      Version : 1.2
      Creation Time : Sat Nov 11 17:45:46 2017
      Raid Level : raid1
      Array Size : 1953383512 (1862.89 GiB 2000.26 GB)
      Used Dev Size : 1953383512 (1862.89 GiB 2000.26 GB)
      Raid Devices : 2
      Total Devices : 1
      Persistence : Superblock is persistent
      Intent Bitmap : Internal
      Update Time : Tue Jul 30 20:16:52 2019
      State : clean, degraded
      Active Devices : 1
      Working Devices : 1
      Failed Devices : 0
      Spare Devices : 0
      Name : Zangs-NAS:Zangs
      UUID : 3ca82093:31bdaba6:e3fc2da7:8924c663
      Events : 14734
      Number Major Minor RaidDevice State
      - 0 0 0 removed
      1 8 16 1 active sync /dev/sdb
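
      A hedged sketch of finding out what is still keeping /dev/md127 busy before stopping it - usually the filesystem on it is still mounted, or a share/service is still using it:

      Source Code

      lsblk -o NAME,TYPE,MOUNTPOINT /dev/md127   # is the filesystem on md127 still mounted, and where?
      fuser -vm /dev/md127                       # which processes are still using that mounted filesystem?
      # disable the shares/services using it (or unmount via the OMV GUI), then:
      umount /dev/md127
      mdadm --stop /dev/md127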