Replacing a defective disk in a RAID5 array

  • Hello everyone


    I have a problem for which I have not yet been able to find a suitable solution in the forum. Should I have overlooked an already existing solution, I would be happy about a hint/link to the thread ;)


    A while ago I set up an OMV system with version 3.0.99, which has now been running without problems for several years. The system itself runs on an SSD, and the array is built from 4 x 2TB disks in RAID 5.


    Now to the actual problem. A few weeks ago one of the disks started acting up and reported some defective sectors. I wanted to replace this disk, so I shut the system down, took the old disk out, put the new one in (same type and size), and now no RAID array is shown in the GUI at all. The recovery option also stays greyed out. Only the disks themselves, including the new one, are still listed as hardware. The shares (SMB, etc.) are still shown and are visible on the network. OK, deep breath, shut the system down again, put the old disk back in, booted up, and lo and behold, the RAID array is back, online, and the contents are available.


    Before I experiment any further, the big question: how do I best replace this defective disk so that the array rebuilds itself?


    One small caveat: ideally everything should be done via the GUI, or be described in detail for the command line. Unfortunately I am not that well versed in Linux...



    Best regards, Honk

    • Official post

    You can complete this from the GUI


    Raid Management -> select delete from the menu -> in the dialog select the defective drive and click OK; the drive is then removed from the array and can be removed from the machine.


    Install the new drive, then Storage -> Disks -> select the drive and click wipe on the menu (a short wipe will be OK).


    The following may not work, but it's worth a try: Raid Management -> select the raid -> click recover on the menu. If the new drive is not displayed in the dialog that pops up, you will need to format the drive the same way as your raid, then repeat the process to add it to your raid.
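    For the command-line route the original poster asked about, a rough sketch of the equivalent mdadm steps (the device names here are only examples, not taken from this thread; check your own with blkid or cat /proc/mdstat before running anything):

    mdadm --manage /dev/md127 --fail /dev/sdd      # mark the failing member as failed (assumed device names)
    mdadm --manage /dev/md127 --remove /dev/sdd    # remove it from the array
    # power off, swap in the new disk, boot, then:
    mdadm --manage /dev/md127 --add /dev/sdd       # add the new disk; the rebuild starts automatically
    cat /proc/mdstat                               # watch the rebuild progress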

  • Hi geaves


    I followed the steps described above and it works. The RAID is currently recovering the data onto the new disk to get the degraded array up and running again. It just takes some time to rebuild...


    Thanks a lot for the quick reply and the short guidance! Wish you a happy new year.



    Cheers

  • Hello, I have the same problem, but my English is not the best... so I'd rather ask again:


    Under Raid management I cannot delete a disk ...


    I can wipe under Disks, though:



    But I had understood that you only wipe there after the new disk has been installed?!


    Could someone please tell me again whether I have to go to "Disks" both times, and whether the "quick wipe" is enough the first time as well? For the second time it is stated explicitly. That would be great, many thanks.

    • Official post

    Under Raid management I cannot delete a disk ...

    ?( Are you wanting to remove the raid completely? The image shows a RAID5 in a clean/degraded state; if you remove another drive, the raid is gone and so is your data!

  • 1.


    root@OMVneu:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid5 sda[3] sdb[1]
    5860270080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]



    unused devices: <none>


    2.


    root@OMVneu:~# blkid
    /dev/sdb: UUID="69bd812a-cd81-f419-8a59-0c2f87234e77" UUID_SUB="d8537840-a12f-45e3-eeef-8d8697590e29" LABEL="openmediavault:raid5home" TYPE="linux_raid_member"
    /dev/md127: LABEL="homeSAVE" UUID="2f22f941-e6a7-424e-8ce2-da43bc4a56e3" TYPE="ext4"
    /dev/sda: UUID="69bd812a-cd81-f419-8a59-0c2f87234e77" UUID_SUB="8852b181-5cc0-9da0-7c8d-750026c01618" LABEL="openmediavault:raid5home" TYPE="linux_raid_member"
    /dev/sdc1: UUID="22af3500-51d1-4eda-ad95-1106caa7e01b" TYPE="ext4" PARTUUID="47f7cbe9-01"
    /dev/sdc5: UUID="c611ba4f-b7bc-4fc0-9b3a-b00eae14443f" TYPE="swap" PARTUUID="47f7cbe9-05"
    /dev/sdd: UUID="69bd812a-cd81-f419-8a59-0c2f87234e77" UUID_SUB="68880560-8617-e66f-a57b-2ba1b236d8cc" LABEL="openmediavault:raid5home" TYPE="linux_raid_member"



    3.



    root@OMVneu:~# fdisk -1 | grep "Disk."
    fdisk: Ungültige Option -- 1



    Usage:
    fdisk [options] <disk> change partition table
    fdisk [options] -l [<disk>] list partition table(s)



    Display or manipulate a disk partition table.



    Options:
    -b, --sector-size <size> physical and logical sector size
    -B, --protect-boot don't erase bootbits when creating a new label
    -c, --compatibility[=<mode>] mode is 'dos' or 'nondos' (default)
    -L, --color[=<when>] colorize output (auto, always or never)
    colors are enabled by default
    -l, --list display partitions and exit
    -o, --output <list> output columns
    -t, --type <type> recognize specified partition table type only
    -u, --units[=<unit>] display units: 'cylinders' or 'sectors' (default)
    -s, --getsz display device size in 512-byte sectors [DEPRECATED]
    --bytes print SIZE in bytes rather than in human readable format
    -w, --wipe <mode> wipe signatures (auto, always or never)
    -W, --wipe-partitions <mode> wipe signatures from new partitions (auto, always or never)



    -C, --cylinders <number> specify the number of cylinders
    -H, --heads <number> specify the number of heads
    -S, --sectors <number> specify the number of sectors per track



    -h, --help display this help and exit
    -V, --version output version information and exit



    Available columns (for -o):
    gpt: Device Start End Sectors Size Type Type-UUID Attrs Name UUID
    dos: Device Start End Sectors Cylinders Size Type Id Attrs Boot End-C/H/S
    Start-C/H/S
    bsd: Slice Start End Sectors Cylinders Size Type Bsize Cpg Fsize
    sgi: Device Start End Sectors Cylinders Size Type Id Attrs
    sun: Device Start End Sectors Cylinders Size Type Id Flags



    For more details see fdisk(8).


    4.


    root@OMVneu:~# cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #



    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions



    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes



    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>



    # definitions of existing MD arrays
    ARRAY /dev/md/raid5home metadata=1.2 name=openmediavault:raid5home UUID=69bd812a:cd81f419:8a590c2f:87234e77



    # instruct the monitoring daemon where to send mail alerts


    5.


    root@OMVneu:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/raid5home level=raid5 num-devices=3 metadata=1.2 name=openmediavault:raid5home UUID=69bd812a:cd81f419:8a590c2f:87234e77
    devices=/dev/sda,/dev/sdb


    6.


    3 x WD red (each 3TB)
    1x SDD 32GB


    7.


    Nothing happened, I just received:
    A DegradedArray event had been detected on md device /dev/md/raid5home.
    Faithfully yours, etc.
    P.S. The /proc/mdstat file currently contains the following:



    Dear geaves, I really don't understand any of this....
    thanks

    • Official post

    Dear geaves, I really don't understand any of this....

    The output from the above shows 3 drives which are part of your raid 5, /dev/sd[abd]. Whilst your raid is active, it is in a clean/degraded state, as your first image shows; the drive /dev/sdd is missing. Are you trying to add that back to the array?
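    If it helps at this point, a small sketch of how one could inspect the array and the missing member from the shell before deciding (assuming /dev/md127 and /dev/sdd as shown in the outputs above):

    mdadm --detail /dev/md127    # array state, which slots are active/missing
    mdadm --examine /dev/sdd     # RAID superblock on the dropped disk, if it is still readable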


    • Official post

    as if it is possible,

    If the drive is OK then it's possible. From the command line run these two and it should come back up:


    mdadm --stop /dev/md127


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[abd]


    Just copy and paste each one in turn; come back if there are any errors, but there shouldn't be.
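    As a possible follow-up once the array assembles again (a sketch, assuming /dev/sdd is the member that dropped out): if --assemble --force does not pull the third disk back in on its own, it can usually be re-added and the rebuild watched:

    mdadm --manage /dev/md127 --add /dev/sdd    # re-add the dropped member
    watch cat /proc/mdstat                      # follow the resync/rebuild progress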

  • root@OMVneu:~# mdadm --stop /dev/md127
    mdadm: Cannot get exclusive access to /dev/md127:Perhaps a running process, mounted filesystem or active volume group?


    How can I find out if there is another process running? Or what else should I do?


    (Does copy and paste not work with PuTTY?)


    Thanks

    • Official post

    Does copy and paste not work with PuTTY

    Yes, it's what most users use, but I ssh from the W10 cmd.


    Cannot get exclusive access to /dev/md127: Perhaps a running process, mounted filesystem or active volume group?

    This suggests that something or someone is accessing the raid, e.g. something accessing the samba shares; the above has never happened before. What's the output of mdadm --scan dev/md127
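    For the "Cannot get exclusive access" error, a sketch of how one might check what is still holding the array and release it before retrying (these are standard tools, not commands quoted from this thread):

    fuser -vm /dev/md127      # list processes still using the filesystem on the array
    umount /dev/md127         # unmount the filesystem mounted from the array
    mdadm --stop /dev/md127   # then try stopping the array again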

  • Tried a shutdown...
    I received this after re-trying:


    root@OMVneu:~# mdadm /dev/md127
    /dev/md127: 5588.79GiB raid5 3 devices, 0 spares. Use mdadm --detail for more detail.
    root@OMVneu:~# mdadm --assemble --force --verbose /dev/md127 /dev/sd[abd]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sda is busy - skipping
    mdadm: /dev/sdb is busy - skipping
    mdadm: Found some drive for an array that is already active: /dev/md/raid5home
    mdadm: giving up.



    The requested output is:


    root@OMVneu:~# mdadm --scan dev/md127
    mdadm: --scan does not set the mode, and so cannot be the first option.


    What should I do now?

    • Official post

    I'm sorry, we're losing this in translation. I didn't want you to execute anything; I was simply explaining why that "busy - skipping" error happened.


    You must perform both commands from post 12 for this to work! If the array cannot be stopped, something or someone is accessing it.

  • I stopped all services (samba, plex...) but the result is the same.


    I have to leave now (looking after the kids), but if there is anything more I can do at the weekend, I'd be happy to hear about it.


    Thanks so far.
