RAID 5 loses a disk

  • Hello everyone,


    Since yesterday, one of the three disks in my RAID 5 has disappeared for no apparent reason.

    At that point the disk was no longer visible in OMV5 either. Even a software restart via the GUI didn't help.



    Only after I had removed all the disks and performed a hardware restart does it show up again under Storage | Disks (/dev/sda).

    My RAID is, logically, shown as "degraded" because one member has been removed.

    When I try to start the recovery, I would have expected to find the disk there. Unfortunately, no disk is listed.


    Is there any way to restore my RAID?



    Hoping for a few tips =O

  • You need to post the output of each of these commands. To do that, SSH into OMV as root, copy the output, and paste it using the </> button on the menu (makes it easier to read).

    Raid is not a backup! Would you go skydiving without a parachute?

  • Hello geaves, thanks for your support.


    Code
    root@NAS-TANK:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid5 sdc[2] sda[0]
          19532611584 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
          bitmap: 36/73 pages [144KB], 65536KB chunk

    unused devices: <none>
    Code
    root@NAS-TANK:~# blkid
    /dev/sdc: UUID="4b8e8b3f-2277-e62c-d01f-20467484a558" UUID_SUB="864e032b-fb3e-c1ff-d135-9bbd2c92dab2" LABEL="NAS-TANK:Pool" TYPE="linux_raid_member"
    /dev/sdb: UUID="4b8e8b3f-2277-e62c-d01f-20467484a558" UUID_SUB="f927d315-081a-87c6-f1f4-9b0ebf224766" LABEL="NAS-TANK:Pool" TYPE="linux_raid_member"
    /dev/sda: UUID="4b8e8b3f-2277-e62c-d01f-20467484a558" UUID_SUB="0bc01b67-715f-497c-eed2-f212dd17cb92" LABEL="NAS-TANK:Pool" TYPE="linux_raid_member"
    /dev/sdd1: UUID="700bc782-15d0-4a05-8e11-f6c5af744242" TYPE="ext4" PARTUUID="7d828a7f-01"
    /dev/sdd5: UUID="e3727021-2b32-4621-bf77-90b932399a11" TYPE="swap" PARTUUID="7d828a7f-05"
    /dev/sde1: LABEL="Backup" UUID="934380dd-d6bd-4853-b112-47b4fcda9d1d" TYPE="ext4" PARTUUID="bcc44841-8e9a-4357-8033-72ef59ea5a8c"
    /dev/md127: LABEL="Storage" UUID="940a810d-ee54-4e4d-9a03-cb663a9e10a4" TYPE="ext4"
    /dev/sdf1: LABEL="BackupExt" UUID="591e6d6a-0fea-410f-98d0-91d7d6c6f593" TYPE="ext4" PARTUUID="19cd9c46-f025-47b1-96b3-0b87912c9ee2"
    Code
    root@NAS-TANK:~# mdadm --detail --scan --verbose
    ARRAY /dev/md/NAS-TANK:Pool level=raid5 num-devices=3 metadata=1.2 name=NAS-TANK:Pool UUID=4b8e8b3f:2277e62c:d01f2046:7484a558
    devices=/dev/sda,/dev/sdc
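    A quick way to read the mdstat line above: `[3/2]` means three members are expected but only two are active, and the underscore in `[U_U]` marks the missing slot (here the second member, consistent with /dev/sdb being absent from the `devices=` list). A minimal sketch of checking for that marker, using the status line from this thread as a fixed sample rather than re-reading /proc/mdstat:

```shell
# Sample mdstat status line taken from this thread (a live check would read /proc/mdstat)
mdstat_line='19532611584 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]'

# An underscore inside the [...] status brackets means a member is missing or failed
case "$mdstat_line" in
  *'['*_*']'*) echo "degraded" ;;
  *)           echo "healthy"  ;;
esac
```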
  • OK, from the output /dev/sdb has been ejected from the array by mdadm; that confirms the image in your first post. The question is why:


    1) The drive has physically failed

    2) The SATA cable connected to that drive is faulty

    3) The port that drive is connected to is faulty

    4) Power surge causing that drive to disconnect


    Do you run regular SMART tests on your drives, even if it's only a short one?

    Is the drive showing a red dot in the SMART settings?
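    As a hedged aside on the SMART question: with the smartmontools package installed, `smartctl -H /dev/sdb` gives the overall verdict, `smartctl -t short /dev/sdb` queues a short self-test, and `smartctl -a /dev/sdb` dumps all attributes. The sketch below only shows which attributes in the `-a` output are worth scanning for a drive that was ejected; the sample lines are illustrative, not taken from the OP's drive:

```shell
# Illustrative smartctl -a attribute lines (NOT from the drive in this thread)
smart_sample='  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0'

# Count the attributes that signal media trouble; non-zero raw values here are a red flag
echo "$smart_sample" | grep -cE 'Reallocated|Pending|Uncorrectable'
```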


    Output of mdadm --detail /dev/sdb


    Edit: If the array has connections to shares within a docker container, I would suggest stopping the container(s) to reduce 'calls' to the array until it's rebuilt.


  • it looks all good.

    All green


    Code
    root@NAS-TANK:~# mdadm --detail /dev/sdb
    mdadm: /dev/sdb does not appear to be an md device

    Edited once, last by paddl82: all access to the array has been deactivated.

  • it looks all good.

    All green

    All that is telling you is that there are no bad sectors detected on any of the drives; it doesn't tell you if there are any issues on the drive that was removed.


    It should be possible to add the drive back to the array:


    mdadm --add /dev/md127 /dev/sdb and then check the rebuild progress with cat /proc/mdstat


  • OK, it looks good, now in progress. But why can't I do this via the web GUI of OMV5?


    Code
    root@NAS-TANK:~# mdadm --add /dev/md127 /dev/sdb
    mdadm: re-added /dev/sdb
    root@NAS-TANK:~# cat /proc/mdstat
    Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
    md127 : active raid5 sdb[1] sdc[2] sda[0]
          19532611584 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
          [=====>...............]  recovery = 28.3% (2766139936/9766305792) finish=662.3min speed=176146K/sec
          bitmap: 27/73 pages [108KB], 65536KB chunk

    unused devices: <none>
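    The recovery line in the mdstat output above carries the progress percentage, an ETA, and the rebuild speed. A small sketch of extracting those fields, using the exact line from this thread as a fixed sample (a live check would re-read /proc/mdstat, e.g. via `watch cat /proc/mdstat`):

```shell
# Recovery status line as posted in this thread
line='[=====>...............]  recovery = 28.3% (2766139936/9766305792) finish=662.3min speed=176146K/sec'

# Pull out the percentage complete and the estimated minutes remaining
pct=$(printf '%s\n' "$line" | sed -n 's/.*recovery = \([0-9.]*\)%.*/\1/p')
eta=$(printf '%s\n' "$line" | sed -n 's/.*finish=\([0-9.]*\)min.*/\1/p')
echo "progress=${pct}% eta=${eta}min"
```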
    But why can't I do this via the web GUI of OMV5?

    You could; however, I have had occasions where a user's drive does not show when attempting Recover from the menu, and the solution to that is to wipe the drive first. The CLI option will add it with a single command.
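    The wipe-then-recover fallback described here can be sketched as below. This is deliberately a dry run that only prints the commands, because --zero-superblock erases the md metadata on the disk; the device names are the ones from this thread and one possible way to "wipe the drive first", so verify both before running anything for real:

```shell
# Dry-run sketch of the wipe-first workaround (prints the commands, runs nothing)
# WARNING: the real commands destroy md metadata on $DISK
DISK=/dev/sdb
ARRAY=/dev/md127

run() { echo "would run: $*"; }          # swap the echo for "$@" to execute for real

run mdadm --zero-superblock "$DISK"      # clear the stale RAID superblock on the member
run mdadm --add "$ARRAY" "$DISK"         # re-add the disk; the rebuild starts automatically
```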


    If this happens again then I would look to hardware for the cause.

