RAID 10 clean, degraded

  • Hi, I have an issue with my RAID10 on my OMV 4.1.36-1 box. I don't know what happened, but I have noticed that only 3 out of 4 disks are working, and in RAID Management the status is active/degraded. I use 4 x 12TB WD Red disks.


    Any help from you is appreciated.


    Thank you very much.

    Code
    root@pandora:~# cat /proc/mdstat
    Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] 
    md0 : active raid10 sda[0] sde[2] sdb[1]
          23437508608 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
          bitmap: 82/175 pages [328KB], 65536KB chunk
    
    unused devices: <none>
    Code
    root@pandora:~# blkid
    /dev/sda: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="3904f2f1-fe1f-bde3-a965-d9dbe0074f66" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    /dev/md0: LABEL="REDRAID4X12" UUID="5fd65f52-b922-45e3-a940-eb7c75460446" TYPE="ext4"
    /dev/sdb: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="a6bb8aa8-4e9b-7f90-b105-45a9301acbce" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    /dev/sdc1: UUID="2218-DC43" TYPE="vfat" PARTUUID="09f69470-ba7b-4b6b-9456-c09f4c6ad2ee"
    /dev/sdc2: UUID="87bfca96-9bee-4725-ae79-d8d7893d5a49" TYPE="ext4" PARTUUID="3c45a8f0-3106-4ba8-89bc-b15d22e81144"
    /dev/sdc3: UUID="856b0ba6-a0a9-49f2-81ef-27e24004aa98" TYPE="swap" PARTUUID="fda4b444-cf82-4ae8-b916-01b8244acee3"
    /dev/sde: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="6c9c5433-6838-c39f-abfa-7807205a3238" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    Code
    root@pandora:~# fdisk -l | grep "Disk "
    Disk /dev/sda: 10,9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/sdb: 10,9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/md0: 21,8 TiB, 24000008814592 bytes, 46875017216 sectors
    Disk /dev/sdc: 28,7 GiB, 30752636928 bytes, 60063744 sectors
    Disk identifier: 51328880-3F36-4C4F-A18D-76E5CF56DD7D
    Disk /dev/sdd: 100 MiB, 104857600 bytes, 204800 sectors
    Disk /dev/sde: 10,9 TiB, 12000138625024 bytes, 23437770752 sectors
    Code
    root@pandora:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=pandora:Raid4x12TBWdRed UUID=8b767a7d:c52c068d:c04f1a3c:fd8d4c5f
       devices=/dev/sda,/dev/sdb,/dev/sde

    OMV 6.9.15-1 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64
    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304

  • Dr. geaves will be here shortly

    Have you checked the SMART information for your missing disk?

    It is no longer in the device list. Under Scheduled tests it shows "Capacity: n/a".


  • Please post the output of mdadm --detail /dev/md0.
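
    For reference, running it as root on the server and reading these parts of the output should show which slot is missing (a sketch, not output from this system):

    Code
    # Run as root on the OMV box:
    mdadm --detail /dev/md0
    # Check the "State :" line (expected: "clean, degraded") and the device
    # table at the bottom, where the failed slot shows "removed" instead of
    # a /dev/sd? entry.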


  • :/ I'm trying to work out which is the failed drive from the array. I'm guessing it's /dev/sdd, as it's the only one listed in fdisk that is not listed in blkid; you're going to need to replace that drive.
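
    The comparison described above can also be done directly in the shell (a sketch for readers following along):

    Code
    # Disks the kernel currently sees, with model and serial number:
    lsblk -d -o NAME,SIZE,MODEL,SERIAL
    # Disks that carry the array's metadata:
    blkid | grep linux_raid_member
    # A 12TB disk that appears in the first list but not the second has lost
    # its RAID metadata; if only three 12TB disks appear at all, the fourth
    # has dropped off the bus entirely.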

    The day after tomorrow I will have the disk available for replacement. At that point what procedure should I follow exactly? Thank you very much.


    • Official post

    At that point what procedure should I follow exactly? Thank you very much

    You need to identify the failed drive. To do this, look in Storage -> Disks and take a screenshot for reference; it will list the drives with their reference in OMV, i.e. /dev/sd? (? being a, b, c, etc.), and it will also show the make and serial number.


    Shut down the server and locate the failed drive using the information from the screenshot. WARNING!!! Before you remove the failed drive from the server, check and double-check that it's the right one. Remove the failed drive, install the new one, and make a note of its serial number.


    Start the server, go to Storage -> Disks, select the new drive and click Wipe on the menu, select Short, click OK and wait until it has completed.


    In RAID Management, select the RAID and click Recover on the menu; a dialog will open showing the new drive. Select it and click OK, and the array should now rebuild. BTW, due to the size of the drives this will take a llllooooonnnnngggg time :)


    WARNING!! The last time I dealt with a RAID10 for a user, he lost all his data; how or why, I have no idea. A RAID10 is a striped RAID using two RAID1 mirrors, and I assume this is how OMV sets it up, as I've never used RAID10 myself.


    Due to the size of your drives, and therefore the size of the array, I assume you don't have a backup in case this goes wrong.
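
    For reference, roughly the same wipe-and-recover steps can be done from the shell. This is only a sketch; /dev/sdX is a placeholder for the new drive, so triple-check the device name against the new drive's serial number before running anything destructive:

    Code
    # Confirm the device name really is the new drive (check the serial number):
    smartctl -i /dev/sdX
    # Clear any old signatures from the new drive (destructive!):
    wipefs -a /dev/sdX
    # Add the drive to the degraded array; the rebuild starts automatically:
    mdadm --manage /dev/md0 --add /dev/sdX
    # Watch the rebuild progress:
    cat /proc/mdstat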

  • You need to identify the failed drive. To do this, look in Storage -> Disks and take a screenshot for reference; it will list the drives with their reference in OMV, i.e. /dev/sd? (? being a, b, c, etc.), and it will also show the make and serial number.

    [Screenshot attached: Schermata-2021-12-20-alle-12.04.03.png]


    Could there be a problem inside the case? A problem with the SATA cable? Thank you.


  • VERY IMPORTANT NEWS:

    You need to identify the failed drive. To do this, look in Storage -> Disks and take a screenshot for reference; it will list the drives with their reference in OMV, i.e. /dev/sd? (? being a, b, c, etc.), and it will also show the make and serial number.

    In Storage -> Disks I clicked "Scan" and the disk appeared as /dev/sdf!


    [Screenshot attached: Schermata-2021-12-20-alle-12.29.23.png]


    I can also see it under S.M.A.R.T. -> Devices (and Scheduled tests) and under File Systems, but not under RAID Management (I assume that to stay there it should have been /dev/sdd..., not /dev/sdf...).


    What would be the best thing to do now? The system has surely been running on just three drives for some time.


    Thank you.


  • Possibly, but the other question is what your hardware is; searching for asmt109x- config suggests it's a USB bridge!!

    Maybe it's the USB connection to the UPS?


    • Official post

    In Storage -> Disks I clicked "Scan" and the disk appeared as /dev/sdf

    Then that might suggest there is a faulty SATA cable.

    but not under RAID Management (I assume that to stay there it should have been /dev/sdd..., not /dev/sdf...)

    If you select the RAID under RAID Management, does /dev/sdf show in the dialog?
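
    One way to check from the shell whether the reappeared /dev/sdf still carries the array's metadata (a sketch, assuming it really is the former member):

    Code
    # Does /dev/sdf still have an md superblock for this array?
    mdadm --examine /dev/sdf
    # Compare the "Array UUID" in the output with the UUID reported by
    # "mdadm --detail --scan" (8b767a7d:c52c068d:c04f1a3c:fd8d4c5f).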

  • Then that might suggest there is a faulty SATA cable.

    If you select the RAID under RAID Management, does /dev/sdf show in the dialog?


    It is not shown. The fourth disk still shows as "removed".


  • :/ Have you tried Recover on the menu after selecting the RAID? If there's no drive available, the dialog will be empty.

    There is only /dev/sdd... I don't know why, but I have a feeling that a reboot might fix the situation. Or could that be very dangerous?
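
    For completeness, the shell equivalent of the Recover step discussed above would be roughly the following. Treat it as a sketch only, and only after confirming that /dev/sdf really is the former array member:

    Code
    # Try to re-attach the former member; thanks to the array's write-intent
    # bitmap this may need only a short resync:
    mdadm --manage /dev/md0 --re-add /dev/sdf
    # If --re-add is refused, a full rebuild can be started instead:
    # mdadm --manage /dev/md0 --add /dev/sdf
    # Watch the progress:
    cat /proc/mdstat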

  • :) Are you a Windows user?

    Mac! :saint: But my first server was a FreeBSD box in 1995. Mac for everyday use, Linux for servers, and Windows only in the odd VM, when it's not possible to do without it ;)

    :) Are you a Windows user?


    As /dev/sdf is under SMART, run a test on it; you're looking at the output of attributes 197 and 198, particularly their values.

    Thank you :)
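
    For reference, the SMART check suggested above can also be run from the shell (a sketch, assuming smartmontools is installed):

    Code
    # Start a short self-test on the reappeared disk:
    smartctl -t short /dev/sdf
    # After it finishes (a few minutes), check the attribute table, in
    # particular 197 Current_Pending_Sector and 198 Offline_Uncorrectable:
    smartctl -A /dev/sdf
    # Self-test log / result:
    smartctl -l selftest /dev/sdf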


    • Official post

    Dr. Geaves, if you'll allow me to interrupt ...

    That disk seems to be fine (although a little hot, 40ºC). I would do the same with the others and compare the number of start cycles and/or the hours of operation. Assuming they were all purchased at the same time, if this disk is significantly lower it could be the result of a hardware failure that has been preventing it from starting.
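
    A quick way to compare those counters across all four array disks from the shell (a sketch; adjust the device names to match the system):

    Code
    # Compare power-on hours, start/stop counts and temperature of the members:
    for d in /dev/sda /dev/sdb /dev/sde /dev/sdf; do
        echo "== $d =="
        smartctl -A "$d" | grep -E 'Power_On_Hours|Start_Stop_Count|Power_Cycle_Count|Temperature_Celsius'
    done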
