RAID 10 clean, degraded

  • Geaves, if you'll allow me to interrupt ...

    That disk seems to be fine (although a little hot, 40 °C). I would do the same with the others and compare the start/stop count and/or the number of power-on hours. Assuming they were purchased at the same time, if this disk is significantly lower it could be the result of a hardware failure preventing this disk from starting up.
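
    For example, something along these lines pulls the relevant SMART attributes for every member so they can be compared side by side (run as root; the device names are only examples, use the ones on your system):

    Code
    # compare power-on hours, start/stop count and temperature across the raid members
    for d in /dev/sda /dev/sdb /dev/sde /dev/sdf; do
      echo "== $d =="
      smartctl -A "$d" | grep -E 'Power_On_Hours|Start_Stop_Count|Temperature_Celsius'
    done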

    The others... (2/3)

    OMV 6.9.15-2 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64

    OMV Plugins: backup 6.1.1 | compose 6.11.3 | cputemp 6.1.3 | flashmemory 6.2 | ftp 6.0.7-1 | kernel 6.4.10 | nut 6.0.7-1 | omvextrasorg 6.3.6 | resetperms 6.0.3 | sharerootfs 6.0.3-1

    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304

  • The others... (3/3)


    OMV 6.9.15-2 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64

    OMV Plugins: backup 6.1.1 | compose 6.11.3 | cputemp 6.1.3 | flashmemory 6.2 | ftp 6.0.7-1 | kernel 6.4.10 | nut 6.0.7-1 | omvextrasorg 6.3.6 | resetperms 6.0.3 | sharerootfs 6.0.3-1

    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304

    • Official post

    The disc in question has 11,303 hours of use and the other three have 15,197, 15,198 and 15,198

    Did you create the Raid10 with new discs?

  • The disc in question has 11,303 hours of use and the other three have 15,197, 15,198 and 15,198

    Did you create the Raid10 with new discs?

    Never, but last year something happened. I don't know if it is relevant; you can see it here: OMV Raid Missing after reboot


    It is also possible that I have been using 3 disks instead of 4 for about 5 months ... or not? Isn't there a way to get notified if one of the 4 disks in the RAID10 goes missing or has problems? I only noticed it by chance while checking some configurations. A S.M.A.R.T. test is performed every morning at 6:00 am on every disk, but I never got any alerts. Thank you.
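
    (As a side note, mdadm itself can send an e-mail when an array degrades, provided the system is able to send outgoing mail; a minimal sketch, with a placeholder address:)

    Code
    # in /etc/mdadm/mdadm.conf: address that mdadm's monitor mails on array events
    MAILADDR admin@example.com

    # then, as root, send a test notification for every array to verify the mail path works
    mdadm --monitor --scan --oneshot --test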

    OMV 6.9.15-2 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64

    OMV Plugins: backup 6.1.1 | compose 6.11.3 | cputemp 6.1.3 | flashmemory 6.2 | ftp 6.0.7-1 | kernel 6.4.10 | nut 6.0.7-1 | omvextrasorg 6.3.6 | resetperms 6.0.3 | sharerootfs 6.0.3-1

    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304

  • Did you create the Raid10 with new discs?

    I'm sorry, my correct answer is: "yes, originally I created the Raid10 with 4 new discs". Thank you.

    OMV 6.9.15-2 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64

    OMV Plugins: backup 6.1.1 | compose 6.11.3 | cputemp 6.1.3 | flashmemory 6.2 | ftp 6.0.7-1 | kernel 6.4.10 | nut 6.0.7-1 | omvextrasorg 6.3.6 | resetperms 6.0.3 | sharerootfs 6.0.3-1

    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304

    • Official post

    originally I created the Raid10 with 4 new discs

    Thank you. Let's see what geaves thinks of this ...

    • Official post

    Initially the drives appear to be fine; it looks as though mdadm 'threw' the drive due to some intermittent fault. The drive certainly did not remove itself, otherwise the raid would have become inactive.


    I'm leaning toward an intermittent fault, which leaves the SATA cable and/or the SATA port the drive is connected to.
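
    A generic way to look for evidence of that is to search the kernel log for ATA link errors or resets, for example:

    Code
    # kernel messages from the current boot, filtered for ATA errors/resets
    journalctl -k | grep -iE 'ata[0-9]+.*(error|reset|link)'
    # or, if journald logs are not persistent on this system
    dmesg | grep -iE 'ata[0-9]+.*(error|reset|link)'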

  • I'm leaning toward an intermittent fault, which leaves the SATA cable and/or the SATA port the drive is connected to.

    I agree. Replacing the cable is definitely worth doing to start. But before shutting down to take care of the hardware inside the case, I think it is necessary to fix the array first, or can we fix it later?

    OMV 6.9.15-2 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64

    OMV Plugins: backup 6.1.1 | compose 6.11.3 | cputemp 6.1.3 | flashmemory 6.2 | ftp 6.0.7-1 | kernel 6.4.10 | nut 6.0.7-1 | omvextrasorg 6.3.6 | resetperms 6.0.3 | sharerootfs 6.0.3-1

    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304

  • But before touching the hardware of an already degraded RAID you should take the time to do a backup (yes, it takes a long time given the amount of storage).
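
    Purely as an illustration, a plain rsync copy of the array's mount to whatever backup destination you have would look something like this (/srv/backup-target is a made-up path, substitute your own):

    Code
    # copy everything off the degraded array, preserving permissions, ACLs and xattrs
    rsync -aHAX --info=progress2 /srv/dev-disk-by-label-REDRAID4X12/ /srv/backup-target/

    Any other backup method you already use is just as good; the point is simply to have a copy before pulling cables.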

    If you got help in the forum and want to give something back to the project click here (omv) or here (scroll down) (plugins) and write up your solution for others.

  • I changed the cable; after rebooting I got "Failed to start File System Check on /dev/disk/by-label/REDRAID4X12".


    What is the best option to continue? Thank you.


    OMV 6.9.15-2 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64

    OMV Plugins: backup 6.1.1 | compose 6.11.3 | cputemp 6.1.3 | flashmemory 6.2 | ftp 6.0.7-1 | kernel 6.4.10 | nut 6.0.7-1 | omvextrasorg 6.3.6 | resetperms 6.0.3 | sharerootfs 6.0.3-1

    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304

    • Official post

    What is the best option to continue

    Wasn't expecting that; it would suggest there is a problem with that array, or at least with its file system. The norm is to run the check manually, but that is on a clean array; I have no idea what effect this might have on a degraded array, particularly a Raid10.


    So, log in as root and run fsck /dev/md0 and accept everything with a y (yes), or run fsck -y /dev/md0, which will correct errors without user input. I have no idea how long this will take given the size of the drives.
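
    Spelled out (the file system on /dev/md0 must not be mounted while fsck runs, which it shouldn't be since the boot-time check failed):

    Code
    # run as root against the unmounted ext4 file system on the array
    fsck -y /dev/md0    # -y answers "yes" to every repair prompt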

  • So, log in as root and run fsck /dev/md0 and accept everything with a y (yes), or run fsck -y /dev/md0, which will correct errors without user input. I have no idea how long this will take given the size of the drives.

    Thank you geaves, it's OK, fixed: the system is up and everything seems to be working fine. Obviously there is still the problem of the fourth disk not being present in the array.


    These are the check commands and their output:


    Code
    root@pandora:~# cat /proc/mdstat
    Personalities : [raid10] [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] 
    md0 : active raid10 sda[0] sdb[1] sde[2]
          23437508608 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
          bitmap: 83/175 pages [332KB], 65536KB chunk
    
    unused devices: <none>
    Code
    root@pandora:~# blkid
    /dev/sda: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="3904f2f1-fe1f-bde3-a965-d9dbe0074f66" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    /dev/sdb: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="a6bb8aa8-4e9b-7f90-b105-45a9301acbce" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    /dev/sdc1: UUID="2218-DC43" TYPE="vfat" PARTUUID="09f69470-ba7b-4b6b-9456-c09f4c6ad2ee"
    /dev/sdc2: UUID="87bfca96-9bee-4725-ae79-d8d7893d5a49" TYPE="ext4" PARTUUID="3c45a8f0-3106-4ba8-89bc-b15d22e81144"
    /dev/sdc3: UUID="856b0ba6-a0a9-49f2-81ef-27e24004aa98" TYPE="swap" PARTUUID="fda4b444-cf82-4ae8-b916-01b8244acee3"
    /dev/md0: LABEL="REDRAID4X12" UUID="5fd65f52-b922-45e3-a940-eb7c75460446" TYPE="ext4"
    /dev/sde: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="6c9c5433-6838-c39f-abfa-7807205a3238" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    /dev/sdf: UUID="8b767a7d-c52c-068d-c04f-1a3cfd8d4c5f" UUID_SUB="a0287edc-2404-a0cc-735b-3c99f2f923af" LABEL="pandora:Raid4x12TBWdRed" TYPE="linux_raid_member"
    Code
    root@pandora:~# fdisk -l | grep "Disk "
    Disk /dev/sda: 10,9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/sdb: 10,9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/sdc: 28,7 GiB, 30752636928 bytes, 60063744 sectors
    Disk identifier: 51328880-3F36-4C4F-A18D-76E5CF56DD7D
    Disk /dev/sdd: 100 MiB, 104857600 bytes, 204800 sectors
    Disk /dev/sde: 10,9 TiB, 12000138625024 bytes, 23437770752 sectors
    Disk /dev/md0: 21,8 TiB, 24000008814592 bytes, 46875017216 sectors
    Disk /dev/sdf: 10,9 TiB, 12000138625024 bytes, 23437770752 sectors
    Code
    # definitions of existing MD arrays
    ARRAY /dev/md0 metadata=1.2 name=pandora:Raid4x12TBWdRed UUID=8b767a7d:c52c068d:c04f1a3c:fd8d4c5f
    root@pandora:~# 
    root@pandora:~# 
    root@pandora:~# mdadm --detail --scan --verbose
    ARRAY /dev/md0 level=raid10 num-devices=4 metadata=1.2 name=pandora:Raid4x12TBWdRed UUID=8b767a7d:c52c068d:c04f1a3c:fd8d4c5f
       devices=/dev/sda,/dev/sdb,/dev/sde

    In Raid Management I have sda, sdb and sde, but not sdf. In Recovery -> Devices only the mysterious sdd is listed...
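
    (For completeness, the per-member state of the array, including the missing slot, can also be shown with the non-scan variant of the command already used above:)

    Code
    mdadm --detail /dev/md0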


    What to do? Thank you so much for your support.

    OMV 6.9.15-2 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64

    OMV Plugins: backup 6.1.1 | compose 6.11.3 | cputemp 6.1.3 | flashmemory 6.2 | ftp 6.0.7-1 | kernel 6.4.10 | nut 6.0.7-1 | omvextrasorg 6.3.6 | resetperms 6.0.3 | sharerootfs 6.0.3-1

    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304

  • Although the system seems to work correctly, I have read the logs and there was a problem at boot "quotaon: cannot find /srv/dev-disk-by-label-REDRAID4X12/aquota.user on /dev/md0 [/srv/dev-disk-by-label-REDRAID4X12]" and therefore "Failed to start Enable File System Quotas". Can this help us?

    OMV 6.9.15-2 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64

    OMV Plugins: backup 6.1.1 | compose 6.11.3 | cputemp 6.1.3 | flashmemory 6.2 | ftp 6.0.7-1 | kernel 6.4.10 | nut 6.0.7-1 | omvextrasorg 6.3.6 | resetperms 6.0.3 | sharerootfs 6.0.3-1

    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304

    • Official post

    In Raid Management I have sda, sdb and sde, but not sdf. In Recovery -> Devices only the mysterious sdd is listed

    The recovery will not work in this case, as blkid has identified /dev/sdf with a raid signature; to use recovery you would have to wipe the drive first, a long process given the drive size. However, mdadm --add /dev/md0 /dev/sdf should add the drive back to the array.
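
    In full, run as root; the rebuild progress can then be followed in /proc/mdstat, just like the output posted above:

    Code
    # re-add the existing member to the degraded array and watch the resync
    mdadm --add /dev/md0 /dev/sdf
    cat /proc/mdstat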

    Can this help us?

    Yes, but I've never had to deal with this personally, and AFAIK this was disabled, though that may not be the case on OMV4. I'll tag a couple of the other mods who know more about this than I do: crashtest, macom

    • Official post

    Although the system seems to work correctly, I have read the logs and there was a problem at boot "quotaon: cannot find /srv/dev-disk-by-label-REDRAID4X12/aquota.user on /dev/md0 [/srv/dev-disk-by-label-REDRAID4X12]" and therefore "Failed to start Enable File System Quotas". Can this help us?

    There may be a number of errors found in the logs that are harmless. If this is not affecting you in any tangible way, other than the log entries, I wouldn't worry about it.

    Otherwise, you could give the following a try.
    ___________________________________________________________

    Turn the quota service off.


    sudo /etc/init.d/quota stop



    (In the following examples, substitute the appropriate labels for your drives.)


    sudo quotaoff --user --group /srv/dev-disk-by-label-DATA

    sudo quotaoff --user --group /srv/dev-disk-by-label-RSYNC
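
    (Adapted to the array in this thread, whose mount point /srv/dev-disk-by-label-REDRAID4X12 appears in the quota error above, that would be roughly:)

    Code
    sudo /etc/init.d/quota stop
    sudo quotaoff --user --group /srv/dev-disk-by-label-REDRAID4X12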

  • The recovery will not work in this case, as blkid has identified /dev/sdf with a raid signature; to use recovery you would have to wipe the drive first, a long process given the drive size. However, mdadm --add /dev/md0 /dev/sdf should add the drive back to the array.

    I made a backup (which took a very long time) and then wiped the 4th disk. At that point I ran the raid recovery. Everything went smoothly, thanks.


    I then upgraded from OMV 4 to OMV 5, with a simultaneous upgrade from Debian 9 to Debian 10, following these instructions from dleidert. It all worked out well.


    I really appreciate the help from the forum. Thank you all.

    OMV 6.9.15-2 (Shaitan) - Debian 11 (Bullseye) - Linux 6.1.0-0.deb11.17-amd64

    OMV Plugins: backup 6.1.1 | compose 6.11.3 | cputemp 6.1.3 | flashmemory 6.2 | ftp 6.0.7-1 | kernel 6.4.10 | nut 6.0.7-1 | omvextrasorg 6.3.6 | resetperms 6.0.3 | sharerootfs 6.0.3-1

    ASRock J5005-ITX - 16GB DDR4 - 4x WD RED 12TB (Raid10), 2x WD RED 4TB (Raid1) [OFF], Boot from SanDisk Ultra Fit Flash Drive 32GB - Fractal Design Node 304

  • zerozenit

    Added the label "solved".
