Clean, Degraded RAID 5 with 2/3 disks

  • Hello,


    I recently found out that my RAID 5 has some issue, as only 2 of the 3 disks are apparently being used.
    All 3 are apparently recognized, but I can't "repair" the RAID by adding the missing disk to the array.


    I don't exactly know when this happened. I actually noticed it while trying to get some Nextcloud plugin working on Docker, I think.
    At the same time I tried to apply updates, but I got an error message about the disk my Docker containers were on, which blocked the updates. This was about two months ago, and I didn't have time to work on it.
    Yesterday I took the time and decided to start from a fresh OMV. I installed it and began configuring. Updates worked fine, but the problem with the RAID remains.
    It works if I mount it, but I would prefer to repair it before going further with my configuration...


    So here is what I get from the commands:


    Code
    blkid                                                                                                                                                                                            
    /dev/sda1: UUID="2ef1dc7b-1703-475a-9482-bfe488560b4a" TYPE="ext4" PARTUUID="3c52c98d-01"                                                                                                                          
    /dev/sda3: LABEL="Docker Disk" UUID="47c8493a-c1f6-4b84-b8b9-425bf0fb3154" TYPE="ext4" PARTUUID="3c52c98d-03"                                                                                                      
    /dev/sda5: UUID="88b6a16f-807f-447e-a416-c0b41c632cdc" TYPE="swap" PARTUUID="3c52c98d-05"                                                                                                                          
    /dev/sdd: UUID="a635e782-9138-367c-1d14-9efe18c55a0c" UUID_SUB="c5fd7e5d-d361-5222-a0bc-4ac5077a14f9" LABEL="AlienNAS:AlienR5" TYPE="linux_raid_member"                                                            
    /dev/sdc: UUID="a635e782-9138-367c-1d14-9efe18c55a0c" UUID_SUB="3151ac6a-03d3-ceb7-9e09-39e9a31175a1" LABEL="AlienNAS:AlienR5" TYPE="linux_raid_member"                                                            
    /dev/md127: UUID="8cbb2c06-a2c0-4a8f-bf04-381cf8e9bc48" TYPE="ext4"                                                                                                                                                
    /dev/sdb: UUID="a635e782-9138-367c-1d14-9efe18c55a0c" UUID_SUB="36d89265-535a-3877-9fcf-75212503a3b9" LABEL="AlienNAS:AlienR5" TYPE="linux_raid_member"
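    One thing worth noting from the blkid output above: all three 3 TB drives (sdb, sdc, sdd) carry the same array UUID ("a635e782-..."), so sdd is still stamped as a RAID member even though the running array dropped it. A small sketch counting the members (the sample lines below just mimic the real output):

```shell
# Sample lines mimicking the blkid output above (UUIDs shortened);
# each RAID member disk reports TYPE="linux_raid_member".
sample='/dev/sdd: TYPE="linux_raid_member"
/dev/sdc: TYPE="linux_raid_member"
/dev/sdb: TYPE="linux_raid_member"'

# Count how many disks still identify as members of the array.
members=$(printf '%s\n' "$sample" | grep -c 'linux_raid_member')
echo "raid members found: $members"
```

    Three members on disk but only two in the running array is the classic signature of a disk that fell out of the set but is otherwise still intact.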


    Code
    fdisk -l | grep "Disk "                                                                                                                                                                          
    Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors                                                                                                                                                    
    Disk identifier: 0x3c52c98d                                                                                                                                                                                        
    Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors                                                                                                                                                    
    Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors                                                                                                                                                    
    Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors                                                                                                                                                    
    Disk /dev/md127: 5.5 TiB, 6000916561920 bytes, 11720540160 sectors



    Code
    mdadm --detail --scan --verbose                                                                                                                                                                  
    ARRAY /dev/md127 level=raid5 num-devices=3 metadata=1.2 name=AlienNAS:AlienR5 UUID=a635e782:9138367c:1d149efe:18c55a0c                                                                                             
       devices=/dev/sdb,/dev/sdc
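    The scan output above only lists sdb and sdc under devices, which is why the array runs degraded. On a live system, `cat /proc/mdstat` shows this directly; a minimal sketch that detects the degraded marker (the underscore in the `[UU_]` field) from a sample status line:

```shell
# Sample mimicking /proc/mdstat for this array: "[3/2] [UU_]" means
# only 2 of 3 member slots are active (the trailing "_" is the missing disk).
sample='md127 : active raid5 sdc[1] sdb[0]
      5860270080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]'

# An underscore inside the [U...] field marks a missing/failed member.
if printf '%s\n' "$sample" | grep -q '\[U*_\{1,\}U*\]'; then
    state=degraded
else
    state=healthy
fi
echo "md127 state: $state"
```

    On the real machine, `mdadm --detail /dev/md127` would also report "State : clean, degraded" and show which slot is marked removed.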


    Any idea what the problem is and what I should do?


    Thanks to anyone who can help me.

  • Can you maybe post the output of HDD Sentinel or smartctl for sdd? There is probably some problem with it.

    Is this what you mean?
    I got this information from the S.M.A.R.T. tab in the OMV GUI:


    sdd smartctl.txt


    (The output had too many characters; I couldn't send the message with it in a code block.)


    If you mean getting more information some other way, could you tell me how to?
    Sorry, I'm not really good with Linux and command lines; I generally follow guides to get what I want done :)
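    In case it helps: assuming the smartmontools package is installed, these are the smartctl invocations usually used from the command line (printed here rather than executed, since they need access to the real disk):

```shell
disk=/dev/sdd

# -H: overall health verdict, -a: full attribute dump,
# -t short: start a short self-test on the drive.
for flags in '-H' '-a' '-t short'; do
    echo "smartctl $flags $disk"
done
```

    Run as root; on real hardware the interesting attributes are Reallocated_Sector_Ct, Current_Pending_Sector and Offline_Uncorrectable.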

  • OK, it seems it was this simple.


    Code
    mdadm --add /dev/md127 /dev/sdd                                                                                                                                                                  
    mdadm: re-added /dev/sdd


    Seems to be working. I will confirm once the recovery process is complete.
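    For anyone following along: while the re-added disk rebuilds, `cat /proc/mdstat` shows a recovery line; a small sketch extracting the progress percentage from a sample line (values invented for illustration):

```shell
# Sample recovery line in the format /proc/mdstat prints during a rebuild;
# the numbers here are made up for illustration.
sample='[=>...................]  recovery =  7.9% (231520256/2930135040) finish=312.5min speed=143800K/sec'

# Pull out the first percentage figure, e.g. "7.9%".
pct=$(printf '%s\n' "$sample" | grep -o '[0-9][0-9.]*%' | head -n 1)
echo "recovery at $pct"
```

    `watch cat /proc/mdstat` is a convenient way to keep an eye on it live; the array stays usable during the rebuild, just slower.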


    Thanks for your help! ^^
    (I saw those threads with this answer, but I preferred to have someone confirm I could use it myself before destroying everything... :huh: )
