RAID 5 impossible to recreate

  • Hi, I have a RAID 5 with 4 disks that has run smoothly for 5 years. A few days ago one of the disks failed. I removed the damaged disk, put in a new one, and started to rebuild the RAID (mdadm --add /dev/md127 /dev/sdb), but it failed at about 20%.


    Now I get the following message:


    mdadm --add /dev/md127 /dev/sdb
    mdadm: Cannot get array info for /dev/md127


    I've tried to reassemble the RAID:


    mdadm --assemble --run --force /dev/md127 /dev/sd[bcde]
    mdadm: Cannot assemble mbr metadata on /dev/sdb


    I tried many things, like deleting the mdadm configuration file, but without success.


    I have partitioned the new disk but have not created a filesystem on it:


    gdisk /dev/sdb


    Here is the relevant information:


    1. cat /proc/mdstat


    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdc[1](S) sdd[2](S) sde[3](S)
    8790796680 blocks super 1.2


    unused devices: <none>


    2. blkid


    /dev/sda1: UUID="5978d112-caf8-4756-9b80-714132f07bad" TYPE="ext4" PARTUUID="6efe7dd4-01"
    /dev/sda5: UUID="6d15bc03-d534-4e44-ab94-36eb3533791d" TYPE="swap" PARTUUID="6efe7dd4-05"
    /dev/sdc: UUID="04366c8b-6451-f12b-e9e0-6d4d88ab6493" UUID_SUB="9be5592b-cd79-9607-d85c-94f07e57391c" LABEL="NAS:NAS" TYPE="linux_raid_member"
    /dev/sdd: UUID="04366c8b-6451-f12b-e9e0-6d4d88ab6493" UUID_SUB="c8f4f160-c790-98f8-8608-5797e73b2c3c" LABEL="NAS:NAS" TYPE="linux_raid_member"
    /dev/sde: UUID="04366c8b-6451-f12b-e9e0-6d4d88ab6493" UUID_SUB="588fd1a3-cc0e-f274-8f1c-554bb8a5bde7" LABEL="NAS:NAS" TYPE="linux_raid_member"
    /dev/sdb1: PARTLABEL="Linux filesystem" PARTUUID="3e2cfb1b-82df-482e-bd55-a356582d4667"


    3. fdisk -l | grep "Disk "


    Disk /dev/sda: 29.8 GiB, 32017047552 bytes, 62533296 sectors
    Disk identifier: 0x6efe7dd4
    Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk identifier: 2EEB399B-3706-4153-A4E5-58A73B886706
    Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors


    4. cat /etc/mdadm/mdadm.conf


    # mdadm.conf


    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md/NAS metadata=1.2 name=NAS:NAS UUID=04366c8b:6451f12b:e9e06d4d:88ab6493


    # instruct the monitoring daemon where to send mail alerts
    MAILADDR xxxxx@gmail.com
    MAILFROM root


    5. mdadm --detail --scan --verbose


    INACTIVE-ARRAY /dev/md127 num-devices=3 metadata=1.2 name=NAS:NAS UUID=04366c8b:6451f12b:e9e06d4d:88ab6493
    devices=/dev/sdc,/dev/sdd,/dev/sde


    6. The array uses four 3 TB WD Red NAS disks.


    7. The RAID stopped working when bad sectors were detected on one of the disks.



    Thank you in advance for your help.

  • For two drive failures it is best to clone the failed drives to new drives, then install them back into the RAID and recover.


    Generally the bad block list gets full, or some other failure sets the drive to read-only mode.
    The data is still there; the drive just will not take more data, so block-clone it to a good drive.
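
    A rough sketch of that block clone with GNU ddrescue (the device names here are placeholders, not the ones from this thread; double-check which disk is the failing source and which is the blank target before running anything):


    # first pass: copy everything that still reads cleanly, skip bad areas, keep a map file
    ddrescue -f -n /dev/sdX /dev/sdY rescue.map
    # second pass: go back and retry the bad areas a few times using the same map file
    ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map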


    If only a few sectors are bad, it's unlikely that both drives have the same failed sectors, so a complete recovery is possible.


    Chances are that the other two drives are near end of life, so buy more spares. ;)
    If you care about the data, swap out the soon-to-die drives one at a time: do one, then when it finishes do the other.
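
    Once the array is healthy again, one way to do that swap without degrading the array is mdadm's replace function; a sketch only, with placeholder device names:


    # add the new drive to the array as a spare
    mdadm /dev/md127 --add /dev/sdY
    # rebuild onto the spare and drop the old drive once the copy finishes
    mdadm /dev/md127 --replace /dev/sdX --with /dev/sdY
    # watch progress
    cat /proc/mdstat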

    • Official post

    I received the following error when posting:

    Read problem #4 - https://forum.openmediavault.o…tions-to-common-problems/

  • I had my RAID 5 array disappear recently after replacing a faulty drive. My remaining issue is growing the array: I have 18 TB of disk now but only a little over 7 TB of usable space, i.e. the same size that the 5 original 2 TB drives provided (I created a separate thread to get help with that).


    Are you certain you used the proper command to reassemble the array? I believe I had to boot Linux (i.e., OMV) in recovery mode so the drives were not mounted, or unmount or stop them if they were busy (stopping a busy or inactive array is sketched just after the command below), and then I assembled the 5-disk array with the following command:


    sudo mdadm --assemble /dev/md127 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde --verbose --force
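
    For the "stop them if busy" part: if md has already grabbed the member disks and the array is sitting there inactive, it has to be stopped before the assemble will succeed. Roughly (md127 as used elsewhere in this thread):


    # release the member disks from the inactive array, then run the assemble command above
    sudo mdadm --stop /dev/md127
    # afterwards, check the result
    cat /proc/mdstat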


    Then my array magically reappeared. Commands I used to expand it to take advantage of the larger disks I installed proved unsuccessful, however. X(
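
    For reference, the grow step that is usually suggested once every member has been swapped for a larger disk looks roughly like this; a sketch only, assuming the filesystem sits directly on the md device and is ext4 (adjust to the actual layout):


    # let md use the full capacity of the (now larger) member disks
    sudo mdadm --grow /dev/md127 --size=max
    # wait for the resync to finish before touching the filesystem
    cat /proc/mdstat
    # then grow the filesystem on top of the array (ext4 example)
    sudo resize2fs /dev/md127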


    Just a thought...

    • Official post

    The way to remove and add a drive using OMV is within OMV's GUI; the command line should be a last resort.


    OMV's RAID management can remove a drive from an array; after that, remove the drive from the machine (Remove on the menu).
    Add the new drive, wipe it, and add it to the array; in some cases the drive may have to be formatted first (Recover on the menu).


    If a RAID has 'disappeared' from the GUI, then it will be necessary to drop to the command line and run the commands found here; these will give relevant information on the state of the array.
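
    Those commands are essentially the ones already shown in the first post:


    cat /proc/mdstat
    blkid
    fdisk -l | grep "Disk "
    cat /etc/mdadm/mdadm.conf
    mdadm --detail --scan --verbose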
