RAID 5 Lost, Not sure what to do.

  • Hey everyone. I'm not a Linux wiz and I don't know a lot of commands, but I get the basics. I'm running OMV 3.0.94 with 5 physical drives: one 160 GB boot drive and four 3 TB WD drives set up in RAID 5. At some point my RAID array just stopped working; I couldn't tell you when, since I don't access this system regularly. It's used as a backup of a backup source. All the drives show up in the BIOS, in the web GUI, and in command output. Following the post "Degraded or missing raid array questions", here is the requested info:


    cat /proc/mdstat
    Personalities :
    md127 : inactive sdc[3](S) sde[2](S) sdd[1](S) sdb[0](S)
    9552 blocks super external:imsm


    unused devices: <none>


    blkid
    /dev/sda1: UUID="1c5113d5-72e5-4ae1-a712-7b52fe8ecf6a" TYPE="ext4" PARTUUID="2e328e8d-01"
    /dev/sda5: UUID="81d88f38-9df7-4186-a564-e4d1f7cecc52" TYPE="swap" PARTUUID="2e328e8d-05"
    /dev/sde: TYPE="isw_raid_member"
    /dev/sdd: TYPE="isw_raid_member"
    /dev/sdb: TYPE="isw_raid_member"
    /dev/sdc: TYPE="isw_raid_member"


    fdisk -l | grep "Disk "
    Disk /dev/sda: 149.1 GiB, 160041885696 bytes, 312581808 sectors
    Disk identifier: 0x2e328e8d
    Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
    Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors


    cat /etc/mdadm/mdadm.conf


    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md127 metadata=imsm UUID=946a3419:9d26306e:22bbc12b:931fad1c
    ARRAY metadata=imsm UUID=946a3419:9d26306e:22bbc12b:931fad1c
    ARRAY /dev/md/Volume0 container=946a3419:9d26306e:22bbc12b:931fad1c member=0 UUID=035dc25d:c21e7104:fba7638d:77508a55


    mdadm --detail --scan --verbose
    ARRAY /dev/md127 level=container num-devices=4 metadata=imsm UUID=946a3419:9d26306e:22bbc12b:931fad1c
    devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
    root@openmediavault:~#


    I'm not really sure where to go from here since I don't know what to search for. I've looked into rebuilding the RAID array, but I'm not sure how to proceed. Any help or guidance would be appreciated. I would prefer to save the array, if possible.

    • Official Post

    Based on your post, it didn't assemble correctly.


    mdadm --stop /dev/md127
    mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcde]


    If it is assembling (look at cat /proc/mdstat), then:
    update-initramfs -u


    Then mount it (or reboot)
    mount -a
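
    If you want to double-check the result before rebooting, the following read-only commands (the device name is taken from your own output) show whether the array actually came up and what mdadm sees; this is just a sanity-check sketch, nothing here changes the disks:

    cat /proc/mdstat
    mdadm --detail /dev/md127
    mdadm --detail --scan --verbose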

    omv 7.0.5-1 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.11 | compose 7.1.3 | k8s 7.1.0-3 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github - changelogs


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • Here are the results of your commands.

    mdadm --stop /dev/md127
    mdadm: stopped /dev/md127


    mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcde]
    mdadm: looking for devices for /dev/md127
    mdadm: /dev/sdb is identified as a member of /dev/md127, slot -1.
    mdadm: /dev/sdc is identified as a member of /dev/md127, slot -1.
    mdadm: /dev/sdd is identified as a member of /dev/md127, slot -1.
    mdadm: /dev/sde is identified as a member of /dev/md127, slot -1.
    mdadm: Marking array /dev/md127 as 'clean'
    mdadm: added /dev/sdc to /dev/md127 as -1
    mdadm: added /dev/sdd to /dev/md127 as -1
    mdadm: added /dev/sde to /dev/md127 as -1
    mdadm: added /dev/sdb to /dev/md127 as -1
    mdadm: Container /dev/md127 has been assembled with 4 drives


    update-initramfs -u
    update-initramfs: Generating /boot/initrd.img-4.9.0-0.bpo.4-amd64

    mount -a

    • Official Post

    You didn't post the output of cat /proc/mdstat, but since you moved on to the update-initramfs command I assume it worked. You should be able to access your data then.


  • Sorry, I didn't know that was the next step. Here is the output:


    cat /proc/mdstat
    Personalities :
    md127 : inactive sdc[3](S) sde[2](S) sdb[1](S) sdd[0](S)
    9552 blocks super external:imsm


    unused devices: <none>


    I'm trying to follow the steps exactly; I don't want to mess anything up by doing things out of order. I'm not very good with this command line stuff.

    • Official Post

    Now I see the problem. You created the array using the shitty onboard motherboard RAID (the external:imsm in your output tells me that). I didn't notice that in the first post. I have no idea how to fix it safely. I assume you set up the array in your BIOS the first time; you might have to look in there again, but I won't be of any help there.
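
    If you want to confirm what the drives are carrying, examining one member and the platform is harmless (read-only); something along these lines should show the Intel Matrix (IMSM) metadata on the disk and what the board's controller claims to support:

    mdadm --examine /dev/sdb
    mdadm --detail-platform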


  • No, I've never used the onboard RAID controller. This onboard controller only has RAID 0, 1, and 10. I had it set up using RAID 5 and the XFS file system inside OMV. Whether the SATA controller is set to RAID or AHCI, it's the same result.

  • I did some more searching and ran this:

    sudo mdadm --manage /dev/md127 --run

    This fixed it. Hard to believe such a simple command was what had the system down. I was able to mount it and it's showing up as normal. I just have to create my shares again. Thanks for your time.
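
    From what I read, --run just starts an array that has already been assembled but not activated, so it shouldn't change anything on the disks. To double-check that it really is up, something like this should do:

    cat /proc/mdstat
    mdadm --detail --scan --verbose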

    • Official Post

    but not sure what to do in order to automate it.

    Check in /etc/default/mdadm for the following:


    # AUTOSTART:
    # should mdadm start arrays listed in /etc/mdadm/mdadm.conf automatically
    # during boot?
    AUTOSTART=true
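
    A quick way to confirm the current value, and to make sure a change is actually picked up at boot, might be something like:

    grep AUTOSTART /etc/default/mdadm
    # only needed if you changed the file; rebuilds the initramfs so the new value is used at boot
    update-initramfs -u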

    • Official Post

    No, I've never used the onboard RAID controller.

    Your array is using it, though. I don't have a board with that controller to figure out why. I did run into this a couple of years ago, but I'm not sure if it is helpful - Crash of System Disc - Fresh 3.x Installation - Raid not accessible


    If I had to guess, it would be interesting to wipe the disks and recreate the array while the BIOS is set to AHCI. I think mdadm might be trying to do something smart-but-dumb and use IMSM when RAID is enabled in the BIOS.
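
    If you ever do go that route (only after the data is copied elsewhere, since it destroys the array), clearing the old IMSM/mdadm signatures from each member would look roughly like this:

    mdadm --stop /dev/md127
    # repeat for every member disk; both commands erase RAID metadata
    mdadm --zero-superblock /dev/sdb
    wipefs -a /dev/sdb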


  • Here are the base specs for the system:
    Motherboard: ASUS P86-M
    BIOS Version: 6702 x64
    CPU: Intel Xeon E3122
    RAM: 8GB ECC
    SATA chipset: C204 or LSI (set via physical jumper)


    A little history of the system: the RAM and CPU were pulled from a decommissioned Dell PowerEdge server that had some issues. I was able to get this ASUS board pretty cheap, and its mATX form factor made it easier to find a compatible compact rack chassis. The BIOS doesn't offer a RAID configuration utility without a physical jumper setting on the motherboard. I currently have it set to use the Intel SATA controller rather than the onboard LSI controller. The Intel controller is managed by software in Windows (which obviously isn't running here). It does support RAID 5; I was mistaken about that earlier. The LSI controller does not support RAID 5. I've had onboard RAID controllers fail, so I don't like using them. The BIOS can be set (when using the Intel controller) to DISABLED, IDE, AHCI (current), or RAID.


    As far as OMV seeing it as a RAID controller, maybe; I don't know. I do know that, at this point, the RAID 5 configuration is working, and it seems to be a software issue as to why it's not starting correctly. After everything, I ran sudo mdadm --manage /dev/md127 --run, the array showed up in the web GUI, I mounted it, and all the data was there as if nothing had happened. I had reinstalled OMV, so I had to recreate my shares and users. I tried adding the AUTOSTART setting to mdadm, but I'm not having any luck after reboots. I would prefer not to wipe the drives. I can switch the BIOS to RAID and it seems to boot the same in OMV. I don't have a lot of experience with Linux, so I'm not really familiar with the commands or what some of the processes are. I only get to learn when there's a problem like this. Other than that, it sits in a rack with only power and ethernet.

    • Official Post

    I guess I didn't explain myself well. Your array isn't using the LSI controller. It is using the Intel controller (IMSM), which is a software controller, not hardware like the LSI. I pointed to the other thread because that user had a similar problem and we came up with something that worked. No need to change the BIOS since it is in AHCI mode now.


    I was just suggesting wiping the drives so that you wouldn't be using IMSM. If your board fails and you move to another board without IMSM (maybe even a different model), the array wouldn't work at all.


    If you try the command from Crash of System Disc - Fresh 3.x Installation - Raid not accessible, maybe it will start at boot like the other user's system.
    mdadm --incremental --verbose --metadata imsm /dev/md127
    I would do omv-mkconf mdadm after too.
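
    The idea behind the incremental call is to let mdadm start the data volume inside the IMSM container, and omv-mkconf mdadm regenerates the config so it should come up at boot. To confirm that both the container and the volume are present afterwards and that the config picked them up, something like this (all read-only) should do:

    cat /proc/mdstat
    mdadm --detail --scan --verbose
    cat /etc/mdadm/mdadm.conf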

