OMV RAID lost after a power hit

  • As you can see below, the status of my RAID is "Missing" and I am unable to do anything with it; everything in the web UI is greyed out. All of the drives still show as good in their SMART tests.

  • root@openmediavault:/proc# cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md0 : inactive sde[2] sdd[1] sdc[3] sdb[0]

    7813529952 blocks super 1.2


    unused devices: <none>

    root@openmediavault:/proc#

  • # Degraded or missing raid array questions

    # If you have a degraded or missing raid array, please post the following info (in code boxes) with your question.

    #

    # Login as root locally or via ssh - Windows users can use putty:

    #

    # cat /proc/mdstat

    # blkid

    # fdisk -l | grep "Disk "

    # cat /etc/mdadm/mdadm.conf

    # mdadm --detail --scan --verbose

    # Post type of drives and quantity being used as well.

    # Post what happened for the array to stop working? Reboot? Power loss?
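
    If it helps, all of those outputs can be captured in one pass and pasted from a single file; a minimal sketch assuming a root shell (the output path is arbitrary):

    { cat /proc/mdstat; echo; blkid; echo; fdisk -l | grep "Disk "; echo; cat /etc/mdadm/mdadm.conf; echo; mdadm --detail --scan --verbose; } > /root/raid-info.txt 2>&1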


    See answers below

    -----------------------------------------------------------------------------------------------------------------------------------------------------------


    root@openmediavault:~# cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md0 : inactive sde[2] sdd[1] sdc[3] sdb[0]

    7813529952 blocks super 1.2


    unused devices: <none>

    -----------------------------------------------------------------------------------------------------------------------------------------------------------

    root@openmediavault:~# blkid

    /dev/sda1: UUID="ed81636b-f347-4e63-9163-1d946ab96b1a" TYPE="ext4" PARTUUID="749029e3-01"

    /dev/sda5: UUID="02919ac6-d8c4-446f-a2a3-e7db440fb77a" TYPE="swap" PARTUUID="749029e3-05"

    /dev/sdb: UUID="990f19c7-dc6e-fc9e-944b-48e2efa66b33" UUID_SUB="b491f7f1-1fc0-74c2-6097-e4a88af469fe" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdc: UUID="990f19c7-dc6e-fc9e-944b-48e2efa66b33" UUID_SUB="94395aa4-4643-daae-442e-0ec46456ccd2" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sdd: UUID="990f19c7-dc6e-fc9e-944b-48e2efa66b33" UUID_SUB="636e2d48-347c-7a77-a426-cd77051cd0ff" LABEL="openmediavault:0" TYPE="linux_raid_member"

    /dev/sde: UUID="990f19c7-dc6e-fc9e-944b-48e2efa66b33" UUID_SUB="96e43a1c-d340-db25-4e66-5253ee435822" LABEL="openmediavault:0" TYPE="linux_raid_member"


    -----------------------------------------------------------------------------------------------------------------------------------------------------------


    root@openmediavault:~# fdisk -l | grep "Disk "

    Disk /dev/sda: 111.8 GiB, 120034123776 bytes, 234441648 sectors

    Disk model: PNY CS900 120GB

    Disk identifier: 0x749029e3

    Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors

    Disk model: WDC WD20EFRX-68E

    Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors

    Disk model: WDC WD20EFRX-68E

    Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors

    Disk model: WDC WD20EFRX-68E

    Disk /dev/sde: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors

    Disk model: WDC WD20EARX-00P


    -----------------------------------------------------------------------------------------------------------------------------------------------------------


    root@openmediavault:~# cat /etc/mdadm/mdadm.conf

    # This file is auto-generated by openmediavault (https://www.openmediavault.org)

    # WARNING: Do not edit this file, your changes will get lost.


    # mdadm.conf

    #

    # Please refer to mdadm.conf(5) for information about this file.

    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.

    # alternatively, specify devices to scan, using wildcards if desired.

    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.

    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is

    # used if no RAID devices are configured.

    DEVICE partitions


    # auto-create devices with Debian standard permissions

    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system

    HOMEHOST <system>


    # definitions of existing MD arrays

    ARRAY /dev/md0 metadata=1.2 name=openmediavault:0 UUID=990f19c7:dc6efc9e:944b48e2:efa66b33

    -----------------------------------------------------------------------------------------------------------------------------------------------------------

    root@openmediavault:~# mdadm --detail --scan --verbose

    INACTIVE-ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 spares=1 name=openmediavault:0 UUID=990f19c7:dc6efc9e:944b48e2:efa66b33

    devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
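
    Worth noting: the UUID in that INACTIVE-ARRAY line (and in the ARRAY line of mdadm.conf above) is the same value blkid reported for every member, 990f19c7-dc6e-fc9e-944b-48e2efa66b33, just grouped with colons, so the saved config still matches the disks. If in doubt, it can also be read straight off a member's superblock; a sketch with plain mdadm:

    mdadm --examine /dev/sdb | grep 'Array UUID'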

    -----------------------------------------------------------------------------------------------------------------------------------------------------------

    Post type of drives and quantity being used as well.

    There are 3 Western Digital Red 2 TB drives

    and 1 WD Green 2 TB drive.

    -----------------------------------------------------------------------------------------------------------------------------------------------------------


    Post what happened for the array to stop working? Power loss / a power hit that cycled up and down.

  • Try stopping and reassembling:


    mdadm --stop /dev/md0


    mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcde]


    no guarantee, but that usually works
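
    If you want a sanity check first, comparing the members' event counters shows how far out of step the dropped disk is; this only reads the superblocks and changes nothing (a sketch, plain mdadm):

    mdadm --examine /dev/sd[bcde] | grep -E 'Events|Device Role|Array State'

    If three of the four report the same (higher) event count, the forced assemble should start the array from those three and pull the stale one back in as a rebuild.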


  • OK, looks like it worked:

    mdadm: stopped /dev/md0

    root@openmediavault:~# sudo mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcde]

    mdadm: looking for devices for /dev/md0

    mdadm: /dev/sdb is identified as a member of /dev/md0, slot 0.

    mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.

    mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.

    mdadm: /dev/sde is identified as a member of /dev/md0, slot 3.

    mdadm: added /dev/sdc to /dev/md0 as 1

    mdadm: added /dev/sdd to /dev/md0 as 2

    mdadm: added /dev/sde to /dev/md0 as 3

    mdadm: added /dev/sdb to /dev/md0 as 0

    mdadm: /dev/md0 has been started with 3 drives (out of 4) and 1 rebuilding.

    Yep it worked ......



  • Looks like 3 drives are online and one is rebuilding (not sure which one), and I am not sure how long this will take. Is there a command to show the status of the rebuild?

    SMART still says all my drives are good. I do notice that the BackupDrive is now online, but not all of the data is available yet. I would like to confirm that the rebuild is done before I start trying to use this again, and confirm that all the backup data is back and available. (Yes, on the root thing, I noticed.)
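
    For reference, the rebuild can be followed live with either of these (standard md tools, nothing OMV-specific):

    watch -n 30 cat /proc/mdstat
    mdadm --detail /dev/md0 | grep -E 'State :|Rebuild Status'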

  • root@openmediavault:~# cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md0 : active (auto-read-only) raid5 sdb[0] sde[3] sdd[2] sdc[1]

    5860147200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]

    bitmap: 8/15 pages [32KB], 65536KB chunk


    So reading this, I am assuming it shows that only 3 of the 4 drives are there?


    If that is so and the rebuild is done, then that drive does not look like it came back.

  • Well, it's not showing as rebuilding; in fact it's showing the RAID as auto-read-only. If that's the case, mdadm --readwrite /dev/md0 should correct that.
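
    For what it's worth, md holds the rebuild off while the array sits in auto-read-only; once it is switched to read-write the recovery line should appear in /proc/mdstat. Roughly:

    mdadm --readwrite /dev/md0
    cat /proc/mdstat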


  • root@openmediavault:~# cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]

    md0 : active raid5 sdb[0] sde[3] sdd[2] sdc[1]

    5860147200 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]

    [>....................] recovery = 0.4% (8384532/1953382400) finish=894.0min speed=36257K/sec

    bitmap: 8/15 pages [32KB], 65536KB chunk


    unused devices: <none>

    Ahhh, there is the status. OK, so it is rebuilding and it is only 0.4% done, with an estimated 894 minutes (roughly 15 hours) to go. I am going to leave this alone till tomorrow...

    Thanks again
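
    Once the recovery line is gone from /proc/mdstat, a quick way to confirm the array is fully clean again (and optionally start a consistency check) would be something like:

    cat /proc/mdstat                              # should show [4/4] [UUUU]
    mdadm --detail /dev/md0 | grep 'State :'      # should report clean
    echo check > /sys/block/md0/md/sync_action    # optional scrub; progress shows up in /proc/mdstat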

  • FYI:

    The drive finally rebuilt and everything is back up; all my files and my containers are there and up and running.

    Thanks for the help!!!!

  • Yeah, working on it. But the problem turned out to be that I had two SATA cards and the RAID was split across the two of them. I have an 8-port SATA card now and no more problems so far.
