RAID 5 not working after reinstallation of OMV

  • Hi all,

    a few days ago my OMV web interface stopped working and access to our Samba share was very slow, so I decided to reinstall OMV without the RAID devices and recover them afterwards. Somehow it didn't work out automatically as I thought it would. I have an SSD (/dev/sda) and 5 WD Reds as a RAID 5. I have the following info:

    cat /proc/mdstat

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md127 : inactive sdd[5](S) sdf[2](S) sdb[3](S) sdc[4](S) sde[1](S)
    14650677560 blocks super 1.2
    unused devices: <none>


    /dev/sda1: UUID="3614-7B2D" TYPE="vfat" PARTUUID="135fcc74-b1c5-47b4-bfd7-cb3472a10b42"
    /dev/sda2: UUID="b6ec9193-cb8b-4c69-970e-3f7342645c4b" TYPE="ext4" PARTUUID="783a0940-0c3b-4b73-a20b-5bcf0c5f4763"
    /dev/sda3: UUID="df415e4d-7cac-4b19-972d-21ca17232be6" TYPE="swap" PARTUUID="fa809040-657c-422c-b3cf-66484b0250ea"
    /dev/sdf: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" UUID_SUB="4b25ec98-84a7-e6a8-7425-c7d5740c1617" LABEL="nasgul:Daten" TYPE="linux_raid_member"
    /dev/sdb: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" UUID_SUB="e5227779-9ab2-d205-f323-f59b08fd9c9a" LABEL="nasgul:Daten" TYPE="linux_raid_member"
    /dev/sde: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" UUID_SUB="448791b2-ce27-7e38-381a-458e02938798" LABEL="nasgul:Daten" TYPE="linux_raid_member"
    /dev/sdc: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" UUID_SUB="882a53b5-2e57-61d1-212e-b1af30b4a118" LABEL="nasgul:Daten" TYPE="linux_raid_member"
    /dev/sdd: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" UUID_SUB="f504861c-dc1c-c4d7-d7df-a069bc119e00" LABEL="nasgul:Daten" TYPE="linux_raid_member"
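
    All five data drives report the same array UUID in the blkid output, which can be sanity-checked by grouping those lines. A minimal sketch over sample text (trimmed copies of the output above, with UUID_SUB dropped; not run against live disks):

```shell
# Sketch: confirm all linux_raid_member lines share one array UUID.
blkid_sample='/dev/sdf: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" TYPE="linux_raid_member"
/dev/sdb: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" TYPE="linux_raid_member"
/dev/sde: UUID="79686a5b-5c57-3d1d-195b-af3df4226866" TYPE="linux_raid_member"'
# Extract the UUID field and count distinct values; 1 means one array.
distinct=$(printf '%s\n' "$blkid_sample" \
  | sed -n 's/.*UUID="\([^"]*\)".*/\1/p' | sort -u | wc -l)
echo "distinct array UUIDs: $distinct"
```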

    fdisk -l | grep "Disk "

    cat /etc/mdadm/mdadm.conf

    mdadm --detail --scan --verbose

    INACTIVE-ARRAY /dev/md127 num-devices=5 metadata=1.2 name=nasgul:Daten UUID=79686a5b:5c573d1d:195baf3d:f4226866

    Hopefully somebody can help me. I tried to assemble the raid again.

    mdadm --assemble --run /dev/md127

    /dev/md127 not identified in config file.

    Could this be the problem, and if so, what should the content of my config file be?

    Thank you for any help.


  • The raid is inactive. Whilst I commend you for using the reference in the raid section, the output makes it difficult to read. Can you use the </> (code) button or the spoiler option on the menu, so it will look like this:

    root@omv5vm:~# blkid
    /dev/sdb: UUID="2ee7f17d-4613-5925-ad86-61530dce6f84" UUID_SUB="91364123-f08a-462c-c3ff-2949f74f410c" LABEL="omv5vm:test" TYPE="linux_raid_member"
    /dev/sdc: UUID="2ee7f17d-4613-5925-ad86-61530dce6f84" UUID_SUB="c02d369c-e13e-3df4-f1fc-7c6547b6ac13" LABEL="omv5vm:test" TYPE="linux_raid_member"
    /dev/sda1: UUID="8030181d-b92d-4c8a-9014-bb66466da05d" TYPE="ext4" PARTUUID="90415d1a-01"
    /dev/sda5: UUID="d3d5158e-ff87-4e1c-908e-9366d5c2cd45" TYPE="swap" PARTUUID="90415d1a-05"
    /dev/md0: LABEL="vmraid" UUID="b35b714c-484a-4e81-a43b-0a8cf65fbf0b" TYPE="ext4"

    That formats the output and makes it easier to read, thanks. Also post the output of mdadm --detail /dev/md127, and say what version of OMV you're running.

  • Hi Geaves,

    thanks for the fast response. This is the output of mdadm --detail /dev/md127.

    It says Raid Level : raid0, which is definitely not the one I chose some years ago. Could this be a default setting, because the config is missing?

  • It says Raid Level : raid0, which is definitely not the one I chose some years ago. Could this be a default setting, because the config is missing?

    AFAIK the config is auto-generated from the information collected from the drives, even if it's a reinstall or a move to other hardware.

    The raid0 is the information it's collecting (simplest way of putting it) from the drives in the array. That's why I asked what version of OMV, as that will determine the relevant commands to run.

    I've just checked both of my mdadm conf files: OMV4 displays no arrays as I stopped using raid; my test VM for 5 shows the array, but neither of them shows this line:

    # This configuration was auto-generated on Wed, 25 Mar 2020 16:23:07 +0000 by mkconf after the definitions of existing MD arrays.
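
    For reference, the auto-generated section normally ends with ARRAY lines built from the drives' metadata. Based on the --detail --scan output earlier in the thread, the entry for this array would look roughly like this (a sketch of what gets regenerated once the array is active again, not something to hand-type blindly):

```
# /etc/mdadm/mdadm.conf (fragment) -- normally regenerated with:
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
ARRAY /dev/md127 metadata=1.2 name=nasgul:Daten UUID=79686a5b:5c573d1d:195baf3d:f4226866
```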

    The norm to bring the raid back up;

    mdadm --stop /dev/md127

    mdadm --assemble --force --verbose /dev/md127 /dev/sd[bcdef]

    Before you try the above, have you tried rebooting?

  • Hi again,

    sorry, I forgot the OMV version. It is and Kernel Linux 5.6.0-0.bpo.2-amd64. I did try your commands.

    It looks like it is running. Now, to actually get this as a shared folder again in OMV, two short follow-up questions just to be absolutely sure: in the web GUI under Filesystems I now find /dev/sda1, sda2, sda3, and /dev/md127. It says not mounted. So I have to mount it again and then create the shared folder again?

    Thanks a lot already. I didn't really sleep well the last couple of days, as the loss of the data would be almost a complete disaster. And of course I don't have a recent backup, due to moving and packing over the last couple of weeks, which were quite stressful.


  • So I have to mount it again and then make the shared folder again?

    Do not do anything until it has finished rebuilding!!

    The output from the assemble is not what I expected, assembled as clean, with 4 drives out of 5 and 1 spare. ?(

    I would suggest the following, once the array has rebuilt;

    Mount it, recreate the shares, back up the data you want to keep, destroy the raid, wipe all five drives and start again.

    A PIA, yes, but what you believe you have done and what your setup is telling you are two different things. It is easier to move hardware and get it to work than it is to upgrade the OS; I have seen some odd behaviour when users have moved from one version to another, particularly with raid configs.

    You mention in your first post that access to Samba was slow. This could be anything: networking, a drive failing, the m'board failing (i.e. a SATA port not functioning correctly), SATA cables; the list goes on. Access to Samba shares will slow if there is a problem with a drive, and it's further exacerbated where a raid setup is involved.

  • Is it cat /proc/mdstat

    :thumbup: it will display the status of the raid.
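
    While a rebuild is running, cat /proc/mdstat shows a recovery line with a percentage. A small sketch of pulling that figure out of such a line (the sample line below is illustrative, not taken from this machine):

```shell
# Sketch: extract the rebuild-progress percentage from an mdstat recovery line.
mdstat_line='[=>...................]  recovery =  7.5% (220000000/2930135512) finish=320.1min speed=140000K/sec'
progress=$(printf '%s\n' "$mdstat_line" | grep -o '[0-9][0-9.]*%')
echo "rebuild at $progress"
```

    On a live system, watch cat /proc/mdstat shows the same line updating as the recovery proceeds.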

    This -> mdadm: /dev/sdd is identified as a member of /dev/md127, slot -1 is confusing me; I've seen this before but as yet can't locate the thread. SATA ports are numbered from zero up to the number of ports available, and the -1 is the port that could be causing your problem. The way to locate it is to use the drive reference /dev/sdd, then from Storage -> Disks note that drive's serial number, which will help you find it in your machine.

  • Hi Geaves,

    thank you for all the help. As you suggested, first and foremost I will back up everything and then look into the drives of the raid system. I will also purchase a USB drive to do a daily backup of the raid system.

    I am just happy I didn't lose all my data.

