Recreate RAID10 after OS disk failure

    • OMV 1.0

    • Recreate RAID10 after OS disk failure

      Today I had to reboot my OMV because of login failures in the web GUI, but when it failed to come back up, hanging just after GRUB, I realized the OS disk was damaged.

      I had one OS hard disk (now crashed) and four drives in RAID 1+0 (paired as 2 TB sda + 1 TB sdb striped, mirrored against 2 TB sdc + 1 TB sdd) to obtain 3 TB mirrored.

      Now I have reinstalled the OS on a new hard disk, but I don't know how to detect/recreate the old RAID array.

      # blkid

      /dev/sda1: UUID="0c0bc765-b7aa-4532-98eb-d4cb83d21b0e" TYPE="ext4"
      /dev/sda5: UUID="f163d7c5-e228-4306-9f37-09bceb734ba1" TYPE="swap"
      /dev/sde1: UUID="c17401d3-1d95-42d5-acc4-e0e64cdf0927" TYPE="ext4"
      /dev/sde5: UUID="0eb849ea-8a2b-4f6d-8618-a9a38cddcc9b" TYPE="swap"

      # fdisk -l

      Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x0008c37b
      Device Boot Start End Blocks Id System
      /dev/sda1 * 2048 3890782207 1945390080 83 Linux
      /dev/sda2 3890784254 3907028991 8122369 5 Extended
      Partition 2 does not start on physical sector boundary.
      /dev/sda5 3890784256 3907028991 8122368 82 Linux swap / Solaris

      Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdb doesn't contain a valid partition table

      Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdc doesn't contain a valid partition table

      Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdd doesn't contain a valid partition table

      OMV boot disk:

      Disk /dev/sde: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000560d9
      Device Boot Start End Blocks Id System
      /dev/sde1 * 2048 960526335 480262144 83 Linux
      /dev/sde2 960528382 976771071 8121345 5 Extended
      /dev/sde5 960528384 976771071 8121344 82 Linux swap / Solaris

      So, that's all.. please help!
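
      Would something like this be the right way to check whether any RAID metadata survived on the four data disks? (Syntax guessed from the man page, so please correct me.)

      # check each data disk for an md superblock (disk letters as in the fdisk output above)
      mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd
      cat /proc/mdstat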
    • Hey, I was hoping for some ideas or suggestions...

      Well, the status is still the same: a clean install on a new OS hard disk (the 500 GB one), and I still don't know how to recreate the RAID.

      Physical disks are:

      /dev/sda ST 500 GB (boot disk)
      /dev/sdb WDC 1.82 TB (previously striped with sdc)
      /dev/sdc HDT 935 GB (previously striped with sdb)
      /dev/sdd ST 1.82 TB (previously striped with sde)
      /dev/sde ST 935 GB (previously striped with sdd)

      All of them (except the boot disk) are NAS-certified hard drives, and the OMV worked fine before the boot disk failure. And YES, I'll remember to back it up to avoid similar experiences 8)



      I think I could try to re-create the RAID 1+0 directly via the GUI, but when I go to Storage/RAID (which is empty) and try to add a RAID, it only offers me sdc/sdd/sde or sdb/sdd/sde (it changed after a reboot).

      While trying to figure this out, I discovered that Storage/Filesystems shows:

      /dev/sda1 ext4 n/c n/c n/c No No Online
      /dev/sdb1 ext4 1.78 TiB 1.69 TiB 1.06 GiB yes yes Online

      so it seems the boot filesystem gets mounted alternately on sdb1 or sdc1.

      Frankly, I don't know if this is normal...

      Maybe I could use mdadm to recreate the RAID, but I don't know the syntax (especially since mine is a RAID 10, which is not that common)...

      PLEASE HELP!!
    • The OS drive crashing shouldn't have caused any issues with the raid array. It should show up in the raid tab and the filesystem should show up in the Filesystems tab. What is the output of:

      cat /proc/mdstat
      blkid
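
      If those show nothing, you could also let mdadm scan for superblocks on its own (assuming mdadm is installed):

      # report any arrays whose metadata mdadm can find
      mdadm --examine --scan --verbose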
      But fdisk -l now shows that sda is the boot hard disk, the 500 GB one...

      fdisk -l

      Disk /dev/sda: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000560d9

      Device Boot Start End Blocks Id System
      /dev/sda1 * 2048 960526335 480262144 83 Linux
      /dev/sda2 960528382 976771071 8121345 5 Extended
      /dev/sda5 960528384 976771071 8121344 82 Linux swap / Solaris

      Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x0008c37b

      Device Boot Start End Blocks Id System
      /dev/sdb1 * 2048 3890782207 1945390080 83 Linux
      /dev/sdb2 3890784254 3907028991 8122369 5 Extended
      Partition 2 does not start on physical sector boundary.
      /dev/sdb5 3890784256 3907028991 8122368 82 Linux swap / Solaris

      Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdc doesn't contain a valid partition table

      Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
      255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sdd doesn't contain a valid partition table

      Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000

      Disk /dev/sde doesn't contain a valid partition table
    • Drive letters change sometimes. Post the output of: lsmod | grep raid
      If you see output from the above command, I would try starting the array with something like: mdadm --assemble /dev/md127 /dev/sd[bcde] --verbose --force
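
      If lsmod prints nothing, the module isn't loaded; a rough sketch of what I would do first (standard Debian paths, adjust to your system):

      # map stable disk names to the current sdX letters
      ls -l /dev/disk/by-id/
      # load the raid10 module by hand if it is missing
      modprobe raid10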
      That is why your raid isn't being detected. The raid isn't starting. What kind of system are you using?
    • It seems to have loaded fine. Check lsmod | grep raid to be sure. Wonder why the module isn't loading at boot?? You could try creating the raid array now that the module is loaded (assuming you are going to use raid 10).
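
      If it turns out the module really isn't loading at boot, one harmless thing to try on a Debian-based system like OMV is listing it in /etc/modules (just a guess at the cause):

      # make the raid10 module load automatically at boot
      echo raid10 >> /etc/modules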
    • Yep. Assembling should not risk data. If you use the create flag, it will.
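
      The least risky first attempt is usually letting mdadm find the superblocks itself instead of naming the devices, since it only starts arrays whose metadata it actually finds:

      # scan all devices for md metadata and start whatever assembles cleanly
      mdadm --assemble --scan --verbose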
    • You need to load the module for the raid type you have. You have different size drives so you probably aren't running raid10.
      I'm afraid I'm sure it is a 10: the pairs were striped and then mirrored, 2 TB + 1 TB mirrored against 2 TB + 1 TB, unless it silently swapped in a RAID 5 of its own accord...

      Could I try to assemble just one stripe, b+c or d+e, to obtain a degraded mirror?

      I don't know the syntax of mdadm: how do you specify the RAID level, the correct order of disks to assemble...?

      the man page is not so helpful for a noob :S
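
      From skimming it, the general shape seems to be mdadm --assemble <array device> <member devices...>, with the RAID level read from the superblock rather than given on the command line. Is that right? Something like:

      # my guess at the syntax - is this correct?
      mdadm --assemble /dev/md0 /dev/sdb /dev/sdc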
    • That is actually three raid arrays if I understand right. Two raid 0 and one raid 1. So, you need to assemble the raid 0 arrays and then assemble the raid 1 array. You will need to load the raid1 and raid0 modules as well.
      modprobe raid1
      modprobe raid0
      You might be able to start a degraded array but I haven't tried that configuration.
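
      For what it's worth, a rough sketch of the order (the md numbers are guesses; adjust the disk letters to whatever mdadm --examine reports, and this assumes the stripes were built on the whole disks):

      # assemble the two stripes first
      mdadm --assemble /dev/md0 /dev/sdb /dev/sdc --verbose
      mdadm --assemble /dev/md1 /dev/sdd /dev/sde --verbose
      # then the mirror on top of the two stripe devices
      mdadm --assemble /dev/md127 /dev/md0 /dev/md1 --verbose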
    • cat /proc/mdstat

      Personalities : [raid10] [raid1] [raid0]
      unused devices: <none>


      OK, now I have 3 personalities, but if I try to assemble one of the two raid0 arrays:

      mdadm --assemble /dev/md127 /dev/sdb /dev/sdc --verbose --force

      I get:

      mdadm: looking for devices for /dev/md127
      mdadm: Cannot assemble mbr metadata on /dev/sdb
      mdadm: /dev/sdb has no superblock - assembly aborted


      and I get the same kind of result with sdd and sde.

      Maybe I'm getting the syntax wrong?
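
      Or should I be pointing mdadm at partitions instead of whole disks? fdisk says sdb has a partition table, so maybe something like this would show where the superblocks actually live:

      # examine both the whole disks and the partition fdisk reports
      mdadm --examine /dev/sdb /dev/sdb1 /dev/sdc /dev/sdd /dev/sde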