After hard update the RAID 1 HDD shows as empty


    • After hard update the RAID 1 HDD shows as empty

      Hello everyone

      I ran OMV 2 for a few years as a NAS with RAID 1. The system somehow got corrupted and would no longer update. All solutions and suggestions I found worked for others but not for me, so I decided to do a "hard" update from OMV 2 to OMV 5 to get a well-running system again.

      All RAID drives were unplugged, OMV 5 was installed, and the RAID drives were reconnected. Now the system shows these drives as empty, but they aren't. Is there any way to get my data back from these drives and still use them in RAID 1 mode?

      Thank you in advance.

      P.S.: Answers in German are also welcome.
    • Can you please post the output of blkid?

      Some users have reported the same in the past. The problem seems to be that the mdadm kernel module cannot identify or read the metadata. I can't explain it in more detail because I don't know the exact reason. The workaround was to reinstall OMV 2, back up the data, install the latest OMV, recreate the RAID, and finally copy the backup back.
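
      Roughly, that workaround would look like the sketch below (device names, labels and mount points are placeholders, and recreating the array wipes whatever is on the member disks, so the backup must come first):

      # 1. back up the data from the still-readable array, e.g. to an external disk
      rsync -aHAX /path/to/old/raid1a/ /mnt/backup/
      # 2. after installing the latest OMV, recreate the mirror from scratch
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
      mkfs.ext4 -L Raid1a /dev/md0
      # 3. mount the new filesystem and copy the backup back
      mkdir -p /mnt/raid1a
      mount /dev/md0 /mnt/raid1a
      rsync -aHAX /mnt/backup/ /mnt/raid1a/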
    • votdev wrote:

      Can you please post the output of blkid?
      Yes I can:

      /dev/sdc: UUID="6f6bc262-bb6c-50ca-ca2f-70803ec12977" UUID_SUB="dcc1cf93-125b-2ffb-2077-26697f4c0ce6" LABEL="openmediavault:Raid1" TYPE="linux_raid_member"
      /dev/sdb: UUID="6f6bc262-bb6c-50ca-ca2f-70803ec12977" UUID_SUB="4caf4111-2673-2448-9971-008e8484100d" LABEL="openmediavault:Raid1" TYPE="linux_raid_member"
      /dev/sda1: UUID="9e2b17f3-95f5-4c9f-8060-f4c569626fd6" TYPE="ext4" PARTUUID="3a938295-01"
      /dev/sda5: UUID="f290a704-b371-45da-990f-08eacf332bcc" TYPE="swap" PARTUUID="3a938295-05"
      /dev/md127: LABEL="Raid1a" UUID="86e958bb-8d60-4af4-a800-62f544a943c0" TYPE="ext4"

      It seems to me that the drives are recognized as RAID members, but won't work as such.
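
      For what it's worth, the member metadata and the state of the already-assembled array could be inspected with something like this (a sketch; md127 and the device names are taken from the blkid output above):

      # show the RAID superblock stored on each member disk
      mdadm --examine /dev/sdb /dev/sdc
      # show the state of the array the kernel assembled from them
      mdadm --detail /dev/md127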
    • Post the output of cat /proc/mdstat.
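
      That is (just the command; md127 is an assumption based on the blkid output above):

      # list all md arrays the kernel knows about and their current state
      cat /proc/mdstat
      # a line such as "md127 : active (auto-read-only) raid1 ..." would mean the
      # mirror was assembled but is only running read-only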
    • The filesystem should be there, but read-only. You should find it at /dev/disk/by-label/Raid1a.
      It should be listed on the Filesystems page.
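
      A minimal check could look like this (the mount point /mnt/raid1a is just an example):

      # confirm the filesystem is visible under its label
      ls -l /dev/disk/by-label/
      # mount it read-only somewhere temporary to verify the data is still there
      mkdir -p /mnt/raid1a
      mount -o ro /dev/disk/by-label/Raid1a /mnt/raid1a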
    • Tried everything, but nothing works.

      Going back to version 2.x failed because I could no longer find working Debian mirrors; the same with version 3.x.
      Recovering the data with foremost also failed.

      I have really run out of solutions. I'm also not sure whether to keep trusting openmediavault, because the running version 2.x got corrupted for no apparent reason.

      If anybody has an idea for something I haven't tried yet, please let me know.
    • Triebwerk wrote:

      I'm also not sure whether to keep trusting openmediavault, because the running version 2.x got corrupted for no apparent reason.
      This isn't an OMV problem. It is a Debian kernel issue and/or mdadm package issue. I would still be curious to see the output of wipefs -n /dev/sdX for each drive.
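
      For example (sdb and sdc taken from the blkid output above; the -n flag only reports signatures and does not erase anything):

      # print the filesystem / RAID signatures wipefs can see, without changing the disks
      wipefs -n /dev/sdb
      wipefs -n /dev/sdc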
    • OMV really does not do any magic when creating a RAID. You should be able to get it running on any Debian, Ubuntu, openSUSE, ...
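
      On a plain Debian (or other) live system, for example, assembling the mirror by hand would look roughly like this (a sketch only, not tested against this particular array):

      apt-get install mdadm
      # scan all disks for md superblocks and assemble every array that is found
      mdadm --assemble --scan
      # or name the members explicitly
      mdadm --assemble /dev/md127 /dev/sdb /dev/sdc
      # then check the result
      cat /proc/mdstat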
    • The Debian base was a big reason to choose OMV. All the Debian systems I have run have worked well, and I have never had "big" issues.

      Here is the output you asked for. The sda drive is the OMV system drive, and I changed nothing (!!) in the output. But why does the system show the same UUID for both drives?

      root@openmediavault:~# wipefs -n /dev/sdb
      offset type
      ----------------------------------------------------------------
      0x1000 linux_raid_member [raid]
      LABEL: openmediavault:Raid1
      UUID: 6f6bc262-bb6c-50ca-ca2f-70803ec12977

      root@openmediavault:~# wipefs -n /dev/sdc
      offset type
      ----------------------------------------------------------------
      0x1000 linux_raid_member [raid]
      LABEL: openmediavault:Raid1
      UUID: 6f6bc262-bb6c-50ca-ca2f-70803ec12977

      root@openmediavault:~#
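
      (For what it's worth, an identical UUID on both members is expected for an md mirror: wipefs and blkid report the array UUID that both disks share, while the per-disk identifier is the UUID_SUB value in the blkid output. Assuming 1.x metadata, something like the following would show both:)

      # "Array UUID" should match on the two members, "Device UUID" should differ
      mdadm --examine /dev/sdb | grep -i uuid
      mdadm --examine /dev/sdc | grep -i uuid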
    • geaves wrote:

      Going back to your post 7, running mdadm --readwrite /dev/md127 should have brought the array back up as active.
      And run update-initramfs -u afterwards to make sure the initramfs assembles it correctly.
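
      Put together, the sequence would be something like this (a sketch; it assumes the array really is just sitting in read-only mode):

      # switch the auto-read-only array back to normal read-write operation
      mdadm --readwrite /dev/md127
      # rebuild the initramfs so the array is assembled correctly on the next boot
      update-initramfs -u
      # then check the state again
      cat /proc/mdstat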