RAID1 disappears after reboot

    • OMV 3.x
    • RAID1 disappears after reboot

      Hi,
      I installed OMV3 from the CD, not on top of an existing Debian.
      My RAID was working well on CentOS,
      but I decided to back up all my data and start a new array with OMV3.
      I tried 3 times (twice through the web UI, once via the CLI); every time, after a reboot, mdstat detects nothing.

      The RAID is based on 2 identical Western Digital drives
      without any issues (SMART and GPT).

      It is correctly declared in my /etc/mdadm/mdadm.conf,

      but mdadm --detail --scan --verbose
      and ls /dev/md*
      both return nothing after a reboot.
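
      For anyone hitting the same symptom, a few read-only diagnostics can narrow it down (the member names /dev/sdc and /dev/sdd are assumptions taken from later posts in this thread; substitute your own disks):

```shell
# What does the kernel currently see? (nothing below the header = nothing assembled)
cat /proc/mdstat

# Do the member disks still carry an md superblock?
mdadm --examine /dev/sdc /dev/sdd

# Ask mdadm to assemble whatever it can find from on-disk superblocks:
mdadm --assemble --scan --verbose
```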

      I hope we will find something quickly that helps everyone ;)

      Regards!

      Jonathan
    • Yes, obviously I saw them,
      but none of them seems to have a solution,
      and I didn't want someone to tell me it's not the same case because I just have a mirror (RAID1), and/or that I'm hijacking a thread,
      so I opened a new one.

      I also found this thread on the Ubuntu forums (ubuntuforums.org/showthread.php?t=884556),
      where they lose the RAID because it's not declared in /etc/mdadm/mdadm.conf. That's not my case, nor the case of anyone who uses the OMV web UI.

      Anyway, if you find something substantial and potentially a solution, I'm willing to try it and follow up on it.

      Regards!
    • Post this info - Degraded or missing raid array questions

      I would seriously consider using rsync instead of a mirror as well. A mirror isn't a backup.
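
      To illustrate the rsync alternative (a sketch only; both paths are hypothetical and need adjusting to your shared folders):

```shell
# One-way copy of the data share to a USB disk.
# -a preserves permissions/ownership/timestamps; --delete mirrors deletions,
# so corrupted or deleted files are only recoverable until the next run.
rsync -a --delete /srv/dev-disk-by-label-data/ /srv/dev-disk-by-label-usbbackup/
```

      Unlike a live mirror, an rsync copy is not overwritten the instant a file is damaged, which is the point being made here.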
      omv 4.0.14 arrakis | 64 bit | 4.13 backports kernel | omvextrasorg 4.1.0
      omv-extras.org plugins source code and issue tracker - github.com/OpenMediaVault-Plugin-Developers

      Please don't PM for support... Too many PMs!
    • jodumont wrote:

      it's funny how people presume
      I don't presume anything. I just remind people, because LOTS of people think RAID = backup. If you know the difference, great. I will still keep reminding people when they are having RAID issues...

      jodumont wrote:

      I don't know what PM means
      Where did you see "PM"?
    • jodumont wrote:

      Sorry, I realized afterwards it was in your signature:

      Please don't PM for support... Too many PMs!
      PM means Private Message. I guess it is called a Conversation on this board. I prefer all questions to be posted in a thread.
    • This situation came back again.

      I do rsync to a USB disk,
      but I want fault tolerance too.

      Basically I have 3 RAID1 arrays:
      md0 for boot
      md1 for LVM
      md2 for data

      Only md2 disappears on boot,
      but it was also the only one I built through the OMV interface rather than during the debian-installer.

      # mdadm --detail /dev/md2

      /dev/md2:
      Version : 1.2
      Creation Time : Wed Dec 6 13:11:10 2017
      Raid Level : raid1
      Array Size : 2930135488 (2794.39 GiB 3000.46 GB)
      Used Dev Size : 2930135488 (2794.39 GiB 3000.46 GB)
      Raid Devices : 2
      Total Devices : 2
      Persistence : Superblock is persistent

      Intent Bitmap : Internal

      Update Time : Wed Dec 6 13:34:43 2017
      State : clean, resyncing
      Active Devices : 2
      Working Devices : 2
      Failed Devices : 0
      Spare Devices : 0

      Resync Status : 7% complete

      Name : ra:2 (local to host ra)
      UUID : 485973b2:2e0ec7d1:b256fb1a:1d7dca3d
      Events : 324

      Number Major Minor RaidDevice State
      0 8 48 0 active sync /dev/sdd
      1 8 32 1 active sync /dev/sdc

      # cat /etc/mdadm/mdadm.conf
      # mdadm.conf
      #
      # Please refer to mdadm.conf(5) for information about this file.
      #

      # by default, scan all partitions (/proc/partitions) for MD superblocks.
      # alternatively, specify devices to scan, using wildcards if desired.
      # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
      # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
      # used if no RAID devices are configured.
      DEVICE partitions

      # auto-create devices with Debian standard permissions
      CREATE owner=root group=disk mode=0660 auto=yes

      # automatically tag new arrays as belonging to the local system
      HOMEHOST <system>

      # definitions of existing MD arrays
      ARRAY /dev/md/0 metadata=1.2 name=ra:0 UUID=b341e185:33cce37b:7b27f804:6aece78e
      ARRAY /dev/md/1 metadata=1.2 name=ra:1 UUID=86c11a9d:8a3e5db6:e72397c5:89cefae9
      ARRAY /dev/md2 metadata=1.2 name=ra:2 UUID=485973b2:2e0ec7d1:b256fb1a:1d7dca3d

      Now, I might seem rude, and it's nice to have discussions about what a backup is, how to manage my data, and the doom of RAID5, but none of this brings a solution.
      So please, if you want to help me and potentially other users, propose solutions and/or commands to try.

      Thanks!

      Jonathan
    • Re,

      jodumont wrote:

      the funniest part is that in my console I have this message:
      W: mdadm: /etc/mdadm/mdadm.conf defines no arrays
      That's normal, because OMV uses pure superblock autodetection ... no need for static configuration ... normally.
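
      If the array still vanishes at boot despite autodetection, one thing worth trying (a hedged suggestion, I can't confirm it's the cause here): on Debian the initramfs embeds its own copy of mdadm.conf, so an array created after installation is sometimes not assembled at boot until that copy is refreshed:

```shell
# Make sure md2's ARRAY line is in the on-disk config (append only if absent!),
# then rebuild the initramfs so the boot-time mdadm sees it too.
mdadm --detail --scan | grep /dev/md2 >> /etc/mdadm/mdadm.conf
update-initramfs -u
```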

      jodumont wrote:

      This situation come back again
      Because one of the drives in array "md2" is causing problems ... just check the logs and the SMART data on both members AFTER the resync has finished:

      jodumont wrote:

      Update Time : Wed Dec 6 13:34:43 2017
      State : clean, resyncing
      [...] Resync Status : 7% complete
      You can check the ongoing resync with:
      cat /proc/mdstat
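
      For a live view instead of re-running it by hand (assuming watch is installed):

```shell
# Refresh the resync progress every 5 seconds; Ctrl-C to quit.
watch -n 5 cat /proc/mdstat
```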


      Btw. ... may I ask why you use this layout:

      jodumont wrote:

      Basicly I have 3 RAID1
      md0 for boot
      md1 for LVM
      md2 for data
      RAID1 only protects against a drive failure, but that will usually occur much later than any data corruption (silent or accidental).
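
      On that note, md can at least be asked to verify the mirror halves against each other from time to time (a sketch; run as root, with md2 named as above):

```shell
# Trigger a consistency check of md2; any mismatching sectors are
# counted in mismatch_cnt once the check finishes.
echo check > /sys/block/md2/md/sync_action
cat /sys/block/md2/md/mismatch_cnt
```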

      Sc0rp