RAID1 disappears after reboot

  • Hi,
    I installed OMV3 from the CD, not on top of Debian.
    My RAID was working fine on CentOS, but I decided to back up all my data and start a new array with OMV3.
    I have tried three times (twice through the web UI, once via the CLI); every time, after a reboot, mdstat detects nothing.


    The RAID is based on two identical Western Digital drives, with no issues reported (SMART and GPT).


    The array is properly declared in my /etc/mdadm/mdadm.conf,


    but mdadm --detail --scan --verbose
    and ls /dev/md*
    return nothing after a reboot.


    I hope we can find something quickly to help everyone ;)


    Regards!


    Jonathan

  • There are several similar threads in this forum. A quick search for 'mdadm' or "RAID missing" should turn up some relevant hits.
    E.g. this link could be helpful: RAID-5-Missing-need-help-for-rebuild

    OMV 3.0.100 (Gray style)

    ASRock Rack C2550D4I C0-stepping - 16GB ECC - 6x WD RED 3TB (ZFS 2x3 Striped RaidZ1) - Fractal Design Node 304 -

    3x WD80EMAZ Snapraid / MergerFS-pool via eSATA - 4-Bay ICYCube MB561U3S-4S with fan-mod

  • Yes, obviously I saw them, but none of them seems to have a solution, and I didn't want to be told it's not the same case because I only have a mirror (RAID1), or be accused of hijacking a thread, so I opened a new one.


    I also found this thread on the Ubuntu forum (https://ubuntuforums.org/showthread.php?t=884556). In that one they lost the RAID because it was not declared in /etc/mdadm/mdadm.conf, which is not my case, nor the case of anyone who creates the array through the OMV web UI; see the sketch below.
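
    For anyone who does hit that failure mode, the fix from that thread boils down to regenerating the declarations (a sketch only — review the appended lines before keeping them):

    # mdadm --detail --scan >> /etc/mdadm/mdadm.conf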


    Anyway, if you find something substantial that could be a solution, I'm willing to try it and follow up on it.


    Regards!

    • Official Post

    Post the info requested here - Degraded or missing raid array questions
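
    That thread asks for output along these lines (a sketch from memory — the exact list is in the linked post):

    # cat /proc/mdstat
    # blkid
    # fdisk -l | grep "Disk "
    # cat /etc/mdadm/mdadm.conf
    # mdadm --detail --scan --verbose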


    I would also seriously consider using rsync instead of a mirror. A mirror isn't a backup.
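
    For example (hypothetical source and destination paths — a mirror replicates mistakes instantly, while a scheduled rsync gives you a restorable copy):

    # rsync -av --delete /srv/dev-disk-by-label-data/ /srv/dev-disk-by-label-backup/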

    omv 7.0.4-2 sandworm | 64 bit | 6.5 proxmox kernel

    plugins :: omvextrasorg 7.0 | kvm 7.0.10 | compose 7.1.2 | k8s 7.0-6 | cputemp 7.0 | mergerfs 7.0.3


    omv-extras.org plugins source code and issue tracker - github


    Please try ctrl-shift-R and read this before posting a question.

    Please put your OMV system details in your signature.
    Please don't PM for support... Too many PMs!

  • but I decided to back up all my data

    At least you still have your data :) That is not to be taken for granted.
    I don't use mdadm, so regrettably I can't give you more helpful information in your case.


  • So I reinstalled OMV with my RAID already created, and everything is fine now.


    The funniest part is that in my console I get this message:
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays


    but it's working.
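
    In case it helps others, this is how I compared what the warning claims with what is actually running (the file defines no arrays, yet the kernel has assembled them):

    # grep ^ARRAY /etc/mdadm/mdadm.conf
    # mdadm --detail --scan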


    Thanks for your post.


    10/4

    • Official Post

    it's funny how people presume

    I don't presume anything. I just remind people because LOTS of people think RAID = backup. If you know the difference, great. I will still keep reminding people when they are having RAID issues...

    I don't know what PM means

    Where did you see "PM"?


    • Official Post

    Sorry, I realized afterwards that it was in your signature


    Please don't PM for support... Too many PMs!

    PM means Private Message. I guess it is called a Conversation on this board. I prefer all questions to be posted in a thread.


  • This situation has come back again.


    I do rsync to a USB drive, but I want fault tolerance too.


    Basically I have three RAID1 arrays:
    md0 for boot
    md1 for LVM
    md2 for data


    Only md2 disappears on boot, but it was also the only one I built through the OMV interface rather than during the debian-installer.


    # mdadm --detail /dev/md2


    /dev/md2:
    Version : 1.2
    Creation Time : Wed Dec 6 13:11:10 2017
    Raid Level : raid1
    Array Size : 2930135488 (2794.39 GiB 3000.46 GB)
    Used Dev Size : 2930135488 (2794.39 GiB 3000.46 GB)
    Raid Devices : 2
    Total Devices : 2
    Persistence : Superblock is persistent


    Intent Bitmap : Internal


    Update Time : Wed Dec 6 13:34:43 2017
    State : clean, resyncing
    Active Devices : 2
    Working Devices : 2
    Failed Devices : 0
    Spare Devices : 0


    Resync Status : 7% complete


    Name : ra:2 (local to host ra)
    UUID : 485973b2:2e0ec7d1:b256fb1a:1d7dca3d
    Events : 324


    Number Major Minor RaidDevice State
    0 8 48 0 active sync /dev/sdd
    1 8 32 1 active sync /dev/sdc


    # cat /etc/mdadm/mdadm.conf
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #


    # by default, scan all partitions (/proc/partitions) for MD superblocks.
    # alternatively, specify devices to scan, using wildcards if desired.
    # Note, if no DEVICE line is present, then "DEVICE partitions" is assumed.
    # To avoid the auto-assembly of RAID devices a pattern that CAN'T match is
    # used if no RAID devices are configured.
    DEVICE partitions


    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes


    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>


    # definitions of existing MD arrays
    ARRAY /dev/md/0 metadata=1.2 name=ra:0 UUID=b341e185:33cce37b:7b27f804:6aece78e
    ARRAY /dev/md/1 metadata=1.2 name=ra:1 UUID=86c11a9d:8a3e5db6:e72397c5:89cefae9
    ARRAY /dev/md2 metadata=1.2 name=ra:2 UUID=485973b2:2e0ec7d1:b256fb1a:1d7dca3d


    Now, I might seem rude, and it's nice to have discussions about what a backup is, how to manage my data, and the doom of RAID5, but none of this brings a solution.
    So please, if you want to help me and potentially other users, propose solutions and/or commands to try.
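
    One concrete thing I am going to try next, based on similar Debian threads (treat it as a sketch — I have not confirmed it for OMV): rebuilding the initramfs, since boot-time assembly reads the copy of mdadm.conf embedded there rather than the file on the root filesystem:

    # update-initramfs -u

    and then rebooting and re-checking /proc/mdstat.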


    Thanks!


    Jonathan

  • Re,

    The funniest part is that in my console I get this message:
    W: mdadm: /etc/mdadm/mdadm.conf defines no arrays

    That's normal, because OMV uses pure superblock autodetection ... no need for static configuration ... normally.
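
    You can inspect the superblocks that this autodetection relies on directly on the member devices (device names taken from your --detail output):

    # mdadm --examine /dev/sdc /dev/sdd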


    This situation has come back again

    Because one of the drives in array "md2" is causing problems ... just check the logs and the SMART data on both members AFTER the resync is finished:

    Update Time : Wed Dec 6 13:34:43 2017
    State : clean, resyncing
    [...] Resync Status : 7% complete

    You can check the ongoing resync with:
    cat /proc/mdstat
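
    Once the resync is done, the checks I mean look roughly like this (smartmontools assumed to be installed; device names from your --detail output):

    # smartctl -a /dev/sdc
    # smartctl -a /dev/sdd
    # dmesg | grep -i -e ata -e md2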



    Btw ... may I ask why you use this layout:

    Basically I have three RAID1 arrays:
    md0 for boot
    md1 for LVM
    md2 for data

    RAID1 only protects against a drive failure, and that will usually occur much later than data corruption (silent or accidental).
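
    A periodic consistency check at least surfaces mismatches between the two mirror halves; on Debian, mdadm ships a helper for this (a sketch — run it against your data array and expect heavy I/O while it runs):

    # /usr/share/mdadm/checkarray /dev/md2
    # cat /sys/block/md2/md/mismatch_cnt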


    Sc0rp
