RAID 5 array disappearing after a reboot.

  • I have OMV 2.1 installed on a machine.
    i7-4790
    Asus Z series MoBo
    16GB Corsair memory
    Crucial M.2 SATA3 256GB
    4 x 3TB WD Red drives connected straight to the MoBo


    A couple of times in a row now after installing I have lost the RAID array. At first I thought maybe it was me and I had clicked a wrong button. After recreating it and doing some searching, it kept happening. I decided to do a fresh install. Same thing, before any updates are applied at all. Here are my outputs from the "Degraded or Missing" thread...




    • Official Post

    What is the output of: fdisk -l | grep "Disk " | grep sd | sort
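
    For reference (illustrative values only; assuming the four 3 TB data drives plus the M.2 system disk enumerate as sda through sde), the filtered output would look something like this:

    Code
    Disk /dev/sda: 3000.6 GB, 3000592982016 bytes
    Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
    Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes
    Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes
    Disk /dev/sde: 256.1 GB, 256060514304 bytes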


    • Official Post

    The following should assemble the array.
    mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcd]


    Then update initramfs


    update-initramfs -u
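
    If the assembly succeeds, it can help (a general mdadm sketch, not OMV-specific advice) to verify the array and record it in mdadm.conf so it survives the next reboot:

    Code
    # check that md0 is running with all four members active
    cat /proc/mdstat
    mdadm --detail /dev/md0

    # append the array definition to mdadm's config, then rebuild the initramfs
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u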

  • Code
    root@openmediavault:~# mdadm --assemble --force --verbose /dev/md0 /dev/sd[abcd]
    mdadm: looking for devices for /dev/md0
    mdadm: Cannot assemble mbr metadata on /dev/sda
    mdadm: /dev/sda has no superblock - assembly aborted
    root@openmediavault:~#

    • Official Post

    Try to assemble a degraded array (without sda). Then wipe sda and add it later.


    mdadm --assemble --force --verbose /dev/md0 /dev/sd[bcd]
    cat /proc/mdstat
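
    Once the degraded array is up, the wipe-and-re-add step could look like this (a sketch; double-check the device name before zeroing anything):

    Code
    # clear any stale metadata on sda
    mdadm --zero-superblock /dev/sda   # errors out harmlessly if no superblock exists
    wipefs -a /dev/sda

    # add it back; md0 rebuilds onto it
    mdadm --add /dev/md0 /dev/sda
    cat /proc/mdstat                   # watch the recovery progress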


  • Why do I get the feeling these drives, which I swore I wiped, were not actually wiped...

    • Official Post

    Your last two commands would never have worked, since you can't start a RAID 5 array with two or more missing disks.


    How did you wipe them? I use dd if=/dev/zero of=/dev/sda bs=512 count=100000. Obviously, you don't want to do that if there is data on them.
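
    As an aside (not from the original post): zeroing the first 100000 sectors clears the partition table and any metadata at the start of the disk, but the mdadm 0.90 and 1.0 superblock formats live at the end of the device, so a signature-aware wipe is more thorough:

    Code
    # remove all known filesystem and RAID signatures in one step
    wipefs -a /dev/sda

    # or clear just the md superblock, wherever its format placed it
    mdadm --zero-superblock /dev/sda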


  • I just used the feature in the GUI: Wipe > Quick wipe. The data on the disks does not matter.


    Code
    root@openmediavault:~# dd if=/dev/zero of=/dev/sda bs=512 count=100000
    100000+0 records in
    100000+0 records out
    51200000 bytes (51 MB) copied, 1.20614 s, 42.4 MB/s
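
    After a wipe like that, a quick way to confirm the old metadata is really gone (a sketch):

    Code
    # should report that no md superblock is detected on a clean disk
    mdadm --examine /dev/sda

    # prints nothing if no signatures remain
    wipefs /dev/sda
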
  • Well, this is a first. I went in through SSH and ran fdisk with options s, then w, then p, then d for each partition number, then s and w again. Now the GUI shows all is good when I choose the quick wipe feature.


    Making a new RAID 5 array now, and it hasn't told me to kick rocks yet. I'll check this after work.

  • Hi,


    Not sure if it worked for you already; I had a similar issue, you can read about it here: Mdadm device (/dev/md3) raid5 get lost on every reboot


    My problem was solved when I used primary partitions instead of the whole disk, i.e. /dev/sda1, /dev/sdb1, etc., instead of /dev/sda, /dev/sdb.


    I think it should also work with the full-disk device instead of partitions, but for some reason the superblocks were getting overwritten after the reboot.
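
    A sketch of that partition-based approach (assumed device names; adjust for your disks, and only use it on disks whose data does not matter):

    Code
    # create one partition spanning each disk
    for d in sda sdb sdc sdd; do
        parted -s /dev/$d mklabel gpt mkpart primary 0% 100%
    done

    # build the RAID 5 array on the partitions rather than the raw disks
    mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[abcd]1

    # record it so it assembles on boot
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    update-initramfs -u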
