Losing RAID on reboot

  • 1. Created a software RAID through the admin menu
    2. Hardware:

    • Asrock Z87 EXTREME4 Mainboard Sockel LGA 1150
    • Intel i7 4770
    • 2 WD 3tb Red


    It worked well with my old hardware and the same hard disks.

  • Re,


    Hmm, I don't have a clue about this behavior ... normally the RAID should be auto-detected on boot.


    Hardware issues don't fit this case ... maybe some UEFI magic?


    Did you check your logs?


    Sc0rp

  • Which log shall I check?


    And where can I find the correct mdadm.conf?


    In the file /etc/mdadm/mdadm.conf I cannot see the configured hard drives.
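On a Debian-based OMV box, the ARRAY definitions in /etc/mdadm/mdadm.conf are normally generated from the running arrays with `mdadm --detail --scan`. The sketch below only illustrates what a populated ARRAY line looks like and how to check for one; the name and UUID are made-up placeholders, not values from this system.

```shell
# Hypothetical mdadm.conf fragment -- name and UUID are placeholders.
cat <<'EOF' > /tmp/mdadm.conf.example
ARRAY /dev/md0 metadata=1.2 name=NAS:0 UUID=00000000:00000000:00000000:00000000
EOF
# On a live system you would append real definitions with:
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u    # so the array is known at early boot
grep -c '^ARRAY' /tmp/mdadm.conf.example   # one array defined
```

If that grep finds nothing in your real mdadm.conf, the array is simply not registered, which matches it vanishing on reboot.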

  • Try looking at my topic too.
    I also suggest searching for mdadm; I found various different solutions (sadly, none of them worked for me).

    Intel G4400 - Asrock H170M Pro4S - 8GB ram - Be Quiet Pure Power 11 400 CM - Nanoxia Deep Silence 4 - 6TB Seagate Ironwolf - RAIDZ1 3x10TB WD - OMV 5 - Proxmox Kernel

  • I got the following result:


    root@NAS:~# sudo mdadm --assemble --force --verbose /dev/md0 /dev/sdf /dev/sdg
    mdadm: looking for devices for /dev/md0
    mdadm: Cannot assemble mbr metadata on /dev/sdf
    mdadm: /dev/sdf has no superblock - assembly aborted



    Maybe I should mention that I created the RAID with my old hardware and wanted to reuse it with my new one.
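A hedged reading of the error above: "Cannot assemble mbr metadata on /dev/sdf" means mdadm found an MBR partition table on the bare disk where it expected a RAID superblock. That commonly happens when the array was originally created on partitions (e.g. /dev/sdf1, /dev/sdg1) rather than on the whole disks, so it would be worth running `mdadm --examine` on each partition as well. The runnable part below uses a throwaway file, not a real disk, just to show the two-byte signature that makes a device look like "mbr metadata":

```shell
# Sketch on a scratch file: an MBR is recognised by the bytes
# 0x55 0xAA at offsets 510-511 of sector 0.
IMG=/tmp/fake-mbr.img
truncate -s 512 "$IMG"
printf '\125\252' | dd of="$IMG" bs=1 seek=510 conv=notrunc status=none
sig=$(od -An -tx1 -j 510 -N 2 "$IMG" | tr -d ' \n')
[ "$sig" = "55aa" ] && echo "MBR signature present"
# On the real machine, examine the partitions, not just the bare disks:
#   mdadm --examine /dev/sdf1 /dev/sdg1
```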

  • I can now see the following messages at boot:


    [Depend] Dependency failed for /srv/dev-disk-by-label/PrivateData
    [Depend] Dependency failed for File System Check on /dev-disk-by-label/PrivateData



    Hardware
    old: Core 2 Quad


    new: Intel i7 4770

  • It seems you used the disks somewhere else before building your array.

    1. Created a software RAID through the admin menu
    2. Hardware:

    • Asrock Z87 EXTREME4 Mainboard Sockel LGA 1150
    • Intel i7 4770
    • 2 WD 3tb Red


    It worked well with my old hardware and the same hard disks.

    Did you wipe the disks before using them in your new rig? I assume you didn't...


    I had a very similar problem because I had used my disks in another machine before. There were remains of other file systems on them, and for unknown reasons mdadm did not write the new superblocks properly.
    I created the RAID in the GUI, started copying my files, and after the first reboot the RAID disappeared... Very annoying...


    Wipe your disks first (be aware that all data on the disks will be lost!),



    Code
    dd if=/dev/zero of=/dev/sdX bs=1M (replace X with the target drive letter)

    then recreate your RAID in the GUI or in the shell. Double-check mdadm.conf for an existing array, reboot, and your RAID will (hopefully) persist.
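Put together, the wipe-and-recreate steps look roughly like the sketch below. The mdadm and update-initramfs lines are left as comments because they must only ever run against the real devices; the runnable part wipes a scratch file instead of a disk (the RAID level and device names in the comments are examples, not a prescription):

```shell
# Demonstrated on a scratch file; substitute the real /dev/sdX devices
# only after triple-checking -- the wipe destroys ALL data on them.
DISK=/tmp/fakedisk.img
truncate -s 8M "$DISK"
dd if=/dev/zero of="$DISK" bs=1M status=none      # full wipe
# On the real system, afterwards:
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdf /dev/sdg
#   mdadm --detail --scan >> /etc/mdadm/mdadm.conf
#   update-initramfs -u
# Verify the scratch file really is all zeros:
cmp -s "$DISK" <(head -c $((8*1024*1024)) /dev/zero) && echo wiped
```

Refreshing the initramfs after editing mdadm.conf is the step that makes the array known at early boot on Debian, which is exactly where it was going missing here.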

    OMV 4.1.8.2-1 (Arrakis) - Kernel Linux 4.16.0-0.bpo.2-amd64 | PlugIns: OMV-Extras, Shell-in-a-Box, Plex Media Server, Openmediavault-Diskstats
    Mainboard: MSI C236M Workstation | CPU: Intel Pentium G4500 | RAM: 2 x 4GB Kingston ECC | Systemdrive: 1 x Samsung EVO 860 | Datadrives: 4 x IronWolf ST6000VN0033 6 TB (Raid5) | NIC: Intel I350T2 PCIe x4

  • Re,

    There were remains of other file systems on them and for unknown reasons mdadm did not write the new superblocks properly.

    Yeah. That's a bug ... well known to me.


    dd if=/dev/zero of=/dev/sdX bs=1M (replace X with the target drive letter)

    Better use:
    dd if=/dev/zero of=/dev/sdX bs=4096 count=16 (the block size suits 4Kn drives better, and 16 × 4 KiB blocks will overwrite any GPT information at the start of the disk)
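As a quick sanity check on those numbers (pure arithmetic, not a command for a real disk): bs=4096 count=16 zeroes the first 64 KiB, which comfortably covers the protective MBR plus the primary GPT header and 128-entry partition table (at most 34 512-byte sectors). One caveat worth knowing: GPT also keeps a backup copy at the end of the disk, which a head-of-disk wipe does not touch.

```shell
# How much does bs=4096 count=16 actually zero?
echo $((4096 * 16))     # 65536 bytes = first 64 KiB of the disk
# Protective MBR + primary GPT header + 128-entry partition table:
echo $((34 * 512))      # 17408 bytes, well inside the wiped 64 KiB
```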


    But if you zero the drive, you have to add it to the array again ...


    Sc0rp
