mdadm: no arrays found in config file or automatically

    • Official Post

    I'm not surprised you would need a BIOS hack. The HP BIOS is terrible: options are hidden in disparate sections and there is insufficient granularity in the boot options.

    That's exactly what it does: it allowed me to remove the CD and add 2 extra drives, one of which is connected to the eSATA port. I'm booting from the internal USB and all the drives are seen as individual drives; 'Onchip SATA Configuration' is set to Enabled and 'Legacy IDE and SATA IDE Combined Mode' to Disabled.


    I started using this about 12 months ago, replacing a commercial server which was old, to say the least.


    BTW, there is an HP server thread on the forum and another here; not sure if either of those would be of any help.


    As soon as I try to load the HDDs, mdadm error.

    What's the mdadm error?

  • Sorry I meant to say mdadm.


    Not trying to upset any wagons; if it is misinformation, great. I really do want to find and use a Debian-based Linux solution.


    If OMV is going to be a possible multi-back-end solution (Btrfs, mdadm, LVM, ZFS, etc.), why does it need to attempt any RAID configuration for any setup before the initial OS install is finished? Couldn't that be done after the OS is up and running? For my install, I seem to recall I had to drop the OMV install CD and use the Debian-first approach, which I think is what finally got me going.


    Once it does get going, it is a great fit for my needs and does everything I need it to do much more easily than I recall FreeNAS doing. I did struggle to get OMV running just as this user did, though.


    Either way, the developers of OMV are doing a great job and the forum is very helpful; my only reservation was the possible misinformation.


    Have a great Thanksgiving guys, and good luck hope it works out.

  • Hi,
    new install of OMV 4 on an SSD, from a bootable USB stick, on a socket 775 motherboard, E8400, 8 GB DDR2.
    After rebooting, all was fine. Setup in the OMV web GUI, network, IP address, ... all was fine.
    Shutdown.
    Installed an LSI card with 8 SATA ports, plugged in 8 HDDs, set up a virtual drive in the LSI BIOS, fine.
    Boot on the SSD... and


    mdadm: "no arrays found..." messages, a lot of them, without stopping, until I ended up in a small OS CLI.


    A few commands are available in this small, reduced OS.


    I think OMV tries to boot from my new LSI volume, which has never been formatted, because I never succeeded in reaching the OMV GUI.


    /dev/sda1 is certainly the LSI volume. Where is my SSD? How can I find the UUID of my SSD without disconnecting the LSI card?


    Maybe I have to install a small OMV 3 on a new SSD, just to have the OMV GUI, and then take the time to initialize the LSI volume as an ext4 RAID 6.


    What I don't know is:


    how to get a Debian prompt without unplugging anything?
    what is the command, in the small OS (I don't remember its name), to access mdadm.conf on the SSD?


    If you have some simple steps, please share them.


    I can try to install OMV 5 :) or the older OMV 3 on another new SSD.
    In fact, all my OMV systems (I have several running) were made from OMV 3 and upgraded to OMV 4, or the HW RAID array was already created and initialized.


    For this reason, I am pretty sure there is a mistake in the OMV installer (or the Debian installer) for a new user who wants to run a first OMV install and then plug in a new RAID volume.
    Please take the time to answer or help me; my new big system is initializing its RAID 6... for a few days :)


    I am sure the solution is not far off...

  • Same problem. Installed OMV 5.0.5 on an SSD. Lots of


    mdadm: no arrays found in config file or automatically.
    But when the USB stick with the installer is in the computer, no problems.
    Dieter

  • What are the contents of this file?


    /etc/initramfs-tools/conf.d/resume


    If RESUME= is set to some filesystem UUID that is not available, then you will get about a dozen of those warnings before it gives up and boots.
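

    A minimal check-and-fix sketch, assuming the box has (or had) a swap partition that RESUME should point at; the UUID below is only a placeholder, take the real one from the blkid output, or use RESUME=none if you do not need hibernation:


    Code
    # show what the initramfs will wait for at boot
    cat /etc/initramfs-tools/conf.d/resume
    # list the swap partition(s) actually present on this system
    blkid -t TYPE=swap
    # point RESUME at the real swap UUID (xxxx-xxxx is a placeholder), then rebuild the initramfs
    echo 'RESUME=UUID=xxxx-xxxx' > /etc/initramfs-tools/conf.d/resume
    update-initramfs -u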

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

    This is what's in my OMV 5 mdadm.conf. I have no idea how it got in there.


    ARRAY <ignore> devices=*


    I tried your fix, which I have seen posted on the net on and off for years, but it never worked.


    Are you running with swap enabled?

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.

  • Hi,
    This is what I did to fix the "OMV" issue:

    • in the BIOS, I set the PCI Express slot where my RAID card was connected to disabled
    • reboot
    • login
    • edited mdadm.conf with nano /etc/mdadm/mdadm.conf
    • added ARRAY <ignore> devices=/dev/sda
    • save
    • reboot
    • re-enabled the PCI Express slot

    same error.


    Then I checked different OMV 4.x machines of mine, and in mdadm.conf, at the end of the file, there is a section with:


    Code
    # definitions of existing MD arrays
    ARRAY <ignore> devices=*

    That machine runs OMV 4 and has 1 SSD and 1 HW RAID volume (on a card).
    /dev/sda1 => HW RAID volume
    /dev/sdb1 => SSD (OS)


    But on the new machine, the issue remained.
    During my various checks, I started the machine with the PCI Express slot disabled, and in the OMV web GUI I could see the HW RAID volume. I was surprised.
    And I understood what was wrong:


    The setting was not disabling the PCI Express slot, but disabling the BIOS (option ROM) on the PCI Express slot. So, with the PCIe slot's BIOS disabled, my OMV works fine, my RAID volume is mounted, and when I opened the mdadm.conf file,
    there is nothing after the line # definitions of existing MD arrays


    So there were a lot of mdadm messages during boot because of my wrong settings in the BIOS. Not caused by OMV.


    thanks :thumbup:

  • Hi,
    I had the same problem on my HP Microserver Gen8 running OMV 4.
    It turned out that I had recently replaced the SSD on which the system was installed with a bigger one, and one of the HDDs with a bigger one as well.
    I had 2 issues causing the slow boot and the "mdadm: no arrays found" messages:

    • the /etc/initramfs-tools/conf.d/resume file was still referencing the swap partition on the previous SSD (thanks @gderf)
    • the replaced HDD was still referenced in the /etc/fstab file

    After correcting these 2 mistakes, no more mdadm messages on boot. A quick way to check for the fstab part is sketched below.
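

    A minimal way to spot a stale fstab entry, assuming the old drive is identified by a UUID that no longer shows up in the blkid output (the resume file can be checked the same way as in the sketch earlier in the thread):


    Code
    # UUIDs that actually exist right now
    blkid
    # UUIDs the system expects at boot
    grep -v '^#' /etc/fstab
    # after removing or correcting any stale entry, refresh the initramfs
    update-initramfs -u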

  • Hi,


    Yesterday evening I was trying to install a new OMV 4 on an X58 motherboard, 8 GB RAM, with only one drive: a 120 GB SSD. No HW card, no other disks (for the moment).
    After a lot of different tries, I always got the "mdadm message".


    With or without the OMV USB stick present at boot.


    So I decided to change the BIOS settings: no booting from USB,
    but using the boot-selection key to pick the boot drive. This way, the system sees only my SSD (for the moment). You can set the SSD/HDD to boot before USB.


    Just before installing OMV I did this:


    I also put my SSD into a Win10 PC, where it was the 5th disk. Then, in a command prompt (cmd) running diskpart, I ran a couple of commands (list disk, select disk 5, clean; be careful, clean is fast) so that no partition was left on the SSD (the equivalent of using GParted). The SSD is now not initialized, which is what I wanted. (A Linux alternative is sketched below.)
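

    If no Windows machine is handy, a rough equivalent can be done from any Linux live system; a sketch, assuming the SSD shows up as /dev/sdX (double-check with lsblk first, wipefs is destructive):


    Code
    # identify the SSD; the device name used below is only an example
    lsblk -o NAME,SIZE,MODEL
    # remove all filesystem, RAID and partition-table signatures from the disk
    wipefs -a /dev/sdX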


    Unplug it.


    Plug it into the future OMV system.


    Plug in the USB stick (OMV 4).


    As only one bootable drive was present (the USB stick, not the SSD, because the SSD had been erased in the previous step), the OMV install began.


    And during the OMV installation, I could see this: for the 1st time, sda1 was selected and preferred over sdb1, unlike in my few previous installs since yesterday, even though the OMV USB stick was present.


    So, when the OMV install USB stick is present and you decide to install to a drive, be sure that your new drive is clean: no partitions, no previous partition information. AND in your BIOS, do not put the USB drive before the HDD/SSD in the boot order (just for the install of OMV).


    This is how I got a new and fresh install of OMV with only 1 SSD, from the OMV install USB key (no need to change anything in GRUB or mdadm.conf), without the mdadm issue.


    Thanks for your precious help.

  • Hello, I've experienced the same issue after a clean install (on an external 2.5" USB HDD, UEFI).
    The solution is "simple": when the system is up to date and ready to go, shut down the server, replug all the RAID HDDs, reboot, but go into the boot menu options and choose "debian" (if UEFI menu) or your OMV system disk (if legacy).
    That's all; the initramfs will then be updated, and there is nothing to do in /etc/fstab because the UUIDs stay correctly configured.
    The issue comes from letting it try to boot automatically.
    Now go into the OMV web UI and check the RAID section; the config was imported automatically.
    EDIT: forgot to mention: I've rebooted the server and the system HDD is now seen as /dev/sde2, but don't worry about that; it is normal, because the first HDDs detected were the SATA HDDs, so sata0 = sda, sata1 = sdb, etc.

  • My 2 cents.
    I had the same problem. Fixed it with 2 steps:
    1) cat /etc/initramfs-tools/conf.d/resume:
    RESUME=/dev/vda1 #instead of turned-off swap uuid
    2) omv-mkconf mdadm
    It updates /etc/mdadm/mdadm.conf and rebuilds initramfs.
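

    A small verification sketch to go with those two steps, assuming the goal is simply to confirm that the regenerated config and the attached disks agree (these commands only read state, nothing is changed):


    Code
    # the regenerated config; look at the section after "definitions of existing MD arrays"
    cat /etc/mdadm/mdadm.conf
    # what mdadm itself can see on the attached disks
    mdadm --detail --scan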

  • My 2 cents.
    I had the same problem. Fixed it with 2 steps:
    1) cat /etc/initramfs-tools/conf.d/resume:
    RESUME=/dev/vda1 #instead of turned-off swap uuid
    2) omv-mkconf mdadm
    It updates /etc/mdadm/mdadm.conf and rebuilds initramfs.


    I had the same problem with OMV 5. There is an SSD (system) and an HDD (data) installed.

    I had to make some adjustments:


    1) nano /etc/initramfs-tools/conf.d/resume

    RESUME=/dev/sda1


    2) Since omv-mkconf is no longer available, I had to use omv-salt.


    omv-salt deploy run mdadm


    Edit: okay, after a few restarts the error messages still appear, but they are skipped.
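

    One possible reason the messages keep coming back (an assumption, not something confirmed here) is that the copy of mdadm.conf baked into the initramfs is still the old one, so a sketch worth trying is to rebuild the initramfs right after the salt deploy:


    Code
    # regenerate /etc/mdadm/mdadm.conf (OMV 5)
    omv-salt deploy run mdadm
    # refresh the copy embedded in the initramfs for all installed kernels
    update-initramfs -u -k all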

  • I had the issue again during a fresh install going from OMV 4 to 5. I did it as normal: removed the RAID disks, leaving only the OS disk attached, and reinstalled, and as soon as the RAID went back in, the error appeared. But I remembered seeing, in one of the posts further back, that if the RAID is left plugged in, the needful gets updated at the end of the install. So I tried that, being careful about what I selected during the install, and it all worked fine; it booted right up after the install with the RAID all present and working.

    OMV 5 - 64 bit
    Dell T430, 16gb Ram, 5 x 3TB HDD Raid5, 1 x 120GB 2.5" SSD (OS)

  • What's the mdadm error?

    Sorry I didn't respond to this earlier. I have now succeeded in installing OMV 5 on my HP Microserver. I'll give a summary of my experience below.


    During my initial research into HP Microserver Gen8 OMV setups, I was led to believe that I should change the interface in the BIOS from the default B120 to Legacy IDE, with the intention of letting OMV handle the (soft) RAID (https://www.reddit.com/r/OpenM…4_on_hp_microserver_gen8/). I'm not sure if I interpreted the instruction correctly; however, I ran with this and encountered the previously mentioned mdadm errors. After endless permutations, at my wits' end, I decided to disregard this advice and revert to the B120 interface as a last-ditch effort, and the install went flawlessly.


    So much for months of on/off attempts!

  • I had a similar problem. The mdadm errors were spamming the console and slowing down the boot.

    I issued the command:

    omv-salt deploy run mdadm

    which regenerated mdadm.conf, and those mdadm messages are gone.

  • Getting the same issue on a new install on a Dell 3070 Micro; the SSD still has Win 10 on it, booted in legacy mode.

    I tried to install onto a USB stick, which went OK, but at reboot it says:

    /dev/MD: No such file or directory

    missing modules (cat /proc/modules; ls /dev)

    ALERT! /dev/sdb1 does not exist. Dropping to a shell!

    The same card boots on another computer, no problem.

    So at this point what should I do?

  • This solution worked perfectly for me; I just had to replace sdb1 with sda1.


    Now my other issue: I do not see the new install on the network, and I did option #1 in omv-firstaid.

    Now I just tried the web interface and everything is OK.

  • Hey folks,


    I solved it for my situation, where I have OMV 5 installed on an SSD and am adding more HDDs.


    The trick is not to hook up the new HDDs before the system has booted. Do as follows:

    1. Fire up your unchanged system
    2. Log into the shell (I logged in as root)
    3. Attach the HDD to your system (next SATA port for me) while the system is still up and running
    4. Run "update-grub" in the shell
    5. Restart your system

    My system now just started up with the new HDD attached, without any mdadm message. The same steps as plain commands are sketched below.
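

    The same sequence as plain commands, with a quick check added; this assumes the SATA controller copes with the drive being hot-plugged while the system is running:


    Code
    # after hot-plugging the new HDD, confirm the kernel sees it
    lsblk -o NAME,SIZE,MODEL
    # regenerate the GRUB configuration, then restart
    update-grub
    reboot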
