mdadm: no arrays found in config file or automatically

    • OMV 4.x
    • onebutters wrote:

      I'm not surprised you would need a BIOS hack. The HP BIOS is terrible: options are hidden in disparate sections, and there is insufficient granularity in the boot options.
      That's exactly what it does: it allowed me to remove the CD and add 2 extra drives, one of which is connected to the eSATA port. I'm booting from the internal USB and all the drives are seen as individual drives; 'Onchip SATA Configuration' is set to Enabled, and Legacy IDE and SATA IDE Combined Mode to Disabled.

      I started using this about 12 months ago, replacing a commercial server that was old, to say the least.

      BTW there is an HP server thread on the forum and another here; not sure if either of those would be of any help.

      onebutters wrote:

      As soon as I try to load the HDDs, mdadm error.
      What's the mdadm error?
      Raid is not a backup! Would you go skydiving without a parachute?
    • Sorry I meant to say mdadm.

      Not trying to upset any apple carts; if it is misinformation, great. I really do want to find and use a Linux Debian-based solution.

      If OMV is going to be a possible multi-backend solution (Btrfs, mdadm, LVM, ZFS, etc.), why does it need to try to do any RAID configuration for any setup until after the initial OS install? Couldn't that be done after the OS is up and running? For my install, I seem to recall I had to drop the OMV install CD and use the Debian-first approach, which I think is what finally got me going.

      Once it does get going, it is a great fit for my needs and does everything I need much more easily than I recall FreeNAS doing. I did struggle to get OMV running, though, just as this user did.

      Either way, the developers of OMV are doing a great job and the forum is very helpful; my only reservation was the possible misinformation.

      Have a great Thanksgiving guys, and good luck hope it works out.
    • Hi,
      new install of OMV 4 on an SSD, from a bootable USB stick, on a socket 775 motherboard, E8400, 8 GB DDR2.
      After rebooting, all was fine. Setup in the OMV web GUI (network, IP address, ...): all was fine.
      Shutdown.
      Installed an LSI card with 8 SATA ports, plugged in 8 HDDs, set up a virtual drive in the LSI BIOS, fine.
      Booted from the SSD... and

      mdadm: no arrays found... messages, a lot of them, without stopping, until I ended up in a new, small OS CLI.

      Only a few commands exist in this small, reduced OS.

      I think OMV tries to boot from my new LSI volume, which has never been formatted, because I never managed to get the OMV GUI running.

      /dev/sda1 is certainly the LSI volume. Where is my SSD? How can I find the UUID of my SSD without disconnecting the LSI card?
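
      For what it's worth, a minimal sketch of how to answer that from any shell that can see the disks (a live/rescue system, or the boot prompt if blkid is available there); the device names are just examples:

      Source Code

      # list every block device with size, model and filesystem UUID
      lsblk -o NAME,SIZE,MODEL,FSTYPE,UUID
      # or query individual partitions
      blkid /dev/sda1 /dev/sdb1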

      Maybe I have to install a small OMV 3 on a new SSD, just to have the OMV GUI, and then take the time to initialize the LSI volume as ext4 RAID 6.

      What I don't know is:

      how do I get a Debian prompt without unplugging anything?
      what is the command, in that small OS (I don't remember its name), to access mdadm.conf on the SSD?
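
      If that small OS is the initramfs BusyBox shell (which is what Debian usually drops into when the root device cannot be found), a rough sketch for reaching mdadm.conf on the SSD could look like this, assuming the SSD's root partition turns out to be /dev/sdb1 (adjust to whatever blkid/lsblk shows):

      Source Code

      # from the (initramfs) prompt, mount the SSD's root filesystem
      mkdir -p /mnt/root
      mount /dev/sdb1 /mnt/root
      # inspect the array definitions that will be used at boot
      cat /mnt/root/etc/mdadm/mdadm.conf
      umount /mnt/root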

      If you have some simple steps, please share them.

      I can try to install OMV 5 :) or the older OMV 3 on another new SSD.
      In fact, all my OMV machines (I have several running systems) were built from OMV 3 and upgraded to OMV 4, or the HW RAID array was already created and initialized.

      For this reason, I am pretty sure there is a mistake in the OMV installer (or the Debian installer) for a new user who wants to do a first OMV install and then plug in a new RAID volume.
      Please take the time to answer or help me; my new big system is initializing RAID 6... it will take a few days :)

      I am sure the solution is not far off...
    • What are the contents of this file?

      /etc/initramfs-tools/conf.d/resume

      If RESUME= points to a filesystem UUID that is not available, then you will get about a dozen of those warnings before it gives up and boots.
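
      A minimal sketch of how to check and correct that (RESUME=none simply disables resume; put the UUID of a real swap partition there instead if you want hibernation to keep working):

      Source Code

      # what the initramfs will try to resume from
      cat /etc/initramfs-tools/conf.d/resume
      # the swap devices that actually exist right now
      blkid -t TYPE=swap
      # if the UUID is stale, disable resume (or set the correct UUID), then rebuild the initramfs
      echo "RESUME=none" > /etc/initramfs-tools/conf.d/resume
      update-initramfs -u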
      --
      Google is your friend and Bob's your uncle!

      RAID - Its ability to disappoint is inversely proportional to the user's understanding of it.

      ASRock Rack C2550D4I C0 Stepping - 16GB ECC - Silverstone DS380 + Silverstone DS380 DAS Box.
      This is what's in my OMV 5 mdadm.conf file. I have no idea how it got in there.

      ARRAY <ignore> devices=*

      I tried your fix, which I have seen posted on the net on and off for years, but it never worked.

      Are you running with swap enabled?
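
      To check that, the following should show whether any swap is active (standard util-linux / procfs, nothing OMV-specific):

      Source Code

      # list active swap areas, if any
      swapon --show
      cat /proc/swaps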
    • Hi,
      This is what I did to try to fix the "OMV" issue:
      • in the BIOS, I set the PCI Express slot where my RAID card was connected to disabled.
      • reboot.
      • login.
      • edited mdadm.conf with nano /etc/mdadm/mdadm.conf
      • added ARRAY <ignore> devices=/dev/sda
      • save.
      • reboot.
      • re-enabled the PCI Express slot.
      Same error.
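
      For reference, the command-line part of those steps is roughly the sketch below. One thing worth knowing: on Debian the mdadm.conf that matters during early boot is the copy embedded in the initramfs, so an update-initramfs run is normally needed before an edit like this can change anything at boot time:

      Source Code

      nano /etc/mdadm/mdadm.conf
      # add a line such as:  ARRAY <ignore> devices=/dev/sda
      # rebuild the initramfs so the edited mdadm.conf is actually used at boot
      update-initramfs -u
      reboot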

      Then I checked different OMV 4.x machines of mine, and at the end of mdadm.conf there is a section with:

      Source Code

      # definitions of existing MD arrays
      ARRAY <ignore> devices=*
      That machine runs OMV 4 and has 1 SSD and 1 HW RAID volume (with a card).
      /dev/sda1 => HW RAID volume
      /dev/sdb1 => SSD with the OS

      But on the new machine, the issue remained.
      During my various checks, I started the machine with the PCI Express slot disabled, and in the OMV web GUI I could see the HW RAID volume. I was surprised.
      And then I understood what was wrong:

      The setting was not disabling the PCI Express slot, it was disabling the BIOS (option ROM) on the PCI Express slot. So, with the PCI-e slot BIOS disabled, my OMV works fine and my RAID volume is mounted. I opened the mdadm.conf file, and
      there is nothing after the line # definitions of existing MD arrays

      So, there were a lot of mdadm messages during boot because of my wrong settings in the BIOS. Not caused by OMV.
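
      If anyone wants to double-check their own machine, the usual way to compare what mdadm actually detects with what is recorded at the end of mdadm.conf is roughly this (read-only checks, nothing here modifies the config):

      Source Code

      # arrays mdadm can see right now
      mdadm --detail --scan
      cat /proc/mdstat
      # what is recorded for the next boot
      tail -n 5 /etc/mdadm/mdadm.conf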

      thanks :thumbup:
    • Hi,
      I had the same problem on my HP MicroServer Gen8 running OMV 4.
      It turned out that I had recently replaced the SSD on which the system was installed with a bigger one, and one of the HDDs with a bigger one as well.
      I had 2 issues causing the slow boot and the "mdadm no arrays found" messages:
      1. the /etc/initramfs-tools/conf.d/resume file was referencing the swap partition on the previous SSD (thanks @gderf)
      2. the replaced HDD was still listed in /etc/fstab
      After correcting these 2 mistakes, no more mdadm messages on boot.
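
      In case it helps someone else, a rough sketch of how to spot both stale references at once and apply the fix (the exact lines to change depend on which disks were replaced):

      Source Code

      # UUIDs the system still expects
      cat /etc/initramfs-tools/conf.d/resume
      cat /etc/fstab
      # UUIDs that actually exist now
      blkid
      # after editing both files, rebuild the initramfs
      update-initramfs -u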

    • Hi,

      Yesterday evening I was trying to install a new OMV 4 on a motherboard: X58, 8 GB RAM, only one drive = a 120 GB SSD. No HW card, no other disks (for the moment).
      After a lot of different tries, I always got the "mdadm message".

      With or without the OMV USB stick present at boot.

      So I decided to change the BIOS settings: no booting from USB.
      Instead I use the boot-menu button to select which drive to boot. This way, the system sees only my SSD (for the moment). You can set SSD/HDD boot before USB.

      Just before installing OMV I did this:

      I also put my SSD into a Win10 PC, where it was the 5th disk. Then, via Run (Win+R), cmd and diskpart, I ran a couple of commands (list disk, select disk 5, clean; be careful, it is fast) so that no partitions were left on the SSD (the equivalent of gparted). The SSD is now not initialized, which is what I wanted.
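
      For the record, if the SSD is attached to a Linux box instead, the same "no partitions, not initialized" state can presumably be reached with wipefs. This is an alternative to the diskpart route above, not what was done here; replace /dev/sdX with the correct disk and double-check it first, since it is just as destructive:

      Source Code

      lsblk                   # confirm which device really is the SSD
      wipefs --all /dev/sdX   # wipe all partition-table and filesystem signatures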

      Unplugged it.

      Plugged it into the future OMV system.

      Plugged in the USB stick (OMV 4).

      As only one bootable drive was present (the USB stick, not the SSD, because it had been erased by the previous step), the OMV install began.

      And during the OMV installation, I could see this: for the first time, sda1 was selected and preferred over sdb1, which had never happened in my few previous installs since yesterday, even though the OMV USB stick was present.

      So, when the OMV install USB stick is present and you decide to install to a drive, be sure that your new drive is clean: no partitions, no leftover partition information. AND in your BIOS, do not put the USB drive before the HDD/SSD in the boot order (just for the OMV install).

      This is how I got a new and fresh install of OMV with only one SSD, straight from the OMV install USB key (no need to change anything in GRUB or mdadm.conf), without the mdadm issue.

      Thanks for your valuable help.

    • Hello, I've experienced the same issue after a clean install (on an external 2.5" USB HDD, UEFI).
      The solution is "simple": when the system is up to date and ready to go, shut down the server, replug all the RAID HDDs, and reboot, but go into the boot menu and choose "debian" (if UEFI menu) or your OMV system disk (if legacy).
      That's all; the initramfs will then be updated, and there is nothing to do in /etc/fstab because the UUIDs stay correctly configured.
      The issue comes from letting it try to boot automatically from the default entry.
      Now go into the OMV web UI and check the RAID section; the configuration was imported automatically.
      EDIT: forgot to mention: I've rebooted the server and the system HDD is now seen as /dev/sde2, but don't worry about that; it is normal, because the SATA HDDs are detected first, so sata0 = sda, sata1 = sdb, etc...
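
      If you want to confirm from the shell that the array really came back after that reboot, something along these lines should be enough (purely read-only checks, nothing OMV-specific):

      Source Code

      cat /proc/mdstat        # any assembled md arrays and their state
      mdadm --detail --scan   # the ARRAY lines as mdadm sees them now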
