mdadm: no arrays found in config file or automatically

    • This is a fresh install of OMV 4.0.9 to a USB flash drive. During the install I had the internal HD removed, and with the HD removed the boot generated no errors. After reconnecting the HD I get the following error during boot; it repeats about 30 to 40 times before the boot finishes:

      mdadm: no arrays found in config file or automatically

      As suggested elsewhere, I modified /etc/mdadm/mdadm.conf to include:

      # definitions of existing MD arrays
      ARRAY <ignore> devices=/dev/sda

      Then I did: omv-mkconf mdadm

      This generated the following error:

      mdadm: /etc/mdadm/mdadm.conf defines no arrays

      After a reboot I got the same error message again. I checked /etc/mdadm/mdadm.conf and noticed that the line I had added was gone, so I added it again and rebooted, and I am still getting the same error.

      Any idea how to get rid of this error at boot?

      Thanks
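
      A note on why the edit keeps vanishing: omv-mkconf mdadm regenerates /etc/mdadm/mdadm.conf from OMV's internal configuration database, which would explain why a manually added line disappears after running it. Before adding an ignore entry, it is also worth confirming what mdadm actually finds on the disk. A minimal check, assuming /dev/sda is the reconnected HD as in the post above:

      Shell-Script

      # List any md superblocks on the disk itself:
      mdadm --examine /dev/sda
      # Scan all devices and print ARRAY lines for anything detected:
      mdadm --examine --scan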
    • I'm getting this

      Just done a fresh install: I unplugged all the drives in the software RAID and installed the OS onto the SSD, which then booted fine, and I could access the web interface. Then I shut down, reconnected the drives, and started the system again, but all I get is:

      mdadm: no arrays found in config file or automatically
      Gave up waiting for root file system device. Common problems:
      - Boot args (cat /proc/cmdline)
      - Check rootdelay= (did the system wait long enough?)
      - Missing modules (cat /proc/modules; ls /dev)
      ALERT! /dev/sda1 does not exist. Dropping to a shell!

      Then it drops to BusyBox.

      But if I shut down, unplug the RAID drives, and power back up, it boots normally. Any suggestions?
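
      A hedged observation on the ALERT above: with the five RAID disks reconnected, the OS SSD may no longer enumerate as /dev/sda, so a root=/dev/sda1 kernel argument would point at the wrong device. From the BusyBox (initramfs) prompt, a few standard commands can narrow this down (device names are illustrative):

      Shell-Script

      cat /proc/cmdline        # check what root= the kernel was actually given
      ls /dev/sd*              # see which block devices exist right now
      mdadm --assemble --scan  # try to assemble any arrays mdadm can detect
      exit                     # resume booting if the root device has appeared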
      OMV 3.0.58 - 64 bit - Nut, SABnzbd, Sonarr, Couchpotato
      HP N40L Microserver, 8gb Ram, 5 x 3TB HDD Raid5, 1 x 120GB 2.5" SSD (OS)

    • Update:

      As a test I reinstalled OMV 3 onto the OS drive with the software RAID drives unplugged. Once installed, I plugged the RAID drives back in and booted the system, which boots normally. It looks like an OMV 4 issue, but I'm no expert with Linux, so who knows.
      OMV 3.0.58 - 64 bit - Nut, SABnzbd, Sonarr, Couchpotato
      HP N40L Microserver, 8gb Ram, 5 x 3TB HDD Raid5, 1 x 120GB 2.5" SSD (OS)
    • Hi All,

      Didn't want to create another thread about the same issue, and since the OP managed to sort it, I figured it wasn't bad form to jump in.

      I too have this issue on a brand new build, using build 4.0.14.

      Built on an HP MicroServer Gen8, following the setup found here: abyssproject.net/2017/03/prepa…rver-gen-8-servir-de-nas/
      (I used Google Translate to get an English version.)

      However, I can't get past the error to build a new array.

      I will try to fire this up using OMV 3.x tomorrow... however, the indications above suggest it would be OK. So what has changed in this respect between OMV 3.x and OMV 4.x?

      Any pointers gratefully appreciated!

      Many thanks
      Paul

    • toibs wrote:

      I too have this issue on a brand new build, using build 4.0.14. [...] So what has changed in this respect between OMV 3.x and OMV 4.x?
      Did you try putting OMV 3 back on, and did it work as expected like it did for me?

      It's a shame OMV 4 didn't work; since everything's in Docker, it looks like I could have moved over without any issues.
      OMV 3.0.58 - 64 bit - Nut, SABnzbd, Sonarr, Couchpotato
      HP N40L Microserver, 8gb Ram, 5 x 3TB HDD Raid5, 1 x 120GB 2.5" SSD (OS)
    • I am running into the same issue. Fresh install of Debian 9; I installed openmediavault via apt-get and am now getting the "no arrays" error. I have Debian running on a single SSD, and I have a four-drive array that is not yet configured in OMV (it does have an old configuration from another software RAID).

      Shell-Script

      W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.

      z0rk wrote:

      After a reboot I got the same error message again. I checked /etc/mdadm/mdadm.conf and noticed that the line I had added was gone, so I added it again and rebooted, and I am still getting the same error.

      I am experiencing the same as well.
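
      Since the four data drives still carry metadata from an earlier software RAID, one possible culprit is stale superblocks confusing mdadm's scan. A sketch for inspecting them and, only if that old array is expendable, clearing them (device names are illustrative; --zero-superblock destroys the old RAID metadata):

      Shell-Script

      # Show any leftover md metadata on the old member disks:
      mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde
      # If the old array is no longer needed, wipe its superblocks:
      mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde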
    • gderf wrote:

      chclark wrote:

      Anyone had an update on this? Or does it look like we're sticking with OMV 3?
      Are you having any other problems besides that repeating warning message during boot?
      That's the only issue I can see, but it stops the system booting. I have had to roll back and install OMV 3, as that is the only way the system will boot.
      OMV 3.0.58 - 64 bit - Nut, SABnzbd, Sonarr, Couchpotato
      HP N40L Microserver, 8gb Ram, 5 x 3TB HDD Raid5, 1 x 120GB 2.5" SSD (OS)
    • I had the same issue. I've been running OMV 2, and in order to upgrade to OMV 4 I disconnected all data disks, as the manual says to, and performed a fresh install. OMV was booting with no issues; then I reconnected the data disks (RAID5) and got stuck in the (initramfs) console. At that point I remembered that I had forgotten to edit /etc/mdadm/mdadm.conf; before the re-install it had

      Source Code

      # definitions of existing MD arrays
      ARRAY /dev/md0 metadata=1.2 name=openmediavault:Storage UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
      at the bottom (I replaced my UUID value with x-es here). So I disconnected all the drives again, booted, put the ARRAY line back into mdadm.conf, and ran
      > update-initramfs -u
      (-u for update) and
      > update-grub
      for good measure. Then I shut down, connected all the drives, and booted up; the problem was gone and the RAID is up and running (you do have to mount the filesystem that lives on the RAID through the OMV interface, though).
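
      Collected as a minimal sketch of the recovery described above (the ARRAY line comes from the old mdadm.conf; run the first steps with the data disks disconnected):

      Shell-Script

      # 1. Append the saved ARRAY line to /etc/mdadm/mdadm.conf, then:
      update-initramfs -u   # rebuild the initramfs so it carries the updated mdadm.conf
      update-grub           # refresh the GRUB configuration for good measure
      # 2. Shut down, reconnect the drives, boot, and verify assembly:
      cat /proc/mdstat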

      Note: I don't know whether the UUID lives in the superblock or comes from this config, but I'm pretty sure the name parameter should match the name of your RAID array.

      I googled mdadm.conf; the manual says:

      uuid=
      The value should be a 128 bit uuid in hexadecimal, with punctuation interspersed if desired. This must match the uuid stored in the superblock.
      name=
      The value should be a simple textual name as was given to mdadm when the array was created. This must match the name stored in the superblock on a device for that device to be included in the array. Not all superblock formats support names.

      So it's better to have the original mdadm.conf to copy the ARRAY line from; otherwise you have to get that information from the RAID array somehow (see the sketch below).
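
      If the original mdadm.conf is gone, mdadm itself can reconstruct the ARRAY line from the superblocks on the member disks. A minimal sketch:

      Shell-Script

      # Print ARRAY lines for arrays found by scanning device superblocks:
      mdadm --examine --scan
      # Or, for an array that is already assembled:
      mdadm --detail --scan
      # Either output can be appended to the config file:
      mdadm --examine --scan >> /etc/mdadm/mdadm.conf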

    • curlymike wrote:

      I had the same issue. [...] So it's better to have the original mdadm.conf to copy the ARRAY line from; otherwise you have to get that information from the RAID array somehow.
      Well, I'm lost already; this seems like a bigger problem, as I've never had to mess with this before. I will wait for dev/mod input before I attempt an OMV 4 install again.
      OMV 3.0.58 - 64 bit - Nut, SABnzbd, Sonarr, Couchpotato
      HP N40L Microserver, 8gb Ram, 5 x 3TB HDD Raid5, 1 x 120GB 2.5" SSD (OS)
    • +1 here: fresh install on a flash drive, 2 x 1 TB disks with partitions removed in GParted so the disks are not initialized, and I get the same error message.
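
      One thing worth ruling out here: deleting partitions in GParted does not necessarily erase md superblocks, so stale RAID signatures can survive on the disks. A quick, non-destructive check (device names are illustrative):

      Shell-Script

      # List any remaining filesystem/RAID signatures without erasing them:
      wipefs /dev/sdb /dev/sdc
      # Check for md superblocks specifically:
      mdadm --examine /dev/sdb /dev/sdc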
      Linux openmediavault 4.15.0-0.bpo.2-amd64 #1 SMP Debian 4.15.11-1~bpo9+1 (2018-04-07) x86_64 GNU/Linux
      HP ProLiant N40L, 8 GB RAM, on a 64 GB flash drive; AMD Turion(tm) II Neo N54L Dual-Core Processor
      More about me: etiennebretteville.com