mdadm: no arrays found in config file or automatically

    • OMV 4.x


    • liquid7 wrote:

      Markess wrote:

      Definitely need to defer to you on these things, as it's way above my expertise. In any case, I now think my issue is slightly different from the others here. I'm failing to boot due to the mdadm message others are getting, followed by a busybox prompt. But in my case, I was starting with a fresh install and no disks connected. I wanted to pull some baseline power numbers with no disks attached, and couldn't get it to boot due to that message. I never got to the point where I connected the disks, so it's a different situation from a failure to re-assemble an array.
      Check the last reply on my thread Unable to boot after installation (Dropping to a shell!) I think you may have the same problem which I did.
      Your screenshot in the first post of that thread is exactly what I was getting. I'll reinstall in the next day or so when I have a moment and take a look at GRUB at boot. I'll see if the fix in that thread works for me, and will try what @ryecoaaron recommends above: raid=noautodetect.

      What is really strange to me is that I haven't used RAID in years. Not since @ryecoaaron recommended I try rsync instead of RAID quite some time back. Now I'm using Snapraid, so there's never been an array to assemble or reassemble, or have any other kind of issue with!
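      For reference, the raid=noautodetect suggestion mentioned above is a kernel command-line parameter, so in the GRUB edit screen it would be appended to the 'linux' line. A sketch only (the kernel version and device name here are examples from this thread, not necessarily yours):

      ```shell
      # Press 'e' on the boot entry in GRUB, then append raid=noautodetect
      # to the end of the kernel line, e.g.:
      linux /boot/vmlinuz-4.14.0-0.bpo.3-amd64 root=/dev/sda1 ro quiet raid=noautodetect
      # then press F10 to boot with the changed parameters (one-time only;
      # run update-grub after booting to make any change permanent).
      ```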
      Working with computers since the days when unboxing and set-up required 3 weeks with a soldering iron!
    • liquid7 and Bohatyr are correct from what I can see, yet I am still having issues. When installing from a USB drive, it boots to GRUB after the install. Going to edit the boot menu brings up what is below:

      [IMG:https://imgur.com/Focqwr2]

      [IMG:https://imgur.com/ixZTWoV]

      From this I can see that the UUID is not right at all. Before trying this install, I grabbed the UUID of the drive the OS was to be installed on, which is e984d5db-5116-4c31-bfe7-c2a39775f9eb. The UUIDs don't match at all, and the entire "search" section of the code here is something I don't think should be there. So I tried removing that entire if/else statement, and edited the "linux" line so the UUID is present instead of the /dev*. That can be seen below:

      [IMG:https://imgur.com/PXY0dlH]

      Yet, when trying to boot from this code, another issue pops up. It cannot find the Linux setup:

      [IMG:https://imgur.com/dvU94Vu]

      I don't know why, but during install it definitely is not setting up the boot sequence correctly from what I can see. Any ideas on what can be tried to fix the boot?
    • to ZeroGravitas23:
      When you boot to GRUB, you only have to edit this row:
      linux /boot/vmlinuz-4.14 bla bla root=UUID=( here set your root UUID instead of /dev/sdb1 ) without the parentheses
      After changing it, press F10 - your OMV should start up right.
      It is important to update your OMV for the change to be permanent.
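      To illustrate the edit Bohatyr describes, using the UUID ZeroGravitas23 quoted earlier as an example (substitute the UUID blkid reports for your own root partition), the kernel line would end up reading roughly:

      ```shell
      # Before (as generated by the installer):
      #   linux /boot/vmlinuz-4.14 ... root=/dev/sdb1 ro quiet
      # After the edit (example UUID from this thread; use your own):
      linux /boot/vmlinuz-4.14 root=UUID=e984d5db-5116-4c31-bfe7-c2a39775f9eb ro quiet
      ```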
    • Thank you for the quick reply, Bohatyr.

      Your steps worked like a charm (once I got the UUID correct, haha).

      I did notice that the change isn't persistent, so how do we permanently change the boot so the UUID sticks? Do we just need to edit fstab?

      This is what is shown on the fstab after successfully changing GRUB and booting into OMV:

      Source Code

      # /etc/fstab: static file system information.
      #
      # Use 'blkid' to print the universally unique identifier for a
      # device; this may be used with UUID= as a more robust way to name devices
      # that works even if disks are added and removed. See fstab(5).
      # <file system> <mount point> <type> <options> <dump> <pass>
      # / was on /dev/sdb1 during installation
      UUID=9ea1746d-8a08-49f3-8146-212c3b6264a4 / ext4 errors=remount-ro 0 1
      # swap was on /dev/sdb5 during installation
      UUID=facef956-bc3f-406c-90f7-b76ed25c4e37 none swap sw 0 0
      tmpfs /tmp tmpfs defaults 0 0
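      Note that fstab already mounts by UUID; it is the kernel line in /boot/grub/grub.cfg that still names a raw device. What update-grub effectively does to that line when it regenerates the config can be sketched like this (illustrative strings only, reusing the root UUID from the fstab above):

      ```shell
      # Sketch of the substitution update-grub performs on the kernel line
      # when regenerating /boot/grub/grub.cfg (sample strings, not live config)
      line='linux /boot/vmlinuz-4.14 ro quiet root=/dev/sdb1'
      fixed=$(printf '%s\n' "$line" | sed 's|root=/dev/[a-z0-9]*|root=UUID=9ea1746d-8a08-49f3-8146-212c3b6264a4|')
      echo "$fixed"
      ```

      On the real system you don't run sed by hand; running update-grub as root after a successful boot performs the equivalent rewrite permanently.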

    • You are 100% correct. After correcting the "linux" line in the boot menu, booting up, and running all of the available updates for the system, it is now able to boot into OMV 4 successfully after rebooting.

      Thank you for all of the help, and I learned something from this which is the best part.

      So for anyone that seems to be having the same issue:

      Edit GRUB where this is located: linux /boot/vmlinuz-4.14 bla bla root=/dev/***
      Replace the root=/dev/*** with the below:
      root=UUID="Your UUID for the main OMV partition"

      @ryecoaaron Any idea why this might be happening to some of us? The install seems to be choosing the incorrect partition when creating the initial boot sequence. Changing it to the precise UUID and then booting into OMV and running any system updates necessary seems to cement the UUID when corrected in GRUB first.

      Just want to make sure other users are aware of how to fix this or why it might be happening.
    • ZeroGravitas23 wrote:

      Any idea why this might be happening to some of us? The install seems to be choosing the incorrect partition when creating the initial boot sequence. Changing it to the precise UUID and then booting into OMV and running any system updates necessary seems to cement the UUID when corrected in GRUB first.
      This isn't an OMV thing. This is caused by the Debian installer. Not sure why your systems are using the device name instead of the uuid. All of my systems that I just checked are using UUID for grub, including a system installed from a usb stick to a usb stick.
      omv 4.1.11 arrakis | 64 bit | 4.15 proxmox kernel | omvextrasorg 4.1.11
      omv-extras.org plugins source code and issue tracker - github

    • ryecoaaron wrote:

      ZeroGravitas23 wrote:

      Any idea why this might be happening to some of us? The install seems to be choosing the incorrect partition when creating the initial boot sequence. Changing it to the precise UUID and then booting into OMV and running any system updates necessary seems to cement the UUID when corrected in GRUB first.
      This isn't an OMV thing. This is caused by the debian installer. Not sure why your systems are using the device name instead of the uuid. All of my systems I just checked are using UUID for grub including a system installed from a usb stick to a usb stick.
      Did you check right after the installation finished, when OMV boots for the first time from disk? I noticed that the OMV/Debian installer does not use UUIDs for the first boot. It is corrected only after the first boot, probably through apt-get when the grub config gets updated; I didn't check, though.

      So if you add drives before booting into the OS for the first time (as I did) there's a high chance OMV won't be able to boot without adjusting the boot parameters manually since the drive letters will have changed.
    • Just to clarify, during the entire time of trying to move from either OMV 2 or OMV 3, I personally did not have any drives connected during or after each installation minus of course the SSD I was trying to install OMV 4 onto.

      I was attempting to install OMV 4 from a USB image onto an SSD with all of the extra data drives removed from the system. Upon installation and rebooting for the first time into the OS it would venture into GRUB and not find any OS drive to boot from even though the only drive available was the SSD that was just used to install OMV 4 onto.

      And Domi, you are 100% correct that the first time the system tries to go into the OS, it does not use the UUID for the boot. It tried to use something along the lines of /dev/sda when really the correct drive was /dev/sdg1 (this is also a little odd, since the OMV installation seems to have made several partitions on /dev/sdg and the boot existed precisely on sdg1).

      So something during installation must be reverting the GRUB setup to the /dev/*** nomenclature instead of using the UUID. I am not versed enough in GRUB or Linux to understand why Debian is doing this, but it is quite frustrating if you don't know how to navigate GRUB well. Thank you again for all of the help so far. It has solved the problem for me; I just wish we knew the exact cause in case other users see the same thing.
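      A quick way to confirm which identifier the generated config actually uses is to pull the root= token out of the kernel line; on a real system that would be `grep -o 'root=[^ ]*' /boot/grub/grub.cfg | sort -u`. A self-contained sketch of the extraction, using a sample line shaped like the ones in this thread:

      ```shell
      # Sample kernel line as it appears right after installation (device
      # name, not UUID); on a real system read /boot/grub/grub.cfg instead.
      sample='linux /boot/vmlinuz-4.14.0-0.bpo.3-amd64 root=/dev/sda1 ro quiet'
      printf '%s\n' "$sample" | grep -o 'root=[^ ]*'
      ```

      If the output shows root=/dev/... rather than root=UUID=..., the boot will break as soon as device names shuffle.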
    • hi all,

      today I finally decided to upgrade from omv2 to omv4; windows10 has deprecated the old smb v1 protocol, which convinced me to make a fresh install on my nas.

      I installed from a usb medium, sda, to an ssd, sdb, connected via usb (to spare a sata port) and with all the old data hdds unplugged.
      After rebooting without the installation medium, the ssd switched from sdb to sda and I faced the same problem, mdadm array etc.
      Plugging in the data hdds (5), as suggested, did not solve it, as the os ssd was now sdf.

      The problem is in the grub config: its entry has the wrong device id as the linux root (good only during installation), in my case root=/dev/sdb1.

      Long story short, to get rid of the problem I pressed 'e' in the grub menu to edit the parameters and changed the root attribute to the current device, so I managed to boot; then in the shell I executed "update-grub", which changed the root value to the uuid version 'root=UUID=...'.

      I hope it can be of some aid to whoever is still facing it.

      Bye
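      The recovery described above can be condensed into two steps (a sketch only; the device names are examples and will differ per system):

      ```shell
      # 1. At the GRUB menu press 'e', find the 'linux' line and change
      #      root=/dev/sdb1  ->  root=/dev/sdf1   (whatever blkid shows now)
      #    then press F10 to boot with the one-time change.
      #
      # 2. Once booted and logged in as root, make the fix permanent:
      #      update-grub
      #    which regenerates /boot/grub/grub.cfg using root=UUID=...
      ```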
    • AndreaDiPietro wrote:

      Long story short, to get rid of the problem I pressed 'e' in the grub menu to edit parameters and I changed the root attribute with the current device so I managed to boot and then on the shell I executed "update-grub" that changed root values in uuid version 'root=UUID=...'.
      Hey mate - sorry, can you be a bit more specific here?

      I got to the edit screen, but I'm just not clear exactly where I should be making changes and what to change it to. Code below, if you can help please?

      Shell-Script

      1. setparams 'Debian GNU/Linux'
      2. load_video
      3. insmod gzio
      4. if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
      5. insmod part_msdos
      6. insmod ext2
      7. set root='hd7,msdos1'
      8. if [ x$feature_platform_search_hint = xy ]; then
      9. search --no-floppy --fs-uuid --set=root --hint-bios=hd7,msdos1 --hint-efi=hd7,msdos1 --hint-baremetal=ahci7,msdos1 89cc63e0-dbae-427e-973e-411ab11bd160
      10. else
      11. search --no-floppy --fs-uuid --set=root 89cc63e0-dbae-427e-973e-411ab11bd160
      12. fi
      13. echo 'Loading Linux 4.14.0-0.bpo.3-amd64 ...'
      14. linux /boot/vmlinuz-4.14.0-0.bpo.3-amd64 root=/dev/sdh1 ro quiet
      15. echo 'Loading initial ramdisk ...'
      16. initrd /boot/initrd.img-4.14.0-0.bpo.3-amd64
      UPDATE: Got it sorted.

      Boot up - when the terminal stops loading the error and gives you the mdadm prompt, type 'blkid'.
      This should show you all attached disks. You're looking for the one toward the bottom (arrays/sata disks should be at the top). For me it was a bit ambiguous - there were two that my boot USB could have been, so I just remembered both (happened to be sdg1 and sdh1).
      Reboot, and when you see the blue-screen GRUB, hit 'e'.
      In the above code block change the 'linux' line (#14) - so mine became 'root=/dev/sdg1'.
      Then hit F10 to boot. If it does, you've got the right one - if not, go back and try another disk name/id.
      You should now have a 'login' prompt. Log in, then type 'update-grub'. This will make your change to GRUB permanent.

      Most of the above has been said in one form or another by others, I just had trouble following it as a newb, so hopefully my elongated writeup will help another newb!


    • bbddpp wrote:

      I googled this issue and it brought me here, as I see this warning all the time now, not just at boot but any time I run package updates.

      I also found another thread

      askubuntu.com/questions/834903…dm-conf-defines-no-arrays

      which mentions using an edit in /etc/mdadm/mdadm.conf

      ARRAY <ignore> devices=/dev/sda

      as a solution. Anyone tried this? Seems pretty simple.
      That solution didn't work for me.
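      For anyone who still wants to test the askubuntu suggestion, the edit would look roughly like this (a sketch; it assumes /dev/sda genuinely carries no array, and the initramfs normally has to be rebuilt before the early-boot warning goes away):

      ```shell
      # /etc/mdadm/mdadm.conf -- tell mdadm not to assemble from this device
      ARRAY <ignore> devices=/dev/sda

      # Afterwards, regenerate the initramfs so early boot picks up the
      # change (run as root):
      #   update-initramfs -u
      ```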
      OMV 4.x - ASRock Rack C2550D4I - 16GB ECC - Silverstone DS380
    • chclark wrote:

      I'm getting this.

      Just done a fresh install: unplugged all my drives in the software raid, installed the os onto the ssd, which then booted fine, and I could access the web interface. Then I shut down, reconnected the drives and started the system again, but all I get is

      mdadm: no arrays found in config file or automatically
      gave up waiting for root file system device. common problems:
      - boot args (cat /proc/cmdline)
      - check rootdelay= (did the system wait long enough?)
      - missing modules (cat /proc/modules; ls /dev)
      ALERT! /dev/sda1 does not exist. Dropping to a shell!

      then it goes to busybox.

      But if I shut down and unplug the raid drives and power back up, it boots normally. Any suggestions?

      I want to add my input here. I've read through this thread--the post from chclark on March 4th (above) states exactly what I experienced. I'd like to point out this was nearly 5 months ago, so as far as I can tell, nothing has changed. I am a newbie at this DIY NAS thing, and also pretty much with Linux language and procedures. (I'm pretty adept with Windows.) I tried two other NAS programs; both bombed for me! I really appreciated the guidance with OpenMediaVault, the relative ease of installation, and the sincere attempts to make this useful to the non-savvy consumer. Having gotten so far, you can imagine my dismay--in this my third software attempt--at running into this problem with version 4.1.3. For me the only real solution was to go back to the previous version, 3.0.86. That installed easily and works.

      So my question from here is, how and when should I be able to upgrade to version 4? Should this be done from inside the program with packages now? Should I wait till 4 is "perfected"? How will I know that it is safe to upgrade? Maybe I don't need to upgrade, as so far I have not had problems with Version 3.

      Michael
    • AndreaDiPietro wrote:

      hi all,

      [...]

      Long story short, to get rid of the problem I pressed 'e' in the grub menu to edit parameters and I changed the root attribute with the current device so I managed to boot and then on the shell I executed "update-grub" that changed root values in uuid version 'root=UUID=...'.
      Hey guys. I'm coming to update my OMV 2 to 4 as well. I had the same issue that you all had. The solution was to not even manually do anything under 'e' in the grub. I unplugged all drives except the omv one, and without the others it started. I ran the "update-grub" command, turned off, replugged the drives, and it's now working again.

      Thanks guys
    • couch_potatozes wrote:

      AndreaDiPietro wrote:

      Long story short, to get rid of the problem I pressed 'e' in the grub menu to edit parameters and I changed the root attribute with the current device so I managed to boot and then on the shell I executed "update-grub" that changed root values in uuid version 'root=UUID=...'.

      [...]

      UPDATE: Got it sorted. [...] Edit the 'linux' line in GRUB to the correct root device, hit F10 to boot, log in, then type 'update-grub'. This will make your change to GRUB permanent.
      Thanks, this resolved my issue. I've fresh installed OMV4 now, only because my samsung ssd was reporting bad; strangely, the smart test says it's ok, though.
      OMV 3.0.58 - 64 bit - Nut, SABnzbd, Sonarr, Couchpotato
      HP N40L Microserver, 8gb Ram, 5 x 3TB HDD Raid5, 1 x 120GB 2.5" SSD (OS)