SOLVED No Boot after last kernel update... HELP!!!!

  • Hi,


First of all, thanks for the nice work with OMV.
I have a serious problem though... After using OMV for the last 2 1/2 months, my system stopped booting after the first reboot following the last kernel update. I can't recall exactly, but I believe there was a kernel update in the middle of last week. My system runs 24/7, so I didn't reboot at the time, but in order to move the system to a new location I had to shut it down last Wednesday... After that, the system never came up by itself anymore. The RAID-6 arrays fail to start and the system stops and asks for troubleshooting.
I managed to find out that the problem has something to do with partition detection in this new update, because if I stop the "bogus" RAID arrays and run partprobe on each drive, I can then start the arrays with mdadm and they come up properly...
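Roughly, the manual recovery I have to do after each failed boot looks like this (just a sketch; the md device and drive names are examples from my setup):

Code
mdadm --stop /dev/md0        # stop the incorrectly assembled "bogus" array
partprobe /dev/sda           # force the kernel to re-read the partition table
partprobe /dev/sdb           # ...repeat for every member drive
mdadm --assemble --scan      # the arrays now assemble with all partitions visible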


    Any idea?


Here is a boot log and my RAID configuration:



    This is the RAID layout after this failed boot:



And this is the RAID layout after running partprobe (the arrays are in read-only mode on purpose):
This is my correct RAID layout.



    Any help would be greatly appreciated.


    Cheers

Maybe you can boot the old kernel and see if that helps. Which kernel do you have?
The drives are mounted by UUID, so a kernel update shouldn't cause any problems there.

  • Hi


    Kernel is:
    root@ANANAS:~# uname -a
    Linux ANANAS 2.6.32-5-amd64 #1 SMP Mon Feb 25 00:26:11 UTC 2013 x86_64 GNU/Linux


but under /boot I only have:
    root@ANANAS:/boot# ls
    config-2.6.32-5-amd64 grub initrd.img-2.6.32-5-amd64 System.map-2.6.32-5-amd64 vmlinuz-2.6.32-5-amd64


UUIDs don't help here, because the issue is that not all of the drive partitions are recognized under /proc/partitions until I manually run partprobe.
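For example, this is roughly what I see (sdX stands for whichever drive is affected; illustration only):

Code
cat /proc/partitions      # after the failed boot, some sdX1/sdX2 entries are missing
partprobe /dev/sdX        # re-read that drive's partition table
cat /proc/partitions      # the missing partitions now show up and the arrays can start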


    Thanks for your help.

Not sure what your problem is, but putting three RAID arrays on the same devices will not help performance, make anything more secure, or help in any other way. You could achieve the same with a single MD array, LVM, and logical volumes, roughly along the lines of the sketch below.
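Just a sketch of what I mean (device names, sizes, and member counts are placeholders, not your actual layout):

Code
mdadm --create /dev/md0 --level=6 --raid-devices=8 /dev/sd[a-h]1   # one RAID-6 over all members
pvcreate /dev/md0                # use the single array as an LVM physical volume
vgcreate data /dev/md0
lvcreate -L 2T -n share1 data    # carve out logical volumes as needed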


Just out of curiosity, what was the design principle behind setting up three RAIDs on the same physical disks?

    Everything is possible, sometimes it requires Google to find out how.

OK, the principle is:


    Performance is not an issue.
RAID-6 is mandatory to survive a double disk failure.
    Maximum amount of usable RAID space is desired.


So if you can propose a different way to get 10 TB usable out of 3x 3 TB drives, 3x 2 TB drives, and 2x 1 TB drives, I would appreciate it.
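Just to show how the arithmetic works out, this is one way to reach that figure with stacked RAID-6 arrays over 1 TB partitions (an assumed slicing for illustration, not necessarily my exact layout):

Code
# RAID-6 over one 1 TB partition on all 8 drives:                (8 - 2) x 1 TB = 6 TB
# RAID-6 over a second 1 TB partition on the six drives >= 2 TB: (6 - 2) x 1 TB = 4 TB
# Total usable:                                                   6 TB + 4 TB  = 10 TB
# (the remaining 1 TB on each 3 TB drive is not part of this sum)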



About my problem: the system does not boot because some of the partitions are not recognized and do not appear in /proc/partitions unless I manually run partprobe after the boot fails.

I have no idea what is going on. I am not aware of anything changing in the updated kernel with respect to how partitions are recognized.


Nothing like this has occurred anywhere else.


I suggest (and this is painful) writing down all your settings and reinstalling OMV from scratch. Do not touch any of the data disks; you can bring them in again after reinstalling OMV with the same paths and mount points, along the lines of the sketch below.
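Bringing the existing arrays back in after the reinstall is typically something like this (a sketch, assuming the md metadata on the data disks is intact):

Code
mdadm --assemble --scan                           # detect and assemble the existing arrays
mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # record them so they assemble at boot
update-initramfs -u                               # rebuild the initramfs with the new config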

    Everything is possible, sometimes it requires Google to find out how.

The only other symptom I have is that no disk displays serial number information in the OMV web interface...


But yes, apparently that would be the only way to get my system back...

  • Solved.


Recovered by restoring /boot from a backup.
I then cleaned up the stale RAID definitions that were wrong on each disk by using

    Code
mdadm --zero-superblock /dev/sda    # run against the whole-disk device of each affected disk
The RAID is still working because I built the arrays on partitions instead of on the full drives.


If I had run mdadm --zero-superblock /dev/sda1, then yes, it would have destroyed my array.
But that was not the case.
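If anyone needs to do the same, it is worth checking which device actually carries the superblock you want to keep before zeroing anything (sdX is just a placeholder):

Code
mdadm --examine /dev/sdX     # whole-disk device: this is where my stale, wrong metadata was
mdadm --examine /dev/sdX1    # partition: the real array member, leave this one alone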
