Downgrade Kernel of OMV4

  • Hi,


    My RAID 6 array won't work on any kernel higher than 4.14 bpo3.
    If I try 4.19, my RAID array blocks:


    Code
    Jan 18 14:59:52 BigNAS kernel: [  363.708804] INFO: task md127_raid6:270 blocked for more than 120 seconds.
    Jan 18 14:59:52 BigNAS kernel: [  363.708873]       Tainted: G     U      E     4.19.0-0.bpo.1-amd64 #1 Debian 4.19.12-1~bpo9+1
    Jan 18 14:59:52 BigNAS kernel: [  363.708953] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    Jan 18 14:59:52 BigNAS kernel: [  363.709030] md127_raid6     D    0   270      2 0x80000000
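
    For anyone hitting the same trace, a few diagnostics are worth capturing before downgrading; a minimal sketch, assuming the array shows up as /dev/md127 as in the log above:

    Code
    # See whether the array is mid-resync and what state md reports
    cat /proc/mdstat
    mdadm --detail /dev/md127

    # Capture the full blocked-task backtrace for a bug report
    dmesg | grep -A 20 "blocked for more than"

    # The 120 s message is only a watchdog; writing 0 silences it but fixes nothing
    echo 0 > /proc/sys/kernel/hung_task_timeout_secs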

    I tried all the 4.18 and 4.19 kernels and every one of them blocks my RAID array.
    The 4.14 kernel is no longer available in OMV-Extras in my OMV GUI.
    Is there a way to reinstall this kernel?
    Thanks
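
    One possible route, assuming the 4.14 bpo image is still resolvable by apt from stretch-backports (if not, the .deb can usually be fetched from snapshot.debian.org and installed with dpkg -i):

    Code
    apt-get update
    # Reinstall the last-known-good backports kernel and rebuild the boot menu
    apt-get install linux-image-4.14.0-0.bpo.3-amd64 linux-headers-4.14.0-0.bpo.3-amd64
    update-grub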

    Finally, I installed OMV 4 on another available disk to run my tests. Here I am on 4.14, and I find my RAID clean, accessible, and error-free... I am copying my data off, then I will delete the array and build a fresh RAID 6 under Linux 4.19, which should normally settle things... I hope.
    I'm just worried that my SATA card won't be handled properly by future Linux updates... I'm using this 10x SATA card (https://www.amazon.fr/gp/product/B01ENKHLS6/ref=ppx_yo_dt_b_asin_title_o07__o00_s00?ie=UTF8&psc=1)
    Thanks, I'll keep following how this evolves...

  • I'm giving up...
    I tried a lot of different things (a command-level summary follows this list):
    . I swapped my SATA card for a Marvell 9215 to test.
    . I did a clean installation of OMV 4.14.
    . I applied all OMV and OMV-Extras updates.
    . I erased all disks and built a RAID 6 array under 4.19 bpo2, and... got the same error: mdadm blocked for more than 120 sec (not tainted on 4.19 bpo2).
    . I tried adding GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=0 dm_mod.use_blk_mq=0" to /etc/default/grub. Nothing changed...
    . So I downgraded the kernel to 4.14 and pinned it with apt-mark hold 4.14.0-0.bpo.3-amd64.
    . The array resync works well without any freeze or error message.
    . Every disk is OK per its SMART status.
    . I checked every SATA cable.
    So I can stay on 4.14, but I'd prefer to be up to date...
    Any advice??
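
    The command-level summary of the steps above; the device names /dev/sd[b-k] are placeholders for the ten-port card, and the blk-mq note reflects 4.19 switching SCSI to multi-queue by default:

    Code
    # Rebuild the array from scratch (destroys data; /dev/sd[b-k] are example devices)
    mdadm --create /dev/md127 --level=6 --raid-devices=10 /dev/sd[b-k]

    # Test with multi-queue I/O disabled, via /etc/default/grub:
    #   GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=0 dm_mod.use_blk_mq=0"
    update-grub && reboot

    # After downgrading, pin the working kernel so upgrades don't pull 4.19 back in
    apt-mark hold linux-image-4.14.0-0.bpo.3-amd64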

    • Official post

    Any advice??

    Unfortunately, I don't have any good way to test this, and I'm not having the issue on my systems that still use mdadm RAID. Using the proxmox kernel is the first thing I would try; I run it on about half of my systems now.
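
    Newer omv-extras releases offer the proxmox kernel as a one-click option on the Kernel tab; a manual route for an OMV4/stretch box might look like this (the stretch repo line, key file name, and the pve-kernel-4.15 version are assumptions, not tested here):

    Code
    # Add the Proxmox no-subscription repo for Debian stretch and its signing key
    echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
        > /etc/apt/sources.list.d/pve-kernel.list
    wget http://download.proxmox.com/debian/proxmox-ve-release-5.x.gpg \
        -O /etc/apt/trusted.gpg.d/proxmox-ve-release-5.x.gpg

    apt-get update
    apt-get install pve-kernel-4.15
    update-grub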


  • Thank you again ryecoaaron, your advice is always a great help.


    My OMV has been running on the proxmox kernel for 2 days, with no errors since!!


    The RAID array works like a charm and all my services are up (Docker, Plex, remote mount...)
