Onboard NIC / LSI HBA conflict

  • Hello all,


    Yesterday I had problems installing OMV on a new PC. I have some basic PC knowledge, but since I'm not familiar with Linux, I had to do a fair amount of research. I believe I have corrected the problem, and that the issue has more to do with the Linux kernel than with OMV, but I would appreciate your thoughts.


    When I installed OMV without my LSI 9201 SAS HBA card installed, everything worked fine. After shutting down and installing the card, neither the card nor my onboard Realtek gigabit NIC would work. So I tried reinstalling OMV with the LSI card present, and during installation OMV couldn't detect my onboard NIC. I tried reinstalling a couple of different ways, but the result was always the same. During boot I saw messages like this:


    r8169 0000:22:00.0: no MMIO resource found
    xhci_hcd 0000:03:00.0: init 0000:03:00.0 fail
    mpt2sas_cm0: unable to map adapter memory!
    sp5100-tco sp5100-tco: Watchdog hardware is disabled


    Those messages indicated to me that there was some sort of resource conflict. So I did a bunch of other searching and found people with similar problems. I added the kernel parameter pci=realloc=off during installation, and everything was detected properly. I permanently added it to GRUB's configuration file, and the system appears to be just fine. This "issue" appears to have been noticed as early as 2015, as I found some people discussing it in regard to Linux kernel 3.19-rc6: https://www.spinics.net/lists/linux-pci/msg38416.html
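
    For reference, this is roughly how I made it permanent (standard Debian GRUB setup; if your GRUB_CMDLINE_LINUX_DEFAULT line already has other options, add the parameter to them rather than replacing the line):

    # nano /etc/default/grub
    GRUB_CMDLINE_LINUX_DEFAULT="quiet pci=realloc=off"
    # update-grub
    # reboot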


    I also noticed that my onboard NIC was referred to as enp24s0 when the LSI card was not installed, and then, with pci=realloc=off and the LSI card installed, it was referred to as enp34s0.
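
    From what I've read, this makes sense under systemd's predictable interface naming: enp24s0 encodes PCI bus 24, slot 0, and adding the HBA shifted the NIC to bus 34 (0x22 in hex, which matches the r8169 at 0000:22:00.0 in the boot messages above). To check which PCI path a name maps to, the standard udev tools work:

    # ls -l /sys/class/net/
    # udevadm info -q property -p /sys/class/net/enp34s0 | grep ID_NET_NAME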


    So I suppose my questions are:


    Why is my system doing this, and why can't Linux detect and allocate resources to my hardware properly? (Edited to add: I don't want to come across like I'm just blaming Linux, haha; perhaps my motherboard is causing the problem?)


    Are there any negative effects of using pci=realloc=off in terms of stability, performance, etc.?


    I plan on using this system and OMV for many years to come. Is this "fix" going to keep working, or will it "break" sometime in the future, leaving me stranded with non-functional hardware?


    Thank you very much for your thoughts on this.

  • I am on OMV 5.5.

    I used the onboard SATA ports and the onboard Realtek NIC to install OMV, and everything worked fine.

    But I ran out of SATA ports, so I added an LSI HBA card.

    The LSI HBA works fine, but I lost the NIC.


    Solution:

    1. Log in to the console.

    2. # omv-firstaid

    3. Choose 1: Configure network interface, and input the Ethernet settings.

    4. Done.


    This process takes a while to activate, but the NIC is back to work!
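
    To find the current interface name before entering the settings, the standard iproute2 commands work:

    # ip -br link
    # ip -br addr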


    Another issue:

    The new LSI HBA card has higher priority than the onboard SATA ports, so all my disks shown in the Disks menu have different device names.

    For example, my boot disk was /dev/sda, but after the LSI HBA card was installed (I attached 2 disks to the card), it became /dev/sdc.

    I received an mdadm error while booting, but the system still booted successfully.
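
    This reordering is expected: /dev/sdX letters are assigned by probe order and are not stable across hardware changes. The persistent identifiers for each disk can be listed like this:

    # ls -l /dev/disk/by-id/
    # blkid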


    Cliff Liao

  • In my machine the disks connected to the HBA (the ones in my DAS box) are enumerated first. But this doesn't matter because those IDs are not used by anything that would become broken or confused when they change.


    However, if you did use those IDs in homebrewed scripts and such, those scripts can and will break.
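
    For example, if such a script referenced /dev/sdc directly, the stable replacement is the by-id path (ata-EXAMPLE_SERIAL here is a hypothetical placeholder; list the real names with ls -l /dev/disk/by-id/):

    smartctl -a /dev/sdc                               (fragile, breaks when probe order changes)
    smartctl -a /dev/disk/by-id/ata-EXAMPLE_SERIAL     (stable across reordering)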

    --
    Google is your friend and Bob's your uncle!


    OMV AMD64 7.x on headless Chenbro NR12000 1U 1x 8m Quad Core E3-1220 3.1GHz 32GB ECC RAM.


  • Yes, everything is working fine except there is an "mdadm: No devices listed in conf file were found" error.


    Googled it and found the solution:

    # nano /etc/default/grub

    Add rootdelay to this line: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=10"

    CTRL-O, CTRL-X

    # update-grub

    # reboot


    The disks connected to the LSI HBA card need some delay to get ready.

    Because Linux supports booting from RAID, mdadm scans the devices during boot and tries to assemble the array, but the disks are not ready yet.
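
    If the delay alone doesn't clear the message, another commonly suggested fix (assuming a standard Debian layout; back up the file first) is to regenerate the array list in mdadm.conf and rebuild the initramfs:

    # mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    # update-initramfs -u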

