HP microserver Gen10


    • Short Bump:

      HPE released a new BIOS for the Gen10 Microserver. The changelog:
      - Correct memory error of SMBIOS HCT testing
      - Update for Red Hat Enterprise Linux 7.4 certification.
      - Add ACPI table BERT and HEST to report memory error

      Looks like it will fix some of the dmesg ACPI warnings and bring better Linux support.

      Also, I have some power consumption figures for my System:
      • Microserver Gen10, 8GB RAM, X3216 CPU, headless, no keyboard/usb/etc.
      • Debian Stretch, Mainline Kernel 4.13.x, OMV4.x
      • Disks:
        • Root/Cache: SSD Crucial_CT275MX3 (AMD AHCI)
        • RAID1 1.0TB: 2xTOSHIBA MK1059GS (Marvell AHCI)
        • RAID1 3.0TB: 2xTOSHIBA DT01ACA3 (Marvell AHCI)
        • RAID1 0.5TB: Hitachi HTS54755, SAMSUNG HM500JI (via HighPoint RocketRAID 2300 SATA, taped inside the case)
      • idle, disks standby: ~20W
      • idle, disks idle/active: ~32W
      • measured at 230V AC via power-meter with impulse counter in a dedicated wall socket
      The power figures might seem a bit high, but the DT01ACA3 disks are real power hogs - even in standby they use more than 1W each. The old HPT controller also does no power management whatsoever. But since all the disks and the controller were unused and essentially free...

      For reference: My internet/WiFi router with battery backup alone uses 12W more or less permanently.

      Update:
      • installed BIOS/UEFI ZA10A320 from 2017-09-20; works fine, kept all settings.
        • The update works by extracting the EFI script and updater from the *.zip to /boot/efi, which is fs0: in the EFI shell - no USB drive needed. Afterwards, follow the flash instructions.
      • installed NIC firmware 20.6.41
        • download for RedHat; extract RPM to /
        • run /usr/lib/x86_64-linux-gnu/firmware-nic-broadcom-2.18.15-1.1/setup
        • version-bumped the NIC firmware in EFI quite a bit, but no changelog :(
      • IOMMU
        • manually enabled it in BIOS
        • added kernel boot parameter iommu=pt (via /etc/default/grub and sudo update-grub)
          • you must modify both GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX, otherwise the controller will not work in recovery mode
          • also, this obviously does not work for OMV's SystemrescueCD-addon
        • the marvell controller (still) works with all disks.
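      The GRUB change above can be made as a small idempotent edit. A minimal sketch, assuming the standard Debian layout of /etc/default/grub; the add_iommu_pt helper name and the sed invocation are my own illustration:

```shell
#!/bin/sh
# Append iommu=pt to both GRUB kernel command line variables.
# The file path is taken as an argument so the edit can be tried
# on a copy first; on a real system it would be /etc/default/grub,
# followed by `sudo update-grub`.
add_iommu_pt() {
    # Touch only the two variable definitions, and skip lines that
    # already contain iommu=pt so repeated runs change nothing.
    # Assumes the value ends with its closing quote on the same line,
    # which is the usual layout of /etc/default/grub.
    sed -i -E \
        '/^GRUB_CMDLINE_LINUX(_DEFAULT)?=/{/iommu=pt/!s/"$/ iommu=pt"/}' \
        "$1"
}
```

      Usage (as root): `add_iommu_pt /etc/default/grub && update-grub`. Editing both variables covers normal and recovery boot entries, per the note above.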



      Currently, dmesg looks like this:

      Source Code

      $ dmesg --level=warn,err
      [ 0.000000] ACPI BIOS Warning (bug): Optional FADT field Pm2ControlBlock has valid Length but zero Address: 0x0000000000000000/0x1 (20170531/tbfadt-658)
      [ 0.152777] PCCT header not found.
      [ 0.152777] pmd_set_huge: Cannot satisfy [mem 0xf8000000-0xf8200000] with a huge-page mapping due to MTRR override.
      [ 0.174800] [Firmware Bug]: HEST: Table contents overflow for hardware error source: 2.
      [ 0.282204] pnp 00:04: disabling [mem 0xfeb00000-0xfeb00fff] because it overlaps 0000:00:01.0 BAR 5 [mem 0xfeb00000-0xfeb3ffff]
      [ 1.274028] pci 0000:00:00.2: can't derive routing for PCI INT A
      [ 1.274031] pci 0000:00:00.2: PCI INT A: not connected
      [ 1.276556] PPR NX GT IA GA PC GA_vAPIC
      [ 1.324824] usb: port power management may be unreliable
      [ 1.371336] [Firmware Warn]: valid bits set for fields beyond structure
      [ 10.195494] Error: Driver 'pcspkr' is already registered, aborting...

      I think this is quite an improvement because most warnings are gone now.
      Two rules of success in life:
      1. Don't tell people everything you know.

      The post was edited 2 times, last by qwertz123: Update Info about EFI and NIC firmware update, IOMMU ().

    • I do not think posting the firmware publicly would be a good idea.

      Download links are:
      All Drivers and Firmware for Gen10
      UEFI/BIOS Firmware ZA10A350 (28 May 2018) for Linux --> Updates from Linux x86/amd64
      • This is a critical update, but there is no mention of what's critical about it
      • Fixes: NIC and HDD are detected as removable devices
      Broadcom NX1 Online Firmware Upgrade Utility for HPE MicroServer Gen10 Server with Linux x86 and x64

      Warning:
      When updating the UEFI/BIOS from a version <= ZA10A320, all settings will be reset to factory defaults. This means IOMMU will get disabled (again), and in my case thermal shutdown got re-enabled.

      Older Versions:
      UEFI/BIOS Firmware ZA10A340 (15 Mar 2018) for Linux --> Updates from Linux x86/amd64
      UEFI/BIOS Firmware ZA10A330 (29 Jan 2018) for UEFI --> Updates from UEFI shell
      UEFI/BIOS Firmware ZA10A330 (29 Jan 2018) for Linux --> Updates from Linux x86/amd64
      UEFI/BIOS Firmware ZA10A320 (12 Oct 2017) for Linux
      UEFI/BIOS Firmware ZA10A320 (12 Oct 2017) for UEFI --> Updates from UEFI shell
      UEFI/BIOS Firmware ZA10A290 (23 Jun 2017)
      UEFI/BIOS Firmware ZA10A280 (15 Jun 2017)

      To download the UEFI/BIOS you need an account with linked support or valid warranty status.
      The links ask for your login if you click on download.

      For German users:
      On hardwareluxx there is also a lengthy thread about the MicroServer. Since someone there mentioned this thread, it seems polite to exchange links.

      The post was edited 9 times, last by qwertz123: Added HPE firmware updater for linux Added 330 BIOS Added warning about factory defaults Added 340 BIOS and changelog ().

    • Currently quite fed up with the UEFI/BIOS of my Gen10: It's getting annoying. Very annoying.
      1. There is no option to set "do not ever halt boot".
        This means that on every little snag the BIOS runs into, it halts the boot. The list so far includes:
        1. temperature errors, including <10°C room/board temp (which can/must be switched off - the exception to the rule, otherwise random shutdowns)
        2. fan errors (even if the fan is currently spinning at full power!)
        3. memory size changes
        4. power failure hints (these don't stall the boot, I believe)
        This alone makes headless usage almost impossible, because you never know if the server will come up after a reboot. Also, you can never change the hardware without keyboard and video, because of the damn "are you sure you wanted to open the case and plug in a new RAM module?" prompt...


      2. PCIe initialization
        Plug in the wrong board (for example, an HPT Rocket 620) with a BIOS boot extension and the thing stalls. Without any message - the BIOS extension is not even displayed. The only clue is the BIOS debug code "92" in the lower right corner of the screen (which otherwise shows the HPE logo).
      3. Marvell SATA / PCIe initialization
        Same as before: it gets stuck with code 92 (again). I *believe* it's actually the Marvell BIOS which stalls while looking for disks. Known causes:
        1. Disk(s) which have SMART disabled.
        Again, no messages whatsoever, just the fancy logo. This caught me completely by surprise (thus my ranting here). I spent almost an hour diagnosing it before I remembered that the last thing I did before rebooting was to disable SMART on 2 of my disks (which curiously won't sleep with it enabled). And behold, upon unplugging those disks the machine boots!

      4. No way of disabling initialization of PCIe addon cards, or at least their BIOS extensions.
        (Getting repetitive? I've lost hours because of this!) This would solve / work around points 2) and 3) nicely. Come on HPE, this is basic stuff even on cheap boards!



      All in all I'd prefer a BIOS which can be made "dumb as a rock", because I'm doing the advanced stuff (RAID etc.) without it anyway *and* modern OSes almost completely reinitialize the hardware even with a working BIOS...
      But no, HPE tries to be smart and falls flat on its face (especially without iLO).

      Combined with the really slow boot (and thus extra punishment in case of trouble), the Gen10 is barely adequate for a semi-professional environment: At work I want a server that boots. Period. If there's something wrong with the hardware, let the OS deal with it.

      Now, iLO/IPMI/iKVM would be a great way to work around the above issues, but I'm usually happy without it. Never needed it till today, even on dirt cheap hardware which runs 24/7. But with the BIOS in its current state, an iKVM, IPMI or iLO would be nice.

      Edit:
      Just confirmed it - enabling smart support (smartctl -s on /dev/sd?) makes the server boot again with the disks.
      You can disable SMART support via the Marvell UEFI tool, but this does exactly nothing on "unconfigured" disks - after restarting the tool, settings are lost and SMART is enabled again. I presume they are saved in the RAID metadata block on the disk(s), which is not present on unconfigured disks.
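      The workaround from the edit above can be sketched as a tiny loop; the enable_smart wrapper name and disk list are my own illustration, `smartctl -s on` is the switch mentioned in the post:

```shell
#!/bin/sh
# Re-enable SMART on the given disks so the Marvell option ROM
# finds them again at POST. Purely a convenience wrapper around
# `smartctl -s on`; the caller passes the device nodes.
enable_smart() {
    for disk in "$@"; do
        smartctl -s on "$disk" || echo "failed on $disk" >&2
    done
}
```

      Usage (as root): `enable_smart /dev/sda /dev/sdb`.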

      Also another nitpick:
      Who at HPE had the bright idea to save 0.00002ct by installing only green LEDs (which are brighter than the sun, btw.) and skipping the reset switch? I mean, a red HDD LED would have broken the budget? Or one of those newfangled yellow or even blue LAN LEDs? It's really hard to tell if the server is booting when all LEDs are the same color and the LAN LED blinks twice as much as the HDD LED. Having all errors/warnings in amber isn't really helpful either, but at least you can differentiate between the power and message LED by shape.
      Resetting via the power switch is also quite bad for the disks; it even gets logged in SMART as an unexpected power loss.

      The post was edited 1 time, last by qwertz123: Nitpicking now. ().

    • @qwertz123 I feel you pal.

      I basically feel like I have deployed a bomb in the SOHO :°D that for now is running smoothly, but who knows?! :°D

      I didn't notice the bug on SMART, because I use SMART on every hard disk and they go to sleep fine. Does OMV tell you anything in syslog about why it didn't go to sleep? Maybe it's not SMART (that seems rather odd if you set it the right way). Are you using Plex by any chance? That would keep your hard disks always on, no matter whether you are using them or not.

      Below 10°C? Are you living in a fridge?! 8| How do you get those temps?!

      Completely agree with all your other picks, especially the LEDs; the memory and boot slowness not so much, unless you need maximum uptime and minimal downtime for critical usage like a webserver or such (but I guess this is not the server to use for that kind of stuff).

      I mean, yes, it's annoying, but how many times do you really change RAM?
      When you have settled on how much RAM you want, it's done: one time at configuration and you will never touch it again, unless something gets fucked up. But again, after the hassle of going there, opening up the PC and changing it, connecting a keyboard/video to it and saying yes is not that troublesome (is it stupid?! fuck yes!).

      Same goes for the boot speed... I don't care too much; it's running 24/7 under a UPS, so I hope it will stay online forever :°D. If not, waiting once a month for a reboot is no big deal (again, is it annoying?! fuck yes, especially because I don't understand what the heck it is doing under the hood to take all that time :°D; for sure it makes you want to throw your SSD out of the window, not being so useful :°D)

      Definitely the iLO absence is not a smart move from my point of view, but with OMV on it I guess I would not use it anyway... so again, a dumb move, but not essential.

      The LED choice is beyond me... I guess different LEDs are not fashionable :°D or they don't match the new HPE logo/color scheme!?!?!? :°D You know, fashion style is notoriously important for a server that will probably be put inside a dusty cabinet :°D

      Looking at the bright side, at least it got a dust filter... I've got an old Gen6 that doesn't even have one, with two fans blowing inside like hell... for a tower PC that has probably eaten up more dust than a junkie, that wasn't a smart decision. But again, I guess HP's decision department is sometimes composed of drunk monkeys :°D or they just think "this seems useful... well fuck that! let's just screw it up!" :°D
    • Wek wrote:

      @qwertz123
      Didn't notice the bug on smart, because I use smart on every hard disk, and they go to sleep fine..does omv tell you anything in syslog why it didn't go to sleep? maybe is not smart (that seems rather odd if you set it the right way) are you using plex by any chance, that would keep your hard disk always on no matter what if you are using it or not.
      I wasted a lot of time on this before I found out:
      OMV enables SMART "offline" data collection (smartctl -o on), which is generally a good idea. Unfortunately, it's an automated scan of some sort which should run every 4 hours (the disk firmware manages this). Contrary to all other disks I know, the Toshiba DT01ACAx00 series seems:
      • to take especially long for this scan (above 6 hours, 22000-24000s)
      • to wake up from sleep to scan
      Which explained why I could not find any activity even when using iosnoop-perf from perf-tools-unstable to watch for reads/writes.

      I hope this was only an initial scan (the disks were in storage a long time); otherwise I'll just patch the setting out of /usr/share/openmediavault/mkconf/smartmontools. The power consumption logging will show what's happening...
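      To check how long a given disk's offline scan is supposed to take, `smartctl -c` reports it. Here is a small parser sketch for that report; the offline_scan_seconds name is mine, and the "(NNNN) seconds" line layout follows smartmontools' usual output, so treat it as an assumption:

```shell
#!/bin/sh
# Read `smartctl -c` output on stdin and print the advertised
# offline data collection time in seconds. The value is usually
# printed as "(NNNN) seconds", often wrapped across two lines;
# firmware variations exist, so this is only a sketch.
offline_scan_seconds() {
    sed -n 's/.*data collection:[^(]*(\([0-9]\{1,\}\)) seconds.*/\1/p'
}
```

      Usage: `smartctl -c /dev/sdb | offline_scan_seconds`. On the disks above this should print something in the 22000-24000 range.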
      below 10°C are you living in a fridge?! 8| how do you get those temps?!
      The server is located in the attic of an (old) house, which is basically the same as saying it's outside :S
      I mean yes it's annoying, but how many times do you really change ram?
      Not often, I know. I was basically just whining at this point, working out my frustration ;(
      But IMHO it's important to document these caveats and I'll have to remember them - because those points are different from basically any computer I've ever owned.
      Same goes for the boot speed...I don't care too much it's running 24\7 under ups, so I hope it will stay online forever :°D,
      I've automated updates on almost all the machines I have; servers reboot automatically. So yeah, a non-issue, except when you are troubleshooting, which I was - hence the whining.
      Definitely the iLo absence is not a smart move from my point of view, but with omv on it I guess I would not use that anyway
      I agree: With any Linux OS it falls under "nice to have". For my use, having iLO would be worse than not having it: the BMC, which is running all the time, usually consumes 5-10W, which is expensive in the long run (because power is expensive here).

      Looking at the bright side at least it got a dust filter...
      Yeah, it's only a flimsy one, but still better than nothing. Before I got my first servers I'd never have believed that those things would filter out anything - but after 2+ years of runtime it shows...

      The post was edited 1 time, last by qwertz123: 2017-11-15 fixed bbcode typo ().

    • qwertz1234 wrote:

      Yeah, I's only a flimsy one, but still better than without it. Before I got my first servers i'd never believed that those thing would filter out anything - but after 2+ years runtime it shows...
      Hell yes! :°D My old HP Gen6 seemed to have created a new animal species when I got it out of the drawer :°D
    • qwertz123 wrote:

      IOMMU


      manually enabled it in BIOS

      added kernel boot parameter iommu=pt (via /etc/default/grub and sudo update-grub


      you must modify both GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX, otherwise the controller will not work in recovery mode

      also, this obviously does not work for OMV's SystemrescueCD-addon

      the marvell controller (still) works with all disks.
      Did you try to pass through a GPU to a KVM machine?
    • qwertz123 wrote:

      below 10°C are you living in a fridge?! 8| how do you get those temps?!
      The server is located in the attic of an (old) house, which is basically the same as saying its outside :S
      Hello, I'm new to this forum, but I have been using OMV for a while now. I received my HPE MicroServer Gen10 about two weeks ago; it's running the recent OMV release (3.0.91).
      It's a nice toy so far, doing everything I want. However, this morning the server was down. It turned out that it might be a temperature problem; the boot screen told me something like that. My server is located in the attic of our house, which is kind of insulated, but at recent temperatures (the outside value dropped below 0°C tonight) it gets cold there. Is there a workaround for this? I prefer to have it located there, because no one gets annoyed by the server's noise and there's no dirt in the room.
      Thanks in advance
      Christof

      The post was edited 1 time, last by christof1977 ().

    • pejot wrote:

      Did you try pass trough GPU to kvm machine?

      No, I'm currently not using virtualization on the machine. I was only happy that it booted.


      In other news, it seems like kernel 4.15 is the way to go for audio support:
      Phoronix: AMD Stoney Ridge Audio Supported By Linux 4.15

      Also, some other goodies, mostly for the GPU, seem to be included in the upcoming pull.

      christof1977 wrote:

      located in the roof top of our house, which is kind of isolated, but at recent temperatures (the outside value dropped below 0°C tonight) it get's cold there. Is there a workaround for this? I prefer to have it located there, because no one gets annoyed by the server's noise and there's no dirt in the room.

      It is a known problem, documented in the user's manual, and it can be switched off in the BIOS/UEFI. I do not know the setting offhand, but it was something about thermal shutdown or critical temperatures in one of the submenus. Please note that you'll also lose over-temperature protection. Also, the system does not crash or anything; it gets shut down cleanly via ACPI, as if you'd pressed the power button.
    • qwertz123 wrote:

      On other news it seems like kernel 4.15 is the way to go for audio support:
      Phoronix: AMD Stoney Ridge Audio Supported By Linux 4.15

      Also, some other goodies mostly for the gpu seem to be included in the upcoming pull.
      Good to hear.

      I tried GPU passthrough on ESXi (6/6.5) and Proxmox, no success.

      I have an Nvidia GT210 and a Radeon HD5450, neither working (even after making the GPU BIOS support EFI). A whole week of failures :(
      The Nvidia GT210 in ESXi is "working", but when I reboot the guest the whole server resets.
      If you have some news about the MS Gen10, share it here.
    • qwertz123 wrote:

      christof1977 wrote:

      located in the roof top of our house, which is kind of isolated, but at recent temperatures (the outside value dropped below 0°C tonight) it get's cold there. Is there a workaround for this? I prefer to have it located there, because no one gets annoyed by the server's noise and there's no dirt in the room.
      It is a known problem, documented in the users manual and can be switched off in BIOS/UEFI. I do not know the setting offhand, but it was something about thermal shutdown or critical temperatures in one of the submenus. Please note that you'll also lose over temperature protection. Also, the system does not crash or anything, it gets shut down cleanly via ACPI as if you'd pressed the power button.
      Thanks for your answer. I thought there was no possibility to switch this off. On the other hand, it's not very nice to not have any temperature protection then. For the moment, I relocated the machine to my cellar, which is indeed not the best solution, because of the dirt and dust.
      hi, I will buy an entry-level Gen10 MicroServer (X3216, 8GB RAM), then I'll install OMV, or alternatively install Proxmox and virtualize OMV in it.
      But I have some doubts that I hope you can help me solve:
      - can the processor support the workload generated by virtualizing these services (file server, DNS server, Nextcloud sync [calendar, tasks, contacts], Plex instance, P2P instance, printer server, web testing) for 2-3 users?
      - does this machine have hardware support for RAID 1, or do I need an additional card?
      thx :)
    • Thanks guys for your work!!

      At home I use OMV 2.2.14 with an AMD Sempron and 2GB of RAM on Linux 3.2.0-4-amd64; it is great, solid, and with 4 HDDs I can make backups of my data.

      At the office (small) we would like to do our backups on a server, so we bought a new HP Gen10. I have read the thread - what is the best method? We only need SMB.

      1) Update the BIOS
      2) Debian 9 minimal install
      3) Choose a new kernel?
      4) Install some drivers??
      5) Upgrade NIC firmware???
      6) Add OMV repo 3 or 4??

      My only need is a stable NAS.

      Thanks

      OT

      nothingtosay wrote:

      hi, I will buy an entry-level gen10 microsever (x3216 - 8GB ram), then i'll install OMV, or alternatively install PROXMOX and virtualize OMV in it.
      but I have a some doubts that I hope help me solve:
      - the processor can support the workload generated by virtualization of these services (file server, dns server, nextcloud syncro[calendar, task, contact], plex instance, p2p instance, printer server, web-testing) for 2-3 users?
      - have this machine the hardaware support for raid 1? or i need to take an additional card?
      thx :)
      Have you done some experiments? Why not KVM? The box should be Red Hat certified.
      Got my Gen10 today and updated the BIOS to ZA10A320 (2017/09/20). Afterwards I installed OMV3 as per the default instructions. I had to enable legacy mode in the BIOS (not UEFI) to boot from USB. After installation I updated, and now I have Debian 8.10 and kernel 4.9.0-0.bpo.4-amd64 GNU/Linux.

      Unfortunately this won't boot to the CLI with a monitor attached, as mentioned by others in this thread. I get a black screen. SSH and the web GUI work, though.

      What are my best options? It's supposed to show the CLI, but it doesn't. That doesn't make me feel comfortable about the stability of the system...

      Does OMV3 with a different kernel solve this issue? If so which kernel and what is the downside of another kernel?

      I could also install OMV4 if that solves the problem. But it isn't officially released, so what issues can I expect going that route? Will the default OMV4 installation ISO work, or do I need to install Debian first and OMV on top of it manually? What about the kernels? Which one works best in your experience?

      Thanks!
      On the Debian.org wiki I found a page about the ProLiant MicroServers. According to the article, the 'black screen issue' I'm having with Jessie also applies when installing stretch-4.9.0-4-amd64-netinst. It mentions that the solution is to install firmware-linux-nonfree. I tried that on Jessie (OMV3) with:

      Source Code

      apt-get update
      apt-get install firmware-linux-nonfree
      And after a reboot the console showed up :) so I guess this is the least-fuss way of getting OMV running on a Gen10 today.
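      One thing worth noting: firmware-linux-nonfree lives in Debian's non-free component, so the apt-get install only succeeds if non-free is enabled in the sources list. A quick check, as a sketch - the has_nonfree helper and the file argument are my own; on Debian the file is /etc/apt/sources.list:

```shell
#!/bin/sh
# Succeed if any active `deb` line in the given sources list
# already carries the non-free component.
has_nonfree() {
    grep -Eq '^[[:space:]]*deb[[:space:]].*[[:space:]]non-free([[:space:]]|$)' "$1"
}
```

      Usage: `has_nonfree /etc/apt/sources.list || echo "append non-free to the deb line, then apt-get update"`.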
    • Hello,
      my turn now

      1 ) Buy an HP Gen10 MicroServer from a nice reseller
      2 ) Go to HP and register yourself from another PC
      3 ) Turn on the HP, enter the BIOS, and go to Advanced where you see FRU Information; take a snapshot of every number.
      4 ) Log in at HP and go to this page support.hpe.com/hpsc/wc/public/linkWarranty from another PC
      5 ) Add your product; I used System Serial Number + System SKU Number
      6 ) *Optional* (and parallel to the next operations) Become angry if your warranty started a month before today; call HP and waste a lot of time resolving nothing
      7 ) Download the new firmware and, if you are curious, a RAID utility from another PC
      8 ) Follow the instructions and put the extracted zip on a USB drive
      9 ) Check in the BIOS that boot is UEFI, then exit the BIOS
      10 ) Put the USB key in a USB port on the front panel
      11 ) Boot and press F11 (sorry, don't remember) to get the boot menu; choose UEFI shell
      12 ) Follow the instructions that came with the files; remember that the keyboard is in English layout. Use fsx: (remember the ":") to go inside the USB drive, then cd xxx to move inside the folders and run the program
      13 ) Cross your fingers; this takes a long time (IMO), checking, wiping and filling each area of your BIOS (next time I'll do it only with a UPS attached; an update in this style creates crazy multiple points of failure IMO)
      14 ) Shutdown
      15 ) Power on and check the BIOS version
      16 ) Power off again
      17 ) Open the case and attach the main HDD to the free spare SATA port (buy only original HP products... or a cheap floppy-to-SATA power adapter and a cheap SATA cable)
      18 ) Close the case
      19 ) On another PC download OMV; I'm happy with openmediavault_4.0.14-amd64.iso
      20 ) If you have a Win10 PC, use Rufus with an internet connection to download the correct syslinux and create a bootable USB
      21 ) Power on, enter the BIOS and disable UEFI
      22 ) Power off
      23 ) Insert the USB
      24 ) Power on; now you see the RAID utility BIOS (ignore it), then the HP BIOS; choose the boot menu and your USB key
      25 ) The OMV install process should start
      26 ) Install; choose the correct HDD
      27 ) At the end, when it sends the reboot command, remove the USB
      28 ) Now OMV starts (mine without particular errors) and I can see the shell on VGA
      29 ) Log in as root with the password chosen during the install process
      30 ) In the shell run apt-get update
      31 ) You should get an error
      32 ) Read this thread Upgrade Debian 9 and 4.x and you'll discover the solution ....
      HINT:
      In /lib/python3.6/weakref.py, line 109: replace def remove(wr, selfref=ref(self)): with def remove(wr, selfref=ref(self), _atomic_removal=_remove_dead_weakref):
      Line 117: replace _remove_dead_weakref(d, wr.key) with _atomic_removal(d, wr.key)
      33 ) Run apt-get update
      34 ) Run apt-get install firmware-linux-nonfree (you may need to kill apt-related processes; I had to)
      35 ) Reboot
      36 ) Run omv-firstaid and tune the OMV password, network cards etc.
      37 ) Log in to the web admin shell
      38 ) Update the system and enjoy
      39 ) Thank the forum
      40 ) Most important: make a donation, or Montezuma's ghost will stretch your legs during the nights....
      41 ) After some days, if you have mail monitoring enabled, you may receive an error: mesg: ttyname failed: Inappropriate ioctl for device
      Edit /root/.profile and replace
      mesg n || true --> test -t 0 && mesg n || true
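      Step 41's edit can also be applied non-interactively. A minimal sketch - the fix_mesg wrapper and the file argument are mine; the replacement line is the one quoted in the step:

```shell
#!/bin/sh
# Guard the `mesg n` line in a profile file so it only runs on a
# real terminal, silencing the "ttyname failed: Inappropriate
# ioctl for device" mails from non-interactive logins. The file
# is passed explicitly; on the server it is /root/.profile.
fix_mesg() {
    sed -i 's/^mesg n || true$/test -t 0 \&\& mesg n || true/' "$1"
}
```

      Usage (as root): `fix_mesg /root/.profile`.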

      The post was edited 1 time, last by bbm: - 8 is not an emoticons - Added a step for ioctl debian BUG ().

    • News:

      - I have updated to kernel 4.14.0-bpo3
      - OMV is now 4.0.17-1

      Now I have some issues:
      1) If the antivirus is enabled (with default options), it kills the machine when I transfer 2GB of 200k odt files; only the web interface stays alive
      2) So I tried a soft reboot and big, big, big problem: the machine hangs waiting for the unmount (the AV process), then does a standard shutdown and at the end leaves me there with the last OK - no real power off / reboot of the system. An ACPI problem?
      3) At restart, a problem about a register that I can't find in the logs

      Looking in the logs I see:

      Source Code

      Jan 24 07:34:24 nasGST kernel: [ 0.000000] ACPI BIOS Warning (bug): Optional FADT field Pm2ControlBlock has valid Length but zero Address: 0x0000000000000000/0x1 (20170728/tbfadt-658)

      Source Code

      Jan 24 07:34:24 nasGST kernel: [ 0.105157] sysfs: cannot create duplicate filename '/firmware/acpi/tables/data/BERT'
      Jan 24 07:34:24 nasGST kernel: [ 0.105188] ------------[ cut here ]------------
      Jan 24 07:34:24 nasGST kernel: [ 0.105200] WARNING: CPU: 0 PID: 1 at /build/linux-3RM5ap/linux-4.14.13/fs/sysfs/dir.c:31 sysfs_warn_dup+0x51/0x60
      Jan 24 07:34:24 nasGST kernel: [ 0.105201] Modules linked in:
      Jan 24 07:34:24 nasGST kernel: [ 0.105208] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.14.0-0.bpo.3-amd64 #1 Debian 4.14.13-1~bpo9+1
      Jan 24 07:34:24 nasGST kernel: [ 0.105210] Hardware name: HPE ProLiant MicroServer Gen10/ProLiant MicroServer Gen10, BIOS 5.12 09/20/2017
      Jan 24 07:34:24 nasGST kernel: [ 0.105213] task: ffff9b99b59fc040 task.stack: ffffa9a000c88000
      Jan 24 07:34:24 nasGST kernel: [ 0.105218] RIP: 0010:sysfs_warn_dup+0x51/0x60
      Jan 24 07:34:24 nasGST kernel: [ 0.105220] RSP: 0018:ffffa9a000c8bdd8 EFLAGS: 00010286
      Jan 24 07:34:24 nasGST kernel: [ 0.105223] RAX: 0000000000000049 RBX: ffff9b99b6936000 RCX: ffffffffa3a4d248
      Jan 24 07:34:24 nasGST kernel: [ 0.105225] RDX: 0000000000000000 RSI: 0000000000000092 RDI: 0000000000000283
      Jan 24 07:34:24 nasGST kernel: [ 0.105227] RBP: ffffffffa3864cf5 R08: 0000000000000001 R09: 00000000000000f5
      Jan 24 07:34:24 nasGST kernel: [ 0.105229] R10: 0000000000000000 R11: 00000000000000f5 R12: ffff9b99b5bcb180
      Jan 24 07:34:24 nasGST kernel: [ 0.105230] R13: ffffffffa2e08880 R14: ffffffffa3ab2570 R15: ffff9b99b6932048
      Jan 24 07:34:24 nasGST kernel: [ 0.105233] FS: 0000000000000000(0000) GS:ffff9b99bec00000(0000) knlGS:0000000000000000
      Jan 24 07:34:24 nasGST kernel: [ 0.105236] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Jan 24 07:34:24 nasGST kernel: [ 0.105237] CR2: ffffa9a000d34000 CR3: 000000004b60a000 CR4: 00000000001406f0
      Jan 24 07:34:24 nasGST kernel: [ 0.105240] Call Trace:
      Jan 24 07:34:24 nasGST kernel: [ 0.105252] sysfs_add_file_mode_ns+0x10f/0x170
      Jan 24 07:34:24 nasGST kernel: [ 0.105259] acpi_sysfs_init+0x176/0x248
      Jan 24 07:34:24 nasGST kernel: [ 0.105266] ? set_debug_rodata+0x11/0x11
      Jan 24 07:34:24 nasGST kernel: [ 0.105270] acpi_init+0x1d0/0x361
      Jan 24 07:34:24 nasGST kernel: [ 0.105275] ? acpi_sleep_proc_init+0x24/0x24
      Jan 24 07:34:24 nasGST kernel: [ 0.105279] do_one_initcall+0x4e/0x190
      Jan 24 07:34:24 nasGST kernel: [ 0.105284] ? set_debug_rodata+0x11/0x11
      Jan 24 07:34:24 nasGST kernel: [ 0.105287] kernel_init_freeable+0x167/0x1e8
      Jan 24 07:34:24 nasGST kernel: [ 0.105292] ? rest_init+0xb0/0xb0
      Jan 24 07:34:24 nasGST kernel: [ 0.105295] kernel_init+0xa/0xf7
      Jan 24 07:34:24 nasGST kernel: [ 0.105298] ret_from_fork+0x1f/0x30
      Jan 24 07:34:24 nasGST kernel: [ 0.105302] Code: 85 c0 48 89 c3 74 12 b9 00 10 00 00 48 89 c2 31 f6 4c 89 e7 e8 a1 c9 ff ff 48 89 ea 48 89 de 48 c7 c7 40 1c 83 a3 e8 3a 76 e2 ff <0f> ff 48 89 df 5b 5d 41 5c e9 e1 71 f5 ff 90 0f 1f 44 00 00 41
      Jan 24 07:34:24 nasGST kernel: [ 0.105359] ---[ end trace 1124c71069e8d3d9 ]---

      Source Code

      Jan 24 07:34:24 nasGST kernel: [ 2.204810] pcieport 0000:00:02.2: Signaling PME with IRQ 25
      Jan 24 07:34:24 nasGST kernel: [ 2.204847] pcieport 0000:00:02.5: Signaling PME with IRQ 26
      Jan 24 07:34:24 nasGST kernel: [ 2.204884] pciehp 0000:00:02.5:pcie004: Slot #0 AttnBtn- PwrCtrl- MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ LLActRep+
      Jan 24 07:34:24 nasGST kernel: [ 2.205414] ERST: Error Record Serialization Table (ERST) support is initialized.
      Jan 24 07:34:24 nasGST kernel: [ 2.205419] pstore: using zlib compression
      Jan 24 07:34:24 nasGST kernel: [ 2.205422] pstore: Registered erst as persistent store backend
      Jan 24 07:34:24 nasGST kernel: [ 2.205424] GHES: HEST is not enabled!
      Jan 24 07:34:24 nasGST kernel: [ 2.205558] Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
      Jan 24 07:34:24 nasGST kernel: [ 2.206231] AMD0020:00: ttyS0 at MMIO 0xfedc6000 (irq = 10, base_baud = 3000000) is a 16550A
      Jan 24 07:34:24 nasGST kernel: [ 2.206899] Linux agpgart interface v0.103
      Jan 24 07:34:24 nasGST kernel: [ 2.207294] AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>
      Jan 24 07:34:24 nasGST kernel: [ 2.207295] AMD IOMMUv2 functionality not available on this system


    • Source Code

      Jan 24 07:34:24 nasGST kernel: [ 2.210550] BERT: Error records from previous boot:
      Jan 24 07:34:24 nasGST kernel: [ 2.210553] [Hardware Error]: event severity: fatal
      Jan 24 07:34:24 nasGST kernel: [ 2.210556] [Hardware Error]: Error 0, type: fatal
      Jan 24 07:34:24 nasGST kernel: [ 2.210557] [Hardware Error]: fru_text: DIMM# Sourced.AY
      Jan 24 07:34:24 nasGST kernel: [ 2.210560] [Hardware Error]: section_type: memory error
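
      Those BERT records are the new BIOS feature from the changelog ("Add ACPI table BERT and HEST to report memory error") in action: the firmware hands the kernel the memory error recorded during the previous boot. A hedged sketch for inspecting this on any recent kernel (standard ACPI sysfs path, not something specific to this box):

      ```shell
      # Raw BERT table as exported by the firmware via the standard ACPI sysfs path.
      ls -l /sys/firmware/acpi/tables/BERT

      # The decoded records from the previous boot also land in the kernel log,
      # in the format shown above.
      dmesg | grep -A4 'BERT: Error records'
      ```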

      Source Code

      Jan 24 07:34:37 nasGST kernel: [ 20.326390] softdog: initialized. soft_noboot=0 soft_margin=60 sec soft_panic=0 (nowayout=0)
      Jan 24 07:36:49 nasGST kernel: [ 154.212120] amdgpu: [powerplay] min_core_set_clock not set
      Jan 24 07:36:49 nasGST kernel: [ 154.342685] amdgpu: [powerplay] min_core_set_clock not set
      Jan 24 07:36:49 nasGST kernel: [ 154.343156] amdgpu: [powerplay] min_core_set_clock not set
      Jan 24 07:36:49 nasGST kernel: [ 154.343235] amdgpu: [powerplay] min_core_set_clock not set
      Jan 24 07:36:49 nasGST kernel: [ 154.343317] amdgpu: [powerplay] min_core_set_clock not set
      Jan 24 07:36:49 nasGST kernel: [ 154.343391] amdgpu: [powerplay] min_core_set_clock not set
      Jan 24 07:36:49 nasGST kernel: [ 154.343464] amdgpu: [powerplay] min_core_set_clock not set
      Jan 24 07:36:49 nasGST kernel: [ 154.343543] amdgpu: [powerplay] min_core_set_clock not set
      Jan 24 07:36:49 nasGST kernel: [ 154.343617] amdgpu: [powerplay] min_core_set_clock not set
      Jan 24 07:36:49 nasGST kernel: [ 154.343690] amdgpu: [powerplay] min_core_set_clock not set

      Source Code

      2018-01-24T07:34:35+0100 nasGST systemd[1]: Started Daily apt download activities.
      2018-01-24T07:34:35+0100 nasGST systemd[1]: apt-daily.timer: Adding 41min 22.987516s random time.
      2018-01-24T07:34:35+0100 nasGST systemd[1]: apt-daily.timer: Adding 6h 5min 47.580843s random time.
      2018-01-24T07:34:35+0100 nasGST systemd[1]: Starting Daily apt upgrade and clean activities...
      2018-01-24T07:34:36+0100 nasGST systemd[1]: Started Daily apt upgrade and clean activities.
      2018-01-24T07:34:36+0100 nasGST systemd[1]: apt-daily-upgrade.timer: Adding 26min 25.008634s random time.
      2018-01-24T07:34:36+0100 nasGST systemd[1]: apt-daily-upgrade.timer: Adding 22min 41.519483s random time.

      Source Code

      2018-01-24T07:34:34+0100 nasGST collectd[1164]: rrdcached plugin: Failed to connect to RRDCacheD at unix:/var/run/rrdcached.sock: Unable to connect to rrdcached: No such file or directory (status=2)
      2018-01-24T07:34:34+0100 nasGST collectd[1164]: read-function of plugin `rrdcached' failed. Will suspend it for 20.000 seconds.
      2018-01-24T07:34:34+0100 nasGST collectd[1164]: rrdcached plugin: Failed to connect to RRDCacheD at unix:/var/run/rrdcached.sock: Unable to connect to rrdcached: No such file or directory (status=2)
      2018-01-24T07:34:34+0100 nasGST collectd[1164]: Filter subsystem: Built-in target `write': Dispatching value to all write plugins failed with status -1.
      [the same rrdcached connection error repeats a further 20 times with the same timestamp]
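
      The collectd spam above just means the rrdcached daemon isn't running (or isn't installed), so the plugin can't reach its UNIX socket. A quick hedged check, with the socket path taken from the log:

      ```shell
      #!/bin/sh
      # Check whether the socket collectd's rrdcached plugin expects actually exists.
      # Path taken from the log above; adjust if your collectd.conf differs.
      SOCK=/var/run/rrdcached.sock

      if [ -S "$SOCK" ]; then
          echo "rrdcached socket present: $SOCK"
      else
          # Either start/install rrdcached, or disable the rrdcached plugin in
          # collectd.conf and let collectd write its RRD files directly.
          echo "rrdcached socket missing: $SOCK"
      fi
      ```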

      Source Code

      2018-01-24T07:34:24+0100 nasGST systemd[1]: Starting Enable File System Quotas...
      2018-01-24T07:34:24+0100 nasGST systemd[1]: Started Load/Save Screen Backlight Brightness of backlight:acpi_video0.
      2018-01-24T07:34:24+0100 nasGST quotaon[514]: quotaon: cannot find /srv/dev-disk-by-label-Dati/aquota.group on /dev/mapper/VG1-LV1 [/srv/dev-disk-by-label-Dati]
      2018-01-24T07:34:24+0100 nasGST quotaon[514]: quotaon: cannot find /srv/dev-disk-by-label-Dati/aquota.user on /dev/mapper/VG1-LV1 [/srv/dev-disk-by-label-Dati]
      2018-01-24T07:34:24+0100 nasGST systemd[1]: quotaon.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
      2018-01-24T07:34:24+0100 nasGST systemd[1]: Failed to start Enable File System Quotas.
      2018-01-24T07:34:24+0100 nasGST systemd[1]: quotaon.service: Unit entered failed state.
      2018-01-24T07:34:24+0100 nasGST systemd[1]: quotaon.service: Failed with result 'exit-code'.
      2018-01-24T07:34:24+0100 nasGST systemd[1]: Reached target Local File Systems.
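
      The quotaon failure above is harmless in itself: the aquota.user/aquota.group files simply don't exist on the data volume. If quotas are actually wanted there, they can be (re)created with quotacheck; a hedged admin sketch, run as root, with the mount point taken from the log:

      ```shell
      # Create the missing quota files on the data volume (path from the log above).
      #   -c  create new quota files
      #   -u  user quotas, -g  group quotas
      #   -m  don't remount the filesystem read-only for the scan
      quotacheck -cugm /srv/dev-disk-by-label-Dati

      # Then enable quotas on that filesystem so quotaon.service succeeds.
      quotaon /srv/dev-disk-by-label-Dati
      ```

      Alternatively, if quotas are not needed, disabling or masking quotaon.service makes the failed unit go away.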
