Crashing


• Hi, I'm running OMV on my Rock64 (2 GB variant) with the proper PSU and a 16 GB Class 10 microSD card.

  I currently use it as a NAS and a Plex media server, but it keeps crashing every other day, even without using it as a Plex server.
  I have an RPi running Kodi that reads from the Rock64 over Samba; when playing a media file the Rock64 is not under load, yet it still crashes for some reason.

  I would be grateful if someone could point me in the right direction. I've attached the syslog below.

      Thank you

  I also have a 4 TB HDD connected through the USB 3.0 port.
      Files
      • syslog.zip



• I am not sure how much of the syslog file was needed; it was large and I could not paste all of it (the site just froze for me). I hope this will help.

  The images show the approximate load and which Docker images are running.

      Source Code

      Dec 24 02:43:51 rock64 NetworkManager[917]: <warn> [1545615831.3363] arping[0xab917d18,3]: arping could not be found; no ARPs will be sent
      Dec 24 02:43:51 rock64 containerd[1331]: time="2018-12-24T02:43:51.455774695+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/60a2ccbbcbcbe7a59046e43eccf6898e1282490450a455a4496904b10212d9eb/shim.sock" debug=false pid=1957
      Dec 24 02:43:51 rock64 containerd[1331]: time="2018-12-24T02:43:51.492699850+01:00" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a20fe90b9b3872d946d26a10bc5d2241f54349fd876e49dee2febcdf71dad987/shim.sock" debug=false pid=1960
      Dec 24 02:43:52 rock64 dockerd[1325]: time="2018-12-24T02:43:52.349619000+01:00" level=info msg="Loading containers: done."
      Dec 24 02:43:52 rock64 dockerd[1325]: time="2018-12-24T02:43:52.452160065+01:00" level=info msg="Docker daemon" commit=4d60db4 graphdriver(s)=overlay2 version=18.09.0
      Dec 24 02:43:52 rock64 dockerd[1325]: time="2018-12-24T02:43:52.456376283+01:00" level=info msg="Daemon has completed initialization"
      Dec 24 02:43:52 rock64 systemd[1]: Started Docker Application Container Engine.
      Dec 24 02:43:52 rock64 systemd[1]: Reached target Multi-User System.
      Dec 24 02:43:52 rock64 dockerd[1325]: time="2018-12-24T02:43:52.606398507+01:00" level=info msg="API listen on /var/run/docker.sock"
      Dec 24 02:43:52 rock64 systemd[1]: Starting Beep after system start...
      Dec 24 02:43:52 rock64 systemd[1]: Starting watchdog daemon...
      Dec 24 02:43:52 rock64 sh[2132]: modprobe: FATAL: Module softdog not found in directory /lib/modules/4.4.154-1124-rockchip-ayufan-ged3ce4d15ec1
      Dec 24 02:43:52 rock64 systemd[1]: watchdog.service: Control process exited, code=exited status=1
      Dec 24 02:43:52 rock64 systemd[1]: Failed to start watchdog daemon.
      Dec 24 02:43:52 rock64 systemd[1]: watchdog.service: Unit entered failed state.
      Dec 24 02:43:52 rock64 systemd[1]: watchdog.service: Triggering OnFailure= dependencies.
      Dec 24 02:43:52 rock64 systemd[1]: watchdog.service: Failed with result 'exit-code'.
      Dec 24 02:43:52 rock64 systemd[1]: Starting watchdog keepalive daemon...
      Dec 24 02:43:52 rock64 systemd[1]: Reached target Graphical Interface.
      Dec 24 02:43:52 rock64 sh[2143]: modprobe: FATAL: Module softdog not found in directory /lib/modules/4.4.154-1124-rockchip-ayufan-ged3ce4d15ec1
      Dec 24 02:43:52 rock64 systemd[1]: Starting Update UTMP about System Runlevel Changes...
      Dec 24 02:43:52 rock64 systemd[1]: wd_keepalive.service: Control process exited, code=exited status=1
      Dec 24 02:43:52 rock64 systemd[1]: Failed to start watchdog keepalive daemon.
      Dec 24 02:43:52 rock64 systemd[1]: wd_keepalive.service: Unit entered failed state.
      Dec 24 02:43:52 rock64 systemd[1]: wd_keepalive.service: Failed with result 'exit-code'.
      Dec 24 02:43:52 rock64 systemd[1]: Started Update UTMP about System Runlevel Changes.
      Dec 24 02:43:53 rock64 systemd[1]: Started Beep after system start.
      Dec 24 02:43:53 rock64 systemd[1]: Startup finished in 12.553s (kernel) + 26.135s (userspace) = 38.688s.
      Dec 24 02:43:53 rock64 collectd[1550]: Not sleeping because the next interval is 7.352 seconds in the past!
      Dec 24 02:44:01 rock64 collectd[1550]: Filter subsystem: Built-in target `write': Some write plugin is back to normal operation. `write' succeeded.
      Dec 24 02:44:03 rock64 kernel: [ 48.920011] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e0ec
      Dec 24 02:44:03 rock64 kernel: [ 49.012631] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:03 rock64 kernel: [ 49.013596] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:03 rock64 kernel: [ 49.014429] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:03 rock64 kernel: [ 49.039043] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:03 rock64 kernel: [ 49.039900] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:03 rock64 kernel: [ 49.040927] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:03 rock64 kernel: [ 49.041955] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:03 rock64 kernel: [ 49.043088] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:03 rock64 kernel: [ 49.043927] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:16 rock64 kernel: [ 62.588090] cp15barrier_handler: 213 callbacks suppressed
      Dec 24 02:44:16 rock64 kernel: [ 62.588686] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:16 rock64 kernel: [ 62.591525] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:16 rock64 kernel: [ 62.592369] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:16 rock64 kernel: [ 62.597397] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:16 rock64 kernel: [ 62.598338] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:16 rock64 kernel: [ 62.599464] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:16 rock64 kernel: [ 62.600668] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:16 rock64 kernel: [ 62.602119] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:16 rock64 kernel: [ 62.603007] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:44:16 rock64 kernel: [ 62.604200] "python" (2480) uses deprecated CP15 Barrier instruction at 0xf6d2e130
      Dec 24 02:45:01 rock64 CRON[2939]: (root) CMD (/usr/sbin/omv-mkrrdgraph >/dev/null 2>&1)
      Dec 24 02:45:01 rock64 CRON[2941]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    • Sadly that doesn't really provide any useful info. You can try enabling persistent journaling, which is what I did when trying to diagnose my reboots, but it didn't tell me what brought down my system either. Mine is a clean reboot that seems to happen randomly, though, not a crash. I have a feeling it's the watchdog service, which I just disabled.

      Try upgrading your kernel or other packages, the newest kernel for OMV is 4.18. You also may just simply be overloading it, it is an ARM chip after all.
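      For reference, persistent journaling and disabling the watchdog can be done roughly like this on a systemd-based install. This is a sketch using systemd defaults, not OMV-specific instructions; the service names match the failures in the syslog above, but verify them against your own image:

      ```shell
      # Keep journald logs across reboots so the last messages before a crash survive.
      # Creating the directory is enough; journald's default Storage=auto then persists.
      sudo mkdir -p /var/log/journal
      sudo systemd-tmpfiles --create --prefix /var/log/journal
      sudo systemctl restart systemd-journald

      # After the next crash, inspect the end of the previous boot's log:
      journalctl -b -1 -e

      # The syslog shows softdog is missing anyway, so the watchdog units only fail.
      # Disabling them rules them out as a cause:
      sudo systemctl disable --now watchdog.service wd_keepalive.service
      ```
      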
    • Another place to look is the power supply and the SD card. The most common causes of random problems on the Pi are an old, slow, or corrupt SD card and a power supply that is going bad. As brando stated above, it could also be that you are overloading your Rock64.

      Anyone with a ROCK64 who can answer this question?
      Build, Learn, Create.

      How to Videos for OMV

      Post any questions to the forum, so others can benefit from your curiosity. :thumbsup:
      No private support.
    • brando56894 wrote:

      Sadly that doesn't really provide any useful info. You can try enabling persistent journaling, which is what I did when trying to diagnose my reboots, but it didn't tell me what brought down my system either. Mine is a clean reboot that seems to happen randomly, though, not a crash. I have a feeling it's the watchdog service, which I just disabled.

      Try upgrading your kernel or other packages, the newest kernel for OMV is 4.18. You also may just simply be overloading it, it is an ARM chip after all.
      What do you mean by overload? All I really need is a Samba server to run. Would there be a better or more lightweight option than OpenMediaVault?
    • Trying another SD card is a good idea. Preferably an A1 card.

      Make sure that you use a short high quality USB cable to the HDD. Switching to a better cable might be a quick, simple and easy way to fix the problems. Or at least reduce them.

      Another thing to try is a powered HDD. That significantly reduces the power drawn from the SBC. A high load means increased current, which means a bigger voltage drop in the power cable, and possibly an even bigger one in the USB cable. That may cause a crash.

      Just guesses.
      OMV 4, 7 x ODROID HC2, 1 x ODROID HC1, 5 x 12TB, 1 x 8TB, 1 x 2TB SSHD, 1 x 500GB SSD, GbE, WiFi mesh
    • Adoby wrote:

      Trying another SD card is a good idea. Preferably an A1 card.

      Make sure that you use a short high quality USB cable to the HDD. Switching to a better cable might be a quick, simple and easy way to fix the problems. Or at least reduce them.

      Another thing to try is a powered HDD. That significantly reduces the power drawn from the SBC. A high load means increased current, which means a bigger voltage drop in the power cable, and possibly an even bigger one in the USB cable. That may cause a crash.

      Just guesses.
      Currently I am using a normal HDD in an external enclosure with its own PSU, and the original USB 3.0 cable that came with it. When I looked at the SD card it was some random brand. But what is "A1", and how does it relate to Class 1-10?
    • Class 10 is about how fast the card is when reading/writing sequentially. That is nice for video recording/playback. But for use as storage for a computer filesystem, or for apps in a phone, random read/write speed is also important, perhaps more so. The A1 standard (A = Application) means the card has at least decent random read/write speed. There is also an A2 standard, but those cards are seldom fully supported by the hardware. A SanDisk Ultra A1 card is not a lot more expensive than a high-quality fast "normal" card. I use the 32 GB SanDisk Ultra A1; they are just under $10. The SanDisk Extreme A1 is more durable and faster, but also more expensive.

      A demo here:
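      If you want to check the random I/O of the card you already have, something like this would show it. A rough sketch assuming the `fio` benchmarking tool is installed (`apt install fio`); the file path and sizes are arbitrary examples:

      ```shell
      # 4K random read/write test against a temporary file on the SD card's filesystem.
      # A card with poor random I/O will show very low IOPS here even if its
      # sequential (Class 10) speed is fine.
      fio --name=sdcard-randrw --filename=/tmp/fio.test --size=64M \
          --rw=randrw --bs=4k --direct=1 --numjobs=1 \
          --runtime=30 --time_based --group_reporting
      rm /tmp/fio.test
      ```
      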

      OMV 4, 7 x ODROID HC2, 1 x ODROID HC1, 5 x 12TB, 1 x 8TB, 1 x 2TB SSHD, 1 x 500GB SSD, GbE, WiFi mesh


    • omarius wrote:

      what do you mean by overload?
      You can't 'overload' a Linux system. But you can't run Linux reliably on insufficient hardware either.

      The usual problems with SBCs are hardware and/or settings related. The two most common are underpowering and SD card hassles, and sometimes images ship with silly defaults (like overclocked DRAM for no reason).

      Since you're using an ARM image providing the output from armbianmonitor -u might help.
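      Roughly, that would look like this (a sketch; `armbianmonitor` ships with Armbian-based images, so check it exists on your image first):

      ```shell
      # Collect system diagnostics and upload them to an online pastebin,
      # then post the resulting URL in this thread:
      sudo armbianmonitor -u

      # While reproducing the crash, watch load, clock speed and SoC
      # temperature live to spot throttling or undervoltage symptoms:
      sudo armbianmonitor -m
      ```
      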